...making Linux just a little more fun!
Smile Maker [britto_can at yahoo.com]
Folks,
I have CentOS 5 on my box, and I went through the default installation, which creates an LVM volume and mounts it at /.
When I rebooted, Linux threw this error:
"System lookup error: mount:undefined symbol:blkid_get_cache"
Any advice?
Thanks & regards, Britto
Amit Kumar Saha [amitsaha.in at gmail.com]
Hi all
I finally installed Xen using Synaptic on a freshly installed version of Ubuntu 7.04 (32-bit). For some reason the earlier efforts did not succeed; however, now I can boot into my custom Xen-enabled 2.6.19 kernel.
Is there a problem with the amd64 version of Xen? Can someone confirm this?
Cheers,
-- Amit Kumar Saha [URL]:https://amitsaha.in.googlepages.com
Jimmy Kaifiti [dgeemee03 at hotmail.com]
Hi, my name is Jimmy. Can anyone help me fix the time on my PC? I have changed the battery so many times - I mean a new CMOS battery - but my time is still not correct.
René Pfeiffer [lynx at luchs.at]
Hello!
So, here's my question about that index problem I mentioned in the answer to Ben's question about backups.
Imagine you have two backup servers. Server A keeps a rather recent copy of live servers. Server B tries to archive stuff from server A, in order to keep recent backups recent and to save space on server A. Of course, this means that server B keeps accumulating files and directories. In order to avoid this, one could think of a strategy of deleting files according to a mathematical distribution. There's a tool called fileprune which does just that (https://www.spinellis.gr/sw/unix/fileprune/). I found it by browsing through an old issue of ;login:. fileprune deletes data by using a Gaussian, exponential, or Fibonacci distribution. The problem is that fileprune needs to read the metadata of the entire tree into memory before it can decide which files to delete. Backup storage may have millions of files and directories.
I'd like to ask the filesystem directly "which files have an access time of older than X" and get an answer. In the database world you have indices for that. (Most) filesystems don't have such things (at least not exported to userspace), so you would have to maintain one for yourself. This could be done by the Linux kernel's Inotify API which tells you what changes were done in a specific filesystem tree. I tried it, it works, but I have no idea if I catch every modification when rsync or other tools come along (I am going to test this with higher load as soon as my load is lower).
Another way is to see whether existing filesystems have similar functionality. I believe Reiser4 went in this direction. Yet another way is to parse the filesystem tree separately in order to maintain a metadata index.
Do you have some more ideas besides writing a new filesystem?
Just being curious, René.
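As a rough sketch of the "maintain your own index" idea, assuming the inotify-tools package is available (the watched path and log file below are purely illustrative): inotifywait can append one line per change to a flat log, which then answers "what changed since time X" without a full tree walk. It only sees events while it is running, and read/access events can be watched as well but are very noisy.

# Keep a running index of changes under /srv/backup (sketch only):
inotifywait --monitor --recursive \
    -e create -e modify -e delete -e move \
    --timefmt '%s' --format '%T %e %w%f' \
    /srv/backup >> /var/cache/backup-index.log &

# Later: which paths changed after a given epoch timestamp?
# (naive: assumes no spaces in paths)
awk -v since=1185000000 '$1 > since { print $3 }' /var/cache/backup-index.log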
Stelian Iancu [stelu at siancu.net]
Hello all,
I have recently received a Linksys WRT54GS wireless router from a friend of mine who had it lying around and didn't need it anymore. He told me he changed the firmware of the device several times after he originally bought it, and now he doesn't remember which firmware is installed.
So how can I figure out which one it is? I tried connecting with Firefox to the default administrative IP addresses of DD-WRT and OpenWrt, but there's no response.
Any help is more than appreciated!
With regards, Stelian Iancu
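One small sketch of how you might probe it from the Linux side, assuming nothing more than ping and nmap; the addresses listed are just the usual factory and third-party defaults, not anything known about this particular box:

# Is anything answering on the usual default addresses?
for ip in 192.168.1.1 192.168.0.1; do
    ping -c 1 -W 1 "$ip" > /dev/null 2>&1 && echo "$ip is up"
done

# Third-party firmwares such as DD-WRT and OpenWrt often expose telnet or
# ssh in addition to the Web interface:
nmap -p 22,23,80,443 192.168.1.1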
Kapil Hari Paranjape [kapil at imsc.res.in]
Hello,
Well one of the things at least! https://sam.zoy.org/zzuf/ (Also available in Debian testing of course!) Quite a neat toy!
Regards,
Kapil. --
[ Thread continues here (3 messages/3.00kB) ]
Amit Kumar Saha [amitsaha.in at gmail.com]
Hey all,
Can you suggest a good, complete LaTeX system - editors, typesetting system - preferably for Ubuntu (if that's important)?
Cheers,
-- Amit Kumar Saha [URL]:https://amitsaha.in.googlepages.com
[ Thread continues here (13 messages/12.67kB) ]
Ben Okopnik [ben at linuxgazette.net]
As sometimes happens, shortly - or not so shortly - after we finish a discussion in TAG, more info about the problem pops up out of nowhere, usually as a result of a completely unrelated Google search. I swear, if we were to have a discussion about Martian blood-drinking weasels, next week I'd run into a "The Vampire Mustelids of Ares: A Personal Interview" while searching for soap-scum removal info...
Anyway, Karl-Heinz: while I was fiddling about trying to come up with a "max-spread" sorting algorithm [1], I ran across a reference to Parallel::ForkManager on 'https://perlmonks.org'. Doing a quick lookup on CPAN came back with this (snipped from the documentation):
This module is intended for use in operations that can be done in parallel where the number of processes to be forked off should be limited. Typical use is a downloader which will be retrieving hundreds/thousands of files. The code for a downloader would look something like this: [...]

This sounds like exactly the kind of thing you were describing. It allows nicely fine-grained individual control of the child processes, etc. - take a look!
https://search.cpan.org/author/DLUX/Parallel-ForkManager-0.7.5/ForkManager.pm
There's also Parallel::ForkControl -
https://search.cpan.org/author/BLHOTSKY/Parallel-ForkControl-0.04/lib/Parallel/ForkControl.pm
[1] For the mathematicians among us, you might find this to be fun. See my next post.
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
[ Thread continues here (2 messages/3.46kB) ]
axel.stevens [axel.stevens at webcreator.be]
Martin, Ben
There are still .sit.bin files to be found.
I located two files - mix68k.sit.bin and mixppc.sit.bin - both 1.8 MB.
best of luck
axel stevens
support engineer - Macintosh, Linux, Windows
Belgium
[ Thread continues here (2 messages/0.99kB) ]
Neil Youngman [ny at youngman.org.uk]
I was asked to look at a system that had a consistent load average of around 5.3 to 5.5. Now, I know very little about how to track down load-average issues, and I haven't been able to find much. The CPU usage is about 90% idle, so it's not CPU-bound.
I googled for "load average", "high load average", and "diagnose load average", and found very little of use. The one thing I did find was that if it's processes stuck waiting on I/O, "ps ax" should show processes in state "D". There are none visible on this box.
Do the gang know of any good resources for diagnosing load average issues or have any useful tips?
Neil Youngman
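A few quick checks worth trying, offered here only as a sketch of the usual suspects: sample repeatedly for processes in uninterruptible sleep (a single "ps ax" snapshot easily misses them), then watch vmstat and iostat to see whether the load is I/O-related. All of these are standard tools, though iostat comes from the sysstat package.

# Sample several times for D-state (uninterruptible sleep) processes:
for i in 1 2 3 4 5; do ps axo stat=,pid=,comm= | awk '$1 ~ /^D/'; sleep 1; done

# Watch the run queue ("r"), blocked processes ("b"), and I/O wait ("wa"):
vmstat 1 10

# Per-device utilisation (needs the sysstat package):
iostat -x 2 5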
[ Thread continues here (12 messages/18.50kB) ]
Ben Okopnik [ben at linuxgazette.net]
----- Forwarded message from Scott Rainey <scott.rainey@thelinuxfund.org> -----
Date: Wed, 27 Jun 2007 14:19:54 -0700 From: Scott Rainey <scott.rainey@thelinuxfund.org> Reply-To: scott.rainey@thelinuxfund.org To: editor@linuxgazette.net Subject: TUX Images

Hi,
I'm looking for large-format, vector-based digital versions of Tux, both color and monochrome. I'm even willing to pay for a really good one in monochrome suitable for sand-blasting on glass.
Do you know whom I should contact?
All the best,
Scott
----- End forwarded message -----
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
[ Thread continues here (2 messages/2.44kB) ]
Ben Okopnik [ben at linuxgazette.net]
On Tue, Jul 10, 2007 at 12:34:05PM +0530, Nayanam,Sarsij wrote:
> I am writing a shell script to run on MC/SG cluster, and I am facing an
> issue as mentioned below:
>
> if we have a package with a dash in the name say sgpkg-cust :
>
> # PKGsgpkg-cust=hello
> sh: PKGsgpkg-cust=hello: not found.
> # PKGsgpkg=hello
> # echo $PKGsgpkg-cust
> hello-cust
This is not surprising; a dash is not a valid character in a variable for Bourne-derived shells.
> I have a function get_package_fqdn which starts as below:
>
> 37 get_package_fqdn()
> 38 {
> 39     eval var=$`echo PKG$1`
> 40     if [[ -z $var ]]; then
> [...]
> 64     fi
> 65 }
>
> we will notice that if $1=sgpkg-cust, var will be equal to "-cust", and the
> rest of the function "if [[ -z $var ]];" will not be used, and nothing will
> be introduced in the PKG$packagename variable.
This isn't shell-specific, but an excellent Perl programmer named Mark-Jason Dominus has a writeup called "Why it's stupid to use a variable as a variable name" (https://perl.plover.com/varvarname.html). The above problem is explicitly cited. In short: since variable names are restricted to a specific set of characters, and the set of characters that could be contained in your '$1' is essentially arbitrary, you're creating a problem when you do that. So don't do that.
In Perl, the answer is "use a hash". In Bash, well, you need to rethink what it is that you're trying to do and use different functionality. As a general approach, you could try "flattening" that arbitrary character set - be sure to do so both when populating and when reading the strings:
package_name=$(echo -n "PKG$1"|tr -c 'a-zA-Z0-9_' '_')
ben@Tyr:~$ var=$(echo -n "xyzabc@$%^&*()_++sadbjkfjdf" | tr -c 'a-zA-Z0-9_' '_')
ben@Tyr:~$ echo $var
xyzabc____________sadbjkfjdf

Do note that this will still break if a "special" Bash character (e.g., '!') appears in the string. Overall, you just need to rethink your approach to this problem. MJD is right: it's a bad idea to use a variable as a variable name.
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
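Here is one way the flattening advice might be folded into the original function - a sketch only, with the helper name, the set_package wrapper, and the error handling invented here for illustration:

# Sketch: flatten the arbitrary package name into a legal variable name the
# same way whenever it is written or read.
flatten() { echo -n "PKG$1" | tr -c 'a-zA-Z0-9_' '_'; }

set_package() {
    # e.g. set_package sgpkg-cust hello  ->  PKGsgpkg_cust=hello
    eval "$(flatten "$1")=\"\$2\""
}

get_package_fqdn() {
    eval "var=\$$(flatten "$1")"
    if [ -z "$var" ]; then
        echo "no value recorded for package $1" >&2
        return 1
    fi
    echo "$var"
}

With that, "set_package sgpkg-cust hello" followed by "get_package_fqdn sgpkg-cust" prints "hello".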
Ben Okopnik [ben at linuxgazette.net]
On Wed, Jun 20, 2007 at 01:20:31PM +0200, Peter Holm wrote:
> I have searched the net (google, newsgroups ...) to find an answer to
> this question - but without success.
>
> In KDE (for example) you can get individual desktop backgrounds for
> each virtual desktop.
> Well - I am used to a utility for M$-Windows called Xdesk that also
> can set the desktops to have individual icons / folders.
>
> I know that in the Windows world they change a regkey that tells where
> the desktop belongs for each switch, so such a 'true virtual desktop'
> [is possible].
>
> I have also in M$-Windows created batch files to use with less
> intelligent window managers; these batch files separately update the
> regkey to get my own way to create 'true virtual desktops'.
>
> Is there any program that I can get to have different desktop folders,
> or is there any way to trick either KDE / GNOME / idesk into having
> different desktops?
I have very little experience with it myself, but based on what I do know, FVWM can probably accommodate you. You would, however, need to learn to write config files for it. I have no doubt that it has some kind of a "DetectDesktopSwitch" function, as well as either the capability of hiding the icons or allowing you to script such a function.
Here's an example of a very complicated (but still readable) FVWM config file:
https://www.cl.cam.ac.uk/~pz215/fvwm-scripts/config
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
[ Thread continues here (5 messages/13.80kB) ]
Chiew May Cheong [chiew_may at hotmail.com]
I have got a bash script called del_level0.sh that runs every Friday that looks like this:
#!/bin/bash
cd /u1/database/prod/level0
rm *

There's a cron entry for this script that runs every Friday:
linux:/usr/local/bin # crontab -l
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/crontab.XXXXvi5yHq installed on Tue Jun 26 14:13:04 2007)
# (Cron version V5.0 -- $Id: crontab.c,v 1.12 2004/01/23 18:56:42 vixie Exp $)
0 15 * * 5 /usr/local/bin/del_level0.sh > /dev/null 2>&1

Can you help me so that the script runs at 3pm every 2nd and 4th Friday of the month?
Thanks.
Regards,
Chiew May
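A minimal sketch of one common answer, assuming standard Vixie cron (as in the listing above): cron treats the day-of-month and day-of-week fields as an OR when both are restricted, so the usual workaround is to match days 8-14 and 22-28 by day of month and let the job itself check that the day is actually a Friday. Note that % must be escaped as \% inside a crontab.

# 15:00 on days 8-14 and 22-28; the date test keeps only Fridays (date +%u prints 5):
0 15 8-14,22-28 * * [ "$(date +\%u)" -eq 5 ] && /usr/local/bin/del_level0.sh > /dev/null 2>&1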
[ Thread continues here (6 messages/5.55kB) ]
René Pfeiffer [lynx at luchs.at]
Hello, TAG!
Every once in a while, I go looking for a good vacation mailer that can read emails as well as I can, while I am as far away from my mailbox as possible. I already tried
- vacation from Sendmail,
- the Sieve vacation mailer that can be enabled in Cyrus, and
- a Perl script I wrote, which is buried under the rubble of the company I worked for many years ago.

What are your favourite vacation mailers that cause the least trouble with auto-generated emails? Do you have any preferences or experiences?
Best wishes, René.
[ Thread continues here (8 messages/10.67kB) ]
Jim Jackson [jj at franjam.org.uk]
I was interested to see the discussion in this article on passive Ethernet "hubs", etc. Others may be interested in this passive 3-port Ethernet hub design:
https://www.zen22142.zen.co.uk/Circuits/Interface/pethhub.htm
I've not actually built it yet, but plan to do so soon.
Jim
[ Thread continues here (2 messages/2.56kB) ]
Thomas Adam [thomas.adam22 at gmail.com]
Hey all,
As I am sure many of you know, I don't get on with hardware. Thanks to Ben's suggestion I now have a USB Sun keyboard though, and despite people's horrific claims, I at least like it. So thanks, Ben.
My next question concerns a possible replacement for my workstation. I've had my current PC for about three years now, kindly donated by a friend of mine. It's a nice system, but it needs replacing: I've had enough of the CPU being at 70C plus, despite cooling attempts.
So I was looking at buying a Shuttle PC - specifically, the SD39P2, which would be a bare-bones system [1]. What I'm curious to know is whether any of you have used one, and how they stack up against a regular PC. My reading suggests they can act as pretty good desktop replacements. Whilst the model I'm looking at only has two PCI slots, I only really need to add a wireless PCI card and an NVidia graphics card, so that's perfect.
Does the model I'm listing [1] suggest any problems with running Linux on it? I can't see how it would, at a cursory glance of what's available. My only real reservation is what the driving force would be for buying this model when I could go to Dell, spend an equivalent amount of money, and get a whole lot more.
Kindly,
Thomas Adam
[1] https://www.trustedreviews.com/pcs/review/2007/05/30/Shuttle-SD39P2-Barebone/p1
[ Thread continues here (8 messages/9.57kB) ]
Ben Okopnik [ben at linuxgazette.net]
[ If you're not a programmer or a mathematician, you might want to hit the 'delete' key right about now. Either that, or risk being bored to tears. Remember, I warned you! ]
As I've just mentioned in my previous post, I've been fiddling with a "max-spread" algorithm - i.e., if I have two lists, and I want the items in the first list to be spread as widely as possible (using the items in the second list as the "padding"), how do I interpolate them?
This can also be stated as follows: given a barbecue, a bunch of pork cubes, and a number of cherry tomatoes, how would you arrange the skewers in such a way that a) there's a pork chunk at the beginning and the end of every skewer, b) each skewer is arranged in as even a manner as possible, and c) you use up all the pork and all the tomatoes?
I got most of the way to a solution - essentially reinventing the Bresenham line algorithm [1] (and the wheel... and fire... sheesh. I'm a very poor mathematician, and a worse statistician), but got scooped by a fellow Perl monk from the Monastery (perlmonks.org) - really nice work on his part. I rewrote his script to actually sort arrays rather than strings and added some guard conditions. Sample output looks like this:
ben@Tyr:~/devel/0$ ./skewer 2 2
pork1|tmt1|tmt2|pork2
---#00#---
ben@Tyr:~/devel/0$ ./skewer 3 3
pork1|tmt1|tmt2|pork2|tmt3|pork3
---#00#0#---
ben@Tyr:~/devel/0$ ./skewer 4 4
pork1|tmt1|tmt2|pork2|tmt3|pork3|tmt4|pork4
---#00#0#0#---
ben@Tyr:~/devel/0$ ./skewer 5 4
pork1|tmt1|pork2|tmt2|pork3|tmt3|pork4|tmt4|pork5
---#0#0#0#0#---
ben@Tyr:~/devel/0$ ./skewer 7 4
pork1|tmt1|pork2|tmt2|pork3|tmt3|pork4|tmt4|pork5|pork6|pork7
---#0#0#0#0###---

Can you reproduce this algorithm? I found it a very interesting exercise, myself.
[1] https://en.wikipedia.org/wiki/Special:Search?search=Bresenham%20line%20algorithm
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
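For readers who want to take up the challenge, here is one possible Bresenham-style sketch in Bash (not Ben's Perl script). It distributes the tomatoes over the gaps between pork chunks by tracking a rounded-up running total; it reproduces the sample output above except in the last (7 4) case, where it spreads the surplus pork evenly instead of bunching it at the end.

#!/bin/bash
# skewer-sketch: one Bresenham-style "max-spread" arrangement (sketch only).
# Usage: ./skewer-sketch <pork_count> <tomato_count>
# Needs at least two pork chunks if there are any tomatoes.
p=${1:?need a pork count}
t=${2:?need a tomato count}

out="pork1"
placed=0
for (( i = 1; i < p; i++ )); do
    # Tomatoes that should have been placed by the end of gap i, rounded up
    # so the earlier gaps absorb the surplus.
    want=$(( (i * t + p - 2) / (p - 1) ))
    for (( j = placed + 1; j <= want; j++ )); do
        out="$out|tmt$j"
    done
    placed=$want
    out="$out|pork$((i + 1))"
done
echo "$out"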
[ Thread continues here (11 messages/20.47kB) ]
Ben Okopnik [ben at linuxgazette.net]
All of us know - at least I hope we do - that we should all be doing regular backups; that's a given. Chances are, though, that we've all skipped one or two (or more) on the schedule ("heck, nothin's happened so far; it shouldn't make too much of a difference...") - that is, if you even have a schedule, rather than relying on some vague sense of "it's probably about time..." I have to admit that, as much as I advise my customers to have a solid backup plan, I'm less than stellar about having a polished, perfect plan myself.
In part, this is because backups are much easier with a desktop than a laptop - or even with a desktop to which you synch the laptop once in a while. Operating purely from a laptop, as I do, means that I don't have an always-connected tape drive (or whatever) - and doing a backup is always a hassle, involving digging out the external HD, hooking it up, and synchronizing. Moreover, since I do a lot of travelling, setting an alarm doesn't seem to be very useful; it usually goes off while I'm on the road, at which point I can only glare at it in frustration.
As with so many things, what I really need is a copy of "at" installed in my brain... but lacking that, well, I decided to dump the problem on you folks.
Can anyone here think of a sensible backup plan for the situation that I've described - laptop, external backup, arbitrary schedule - and some way to set up a schedule that works with that? Also, does anyone have a favorite WRT backup software? I really miss the ability to do incremental backups; that would be awf'lly nice (I don't mind carrying a few DVDs with me, and using the external HD for a monthly full backup.)
Good ideas in this regard - whether they completely answer the question or not - are highly welcome.
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
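For what it's worth, here is a sketch of the widely used hard-linked snapshot trick with rsync's --link-dest option: each run copies only changed files, and unchanged files become hard links into the previous snapshot, so an occasional, irregular schedule still costs little disk space or time. The source and destination paths below are illustrative assumptions.

#!/bin/bash
# Sketch: incremental, hard-linked snapshots onto an external disk.
SRC="$HOME"
DST="/media/backup/snapshots"          # assumed mount point of the external HD
TODAY=$(date +%Y-%m-%d)
LAST=$(ls -1d "$DST"/????-??-?? 2>/dev/null | tail -1)

mkdir -p "$DST"
rsync -a --delete ${LAST:+--link-dest="$LAST"} "$SRC"/ "$DST/$TODAY"/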
[ Thread continues here (30 messages/58.31kB) ]
Ben Okopnik [ben at linuxgazette.net]
[[[ I viciously snipped out the entirety of the original message which headed this thread, as it was horrendously replete with html garbage. - Kat ]]]
On Fri, Jul 06, 2007 at 11:33:15AM -0400, Bank Of America wrote:
> Bank of America Higher Standards
> Online Banking Alert
> Need additional up to Re-Update Your Online Banking
> the minute account
> information? Sign in Because of unusual number of invalid login
> attempts on you account, we had to believe
> that, their might be some security problem on
> you account. So we have decided to put an extra
> verification process to ensure your identity
> and your account security. Please click on sign
> in to Online Banking to continue to the
> verification process and ensure your account
> security. It is all about your security. Thank
> you. and visit the customer service section.
[snip] Nice, even though completely ungrammatical. The 'sign in to Online Banking' link points to
https://www.tsv-betzigau.de/contenido/includes/hypper/repute/bankofamerica/online_bofa_banking/e-online-banking/index.htm

Their images are coming from yet another domain:
https://release35.par3.com/images/client/bankofamerica/em_logo.gif

Yet another example of PHP fuckmuppetry. And this isn't going to stop, since the creators of PHP refuse to fix the well-known vulnerabilities in the language. [sigh]
If someone who speaks Deutsch wants to contact the owners of the first domain and let them know, that would be a nice thing to do. I've just contacted the webmaster of the second domain; hopefully, they'll take that garbage offline and fix their leaks.
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
[ Thread continues here (4 messages/6.33kB) ]
Rick Moen [rick at linuxmafia.com]
This is a small excerpt from the discussion thread at Linux Weekly News, in response to LWN's news story about a "statement" posted at Microsoft Corporation's Web site, claiming they are not, and never will be, subject to the provisions of the GNU General Public License v. 3.
LWN is subscriber-supported, and well worth the minor expense.
https://lwn.net/Articles/240822/
*They're involved in the distribution*
Posted Jul 6, 2007 14:31 UTC (Fri) by guest coriordan
Microsoft arranged for Novell to give GNU/Linux to anyone with an MS voucher, and then proceeded to distribute those vouchers. Sounds like distribution to me (with a middle man which isn't legally relevant).
According to the GPLv3 lawyers, they're "procuring the distribution of" GPL'd software, and that requires permission from the copyright holder. So Microsoft are either distributing under the permissions which the GPL grants them, or they are violating copyright.
And, as I understand it, there's no time limit on those vouchers. Novell might have to declare the deal non-applicable (and thus the "protection" too) when they distribute GPLv3 software, or maybe Microsoft will have to make that declaration.
*They're involved in the distribution*
Posted Jul 6, 2007 15:09 UTC (Fri) by guest moltonel
In an eWeek article, they have a quote from "Bruce Lowry, a Novell spokesperson" saying "Customers who have already received SUSE Linux Enterprise certificates from Microsoft are not affected in any way by this, since their certificates were fully delivered and redeemed prior to the publication of the GPLv3".
So it sounds like Microsoft does not plan to be "distributing" GPL code in this manner anymore, and that what has already been distributed is protected by the GPL's grandfather clause.
Well, that may not be a huge victory (Did anybody expect Microsoft to suddenly give up on its patents or start GPL'ing its code because of GPLv3 and the Novell deal ?), but it's something. It'll be interesting to watch the GPLv3 / Novell deal interpretation match in the next few weeks.
*They're involved in the distribution*
Posted Jul 8, 2007 0:47 UTC (Sun) by subscriber rickmoen
Ciaran O'Riordan wrote:
> According to the GPLv3 lawyers, they're "procuring the distribution of" > GPL'd software, and that requires permission from the copyright holder. > So Microsoft are either distributing under the permissions which the GPL > grants them, or they are violating copyright.
Quite. Moreover, this does affect preexisting software covered by the Novell-Microsoft patent-shakedown agreement, too (not just future releases under GPLv3), because a great deal of existing software in both Novell SLES10/SLED10, per upstream licensors' terms, can be received by users under GPLv2 or, at their option, _any later version_.
For that matter, Microsoft Services for Unix (nee Interix) is affected in exactly the same fashion, because it, too, includes a great deal of upstream, third-party code that users may accept under GPLv2 or any later version.
[ ... ]
[ Thread continues here (1 message/4.27kB) ]
Rick Moen [rick at linuxmafia.com]
LWN has a new story (currently subscriber-only) about SugarCRM announcing that its upcoming 5.0 release of SugarCRM Community Edition will be under GPLv3 (as opposed to the company's current badgeware licence, this having been the firm that invented the concept).
Press release: https://www.prnewswire.com/cgi-bin/stories.pl?ACCT=104&STORY=/www/story/07-25-2007/0004632607&EDATE=
LWN item (brief mention only; no analysis): https://lwn.net/Articles/242968/
I've just posted this comment to LWN:
*Is this a blunder, or just too subtle for me?*
Posted Jul 26, 2007 4:18 UTC (Thu) by subscriber rickmoen
I may be missing something here, so I'm phrasing this in the form of a question or two, and it's not rhetorical: Didn't FSF bow to pressure from sundry interest groups and remove the "ASP loophole" language[1] that had been present in some GPLv3 drafts? Therefore, what in Sam Hill is a Software as a Service (SaaS) / ASP / Web 2.0 firm doing adopting a copyleft licence whose copyleft language gets finessed by hosted deployment?
FYI, there are a number of genuinely open source licences, a couple of them OSI certified, that do apply copyleft obligations to the ASP industry. One of the best is Larry Rosen's OSL, and there is also Apple's APSL, both of those being OSI-certified. Non-certified options include Affero GPL (newly reissued as a patch to GPLv3, by the way) and Honest Public License.
On the basis of recent history, it's possible that SugarCRM not only lacks any clever, non-obvious reason why it picked a non-ASP copyleft licence for ASP code, but also doesn't really have any idea what it's doing in this area, and picked GPLv3 just because it has had good press (good press that it generally deserved, IMVAO). Remember, this is the firm that created the first MPL-based ASP licence, and then acted shocked and indignant when it belatedly discovered that its licence permitted forking (when TigerCRM of Chennai forked the codebase), and overreacted by writing what became the prototype MPL + Exhibit B "badgeware" licence that impairs third-party usage through mandated logo advertising without a trademark licence.
It'd be more reassuring if I thought this firm had a master plan, but I now rather strongly suspect it's just a bunch of sales people in an office in Cupertino, staggering from one inadvertent move to the next.
Rick Moen rick@linuxmafia.com
[1] https://weblog.infoworld.com/openresource/archives/2007/03/gplv3_goes_weak.html
[ Thread continues here (2 messages/3.61kB) ]
Rick Moen [rick at linuxmafia.com]
[[[ I have included a portion of this thread for general interest, but the rest of the housekeeping has been elided. -- Kat ]]]
Quoting Ben Okopnik (ben@linuxgazette.net):
> July 2007 (#140): > > * Mailbag > * Talkback > * NewsBytes, by Howard Dyckoff
There was something I annotated at the time of my svn checkin of lg_bytes -- but just realised I should have ALSO put into the STATUS notes. (I'll bet in retrospect that nobody pays attention to svn checkin comments.)
Howard had:
  Red Hat Adds Business Solutions to Open Source RHX
  RHX launch partners include Alfresco, CentricCRM, Compiere, EnterpriseDB, Groundwork, Jaspersoft, Jive, MySQL, Pentaho, Scalix, SugarCRM, Zenoss, Zimbra, and Zmanda.

Problem: A bunch of those are JUST NOT OPEN SOURCE. Zimbra, SugarCRM, Compiere, Groundwork, and Scalix are classic "badgeware", i.e., MPL-variant licensing with added restrictions -- while with CentricCRM, there's not even any room for controversy, since their licence doesn't even permit code redistribution. Jive Software (which I'd not heard of before) turns out to be equally bad.
I have brought this matter to Red Hat's attention several times, along with the presence of actively misleading wording on the Red Hat Exchange site, such as this at the top of https://rhx.redhat.com/rhx/support/article/DOC-1285:
  Red Hat Exchange helps you compare, buy, and manage open source business applications. All in one place and backed by the open source leader. We've collaborated with our open source software partners to validate that RHX applications run on Red Hat Enterprise Linux and are delivered through the Red Hat Network. At RHX, Red Hat provides customers with a single point of contact for support.

There has been no response and no correction of this error, but the very bottom of that page now has this "FAQ" item:
  Are you only accepting open source ISVs into RHX? The initial set of participating ISVs all have an open source focus. We realize that there is debate about which companies are truly open source. To make it transparent to users, RHX includes information about each ISV's license approach. Longer term, we may introduce proprietary applications that are friendly with open source applications.

That is, of course, anything but a straight answer. First, it's nonsense to speak of companies being open source or not -- and the above paragraph in general ducks the question. The issue is whether software is. Second, even if there were debate about the badgeware offerings allegedly being open source, there could be absolutely none about Jive Software's Clearspace or CentricCRM 4.1, which are unambiguously proprietary.
I'm surprised that this deceptive characterisation by Red Hat got past Howard without comment, given that the matter has been extensively covered in recent _Linux Gazette_ issues.
[ ... ]
[ Thread continues here (2 messages/5.40kB) ]
Rick Moen [rick at linuxmafia.com]
----- Forwarded message from Rick Moen <rick@linuxmafia.com> -----
Date: Wed, 27 Jun 2007 14:45:53 -0700 From: Rick Moen <rick@linuxmafia.com> To: Ashlee Vance <ashlee.vance@theregister.co.uk> Cc: Karsten Self <karsten@linuxmafia.com> Subject: Re: (forw) Re: when is an open source license open source?

Quoting Ashlee Vance (ashlee.vance@theregister.co.uk):
> Rick, your explanation helped a great deal. Who are the main culprits of
> consequence besides Sugar?
SugarCRM started the trend, and the other dozen-odd firms (Socialtext, Alfresco, Zimbra, Qlusters, Jitterbit, Scalix, MuleSource, Dimdim, Agnitas AG, Openbravo, Emu Software, Terracotta, Cognizo Technologies, ValueCard, KnowledgeTree, OpenCountry, 1BizCom, MedSphere, vTiger) literally copied their so-called "MPL-style" licence, with minor variations. MuleSource was for a long time a vocal backer of SugarCRM's position (but see below).
Alfresco used to be a major backer of that position, but then suddenly decided to shift to GPLv2, which is what they use now. (They are no longer a badgeware firm.) Company spokesman (and OSI Board member, and attorney) Matt Asay says he tried all along to convince them to do that, and I do believe him.
SocialText (and especially CEO Ross Mayfield) has taken a lead online role in trying to resolve the impasse -- though I personally find most of what he says to be almost purely rationalising, and for him to be mostly unresponsive to (or evasive of) substantive criticism.
MuleSource and Medsphere have, within the last few months, improved their respective MPL + Exhibit B licences (https://www.mulesource.com/MSPL/ https://medsphere.org/license/MSPL.html) very dramatically, in direct response to criticism on OSI's license-discuss mailing list. This work seems to be that of attorney Mitch Radcliffe, and encouraged by MuleSource and Medsphere Board member Larry Augustin (formerly of VA Linux Systems). I have great respect for this work, though I am still trying to properly assess and analyse it.
(Disclaimer: Larry is a friend of mine, though I see him only rarely, and I once was employed at one of his firms.)
You should be made aware of the role of vTiger (of Chennai) in all of this: In August 2004, it forked and since then has offered independently, under its own name and brand, an early version of the SugarCRM codebase, plus various changes of their own devising. It was in response to that forking event, to which SugarCRM CEO John Roberts responded very angrily at the time, that SugarCRM adopted the "Exhibit B" restrictive licensing addendum that then became the hallmark of badgeware licences generally. See second post on https://www.vtiger.com/forums/viewtopic.php?p=22 , and the matching apologia at https://www.vtiger.com/forums/viewtopic.php?p=22 by Christiaan Erasmus of badgeware firm ValueCard (South Africa).
----- End forwarded message -----
[ Thread continues here (5 messages/13.37kB) ]
Martin J Hooper [martinjh at blueyonder.co.uk]
https://www.theregister.co.uk/2007/07/25/rhx_change_redhat/
Rick you might be interested in this article as you had been commenting on the topic recently...
[ Thread continues here (2 messages/1.60kB) ]
Rick Moen [rick at linuxmafia.com]
[Forwarding Ben's private mail, with commentary, at his invitation.]
As a reminder, Centric CRM, Inc. has recently been one of the most problematic of the ASP/Web firms abusing the term "open source" for their products, in part because their flagship product (Centric CRM) has been notorious during most of this past year as the most clearly and unambiguously proprietary software to be offered with the ongoing public claim of being "open source".
I'd call this (below-cited) PR campaign blitz -- apparently, they're intensively hitting reporters known to be following this matter -- really good news, though it has to be read attentively:
o Former OSI General Counsel Larry Rosen's "OSL 3.0" licence is a really good, excellently designed, genuine copyleft licence that is especially well suited for ASP use, because it's one of the very few that have a clause enforcing copyleft concepts within the otherwise problematic ASP market. (In ASP deployments, there is ordinarily no distribution of the code, so the copyleft provisions of most copyleft licences such as GPLv2 have no traction, and are toothless.) Also, as Centric CRM, Inc. is keen to point out, OSL 3.0 is an OSI-certified open source licence.

o At the same time, the careful observer will note that this announcement concerns the product "Centric Team Elements v. 0.9", which is not (yet?) the firm's flagship product. That flagship product remains the entirely separate -- and very, very clearly proprietary -- product "Centric CRM v. 4.1", which one wryly notices has been carefully omitted completely from this communique. Just in case there is any doubt about Centric CRM 4.1's proprietary status, here's one key quotation from the product brochure, about the applicable licence, "Centric Public Licence (CRM)": "The major restriction is that users may not redistribute the Centric CRM source code."

Now, it may be that the Centric CRM product is on the way out, and that Centric Team Elements (with a genuine open source licence) will be taking its place. Or maybe not. Either way:
The bad news, but perhaps not too bad, is that Centric CRM, Inc. has spent this past year to date falsely and misleadingly claiming that its product line is open source -- and deflecting critics by claiming that the term "open source" is (paraphrasing) subject to redefinition and needn't be limited to what OSI (inventor of that term in the software context, and standards body) defines it to be. That misleading and deceptive language is still very much a prominent part of the company's pronouncements to this day, remains on the Web site, and doesn't seem to be disappearing.
The good news is that the firm appears to be sensitive to the public relations problem it created for itself, and may be taking steps to fix it.
----- Forwarded message from Ben Okopnik <ben@linuxgazette.net> -----
[ ... ]
[ Thread continues here (1 message/16.71kB) ]
Rick Moen [rick at linuxmafia.com]
My reply, just posted, is reproduced below the forwarded posting from "khim".
----- Forwarded message from LWN notifications <lwn@lwn.net> -----
Date: 26 Jul 2007 16:01:24 -0000 To: rick@linuxmafia.com From: LWN notifications <lwn@lwn.net> Subject: LWN Comment response notification
The following comment (https://lwn.net/Articles/243195/) has been posted in response to https://lwn.net/Articles/243075/.
As you requested, the text of the response is being sent to you.
*Is this a blunder, or just too subtle for me?* [Announcements]
Posted Jul 26, 2007 16:00 UTC (Thu) by khim

> FYI, there are a number of genuinely open source licences, a couple of
> them OSI certified, that do apply copyleft obligations to the ASP industry.

Yup. And they are mostly unsuccessful ones. It's quite hard to distinguish two cases:

1) where your package is used for SaaS (like Google)
2) where your package is used for some private endeavour (like LWN)

Licenses like AGPL/APSL punish equally - that's why I'll probably never use AGPL/APSL-licensed software. And if I'll be forced to use such software I'll do everything possible to not ever fix or change it. Even badgeware is better from practical viewpoint. If you'll think about it it's only logical. Yes, usurpation of the code by SaaS vendors is a problem but AGPL is worse medicine than the disease itself...

To stop receiving these notifications, please go to https://lwn.net/MyAccount/rickmoen/. Thank you for supporting LWN.net.
----- End forwarded message -----
*Is this a blunder, or just too subtle for me?*
Posted Jul 26, 2007 21:38 UTC (Thu) by subscriber rickmoen
"khim" wrote:
> Yup. And they are mostly unsuccessful ones.
Your "unsuccessful". My "underappreciated so far".
> It's quite hard to distinguish two cases:
And, in my personal view, pointless. (My opinion, yours for a small fee and agreement to post my logo on your forehead. And by the way, I also deny the premise that LWN is a "private endeavour" in any sense meaningful to this context. Of course, Jon and co. happen to use their own code, IIRC.)
Rick Moen rick@linuxmafia.com
Rick Moen [rick at linuxmafia.com]
Ever have one of those days, where you fear that your sarcasm might have become so caustic that it might need hazardous-substance labels?
----- Forwarded message from Simon Phipps <Simon.Phipps@Sun.COM> -----
Date: Fri, 06 Jul 2007 18:21:01 +0100 From: Simon Phipps <Simon.Phipps@Sun.COM> To: Rick Moen <rick@linuxmafia.com> Cc: license-discuss@opensource.org X-Mailer: Apple Mail (2.752.3)
Subject: Re: For Approval: Open Source Hardware License
On Jul 6, 2007, at 02:12, Rick Moen wrote:
> Quoting Jamey Hicks (jamey.hicks@nokia.com):
>
>> There are no OSI-approved licenses for open source hardware, so I am
>> proposing this license.
>
> My understanding is that OSI's licence approval process
> (https://www.opensource.org/docs/certification_mark.html) is specifically
> for software licences. That scope limitation came up previously when
> people proposed documentation licences for certification; I suspect the
> same logic applies here.
I'm afraid the distinction between "software" and "hardware" is getting harder and harder to make. The Verilog that's used to make the UltraSPARC T1 is definitely software, and the GPL (or any other Free software license approved for open source community use by OSI) seems 100% applicable to me.
If we allow special "hardware" licenses because the copyrighted work is used for that purpose, we are on a slippery slope towards many other specialist (and, in my view, redundant) sub-categories.
S.
----- End forwarded message -----
----- Forwarded message from Rick Moen <rick@linuxmafia.com> -----
Date: Fri, 6 Jul 2007 11:43:36 -0700 From: Rick Moen <rick@linuxmafia.com> To: license-discuss@opensource.org Subject: Re: For Approval: Open Source Hardware License

Quoting Simon Phipps (Simon.Phipps@Sun.COM):
> I'm afraid the distinction between "software" and "hardware" is > getting harder and harder to make.
You know, there are circumstances in which I'd raise that point, too. However, I'd feel a bit silly raising it in circumstances where something is described very unambiguously as a licence specifically for hardware _and used_ (or at least planned to be used) only for that purpose, to a group that certifies licences specifically for software.
(I do doubt that OSI would refuse to certify a licence actually used for software, on no better grounds than it having the word "hardware" in it.)
Nonetheless, your ability to discern shades of grey is admirable. ;->
-- Cheers, Rick Moen, rick@linuxmafia.com
   "English is essentially a text parser's way of getting faster processors built."
   -- John M. Ford, https://ccil.org/~cowan/essential.html
Rick Moen [rick at linuxmafia.com]
Trying to figure out how little they can do?
----- Forwarded message from Matt Mattox <mmattox@redhat.com> -----
Date: Thu, 05 Jul 2007 13:20:56 -0400 From: Matt Mattox <mmattox@redhat.com> To: rick@linuxmafia.com Subject: RHX and License Clarity

Hi Rick,
Just a quick note responding to your comment in the "More About RHX" section of RHX. We're working on a solution that will make the license approach used by each RHX software vendor very clear to users, including whether or not they are OSI-approved. I'd love to get your feedback on our approach if you have the time and interest. Let me know....
Thanks, Matt Product Manager, RHX
----- End forwarded message -----
----- Forwarded message from Rick Moen <rick@linuxmafia.com> -----
Date: Thu, 5 Jul 2007 11:02:52 -0700 From: Rick Moen <rick@linuxmafia.com> To: Matt Mattox <mmattox@redhat.com> Subject: Re: RHX and License Clarity

Quoting Matt Mattox (mmattox@redhat.com):
> Just a quick note responding to your comment in the "More About RHX" > section of RHX. We're working on a solution that will make the license > approach used by each RHX software vendor very clear to users, including > whether or not they are OSI-approved. I'd love to get your feedback on > our approach if you have the time and interest. Let me know....
Hi, Matt. Thank you for your note.
The main problem is actually the statements on the RHX main Web pages that serve as entry points to RHX (and, in the recent past, in all RHX press releases). For example:
Starts out with "Trusted open source software" -- without saying that some offerings are open source and some proprietary -- and goes on for the entire page talking how RHX has helped you select open source applications, etc.https://rhx.redhat.com/rhx/support/article/DOC-1285 ("More about RHX" page)
Starts out with "Red Hat Exchange helps you compare, buy, and manage open source business applications. All in one place and backed by the open source leader. We've collaborated with our open source software partners to validate that RHX applications run on Red Hat Enterprise Linux and are delivered through the Red Hat Network."Letting people _dig down_ to licensing specifics would be nice but wouldn't fix the problem of false and misleading general statements everyone encounters on the way in. The latter should be replaced without delay.
And future RHX press releases should mention that it includes both proprietary and open source applications.
The longer the delay in fixing this problem, the more Red Hat's reputation for integrity is suffering. Please make no mistake: Your firm is being damaged by this.
----- End forwarded message -----
[ Thread continues here (3 messages/5.47kB) ]
Rick Moen [rick at linuxmafia.com]
----- Forwarded message from rick -----
Date: Tue, 26 Jun 2007 21:21:42 -0700 To: ashlee.vance@theregister.co.uk Cc: Karsten Self <karsten> Subject: Red Hat Exchange has a serious flaw

Dear Ashlee:
I note with interest your recent articles in ElReg: "Red Hat's Exchange roars like a muted lamb" and "Red Hat RHXes out to open source partners". However, I'd like to point out one problem you haven't yet covered:
Many of Red Hat Exchange's offerings, although all are implied to be open source, are in fact nothing of the kind. For example, the offered products from Zimbra, SugarCRM, Compiere, CentricCRM, and GroundWork are very clearly under proprietary licences of various descriptions.
When I attended Red Hat's RHEL5 product launch in San Francisco on March 14, I heard RHX described for the first time, immediately noticed the problem, and quietly called it to the attention of Red Hat CTO Brian Stevens. Stevens acknowledged the point, and said (loosely paraphrased) that their Web pages should be adjusted to make clear that not all RHX offerings are open source -- which is indeed a sensible remedy, but it hasn't yet happened.
I also attempted to call Red Hat's attention to the problem via the designated RHX feedback forum, at https://rhx.redhat.com/rhx/feedback/feedback.jspa . (You'll note my comment near the bottom.)
I'm sure it's an honest slip-up but, accidentally or not, Red Hat has misled its customers for the past several months on this matter, and is continuing to do so.
Best Regards, Rick Moen rick@linuxmafia.com
----- End forwarded message -----
----- Forwarded message from Rick Moen <rick@linuxmafia.com> -----
Date: Wed, 27 Jun 2007 11:54:04 -0700 From: Rick Moen <rick@linuxmafia.com> To: Ashlee Vance <ashlee.vance@theregister.co.uk> Cc: Karsten Self <karsten> Subject: Re: Red Hat Exchange has a serious flaw

Quoting Ashlee Vance (ashlee.vance@theregister.co.uk):
> Thanks so much for this, mate. Will investigate.
Sure. In case it will help:
Most if not all of those codebases are ASP (Web app) code, which poses a thorny problem: Suppose you are, say, Google, and wish to behave benignly towards open source with your Web apps. You deploy a Web 2.0 hosted application, and release its source code to the community under a proper, forkable licence such as BSD / MIT X11 (simple permissive type) or GPLv2 (copyleft), that fully satisfies the Open Source Definition. For the sake of illustration, let's assume GPLv2.
Google's competitor EvilCo swoops by, grabs the source tarball, modifies it extensively behind closed doors, and deploys it under a completely different name as a hosted Web app (product/service) of its own, bearing very little resemblance to Google's original application. Let's say that EvilCo nowhere mentions its borrowing from Google, and that EvilCo doesn't provide anyone outside its employees access to the modified source code.
[ ... ]
[ Thread continues here (1 message/6.71kB) ]
Rick Moen [rick at linuxmafia.com]
Red Hat, Inc. seems to have taken quick and effective action to correct some misleading statements about licensing that had previously been on the firm's Web pages for the "Red Hat Exchange" (RHX) partner-software sales program:
https://www.redhat.com/rhx/ https://rhx.redhat.com (and sub-pages)
Their corrections have been quite thorough and accurate! The firm should be commended for this very responsive action. E.g., the main description pages for RHX say things like:
  RHX helps you compare, buy, and manage business applications -- all available from the open source leader. All in one place. We've done the work for you. You'll find profiles, ratings, prices -- even free trials -- for every application. Working in collaboration with our partners, applications are validated to run on Red Hat Enterprise Linux, delivered through Red Hat Network, and backed by Red Hat as the single point of contact for support.

All of the former claims and implications of RHX's offerings being uniformly open source have been corrected, top to bottom. I've sent a specific thanks to the manager in question.
-- Cheers, "Learning Java has been a slow and tortuous process for me. Every Rick Moen few minutes, I start screaming 'No, you fools!' and have to go rick@linuxmafia.com read something from _Structure and Interpretation of Computer Programs_ to de-stress." -- The Cube, www.forum3000.org
[ In reference to "Build a Six-headed, Six-user Linux System" in LG#124 ]
Dave Wiebe [dawiebe at gmail.com]
Hello,
In response to Bob's "Build a Six-headed, Six-user Linux System":
I am curious to know if this would work on my laptop, using the laptop display screen and a monitor attached through the monitor connection at the back. Would my standard laptop ATI video card be able to support that (as I read somewhere that dual-head video cards are preferred)?
Thanks for your response.
--David Wiebe.
Laptop: Dell Inspiron 1501
[ Thread continues here (2 messages/2.20kB) ]
[ In reference to "/tag/9.html" in LG#68 ]
Mike Steele [mike_steele_2000 at yahoo.com]
In response to
Users are caught in the middle of a debate over whether reverse records should be used for identification. The pro argument is that it helps identify spammers and abusers. The con argument (which I believe) is that the purpose of domain names is convenience: so you don't have to remember a number, and so that a site can maintain a "permanent" identifier even if they move to another server or a different ISP. You shouldn't /have/ to have a domain name, much less have it set to any particular value. And to identify scRipT kIddyZ, just do a simple traceroute. The second-last hop is their ISP (or part of their own network), and ISPs always have their own domain name showing. And what if a computer has several domain names, each hosted at a different organization? There can be only one reverse record, so all the other names will be left out in the cold.
at https://linuxgazette.net/issue68/tag/9.html
I always assumed the cost to be for the liability; otherwise, they would just automate it and have you do it online. This seems to thwart spam: email administrators add a reverse-record lookup to make sure it is not a "rogue" mail server put online by just anyone. They have to contact their ISP and get a reverse record. It's like adding locks to your door - it isn't a guarantee, just a deterrent. That one reverse record is for the mail server. Otherwise, RRs are not necessary, and so it doesn't matter if there are multiple domains. You can CNAME the domains' MX records to the actual server name.
Dunno for sure, but that's my 2 cents.
Mike
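For readers who want to check this sort of thing themselves, a small sketch with dig; the address and hostnames below are placeholders from the documentation ranges, not real hosts:

dig -x 203.0.113.25 +short      # reverse (PTR) lookup for an address
dig example.com MX +short       # the domain's MX records
dig mail.example.com +short     # forward (A) lookup for the mail host itself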
[ In reference to "Away Mission: Sem-Tech 07 Conference, May 2007, San Jose, CA" in LG#140 ]
Neil Youngman [ny at youngman.org.uk]
We seem to have a broken link in this article, just below the screenshot it says "To view a demonstration of Cognition's technology, please visit www.CognitionSearch.com" and links to https://linuxgazette.net/current/'https://cognitionsearch.com
There's probably also a small typo: "Hogkin's disease" should probably be "Hodgkin's disease".
Neil
[ Thread continues here (2 messages/1.85kB) ]
Peter Knaggs [peter.knaggs at gmail.com]
I've been gathering together all the hardware setup info needed to get Ubuntu working (surprisingly well) with the Intel Apple iMac:
https://www.penlug.org/twiki/bin/view/Main/LinuxHardwareInfoAppleiMac24
There's not much original work on that page, but at least having all the hardware setup info in one place is handy, so I thought I'd post it as a 2-cent tip.
Ben Okopnik [ben at linuxgazette.net]
I just got an iPod Shuffle, and am about to load it up with my favorite tunes; however, I didn't want to dink around with multiple reloads of the song list if it was too big. Since flash devices only have so many write cycles before they start losing their little brains, minimizing writes is a Good Thing. So, I needed to figure out the total size of the files in my M3U playlist - and given the kind of questions that often come up in my shell scripting classes and on LG's Answer Gang lists, I thought that our readers would find some of these techniques (as well as the script itself) useful. Herewith, the "m3u_size" script. Enjoy!
#!/bin/bash
# Created by Ben Okopnik on Thu Jul 12 15:27:45 EDT 2007

# Exit with usage message unless specified file exists and is readable
[ -r "$1" ] || { printf "Usage: ${0##*/} <file.m3u>\n"; exit; }

# For the purposes of the loop, ignore spaces in filenames
old=$IFS
IFS='
'

# Get the file size and sum it up
for n in `cat "$1"`; do s=`ls -l "$n" | cut -d ' ' -f 5`; ((total+=$s)); done

# Restore the IFS
IFS=$old

# Define G, M, and k
Gb=$((1024**3)); Mb=$((1024**2)); kb=1024

# Calculate the number of G+M+k+bytes in file list
G=$(($total/$Gb))
M=$((($total-$G*$Gb)/$Mb))
k=$((($total-$G*$Gb-$M*$Mb)/$kb))
b=$((($total-$G*$Gb-$M*$Mb-$k*$kb)))
echo "Total: $total (${G}G ${M}M ${k}k ${b}b)"
-- * Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
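As a side note, not part of Ben's script: on systems with GNU coreutils, the same total can be computed with stat(1) instead of parsing ls output, which also copes with odd filenames. The playlist name here is just a placeholder.

#!/bin/bash
# Sketch: sum the sizes of the files listed in an M3U playlist with stat.
total=0
while IFS= read -r f; do
    # skip comment lines and anything that isn't a regular file
    [ -f "$f" ] && total=$(( total + $(stat -c %s "$f") ))
done < "${1:-playlist.m3u}"
echo "Total: $total bytes"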
[ Thread continues here (13 messages/20.54kB) ]
By Howard Dyckoff and Samuel Kotel Bisbee-vonKaufmann
Please submit your News Bytes items in plain text; other formats may be rejected. A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release. Submit items to bytes@linuxgazette.net.
To grow its open source Solaris community, Sun has launched an effort to make the Solaris OS more like Linux distros. Ian Murdock, founder of the Debian distro and the Linux company Progeny, explained the details to Sun partners and the press in July, and in the process revealed more about his role at Sun since being hired there.
Murdock had spoken to a packed session at JavaOne last May, and talked in general terms of improving the packaging and content of Solaris to make it friendlier to the open source community. Project Indiana is that effort, and adds a road map for Solaris developers.
Murdock spoke about the similarities and differences of Linux and Solaris, noting the "distro" model as a Linux "innovation" for users and developers. Both Solaris and Linux support GNOME and the X Window System, office and Web apps, he noted. "Right now, it's confusing. If you want to compile Open Solaris, you need to use Solaris Express Community Edition first," according to Murdock. "Originally, Linux didn't have a binary deliverable, so I and others stepped up."
Project Indiana aims to combine the best of both the Solaris and Linux worlds. With Murdock on-board, Sun has been looking at the distro and package model that is now intrinsic to Linux. Open Solaris will have a developer community, while still having enterprise-grade support from Sun and its partners. Sun hopes users of Open Solaris will eventually convert to full enterprise support licenses.
"We will provide Open Solaris as a binary distro, with a strong focus on unique Solaris features and the famous Solaris compatibility." The main features of Indiana will include an easier install, ZFS as default file system, and Net-based package management.
"We are competing in the same sense that RH competes with Debian, or Ubuntu competes with Debian. We are growing the Open Source environment," Murdock explained. "We can bridge the gap for developers, but we have to give them some compelling reasons to cross over. We have an opportunity here to provide the same level of choice that Linux has provided, but without the fracture..." Murdock added.
Project Indiana will be for developers and early adopters, with short 6-month release cycles, while Enterprise Solaris will have long release cycles. It will be a 2-tier model, with a dev tier more like the Linux model. The first Indiana release is planned for the Spring of 2008, with an earlier test release to selected users in the Fall of 2007.
Murdock was more closed when discussing the issue of open source licenses. First, he said that was under consideration, and would be decided by others at Sun. He did note that the criteria would include what was best for Sun, its partners, and the Solaris community.
"We are very focused on the technology parts of this. I don't tend to find licensing discussions very interesting. To me, licenses are a necessary evil. The real question is... is the fact that Solaris is not under GPL going to matter? My view on the license changes is that if it benefits Sun and the OS community, we should do it. Another way of looking at this is... we could take the device driver code in Linux, and help make the driver support in Solaris much better than it is today. So, that to me is a reason to GPL Solaris. The ability to borrow code could be an advantage, but then our code can also be used." Murdock added that the decision process would have to include the growing developer community for Open Solaris. In May and June, Sun CEO Jonathon Schwartz expressed a strong interest in the new GPL v3, but seems to be backing away from that position more recently.
At the Usenix '07 Technical Conference in June, Mary Lou Jepsen, CTO of One Laptop Per Child (OLPC), detailed progress on the hardware and software development of this new portable computing platform, and explained that the first mass shipments to Third World countries were planned for the end of summer. That has now been pushed back to October, to accommodate fixes and new features and to wait for the required 3 million-order threshold. (The first-generation units will cost $150-175, but will approach the $100 goal over the next year.)
Intel, whose executives had strongly criticized the non-standard technology and novel approach of OLPC, buried the conflict in July and joined OLPC's board. Intel has what it sees as a competing effort with its "Classmate" mini-laptop design, based on Windows and Intel chipsets. (There is also an option to use Mandriva.) The target price for the "Classmate" is about $300, but it includes newly developed educational software. A fuller description of this PC is here: https://www.classmatepc.com
In contrast, OLPC uses the open source development model, and is trying to develop a system that Third World kids and their teachers can support and fix themselves. The OS is a reduced version of Red Hat Linux, and many design changes have been incorporated from Third World pilot testers, to support tropical environments, erratic power, the likelihood of laptops falling on stone or concrete floors, etc. The keyboard is sealed to allow use in rainy environments. New designs in the LCD and graphics controller allow the laptop screen to be scrolled without CPU involvement, and used in direct sunlight with only 0.1 watts of power consumption. That allows an OLPC unit with a charged battery to run all day and all night. There are also hand-powered generators and solar charging stations for classroom use. (Who wouldn't want one of these for camping or car trips?) The XO is also the greenest PC, in terms of manufacturing and recycling.
The OLPC "XO" units employs an efficient dual antenna, and automatically creates an inter-meshed peer-peer network across a village, with line of sight distances of up to half a kilometer between units. There is also learning software for kids called "Sugar" that is still in development. Linux kernel developers and software engineers are encouraged to join in different parts of this community project. Here is the link to sign up: https://wiki.laptop.org.
The OLPC keynote is now available on the Usenix conference archive (https://www.usenix.org/events/usenix07/tech/slides/jepsen.pdf). Photos from the presentation are available here: picasaweb.google.com/howarddy/Usenix07OLPC. OLPC is based on learning theories pioneered by Seymour Papert and Alan Kay.
Moblin.org is the site that hosts the Mobile & Internet Linux Project open source community. The site allows developers to prototype new ideas, and build community around them. Intel has also garnered support from vendors Canonical and Red Flag Linux.
Currently, moblin.org hosts a number of projects, including an Image Creator, Browser, UI framework, power policy manager, and various non-PC-oriented applications and software components.
"Linux on mobile devices has gotten a lot of traction in the last few years," says Dirk Hohndel, Intel's chief Linux and open source technologist. "Moblin.org will help to improve the integration of many of the existing components targeted at such devices, and should help foster development of new use models and new features. In particular, both power management and UI need to be fine-tuned for these new form factors, and that is an area where development at moblin.org has already shown good results."
Canonical has released Storm, a generic open source object relational mapper (ORM) for Python. Storm is designed to support communication with multiple databases simultaneously. Canonical is known for the popular Ubuntu operating system and Launchpad, a Web-based collaboration platform for open source developers.
"Storm is an ORM that simplifies the development of database-backed applications in Python, especially for projects that use very large databases or multiple databases with a seamless Web front-end", said Gustavo Niemeyer, lead developer of Storm at Canonical. "Storm is particularly designed to feel very natural to Python programmers, and exposes multiple databases as /stores/ in a clean and easy-to-use fashion."
The project has been in development for more than a year, and is now publicly available under the LGPL license. This is the first complete Launchpad component to be released as open source software. Launchpad currently includes developer data for several thousand projects, and is used by tens of thousands of developers and other free software contributors.
The Storm project welcomes participation, and has a new Web site at https://storm.canonical.com. That site includes a tutorial, and links to allow developers to download, report bugs, and join the mailing list.
Canonical also announced the beta release of the Launchpad Personal Package Archive (PPA) service, a new way for developers to build and publish packages of their code, documentation, and other contributions to free software.
Individuals and teams can each have a PPA, allowing groups to collaborate on sets of packages, and solo developers to publish their own versions of popular free software. Developers upload packages to a PPA, and have it built for multiple architectures against the current version of Ubuntu. Each user gets up to one gigabyte of Personal Package Archive space, which works as a standard Ubuntu software package repository. Free PPAs are available only for free ("libre") software packages.
Mark Shuttleworth, founder of Ubuntu, explained the significance of Launchpad Personal Package Archives for the Ubuntu community: "Many developers want to modify existing packages, or create new packages of their software. The PPA service allows anyone to publish a package, without having to ask permission or join the Ubuntu project as a developer. This is a tremendous innovation in the free software community."
The Launchpad PPA service is currently in beta. To participate in the beta program, send an e-mail to ppa-beta@launchpad.net.
Launchpad PPA Service will be released for general use on August 22, 2007, and will be available at https://launchpad.net/ubuntu/+ppas
SIGGRAPH officials will hold the international FJORG! competition - an "iron animator" event - where 15 competing teams from around the world have 32 hours to create the world's best character-driven animation in front of a live, "Gladiator-style" audience and judging panel, at the San Diego Convention Center.
To earn the title of "Viking Animator" and several other prizes, each team will be tasked with creating a 15-second (or longer) animation, based on a theme to be provided during the FJORG! kickoff on Monday, 6 August 2007. In the spirit of true competition, FJORG! will test contestants' skill, talent, creativity, teamwork, and physical endurance - all throughout multiple staged distractions such as live music, belly dancing, acrobatics, and martial arts performances.
FJORG! contestants will be required to complete the animations using their own talents and skills, along with technology assets supplied at the event. (Outside resources are not permitted.) Competitors will have access to the following applications:
With support provided by AMD, DreamWorks Animation, and HP, FJORG! is being held in conjunction with SIGGRAPH 2007, the 34th International Conference and Exhibition on computer graphics and interactive techniques at the San Diego Convention Center, August 5-9, 2007. For video of the competition and the winning animations, visit https://www.workstations.tv, throughout SIGGRAPH. For more information about the competition in general, visit https://www.siggraph.org/s2007/presenters/fjorg/.
France's XiTi Monitor has reported solid gains for Mozilla-based Web browsers. Overall share of the European market grew from 24.1% to 27.8%. This growth includes Firefox's gain of nearly 7 points in the last year.
Slovenia has the highest Firefox use in Europe, with a rate that is now over 45%. Finland is close behind. For more details see: https://www.xitimonitor.com/en-us/browsers-barometer/firefox-july-2007/index-1-2-3-102.html.
The great state of Massachusetts has come out of the wilderness like a latter-day Solomon, and decided to support both Microsoft's Office Open XML format and the OASIS Open Document Format that it had previously supported. The decision to adopt the ECMA version of Open XML was made in early July, and is part of the Massachusetts Enterprise Technical Reference Model (ETRM) 4.0, a specification for the state's IT operations. The draft listed ECMA-376 as one of its major revisions. Earlier, the ETRM specified only ODF as a standard, open format.
Since its original adoption of ODF, two of the state's CIOs have been forced to resign.
Microsoft has been lining up Linux distros to be part of the Open XML Translator project, which is hosted on SourceForge. Currently, these include Linspire, Novell, and Xandros. Sun has also released its own ODF plugin translator for Office. The state IT group took these efforts into account during its deliberations.
The Linux Foundation has challenged the decision, saying "... it is wrong for the ITD to conclude that a specification that helps to perpetuate the dominance of a single product can be properly called a true open standard", and it encouraged concerned members of the open source community to send their comments to the Massachusetts ITD at standards@state.ma.us.
Linux Foundation board member Andrew Updegrove wrote to the Massachusetts IT Department, asking them to consider the public and historical impacts, "It is also important to the future of open document formats in the wider world, as well. The impact of Massachusetts in rectifying the historical situation has already been profound. But it has not been sufficient. Earlier this decade, Microsoft properly looked out for its stockholders' best interests by declining to participate in the OASIS working group that created ODF, thereby increasing the likelihood that ODF would die, and increasing the likelihood that its dominance would continue. That decision was not, of course, the best decision for end-users, including government purchasers, because it perpetuated a situation where long-term access to important documents was in the control of a single vendor.
"But ODF did not die. Instead, it was completed, although it received little attention, either publicly or among potential implementers. Only with the announcement of the ITD's decision and the realization that a market for ODF-compliant products might develop did interest broaden and deepen. Microsoft is hardly to be blamed for lending no support to the success of ODF. But neither should it be rewarded for launching a competing, self-serving standard as a next-best defense against erosion of its dominant position."
This issue is important in itself for the future of open standards, but the main battle is the September ballot in ISO on supporting OOXML. That could derail ODF and a wider range of open standards.
Here is a link to articles discussing the position of the Linux Foundation: https://lxer.com/module/newswire/byuser.php?user=Andy_Updegrove.
FiXs West Coast Conference
July 31 - August 1; Monterey,
California; https://www.fixs.org/events.htm
Black Hat Security Conference
July 31 - August 2; Las Vegas, NV; https://www.blackhat.com/html/bh-link/briefings.html
Security '07 / HotSec '07
August 6-10; Boston, MA; https://www.usenix.org/events/hotsec07/
MetriCon 2.0, Workshop on Security Metrics
August 7; Boston, MA; https://www.securitymetrics.org/content/Wiki.jsp?page=Metricon2.0
SIGGRAPH 2007
August 5 - 9; San Diego, CA; https://www.siggraph.org/s2007/attendees/schedule/
Linux World - 2007
August 6 - 9; San Francisco, CA; Linux World - 2007
Real-World Java Seminar
August 13; Roosevelt Hotel, New York City; https://realworldjava.com/
Linux Kernel '07 Developers Summit
September 4 - 6; Cambridge, UK; https://www.usenix.org/events/kernel07/
1st International LDAPv3 Conference
September 6 - 7; Cologne, Germany; https://www.guug.de/veranstaltungen/ldapcon2007/
Rich Web Experience Conference
September 6 - 8; Fairmont Hotel, San Jose, CA; https://www.therichwebexperience.com/
BEAWorld 2007 - San Francisco
September 10 - 12; Moscone Convention Center, San Francisco, CA; https://www.bea.com/beaworld/us/
RailsConf Europe 2007
September 17 - 19; Berlin, Germany; https://www.railsconfeurope.com/
Gartner Open Source and Web Innovation Summits
September 17 - 21; Las Vegas, NV; https://www.gartner.com/EvReg/evRegister?EvCd=OS3
Intel Developer Forum - 2007
September 18 - 20; Moscone Center West, San Francisco, CA; https://developer.intel.com/IDF/
Software Development Best Practices 2007 and Embedded Systems Conference
September 18 - 21; Boston, MA; https://www.sdexpo.com/2007/sdbp/
RFID World Boston
September 19 - 20; Boston, MA; https://www.shorecliffcommunications.com/boston/
AJAXWorld Conference West
September 24 - 26; Santa Clara, CA; https://www.ajaxworld.com/
Semantic Web Strategies Conference 2007
September 30 - October 1; San Jose Marriott, San Jose, CA; https://www.semanticwebstrategies.com/
Ethernet Expo 2007
October 15 - 17, 2007; Hilton, New York, NY; https://www.lightreading.com/live/event_information.asp?survey_id=306
ISPCON FALL 2007
October 16 - 18; San Jose, CA; https://www.ispcon.com/
Interop New York
October 22 - 26; https://www.interop.com/
CSI 2007
November 3 - 9; Hyatt Regency Crystal City, Washington, DC; https://www.csiannual.com/
Interop Berlin
November 6 - 8; https://www.interop.eu/
Oracle OpenWorld San Francisco
November 11 - 15; San Francisco CA; https://www.oracle.com/openworld/
Supercomputing 2007
November 9 - 12; Tampa, FL; https://sc07.supercomputing.org/
The latest stable version of the Linux kernel is 2.6.22.1
In late July, Linspire released the updated version of its distro, including the Microsoft media codecs and file filters. Version 6.0, based on the Ubuntu 7.04 distro, includes KDE 3.5, SMP (Dual Core) kernel support, improved boot times, and support for the current Windows Media 10 audio and video codecs. Linspire 6 will have Microsoft TrueType fonts, including Arial, Georgia, Times New Roman, and Verdana.
The Open XML Translator enables bi-directional compatibility, so that files saved in Open XML can be opened by OpenOffice.org users, and files created by OpenOffice.org can be saved in Open XML format. As a result, end users of Microsoft Office and OpenOffice.org will now be able to more easily share files, as documents will better maintain consistent formats, formulas, and style templates across the two office productivity suites.
[ Except... not really. Once you get past the hype of the Linspire press release, please see "The Commonwealth of Massachusetts adds Open XML Document Format" on this very NewsBytes page. -- Ben ]
The open source Open XML/ODF Translator project can be viewed here: https://sourceforge.net/projects/odf-converter/.
Sun Microsystems has announced it will release the Solaris Cluster source code through the HA (High Availability) Clusters community on the OpenSolaris site. Sun's first contributions are application modules, or agents, which enable open source or commercially available applications to become highly available in a cluster environment.
The Open HA Cluster code will be made available in three phases, beginning at the end of June and continuing over the next 18 months. In the first phase of Open HA Cluster, Sun will deliver code for most of the high-availability agents offered with the Solaris Cluster product. Solaris Cluster's high-availability agents allow developers to cluster-enable applications to run as either scalable or failover services. Sun is also making available the source code for the Solaris Cluster Automated Test Environment (SCATE), along with agent-related documentation, to assist in testing new agents. The test framework and first test suite will be contributed at the end of June 2007. Among these agents are the Solaris Containers agent, the BEA Weblogic agent, and PostgreSQL.
Agents written using Open HA Cluster will also run on Solaris Cluster version 3.2 on the Solaris 10 OS. Subsequent phases of Open HA Cluster will include delivery of the code for the recently released Solaris Cluster Geographic Edition - software that enables multi-site disaster recovery by managing the availability of application services and data across geographically dispersed clusters. Later, Sun will release the code for the core Solaris Cluster infrastructure, again with SCATE infrastructure tests and documentation. Sun is using the Common Development and Distribution License (CDDL) for the code, as it currently does for Solaris.
To learn more about Open High Availability Cluster and the HA Clusters community on OpenSolaris, and to review a complete list of the high-availability agents offered with the Solaris Cluster product, please visit: https://www.opensolaris.org/os/community/ha-clusters/
Samba.org has announced that future releases will be published under the new GPL v3. Prior versions of Samba will remain under the GPL v2.
In the July announcement, Samba.org specified that all versions of Samba numbered 3.2 and later will be under the GPLv3, whereas all versions of Samba numbered 3.0.x and before will remain under the GPLv2. But there is a caveat, as noted in the FAQ about the licensing change:
"The Samba Team releases libraries under two licenses: the GPLv3 and the LGPLv3. If your code is released under a "GPLv2 or later" license, it is compatible with both the GPLv3 and the LGPLv3 licensed Samba code.
"If your code is released under a 'GPLv2 only' license, it is not compatible with the Samba libraries released under the GPLv3 or LGPLv3, as the wording of the 'GPLv2 only' license prevents mixing with other licenses. If you wish to use libraries released under the LGPLv3 with your 'GPLv2 only' code, then you will need to modify the license on your code."
Zenoss, known for its existing free IT monitoring product Zenoss Core 2 (released in June under GPL2, with all sub-projects under GPL or compatible licenses), has released Zenoss Enterprise Edition 2.0 on top of its Core 2 platform. Enterprise Edition 2.0 will provide all the same features as its free, open source relative, but will add "end-user experience monitoring" for Web, e-mail, and database applications. This allows IT managers to receive integrated user reports along with the standard network and equipment statuses, while providing more depth to the IT person's understanding of users' habits. Zenoss is also releasing custom ZenPacks - a plug-in framework that allows community members to write their own features, skins, etc. - that will be published only under their Enterprise Subscription. For more information on Zenoss's products, subscriptions, and how they all relate to one another, visit https://www.zenoss.com/product/overview.
Zenoss runs on Linux (GNU build environment required; Red Hat Enterprise, Fedora Core, Ubuntu, and SUSE are known to work), FreeBSD, and Mac OS X. VMplayer and Zenoss Virtual Appliance are required to run Zenoss products on Windows.
Zenoss's homepage: https://www.zenoss.com
On July 28th, Everex announced that its IMPACT GC3502 PC will feature the open-source office suite OpenOffice.org. Aimed at back-to-school students, Everex is hoping to offer low-budget consumers a powerful, fully featured package. Most PC producers offer either proprietary, high-cost office suites, which sharply increase the cost of the PC, or pre-installed trial software, which dissatisfies customers and does not reflect well on the producer. Everex believes that, by keeping its prices low and its offers large, it will get a jump start on its competitors in the quickly approaching school year.
Everex IMPACT GC3502 specifications: https://www.everex.com/gc3502/
OpenOffice.org applications included: Writer, Calc, Impress, Draw, Base, Math
OpenOffice.org's homepage: https://www.openoffice.org
Everex's homepage: https://www.everex.com
[Editor's note: While I think it is wonderful that OpenOffice.org is receiving more attention and more companies are seeing open source as more than a server-side solution, I am not entirely sure that generations of Windows Office-trained users are ready for the switch. Having used OpenOffice.org on Windows and Linux platforms in two separate educational environments, now - both times as a student - I remember experiencing some formatting issues when exporting .doc files, which all my professors wanted, and when printing. While I was easily able to overcome them, this just added another level of frustration that may prove too much for the average computer user. Also, most schools' IT departments that I have experienced will not support non-standard Mac and Windows software. While OpenOffice.org has come on by leaps and bounds in solving these transitional issues, I would not want to deploy a solution in an office - or my grandparents' house - unless the basic functionality translated seamlessly. -- S. Bisbee]
Network Engines, Inc., a provider of storage and security appliance products and services, announced two software initiatives to help software developers deliver applications as appliances. First, Network Engines announced Appliance Certified Edition Linux (ACE Linux), the first Linux distribution to deliver integrated lifecycle management for server appliances. Second, the company also announced that Network Engines Web-based Services - NEWS - its hardware and image management system for Windows-based application appliances, is now available for ACE Linux.
Hardware and image management integration with an appliance-proven Linux release should lower costs and speed time to market for Linux-based appliances. "As the appliance market grows, ISVs recognize that they need to focus on adding value to their application, not the underlying components like the OS and management plane," said Hugh Kelly, Vice-President of Marketing for Network Engines. "We developed our Network Engines Web-based Services to provide these functions for Windows applications, and now we are making them available for Linux."
ACE Linux is a Conary-based Linux distribution that will be customized for each customer's platform. Network Engines will provide full platform support including drivers, which are commonly a drain on development resources if ISVs attempt to implement these tools themselves. In addition, NEI will provide performance qualification for all of its hardware platforms. This combination of reduced footprint and technical support will enable ISVs to deliver more secure appliances with much lower development cost.
Kelly said that integrating NEWS with the company's own distribution of Linux enables ISVs to save time, money, and engineering resources, because Network Engines eliminates the burden of qualifying new OS releases and the associated expense of update and patch distribution. The NEWS infrastructure will enable all of the software on an appliance, including the OS, applications, and management plane, to be managed by a robust update management system.
Network Engines is the only company with a full suite of appliance-related services for both Linux and Windows environments, Kelly said. "While ISVs focus on their core competencies, we provide all the services they need to deliver their solutions as true appliances in much less time," Kelly said.
There are now over 500 commercial customers for XenEnterprise and XenServer - a doubling of their customer base in the past quarter. New customers include AmerisourceBergen, Cimex Media UK, Harvard University, Intuit, Investcorp, KBC Clearing, the Miami Herald, Moen, NASDAQ, Palm, Rollins, Inc., and Sankyo. Additionally, their free product, XenExpress, has had more than 100,000 downloads, illustrating intense interest in virtualization solutions.
"Enterprises are realizing that they have a choice in virtualization and XenSource offers great products that are easy to use, offer high performance for Windows and Linux, and are very innovative and open," said John Bara, VP of marketing for XenSource.
XenSource will introduce the newest release of its XenEnterprise product, version 4.0, in late August 2007.
Borland Software Corporation announced an integration with VMware software, to enable enterprise software development organizations to more cost-effectively perform multi-configuration and cross-platform software application testing, by maximizing the use of virtualization for test environments.
Borland will deliver native support for virtual lab environments, by integrating its SilkCentral Test Manager, a core component of Borland's Lifecycle Quality Management (LQM) solution, with VMware Lab Manager version 2.5. SilkCentral Test Manager 2007 will extend test management with seamless access to virtualized test environments in VMware Lab Manager, addressing a critical challenge to fully test applications across multiple configurations and platforms.
With the ability to call VMware Lab Manager environments directly from SilkCentral Test Manager, users can create and assign individual tests to run on many different configurations or platforms, without the high costs of hardware or time needed to administer physical systems.
Expected availability for Borland SilkCentral Test Manager 2007 is the third calendar quarter of 2007.
For more information on Borland SilkCentral Test Manager and related technologies, please visit: https://www.borland.com/us/products/silk/silkcentral_test/.
Talkback: Discuss this article with The Answer Gang
Howard Dyckoff is a long term IT professional with primary experience at
Fortune 100 and 200 firms. Before his IT career, he worked for Aviation
Week and Space Technology magazine and before that used to edit SkyCom, a
newsletter for astronomers and rocketeers. He hails from the Republic of
Brooklyn [and Polytechnic Institute] and now, after several trips to
Himalayan mountain tops, resides in the SF Bay Area with a large book
collection and several pet rocks.
Sam was born ('87) and raised in the Boston, MA area. His interest in all things electronic was established early by his electrician father and database designer mother. Teaching himself HTML and basic web design at the age of 10, Sam has spiraled deeper into the confusion that is computer science and the FOSS community. His first Linux install was Red Hat, which he installed on a 233MHz Pentium i686 when he was about 13. He found his way into the computer club in high school at Northfield Mount Hermon, a New England boarding school, which was lovingly named GEECS for Electronics, Engineering, Computers, and Science. This venue allowed him to share in and teach the Linux experience to fellow students and teachers alike. Late in high school Sam was abducted into the Open and Free Technology Community, had his first article published, and became more involved in the FOSS community as a whole. After a year at Boston University he decided the experience was not for him, striking out on his own as a software developer and contractor.
By Anonymous
Start your search engine and search for the following keywords both on the Web and in the newsgroups:
GRUB SATA confused
You will get reports galore of GRUB failing with SATA, most often failing when facing a mix of old IDE (PATA) and new SATA disks. I'm submitting this as evidence, but I personally don't need any, since GRUB cost me some 10 hours of toiling in exactly this context.
You will find less evidence for Lilo, but Lilo will also fail. Xandros 4.0, which relies on Lilo, gave me a hard time. Further, Lilo is on its way out; the major distros rely on GRUB.
So what is to be done, if you have a mix of PATA and SATA, and want to have a few GNU/Linux distros ready to boot, and need to have Windows anyway? Repeat after me: do not install GRUB to the Master Boot Record on the disk carrying the Windows C partition.
It is assumed the problem does not stem from the hardware, because there are indeed mobos that have both SATA and PATA, and are able to handle them together.
If we write a GRUB boot record to a partition, how do we activate it? Not from GRUB's Master Boot Record (MBR); we don't have one, given our earlier decision. The options are:
(i) from Windows's MBR
(ii) from a real floppy, from a (DOS floppy emulation) CD, or from a (DOS floppy-emulation) USB stick.
Although frequently recommended, (i) is not a solution. You will fail if you install GRUB to a boot sector, extract that boot sector to a file, and append an entry for that file in boot.ini: It will not work with a mix of PATA and SATA. Something else is needed.
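For the record, the usual recipe behind option (i) goes roughly as follows (the device and file names are only examples); the point of this article is precisely that, with a PATA/SATA mix, this is the part that breaks:

# from the running Linux system: copy the partition's GRUB boot sector into a file
dd if=/dev/hda4 of=/tmp/linux.bin bs=512 count=1
# copy linux.bin to C:\ (via a FAT partition or a floppy), then add to C:\boot.ini:
#   c:\linux.bin = "GNU/Linux (GRUB boot sector)"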
Maybe you belong to the lucky ones who never had problems with GRUB. Otherwise, here follow solutions A and B, for your attention. They will work in most cases, but still there might be hardware constellations where they will fail. There are too many variations to test, and I certainly do not have access to the hardware.
The instructions given here target Windows XP. They can probably be adjusted for Vista.
Some terminology first. Under Windows, the boot drive is the one that carries boot.ini and the loader, ntldr. The system drive is the one that carries the Windows directory. They can be different, although often they're not. We say "boot directory" to indicate the directory containing boot.ini. For Windows XP, the boot partition has to be the first one seen by the operating system - with the drive letter C slapped on it. Apparently, Vista accepts any partition, and just re-arranges the drive letters.
When installing or restoring or updating your Linux distro, make sure that the Master Boot Record for the Windows boot drive is not touched. If it is, you will have to go through some rescue operations. That MBR is reserved for Windows - and GRUB should keep its GRUBby fingers off it. Let it install a boot record to the boot sector in the partitions - e.g., to /dev/hda4 - but never to /dev/hda.
Be aware that the installation routine may seem set to comply with your instructions, but then it may go ahead and modify the MBR anyway. It's difficult to say whether the blame is with GRUB directly or with the install routines, but it happens. Be ready for a fight, be ready for emergencies. You might even consider trying the following trick: let the boot record be installed to a floppy, then do everything else by hand. The trick works even if there is no physical floppy.
The trouble stems from one simple fact: when GRUB starts from the Master Boot Record and is showing its menu, it sees the available devices in an order which may differ from the order it sees after launching initrd and then the kernel. In other words, its device map changes on you without any warnings or compliments. After activating the required entry from the menu, all hell breaks loose because essential files are not found. I.e., kernel panic.
Ubuntu is trying to handle the problem using UUIDs for the hard disks. UUID means Universally Unique IDentifier, and is intended to be the immovable rock in the sea storm of boot loader, initrd, and kernel. So you will see Ubuntu's GRUB configuration showing lines like:
kernel /vmlinuz root=UUID=f0bfe866-2449-4d75-8222-b444ff564876
Long story short - it does not help. This is my empirical finding. Some theory from Linus Torvalds himself:
https://lwn.net/Articles/65209/
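If you want to check which UUID belongs to which partition while experimenting, the udev symlinks show the mapping directly; this is only for inspection, and, as noted, it does not cure the device-map shift:

ls -l /dev/disk/by-uuid/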
Method A uses the 'hide/unhide' feature in GRUB, to hide away all boot partitions that are contributing confusion - in the extreme case, all boot partitions but the one that has a specific operating system. At that moment, there can be no confusion in the device map, since only one entry is left.
This approach is explained here:
https://www.justlinux.com/forum/showthread.php?t=143973
It is based on a GRUB floppy, either as a real floppy or as a floppy image burnt to a CD. It is not quick to set up, since it requires extensive trial and error but - to be fair - the other method is not much quicker.
Note that with this method, when you are running a distro, Windows may be hidden away. If you want to transfer files from the distros to Windows, you will have to have a FAT partition accessible at all times - or have access to Linux file systems from Windows (available only for ext2/ext3, as far as I know).
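To make the idea concrete, a menu.lst entry on such a GRUB floppy might look roughly like the sketch below; the partition numbers are purely illustrative, and the assumption is that each distro has its own boot sector installed in its own partition, as recommended above.

title Distro on the second disk (illustrative entry)
hide (hd0,0)
hide (hd0,1)
unhide (hd1,0)
root (hd1,0)
chainloader +1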
Go and download a modified version of GRUB, including grldr (mnemonics: 'grub loader'). Put grldr in c:\ and add the following line to boot.ini:

c:\grldr = "sundry distros"

You also need GRUB's menu.lst in the same directory. Edit it as appropriate for each of your distros, and you are done.
Fine - but what is grldr, and where do you get it from? grldr is a GRUB console that gets along with Windows booting, and weighs less than 190K. It is part of a free software project with a puzzling name:

https://grub4dos.sourceforge.net/

You really only need that one file, grldr; the adventurous can play around with the rest.
As stated above, GRUB's problem with a mix of PATA and SATA is that its device map shifts while the boot is in process. So, if you opt for method B, what device map are you going to throw at it? None: you do not need a device.map file for method B. But you still need to edit menu.lst, and thus you need to know what to call your available disks according to both GRUB and kernel conventions.
Start with a menu.lst file that might be right. When the GRUB menu pops up, go to GRUB's command line and type

root (

pressing 'Tab' to get a list of all possible completions, as seen by GRUB here and now. This is the list of the available devices. For each one of them repeat the trick, e.g.,

root (hd0,

plus a 'Tab' will list all partitions in hd0. This way, you get a list of all partitions on all disks with the correct GRUB denomination and the partition type. Partitions from the Unix world will be recognized; so will FAT partitions. NTFS will not be seen in pure GRUB, but grub4dos can manage. That should be sufficient for you to identify the hardware.
The hard part of the exercise is giving the devices correct names according to kernel conventions. Would hd0 be /dev/hda, or what? The shift in the device map occurs here, and you may have to rely on trial and error. When you have the mapping of the devices right, the mapping of the partitions is trivial.
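For example, once trial and error has shown that GRUB's (hd1) is what the kernel calls /dev/sda, a menu.lst entry for a distro on that disk might look roughly like this (the kernel and initrd file names are placeholders):

title Ubuntu on the SATA disk (illustrative entry)
root (hd1,0)
kernel /boot/vmlinuz root=/dev/sda1 ro
initrd /boot/initrd.img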
All this looks like a royal hassle - and it is. Installing an operating system should not affect operating systems already installed. Windows has bad manners in this respect, but does that mean that GRUB should also have bad manners? It should not; actually, it's supposed to be friendly and co-operative.
Well, if you go to the GRUB Web site (https://www.gnu.org/software/grub/), you'll learn that there is a discontinued GRUB version (0.97) and a new GRUB version (1.95, as of the time of writing). The former is called legacy GRUB; the latter is called GRUB-2, and has been in the works for 4-5 years. The FAQ for the new GRUB asks straight away why there is a need for a fresh rewrite of GRUB. Excerpt from the answer:
Because GRUB Legacy has become unmaintainable, due to messy code and design failures.
It also says that the new version is "usable". Obviously, it is not, since the major distros rely on legacy GRUB (customized for their own purposes) and ignore the new one - I was unable to find even a single distro that uses it.
Let's hope GRUB-2 will come up to speed very soon, and that it is not going to turn into a repeat of the Hurd saga. The situation is unbearable. While a Windows install takes over the MBR and impedes booting other operating systems, Windows still manages to pull itself up by its bootstraps. With a mix of PATA and SATA, installing SUSE or Ubuntu destroys the Windows MBR, and then fails to boot ANY operating system at all!
Talkback: Discuss this article with The Answer Gang
A. N. Onymous has been writing for LG since the early days - generally by
sneaking in at night and leaving a variety of articles on the Editor's
desk. A man (woman?) of mystery, claiming no credit and hiding in
darkness... probably something to do with large amounts of treasure in an
ancient Mayan temple and a beautiful dark-eyed woman with a snake tattoo
winding down from her left hip. Or maybe he just treasures his privacy. In
any case, we're grateful for his contribution.
-- Editor, Linux Gazette
By Silas Brown
LG #138 contained an article, "Debian on a Slug", in which Kapil Hari Paranjape described how to install Debian on an NSLU2 device, so that it can be made into a general-purpose server (a firewall, backup server, Web server, etc.) He also added a sound device so that it can be used to play music.
Another application for such a device is as an alarms and reminders system. This is more than a simple alarm clock or PDA, because the Slug is much more programmable: once you have installed a lightweight speech synthesizer, the Slug can be made not only to generate verbal reminders (which few PDAs can do), but also to check for information on the Internet and adjust its announcements, accordingly. It is a useful alternative to leaving a PC switched on for long periods just to do that job (or possibly forgetting something because the computer was switched off). If you are attempting to learn a foreign language, the Slug reminders system can help with that, also.
The eSpeak speech synthesizer is lightweight enough to run on the Slug, and is also available as a Debian package, although if you are running Debian stable then you will likely find that installing from source will get you a significantly improved version. (Be sure to completely remove the Debian package and its libraries, before compiling from source.) eSpeak produces very clear English speech in several accents, and also supports quite a few other languages (some better than others). Installation is straightforward, except you might find that the audio output doesn't work; if this is the case, then simply write audio to a file or pipe and play using aplay in the alsa-utils package. (Note, however, that some old versions of eSpeak won't write to a pipe when asked. If it's just a short reminder, you can write to a file in /dev/shm and delete it after playing.)
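As a small illustration of the file/pipe workaround (the reminder text and file name are just examples):

# write the speech to a temporary file in RAM, play it, then clean up
espeak -w /dev/shm/reminder.wav "The backup job has finished" && \
    aplay /dev/shm/reminder.wav && rm /dev/shm/reminder.wav
# newer eSpeak versions can also write WAV data straight to a pipe
espeak --stdout "The backup job has finished" | aplay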
Personally I use eSpeak with my language-practice program Gradint, which I have adapted to run on the Slug. Besides generating vocabulary-practice sessions using "graduated-interval recall", Gradint can also be made to produce other speech-reminder "patterns" such as:
For languages that eSpeak cannot yet produce well, if all the possible utterances are known in advance, then Gradint allows you to generate them on another system and transfer them across (or simply use recorded sounds instead).
It appears that some USB sound adapters will fail, from time to time, especially if they are plugged into an unpowered hub. This is probably due to their highly variable power consumption. The symptoms are that the sound stops, and the system behaves as though the sound adapter has disappeared from the USB bus. To restore sound functionality, the adapter may need to be unplugged and re-inserted, or perhaps even the hub it has been connected to may need to be unplugged and re-inserted. For this reason, never put the sound adapter on the same hub as the storage device(s). (Some kernel versions won't be able to use certain sound adapters, if they are attached to USB 2 hubs, anyway; they need to be attached to USB 1.1 hubs or directly to the NSLU2, unless you update your kernel.) It is also advisable to minimise the number of devices the sound adapter shares an unpowered hub with, or even connect it directly to the NSLU2, if you don't have many other USB devices to connect. (You could connect it to a powered hub, but I am trying to avoid the use of powered hubs in an attempt to minimise extra power consumption and reduce the amount of wiring.)
In order to avoid missing your reminders due to the sound adapter having failed, it is advisable to periodically run a script that ensures the sound device is still present, and alerts you if it is not. The alert can be by means of a console beep. (Unfortunately, there does not yet appear to be an NSLU2 equivalent of the PC-speaker kernel patches that allow the speaker to generate more than a simple beep.) The following script will do this:
#!/bin/bash
if ! amixer scontents >/dev/null; then
  # sound adapter has somehow gone down
  cd
  if test -e .soundcheck-beeping; then exit; fi
  touch .soundcheck-beeping
  while true; do
    for N in 1 2 3 4 5; do echo $'\a' > /dev/tty1; sleep 1; done
    if amixer scontents >/dev/null; then break; fi  # came back
  done
  rm .soundcheck-beeping
fi
The script should be run as root, so that it can access /dev/tty1 to make the NSLU2 beep. It should be run at various times (perhaps from crontab), but take care not to run it in the middle of the night, unless you want to be awoken whenever the sound happens to fail. (It may be better to wait until just before the time the morning alarm would have happened.)
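Assuming the script is saved as /root/soundcheck.sh (the path and times here are only an example), a crontab entry that checks every half hour during waking hours would look like this:

# root's crontab: check the sound adapter every 30 minutes between 07:00 and 22:59
*/30 7-22 * * * /root/soundcheck.sh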
It will likely help to use amixer to set the sound level low, so as to reduce peaks in USB-bus current. If using unpowered speakers (which is a good idea because powered speakers can be more susceptible to picking up annoying noise from mobile phones, etc., and anyway they take more power), consider attaching only one speaker, if stereo sound is not necessary, since this should further reduce the current, and it also means you can salvage an unpowered speaker from an unused pair of powered speakers (one of which is usually unpowered), rather than having to obtain new ones. Then experiment with different levels, to find how low you can go whilst still being able to hear it clearly. This will vary with the sound adapter and the speakers.
Wireless headphones may need the level set lower still; cheap FM cordless headphones can easily be overloaded and lose the signal, if the input sound peaks too loudly. You may find that the lowest setting of amixer (the resolution of which is limited to that of the sound adapter) is still too high, in which case you need to ensure that the sound data itself is not too loud. This is one of the functions I had to add to Gradint.
The NSLU2 has only one built-in input device: the power button. Thankfully, this can be re-programmed, so, for example, you can use it to acknowledge an alarm without having to connect some other input device or connect across the network. On Debian at least, the NSLU2 power button sends a "Ctrl-Alt-Delete" event to init, so you can edit /etc/inittab and change the ctrlaltdel line to run whatever script you want. Since it's rather long-winded to script an automatic edit of /etc/inittab followed by a signal to init, every time one of your scripts wants to change the function of the power button, it makes sense to point inittab to a shell script somewhere, which you can then modify at will. Personally, I use something like the following:
#!/bin/bash
cd ~user   # note: we are root
if test -e .powerbutton.pid; then
  kill $(cat .powerbutton.pid)
elif test -e .powerbutton2.pid; then
  kill $(cat .powerbutton2.pid)
else
  if test -e .about-to-shutdown; then
    reminders.sh "en Shutting down."
    rm -f .powerbutton.pid .about-to-shutdown
    /sbin/halt
  fi
  echo $'\a' > /dev/tty1
  touch .about-to-shutdown
  reminders.sh "en Press again to shut down." &
  (sleep 10 ; rm .about-to-shutdown) &   # the & is important
fi
This looks for a file called .powerbutton.pid, which should, if it exists, contain the process ID of some process that needs to be terminated when the power button is pressed (for example, the alarm process). If .powerbutton.pid does not exist (and there is a check for .powerbutton2.pid also, in case you need to run some lower-priority reminder sequence at the same time as the immediate one), then the power button will halt the machine, but, before it does so, it will prompt the user to press again (within 10 seconds), in order to protect against accidents: if you pressed the button half a second after the process happened to terminate by itself, then you probably don't want to shut down the machine. The console beep is there so that, if the sound or speech somehow fails, there is at least some indication of response. The line in the script marked "the & is important" is marked thus because the script needs to return control to init, so that init can catch the repeat press of the power button within the 10-second period; otherwise, init may queue that event until after the script finishes, when the 10 seconds are up, which will mean it will not be possible to use the power button to halt the machine.
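For reference, redirecting the power button as described amounts to one line in /etc/inittab pointing at your own script (the path here is hypothetical); after editing, running telinit q makes init re-read the file:

# /etc/inittab: run our own script instead of shutdown on the power button event
ca:12345:ctrlaltdel:/usr/local/sbin/powerbutton.sh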
If you want to reduce the amount of light given off by the Slug (for example because you want to run it in a bedroom which needs to be dark), you can turn off all the LEDs except the Ethernet LED and the power button LED, by using the leds command. For example, you can put this in root's crontab:
@reboot sleep 5; for N in ready status disk-1 disk-2; do leds $N off; done
The sleep 5 is to avert a race condition with the system init scripts, which will otherwise switch the LEDs back on. You may still have too much light coming from the LEDs on USB devices (most flash storage devices have bright LEDs that may flicker during use), so you may have to position these carefully, and/or point their LEDs downward onto a dark surface.
The NSLU2 seems to keep good clock time (better than many PCs), but you might want to install ntp to keep the clock synchronised. To save RAM, you can prevent ntp from running as a daemon by adding an exit command near the start of /etc/init.d/ntp, and instead run it from root's crontab using something like
37 2 * * * /usr/sbin/ntpd -n -q -g >/dev/null
since updating once a day should easily be accurate enough. (Note that it should run after 2am, if you want it to pick up daylight-saving changes.)
If you do not have a PDA to connect as an NSLU2 terminal, you may also like to try using a screenreader with eSpeak in place of a display. An NSLU2 with a screenreader and a USB keyboard could be enough to make a simple workstation for a blind user, although it does require some setting up.
Talkback: Discuss this article with The Answer Gang
Silas Brown is a legally blind computer scientist based in Cambridge UK. He has been using heavily-customised versions of Debian Linux since 1999.
Securing a local network (LAN) usually means creating firewall rules, host access rules, and proper configuration of Mail, DNS, and Web servers. All these methods are primarily based on the assumption that the threat to your network comes from the big, bad Internet. In this article, I will take the reverse point of view - that is, that the users of the local network are (possibly) the bad guys.
Such an assumption may be justified in the following contexts:
Although I spoke about “users” above, the real actors in a computer network are computers. By making the (often false!) assumption that each computer is doing exactly what its user wants it to do, I will reduce the question to the following:
How can computer Abdul that wants to talk to computer Chin be reasonably confident that computer Betaal is not able to intercept the conversation and/or impersonate Chin?
In order to understand why a slightly sophisticated solution is required, we need to realise that a LAN is a not like a telephone network and an IP address is not an identifying label in the same sense that a telephone number is.
The more sophisticated reader will know about “hardware” addresses (also known as Ethernet or MAC addresses) which are built into the hardware unlike IP addresses. However, the description of the network given in the paragraph below is equally appropriate if you replace “IP address” with “MAC address”.
A typical LAN is a packet based network. Each computer is always “connected” to all other computers. Conversations are based on packets which are sent “via the wire” and are “heard” by all the computers on the network. Each packet is labelled by the recipient's IP address so that the relevant party can “listen to it” (in other words, copy it to its memory). The packet also contains sender's IP address so that the recipient knows where to send replies.
Computer Betaal (as the ghost in the machine) can collect (copies of) packets meant for any destination and can also inject packets with any label(s) desired.
So, Abdul must (digitally) sign every packet sent out and encrypt it so that only Chin can read it. If Abdul only signed the packets, then Betaal (still) could collect them. Since there are a lot of packets that make up any conversation, Betaal could re-send the packets later in a more profitable order—thus delivering a blow to Chin. If Abdul only encrypted the packets then Betaal could inject his own encrypted packets of gobble-de-gook (illegible/undecipherable data) and disrupt the conversation between Abdul and Chin.
Let me re-state this in jargon. "In a packet based network, secrecy and authenticity go hand in hand."
When Tatu Ylonen originally wrote ssh, it was thought of as a replacement for telnet and rsh, which are programs/protocols for remote logins (for remote shell access). However, ssh is a network protocol, so it can be used to create secure conversations between computers.
Each SSH server has a private key, usually located at /etc/ssh/ssh_host_rsa_key. Often, there is a second private key in /etc/ssh/ssh_host_dsa_key. The network administrator's job is to collect the public keys associated with each of these private keys (in the same place, with a .pub extension) and distribute them to all computers on the network.
The simplest way to do this is to go to each computer and copy these files to a USB stick:
cp /etc/ssh/ssh_host_rsa_key.pub /media/usb/<ip_addr>.rsa.pub
cp /etc/ssh/ssh_host_dsa_key.pub /media/usb/<ip_addr>.dsa.pub
Admin then creates a “known hosts” file:
for type in rsa dsa
do
  for i in /media/usb/*.$type.pub
  do
    addr=$(basename $i .$type.pub)
    (echo -n "$addr "; cut -f1-2 -d' ' < $i) >> known_hosts
  done
done
This known_hosts file is then copied to /etc/ssh/ssh_known_hosts on each computer. Finally, we set configuration parameters

echo "StrictHostKeyChecking yes" >> /etc/ssh/ssh_config

on each computer. (Users on each computer may also need to modify the configuration file $HOME/.ssh/config if it exists, and remove/edit $HOME/.ssh/known_hosts if it exists.)
After this (admittedly) long-winded (but not difficult) procedure, Abdul and Chin have each other's public keys. So, Abdul can encrypt packets which only Chin can read and Chin can verify signatures made by Abdul. (The actual SSH protocol is more complex and doesn't concern us here).
So, now, on Abdul one can do ssh Chin and be confident that it is Chin who is answering. Chin will still ask for the password unless all servers enable HostBasedAuthentication in /etc/ssh/sshd_config. This procedure might be risky for Chin, unless the root user on Abdul is to be considered equivalent to the root user on Chin.
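For completeness, the server-side pieces of host-based authentication are roughly the following; treat this as a sketch and check sshd_config(5) and sshd(8) before enabling it, since it extends trust to every account on the listed client hosts.

# /etc/ssh/sshd_config on Chin (excerpt)
HostbasedAuthentication yes
# /etc/shosts.equiv on Chin: client hosts trusted for host-based logins
abdul.example.org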
What about other (than SSH) types of data exchange? Luckily, this too has been thought of. If Abdul wants to open TCP port (say) 80 on Chin now, then Abdul runs
ssh -q -f -N -L 8080:localhost:80 Chin
Now, opening https://localhost:8080 on Abdul gives Chin's web server.
What about securing all of the data exchange? This has been thought of as well. In fact, SSH provides at least two ways:

ssh -q -f -N -D 1080 Chin

There are also wrapper libraries like tsocks that can "teach" any TCP application to use SOCKS.

ssh -q -f -N -w 0:any Chin

With some additional network configuration at each end, this tunnel can be used by all TCP applications, and even by applications that use UDP instead of TCP for their transport.
Despite these efforts, SSH is not always adequate for the problem we set out to solve for the following reasons:
OpenVPN can be thought of as SSH with a solution to all three problems noted above.
One machine is configured as the openvpn server and all of the other boxen as clients. The server passively waits for a client to initiate a connection. Once the connection is established, the roles of the two computers in the conversation are completely symmetric.
The server (respectively, client) can be configured using the sample server.conf (client.conf for the client) file that comes with the openvpn package. (The sample config files should be located at /usr/share/doc/openvpn-<version>/sample-config-files, if you're using a prepackaged version; otherwise, they're located in the sample-config-files directory of the source tarball.) In the client's configuration file, one needs to edit only one line, the one starting with remote, and put in the correct server to connect to. In the server's configuration file, the line that starts with server can be edited to put in some random network of the form 10.a.b.0 (you could also use 172.16-32.a.0 or 192.168.a.0) instead of the default. Since we want clients to talk with each other, we also enable the client-to-client option in the server's configuration file. In addition, we will edit these files to put in appropriate names like host.key and host.crt for the certificate and key files (see below).
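To make those few edits concrete, the touched lines might end up looking like the sketch below; the 10.11.12.0 network and the server name are placeholders, not values taken from the article.

# server.conf (excerpt)
server 10.11.12.0 255.255.255.0
client-to-client
ca   ca.crt
cert host.crt
key  host.key

# client.conf (excerpt)
remote octavio.example.org 1194
ca   ca.crt
cert host.crt
key  host.key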
One nice feature of openvpn is that it can use certificates. This greatly simplifies key distribution: we no longer need to distribute the public key(s) of a new host to all other hosts. This gain comes at the "pain" of setting up a Certificate Authority.
First, we have to set up a Certificate Authority (CA) on one computer (network administrator's personal computer, for example). There are a number of ways to do this.
A simple way to set up a CA is provided with openvpn. We begin by copying the "Easy RSA 2.0" directory to a suitable place.
mkdir /root/openvpn_CA
cd /root/openvpn_CA
cp -a /usr/share/doc/openvpn/examples/easy-rsa/2.0 .
Next, we have to edit the last few lines of the vars file in this directory to reflect our organisation. The relevant lines:
export KEY_SIZE=2048
export KEY_COUNTRY=
export KEY_PROVINCE=
export KEY_CITY=
export KEY_ORG=
export KEY_OU=
export KEY_EMAIL=
Then, we generate the key of the Certificate Authority.
. ./vars
./clean-all
./pkitool --initca
Once all queries from the last command have been properly answered, we have a bunch of files in the keys subdirectory.
Having set up the certificate authority, we have to sign the keys of each host. First, each host generates a signing request:
# reuse the host's existing SSH RSA private key for the certificate request
ln -s /etc/ssh/ssh_host_rsa_key /etc/openvpn/host.key
cd /etc/openvpn
openssl req -new -extensions v3_req \
    -key host.key -out host.csr
All of the queries should be answered carefully. In particular, it is a good
idea to use the fully qualified domain name for the common name (CN) entry.
Then, host.csr is copied to the keys directory, where the certificate authority was installed, with a name like <hostname>.csr. The CA then verifies and signs the key with the following commands:
. ./vars
./pkitool --interact --sign <hostname>
Then, the ca.crt and <hostname>.crt files from the keys directory of the CA are copied back to the original host's /etc/openvpn; we also rename (or symlink) <hostname>.crt to host.crt.
Now, to start the tunnel, we run
/etc/init.d/openvpn start <config>
where <config> is server or client, as appropriate. We can start multiple client versions which are directed to the same server. Since we want the clients to talk with each other, we enable the client-to-client option in the server's configuration.
So, let us say that Octavio is the server and Abdul and Chin are two clients. When Abdul and Chin have a conversation over openvpn (which is ensured by Abdul by opening a connection to the 10.a.b.x address assigned to Chin), they can be reasonably confident that no one, not even Octavio, can intercept this conversation. Since openvpn asks for the certificate at the start of the conversation, Abdul is also confident that it is Chin at the other end of the conversation. At the very least, Abdul is certain that this has been certified by the Certificate Authority.
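Once the tunnels are up, a quick sanity check from Abdul might look like this; the tun0 device name and the 10.11.12.6 address are whatever the server actually assigned, so treat them as placeholders:

ip addr show tun0      # Abdul's own VPN address
ping -c 3 10.11.12.6   # Chin's VPN address
ssh 10.11.12.6         # this TCP conversation now travels inside the tunnel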
Have Abdul and Chin solved their problem? Can they communicate without worrying about Betaal?
Some problems remain:
The solution to these two problems has essentially been described by Rene Pfeiffer in his article I and article II on IPsec. We will vary from his prescription in two respects:
The first ensures that routing is “automatic”. The second allows us to migrate to an IPsec network without disrupting existing connections. Once all machines that need to speak securely to each other are configured we can switch to the “require” mode to ensure that all conversations are encrypted.
One difference between IPsec and openvpn is that, in IPsec, a separate daemon handles the key exchange and authentication. In GNU/Linux, this is racoon. We configure the /etc/racoon/racoon.conf file as follows. First of all, we put in the path to the certificates; these can be the same as the certificates generated for OpenVPN. Next, we configure authentication.
remote anonymous {
        exchange_mode main;
        certificate_type x509 "$HOST_CERT" "$HOST_KEY";
        verify_cert on;
        my_identifier asn1dn;
        proposal {
                encryption_algorithm aes;
                hash_algorithm sha1;
                authentication_method rsasig;
                dh_group modp1024;
        }
}
Here, we have to replace $HOST_CERT and $HOST_KEY with the certificate and key locations, respectively. The next section in the configuration file describes the encryption used after successful authentication.
sainfo anonymous {
        pfs_group modp768;
        encryption_algorithm 3des;
        authentication_algorithm hmac_md5;
        compression_algorithm deflate;
}
Next, we instruct the kernel to use IPsec whenever possible. To do this, we ensure that the following directives are loaded by the setkey command.
Flush the security associations and security policies.
flush;
spdflush;
The policy is to use the ESP protocol and AH protocol for all packets between this host and any other host on the network, if possible. In the commands below, one needs to put in the correct $IP_ADDR and $NETMASK values:
spdadd $IP_ADDR $NETMASK any -P out ipsec
    esp/transport//use
    ah/transport//use;
spdadd $NETMASK $IP_ADDR any -P in ipsec
    esp/transport//use
    ah/transport//use;
This means that all hosts will use encrypted and authenticated traffic with every host in the LAN that supports encrypted traffic. This allows one to enable this configuration on all hosts in the LAN, one host at a time, without disrupting the existing network in the process. Once all hosts are configured for IPsec, this can be replaced with:
spdadd $IP_ADDR $NETMASK any -P out ipsec
    esp/transport//require
    ah/transport//require;
spdadd $NETMASK $IP_ADDR any -P in ipsec
    esp/transport//require
    ah/transport//require;
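On Debian-style systems, these directives usually live in /etc/ipsec-tools.conf and are loaded at boot by the ipsec-tools init script; while testing, they can be loaded and inspected by hand (racoon must be running for the actual key exchange to happen):

setkey -f /etc/ipsec-tools.conf   # load the flush/spdadd policies
setkey -DP                        # dump the installed security policies
setkey -D                         # dump the negotiated security associations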
Now, it is relatively easy to configure machines to use encryption and authentication in a local area network. Today, computers and networks are fast enough that the extra calculations and extra network packets required for this do not cause noticeable delays. It is also quite easy to implement such a solution without bringing down the entire network until all machines are reconfigured.
So! What are you waiting for? Choose a solution that is appropriate for your use and put it into use!
In the IMSc network, we have tested and implemented the openvpn solution. A number of my colleagues here helped debug various aspects of this. I thank them all for their help. The documentation of ssh and openvpn is also very good. There are a number of great articles on IPsec, including those from LG that have been mentioned above. I thank the authors of these documents for their assistance.
This document was translated from LaTeX by HEVEA.
Talkback: Discuss this article with The Answer Gang
Kapil Hari Paranjape has been a ``hack''-er since his punch-card days.
Specifically, this means that he has never written a ``real'' program.
He has merely tinkered with programs written by others. After playing
with Minix in 1990-91 he thought of writing his first program---a
``genuine'' *nix kernel for the x86 class of machines. Luckily for him a
certain L. Torvalds got there first---thereby saving him the trouble
(once again) of actually writing code. In eternal gratitude he has spent
a lot of time tinkering with and promoting Linux and GNU since those
days---much to the dismay of many around him who think he should
concentrate on mathematical research---which is his paying job. The
interplay between actual running programs, what can be computed in
principle and what can be shown to exist continues to fascinate him.
By Shane Lazar
Linux has always been a good choice for a server OS. In practical terms, however, this functionality has been out of reach for the everyday computer user, mainly due to the technical know-how required to manage a dedicated server OS. On the other hand, our homes today are more filled with computers than ever before - and, in a multi-node network, a server can provide many benefits. In this article, I am going to try to guide you in setting up a useful server for your home network, one that is headless (i.e., without monitor, keyboard, or mouse) and can be stowed away neatly out of view.
This setup will be ideal for:
Hardware required:
The basic setup:
Internet <--> Ubuntu Server <--> Ethernet Hub <--> LAN Machines
Services we are going to be running on our system:
Let us dive right in!
Download the Ubuntu (currently at version 7.04, "Feisty Fawn") Server CD image from Ubuntu's download page.
Burn the ubuntu-7.04-server-i386.iso image to a CD using your favorite image-burning program. Remember, burn the image; do not extract the files from the image file. If you are going to be using an old CD-ROM, burn the CD at the slowest possible speed, for reliability.
Ubuntu is well known for having an easy installation process. For now, plug in a monitor, keyboard, and the network cables (Internet and LAN, both), put in the Ubuntu server CD, and boot up! You may need to change your BIOS settings to allow booting from CD.
Before we continue, I did mention that I would try to make this as simple as possible, and now you are probably wondering what you are doing in a CLI. This is necessary, as we want our server to run as lean as possible. After all, it is going to be stowed away in a closet, so who needs a fancy GUI? I promise we won't be spending much longer on the CLI. A couple of tips for new users:
First thing we will do on our new system is to check if we are connected to the Internet. Do this simply by pinging Google.
ping www.google.com
Stop the pinging with Ctrl+C. If all went well, you should be getting responses to your pings. If not, try switching the LAN and Internet cables around. Most probably, you will get a ping response by now. Keep in mind which card your Internet is configured on, eth0 or eth1, and modify the instructions accordingly. In this guide, the Internet is on eth0 and the LAN is on eth1.
Now, we will configure our LAN network card. We will do this using vim, a CLI text editor.
Four simple commands you will use in vim are:
Let us open our network configuration file with administrative privileges:
sudo vim /etc/network/interfaces
You will be asked to enter the administrator's password. Navigate with the cursor keys, and add the following at the end of this file:
auto eth1
iface eth1 inet static
        address 192.168.0.1
        netmask 255.255.255.0
        broadcast 192.168.0.255
If you need to change the configuration of your Internet connection, you should do this now in the eth0 section. Restart your network interfaces using:
sudo /etc/init.d/networking restart
Install any available updates by:
sudo apt-get update
and then
sudo apt-get upgrade
Now, we will install the packages required for Webmin, the Web-based administration tool:
sudo apt-get install libnet-ssleay-perl openssl libauthen-pam-perl libio-pty-perl libmd5-perl
Download Webmin:
wget https://prdownloads.sourceforge.net/webadmin/webmin_1.350_all.deb
If this does not work, there is probably a newer version of Webmin. Get the link to the latest *.deb file from the Webmin site.
Install it:
sudo dpkg -i webmin_1.350_all.deb
You will get the following output:
Webmin install complete. You can now login to https://your-server-name:10000/ as root with your root password, or as any user who can use sudo to run commands as root.
And that's it! We are done with the CLI. Log out:
exit
Now, you can disconnect the monitor and keyboard, stow your server away, and continue from your desktop machine on a beautiful Web-GUI!
However, before you do that, you will have to configure your desktop machine's network card. Set it up as follows:
IP address: 192.168.0.2
Subnet mask: 255.255.255.0
Gateway: 192.168.0.1
DNS server: 192.168.0.1
Your other machines would have incrementing IP addresses, e.g., 192.168.0.3, 192.168.0.4, ...
Open your favorite Web browser and navigate to https://192.168.0.1:10000. Enter the administrator's user name and password. Welcome to the powerful Webmin!
On the tree menu on the left, go to Webmin > Webmin Configuration. Click Upgrade Webmin, and, with "Latest version from www.webmin.com" selected, click the Upgrade button. If there is an upgrade available, it will be installed for you.
To install the Shorewall firewall, go to System > Software Packages and in the "Install a New Package" section, select "Package from APT", enter shorewall, and click Install. This may take some time, depending on your Internet connection, but Shorewall will be installed.
Now, go to Networking > Shorewall Firewall, and we'll begin setting up your firewall. Do not start the firewall yet, or you might lock yourself out of the server. We will configure Shorewall section by section.
Network Zones: This section defines zones to which we will assign "levels of trust". We will create three zones: the firewall, Internet, and local zones.
Click Add a new network zone. You will be provided with a number of options. We are interested in the Zone ID field and the Zone type list. For each zone, enter the options as follows, and click Create before returning to the page to create the next.
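Behind the scenes, these choices end up in /etc/shorewall/zones. A sketch of what the file looks like, assuming the conventional Shorewall zone names (fw for the firewall itself, net for the Internet, loc for the LAN), which also match the rules used later in this article:
#ZONE   TYPE
fw      firewall
net     ipv4
loc     ipv4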
Network Interfaces: This section tells the firewall which Ethernet card is connected to the Internet, and which one to the LAN. In our case, we have only two interfaces.
Click Add a new network interface, and again you will be presented with a vast array of options. We will define only Interface, Zone name, and Broadcast address. Here, also, you will have to set up one interface at a time, clicking Create before returning to configure the next. Configure as follows:
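In file form, this corresponds roughly to /etc/shorewall/interfaces; a sketch, assuming the Internet is on eth0 and the LAN on eth1 as set up earlier, and reusing the LAN broadcast address from our network configuration:
#ZONE   INTERFACE   BROADCAST
net     eth0        detect
loc     eth1        192.168.0.255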
Default Policies: The default policies tell the firewall what to do with packets coming from various sources. We will set it to drop all requests from the Internet, and accept all from the LAN and the firewall itself. Click Add a new default policy. As before, we will define one policy at a time, clicking Create before proceeding. Configure the policies as follows:
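The equivalent /etc/shorewall/policy file would then contain something like the following sketch; the final catch-all line is the usual Shorewall safety net:
#SOURCE   DEST   POLICY
loc       all    ACCEPT
fw        all    ACCEPT
net       all    DROP
all       all    REJECT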
Firewall Rules: This section defines specific rules for specific services. We will enable them as the need arises, later.
TOS: This section tunes the Type of Service settings to optimize Web browsing as much as can be done on your end. Click Add a new type of service, and we will proceed to configure the services one by one.
Masquerading: This tells the server to forward requests from the LAN to the Internet, which is required for Internet connection-sharing. Click Add a new masquerading rule, and enter the following rule.
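In /etc/shorewall/masq terms, the rule amounts to masquerading traffic that arrives from the LAN interface out through the Internet interface, roughly:
#INTERFACE   SOURCE
eth0         eth1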
When Stopped: This allows machines whose IP addresses are specified to access the server even when the firewall is not running. No other IP addresses will have access. Add as many as you want, but there should be at least one, just in case. In the example below, I have allowed access from two IP addresses on the LAN. Click Add a new stopped address, and configure as follows:
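These addresses end up in /etc/shorewall/routestopped; a sketch, assuming the two allowed machines are the first two LAN addresses from our numbering scheme:
#INTERFACE   HOST(S)
eth1         192.168.0.2,192.168.0.3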
We don't need to add any other settings.
Back on the Shorewall main page, click "Check Firewall". You should get the thumbs up. Note that an "OK" result here does not guarantee the firewall will work properly, or will work at all. It simply checks the rules syntax.
There is a security feature that prevents an unconfigured Shorewall from being started at boot. This has to be changed manually. To do it through Webmin, you will need a Java-enabled Web browser; otherwise, you can resort to using vim from the CLI.
What you have to do is change the line
startup=0
in the file /etc/default/shorewall to
startup=1
In Webmin, go to Others > File Manager. This will give you a nice Java-based file manager. Navigate to the above-mentioned file, and click the "Edit" button at the top. A text editor window will pop up. (Disable pop-up blocker.) Make the change, and then save and close.
Again, using this browser, browse to the file /etc/shorewall/shorewall.conf, click "Edit", and find the line IP_FORWARDING=Keep. Change the value from Keep to On. Save and close.
Now, let us make sure that Shorewall is set to start at bootup. Go to System > Bootup and Shutdown, look for shorewall in the list. Tick the checkbox, and click "Start Now and On Boot" at the bottom. Go back to the Networking > Shorewall Firewall page, and you should see six buttons where there were previously only two. Click "Show Status", to verify that all is running well. Your Internet connection sharing should be set up, now. Try it out!
Ubuntu Server does pretty much all of the configuration necessary for a working BIND DNS server. There is, however, one thing we can do to make the lookups faster: we can tell our server to forward unknown requests to your ISP's DNS servers.[1]
Go to Servers > BIND DNS Server, click on "Forwarding and Transfers", and, in the fields marked "Server to forward queries to", enter the IP addresses of your ISP's DNS servers. Save, and click "Apply Changes" in the main BIND DNS server page.
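Behind the scenes, Webmin writes this into BIND's options section; on Ubuntu, that lives in /etc/bind/named.conf.options, and the forwarding part corresponds roughly to the sketch below (the addresses are placeholders for your ISP's nameservers):
options {
        // replace these with your ISP's nameserver addresses
        forwarders {
                203.0.113.1;
                203.0.113.2;
        };
};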
Now, we will move on to installing and setting up Squid as your caching proxy server. Go to Servers > Squid Proxy Server. Webmin will inform you that Squid is not installed on your system, and provide you with an option to install it using APT. Click on the link (labelled "Click here") provided, to install Squid. Webmin will keep you informed of the progress and, once completed, will give you some information on the installed packages.
Go back to the main page for Squid, and now you should have a host of configuration tools available. I will not explain all the options available, but, if you require more clarification, help is available at the top left of the tool's page. (You will have to disable your browser's popup blocker.)
Ports and Networking: Here we will tell Squid which port it will be listening on. The default is port 3128. We will stick to this, but you can change it. In the "Options for port" field, enter transparent. This will make Squid a transparent proxy server, which eliminates the need to configure machines on your LAN. Save the changes.
Memory Usage: Here, you can define memory usage limits for Squid, or choose to go with the default settings. I would draw attention to the "Maximum cached object size" option. Here, you can define the maximum size of cached files.
Cache Options: The option I would recommend changing here is "Cache Directories". Squid defaults to a 100MB cache, which is pretty minuscule for our caching-proxy objective. Decide how much of your hard disk you wish to use for the cache; I use 5GB out of my 40GB hard disk. In the "Directory" field, enter /var/spool/squid; set "Type" to UFS; in "Size (MB)", enter however much you decided on, in megabytes; and for the 1st- and 2nd-level directories, enter one of the following numbers: 16, 32, 64, 128, or 256 (the defaults being 16 and 256, respectively). These numbers basically define the file structure of your cache. Read the help documentation for more information on this and other options. Save your changes.
Helper Programs: In the "DNS server addresses" field, enter 192.168.0.1, select the radio button, and save. This tells Squid to send DNS requests to the BIND DNS server running on your server.
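For reference, the port, cache-directory, and DNS settings above correspond roughly to these lines in /etc/squid/squid.conf (the 5000 MB cache size is just the 5 GB example mentioned earlier):
http_port 3128 transparent
cache_dir ufs /var/spool/squid 5000 16 256
dns_nameservers 192.168.0.1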
Access Control: Here, we will define which LAN machines will be able to use Squid, by their IP addresses. At the bottom of the "Access Control Lists" section, select Client Address from the drop-down list, and click "Create new ACL". In the page that appears, enter a name of your choice in the "ACL Name" field (e.g., Local_Network), then define the range of IP addresses you wish to grant access to and the netmask, e.g., From = 192.168.0.2, To = 192.168.0.7, Netmask = 255.255.255.0. If you would like to grant access to all machines on your LAN, enter the following: From = 192.168.0.0, To = *leave blank*, Netmask = 255.255.255.0. Save your changes.
Having defined the machines on our LAN, we will now tell Squid what to do with requests from these machines. Click "Add proxy restriction" in the "Proxy Restrictions" section. Select the "Allow" action, and the ACL you just created (Local_Network) from the "Match ACLs" list. Save your changes.
Your new restriction will be at the bottom of the restrictions list, and, since the restrictions are applied in order, you will have to move your new rule up the list to third place. Do this using the "Move" arrows, to the right of the defined restrictions.
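In raw squid.conf terms, the ACL and the restriction amount to something like the two lines below (a sketch, using the whole-LAN variant of the example range); the allow line has to appear before Squid's final deny-all rule, which is why we moved the restriction up the list:
acl Local_Network src 192.168.0.0/255.255.255.0
http_access allow Local_Network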
For security reasons, we will create a new user named squid, who will run Squid. Go to System > Users and Groups. Click "Create a new user", and enter the following:
Leave the rest unchanged. Click "Create".
Now, we will grant permissions to the user squid to write to our cache. Go to Others > Command Shell, and execute the following command:
chown -R squid:squid /var/spool/squid/
Return to the Squid Proxy Server page.
Administrative Options: In the "Run as Unix user" field, click the browse button, and select squid from the list of users. In the "Visible hostname" field, enter the name of your server. This you can find out from the "System Information" page in Webmin, as "System hostname". Save the changes.
Click "Initialize Cache". Once this terminates successfully, return to the Squid main page and click "Start Squid". Since we are making a transparent proxy server, we need to add some rules in the firewall, to redirect requests to pass through Squid. Go to Networking>Shorewall Firewall>Firewall Rules>Manually Edit File, and paste the following rule:
#squid transparent proxy redirect
REDIRECT loc 3128 tcp www
If you changed the port Squid listens to, earlier on, use that port in this rule, instead of 3128. Save the changes, and Apply Configuration.
Test if your desktop machines have access to the Internet. The difference between a simple Internet connection sharing and using a caching proxy is that frequently visited Web sites will load faster, as some content is stored on your server.
Now, we'll move on to installing and setting up Samba for file sharing to both Linux and Windows machines. Go to Servers > Samba Windows File Sharing. As was the case with Squid, Webmin detects that Samba is not installed, and provides an easy link to install it using APT. Go ahead and click the link, to download and install Samba. Once this is done, we will now configure file sharing.
Since we are sharing on a trusted network, we will set up our file server with read and write permissions for everybody.
Return to Servers > Samba Windows File Sharing, and, in the first section, click "Create a new file share", then complete as follows:
This will create the share public, with Read-only permissions for all. Using Others > File Manager, navigate to /home, select the folder public, and click Info. In the info window that opens, in the Permissions section, select all the checkboxes for User, Group, and Other, thereby giving permission to everybody to read and write to this folder.
Now, navigate to /etc/samba, select smb.conf, and click Edit. Look for the line
; security = user
and change it to
security = share
Note that the leading semicolon should be removed, so that the setting is no longer commented out. Scroll down to the end of the file to find the section that describes the share we just created, and edit it to look like this:
[public]
comment = public
path = /home/public
public = yes
writable = yes
create mask = 0777
directory mask = 0777
force user = nobody
force group = nogroup
Save and close. If you need to change your Workgroup, do that from the Windows Networking tool in the Global Configuration section on the Samba Windows File Sharing page. Samba's default workgroup is, ironically, MSHOME. Click Restart Samba Server, and verify that you have access to the shared folder with read and write permission from your desktop machine, by creating and deleting a file in the share. The only settings you will have to enter on your LAN machine to gain access are:
For those of us who use BitTorrent for peer-to-peer file sharing, we will install TorrentFlux, which is a Web-based BitTorrent client. Some of the advantages of using TorrentFlux include:
In your Web browser, go to the TorrentFlux Web site, and download the latest version of TorrentFlux. In Webmin, go to Others > Upload and Download. In the "Upload files to server" section, browse to the torrentflux_2.x.tar.gz file you just downloaded in the "Files to upload" field. In the field "File or directory to upload to", enter /var/www. In the "Extract ZIP or TAR files" option, select the "Yes, then delete" radio button. Click "Upload" to upload and unpack TorrentFlux.
Using Others > File Manager, browse to the /var/www/torrentflux_2.x directory, and double-click the INSTALL file to open it in your browser. Read the instructions carefully.
First, and very important, we will set the root password for our MySQL database. Note that this root user is different from the system root user. The same applies to all MySQL users.
Go to Servers > MySQL Database Server, and click User Permissions from the Global Options section. From the list of users, click on any of the instances of root. In the password field, select Set to.., and enter a password for the MySQL root user. You may be asked to log in, after setting the password. Repeat for all the other instances, with the same password.
TorrentFlux uses MySQL for its database features. So, let us go ahead and create a database for TorrentFlux. On the main MySQL page, click Create a new database. In the "Database name" field, enter torrentflux and don't make any other changes. Click Create.
To create the required tables, click on the torrentflux database we just created, then click the "Execute SQL" button. In the second section, which says "Select an SQL commands file to execute on database", select "From local file", and browse to the file /var/www/torrentflux_2.x/sql/mysql_torrentflux.sql, click Ok, and then Execute. Now, if you return to the table list, you will see that some tables have been created.
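If you prefer typing to clicking, the same two steps can be done with the mysql command-line client; a sketch (the -p flag prompts for the MySQL root password set earlier, so this is best run from an interactive shell rather than Webmin's Command Shell):
mysql -u root -p -e "CREATE DATABASE torrentflux;"
mysql -u root -p torrentflux < /var/www/torrentflux_2.x/sql/mysql_torrentflux.sql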
For security reasons, we will create a MySQL user specifically for TorrentFlux. On the MySQL main page, click "User Permissions", and then "Create new user". Enter the following, and make sure to select the appropriate radio buttons:
Don't select any of the permissions, and Save.
Now, we will allow this new user to modify the torrentflux database, only. Back on the MySQL main page, click on "Database Permissions", and then on "Create new database permissions". Remembering to select the appropriate radio buttons, select the following:
For the permissions, hold the Ctrl key, and select the following:
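The clicks in these last two steps boil down to a MySQL GRANT statement; a sketch, assuming TorrentFlux only needs the basic row operations (grant more if the application complains):
GRANT SELECT, INSERT, UPDATE, DELETE ON torrentflux.* TO 'torrentflux'@'localhost';
FLUSH PRIVILEGES;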
That's it; we're done with MySQL!
Now, we will tell TorrentFlux about the database settings we have just implemented. Using the Java file manager, navigate to /var/www/torrentflux_2.x/html, select the config.php file, and click "Edit". Modify the "Your database connection information" section, entering the correct settings. Hints are provided. It should look something like this:
$cfg["db_type"] = "mysql"; // mysql, postgres7 view adodb/drivers/
$cfg["db_host"] = "localhost"; // DB host computer name or IP
$cfg["db_name"] = "torrentflux"; // Name of the Database
$cfg["db_user"] = "torrentflux"; // username for your MySQL database
$cfg["db_pass"] = "*password for MySQL user torrentflux*"; // password for database
Save and close.
Now, we will tell the Web server, Apache httpd, to serve TorrentFlux on port 80. Go to Servers > Apache Web server. You should have a Default Server and a Virtual Server set up for you already. Click on the Virtual Server, and, at the bottom, in the "Virtual Server Details" section, make the following changes:
and Save. Then, on the Apache server page, click "Apply Changes" at the top right.
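The end result, in raw Apache terms, is a virtual host whose document root points at the TorrentFlux html directory; a sketch, in which only the DocumentRoot really matters:
<VirtualHost *:80>
        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/torrentflux_2.x/html
</VirtualHost>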
Now, in your browser, navigate to https://192.168.0.1, and you should get the TorrentFlux login page. Note that the username and password you enter here will create the administrator's account settings. Don't forget these. Choose wisely, and proceed to login.
You will be taken to the settings page, where we will change a few things.
Have a look at the other settings, and change them as you wish. You can change them later, as well. Click "Update Settings". There are "lights" that indicate problems in your settings. All should be green. Notice that TorrentFlux will download directly to our shared folder, giving instant access over the LAN.
A nice feature of TorrentFlux is queueing. Click on "queue" at the top, and choose if you want to enable it, and define how many torrent connections you want to allow to run in total (server threads) and per user (user threads). Click "Update Settings". Going with the 40% max download bandwidth per Torrent and allowing two connections total to run at a time still leaves 20% of the bandwidth for Web browsing.
Use the "new user" page to create normal or admin users for any one you want to grant access to. Other settings include search engine options and filters, external links, rss feeds, and database backups.
Adding torrents is done either by uploading from your desktop machine, pasting the URL of the torrent file, or searching using the available search engines. Files will be saved in folders according to TorrentFlux usernames in the shared folder.
Now, we will open ports 40000-40010 in Shorewall, for the Torrent software to work properly. Go to Networking > Shorewall Firewall > Firewall Rules > Manually Edit File, and paste this rule at the end:
#torrentflux
ACCEPT net $FW tcp 40000:40010
If you wish to access your TorrentFlux from the Internet, e.g., while at work, and have a static external IP address, simply open port 80 on the external firewall, by adding this rule:
#Apache Web server
ACCEPT net $FW tcp 80
Click Save, and then Apply Configuration in the Shorewall main page. You can then access TorrentFlux from anywhere, by browsing to https://*your external IP address*
If you have a dynamic IP address, then you will also have to use a service such as that provided by Dynamic DNS, which is free. Instructions for this are available on ubuntuguide.org. Although they are meant to be done at the actual machine, you can do them through Webmin, running the sudo commands in Others > Command Shell and editing the dyndns_update file in the Java file manager tool.
One thing to be wary of is completely filling up your hard disk. This will inevitably cause problems. So, just make sure you have enough space, before you decide to run your Torrent session.
Speaking of space, although system log files are useful in diagnosing problems, they sometimes occupy a whole lot of space. We will now limit the size of the log files.
Go to System > Log File Rotation > Edit Global Options, and set the "Maximum size before rotating" to 50M (for 50MB) and the "Number of old logs to keep" to 4. This should allow you to have decent system logs, without eating up all your disk space. For a few days under normal use, keep an eye on the size of the log files in /var/log using the Java file manager. See which logs are huge, and fiddle with their settings in System > Log File Rotation and System > System Logs. Bear in mind that all that logging might be due to a real problem in your system. In general, though, the debug logs are pretty massive, and not very important for our purpose, especially the ones that debug network traffic.
Once you have set everything up, and all is working fine, it would be wise to backup your settings, in case you get too adventurous trying to fiddle around and break something, or even if you decide to change your server machine. This will enable you to restore all your settings.
Go to Webmin > Backup Configuration Files. In the "Modules to backup" list, select all of them (using the Shift key); for the Backup destination, choose Local file, and enter a path, e.g., /home/*admin username*/backup-*date*.tar; in the "Include in backup" section, check "Webmin module configuration files" and "Server configuration files", and click "Backup Now". I recommend including the date in the names of your backup files, as choosing which one to restore from becomes easier. If you wish, you can set up Webmin to periodically back up your configurations automatically, in the "Scheduled backups" section. I set mine to back up daily and weekly. Previous scheduled backups are replaced, and only the latest one is kept. Restoring is simply a matter of choosing which modules to restore, from which backup file, and whether the configurations should be applied.
Once in a while, it would be wise to update your server, to get the latest fixes and patches. Do this by going to System > Software Packages, and, in the "Upgrade all Packages" section, select:
Click "Upgrade Now", and it will all be done automagically, giving all the information about the upgrade. Also periodic upgrades to Webmin, as we did at the beginning of this guide, are advisable.
Some examples of other functionality you may be interested in including:
As you may have gathered by now, administering a Linux server is not a brain-twisting business, as some may have you think. Once you have everything set up to meet your needs, your LAN server/gateway should run like clockwork, requiring only occasional upgrades and maybe a pat on the back. Moreover, Webmin makes it a pleasant point-and-click affair, although, like everything else, you have to know what it is you want to do. This is where the vast documentation and help from the Linux community is priceless and indispensable.
With luck, everything has worked as expected, so far, and you now benefit from a free (as in free speech), powerful, flexible, easy-to-manage, easy-to-use, and cheap solution to your home networking needs. This, dear friends, is the brilliance of free and open source software!
[1] Rick Moen comments: On balance, I'd recommend against forwarding queries to one's ISP's nameservers. At minimum, users should understand the tradeoff.
Declaring forwarders in BIND declares a list of nameserver IPs to consult on all lookups BIND cannot answer from its local cache, in preference to the various nameservers that are authoritative for those domains, that would have been otherwise consulted. In the very short term, this gives you faster responses on early queries (i.e., for a short while after your local BIND instance fires up). After a while, most likely queries will be hits on the local cache, making your relatively high speed of access to the ISP's nameservers mostly irrelevant.
What you lose through such forwarding constitutes, in my view, a very serious drawback: you are making your DNS rely on the performance, reliability, and security of your ISP's nameservers. Most ISP nameservers turn out to have poor reliability, are seriously lacking in performance and reasonableness of configuration, and have alarming security problems.
To be specific about some of those things: Many ISPs appear to have deliberately overridden the Time to Live (TTL) values that would normally be shared with cached DNS records, to artificially prolong the life of that cached information even though it may have been sent with low TTL values to keep cached copies from being used after becoming obsolete. The ISPs presumably do this, subverting the intended DNS-updating mechanisms, to reduce (needed) network traffic.
The worst problem is the security one: Studies such as Dan Kaminsky's have shown that a large number of ISP nameservers are vulnerable to cache poisoning, a form of attack in which they are induced to cache fraudulent data. This is especially true when those nameservers are used by forwarding nameservers in the fashion that this article describes.
The alternative, of course, is to just simply run your own recursive-resolver nameserver without forwarders, which by default will then get its data from the root zone and the immense tree of authoritative servers it defines. Fortunately, this works just perfectly without touching anything at all, out of the box, and imposes no performance deficiencies once you've started loading the cache.
I guess, overall, there's an instinctive tendency in people to think "Let's rely on the ISP's services, because they're the pros, and we should get better performance and reliability, that way." However, it turns out that, most often, none of those things is actually the case, and you can do much better on your own, with surprising ease.
One last suggestion: BIND9 is certainly a respected do-it-all nameserver, but is vastly overfeatured when all you need is a simple recursive resolver (and, in particular, aren't serving up locally defined domains of your own, which is called authoritative service). Also, BIND9 is infamously bloated and slow.
For all of those reasons, I long ago started keeping a catalogue of alternatives to BIND9, and in all honesty cannot say which of them I'd recommend, since I've stayed with BIND9 despite reservations, but you might browse the alternatives.
Talkback: Discuss this article with The Answer Gang
Shane is a Medical Resident in Romania. He has been an ardent user of FOSS and Linux since 2004. He spends a sizeable amount of time on Linux forums, learning about it and helping others where he can. Currently, his favorite distro is Ubuntu, while he has used Mandrake/Mandriva in the past on his desktop, and still does on his home network server.
The gory details of code and hardware often hide the details of the "wetware", the human beings using technology. GNU/Linux software is driven by hundreds and thousands of people who in turn make the experience of using a computer a pleasant task for many more. There are many projects out there that want to reach the non-technical groups of our society. One of these projects is the One Laptop per Child (OLPC) organisation.
Technology as seen in software and hardware doesn't fall from the sky. There have to be skilled designers and developers at work who think, code, build, test, and produce. These people don't fall from the sky, either. They have to learn their skills at school or at university. Learning to write a computer program is a lot easier when you have access to computers and tools that enable you to get your code to run. This sounds a bit like a vicious circle. Indeed, this is true, and can be readily experienced by teaching, let's say, Perl programming without a Perl interpreter. This is how bleak it looks in many schools around the world. It is especially true in developing countries and rural areas; in short, anywhere where there is not much budget for equipment.
This is where the One Laptop per Child project comes into play. It tries to address two issues. First, it supplies a tool that can be used for teaching. The XO-1, formerly known as the $100 laptop, is a laptop that can be used inside and outside school. It can be used for reading, for writing, for playing, for collaborating, and for lots of other activities. In fact, activities is the rather fitting name for applications on the XO-1. The XO-1 is networked by using wireless technology, thus finessing any need to integrate the One Crossover Cable per Child project (or worse) into the OLPC initiative. The second issue is giving access to technology for children in a playful way. If your only association with computers is a big black mainframe sitting in an air-conditioned dungeon, ready to devour you alive, then you won't have much fun working with computers, and probably are not very keen on learning any technological skills. Managing a first contact situation without trauma is one of the primary goals of teachers.
So, all in all, the XO-1 is a wonderful thing - and it runs Linux! Which is all the better.
My first contact with an XO-1 machine went without shock, but with even more curiosity. It happened during the Linuxwochen in Vienna, an annual event presenting talks, workshops, and companies working with Free Software at locations throughout Austria. Aaron Kaplan held a talk about the OLPC project, and managed to bring two XO-1s for anyone to try. The laptops fell prey to the visitors, so you had to be quick to get a glance at the system. However, it was easy to hear them, for many visitors played with the musical sequencer software called TamTam. The XO-1's desktop is tailored for children. This means that you navigate by using symbols. The desktop also tries to express its messages by means of graphics and icons.
The hardware is tuned for low power consumption, in order to make deployment in areas with an unstable electrical power grid easier. Recharging can be done via solar panels or mechanical generators, such as a spindle that can be operated by pulling a rope. The display is either backlit or operates in a contrast mode; the latter allows for reading text on the screen in broad sunlight. The CPU of the newer models is capable of displaying video clips on-screen. Networking is done by using wireless network cards. The laptops autoextend the range of the network by using mesh technology; this means that every XO-1 acts as a wireless client and as a mesh router. Mesh routing is also done while the system is not being used - i.e., in sleep mode.
There is a lot of design in this laptop. Which brings me to the people who thought of all this.
So, how did the two XO-1s end up at the Linuxwochen, and how did Aaron get involved in all of this? Since curiosity is part of my job description, I asked Aaron a couple of questions.
Hello, Aaron! What is your background in terms of computing and Unix systems?
I started in around '92 or so with BSD. I think it was still called BSD-Lite, and it came on 3.5" floppy discs. These were chunks, and you had to literally "cat" them together. Anyway, at around the same time, I was telneting myself from hop to hop via the MCI systems to my first real e-mail account / Unix box on the Net: well.com. Remember, that was ~19200 baud modem times. So, like many people from that time who experienced the Net for the first time, I got totally hooked on all the online Unix services that were out there, just waiting to be explored. Then, for some time, I was a bit active in the FreeBSD community.
I guess from then on, just one interesting topic just kept coming in after the previous. So that is where I am now :) It was always a playful experience all the way.
How did you get involved with the OLPC project?
First of all, I have to state that I am a volunteer in olpcaustria.org - a local grassroots organization that is not directly affiliated with OLPC per se. However, we are in good contact with OLPC, and try to help the core team and the whole idea.
About the history: I got intensively involved only recently. Before, in 2006, I was looking for a job and a colleague of mine pointed me to the Google Summer of Code for OLPC. So, I tried to apply myself, but eventually ended up as a mentor for Arthur Wolf. The SoC project could have gone better, but - hey - all students need to get some experience. Actually, it would be interesting to look at Arthur's ideas again, and re-implement them. Then, for some time, I was not involved in any activity related to OLPC.
Recently, I had the chance to visit OLPC and MIT, and only then did I realise how close the works of funkfeuer.at (a wireless community network, similar to freifunk) had been networkwise to OLPC. Both projects work on mesh networks, and I do believe these mesh networks could complement each other.
Where do you see the biggest benefits to education, in the countries that deploy the OLPC?
Personally, I believe that, in the short term, these countries will have a lot of good engineers who will understand computers, maths, physics, biology, etc., very intuitively. However, please remember, this is only the first step. You still need good education and teachers later (as a university student). The whole topic is quite big, not simple, but I still believe: if you give kids a chance to program their own games, to playfully explore maths, physics, and the like, then this is the very first and most important step. Will every kid with a laptop be an engineer? No! Of course not, luckily, not.
So, I think OLPC is a great, great grand vision, and it will only be the first step in revolutionizing education. There are other new approaches as well, such as Connexions, MIT open courseware, etc.
A lot of companies support the OLPC project. What is the role of the volunteer developers? Which skills are needed for which tasks?
As far as I know - everything is needed. The core system is currently heavily in development. Personally, I see the power of the open source community primarily in creating cool mesh-enabled activities.
("Activities" are the applications of the XO-1.)
A lot of people say that they aren't programmers, and lack highly technical skills. Are there any opportunities for these volunteers, as well?
Sure! Lobby for the OLPC project at your local government. :) Program some activities. And most important: test with kids!
But for a better explanation, I would like to redirect the humble reader to: Getting_involved_in_OLPC
How is the development organised? Who reviews the code and who decides which improvements go into the production version?
Do you think other communities can benefit from the OLPC project, as well?
Yes! There is a wealth of really cool tech coming out of OLPC. Take, for example, the microsleep mode: The laptop has a DCON chip that keeps the image stable but everything else can go to sleep, including the CPU. So that the CPU does not have to wake up every HZ ticks, there is, according to my knowledge, a special adaption to the timing code. In other words: the CPU can go to sleep in milliseconds, and wake up again in milliseconds. This drastically increases the battery hours you get out of the laptop. I think this change will come to mainstream Linux laptops. It is just too good to leave out.
I deliberately wrote "our turn": Teaching children the ways of the world can be done (and should be done) by any of us. Teaching children what Free Software can do may be something to start with. You don't need programming skills to do that. All you need is to write, to read, and to talk, as Aaron said. And the OLPC project is not the only one of its kind. There are lots of efforts in progress to achieve similar goals, and most of them go nowhere without a community.
I picked some links to the OLPC project, volunteer Web sites, and things I mentioned in the article. I don't wish to rate any efforts by omission; I just mention a few, and probably missed many.
Talkback: Discuss this article with The Answer Gang
René was born in the year of Atari's founding and the release of the game Pong. From his early youth on, he started taking things apart to see how they work. He couldn't even pass construction sites without looking for electrical wires that might seem interesting. His interest in computing began when his grandfather bought him a 4-bit microcontroller with 256 bytes of RAM and a 4096-byte operating system, forcing him to learn assembler before any other language.
After finishing school, he went to university to study physics. He then collected experience with a C64, a C128, two Amigas, DEC's Ultrix, OpenVMS, and finally GNU/Linux on a PC in 1997. He has been using Linux ever since, and still likes to take things apart and put them together again. Freedom of tinkering brought him close to the Free Software movement, where he puts some effort into the right to understand how things work. He is also involved with civil liberty groups focusing on digital rights.
Since 1999, he has been offering his skills as a freelancer. His main activities include system/network administration, scripting, and consulting. In 2001, he started to give lectures on computer security at the Technikum Wien. Apart from staring into computer monitors, inspecting hardware, and talking to network equipment, he is fond of scuba diving, writing, and photographing with his digital camera. He would like to have a go at storytelling and roleplaying again as soon as he finds some more spare time on his backup devices.
These images are scaled down to minimize horizontal scrolling.
All HelpDex cartoons are at Shane's web site, www.shanecollinge.com.
Talkback: Discuss this article with The Answer Gang
Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in his brightly-coloured underwear fighting criminals. During the
day... well, he just runs around in his brightly-coloured underwear. He
eats when he's hungry and sleeps when he's sleepy.