...making Linux just a little more fun!
[In reference to the article Re-compress your gzipp'ed files to bzip2 using a Bash script (HOWTO) - By Dave Bechtel]
Dmitri Rubinstein (rubinste at graphics.cs.uni-sb.de)
Thu Feb 9 01:28:46 PST 2006
There is the following code in the script:
# The Main Idea (TM)
time gzip -cd $f2rz 2>>$logfile \
    | bzip2 > $f2rzns.bz2 2>>$logfile
# rc=$?
[ $? -ne 0 ] && logecho "!!! Job failed."
# XXX Currently this error checking does not work, if anyone can fix it
# please email me. :-\
The gzip exit code will always be ignored, so the error checking will not work correctly. "set -o pipefail" would help in this case; however, it is only available since bash 3.0.
Maybe it is better to use the PIPESTATUS variable, e.g.:

$ true | false | true
$ echo ${PIPESTATUS[*]}
0 1 0
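For instance, the quoted check could be rewritten along these lines (a sketch only; $f2rz, $f2rzns, $logfile, and the logecho helper are the script's own names):

```
# Sketch: capture both exit codes with PIPESTATUS (works on bash older than 3.0)
time gzip -cd $f2rz 2>>$logfile | bzip2 > $f2rzns.bz2 2>>$logfile
rc=( "${PIPESTATUS[@]}" )   # save the codes before the next command resets them
if [ ${rc[0]} -ne 0 ] || [ ${rc[1]} -ne 0 ]; then
    logecho "!!! Job failed (gzip=${rc[0]}, bzip2=${rc[1]})."
fi
```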
[In reference to the article A Short Tutorial on XMLHttpRequest() - By Bob Smith ]
Willem Steyn (wsteyn at pricetag.co.za)
Mon Feb 20 09:30:41 PST 2006
This example in the 4th exercise works great. Just one problem that I have with all this AJAX and XMLHttpRequest stuff is that the user's browser (IE) prompts at every call-back that the page is trying to access off-site pages that are not under its control, and that this poses a security risk. How do you get past this without asking the user to turn off the security settings?
Take for example a real-world scenario where a transactional page must get a real-time transaction back from another server-based page. Then upon clicking something like a SUBMIT button, the user is going to get this little message popping up, stating that it is a security risk. Any ideas?
Thanks in advance.
[Ben] -
Hi, Willem -
First off, please turn off the HTML formatting when emailing TAG. Many people in help forums, including a number of people here, will not take the trouble to answer you if you require them to wade through HTML markup, and will discard your mail unread; others will just be annoyed at having to clean up the extraneous garbage. In addition, you've tripled the size of your message. I've corrected the above problems, but some information may have been lost in the process.
Pardon me... you've used 'IE' and 'security' in the same sentence, without the presence of modifiers such as 'bad' or 'non-existent'. I'm afraid that makes your sentence unparseable. :)
As I sometimes point out to people, the term "Linux" in "Linux Gazette" tends to imply certain things - one of which is that answering questions like "How do I make IE do $WHATEVER" comes down to "use Linux". Asking us how to fix broken crapware in a legacy OS is, I'm afraid, an exercise in futility; the very concept of "security settings" in a *browser*, a client-end application designed for viewing content, is something that can only be discussed after accepting the postulate that water does indeed run uphill, and the sun rises in the west.
Perhaps your client can mitigate the problem somewhat by running a browser that isn't broken-as-designed; Mozilla Firefox is a reasonable example of such a critter. For anything beyond that, I'd suggest asking in Wind0ws- and IE-related forums.
[In reference to the article PyCon 2006 Dallas - By Mike Orr (Sluggo) ]
David Goodger (goodger at python.org)
Sun Mar 5 07:49:28 PST 2006
There was a grocery store (Tom Thumb's) a 30-minute walk away, or you could take the hotel's free shuttle service. Alternatively, a Walmart with a large grocery section was a 10-minute walk away, just behind the CompUSA store (not to be confused with the CompUSA headquarters building beside the hotel).
[In reference to the article Build a Six-headed, Six-user Linux System - By Bob Smith ]
Richard Neill rn214 at hermes.cam.ac.uk
Sat Mar 4 09:23:16 PST 2006
Re multi-seat computers: I think you are heading for much pain if you use the /dev/input/X devices directly! If the USB devices are powered on in a different order, they will move around!
Use udev rules and symlinks instead...
I have 4 mice on my system (don't ask why!); here is my config in case it is useful:
/etc/udev/rules.d/10-local.rules
------------------------------------------------------
# Rules for the various different mice on the system
# (the PS/2, Synaptics, and Trackpoint)

# PS/2 mouse (usually /dev/input/mouse0)
# Symlink as /dev/input/ps2mouse
BUS="serio", kernel="mouse*", SYSFS{description}="i8042 Aux Port", NAME="input/%k", SYMLINK="input/ps2mouse"

# Synaptics touchpad (in mouse mode) (usually /dev/input/mouse1)
# Symlink as /dev/input/synaptics-mouse
BUS="usb", kernel="mouse*", DRIVER="usbhid", SYSFS{bInterfaceClass}="03", SYSFS{bInterfaceNumber}="00", SYSFS{interface}="Rel", NAME="input/%k", SYMLINK="input/synaptics-mouse"

# Synaptics touchpad (in event mode) (usually /dev/input/event5)
# Symlink as /dev/input/synaptics-event
BUS="usb", kernel="event*", DRIVER="usbhid", SYSFS{bInterfaceClass}="03", SYSFS{bInterfaceNumber}="00", SYSFS{interface}="Rel", NAME="input/%k", SYMLINK="input/synaptics-event"

# Trackpoint (usually /dev/input/mouse2)
# Symlink as /dev/input/trackpoint
BUS="usb", kernel="mouse*", DRIVER="usbhid", SYSFS{bInterfaceClass}="03", SYSFS{bInterfaceNumber}="01", SYSFS{interface}="Rel", NAME="input/%k", SYMLINK="input/trackpoint"

# WizardPen tablet (usually /dev/input/event6)
# Symlink as /dev/input/wizardpen
BUS="usb", kernel="event*", DRIVER="usbhid", SYSFS{bInterfaceClass}="03", SYSFS{bInterfaceNumber}="00", SYSFS{interface}="Tablet WP5540U", NAME="input/%k", SYMLINK="input/wizardpen"
-----------------------------------------
Then I can configure sections in xorg.conf like this:
Section "InputDevice" Identifier "ps2mouse" Driver "mouse" Option "Device" "/dev/input/ps2mouse" Option "Protocol" "ExplorerPS/2" Option "ZAxisMapping" "6 7" EndSection Section "InputDevice" Identifier "trackpoint" Driver "mouse" Option "Device" "/dev/input/trackpoint" Option "Protocol" "ExplorerPS/2" Option "ZAxisMapping" "6 7" #We want to use emulatewheel + emulatewheelltimeout. But need Xorg 6.9 for that #Option "EmulateWheel" "on" #Option "EmulateWheelButton" "2" #Option "EmulateWheelTimeout" "200" #Option "YAxisMapping" "6 7" #Option "XAxisMapping" "4 5" #Option "ZAxisMapping" "10 11" EndSection
[Bob] - Picture Bob smacking his head and saying "Doh!" You are right
and the next article, the one where I get it *all* working, will use udev and
symlinks.
Using /dev/input was not too bad since the USB bus is enumerated the same way
on each power up. I only had to move things around once right after the initial
set up.
[[RichardN]] - I shall be interested to see how it goes, especially if you can make the resulting system stable. Also, I wonder whether you can use udev to distinguish between 6 different copies of a nominally identical item? Some may let you use a serial number. Using the BUS id would be cheating - one ought to be able to plug it together randomly and have it work!
P.S. An interesting challenge would be to get separate audio channels working (eg a 5.1 channel motherboard could give 2 1/2 stereo outputs!) Or you could use alsa to multiplex all the users together so that everyone can access the speakers (and trust the users to resolve conflicts).
[Ben] -
> I have 4 mice on my system (don't ask why!); here is my config in case
> it is useful:

All right, I won't ask.
...
OK, maybe I will. :) So, why do you have _4_ mice on your system? And what's the difference between 'mouse mode' and 'event mode'?
[[RichardN]] - OK. I have an IBM UltraNav keyboard containing a trackpoint and a touchpad.
1) The trackpoint is my primary mouse.
2) The touchpad is also used as a mouse (mainly for scrolling). It appears as two different devices, so you can either treat it as a mouse (I called this mouse mode) or as an event device. The latter is the way to use its advanced features with the synaptics driver, but I'm not actually doing this for now.
3) A regular PS/2 mouse, for things like PCB design.
4) A graphics tablet.
[Ben] - Perhaps you should just write an article on Udev. :)
[[RichardN]] - You're very kind. But there is one already: https://reactivated.net/udevrules.php
[Ben] - [snip code] I can see a lot of sense in that. Judging from this example, I should be able to figure out a config file that will keep my flash drive at a single, consistent mountpoint (instead of having to do "tail /var/log/messages" and mounting whatever random dev it managed to become.) I've heard of Udev before, and even had a general idea of what it did, but never really explored it to any depth. Nice!
[[RichardN]] - Indeed. You'll get a random /dev/sdX, but a consistent symlink /dev/my_flash_drive (which you specify). Then, you can specify where it should be mounted via options in fstab.
Even better: you can specify that a digital camera should have umask and dmask (in fstab), which will fix the annoying tendency of JPGs to be marked as executable!
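By way of illustration, the pair of entries might look something like this (a sketch only; the serial number, symlink name, and mountpoint are all invented):

```
# /etc/udev/rules.d/10-local.rules -- example rule; the serial is made up:
BUS="usb", KERNEL="sd?1", SYSFS{serial}="0123456789AB", NAME="%k", SYMLINK="my_flash_drive"

# /etc/fstab -- 'noauto,user' lets an ordinary user mount it; the umask
# keeps JPGs from being marked executable:
/dev/my_flash_drive  /mnt/flash  vfat  noauto,user,umask=022  0 0
```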
[In reference to the article Build a Six-headed, Six-user Linux System - By Bob Smith ]
Andy Lurig (ucffool at comcast.net)
Sun Mar 5 10:19:42 PST 2006
Have you tried a USB audio solution for each station as well, and how would that be worked into the configuration?
[Bob] - I have not tried it yet, and do not know. The keyboards in my system have a built-in, one-port hub. I was going to try to tie that hub to the user at that keyboard, so that a flash drive plugged in would only be visible to that user. It's kind of the same with audio: each user would have to bring their own audio set, but it would work if they plugged it in.
BTW: Two readers have mentioned https://linux-vserver.org as a way to securely isolate users from each other.
[In reference to the article Build a Six-headed, Six-user Linux System - By Bob Smith ]
Richard June (rjune at bravegnuworld.com)
Tue Mar 7 06:37:46 PST 2006
I do a lot with LTSP, so I'm familiar with sharing a machine between multiple users. KDE's kiosktool will let you do quite a bit to disallow the more CPU intensive screensavers. Verynice is also helpful in keeping rogue processes in check.
[In reference to the article Build a Six-headed, Six-user Linux System - By Bob Smith ]
vinu (vinu at hcl.in )
Wed Mar 8 22:24:36 PST 2006
I worked on this multi-terminal setup for around 6 months (from Dec 2004 to July 2005) with RHEL 3.0 and RHEL 4.0, and I made it work with both ATI and nVidia cards. But my biggest problem was that many applications have problems running on such a setup, especially sound applications. I rectified many of the issues through some weird ways, but some apps, like the SWF plugin for Firefox, caused big problems, as only one instance of the plugin can be used: the sound is always redirected to the same device, say /dev/dsp. To solve this problem we used a very ugly hack: we mounted the same /tmp folder for all the users and made the app think it was using a separate /tmp and that the device was always /dev/hda. This actually ruins the security of Linux to a big extent. Some other apps, like tuxracer and konquest, use the video cards directly (DRI enabled). So I think the application design plays a vital role in such a setup. The last kernel I used was 2.9.11, and I haven't tried any new patches or solutions that have appeared in the last 8-10 months, but I am just sharing my other observations as well:
1) I tried to make this setup on Intel motherboards, which was an unsuccessful attempt for which I don't have a good explanation.
2) These setups work well on AMD and some MSI motherboards, with Intel or nVidia chipsets and nVidia cards.
3) The nVidia/AMD combination shows excellent performance over the others.
4) Applications like mplayer show a certain level of uncertainty, even when configured properly.
5) If you are using audio/visual applications, the application should be configurable (e.g. mplayer, where you can select the audio device from the configuration options, and which also supports the ALSA devices).
6) For a normal user, it's not possible to configure applications like the SWF plugin for Firefox for all six users.
7) This setup is best suited for a lab environment and newbies, because for advanced users the CPU speed may not be sufficient (e.g. a kernel compilation fatally degrades the performance of the whole system).
I don't know whether these observations are still correct as per the current situation, but I think most are still relevant.
regards
vineesh
[Bob] - Thanks, Vineesh.
My solution to the sound problem was to turn it off for everyone -- not a very
elegant solution. A couple of readers have pointed out a system that would break
the PC into six separate systems, each with its own sound, X, and especially
important, security. The virtual server page is at https://linux-vserver.org.
I have not tried it yet but it looks promising.
BTW: another web page dedicated to multi-head Linux PCs can be found here: https://www.c3sl.ufpr.br/multiterminal/index-en.php.
Please let me know if there is anything I can do to help you with your project.
[In reference to the article Build a Six-headed, Six-user Linux System - By Bob Smith ]
Chuck Sites (syschuck at kingtut.spd.louisville.edu)
Wed Mar 15 13:31:29 PST 2006
Hi Bob,
I've been posting on Chris Tyler's blog regarding my multiseat configuration. I've also had problems with a kernel oops when the last person (seat 1 of a two-seat configuration) logs out. I was looking at your kernel oops message, and the call trace is very similar to mine. I was wondering if you still have that system running, and if so, could you send me a copy of an 'lspci -vvv'? Also, I'm interested in hearing about your experiences using the 'nv' driver. Were you seeing a similar oops?
Best Regards,
Chuck Sites
[Bob] - The system is not running right now but I can set it up again fairly easily. What did you want to know from the 'lspci -vvv'?
I tried both the nv and vesa drivers. Neither could successfully get all six heads working. The nVidia driver was easily the best driver I tested.
[In reference to the article Build a Six-headed, Six-user Linux System - By Bob Smith ]
Claude Ferron (cferron at gmail.com)
Fri Mar 17 19:23:24 PST 2006
When I try to start a screen with the -sharevts option, the CPU climbs to 100%...
I have the following in the system:
Kernel 2.6.15.6
6 x nVidia Corporation NV5M64 RIVA TNT2
Xorg version 6.9 on Slackware 10.2
[Bob] - Could you give a little more information?
What was the full line to invoke X?
Does the error occur on the first head or the last?
Could you send the X.org configuration for one head? I saw this problem several times, but each time it was because I mistyped a line in the X.org config file.
thanks
Benjamin A. Okopnik (ben at linuxgazette.net)
Wed Mar 29 00:50:24 PST 2006
Hi, Gang -
Anyone interested in hearing Bruce Perens do his "Open Source State of the Union" speech in Boston (4/5/2006), please let me know and I'll arrange a press pass for you (it would be a Very Nice Thing if you sent in a conference report as a result.) Do note that this usually requires a recently-published article with your name on it.
In fact, if you'd like to attend any other industry conferences, the same process applies. I will ask that you do a little research in those cases and send me the contact info for the group that's running the particular con you're interested in.
Bob van der Poel (bvdp at uniserve.com)
Sun Feb 26 18:31:54 PST 2006
Following up on a previous discussion
Just to follow up on this... I have it working perfectly now. A few fellows on the ALSA mailing list gave me a hand, and we found out that for some reason the OSS sound support was being loaded first and ALSA was then ignored (still, I don't see why a cold plug did work). But the simple solution was to delete or rename the kernel module "usb-midi.ko.gz" so that the system can't find it. Works like a damn.
Brian Sydney Jathanna (briansydney at gmail.com)
Tue Mar 14 18:18:19 PST 2006
Hi all,
I was just wondering if there is a way to enter the default inputs at the command line repeatedly when a program asks for user input. For example, while compiling the kernel it asks for a whole lot of interaction where you just keep pressing Enter. The 'yes' command is not an option in this case, because at some stages the default input would be a No. Help appreciated, and thanks in advance.
[Thomas] - Sure. You can use 'yes' of course -- just tell it not to actually print any characters:
yes "" | make oldconfig
That will just feed into each stage an effective "return key press" that accepts whatever the default is.
[[Ben]] - Conversely, you can use a 'heredoc' - a shell mechanism designed for just that reason:
program <<!
yes
no
maybe

!
In the above example, "program" will receive four lines via STDIN: 'yes<Enter>', 'no<Enter>', 'maybe<Enter>', and the <Enter> key all by itself.
sam luther (smartysam2003 at yahoo.co.in)
Sat Mar 18 12:32:22 PST 2006
I want to develop C code to transfer files from one PC to another over the parallel ports (DB-25 on my machine) of Linux machines. I have the 18-wire cable ready, but I'm not sure about the connections, or about how to open and control the parallel port. Please help... advice, sample code, and relevant material would be very helpful. Thanks.
[Thomas] - Well, if this is Linux -> Linux, set up PLIP.
(Searching linuxgazette.net for 'PLIP' will yield all the answers you need.) Having done that, you can set up NFS or sshfs, or use scp, in the normal way.
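For the curious, the PLIP side of that, once the cable is connected, looks roughly like this (a sketch only; the addresses are made-up examples):

```
# On machine A:
modprobe plip                       # parallel-port IP driver
ifconfig plip0 192.168.3.1 pointopoint 192.168.3.2 up

# On machine B:
modprobe plip
ifconfig plip0 192.168.3.2 pointopoint 192.168.3.1 up

# Test from machine B:
ping 192.168.3.1
```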
[Jimmy] - Hmm. Advice, sample code, relevant material...
https://linuxgazette.net/issue95/pramode.html
https://linuxgazette.net/118/deak.html
https://linuxgazette.net/122/sreejith.html
https://linuxgazette.net/112/radcliffe.html
are just some of the articles published in previous issues of LG that discuss doing various different things with the parallel port. The article in issue 122 also has diagrams, which may be of help.
Brian Sydney Jathanna (briansydney at gmail.com)
Tue Mar 21 14:44:57 PST 2006
Is there a way to modify the command line of a running process? Looking at /proc/<pid>/cmdline, it appears to be read-only, even for the root user. It would be helpful to add options or change the arguments of a running command, if this were possible. Is there a way around it? Thanks in advance.
[Thomas] - I don't see how. Most applications parse their command-line options only once, at startup. This means you would have to effectively restart the application.
However, if a given application receives its options via a config file, then sending that application a HUP signal might help you -- provided the application supports setting those command-line options in the config file.
[[Ben]] - Not that I can make a totally authoritative statement on this, but I agree with Thomas: when the process is running, what you have is a memory image of a program _after_ it has finished parsing all the command-line options and doing all the branching and processing dependent on them; there's no reason for those command-line parsing mechanisms to be active at that point. Applications that need this kind of functionality - e.g., Apache - use the mechanism that Thomas describes to achieve it, with a "downtime" of only a tiny fraction of a second between runs.
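To illustrate the config-file-plus-HUP pattern Thomas describes, here's a minimal sketch; the config path and the 'interval' variable are invented for the example:

```
#!/bin/bash
# Toy daemon that re-reads its config on SIGHUP instead of restarting.
CONF=/tmp/toy-daemon.conf          # hypothetical config file

load_config() { [ -r "$CONF" ] && . "$CONF"; }
trap load_config HUP               # 'kill -HUP <pid>' re-parses the config

load_config
while sleep 5; do
    # 'interval' would be set in $CONF, e.g.  interval=10
    echo "current interval: ${interval:-unset}"
done
```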
manne neelima (manne_neelima at yahoo.com)
Wed Mar 22 12:08:12 PST 2006
I have a question about rm command. Would you please tell me how to remove all the files excepts certain Folder in Unix?
Thanks in Advance
Neelima
[Thomas] - Given that the 'rm' command is picky about removing non-empty directories anyway (unless it is used with the '-r' flag), I suspect your question is:
"How can I exclude some file from being removed?"
... to which the answer is:
"It depends on the file -- what the name is, whether it has an 'extension', etc."
Now, since you've provided next to nothing about any of that, here's a contrived example. Let's assume you had a directory "/tmp/foo" with some files in it:
```
[n6tadam at workstation foo]$ ls
a b c
```
Now let us add another directory, and add some files into that:
```
[n6tadam at workstation foo2]$ ls
c d e f g
```
Let's now assume you only wanted to remove all the files in foo:
```
[n6tadam at workstation foo]$ rm -i /tmp/foo/*
```
That would remove the files 'a', 'b', and 'c'. It would not remove "foo2", since that's a non-empty directory.
Of course, the answer to your question is really one of globbing. I don't know what kind of files you want removing, but for situations such as this, the find(1) command works "better":
```
find /tmp/foo -type f -print0 | xargs -0 rm
```
... would remove all files in /tmp/foo _recursively_, which means the files in /tmp/foo2 would also be removed. Whoops. Note that earlier, with the "rm -i *" command, the glob only expands the top level, as it ought to. You can limit find's visibility, such that it will only remove the files from /tmp/foo and nowhere else:
```
find /tmp/foo -maxdepth 1 -type f -print0 | xargs -0 rm
```
Let's now assume you didn't want to remove file "b" from /tmp/foo, but everything else. You could do:
```
find /tmp/foo -maxdepth 1 -type f -not -name 'b' -print0 | xargs -0 rm
```
... etc.
[Ben] - Well, first off, Unix doesn't have "Folders"; I presume that you're talking about directories. Even so, the 'rm' command doesn't do that - there's no "selector" mechanism built into it except the (somewhat crude) "-i" (interactive) option. However, you can use shell constructs or other methods to create a list of files to be supplied as an argument list to 'rm', which would do what you want - or you could process the file list and use a conditional operator to execute 'rm' based on some factor. Here are a few examples:
# Delete all files except those that end in 'x', 'y', or 'z'
rm *[^xyz]

# Delete only the subdirectories in the current dir
for f in *; do [ -d "$f" ] && rm -rf "$f"; done

# Delete all except regular files
find /bin/* ! -type f -exec /bin/rm -f {} \;
[[Ben]] - Whoops - forgot to clean up after experimenting. :) That last should, of course, be
find * ! -type f -exec /bin/rm -f {} \;
[[[Francis]]] - My directory ".evil" managed to survive those last few. And my file "(surprise" seemed to cause a small problem...
[[[[Ben]]]] - Heh. Too right; I got too focused on the "except for" nature of the problem. Of course, if I wanted to do a real search-and-destroy mission, I'd do something like
su -c 'chattr -i file1 file2 file3; rm -rf `pwd`; chattr +i *'

[evil grin]
No problems with '(surprise', '.evil', or anything else.
[[[[[Thomas]]]]] - ... that won't work across all filesystems, though.
[[[[[[Ben]]]]]] - OK, here's something that will:
1) Copy off the required files.
2) Throw away the hard drive, CD, whatever the medium.
3) Copy the files back to an identical medium.

There, a nice portable solution. What, you have more objections? :)
Note that I said "if *I* wanted to do a real search-and-destroy mission". I use ext3 almost exclusively, so It Works For Me. To anyone who wants to include vfat, reiserfs, pcfs, iso9600, and malaysian_crack_monkey_of_the_week_fs, I wish the best of luck and a good supply of their tranquilizer of choice.
[[[Francis]]] - (Of course, you knew that.
[[[[Ben]]]] - Actually, I'd missed it while playing around with the other stuff, so I'm glad you were there to back me up.
[[[Francis]]] - But if the OP cares about *really* all-bar-one, it's worth keeping in mind that "*" tends not to match /^./, and also that "*" is rarely as useful as "./*", at least when you don't know what's in the directory. I suspect this isn't their primary concern, though.)
[[[[Ben]]]] - Well, './*' won't match dot-files any better than '*' will, although you can always futz around with the 'dotglob' setting:
# Blows away everything in $PWD
(GLOBIGNORE=1; rm *)

...but you knew that already. :)
[Martin] - try this perl script... This one deletes all hidden files apart from the ones in the hash.
#!/usr/bin/perl -w
use strict;
use File::Path;

# These are the files you want to keep.
my %keepfiles = (
    ".aptitude" => 1,
    ".DCOPserver_Anne__0" => 1,
    ".DCOPserver_Anne_:0" => 1,
    ".gconf" => 1,
    ".gconfd" => 1,
    ".gnome2" => 1,
    ".gnome2_private" => 1,
    ".gnupg" => 1,
    ".kde" => 1,
    ".kderc" => 1,
    ".config" => 1,
    ".local" => 1,
    ".mozilla" => 1,
    ".mozilla-thunderbird" => 1,
    ".qt" => 1,
    ".xmms" => 1,
    ".bashrc" => 1,
    ".prompt" => 1,
    ".gtk_qt_engine_rc" => 1,
    ".gtkrc-2.0" => 1,
    ".bash_profile" => 1,
    ".ICEauthority" => 1,
    ".hushlogin" => 1,
    ".bash_history" => 1,
);

my $inputDir = ".";

opendir(DIR, $inputDir) || die("somemessage $!\n");
while (my $file = readdir(DIR)) {
    next if ($file =~ /^\.\.?$/);   # skip . & ..
    next if ($file !~ /^\./);       # skip unless it begins with .
    # carry on if it's a file you wanna keep
    next if ($keepfiles{$file});

    # Else wipe it
    #print STDERR "I would delete $inputDir/$file\n";
    # you should probably test for the outcome of these operations...
    if (-d $inputDir . "/" . $file) {
        #rmdir($inputDir . "/" . $file);
        print STDERR "Deleting Dir $file\n";
        rmtree($inputDir . "/" . $file);
    } else {
        print STDERR "Deleting File $file\n";
        unlink($inputDir . "/" . $file);
    }
}
closedir(DIR);
[[Ben]] - All that, and a module, and 'opendir' too? Wow.
perl -we'@k{qw/.foo .bar .zotz/}=();for(<.*>){unlink if -f&&!exists$k{$_}}'

:)
Ramon van Alteren (ramon at vanalteren.nl)
Wed Feb 8 15:44:14 PST 2006
Hi All,
I've recently built a 9Tb NAS for our serverpark out of 24 SATA disks & 2 3ware 9550SX controllers. Works like a charm, except....... NFS
We export the storage using nfs version 3 to our servers. Writing onto the local filesystem on the NAS works fine, copying over the network with scp and the like works fine as well.
However, writing to an nfs-share mounted on a different machine truncates files at random sizes, which appear to be multiples of 16K. I can reproduce the same behaviour with an nfs-share mounted via the loopback interface.
Following is output from a test-case:
On the server in /etc/exports:
/data/tools 10.10.0.0/24(rw,async,no_root_squash) 127.0.0.1/8 (rw,async,no_root_squash)
Kernelsymbols:
Linux spinvis 2.6.14.2 #1 SMP Wed Feb 8 23:58:06 CET 2006 i686 Intel (R) Xeon(TM) CPU 2.80GHz GenuineIntel GNU/Linux
Similar behaviour is observed with gentoo-sources-2.6.14-r5, same options.
CONFIG_NFS_FS=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
# CONFIG_NFS_V4 is not set
# CONFIG_NFS_DIRECTIO is not set
CONFIG_NFSD=y
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
# CONFIG_NFSD_V4 is not set
CONFIG_NFSD_TCP=y
# CONFIG_ROOT_NFS is not set
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y

root@cl36 ~ 20:29:44 > mount
10.10.0.80:/data/tools on /root/tools type nfs (rw,intr,lock,tcp,nfsvers=3,addr=10.10.0.80)

root@cl36 ~ 20:29:56 > for i in `seq 1 30`; do dd count=1000 if=/dev/zero of=/root/tools/test.tst; ls -la /root/tools/test.tst; rm /root/tools/test.tst; done
1000+0 records in
1000+0 records out
dd: closing output file `/root/tools/test.tst': No space left on device
-rw-r--r-- 1 root root 163840 Feb 8 20:30 /root/tools/test.tst
1000+0 records in
1000+0 records out
dd: closing output file `/root/tools/test.tst': No space left on device
-rw-r--r-- 1 root root 98304 Feb 8 20:30 /root/tools/test.tst
1000+0 records in
1000+0 records out
dd: closing output file `/root/tools/test.tst': No space left on device
-rw-r--r-- 1 root root 98304 Feb 8 20:30 /root/tools/test.tst
1000+0 records in
1000+0 records out
dd: closing output file `/root/tools/test.tst': No space left on device
-rw-r--r-- 1 root root 131072 Feb 8 20:30 /root/tools/test.tst
1000+0 records in
1000+0 records out
dd: closing output file `/root/tools/test.tst': No space left on device
-rw-r--r-- 1 root root 163840 Feb 8 20:30 /root/tools/test.tst
<similar thus snipped>
I've so far found this, which seems to indicate that RAID + LVM + complex storage plus 4KSTACKS can cause problems. However, I can't find the 4KSTACKS symbol anywhere in my config.
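(For anyone checking their own kernel for that symbol, something like the following should work; note that /proc/config.gz only exists if the kernel was built with IKCONFIG support:)

```
# If the running kernel exposes its configuration:
zgrep 4KSTACKS /proc/config.gz

# Otherwise, grep the build tree the kernel came from:
grep CONFIG_4KSTACKS /usr/src/linux/.config
```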
For those wondering.... no it's not out of space:
10.10.0.80:/data/tools  9.0T  204G  8.9T   3%  /root/tools
Any help would be much appreciated.......
Forgot to mention:
There's nothing in syslog in any case (loopback mount, remote-machine mount, or on the server).
We're using reiserfs 3, in case you're wondering. It's a RAID-50 machine, based on two RAID-50 arrays of 4.55 TB handled by the hardware controller.
The two raid-50 arrays are "glued" together using LVM2:
--- Volume group ---
VG Name               data-vg
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  2
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               1
Max PV                0
Cur PV                2
Act PV                2
VG Size               9.09 TB
PE Size               4.00 MB
Total PE              2384134
Alloc PE / Size       2359296 / 9.00 TB
Free  PE / Size       24838 / 97.02 GB
VG UUID               dyDpX4-mnT5-hFS9-DX7P-jz63-KNli-iqNFTH

--- Physical volume ---
PV Name               /dev/sda1
VG Name               data-vg
PV Size               4.55 TB / not usable 0
Allocatable           yes (but full)
PE Size (KByte)       4096
Total PE              1192067
Free PE               0
Allocated PE          1192067
PV UUID               rfOtx3-EIRR-iUx7-uCSl-h9kE-Sfgu-EJCHLR

--- Physical volume ---
PV Name               /dev/sdb1
VG Name               data-vg
PV Size               4.55 TB / not usable 0
Allocatable           yes
PE Size (KByte)       4096
Total PE              1192067
Free PE               24838
Allocated PE          1167229
PV UUID               5U0F3v-ZUag-pRcA-FHvo-OJeD-1q9g-IthGQg

--- Logical volume ---
LV Name               /dev/data-vg/data-lv
VG Name               data-vg
LV UUID               0UUEX8-snHA-dYc8-0qLL-OSXP-kjoa-UyXtdI
LV Write Access       read/write
LV Status             available
# open                2
LV Size               9.00 TB
Current LE            2359296
Segments              2
Allocation            inherit
Read ahead sectors    0
Block device          253:3

[Kapil] - I haven't used NFS in such a massive context, so this may not be the right question.
[[Ramon]] - Doesn't matter, I remember once explaining why I was still at work and with what problem to the guy cleaning our office (because he asked). He asked one question which put me on the right track solving the issue....in half an hour after banging my head against it for two days ;-)
Not intending to compare you to a cleaner... sometimes it helps a lot if you get questions from a different mindset.
[Kapil] - Have you tested what happens with the user-space NFS daemon?
[[Ramon]] - Urhm, not clear what you mean.... Tested in what sense ?
[[[Kapil]]] - Well, I noticed that you had compiled in the kernel NFS daemon, so I assumed that you were using the kernel NFS daemon rather than the user-space one. Does your problem persist when you use the user-space daemon?
[[[[Ramon]]]] - Thanx, no it doesn't.
[[[Kapil]]] - The reason I ask is that it may be the kernel NFS daemon that is leading to too many indirections for the kernel stack to handle.
[[[[Ramon]]]] - That appears to be the case. I'm writing a mail to the lkml and nfs lists right now to report the bug.
[[[[[Ramon]]]]] - Turned out it did, but with a higher threshold. As soon as we started testing with files > 100Mb the same behaviour came up again.
It turned out to be another bug related to reiserfs.
For those interested more data is here and here.
In short: although the reiserfs FAQ says it supports filesystems up to 16 TB with the default options (4K blocks), it actually supports only 8 TB. It doesn't fail outright, and appears to work correctly on the local filesystem; the problems only start showing up when using NFS.
I fixed the problem by using another filesystem. Based on comments on the nfs mailing list and the excellent article in last month's Linux Gazette, we switched to JFS.
So far it's holding up very well; we haven't seen the problems reappear with files in excess of 1 GB. Thanx for the help, though.
fahad saeed (fahadsaeed11 at yahoo.com )
Thu Feb 9 20:22:37 PST 2006
Hello,
I am a Linux enthusiast and I would like to be on the answer blog team. How may this be possible? I have one Linux article to my credit.
https://new.linuxfocus.org/English/December2005/article390.html
Regards
FAHAD SAEED
[Ben] -
Hi, Saeed -
Yep, I saw that - very good article and excellent work! Congratulations to your entire team.
You're welcome to join the Linux Gazette Answer Gang; simply go to https://lists.linuxgazette.net/mailman/listinfo/tag and sign up. As soon as you're approved, you can start participating.
[Kapil] -
Hello,
On Fri, 10 Feb 2006, fahad saeed wrote:
> <html><div style='background-color:'><DIV class=RTE>
> <P>Thankyou all for the warm welcome.I will try to be of use.</P>
> <P>Kind Regards</P>

I somehow managed to read that, but please don't send HTML mail!
I started reading your article.
Since I'm only learning the ropes with wireless could you send me some URL indicating how the wireless Ad-Hoc network (the links) are set up? Just to give you an idea of how clueless I am, I didn't get beyond "iwconfig wifi mode ad-hoc".
Regards,
Kapil.
P.S. It's great to see such close neighbours here on TAG.
[Kapil] -
Dear Fahad Saeed,
First of all you again sent HTML mail. You need to configure your webmail account to send plain text mail or you will invite the wrath of Thomas and Heather [or Kat] (who edit these mails for inclusion in LG) upon you. This may just be a preference setting but do it right away!
On Fri, 10 Feb 2006, fahad saeed wrote:
> the command you entered is not very right i suppose.The cards are
> usually configured( check it with ifconfig) as ath0 etc.The command
> must be (assuming that the card configured is 'displayed' as ath0)
> iwconfig ath0 mode ad-hoc. You can also do the same if u change the
> entry in ifcfg-ath0 to ad-hoc.

Actually, I just used the "named-interface" feature to call my wireless interface "wifi". So "iwconfig wifi mode ad-hoc" is the same as "iwconfig eth0 mode ad-hoc" on my system.
> Hope this helps. I didnt get your question exactly. Please let me know
> what exactly you are trying to do, so that i can be of
> more specific help.

My question was what one does after this. Specifically, is the IP address of the interface configured statically, via a command like ifconfig eth0 192.168.0.14, and if so, what is the netmask setting?
Just to be completely clear: I have not managed to get two laptops to communicate with each other using ad-hoc mode. Both sides say the (wireless) link is up, but I couldn't get them to send IP packets to each other. I have only managed to get an IP link when there is a common access point (hub).
The problem is that getting wireless cards to work with Linux has been such a complicated issue in the past that most HOWTOs spend a lot of time explaining how to download and compile the relevant kernel modules and load the firmware. The authors are probably exhausted by the time they get to the details of setting up networking :)
[[Ben]] - That's something I'd really like to learn to do myself. I've thought about it on occasion, and it always seemed like a doable thing - but I never got any further than that, so a description of the actual setup would be a really cool thing.
When I read about those $100 laptops that Negroponte et al are cranking out, I pictured a continent-wide wireless fabric of laptops stretching across, say, Africa - with some sort of a clever NAT and bandwidth-metering setup on each machine where any host within reach of an AP becomes a "relay station" accessible to everyone on that WAN. Yeah, it would be dead slow if there were only a few hosts within reach of APs... but the capabilities of that kind of system would be awesome.
I must say that, in this case, "the Devil is not as bad as he's painted". My experience of wireless cards under Linux has consisted of:
1) Find the source on the Net and download it;
2) Unpack the archive in /usr/src/modules;
3) 'make; make install' the module and add it to the list in "/etc/modules".
Oh, and run 'make; make install' every time you recompile the kernel. Not what I'd call terribly difficult.
[[[Martin]]] - When my Dad had a wireless card it wasn't that simple...
It was a Netgear something-or-other USB - I don't think we ever got it to work in Linux in the end.
It's not a problem now, though, as he is using Ethernet over Power, and it's working miles better in both Windows and Linux.
[[[[Ben]]]] - Hmm, strange. Netgear and Intel are the only network hardware with which I haven't had any problems - whatever the OS. Well, different folks, different experiences...
[[[Saeed]]] - I would have to agree with Kapil here. Yes, the configuration process is sometimes extremely difficult. As Benjamin portrayed it, it seems pretty easy, and in theory it is, but when done practically it is not that straightforward. The main problem is that the available drivers are for different chipsets. The vendors do not care about the chipsets, and change the chipsets without changing the product ID. It happened in our case with the WMP11, if I remember correctly.
Obviously once you get the correct sets of drivers, kernel and chipsets it is straightforward.
The lab setup that we did at UET Lahore required the cards to work in ad-hoc mode. We used the madwifi drivers. Now, as you may know, there is a beacon problem in the madwifi drivers, and the ad-hoc mode itself does not work reliably. The mode that we implemented was ad-hoc mode with cluster-head routing. In simple words, it meant that one of the PCs was configured to be in Master mode, with a bunch of PCs around it. It would have been really cool if we could have gotten it to work in 'pure ad-hoc' mode; nevertheless, it served the lab purposes.
[[[Jason]]] - Huh. Weird. I've got an ad-hoc network set up with my PC and my sister's laptop, and it "just works". The network card in my PC is a Netgear PCI MA311, with the "Prism 2.5" chipset. ("orinoco_pci" is the name of the driver module.)
$ lspci | grep -i prism
0000:00:0e.0 Network controller: Intersil Corporation Prism 2.5 Wavelan chipset (rev 01)

The stanza in /etc/network/interfaces is:
auto eth1
iface eth1 inet static
    address 10.42.42.1
    netmask 255.255.255.0
    wireless-mode ad-hoc
    wireless-essid jason
    wireless-key XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XX

FWIW, the commands I was using to bring up the interface before I switched to Debian were:
/sbin/ifconfig eth1 10.42.42.1 netmask 255.255.255.0
/usr/sbin/iwconfig eth1 mode Ad-Hoc
/usr/sbin/iwconfig eth1 essid "jason"
/usr/sbin/iwconfig eth1 key "XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XX"

I have dnsmasq providing DHCP and DNS. The laptop is running Windows 2000 with some off-brand PCMCIA wifi card.
[[[[Kapil]]]] - I get it. While the link layer can be set up in ad-hoc mode by the cards using some WEP/WPA mechanism, there has to be some other mechanism to fix the IP addresses and netmasks. For example, one could use static IP addresses on each of the machines, as Fahad Saeed (or Jason) has done. Of course, in a server-less network one doesn't want one machine running a DHCP server. Everything is clear now.
I will try this out the next time I have access to two laptops...
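For reference, the static recipe described above boils down to something like this on each laptop (the ESSID and addresses are invented examples):

```
# Bring up an ad-hoc link with a static address (use .15 on the second laptop):
iwconfig eth1 mode ad-hoc essid adhoc-test
ifconfig eth1 192.168.0.14 netmask 255.255.255.0 up
# ...then ping the other machine to test the link.
```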
[[Saeed]] - First of all, I am sorry about the HTML format thing.
Dear Kapil, what is your card's chipset type, and which drivers did you use to configure it with Linux?
[[[Kapil]]] - The card is the Intel one and it works with the driver ipw2200 that is built into the default kernel along with the firmware from the Intel site.
In fact, I have had absolutely no complaints about getting the card to work with wireless access stations with/without encryption and/or authentication. But all these modes are "Managed" modes.
I once tried to get the laptop to communicate with another laptop (a Mac) in Ad-Hoc mode and was entirely unsuccessful. So when you said that you managed to get an Ad-Hoc network configured for your Lab, I thought you had used the cards in Ad-Hoc mode.
However, from your previous mail it appears that you configured one card in "Master" mode and the remaining cards in "Managed" mode, so that is somehow different. In particular, I don't think the card I have can be set up in "Master" mode at all.
[[[[Saeed]]]] - You got it right; we did use the Master mode type in the lab for demonstration purposes, because it was more reliable than pure ad-hoc mode and it served the lab purposes. However, we did use ad-hoc mode in the lab as well, and it worked fine, but it wasn't reliable enough for lab purposes... which have to work reliably at all times.
J. Bakshi (j.bakshi at icmail.net)
Sat Mar 25 22:06:39 PST 2006
Dear list,
I am using discover 2.0.6 on a Debian sarge (testing) machine with the Linux 2.6.8-2-k7 kernel. There is no */etc/discover.conf* file, and I can't understand where discover looks for its default configuration. Please let me know, so that I can change its scanning options. *discover -t modem* and *discover -t printer* don't detect my modem or printer, BUT hwinfo detects this H/W perfectly. What may be the problem?
The files I presently have on my machine are:

/usr/bin/discover
/etc/discover.d
/lib/discover
/usr/share/discover
/usr/share/man/man1/discover.1.gz
discover.conf-2.6
discover-modprobe.conf
discover.conf.d
discover-v1.conf
discover.d
What should I do so that discover can load the driver modules automatically during boot? Kindly solve my doubts. Please CC me, with best compliments.
[Rick] - The Debian systems I have at my disposal at this moment don't have discover installed, so I'm limited to the information available on-line. However, searching https://packages.debian.org/ for "discover" turns up a number of packages including the main "discover" one, providing this list of contents on i386 sarge (which is _not_ the testing branch, by the way: sarge=stable, etch=testing -- for quite some time, now):
I notice in that list a file called "etc/discover-modprobe.conf". Perhaps that's what you're looking for? Its manpage (also findable on the Internet) describes it as "the configuration file for discover-modprobe, which is responsible for retrieving and loading kernel modules".
> *discover -t modem* or *discover -t printer* don't detect my modem or
> printer BUT hwinfo detect these H/W perfectly. What may be the problem?

Ah, now _that_ is a completely different question. J. Bakshi, you really would be well advised to be careful of overdefining your problem when you're asking for help. If your _real_ problem is that you haven't yet figured out how to configure Debian sarge to address your modem and printer, please _say so_ and give us relevant details.
If, by contrast, you claim the problem is that you need to find discover's configuration file, _that_ is all you'll get help with, even if your problem has nothing to do with discover in the first place.
I have a suggestion: why don't you back up and start over? Tell us about your modem and printer's nature and configuration, what relevant software is installed on your system, what is and is not happening, what you've tried, and exactly what happened when you tried it.
_Note_:
You should not attempt to recall those things from memory, but rather attempt them again while taking contemporaneous notes. Post information from those notes, instead of from vague recollections and reconstructions of events in your memory. Thank you.
[[jbakshi]] - My installed sarge is still the testing version; that's why I explicitly mentioned *sarge (testing)*. I have no broadband to upgrade my system, and dial-up is too poor.
You have automatically answered a question that was asked later :-) Yes, discover-modprobe automatically loads kernel modules for the H/W that discover detects.
*cat /etc/discover-modprobe.conf* shows as below
# $Progeny: discover-modprobe.conf 4668 2004-11-30 04:02:26Z licquia $

# Load modules for the following device types.  Specify "all"
# to detect all device types.
types="all"

# Don't ever load the foo, bar, or baz modules.
#skip="foo bar baz"

# Lines below this point have been automatically added by
# discover-modprobe(8) to disable the loading of modules that have
# previously crashed the machine:

But I am still looking for the configuration file which defines how the *discover* command detects buses.
I thought this issue was also related to the same discover configuration file, but now I have the answer; please see below.
I have an Epson C20SX parallel inkjet printer.
Running *hwinfo --printer* detects my printer correctly; below is the output of the command:

14: Parallel 00.0: 10900 Printer
  [Created at parallel.153]
  Unique ID: ffnC.GEy3qUgdsRD
  Parent ID: YMnp.ecK7NLYWZ5D
  Hardware Class: printer
  Model: "EPSON Stylus C20"
  Vendor: "EPSON"
  Device: "Stylus C20"
  Device File: /dev/lp0
  Config Status: cfg=new, avail=yes, need=no, active=unknown
  Attached to: #9 (Parallel controller)

But *discover -t printer* returns nothing. I have found the answer: the man page for discover (version 2.0.6) says that discover scans only ati, pci, pcmcia, scsi, and usb. That's why it can't detect my parallel printer, PS/2 mouse, or serial modem.
[[[Rick]]] - [Quoting J. Bakshi (j.bakshi at icmail.net):]
> My installed sarge is still in testing mode
No. It's really not.
[Snip bit where I suggest that you start over with a fresh approach, and where you don't do that. Oh well. If you decide you want to be helped, please consider following my advice, instead of ignoring it.]
[[[[jbakshi]]]] -
Please try to understand:
1) I installed sarge from a CD pack. When I did the installation, *sarge was in testing THEN*.
2) Now sarge has become stable.
3) As I don't have broadband, and dial-up is also too poor, I couldn't upgrade my *installed* sarge; hence *the installed sarge on my box is still a testing release*.
4) I have no intention of ignoring any advice, as I need it seriously.
[[[[[Rick]]]]] - [Quoting J. Bakshi (j.bakshi at icmail.net):]
> 2) Now sarge has become stable
I know all this.
[no broadband]
> hence *that installed sarge in my box is still a testing release*
I knew what you were saying, but it was somewhat misleading to refer to sarge as the testing branch. Irrespective of the state of your system's software maintenance, it's 2006, and sarge is now the stable branch. Shall we move on?
> 4) I have no intention to ignore any advice as I need those seriously.
OK, so, when you wish to do that, you'll be starting fresh by describing relevant aspects of your hardware and system, and saying what actual problem you're trying to solve, what you tried, what the system did when you tried various things, etc. You may recall that _that_ was my advice.
I said you weren't electing to follow my advice because you weren't doing that. So far, you still aren't. But that is still my suggestion. To be more explicit: You started out by defining your problem as "discover" configuration -- but there was actually no reason to believe that was the case. Therefore, you were asking the wrong question. Start over, please.
[Thomas] - I'm jumping in here to say that "discover" is a real PITA. It's the most annoying package ever, and should die after installation. It conflicts no end with anything held in /etc/modules, and actively disrupts hotplug for things like USB devices.
You don't need it for anything post-installation. If you somehow think you do, you're wrong.
J. Bakshi (j.bakshi at icmail.net)
Wed Mar 29 04:19:05 PST 2006
Dear list,
Please help me to resolve a doubt. In the Linux world, *discover*, *hwinfo*, and *kudzu* are the H/W detection technologies. Do they directly probe the *raw* piece of H/W to collect information about it? What about lspci, lsusb, and lshw? Do they also provide H/W information by probing the *raw* H/W, or do they just retrieve information which is already provided by one of the H/W detection technologies mentioned above? Kindly CC me.
[Thomas] - Not as such. They tend to query the kernel, which itself has "probed" the devices beforehand.
> lsusb, lshw ?? Do they also provide H/W information by probing the
> *raw* piece of H/W or just retrieve the information which are already
> provided by some H/W detection technology as mentioned above ??

What difference does it make to you if it has or has not? I'll reiterate once more that you DO NOT need discover at all. In fact, it's considered harmful [1]
[1] Just by me, for what that's worth.
[[jbakshi]] - Nothing as such; it's just my interest in knowing those technologies, especially from those who have already used them and are quite aware of the advantages and disadvantages. Thanks for your response.
[[jbakshi]] - Just found out from the man page: *lspci -H1* and *lspci -H2* do direct H/W access, and *lspci -M* enables bus-mapping mode.
[[Kapil]] - In case that makes you feel happier, Thomas: I saw a mail (which I can't find right now) indicating that the debian-installer team is in agreement with you. However, the replacement "udev/hotplug/coldplug" may also make you unhappy :)
[[[Thomas]]] - Ugh. :) Udev is a fad that perpetuates the dick-waving wars of "Oooh, look at me, I only have three entries in /dev. You have 50000, so mine's much better". It has absolutely *no* real advantage over a static /dev tree whatsoever.
[[[[Pedro]]]] - What about the symlink magic explained here (https://reactivated.net/writing_udev_rules.html)?
To hotplug different scanners, digital cameras, or other USB devices in no particular order, and have them always appear at the same known point inside the filesystem, looks like a good feature. How would that be done without udev?
[[[[[Thomas]]]]] - You can use a shell-script to do it. I did something similar a few years ago.
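Something along these lines, perhaps; this is only a sketch of the shell-script approach, and the serial number, device node, and symlink name are all invented. Classic hotplug invokes agent scripts with ACTION and DEVPATH set in the environment:

```
#!/bin/bash
# Hotplug-style agent sketch: give one particular USB disk a fixed name.
case "$ACTION" in
    add)
        # Identify the device by its serial number in sysfs
        # (the path and serial here are illustrative only).
        serial=$(cat "/sys$DEVPATH/serial" 2>/dev/null)
        if [ "$serial" = "0123456789AB" ]; then
            ln -sf /dev/sda1 /dev/my_flash_drive
        fi
        ;;
    remove)
        rm -f /dev/my_flash_drive
        ;;
esac
```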
[[[[Ben]]]] - Are you sure, Thomas? Not that I'm a huge fan of Udev - it smacks too much of magic and unnecessary complexity - but I like the idea of devices being created as they become necessary. Expecting some Linux newbie to know enough to create '/dev/sr*' or '/dev/audio' when the relevant piece of software throws a cryptic message is unreasonable. Having them created on the fly could eliminate a subset of problems.
[[[[[Thomas]]]]] - Well, pretty much. At least you have some kind of assurance that the device "exists", whether it's actually attached to some form of physical device or not. Plus, once the permissions are set on that device and you have the various group ownerships sorted out... it's trivial.
I realise Udev does all of this as well, but you have to have various node mappings and layered complexity not observed with a static /dev tree. Then there's all sorts of shenanigans with things like initrds and the like for some kernels. It's just not worth the effort to effectively allow an "ls /dev" listing to show up without the use of a pager. ;)
[[[[Ben]]]] - Well, actually, now that I'm thinking about it - so could a '/dev' that's pre-stuffed with 50,000 entries. Hmm, I wonder why that's not the default - since, theoretically, these things take no space other than their own inode entry.
The following originally came under the thread "Please solve a doubt about linux H/W detection". Given the drift, I've renamed this subthread for benefit of findability.
OBTW - I would like to get my hands around the throat of the person who decided that '/dev/dsp' and '/dev/mixer' are unnecessary in OS/X. Really, it would only be for a few moments... is that too much to ask?
(Installing X11+xmms on the latest OS/X is a HUGE PAIN IN THE ASS, for anyone who's interested.)
[Thomas] - That's not the only thing that's unneccesary in OS/X from what I hear. :)
[[Ben]]- [rolling eyes] Don't get me started. "Just like real BSD"... yeah, *RIGHT*.
At least they have Perl, Python, ssh, tar, and find (well, some freakin' weird version of 'find', but at least it's there) pre-installed. That part keeps me sane when dealing with Macs. Last night, a friend of mine who uses one asked my opinion of various $EXPENSIVE_BACKUP_PROGRAMS for his Mac. My response was "why would you want to actually spend *money* on this? You have 'tar', 'cpio', and 'pax' already installed." Made him happy as a lark; 'tar cvzf' was right up his alley even though he'd never so much as seen an xterm before.
[[[Kapil]]] - There is also "fink" which (in conjunction with "xterm") is a lifesaver when it comes to turning a Mac into something almost normal. (For those not in the know---"fink" is "apt" for OS/X which actually uses a .deb packaging format as well).
This alone makes OS/X far, far better than its predecessors. Here's a sample of an earlier conversation with a typical Mac user whose computer I wished to use for a brief while.
[[[[Ben]]]] - Well... more or less. As in "more broken or less useful". You get a subset of the APT utilities for the command line, as well as Fink Commander which is a GUI; "apt-get", or at least the repository behind it, is badly broken since at least a quarter (IME) of the "available" packages have unsatisfiable dependencies. It also throws persistent errors (something like "/usr/foo/bar/zotz/cc1plus not executable") which can't be fixed by installing any of the available software. As well, "apt-cache search" provides a meager subset of the listings shown at Fink.org - e.g., there are several dozen OpenOffice packages shown at the latter, whereas the former has never heard of OO.
Fink Commander is even more "wonderful": when you hit the 'Upgrade Fink Commander' button, it breaks itself _and_ the entire APT kit due to a stupid interface error (the CLI app that it's wrapping blocks on a question - you can see it in the dialog area - and there's no way to feed its STDIN; all you can do is kill the GUI, which interrupts the process half-way through installation.) To fix it, you have to remove the '/sw' directory - which contains *all the software you ever installed with Fink* - and start from scratch. That's the recommended procedure. Cute, eh?
Building a compiler toolchain on a Mac with Fink, BTW, is impossible. They've got GCC - both 3.3 and 4.0 - but literally almost *nothing* else that's required.
In short, pretty bloody awful. It kinda smells like *nix, even sorta resembles it... but falls way short when there's any real work to be done.
[[[Kapil]]]
Me: I'd just like a terminal window.
Mac user: What's that?
Me: A window where you type commands.
Mac user: What are commands?
Me: Things you tell the computer to do.
Mac user: You can actually tell the computer to do things? The Mac doesn't work that way. You point and click and it guesses what you want to do and just goes ahead and does it.

At this point each of us walks off in disgust+surprise at having met an alien right here on earth.
[[[[Ben]]]] - Don't even bother asking; they won't have any idea. Just hit 'Apple-F' in Finder, type 'Terminal', click on the icon, and do your work as they sit there in utter bafflement. :)
[[[[[Breen]]]]] - Don't forget to drag Terminal.app to the Dock so that it's there when you need to fix something the next time...
Bob van der Poel (bvdp at uniserve.com)
Sun Feb 19 12:51:53 PST 2006
Just wondering if there is a simple (complex? any?) solution for viewing certain websites which rely way too much on absolute-positioning code. Most sites on www.homestead.com are quite unviewable on my system with Firefox; they render somewhat better with Opera.
I think the problem is one of expected font and font size. I use a fairly large min. size ... but don't know if this is the problem or not.
Here's an example of something completely impossible to read using Firefox: https://www.yahkkingsgate.homestead.com/. Again, it fares better (but is still pretty ugly) in Opera. I don't have a Windows box handy, but I assume that it renders okay using IE.
[Lew] - For what it's worth, I tried several sites from the "site sampler" on www.homestead.com, and didn't find any readability (or operability) issues with Firefox 1.5.0.1 running in Slackware Linux 10.1
From my Firefox Preferences window, my settings are:

Default Font -> Bitstream Vera Serif (size 12)
Proportional -> Serif (size 12)
Serif -> Bitstream Vera Serif
Sans Serif -> Bitstream Vera Sans
Monospace -> Bitstream Vera Sans Mono (size 12)
Display Resolution -> 96 dpi
Minimum Font size -> 12
"Allow pages to choose their own fonts" checked
Default Character Encoding -> Western (ISO-8859-1)

https://www.yahkkingsgate.homestead.com/ renders well for me, with the same settings as above.
[[Bob]] - Thanks for taking the time to have a look. I'm wondering if I'm missing some fonts for Firefox? I'm using the same Firefox version. My font settings were different, but changing them to what you have doesn't seem to make much difference. Certainly, my minimum is much larger... 16, as opposed to your 12. I'm using a 19" monitor... and 12 is completely unreadable.
In my /etc/X11/fs/config I have the following:
#
# Default font server configuration file for Mandrake Linux workstation
#

# allow a max of 10 clients to connect to this font server
client-limit = 10

# when a font server reaches its limit, start up a new one
clone-self = on

# alternate font servers for clients to use
#alternate-servers = foo:7101,bar:7102

# where to look for fonts
#
catalogue = /usr/X11R6/lib/X11/fonts/misc:unscaled,
    /usr/X11R6/lib/X11/fonts/drakfont,
    /usr/X11R6/lib/X11/fonts/drakfont/Type1,
    /usr/X11R6/lib/X11/fonts/drakfont/ttf,
    /usr/X11R6/lib/X11/fonts/100dpi:unscaled,
    /usr/X11R6/lib/X11/fonts/75dpi:unscaled,
    /usr/X11R6/lib/X11/fonts/Type1,
    /usr/X11R6/lib/X11/fonts/TTF,
    /usr/X11R6/lib/X11/fonts/Speedo,
    /usr/share/fonts/default/Type1,
    /usr/share/fonts/default/Type1/adobestd35,
    /usr/share/fonts/ttf/decoratives,
    /usr/share/fonts/ttf/dejavu,
    /usr/share/fonts/ttf/western

# in 12 points, decipoints
default-point-size = 120

# 100 x 100 and 75 x 75
default-resolutions = 100,100,75,75

# use lazy loading on 16 bit (usually Asian) fonts
deferglyphs = 16

# how to log errors
use-syslog = on

# don't listen to TCP ports by default for security reasons
no-listen = tcp

Does anything here leap out?
[Francis] - Is that because text 1/6th of an inch high is too small for you to read, or because 12pt-text on your display is not 1/6th of an inch high?
If the latter, you may see some benefit from telling your X how many dots-per-inch your screen actually uses. For example, a 19" diagonal on a 4:3 aspect ratio monitor (a TFT screen, for example) means the horizontal size is about 15.2 inches; if you display at 1600x1200 pixels, that's about 105 pixels per inch.
If X then believes you're displaying at 100 pixels per inch, everything measured in inches will be about 5% smaller than it should be. For text at the edge of legibility, that's enough to push it over.
[[Ben]] - Not only that, it'll usually result in a bad case of the jaggies (aliasing) in text when X tries to interpolate that fractional size difference. This definitely _will_ make marginally-legible text unreadable.
[[Bob]] - Well, probably a bit of both. I can not read 1/6" lettering anymore. Used to ... but that was when I was younger and could do a lot of things I can't now. Of course, I could break down and get glasses to see the screen, but I'm resisting. I already need to put on cheaters when I read a book, etc. (mind you, I seem to manage not too badly in bright light). Ahh, the joys of aging. But as someone told me: aging beats the alternatives.
[Francis] - "xdpyinfo" will tell you what X thinks the current dimensions and resolution are. If they don't match what your ruler tells you, you should consider reconfiguring.
[[Ben]] - [Nod] If there's one thing I've learned from playing around with all the X resolution configuration options, it's exactly that. As a testimonial from the positive side of things, once I got X resolution properly synched up with the actual screen size, I was able to switch to 1280x1024 without any problems. Previously, that mode produced unreadable text in many applications.
(Yes, I'm aware that I could tweak ~/.gtkrc, etc. I found it ridiculous that I would have to - and, once I got X configured correctly, I didn't have to. I like for the world to make sense, at least sometimes. :)
[[[Bob]]] - And I learn that the interface between user programs and X is hopelessly broken. Now, we are not going to fix that little issue :)
I have discovered one thing in Firefox: in the menu bar there is a <view><page style> which (I think) lets you disable the css stuff (not clear, but I think that is what it does). At least, doing that I can get rid of all overlaying text.
But, the interesting thing is that no matter what I do, the page I was trying to view presents itself with overlaying text. If I turn off the min. text size completely I can get text too small for any mortal to read, but it still overlays. I've also installed additional fonts (namely some msfonts collection), but that doesn't seem to make any difference.
Oh well, perhaps we should just let ugly find its own home :)
[Francis] - If the problem is broken absolute positioning code, the quick answer is to use a browser which ignores absolute positioning code.
I don't see anything obvious in the firefox about:config page to disable that bit of their rendering or layout engine, though. If you accept that (a) you will see their content in a layout they did not expect or desire (which is already the case); and (b) you *will* see their content (which is not already the case), use a browser with CSS support which tries to be perfect at the "ignore it completely" end of the scale, rather than any which try to get to the "implement it all correctly" end. I'll mention w3m and dillo, but I'm sure you'll be able to find one that suits you. (These other browsers do have their own quirks and configuration requirements, so depending on how your distribution set things up, you may have some testing and learning to do before things work the way you want them to. And it's also possible that things *won't* work the way you want them to, no matter what.)
w3m -- the content seems all there, but the links are a bit tough to follow (because of the design choices they made: <a href=link><img alt="" src=img></a> doesn't leave much clickable space for the link).
dillo -- the content seems all there, and the images seem all there. Just not both where the designer might have hoped. Keep scrolling down...
For contrast: my firefox, in my "normal" config -- some text/text overlap, but nothing major obviously unreadable; with 16pt minimum text -- lots of unreadable bits because of text/text and text/image overlapping.
[Ben] - There's a large number of pages on the Net that are badly broken WRT layout; however, over time, I've discovered (much to my surprise) that *other*, less- (or even un-) broken pages exist as well. Based on this amazing fact, I've developed a strategy: as soon as I encounter the former, I do a bit of clicking, or even typing, and I'm soon looking at the latter. :)
Seriously - truly bad layout can make a page nearly unusable, which is much like not having the info in the first place. Given that much of the data on the Net is available in multiple places, I consider finding an alternate site - rather than trying to curse my way through

	{ font: 3px bold italic "Bloody Unreadable"; text-background: bright-white; color: white; }

that's half-hidden behind a blinking yellow "THIS SITE UNDER CONSTRUCTION!!!" GIF - to be a perfectly valid strategy, unless the info on that specific site is truly unique. For me, it saves lots of wear and tear - and time.
The above may seem obvious as hell to some, but lots and lots of people tend to get hyperfocused on Must Fix This Problem instead of looking at alternatives - I've done it myself, lots of times, and still catch myself doing it on occasion. Larger context is something that usually bears consideration.
My own default browsing technique consists of firing off my "google" script, which invokes "w3m" and points it at Google.com with a properly-constructed query. I page through the hits that Google provides, search until I find what I want, and - if I really need a graphical browser to examine the content, which is only true in about 5% of the cases - I hit '2-shift-m', which fires up Mozilla and feeds it the current URL (since I've set "Second External Browser" in 'w3m' to 'mozilla'.) I very, *very* rarely have to deal with really bad layout since text mode prevents many problems by its very nature.
(If the page really *is* unique, _and_ is badly broken, _and_ you really need to view it in a graphical browser, _and_ you have to keep going back there, Mozilla and Firefox have a plugin that allows you to "edit" the HTML of a given site and remembers your edits locally. Whenever you go to that page, your edits are auto-applied. Unfortunately, I don't recall the name of the plugin... Jimmy, I think I learned about it from you. Do you recall the name?)
[[Jimmy]] - Mozilla and Firefox let you override the CSS for any page you want (userContent.css, IIRC); there's also Greasemonkey for Firefox, which lets you run your own javascript on any page.
Recently, I came across something that allows you to edit a web page and save your edits as a Greasemonkey script: I'll pass on the link as soon as I can dig it up.
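A concrete (if heavy-handed) illustration of the userContent.css override Jimmy mentions, assuming a Firefox 1.x-era profile layout; the profile directory name varies per installation, and forcing every box back into normal flow will mangle well-behaved pages too, so treat it as an experiment rather than a fix. --Ed.

	profile=~/.mozilla/firefox/xxxxxxxx.default   # substitute your real profile dir
	mkdir -p "$profile/chrome"
	cat > "$profile/chrome/userContent.css" <<'EOF'
	/* neutralize absolute/fixed positioning on all pages */
	* { position: static !important; }
	EOF
	# restart Firefox for the change to take effect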
[[[Predrag]]] - Platypus (https://platypus.mozdev.org)? I also like Web Developer (https://www.chrispederick.com/work/webdeveloper/).
[[[[Jimmy]]]] - That's it!
[Ben] -
> > I think the problem is one of expected font and font size. I use a
> > fairly large min. size ... but don't know if this is the problem or not.

It would often be a problem, yes. Many sites are laid out so that there's no room for the expansion necessary to accommodate larger text.
https://www.yahkkingsgate.homestead.com/ looks totally fine in 'w3m'. Oh, and if I want to view it in a larger font, I simply Ctrl-right-click my xterm and set the fontsize to 'Large', or even 'Huge'.

This is why I prefer to use the 'Tab' key in 'w3m' rather than clicking the links: it never misses, no matter how small the link. :)
[[[Bob]]] - Yes, reformatting and rewriting broken code is always an option :)
But, really, the "problem" is that a guy using a cheap (relatively) out-of-the-box windows system can view these sites just fine (I'm assuming here since I have not tried it). And, one fellow here says it views fine with his Firefox. So, I have to assume that something on my setup is buggering up things.
I get lots of overlays when viewing the site. Nope, not the end of the world by any means. But, this is not the only one and I guess I was in an easily-annoyed mood yesterday :)
Since starting this thread I have added extra TT fonts, played with my resolutions and the font settings in firefox. Gotten some "interesting" results, but none which fix the problem.
Oh well, life does go on without reading badly formatted html!
[[[Karl-Heinz]]] - opera also has that option of using user-css -- which means switching off the regular css and having basically no styling at first. Then you can switch on some default CSS sheets like high-contrast b/w or w/b,... This helps on many sites to make them quite readable and usable -- if the site works without css, that is. If they broke the layout so badly that the page (links,...) won't work unstyled, I usually refuse to stay at that site.
And yes: I also have a rather large min. font size, and that *does* lead to trouble when boxes stay fixed but the text gets (much) larger than intended by the designer. I'm a strong advocate of fluid box layout which adapts to font changes ;-)
[[[[Bob]]]] - I'm quite convinced that being able to specify absolute positioning or absolute font size in html is a dumb thing. No idea why people think it is necessary. Interesting that at one time html was supposed to be browser agnostic.
Bob van der Poel (bvdp at xplornet.com)
Mon Mar 13 09:39:44 PST 2006
The fat lady will not sing. Actually, no one on the opera stage will sing....
I just upgraded my internet connection from a very slow dialup connection to a much faster Ka satellite. Still slow by some measures, but when it is all you can get you get happy real fast.
Everything seems to work just dandy. Everything except the Opera browser. No matter what site I try to access I get messages like:
[[[[[[[[
You tried to access the address https://www.shopping.com/?linkin_id=3005960, which is currently unavailable. Please make sure that the Web address (URL) is correctly spelled and punctuated, then try reloading the page.
Make sure your Internet connection is active and check whether other applications that rely on the same connection are working.
Check that the setup of any Internet security software is correct and does not interfere with ordinary Web browsing.
If you are behind a firewall on a Local Area Network and think this may be causing problems, talk to your systems administrator.
Try pressing the F12 key on your keyboard and disabling proxy servers, unless you know that you are required to use a proxy to connect to the Internet. Reload the page.
]]]]]]]]
Heck, I can't even get the Opera help pages.
I have tried disabling all my firewall settings, but that has no effect. Besides, that is silly since firefox and konqueror both work just fine. I just tried lynx and it works fine as well.
SOoooooo ... any idea why opera is being such a pain? BTW, I had version 8 installed and just tried a 9.0 beta. I've dumped my .opera file.
I suspect that there is a proxy or firewall issue, but am no expert in this. Suggestions?
[Neil] - Have you followed the advice above? Does opera have a proxy server setup (if you click F12 is there a tick by "enable proxy servers")?
[[BobV]] - Thanks Neil. Yes, I have tried that. Just checked again, and there is NO tick beside that option. Just to test, I clicked on the option, but the tick does not appear. Which is probably just as well since I don't have a proxy server set up.
[[[Neil]]] - The error messages you were getting sound like those I got when I had Opera set up to use privoxy and it wasn't running. It seems that network access is not an issue, as you say konqueror and firefox are both working OK, but it sounds as though opera is having a problem with network access.
As it's a beta, you may be best asking on one of the opera newsgroups hosted at news.opera.com
Sorry I couldn't help more.
[Ben] - A couple of things to try:
1) Fire up a web server on your machine and try surfing to https://localhost - even if you somehow, somewhere magically created a firewall without knowing about it, you should still be able to get through to your own host.
[[BobV]] - Nope. Not on any of my browsers. I assume that is since I don't have apache (or other) server running. But that should not affect web access?
[[[Ben]]] - Bob, you've got to read _every_ part of what's been written. If you don't fire up a web server, then you can't expect to surf to 'localhost' - there won't be a web server there. It wasn't a question of affecting web access but of testing your browser's ability to connect to a server *known* not to be behind a firewall.
[[[[BobV]]]] - Oops. Yes, "web server". Okay ... I don't need one on my system, and I've no time to go through and install a full-blown server. Is there something simple I can install just to test this?
[[[[[Karl-Heinz]]]]] - what do you use for printing? If it's a cups server, try: https://localhost:631/
[Ben] -
2) Look at a local file. I.e., try a URI like file:///etc/hosts - that will tell you if your browser is capable of rendering anything at all.
[[BobV]] - Yes, that works with everything, including opera. I can view pages stored on my system with opera.
[[[Ben]]] - OK, so rendering works - that's a plus.
[Ben] -
> I have tried disabling all my firewall settings, but that has no effect.
> Besides, that is silly since firefox and konqueror both work just fine.
> I just tried lynx and it works fine as well.

Well, it's obviously in Opera itself. However, whether it's something hosed in the settings - shouldn't be, but you never know - or the app itself is dead, you can't really tell without experimenting.
> SOoooooo ... any idea why opera is being such a pain? BTW, I had version
> 8 installed and just tried a 9.0 beta. I've dumped my .opera file.

Hmmm, *beta*. Dangerous waters you swim in, Bob... :)
[[BobV]] - Well, remember that I did have a non-beta installed. That didn't work either. So, that's when I tried the newer version, with the same problems.
[Ben] - If you really want to see all the files that Opera sources as it starts up - which may be a bit more than just that .opera file - you should run 'strace' on it and look for "open" calls, like so:
	strace -o opera.strace /usr/bin/opera

or wherever your executable is. Then, open "opera.strace" and search for lines starting with "open":

	grep '^open' opera.strace | less

You may find some interesting clues in there.
Conversely, if you just want the thing to work (as opposed to fixing the problem), then just revert to the last working version you used. :)
[[BobV]] - okay. Did this with -eopen and -econnect. But, nothing obvious shows up.
[[[Ben]]] - What do you mean by "nothing obvious"? I'm certain that there are _some_ files that Opera is opening as it starts up.
[[[[BobV]]]] - I meant "nothing obvious TO ME". Maybe you can see something in the attached log file.
[[BobV]] - I think I'll just be content to use firefox :)
[[[Ben]]] - Well, as I've said - you have to know what your goal is. If it's just to use a browser, then there are plenty of options (including using Firefox.) If you're trying to figure out the problem with Opera, then you need to do some troubleshooting - which is what we've all been pointing you toward. :)
[[Thomas]] - This is why strace has an '-e' flag:
strace -eopen -o opera.strace opera
It's a pity Opera isn't dumping core, as that would have been much easier to remedy. :)
[[[Neil]]] - I would have thought connect would be more interesting than open.
[[[[Ben]]]] - I see both as useful, certainly - it would be interesting to see which 'connect' fails - but I think the 'open' calls should come first in the troubleshooting stack. If there's some sort of a configuration file that's left over from the previous install of Opera, it would be a good idea to "neutralize" it while testing. Also, seeing that a given 'connect' fails rarely tells you why it has done so, whereas an 'open' failure always reports the reason.
[[[Neil]]] - Also /usr/bin/opera is a script on my system, so the -f option to follow children would be essential to get anything useful from strace.
[[[[Ben]]]] - Ah - good point. I've never used Opera under Linux, so no way for me to know, but definitely a good thing to keep in mind. As well, it may make sense to just run the executable that the script calls; I recall a Mozilla startup script (the last time that I installed Mozilla from a tarball) that was seriously broken, but the browser itself ran just fine.
[[[[[[BobV]]]]]] - Trying Karl-Heinz's CUPS suggestion (https://localhost:631/) works fine in Firefox (and Lynx).
Now Opera. 1st time I tried it I got the same "can't access" error. Well, I'm pretty sure I did. I just tried it again so I could copy down the error, and it worked fine. Well, as far as accessing the main cups menu goes. I can go through the various cups options in opera, until I go to the download software item, which brings up:
You tried to access the address https://www.cups.org/, which is currently unavailable. Please make sure that the Web address (URL) is correctly spelled and punctuated, then try reloading the page.

Does this help track down the problem???
[[[[[[[Brian]]]]]]] - It sure feels like a proxy problem to me. Works fine on localhost stuff, but not out to the world? When Firefox works to the outside world? I will go to the wall on this one - let's see if I can get Opera running on this Gentoo AMD64 setup...
{cue Jeopardy music}
Okay, emerge search shows Opera 8.5 available via portage. Installing pulls in app-emulation/emul-linux-x86-qtlibs-2.2 (needed because of the AMD64-ness)
And here we go ... https://portal.opera.com/startup/
That just works (tm).
Now: In Firefox (which you say is working), select from the menu Edit -> Preferences. On the General tab, click the Connection Settings button. What does it say there? Of the 4 options (Direct connection, Auto-detect proxy, Manual proxy, Automatic proxy), which radio button is enabled? If it says that Direct Connection is the way, well then, I'm confused. If, OTOH, one of the others is selected, and you can translate those settings to Opera's Tools -> Preferences: Advanced tab, Network, click on the Proxy Servers dialog, then maybe this will have been some help.
[[[[[[[[BobV]]]]]]]] - Yup, "Direct Connection"
I'm confused as well :)
Checking (again) opera under Tools->Preferences->Advanced->Network, it says to click for setting proxies if not connected directly to the internet. Double checking that, these fields are all empty.
[[[[[[[Brian]]]]]]] - Odd that lynx works, if proxying is on. What does

	echo $http_proxy

yield, from the command line?
[[[[[[[[BobV]]]]]]]] - Nothing. Nor does "set | grep -i htt"
Just to summarize some things:
- this all worked just fine before I upgraded from dialup to my new connection. I had an 8.x version installed (which worked fine). When that didn't work with the new connection I dl'd the 9.0 beta ... but the results are identical.
- every other net aware program I've used so far (Thunderbird, Firefox, Bittorrent, gtk-gnutella, ntpd ...) all work just fine.
- the only thing I did when the new "modem" was installed was to use the Mandriva wizard to set up the ethernet. As soon as the connection was established, everything worked. The only difference is that I no longer need to dial in.
My only other GUESS is that it is something to do with my connection. The broadband connection is Ka satellite. I did check with the provider, but they had nothing on the help system for Opera problems.
Also, I've checked a number of web references and did find some stuff similar, but they appear to be of the windows + proxy type.
Getting ready to erase opera ...
[[[[[Ben]]]]] - [quoting BobV]
> Is there something simple I can install just to test this?

I happen to really like 'thttpd'. Dirt-simple, no configuration necessary, and nicely secure. E.g.: create a directory with an 'index.html' file in it (grab any HTML file and rename it), then 'cd' to that directory and type 'su -c thttpd'. Enter your root password, and there you are; you should now be able to see that page by surfing to https://localhost/ .
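Ben's thttpd recipe, spelled out as commands; the directory name is arbitrary, and thttpd serves the directory it is started in, on port 80, so the browser under test gets a server that is guaranteed to be reachable. --Ed.

	mkdir /tmp/testsite && cd /tmp/testsite
	echo '<html><body><h1>It works.</h1></body></html>' > index.html
	su -c thttpd       # needs root to bind port 80
	# now try surfing to localhost/ in Opera and in a known-good browser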
[[[[[[Thomas]]]]]] - I happen to prefer its so-called predecessor: lighttpd, since this addresses several glaring limitations and bugs in thttpd.
[[[[[Ben]]]]] - [quoting BobV]
> >>okay. Did this with -eopen and -econnect. But, nothing obvious shows up.
> >
> >What do you mean by "nothing obvious"? I'm certain that there are _some_
> >files that Opera is opening as it starts up.
>
> I meant "nothing obvious TO ME". Maybe you can see something in the
> attached log file.
Let's see... here's some interesting stuff:

> open("/home/bob/.opera/opera6.ini", O_RDONLY|O_LARGEFILE) = 4
> open("/home/bob/.opera/lock", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 4
> open("/home/bob/.opera/lock", O_RDWR|O_CREAT|O_LARGEFILE, 0666) = 4
[ ... ]
> open("/home/bob/.opera/styles", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = 5
> open("/home/bob/.opera/styles/debugwithoutline.css", O_RDONLY|O_LARGEFILE) = 6
> open("/home/bob/.opera/styles/accessibility.css", O_RDONLY|O_LARGEFILE) = 6
> open("/home/bob/.opera/styles/contrastbw.css", O_RDONLY|O_LARGEFILE) = 6
> open("/home/bob/.opera/styles/contrastwb.css", O_RDONLY|O_LARGEFILE) = 6
> open("/home/bob/.opera/styles/hidecertainsizes.css", O_RDONLY|O_LARGEFILE) = 6
> open("/home/bob/.opera/styles/disabletables.css", O_RDONLY|O_LARGEFILE) = 6
> open("/home/bob/.opera/styles/hidenonlinkimages.css", O_RDONLY|O_LARGEFILE) = 6
> open("/home/bob/.opera/styles/imageandlinkonly.css", O_RDONLY|O_LARGEFILE) = 6
> open("/home/bob/.opera/styles/nostalgia.css", O_RDONLY|O_LARGEFILE) = 6
> open("/home/bob/.opera/styles/showstructure.css", O_RDONLY|O_LARGEFILE) = 6
> open("/home/bob/.opera/styles/textonly.css", O_RDONLY|O_LARGEFILE) = 6
> open("/usr/share/opera//ini/pluginpath.ini", O_RDONLY|O_LARGEFILE) = 5
> open("/home/bob/.opera/pluginpath.ini", O_RDONLY|O_LARGEFILE) = 5Well, you did say that you blew all that away - so presumably, all of the above got created fresh.
> open("/home/bob/.kde/share/config/kcmnspluginrc", O_RDONLY|O_LARGEFILE) = 5
> open("/usr/lib/jre-1.4.2_09/lib/i386/libORBitCosNaming-2.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)Hmmm... could be time to upgrade that JRE version. Mozilla gave me lots of grief until I did, anyway.
You're right; doesn't seem to be anything obvious. :)
[[[[[[BobV]]]]]] - Cool. Got thttpd running. Serves out the thttpd "got it working" message in both firefox and opera. Firefox will take me to the program's web site; opera just hangs (seemingly forever).
Doing a bit more playing, and I've another hint here: when you launch opera and try a web address, it hangs. Apparently forever (at least more than 5 minutes). Try the same address, same behaviour. Try another address and you get the "unavailable" error message. From that point on, accessing anything external brings the "unavailable" message.
I'm really starting to think it is a network problem due to my satellite connection. But, that really does seem silly. Doesn't it?
[[[[[[[Ben]]]]]]] - Well, it can't be the network, since the other browsers use the network without any problems. It sounds as though the code that does the socket-related functions in Opera is broken (which makes Neil's suggestion of checking the 'connect' calls a better guess than mine. :)
Just out of curiosity, try this:
	strace -f -o opera.strace -e trace=connect `which opera`

The resulting file should be pretty short - and might show exactly where Opera is falling down. If it does, that would be a bug report that Opera developers might be very happy to see.
[[[[[[[[BobV]]]]]]]] - Okay. Here it is. Hope it means more to you than to me :)
[[[[[[[[[Ben]]]]]]]]] -
> 18802 connect(3, {sa_family=AF_FILE, path="/tmp/.X11-unix/X0"}, 19) = 0
> 18810 +++ killed by SIGKILL +++
*That* looks like the dude. When Opera makes that call to the X socket - which, BTW, you should check to make sure that it exists and is usable:
	ben at Fenrir:~$ ls -l /tmp/.X11-unix/X0
	srwxrwxrwx 1 root root 0 2006-03-17 11:29 /tmp/.X11-unix/X0
	ben at Fenrir:~$ netstat|grep X0
	unix 3 [ ] STREAM CONNECTED 8734 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 6769 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 6551 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 5811 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 5747 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 5214 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 5212 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 5208 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 5198 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 5193 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 5191 /tmp/.X11-unix/X0
	unix 3 [ ] STREAM CONNECTED 5177 /tmp/.X11-unix/X0

- that subprocess in Opera dies (perhaps it's killed by some kernel process for trying to corrupt the stack? That's just a SWAG on my part, and you'd need to do a full 'strace' and look at the calls that would have been between 18802 and 18810.) I would venture to say that it's internal to Opera rather than the library that does the 'connect' call; there's lots of other stuff that would break if 'connect' itself was at all buggy.
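For anyone wanting to chase this further, the "full strace" Ben mentions could look like this; the PID value is taken from Bob's log and will differ on any other run. --Ed.

	strace -f -o opera.full.strace `which opera`
	# with -f and -o, each line is prefixed by the PID that made the call,
	# so the dying subprocess's last actions can be pulled out directly:
	grep '^18810' opera.full.strace | tail -50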
[[[[[[[[BobV]]]]]]]] - Thanks again for taking time to work though this little mystery.
[[[[[[[[[Ben]]]]]]]]] - You're welcome, Bob. [grin] I have lots of fun with this kind of detective work; helps me improve my own troubleshooting skills for when I need them.
[[[[[[[[[[BobV]]]]]]]]]] - Just to close this thread out :) I have reinstalled my linux distro (Mandriva 2006), downloaded Opera 8.5 and it connects/runs just fine. Honestly, I have no idea what the original problem was ... but I suspect that something broke when I changed from a dialup/ppp system to using the new connection (satellite with a LAN modem). I really do wish I knew what the problem really was ... but that'll have to remain one of the many mysteries of life!
Talkback: Discuss this article with The Answer Gang
Kat likes to tell people she's one of the youngest people to have learned to program using punchcards on a mainframe (back in '83); but the truth is that since then, despite many hours in front of various computer screens, she's a computer user rather than a computer programmer.
When away from the keyboard, her hands have been found full of knitting needles, various pens, henna, red-hot welding tools, upholsterer's shears, and a pneumatic scaler.
PRASHANT BADALA (prashant_badala at yahoo.co.in)
Mon Mar 13 05:26:32 PST 2006
Sir,
My name is Prashant Badala and I'm a final year computer engineering student.
As I'm doing my project I've got stuck as I'm not able to understand two concepts
namely,
1> After I set up a dial-up internet connection (in LINUX) by writing a PPP script how does the web browser get to know that it has to follow the corresponding link. Could you explain to me the entire functioning of the web browser and the corressponding system files involved.
2> If apart from the above mentioned PPP link my system is also connected to the ethernet which in turn is connected to the internet then which link would the web browser use to get connected to internet - the PPP or the ethernet ?
I'll be really grateful to you if you could help me out with the same. Looking forward to your quick reply,
Thanking you
Prashant Badala
[Thomas] - On Mon, Mar 13, 2006 at 01:26:32PM +0000, PRASHANT BADALA wrote:
> Sir,
Let's get something straight. You DO NOT, EVER, cross-post like this to other mailing-lists. The influx of emails will just be stupid, and the flow of conversations may well be one-sided as people may not be able to post to the lists you have so easily forced us to reply to.
> My name is Prashant Badala and I'm a final year computer engineering
> student. As I'm doing my project I've got stuck as I'm not able to
> understand two concepts namely, 1> After I set up a dial-up internet
> connection (in LINUX) by writing a PPP script how does the web browser
> get to know that it has to follow the corresponding link. Could you

We don't do your homework. If you don't understand how DNS works (there's a clue for you), along with routing concepts (networking), then your lecturers either aren't doing a good enough job (unlikely) or rather you're not paying good enough attention in class.
> explain to me the entire functioning of the web browser and the
> corressponding system files involved.

No. But the two clues I gave you above will enable you to do all that you want.
Desperation is not a means for you to spam others in the hope that you will receive an answer, just because you have an assignment due in and you're panicking all of a sudden.
[Neil] - The entire functioning of the web browser is way too big a topic.
To address more narrowly the topic of how the browser knows to use the PPP link, the short answer is that it doesn't. It communicates via the sockets library, which is an interface to the kernel's TCP/IP stack. It just calls a library function, which requests a communication channel (socket) to a given IP address (I've skipped DNS lookups here). The kernel then has the job of routing the message. The kernel looks up which link to use in its routing table. At some point before this, the PPP script should have added an entry to the routing table, indicating that connections to the internet can be made via the PPP connection that has been established.
It's a bit more complicated than that, but I'm no expert on routing tables. I dare say googling for linux routing tables will come up with all the information you need and more.
I believe the command "ip route" is used to examine and manipulate routing tables in modern linux distributions.
> 2> If apart from the above mentioned PPP link my system is
> also connected to the ethernet which in turn is connected
> to the internet then which link would the web browser use
> to get connected to internet - the PPP or the ethernet ?

I guess that would depend on what's in the routing table.
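To make Neil's routing-table remark concrete, here is generic iproute2 usage; nothing below is specific to Prashant's machine. --Ed.

	# show the kernel routing table
	ip route show
	# a PPP startup script typically installs a default route like this:
	ip route add default dev ppp0
	# the older net-tools way of inspecting the same table:
	route -n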
[Ben] - [ long CC list snipped ]
Hi, Prashant -
First, I *strongly* suggest that you read a list etiquette FAQ before posting to any more lists; your cross-posting is a violation of basic etiquette, and is going to offend a lot of people. I'm sure that you did it because you don't know any better, but if you're going to participate in forums on the Internet, you definitely need to learn some basic protocol. Here's ours, with a number of links to good resources elsewhere:
https://linuxgazette.net/tag/ask-the-gang.html
Incidentally, I suggest that you revise your estimate of Thomas Adams' reply to you; he was actually being quite kind and helpful. You'll know what I mean if you get responses from any other lists, which generally tend to come down with great big heavy boots on people who violate protocol.
On Mon, Mar 13, 2006 at 01:26:32PM +0000, PRASHANT BADALA wrote:
> Sir,
> My name is Prashant Badala and I'm a final year computer
> engineering student. As I'm doing my project I've got stuck
> as I'm not able to understand two concepts namely,
> 1> After I set up a dial-up internet connection (in LINUX) by
> writing a PPP script how does the web browser get to
> know that it has to follow the corresponding link.

The simple answer is that it doesn't. A web browser, just like most other network applications, connects to a _socket_ - a virtual connection that consists of a port and an IP. This kind of modularization (i.e., applications don't need to "know" anything about the underlying network protocols, while the network doesn't "care" about applications) is one of the main features of networking that makes it so robust.
I'd suggest that you take a look at, study, and understand in depth the basic four-layer TCP/IP stack (search for it at, e.g., Google.) This will give you a better understanding of this type of separation.
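The separation Ben describes can be seen from a shell prompt: the sketch below speaks HTTP over a raw TCP socket using bash's built-in /dev/tcp redirection, and at no point does it know (or care) whether the packets leave via PPP or Ethernet - the routing table decides that. A minimal sketch, not a robust client. --Ed.

	# open fd 3 as a TCP connection, send a minimal request, read the reply;
	# picking the outgoing link is entirely the kernel's business
	exec 3<>/dev/tcp/www.google.com/80
	printf 'HEAD / HTTP/1.0\r\n\r\n' >&3
	cat <&3
	exec 3<&-    # close the descriptor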
> Could you explain to me the entire functioning of the
> web browser and the corressponding system files involved.

Well, no. If this is what you _really_ want - and I think that you've mistaken the critical path for understanding the solution to your problem - then you need to download the sources for your favorite browser and start reading. However, I suspect that once you understand the networking end, you'll see that it's unnecessary in the first place.
> 2> If apart from the above mentioned PPP link my system is
> also connected to the ethernet which in turn is connected
> to the internet then which link would the web browser use
> to get connected to internet - the PPP or the ethernet ?

It all depends on your routing table, which is (usually) configured at connection time. For example, if I have an Ethernet connection up, then make a PPP connection, and have PPP configured with "defaultroute" and "replacedefaultroute", then the PPP route will replace the previous one. I'm sure that you can imagine some of the possibilities for the above scenario based on the information I just gave you.
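Roughly what that looks like on a pppd command line; the device, speed, and chat script are placeholders, and "replacedefaultroute" is the option Ben names above (a patch carried by many distributions' pppd builds, though not all). --Ed.

	# bring up the link and make it the default route, replacing any
	# default route the Ethernet interface installed earlier
	pppd /dev/ttyS0 57600 defaultroute replacedefaultroute \
	    connect '/usr/sbin/chat -f /etc/ppp/chat-myisp' noauth
	# afterwards, the default route should point at ppp0:
	ip route show | grep '^default'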
> I'll be really grateful to you if you could help me out with
> the same. Looking forward to your quick reply,

Erm... this is another one of those things you want to avoid doing on a list. "Quick reply" implies a degree of arrogance on the part of the person requesting it - unless they're willing to pay for such "service".
[[[Thomas]]] - On Tue, Mar 14, 2006 at 10:07:36AM +0000, PRASHANT BADALA wrote:
> Sir,
You can call me Thomas. Please be aware that, as I am answering this via TAG at linuxgazette, you *also* reply to that address. I am not answering you directly, as that would be consultancy, and I don't (yet, thankfully) charge for that. If you fail to reply to tag at lists.linuxgazette.net, I shan't answer you.
> I recieved your mail and realised that I have to give you a complete
> description of what I know and what my problem is.

Excellent. :)

> Well to begin
> with, I've set up PPP dial-up internet connection (on my linux
> installed machine) by writing a script. As a part of this script
> writing I had to modify a few system files and create a few of them.
> To give you a precise idea I'm sending in a list of the files as an
> attachment.

That really wasn't necessary, but thank you for doing so.
> A stepwise description of what I understand is the procedure is as
> follows :
>
> 1> I dial into the ISP's(BSNL) host/router via modem.
> 2> I get authenticated for username and password as a physical
> connection is established between me and the ISP's host.
> 3> This link is then disconnected ensuring that the modem is not reset and
> then the PPP is been started at the ISP's server end.

The modem isn't "reset" as such -- it still provides the mechanism of modulating and demodulating the signals.
> 4> Then the PPP is being initiated at my end and a PPP link is been setup.
> 5> Via the (-defaultroute) option and the gateways of the ISP I can now
> get connected to the internet.
> 6> In this entire process primary and the secondary nameservers (specified in /etc/resolv.conf) of the ISP have got special significance as they help in
> host-name-address resolution.
>
> The very fact that I've been connected to the 'net can be confirmed
> by pinging any site say www.google.com or directly opening it in the
> web browser (say mozilla). Also the exchange of LCP packets can be
> observed.

Well, pinging a site and trying to view a site's webpage are two different processes in effect. Ping sends out a certain data packet -- an ICMP request. When you look at a web browser, you're reliant on various ports to connect and talk to processes listening on it, etc. This might not always be possible, despite the fact you can still ping the domain.
> Now my Question is that how does the web browser get to know that
> it has to follow the PPP link set up by us to get the URL request
> satisfied for the user ? (if what I'm thinking is correct).

Because your routing table (see 'man route') details how to connect everything together. I'll refer you back to the linuxgazette in this instance. Google for "subnetting and routing 101". That explains all of that side of things in more detail than I care to remember.

> There
> obviously has to be certain O.S assistance or system settings that
> direct the web browser in that direction.

No, not at all. The OS has nothing to do with it -- you might connect to an IIS server or Apache, but what difference does that make to you? What happens is that your browser (when you click on a link) has to translate the https:// URI from a domain name to an IP address. Connections (or routes as we like to think of them) at the transmission level use numbers to talk to one another -- DNS provides a convenience service to translate IP to hostname, and vice versa. Typically, your ISP will have DNS so you don't have to worry.
Of course, as I mentioned in my previous email to you (you did read it, I assume?) you're rarely going to go from A --> B directly. Instead you'll connect to a series of computers to reach B. And you can see for yourself how this is achieved:
	traceroute google.com

... for instance, will tell you how many hops it took to reach google.com.
Once the connection has been made, Apache, at the other end on the remote server, will send back a response containing whatever content it needs to, to satisfy the URL you clicked on. Note that computer "B" undergoes the same process in terms of DNS, etc., to reach your computer.
> So what is entire stepwise
> procedure that takes place and what all system files help the web
> browser to get on to the PPP link. (I hope I'm being a bit more
> clear in my doubts now.)

Well, it depends how deep you want to go. I suggest you read up on O'Reilly's "DNS and BIND".
> Now I have one more doubt. Suppose apart from the above mentioned
> connection I'm also connected to an ethernet which in turn has got
> internet access. so the web browser has got two different ways of
> handling the URL request. Either by the dial-up PPP connection or
> by the ethernet link to the internet. So which one would it use and
> why?

Both are used implicitly. Look at your routing table -- you'll probably find you have a non-publicly-routable IP address that your ethernet card (or modem, or router, or what have you) is being a gateway for -- hence you have NAT.
> Kindly try to help me by sending in your views about whether I'm
> thinking on the right lines or not and if I am then what is the
> solution to the above mentioned problem.

"Solution"? I don't see a problem, other than wanting to have a concept explained. You'll notice I have been deliberately vague in some areas. What you're asking is for a book to be sent for you, essentially -- the subject area is _vast_. I hope though that you have sufficient information.
[[[[Prashant]]]] - Replying to tag at lists.linuxgazette.net to say that I was helped by Mr. Thomas Adam. I also wanna thank you for the valuable help and guidance.
PRASHANT BADALA (prashant_badala at yahoo.co.in)
Mon Mar 20 09:36:00 PST 2006
Hi everyone,
I'm Prashant Badala, a final year Computer Engineering student. I'm trying to
study and program the D-BUS as a part of my project. Hence I was going through
a book that says -
===========================================================
The D-BUS C API
Using D-BUS starts with including its header:
#include <dbus/dbus.h>
The first thing you probably want to do is connect to
an existing bus. Recall from our initial D-BUS discussion
that D-BUS provides two buses, the session and the
system bus. Let's connect to the system bus:
DBusError err;
DBusConnection *conn;
dbus_error_init (&err);
conn = dbus_bus_get (DBUS_BUS_SYSTEM, &err);
if (!conn) {
fprintf (stderr, "%s: %s\n",
err.name, err.message);
return 1;
}
Connecting to the system bus is a nice first step, but we
want to be able to send messages from a well-known address.
Let's acquire a service:
dbus_bus_acquire_service (conn, "org.pirate.parrot",
0, &err);
if (dbus_error_is_set (&err)) {
fprintf (stderr, "%s: %s\n",
err.name, err.message);
dbus_connection_disconnect (conn);
return;
}
Now that we are on the system bus and have acquired
the org.pirate.parrot service, we can send messages
originating from that address. Let's send a signal:
===========================================================
Now in this particular text I'm not quite able to understand the concept
of a "service" and the "address". What service does "org.pirate.parrot"
specify ? What is the meaning of a message originating from it's address ? Could
you also explain what is the differnce between "org.pirate.parrot.attr"
and "org/pirate/parrot/attr"
when been specified indivisually in the function "dbus_message_new_signal".
Basically I'm trying to write two programs that could send text to
each other (over d-bus) when executed on the same system. For this purpose do
I need to be
connected to the 'net at the time of executing the above mentioned programs.
(Might sound stupid... but somehow taking a look at the given program prompts
me to ask this...). Can you give me some advise regarding this ?
I'd be really thankful to you if you could explain to me these concepts as I'm not able to find a good referece for this. Prior to this also I've sent queries to you guys and the response has been really good and helpful. Kindly help.
Thanking you,
Prashant Badala.
[Ben] - I'd say that you have to look at the code or the documentation of the API to figure out what "dbus_bus_get" and "dbus_bus_acquire_service" do; since these are (presumably) defined somewhere before this part of the book, you'll need to go back and set a mental bookmark for those functions.
> Could you also explain what is the differnce between
> "org.pirate.parrot.attr" and "org/pirate/parrot/attr"
> when been specified indivisually in the function
> "dbus_message_new_signal".No idea - for the same exact reason as above. It all depends on what the function does.
> Basically I'm trying to write two programs that could
> send text to each other (over d-bus) when executed
> on the same system. For this purpose do I need to be
> connected to the 'net at the time of executing the above
> mentioned programs. (Might sound stupid... but somehow taking
> a look at the given program prompts me to ask this...). Can
> you give me some advise regarding this ?

Well, local communications usually take place via a unix(7) socket, which certainly does not require a network connection; however, I suspect from the content of what you've sent us that the author is using some kind of a specialized protocol/method of sending the data. It _shouldn't_ need anything like that - it would be like requiring someone to drive a car as a part of moving a piece of furniture from one room to another within a house - but anything is possible.
It all comes back to Reading The Fine Manual.
[Jimmy] - Um... it's pretty clear English: the service is what you want to use, the address is where it lives. Think of the service as something like "http server", and address as something like "port 80".
"org.pirate.parrot" is the name of the service provided by the example program; a message originating from its address means that when that service talks to another program using D-BUS, it first identifies itself as coming from "org.pirate.parrot".
> Could you also explain what is the differnce between
> "org.pirate.parrot.attr" and "org/pirate/parrot/attr"
> when been specified indivisually in the function
> "dbus_message_new_signal".Just a slightly different naming scheme. "org.foo.bar" refers to a service, but "/org/foo/bar/some/object" is used when you want to access an object provided by that service. I think the naming scheme is different so you will know by sight what you are accessing: the service name is more like a (backwards) domain name, the object(s) provided like a file path.
For example: "net.linuxgazette" would be the address for a viewer program for LG issues; "/net/linuxgazette/124/" would provide an object that provides a list of articles from that issue, "/net/linuxgazette/124/lg_bytes.html" could provide a file, etc.
> Basically I'm trying to write two programs that could
> send text to each other (over d-bus) when executed
> on the same system. For this purpose do I need to be
> connected to the 'net at the time of executing the above
> mentioned programs. (Might sound stupid... but somehow taking
> a look at the given program prompts me to ask this...). Can
> you give me some advise regarding this ?

No; see Ben's mail.
> I'd be really thankful to you if you could explain to me
> these concepts as I'm not able to find a good referece for
> this. Prior to this also I've sent queries to you guys and
> the response has been really good and helpful. Kindly help.

D-BUS Tutorial: https://dbus.freedesktop.org/doc/dbus-tutorial.html
The D-BUS Missing Tutorial: https://raphael.slinckx.net/dbustutorial.php
Net::DBus::Tutorial (D-BUS from Perl) https://search.cpan.org/~danberr/Net-DBus-0.33.1/lib/Net/DBus/Tutorial.pod
[Thomas] - I'm formally putting in a request for there to be a new email alias:
	doMyHomeworkForMeICertainlyCannot at lists.linuxgazette.net

It would be really cool. I promise. :)
[[Ben]] - I generally like to distinguish between somebody coming to us and saying "I'm having this Linux-related problem at school, I've tried to figure it out but can't, can you help?" and "Answer these questions: 1. What is an inode? 2. How does Linux boot? 3. What is the weight of a coconut-laden African sparrow?" In the first case, the person (at least in my opinion) has followed the net.help procedures and is asking for assistance; in the second, they're being a lazy ass who wants to have their work done for them.
I don't know about anyone else, but the fact that a problem is associated with schoolwork does not faze me or influence my willingness to answer one way or the other; there are plenty of valid problems, and of people trying to solve them, in many different venues. On the other hand, freeloading lazy bums who are unwilling to make an effort - no matter what the context - cause my fingers to twitch toward my flamethrower, or (more commonly) exert about the same amount of effort that they have toward solving the problem (i.e., none.)
IMO, Prashant fits the first case far more than the second. I may be wrong, but that's my best estimate.
Thomas Adam (thomas at edulinux.homeunix.org)
Tue Mar 21 13:25:19 PST 2006
Dear all,
Whilst this is slightly OT (and most likely hardware related), I thought I'd ask here anyway, since it's an odd issue to say the least.
To put it simply, my DVD-ROM drive no longer wants to read _any_ DVD I give it, yet it manages to read data CDs and audio CDs just fine. For some reason, my DVD drive was working fine one day, and the next it has trouble in reading any DVD you feed it -- even the same DVD that was known working and being read by the same DVD drive a day before.
I can't even say that the issue is a read one, since as soon as I put a DVD into the DVD drive, the drive just starts to splutter, spin up and slow down, and will remain like that for at least thirty minutes, until such time that the DVD drive will stop, or I eject the disc. Trying to use MPlayer on the drive fails in that MPlayer sees nothing. There's not even any suspect { SeekReadComplete } type of errors in /var/log/messages. In fact, nothing is printed to that file in any way to suggest there's a problem with the drive.
I did (on a hunch) shove a CD-cleaner disc into the drive in the hope that it might help; alas, it didn't. DVDs are still not being read.
I also tried to bugger about with hdparm and toggling DMA and I/O support which didn't help either.
Can anyone shed some light onto what I do next? Is the drive kaput?
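For anyone wanting to repeat Thomas's hdparm experiment, the usual incantations look like this; the device name is a guess - substitute whatever your DVD drive actually is. --Ed.

	hdparm -i /dev/hdc        # show what the drive reports about itself
	hdparm -d1 -c1 /dev/hdc   # enable DMA and 32-bit I/O
	hdparm -d0 /dev/hdc       # turn DMA back off, to compare behaviour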
[Ben] - I've seen CD-ROM drives get into a state which could only be "cured" by shutting down the machine and leaving it off for a minute or so. Try that, boot using a different version of Linux (I've usually got a CD with BBC-Linux on it close at hand, these days), and if it still doesn't work, I'd call it cooked.
Fabrizio (donodeglidei at yahoo.it )
Fri Mar 17 10:09:27 PST 2006
My laptop compaq presario x1000 has this problem:
suddenly the screen disappeared, and now, after two days of on & off, I can not use it anymore. The laptop in itself is working, but there is no screen.
Someone told me that maybe it is the inverter cable connection between the display and the rest of the laptop... in this case I am not skilled enough to find and replace the piece by myself. Do you have a tip?
[Thomas] - Sure -- take it back to where you bought it and get it mended. Seriously, fixing that sort of thing by yourself (if it even *is* the inverter cable) is going to be awkward.
So get someone who knows what they're doing, to do it for you.
Oh, by the way, a subject line of "help" really is annoying, and doesn't help me in knowing what your email is about. You could have tried: "Laptop screen blank? Inverter Cable?"
Ramon changed the subject line shortly after, and I've changed the title to reflect this. --Ed.
[Ramon] - Attach an external monitor to the laptop and use that to copy all your crucial data onto some external storage (network drive / other computer / usb-harddisk / whatever) Take the laptop to a certified repair shop or return it under warranty.
Tried fixing this stuff myself before, and have hardly ever been successful :-(
Good luck.
P.S. It helps if you use a more relevant subject instead of "help"!
[Ben] - This has nothing to do with Linux, but - yeah, inverters die a lot. I've replaced a couple of them myself within the last couple of years, including the one on the laptop that I'm using right now.
First, figure out what the correct inverter is for your LCD display. The only way to do this is to actually take the plastic casing off and look at the back of the display, since every manufacturer uses several different displays in any given model. Buy the appropriate inverter - I've found eBay to be an excellent source for them. The inverter is usually located just below the LCD display (i.e., just above the hinge); disassembling the case (you only need to open the top part - the body of the laptop is usually not involved, although you'll most likely need to remove the hinge cover) requires removing the screws and *carefully* unlatching the very thin plastic clamshell around it.
Reassemble everything - you *did* take pictures as you went through disassembly, right? :) - and enjoy your nice bright screen.
arun dubey (alencdave at yahoo.co.in)
Thu Mar 23 21:31:53 PST 2006
hello Mr.James T. Dennis
i m Arun from INDIA, i m a student of MCA(masters of computer applications)
i have written a very simple C-code for "who" command, this code is
having no Compilation Error, but everytime i try to run it i get the message
"SEGMENT FAULT"
n i just could not understand why this error is coming in my prog as i m not
using any pointers (xcept in command line arguments)
sir i need ur help in solving this problem. i m sending you the two trivial
codes for "who" command written by me and in both the codes i m getting
"segment fault"
pls help me in rectifying those codes.
i m using mandrake's linux 9.2
n i have my compiled my codes as follows
1) cc -o who1 who1.c
2) cc -o who_ who_.c
(i will b really thankful to u for ur prompt reply)
pls reply me on to:-
alecdave at yahoo.co.in
[Ben] - On Thu, Mar 23, 2006 at 09:31:53PM -0800, arun dubey wrote:
[ cc'd back to the list ]

> hello Mr.James T. Dennis
It's been quite a long time since Jim was doing the Answer Guy thing, Arun - nowadays, we've got an entire Answer Gang here. However, we're still doing much the same thing - answering Linux questions. This doesn't seem to be one, but since it doesn't look too tough, I'll give it a shot (keeping in mind that my C is pretty rusty...)
> i m Arun from INDIA, i m a student of MCA(masters of computer applications)
> i have written a very simple C-code for "who" command, this code is having no
> Compilation Error, but everytime i try to run it i get the message "SEGMENT
> FAULT"
>
> n i just could not understand why this error is coming in my prog as i m not
> using any pointers (xcept in command line arguments)

Erm, well, you are. At least you're using casts, which isn't too far off.
> sir i need ur help in solving this problem. i m sending you the two trivial
> codes for "who" command written by me and in both the codes i m getting
> "segment fault"
>
> pls help me in rectifying those codes.
>
> i m using mandrake's linux 9.2
> n i have my compiled my codes as follows
> 1) cc -o who1 who1.c
>
> 2) cc -o who_ who_.c
>
> (i will b really thankful to u for ur prompt reply)

And we would be really thankful if you left off that kind of an unpleasant demand in the future. Promptness is something you can ask for if you're willing to pay for the service; otherwise, hold your horses.
[ snip code ]
	ben at Fenrir:/tmp$ cc -o who1 who1.c
	ben at Fenrir:/tmp$ ./who1
	Segmentation fault
	ben at Fenrir:/tmp$ gdb who1
	GNU gdb 6.4-debian
	Copyright 2005 Free Software Foundation, Inc.
	GDB is free software, covered by the GNU General Public License, and you are
	welcome to change it and/or distribute copies of it under certain conditions.
	Type "show copying" to see the conditions.
	There is absolutely no warranty for GDB. Type "show warranty" for details.
	This GDB was configured as "i486-linux-gnu"...Using host libthread_db library "/lib/tls/i686/cmov/libthread_db.so.1".
	(gdb) run
	Starting program: /tmp/who1
	Program received signal SIGSEGV, Segmentation fault.
	0xb7eb1abe in tzset () from /lib/tls/i686/cmov/libc.so.6

It looks like the problem is in a time-related function (that being what 'tzset' is.) Let's see... ah.
printf("\n %s %s: %s",u.ut_name,u.ut_line,(char *)ctime((time_t *)u.ut_time));There's the problem: pointers/dereferencing, as is usual with C. And here's the fix:
printf("\n %s %s: %s",u.ut_name,u.ut_line,(char *)ctime((time_t *)&u.ut_time));(Note the '&u.ut_time'.)
Let's try it and see:
	ben at Fenrir:/tmp$ perl -i -wpe's/u.ut_time/&$&/' who1.c
	ben at Fenrir:/tmp$ cc -o who1 who1.c
	ben at Fenrir:/tmp$ ./who1
	 : Thu Mar 23 23:24:36 2006
	 reboot ~: Thu Mar 23 23:24:36 2006
	 runlevel ~: Thu Mar 23 23:24:37 2006
	 : Thu Mar 23 23:25:08 2006
	 ben tty1: Thu Mar 23 23:25:14 2006
	 LOGIN tty2: Thu Mar 23 23:25:09 2006
	 LOGIN tty3: Thu Mar 23 23:25:09 2006
	 LOGIN tty4: Thu Mar 23 23:25:09 2006
	 LOGIN tty5: Thu Mar 23 23:25:09 2006
	 LOGIN tty6: Thu Mar 23 23:25:09 2006
	 ben pts/1: Thu Mar 23 23:25:35 2006
	 ben pts/0: Thu Mar 23 23:25:35 2006
	 ben pts/2: Thu Mar 23 23:25:38 2006
	 ben pts/3: Thu Mar 23 23:25:38 2006
	 ben pts/4: Fri Mar 24 00:46:03 2006
	 ben pts/6: Fri Mar 24 00:49:32 2006
	 ben pts/9: Fri Mar 24 00:54:21 2006
	 ben pts/9: Fri Mar 24 00:54:21 2006
	ben at Fenrir:/tmp$

Seems to be running fine now.
(Blecch. Nasty stuff, C. Makes me want to write some Perl just to get rid of the taste. :)
On Thu, Mar 23, 2006 at 11:34:59PM -0800, arun dubey wrote:
> dear sir "Benjamin A. Okopnik"Heh. I was never an officer, so "sir" is perhaps a bit excessive.
> thanks a lot for helping me.
You're welcome; however, I must point out yet another point of etiquette you've violated (I grant you the benefit of the doubt and assume that you did it unknowingly.)
When you ask for help on a list, you *must* keep the list CC'd on subsequent replies unless you've received a specific request to do otherwise: not doing so is tantamount to stealing from the community that is served by that list. As an example, I charge $150/hr for private consultation. I participate in TAG - i.e., contribute my time at $150/hr - because it helps the Linux community. Taking the conversation off the list without making financial arrangements with me means that I do not get paid, and the community does not get the benefit. Given the lack of both of those, why would you expect me to keep helping you?
Again, I'm assuming that you've done it as an innocent mistake, so I'm CCing TAG on my reply. I also _strongly_ suggest that you read some documentation on netiquette - e.g., "Asking Questions of The Answer Gang" at https://linuxgazette.net/tag/ask-the-gang.html - before posting to any more lists, since many people are NOT going to treat this kind of violation with any degree of tolerance.
> though i still need ur help but as u said this was unplesant for u to have that
> kind of demand so i m not goin to ask.

I don't have any problem with you asking for help; that's what this list is for. The problem was in your violation of basic list protocol, not the technical request.
> but i really thank you for your prompt n apt reply.
You're welcome; glad I could help.
> (but sir if u can suggest me some place from where i can ask my queries on C
> and UNIX-system programming, then it will be a gr8 help to me.)

Hmm, tough question. USENET is usually a good source of that kind of information - e.g., news:comp.lang.c is reputedly very good, and has been around for a long time. Take a look at their FAQ - https://www.eskimo.com/~scs/c-faq.com/
There's news:comp.lang.c.moderated as well. Do note, however, that USENET folks tend to be very touchy about net protocol - and the standard netiquette recommendation (quoted from RFC1855, "Netiquette Guidelines") is:

- Read both mailing lists and newsgroups for one to two months before you post anything. This helps you to get an understanding of the culture of the group.
Or you can ask here. Just do it politely - which includes Netiquette manners, although we don't apply the requirement just cited. We're always willing to help, particularly when the question relates directly to Linux. Programming questions are a somewhat marginal topic, and you may or may not get an answer - but you're always welcome to ask.
Talkback: Discuss this article with The Answer Gang
Kat likes to tell people she's one of the youngest people to have learned to program using punchcards on a mainframe (back in '83); but the truth is that since then, despite many hours in front of various computer screens, she's a computer user rather than a computer programmer.
When away from the keyboard, her hands have been found full of knitting needles, various pens, henna, red-hot welding tools, upholsterer's shears, and a pneumatic scaler.
Please submit your News Bytes items in plain text; other formats may be rejected without reading. [You have been warned!] A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release. Submit items to bytes@linuxgazette.net.
IDG World Expo, the producer of tradeshows, conferences and events for technology markets, has announced a new OSSw event - LinuxWorld OpenSolutions Summit - a regional and more vertically focused conference tailored specifically to the needs and interests of IT professionals involved in the deployment of Linux and open source solutions. The inaugural event is scheduled to take place February 14-15, 2007, at the Marriott Marquis in New York City.
The new OSSw summit evolved after recognizing a need for a targeted, high-level conference that concentrates on vertical markets. Attendees can access multiple tracks addressing best practices, including presentations by leading Linux and open source experts, case studies presented by IT executives, and a Solution Showcase for the latest Linux and OSSw products and technologies.
As a result of launching the New York LinuxWorld OpenSolutions Summit, the Boston LinuxWorld Conference & Expo will no longer take place.
LinuxWorld Conference & Expo San Francisco, the original, largest and most comprehensive event focusing exclusively on Linux and Open Source, will continue on an annual basis at the Moscone Center. Considered the de facto event by attendees and exhibitors alike, LinuxWorld San Francisco features across-the-board sessions and a full trade show floor embracing the entire Linux and open source continuum, and topics ranging from system administration to the desktop to mobile Linux. At a new conference track for the Linux Channel, the findings of the first major, objective study focusing on how solution providers are selling Linux solutions to mid-market customers will be released.
Sun will now allow the Java Standard Edition (Java SE) 5.0 to be distributed by GNU/Linux and OpenSolaris developers under a new license, the Operating System Distributor's License for Java ("Distro License for Java", or DLJ). The announcement came during the kickoff keynote speech at May's JavaOne conference, in front of 14,000 to 15,000 developers in San Francisco. In a kind of rapprochement with the OSSw community, Sun's new CEO Jonathan Schwartz called Ubuntu founder Mark Shuttleworth to the stage to talk about the announcement.
Sun had hinted earlier in May that it would loosen distribution restrictions on its Java licensing, to encourage developers to use Java on Linux systems. Many Linux distributions previously required users to manually download, install, and configure Java.
Sun developed this license in conjunction with numerous GNU/Linux communities. It allows distributors to ship Sun's Java SE 5.0 Java Development Kit (JDK) and Java Runtime Environment (JRE) as installable packages for their operating systems.
The company also announced that Sun's Java Studio Creator, Java System Portal Server, Java Message System-based message queue and Web Services Interoperability Technology would be released as open source code.
Sun has opened a new community project on Java.net (https://jdk-distros.dev.java.net ) to serve as a clearinghouse for best practices for delivering compatibly packaged JDK bundles on GNU/Linux and OpenSolaris.
[Meanwhile, CEO Schwartz continues to talk about Open Sourcing Java after JavaOne. Click here for SysCon's take.]
Several project teams have announced [or should soon announce] plans to redistribute the JDK for use with their operating systems, including the Ubuntu, Gentoo, and Debian distributions of GNU/Linux; NexentaOS, a hybrid operating system with an OpenSolaris kernel and GNU applications; and both the Schillix and BeleniX versions of OpenSolaris. One well-known effort to repackage Sun's JDK for Linux, the Blackdown Project (https://www.blackdown.org), has agreed to join the new jdk-distros project on java.net and contribute its Debian packaging code to this initiative.
"We are really pleased to see Sun's increasing involvement in the free software community, from the opening of the Solaris Operating System source and now the re-licensing of Java technology to be compatible with
GNU/Linux distributions, and are looking forward to building stronger ties with the Sun community in the future", said Anthony Towns, Debian Project Leader.
"This new license shows that Sun and the Java technology world care about GNU/Linux and open source platforms and are willing to put aside philosophical differences and get down to business," said Mark Shuttleworth, founder and sponsor of the Ubuntu GNU/Linux distribution. "This eliminates one of the biggest roadblocks to wider use of the Java platform on free and open source operating system platforms and makes Java technology a more attractive foundation on which to build new projects and innovations."
[In email correspondence, Mark wrote, "I see it as a positive step by SUN towards having a genuinely free software license for Java." And "...the new license does allow us to carry SUN's Java implementation in the Ubuntu repositories alongside other non-free software."]
Quoting from Mark Shuttleworth's Blog [May 23rd] at https://www.markshuttleworth.com:
"Even though this was not the announcement we were all hoping for (a complete shift to free software Java), I was pleased to be part of the "Distro Licence for Java" announcement. As best I can tell, the new leadership at SUN clearly recognizes the importance of the free software model AND the role of the community. That's a big step forward and important to the progress of free software...."
"The new license does not mean that we can include Java in Ubuntu by default. It does not yet meet our criteria for free software in order to get into "main". But it DOES mean we can put it in the Multiverse or Commercial repositories, and people who want it can trivially get it after they have installed Ubuntu on a desktop or a server."
Mark's blog entry also has some interesting musings about Sun switching to copyleft and trademark enforcement, and about how the world might be different if Java had been free OSSw 1, 2, 3, 4, and 5 years ago [it would have been Java-on-Rails, not Ruby].
The DLJ allows the different distributions to define the packaging, installation and support for the JDK within their distribution. Distributions exercising the DLJ and shipping JDK bundles are ultimately responsible for maintaining compatibility.
https://www.internetnews.com/dev-news/article.php/3606656

IBM announced in late May it will invest $2.2 million in 2006 to expand its Linux Technology Center (LTC) in Brazil. Developers at IBM's Linux Technology Center in Brazil will work to improve Linux as part of the open source community.
The investment will be used to complete construction of a Linux development laboratory in Hortolandia and expand a second lab in Campinas, on Brazil's Unicamp campus. It will assist with upgrading lab construction and equipment, furthering software development projects with Linux, and expanding student internships and job opportunities for recent graduates as a result of a collaboration between IBM and Brazil's Unicamp campus.
The investment will enable engineers and developers at the Linux Technology Center in Brazil to work on the following new projects:
- Linux development for IBM's Cell processor.

Safend announced in May that Zvi Gutterman, its CTO and co-founder, has discovered several security vulnerabilities in Linux, the most common open source project. As Safend's CTO, Gutterman designs key technologies such as the algorithms and theory behind the Safend Auditor and Safend Protector implementation, and is a Ph.D. candidate at the Hebrew University of Jerusalem. Recently, he has been conducting analysis of the Linux Random Number Generator (LRNG) along with Benny Pinkas from the University of Haifa and Tzachy Reinman from the Hebrew University of Jerusalem.
The team's research includes an attack on the Linux Random Number Generator. The LRNG is the key element behind most security protocols and tools that are part of Linux, among them PGP, SSL, and disk and email encryption. Using the attack presented by the research team, an adversary attempting to break into a Linux machine may compute backward outputs of the LRNG and use them to access previous confidential communications.
Gutterman, along with Pinkas and Reinman, used dynamic and static reverse engineering to learn the operation of the LRNG. The team was then able to illustrate flaws in the design of the generator as well as measurements of the actual entropy collected by it.
"Our result shows that open source is not a synonym for secure design; once the LRNG is broken, we can break any future or previous password on that PC," stated Gutterman. "However, open source benefits security by enabling security audits. As we state in our research paper, we feel that the open source community should have a better policy for security sensitive software components. They shouldn't be treated as other source elements."
Gutterman, Pinkas, and Reinman presented their research paper entitled "Analysis of the Linux Random Number Generator" at the IEEE Security and Privacy Symposium held in Oakland, California, May 21-24.
We have another major security issue with Linux and other Unix OSes, if you use the X Window System. In conjunction with the DHS, Coverity has been performing static analysis of OSSw projects. This link describes the project: https://www.internetnews.com/dev-news/article.php/3589361
This link shows the current results: https://scan.coverity.com/
During analysis of results from the Coverity code review of X.Org, a major flaw in the server was discovered that allows local users to execute arbitrary code with root privileges. The eWeek article below, which has been copied in many blogs and OSSw sites [sometimes without attribution], is reproduced here for your education and pleasure....
Homeland Security Audit Flags 'Critical' Linux Bug
By Ryan Naraine / eWeek / May 2, 2006
An open-source security audit program funded by the U.S. Department of Homeland Security has flagged a critical vulnerability in the X Window System, which is used in Unix and Linux systems. Coverity, the San Francisco-based company managing the project under a $1.25 million grant, described the flaw as the "biggest security vulnerability" found in the X Window System code since 2000.
The flaw was pinpointed during automated code scanning that formed part of the "Vulnerability Discovery and Remediation Open Source Hardening Project," a broad federal initiative to perform daily security audits of approximately 40 open-source software packages.
The purpose of the audit is to pinpoint buffer overflows, memory allocation bugs and other vulnerabilities that are a constant target for malicious hacking attacks. In addition to Linux, Apache, MySQL and Sendmail, the project will also pore over the code bases for FreeBSD, Mozilla, PostgreSQL and the GTK (GIMP Tool Kit) library.
The X Window System, also called X11 or X, provides the toolkit and protocol to build GUIs for Unix and Unix-like operating systems. It is used to provide windowing for bit-map displays. The X Window System also ships as an optional GUI with Macintosh computers from Apple.
Coverity Chief Technical Officer Ben Chelf said the flaw resulted from a missing parenthesis on a small piece of the program that checked the ID of the user. It could be exploited to allow local users to execute code with root privileges, giving them the ability to overwrite system files or initiate denial-of-service attacks.
Coverity hailed the discovery as proof that its automated code scanning tool can discover serious flaws that the human eye might miss. "This was caused by something as seemingly harmless as a missing closing parenthesis," Chelf said, describing the severity of the bug as a "worst-case scenario" for the X.Org Foundation that manages the X Windows System project.
Daniel Stone, release manager at X.Org, agreed that the vulnerability was "one of the most significant" discovered in recent memory. "[This is] something that we find once every three to six years and is very close to X's worst-case scenarios in terms of security," Stone said. "[Coverity's tool exposed] vulnerabilities in our code that likely wouldn't have been spotted with human eyes. Its attention to subtle detail throughout the entire code base - even parts you wouldn't normally examine manually - makes it a very valuable tool in checking your code base," he added.
The flaw, which affects X11R6.9.0 and X11R7.0.0, was fixed within a week of its discovery, and Chelf said Coverity has implemented a system to analyze the X Window System on a continuous basis to help prevent new defects from entering the project. [so pls check for the updates on your systems - your Editor]
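[ For the curious: a missing pair of parentheses can do this in C because a bare function name evaluates to the function's address, which is never null - so a test written that way is always true. What follows is a minimal sketch of the reported pattern, an illustration rather than the actual X.Org source. ]

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Buggy pattern: 'geteuid' without parentheses is the address of
       the function - never null - so this test always succeeds. */
    if (geteuid != 0)
        puts("always reached, no matter who runs this");

    /* Intended check: actually call geteuid() and test the result. */
    if (geteuid() != 0)
        puts("reached only when not running as root");

    return 0;
}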
The Open Web Application Security Project announced availability of a process guide that will help a broad range of developers incorporate security into the software application development lifecycle (SDLC). OWASP is dedicated to helping organizations understand and improve the security of their applications and services.
CLASP (Comprehensive Lightweight Application Security Process) will be accessible through OWASP to developers globally. Developers will be able to leverage a best practices methodology that provides a well organized and structured approach for integrating security requirements and activities into each stage of the software development lifecycle.
"Many organizations are realizing that discovery and remediation of vulnerabilities in later stages of development is far too costly," said Jeff Williams, CEO of Aspect Security and Chair of the OWASP organization. "The OWASP project makes sure that developers have the knowledge and the tools to build secure software from the beginning."
OWASP's mission is to enable organizations to develop, maintain, and purchase secure applications through the development of free, open, and unbiased application security documentation, tools, chapters, and conferences.
OWASP documentation projects include a guide to web application security, metrics, a test guide, documents for performing ISO 17799 reviews, and an AppSec FAQ. OWASP projects also include WebGoat - an interactive training and benchmarking tool so users can learn about web application security - and WebScarab - a web application vulnerability assessment suite.
More News from JavaOne:
- JBoss announced plans to submit a proposal to standardize Web Beans in Java. The Web Beans standard initiative will aim to bridge the gap between Enterprise JavaBeans 3.0 and JavaServer Faces (JSF). The result would be a simpler, more elegant, unified programming model for web development.
Borland, Google, Oracle, and Sun Microsystems will bring their support and expertise with web frameworks to the standardization effort. The proposed standard will draw upon principles found today in JBoss Seam, Oracle Application Development Framework (ADF), and Apache StrutsShale. JBoss Seam introduced a uniform component model for building web applications through declarative, contextual, application state management. Oracle ADF promotes the use of a metadata-driven architecture that enables developers to cleanly separate business service implementation details from the user interface. Apache StrutsShale offers a set of fine-grained services that can be combined as needed, rather than a monolithic request processor.
Gavin King, architect at JBoss, plans to lead the standardization effort. King, who founded the popular Hibernate project and is currently leading the development of JBoss Seam, commented: "The overwhelmingly positive response to Seam from the developer community convinced us that this is an idea whose time has come and one that should be brought back into the standards process for the benefit of the entire Java community. JBoss' end goal is the same as these companies supporting this initiative: To create a highly productive, accelerated development environment and enable richer web applications."
The Developer Tools Group of Borland Software Corporation announced details around a three-year product roadmap for JBuilder, its award-winning Java Integrated Development Environment (IDE). The roadmap includes an update to JBuilder 2006, a new underlying framework based on Eclipse in JBuilder 2007 ("Peloton"), and provides insight into the functionality being developed in future JBuilder versions. Future capabilities for the product line include new team collaboration and developer productivity features, support for new Java standards and emerging open source tools and frameworks, enhanced support for Service Oriented Architecture (SOA), and more.
The JBuilder roadmap was presented at May's JavaOne conference in San Francisco, Calif., and at customer events throughout the world during Borland's 2006 Global Developer Road Show; US dates run June 1 through June 14. Borland's Developer Tools Group is already working to deliver on this three-year roadmap, with a free JBuilder 2006 Foundation edition available for download (https://www.borland.com/downloads/download_jbuilder.html).
Two new JBuilder 2006 updates are planned for release this year, with the next major release of JBuilder, codenamed "Peloton," expected to be available in Q4. JBuilder 2006, already shipping, began the transition of JBuilder to a more collaborative team development environment with features such as shared code editor views and joint debugging capabilities. These features allow local and remote developers to jointly design, edit, and debug applications in real time.
JBuilder will continue to support the latest JCP standards as they become available, and the Developer Tools Group expects to release an update to JBuilder 2006 in June to support Java SE 6 ("Mustang"), with an additional service pack in the fall once Mustang is formally released by Sun Microsystems.
Changing the Java landscape, Terracotta, Inc., a leader in enterprise Java scalability, announced at JavaOne that it has begun giving away free copies of its session clustering solutions -- Terracotta Sessions for Tomcat and Terracotta Sessions for WebLogic Server. By injecting clustering and caching into the Java runtime, Terracotta furnishes applications with linear scalability, total fault tolerance, and high availability without making any changes to the application code.
Apache Tomcat users can spend weeks or months writing clustering code by hand to make production Java applications cluster and scale efficiently. Terracotta Sessions for Tomcat gives developers a free clustering solution that meets their need for load-balanced Tomcat application servers. It also removes performance tuning from the development lifecycle.
Meanwhile, enterprises running WebLogic Server sessions from BEA Systems, Inc. can realize significant cost benefits with Terracotta Sessions for WebLogic Server, which eliminates performance tuning and provides linear scalability and total fault tolerance. Terracotta Sessions feature fine-grained updates of session data to reduce overhead and improve scalability and performance. Real-time monitoring of session contents simplifies debugging and provisioning.
Terracotta Sessions can plug in to an "off-the-shelf" JVM, clustering at the JVM level to provide a simple runtime solution. With Terracotta Sessions, developers can now purchase inexpensive, non-clustered application servers or use open source application servers. Terracotta was named a "Cool Vendor" by Gartner in April.
The Terracotta solutions are both standards-compliant and, as drop-in clustering solutions, they drive faster time-to-market for Java applications. Terracotta Sessions licenses are free for all platforms for four JVMs and under. Production licenses are available directly from Terracotta. For more information, see these links:
Terracotta Ships "Clustered" JVM (https://www.terracottatech.com/press_5_16_06_2.0.shtml )
The latest stable version of the Linux kernel is: 2.6.16.18 [ https://www.kernel.org/pub/linux/kernel/v2.6/patch-2.6.16.18.bz2 ]
Red Hat is distributing updated kernel packages meant to fix 16 individual flaws present in the version 4.0 releases of its Red Hat Desktop and Red Hat Enterprise Linux OS software.
The company advised that all Enterprise Linux 4 users should upgrade their kernels to protect themselves from the security issues, 10 of which the Red Hat Security Response Team rated as "important," and six of which it tabbed as "moderate."
MEPIS has released beta 4 of SimplyMEPIS 6.0. The ISO image is available for download and testing in the MEPIS 'testing' subdirectory at the MEPIS Subscriber's Site and on public mirrors.
Beta 4 includes some new and/or updated applications including amarok 1.4, xaralx 0.5, and ksudoku. Digikam plugins were added to make Digikam and showFoto much more powerful and fun to use. To make room for xaralx, it was necessary to remove tvtime and the GIMP from the bootable CD but, for those who need them, they are available for download and install via the Synaptic download manager or apt-get.
Warren Woodford explains why this was done: "The SimplyMEPIS bootable CD is a starting point, and not always a complete solution. There are many applications we would like to include on the CD, but they won't fit. The OpenSource edition of Xaralx, formerly Corel Xara, is a well known Windows app for vector graphics drawing that is an exciting addition to Linux and MEPIS. Likewise, the new plugins make image manipulation in Digikam and showFoto a piece of cake. We want the new Linux user coming to MEPIS from XP to see right away that Linux has apps that are fun and easy. For the experienced user who wants or needs a powerful image manipulation program like the GIMP or any of the other apps we don't bundle on the CD, we ask for their understanding."
For this release, CPU speed management and suspend-to-RAM have been improved. The KDE configuration has been fixed for time format and other localization elements. The fontconfig configuration has been improved, so some web site fonts look better in both Firefox and Konqueror. Floppy support has been changed to be more reliable. SpamAssassin has been tweaked to give very good results out-of-the-box, and then excellent results as soon as it has learned from a sample of user emails.
SimplyMEPIS 6.0 Beta 4 runs a reconfigured version of the Release Candidate update of the 2.6.15.7 kernel from Ubuntu. The kernel source has been verified to give the best possible compatibility with new Intel hardware. The MEPIS configuration maintains compatibility with the extra "restricted" drivers provided by the Ubuntu team in the Ubuntu pools. MEPIS builds extra drivers for the 386, 686, and K7 flavors of i386 including bcm4400, bcm5700, Intel536, quickcam, spca5xx, usbvision, and, new in this release, ivtv. These drivers are available in the MEPIS pool and they are compatible with the matching Ubuntu kernels.
Ken Smith has announced a new stable release of the FreeBSD 5.x series: "It is my great pleasure and privilege to announce the availability of FreeBSD 5.5-RELEASE. Work done between the 5.4-RELEASE and this release has mostly been bug fixes. Some 'vendor supplied' software has also been updated, mostly due to security concerns (specifically BIND and sendmail). This is the last planned release on the 5-STABLE branch.
The FreeBSD development community is currently focusing its efforts on the 6-STABLE and CURRENT codelines. No new major features are planned for the 5-STABLE branch, although minor updates and bug fixes may be merged at the discretion of individual developers."
The new release, Ubuntu 6.06 LTS (Long Term Support), has specific emphasis on the needs of large organisations with both desktop and server versions. Security updates will be available for five years on servers.
The Server Edition of Ubuntu will include a mechanism to set up a standardised, certified and supported LAMP server with a single command. The feature reduces the setup time for companies providing hosted LAMP services as well as making it easier for organisations to set up and maintain their own LAMP standardised servers. Also announced on June 1st was support for Sun's "Niagara" UltraSparc CPU for T1000 and T2000 Enterprise-class servers, in addition to x86 servers.
A special added bonus of Ubuntu 6.06 LTS is the inclusion of several chapters from "The Official Ubuntu Book", which Prentice Hall Professional will publish in July 2006, under an Open Content license. The book represents the collaborative effort of more than a dozen Ubuntu community members from around the world.
Ubuntu 6.06 LTS also has a new mechanism to make commercial software available for download from select Independent Software Vendors (ISVs). A group of solutions is available this way already, including data management software from Arkeia, cross-platform development tools from Raining Data, PC sharing from Userful, and virtualisation from VMware. Additional software for Ubuntu from ISVs will be added.
"This new functionality is a first step towards the simplification of common server deployment scenarios using Ubuntu" said Fabio Massimo Di Nitto, product manager of Ubuntu Server Edition.
Release parties for Dapper Drake are listed here: https://wiki.ubuntu.com/DapperReleaseParties
The developers of BeleniX, a full-featured live CD based on OpenSolaris, have announced an updated release - version 0.4.3a: "BeleniX 0.4.3a with JDK 1.5 released! Another release of the live CD this month. The significant feature of this release is the inclusion of JDK 1.5 under the new Distributors License for Java (DLJ) as announced by Sun Microsystems on May 16th in JavaOne. However due to a licensing issue with a required SUN Studio C++ runtime library it is currently an installable bundle and does not execute off the live CD. This is expected to be resolved soon."
Mandriva, the publisher of the Mandriva Linux operating system, and the OpenVZ project have announced that the OpenVZ operating system virtualization software will be included as part of the Mandriva Corporate Server 4.0.
OpenVZ is operating system level server virtualization software technology, built on Linux, which creates isolated, secure virtual environments on a single physical server - enabling greater server utilization and superior availability with fewer performance penalties. The virtual servers ensure that applications do not conflict and can be re-booted independently.
"The OpenVZ technology is a perfect match for our next Mandriva Corporate Server release 4.0. It provides our customers with a proven virtualization layer to deliver flexible and efficient solutions. We are pleased to offer OpenVZ as a standard complug insin the Mandriva Corporate Server 4.0 toolbox to simplify production management and maximize hardware usage," said David Barth, CTO at Mandriva.
"Embedding the OpenVZ technology directly into the Mandriva kernel will give Mandriva customers unparalleled virtualization functionality," said Kir Kolyshkin, manager of the OpenVZ project. "We're very pleased to work with Mandriva and make our technology widely available via the popular Linux distribution."
Xandros, a leading provider of easy-to-use Linux alternatives to Windows, and Trolltech, a leading provider of technologies that make advanced software faster to build and easier to use, announced that Trolltech's Qt development framework was used to create the new Xandros Server, which recently won the Product Excellence Award at LinuxWorld Boston. Utilizing the robust and efficient Qt framework, Xandros developers created the all-graphical user interface called Xandros Management Console (xMC).
Similar in concept to the Microsoft Management Console that ships with Windows 2000 and Windows 2003 servers, xMC presents a simplified, centralized approach to remotely administering users and services, unlike any other on the Linux server market today. Qt was also used to create a unique plug-in architecture that enables third party services, such as Scalix groupware and RealNetworks media delivery, to be integrated and managed within xMC just like other services running on a Xandros Server.
"Through the use of the robust Qt development framework we were able to save countless hours of development time, while producing superior code and documentation," said Ming Poon, Xandros VP of Product Development. "Qt's cross-platform framework, in conjunction with the platform-neutral design we built into xMC, will allow us to seamleplug insrt a Windows version of xMC so that administrators can manage Xandros Server from their Windows workstations as well."
Xandros Server, built on top of Debian Linux, offers a Managed Community model with consolidated system monitoring and workflow automation to address the issues normally encountered when administering SMB systems. It presents a simplified, centralized approach to remotely administering users and services through the all-graphical Xandros Management Console (xMC). It is compatible with any existing Windows domain and networking infrastructure, offering a plug-and-play replacement for costly Windows servers.
For more information about the Xandros Server, visit www.xandros.com.
Phoenix Technologies Ltd. has announced a new version of TrustedCore, its innovative firmware that creates a more tamper-resistant platform by proactively protecting x86-based computing devices and their data before the operating system and applications even load. The new version of TrustedCore, through its support of BitLocker Drive Encryption, will provide Windows Vista users with better data protection, pre-boot security and authentication, and support for other security specifications, including biometrics and smart tokens. TrustedCore is a secure firmware foundation that will increase client and enterprise security by providing endpoints with strong authentication and a secure execution environment.
BitLocker Drive Encryption provides for full volume encryption and support for pre-boot multi-factor authentication. BitLocker will protect data from being used by unauthorized users or even downloaded inappropriately to thumb drives. Linux and other *IX platforms should also be able to leverage these BIOS enhancements.
TrustedCore architecture enables device designers and manufacturers to create trusted and self-authenticating networked devices. The software delivers a "root of trust" that allows customers to deploy devices that are inherently secure from the start and that support the latest in digital device authentication advancements. Phoenix TrustedCore supports strong, multifactor pre-boot user authentication and validates a user's identity before the system starts.
The new version of TrustedCore advanced firmware includes capabilities that legacy BIOS solutions cannot offer. TrustedCore SP3B enhances endpoint security by providing secure CRTM (core root of trust measurement, also known as BIOS Bootblock) update through its Secure Flash update process. In addition, TrustedCore SP3B supports Unified Extensible Firmware Interface (UEFI) 2.0 and provides developer tools, including a device driver kit for silicon and hardware vendors, and a software developer kit for application developers that want to build UEFI shell applications.
A beta version of TrustedCore SP3B is available immediately and the release version is expected in Q3 2006. For more information, visit www.phoenix.com/TrustedCore.
Lexar Media announced the development of advanced, secure USB storage-based technologies. Lexar plans to work together with Phoenix Technologies, Ltd. to develop support for locking USB personal storage devices (PSDs) that can be used with the BitLocker Drive Encryption feature. The technology developed by Lexar for BitLocker will coordinate the PSD-Lock technology protection of Lexar's new enterprise-class SAFE PSD products with the pre-boot authentication capability of Phoenix Technologies' TrustedCore.
This technology will provide protection against unauthorized use of the USB Flash Drive (UFD) that enables the boot of a BitLocker-protected computer and also protect the BitLocker Drive Encryption keys held in the UFD when the UFD is not connected to the protected computer.
Sapient and Watchfire announced that the two companies are working together to help ensure the security and compliance of clients' web assets. Sapient is building on its track record of helping clients stay ahead of tomorrow's most challenging issues by expanding its security and compliance services with Watchfire's AppScan Enterprise and WebXM software.
A recent wave of online security and privacy breaches over the last few years has resulted in more rigid control regulations and industry guidelines. AppScan Enterprise is the industry's first web application vulnerability scanning and reporting solution for the enterprise to deliver centralized control, remediation capabilities, executive security metrics and dashboards, and key regulatory compliance reporting. WebXM is the only automated Online Risk Management solution that audits quality, privacy, and compliance issues across corporate web properties.
May marked the official debut of PostPath, perhaps the only Linux-based email server to offer drop-in plug-compatibility with Microsoft Exchange. Formerly Apptran Software, the company was founded in 2003 to address the growing frustrations of organizations locked into Microsoft's expensive and inflexible email server.
PostPath has created an alternative by combining publicly available documentation with packet-level protocol decoding to implement the Exchange network protocols on the PostPath Linux email server. As a result, the PostPath Server is the first Exchange alternative to be able to drop into an existing Exchange farm without disruption. It is the first to interoperate with the server-to-server functions of already-deployed Exchange servers and the first to provide full-featured Outlook interoperability without the need for plug-ins, special connectors, or reconfiguration.
The PostPath Server also moves the information store to a Linux file system, simplifying storage, replication, backup, and recovery.
"Enabling a five-times performance increase over Exchange and a six-fold reduction in storage costs, granular backup and restore, standards-based virus-filtering, archiving, clustering, replication and disaster-recovery, AJAX web-client support, and drop-in compatibility, the PostPath Server is the first truly enterprise-class Exchange alternative," said PostPath's CEO, Duncan Greatwood.
Visit them at www.postpath.com.
At Interop 2006, Crescendo Networks announced the availability of its Application Layer Processing (ALP) technology - the first solution capable of intelligently accelerating application flows across all logical application tiers. It will be available as a software module for Crescendo's Maestro family of products beginning in Q3 of 2006.
Web applications commonly contain multiple logical processing tiers that reside on one or more physical server tiers. As application requests move between the tiers they must often wait for processing attention from upstream or downstream partners. Crescendo reduces this inherent latency by intelligently managing and optimizing the application flow between all logical tiers.
Using definitions created with the Crescendo Rule Engine (CRE), ALP can recognize which tier each application request is destined for. ALP also understands that different requests impose different processing "weights" on the application, with "heavier" requests taking longer to process than "lighter" ones. Request weights can be either manually configured or adaptively learned by ALP. In addition, ALP recognizes that each tier in the application has an upper processing limit in terms of simultaneous requests. When a tier reaches maximum capacity, ALP's Admission Control mechanism queues requests within Maestro, sending them to the application only when processing capacity is available.
"Crescendo's unique ALP technology represents the first time an AFE [application front end] is addressing our application and database performance bottlenecks behind the web server," said Ian Rae, president and CEO of Syntenic Inc. "This revolutionary end-to-end acceleration approach will enable applications to reach new levels of performance and scalability unattainable by existing acceleration technologies."
Crescendo Network's Maestro product line delivers application acceleration, a faster end user experience, increased security and application assurance. ALP offers patent-pending application layer processing algorithms modeled on bio-medical engineering research techniques. ALP's unique functionality eliminates application overload, intelligently schedules and prioritizes delivery requests, and monitors and reports on application performance across all tiers. Collectively ALP and SLT technologies deliver a level of application acceleration several times greater than the performance capabilities of any other AFE vendor.
ALP will be available for the Maestro product line as a software module. Pricing for the integrated Maestro-ALP solution package will start at $52,000. ALP will be available to existing Crescendo clients as an upgrade.
GroundWork, a leader in open source-based IT operations management, today debuted the integration of additional open source monitoring technologies with its flagship GroundWork Monitor Professional product. With the responsibility of monitoring large mission critical infrastructures as well as InteropNet's extreme interoperability requirements, Interop serves as the ultimate proof-of-concept venue for GroundWork's open source network management system.
GroundWork Monitor Professional is a fully-integrated IT infrastructure and network monitoring solution that is built on top of best-of-breed, open source systems and network monitoring and management tools.
As the official "Open Source Network Monitoring" provider for this year's InteropNet, GroundWork showcased their network management system with additional open source networking tools including Network Weathermap, NTOP (Network Top), MRTG (Multi Router Traffic Grapher), RRDtool (Round Robin Database), Cacti, and NeDi (Network Discovery).
"Open source is disrupting the economics of IT operations management tools," said Ranga Rangachari, CEO of GroundWork. "With GroundWork Monitor Professional, companies can access these open source technologies in a plug-and-play architecture that makes the installation and configuration easier than it is for most off-the-shelf proprietary monitoring solutions. You no longer have to be a technical whiz in open source to enjoy the innovation that's being driven by the community. And the cost savings are tremendous."
-- Network Weathermap (https://netmon.grnet.gr/weathermap/) is an open source technology that provides outstanding visual representations of the network and where usage patterns are occurring.

San Francisco-based GroundWork Open Source, Inc. (www.groundworkopensource.com) is the leader in the market for open source IT operations management software. More than 125 customers today use GroundWork as their IT operations management platform, taking advantage of the latest open source innovations in a framework architected specifically for mid-market and enterprise line-of-business customers.
Inivis Limited announced the release of AC3D 6, its affordable and intuitive 3D modeler and one of the longest-established 3D software programs available. Widely used in both educational and commercial environments, AC3D 6 is available for Windows, Mac OS X, and Linux.
AC3D 6 is now a fully integrated subdivision-surface modeler with its powerful polygon control now functioning seamlessly within a subdivision environment. Further new features and additions include a new faster real-time 3D editing engine, intuitive new-look controls, and a catalogue of changes that make selection, handling and 3D shape creation faster and easier than ever before.
AC3D 6 is immediately available for download as a 14-day full free trial from the AC3D website www.ac3d.org. The software is priced at $69.95 for a full version, with upgrades starting at $29.95.
Free CRM, the world's only free multi-user CRM software provider, has announced Google Gmail support, now directly integrated into the Free CRM product. Google Gmail users can now send out mass email campaigns, newsletters, and template email merges directly from the Free CRM system. Mail items in your Gmail account can now be copied to contacts in the CRM, capturing important communications via email and providing a seamless and secure integration with Gmail.
Businesses can now sign up for a free Gmail account at Google (https://www.Gmail.com) and a free CRM account at https://www.FreeCRM.com to take advantage of powerful business automation and integrated email functionality.
Google Gmail users can view their POP mail boxes using SSL encrypted security and also send out emails using secure TLS SMTP with FreeCRM.com, thus giving all Google Gmail users the power to utilize advanced security offered by Google Gmail. The combination of FreeCRM.com business services with Google's Gmail provides an inexpensive vehicle for powering small businesses using the latest in online communications technology.
With over 30,000 companies and 53,000 subscribers, FreeCRM.com is the world's leading on-demand CRM provider for businesses worldwide. With unlimited data storage and XML data integration, Microsoft Outlook integration, Palm Pilot, RIM / BlackBerry and Pocket PC support, FreeCRM.com is a major alternative to SalesForce.com and other CRM products.
Visit FreeCRM.com for more info.
LogicBlaze, Inc., a leading provider of open source solutions for Service Oriented Architecture (SOA) and business integration, has entered into an agreement with MySQL AB, under which LogicBlaze will distribute LogicBlaze FUSE for MySQL, a configuration developed exclusively for the MySQL database. Under the agreement, LogicBlaze will resell support for MySQL through the MySQL Network and offer product delivery and coordinated support through subscriptions to its Community-oriented Real-time Engineering (CoRE) Network, which delivers a suite of services for open source SOA, including consulting, training, developer assistance and enterprise production support.
LogicBlaze FUSE is the first SOA and Web 2.0 platform available as an Apache License 2.0 open source distribution, combining enterprise-class messaging scalability, performance and reliability with connectivity for a broad range of interfaces and transports, including native support for Perl, Python, Ruby and PHP.
LogicBlaze FUSE incorporates the Apache Incubator's ActiveMQ, the leading open source messaging platform based on the Java Messaging Service (JMS) specification. ActiveMQ is an open source, easily deployed and extremely robust messaging system that provides a foundation for reliability and scalability in distributed computing environments, including the LAMP and Ajax application stacks. The heart of the LogicBlaze FUSE platform is the Apache Incubator's ServiceMix enterprise service bus (ESB), the leading open source integration solution based on the Java Business Integration (JBI) specification. ServiceMix provides the foundation for an open, standards-based SOA environment. Additionally, LogicBlaze FUSE enables high availability for Ajax applications through Jetty, its HTTP server.
LogicBlaze FUSE for MySQL will be available through the LogicBlaze Web site at www.logicblaze.com.
Intel now has record breaking results on 20 key dual-processor (DP) server and workstation benchmarks. The first processor due to launch based on the new Intel Core microarchitecture -- the Dual-Core Intel Xeon processor 5100 series, previously codenamed "Woodcrest" -- delivers up to 125 percent performance improvement over previous generation dual-core Intel Xeon processors and up to 60 percent performance improvement over competing x86 based architectures.
Fully-buffered dual in-line memory (FB-DIMM) technology allows for better memory capacity, throughput and overall reliability. This is critical for creating balanced platforms using multiple cores and the latest technologies, such as virtualization, to meet the expanding demand for compute headroom.
Intel's new server and workstation platforms, codenamed "Bensley" and "Glidewell" respectively, will support dual- and quad-core processors built using Intel's 65-nanometer (nm) and future process technologies.
The first processors for Bensley and Glidewell are in the Dual-Core Intel Xeon processor 5000 series, previously codenamed "Dempsey." Shipping since March at a new lower price point, they bring higher performance and lower power consumption to the value server and workstation segment. Complementing the 5000 series, Intel will ship the next processor for Bensley and Glidewell in June -- the Dual-Core Intel Xeon processor 5100 series. Based on the Intel Core Microarchitecture, most of these processors will only consume a maximum of 65 watts.
Using the SPECint_rate_base2000 benchmark, which measures integer throughput, a Dell PowerEdge 2950 server based on the Dual-Core Xeon 5100 series scored 123.0, setting a new world record. Using the SPECjbb2005 benchmark, the Fujitsu-Siemens PRIMERGY RX200 S3 server based on the Dual-Core Xeon processor 5100 series broke previous records with a score of 96,404 business operations per second.
An HP Proliant ML 370 G5(a) server based on the Dual-Core Xeon 5100, and using the TPC-C benchmark, which measures database performance, smashed another world record by scoring 169,360 tpmC at $2.93/tpmC. IBM is also in the record books with the IBM System x3650 server based on the Dual-Core Xeon 5100, which scored 9,182 simultaneous connections in the SPECWeb2005 benchmark, which measures web server performance.
These benchmarks, along with additional records set by the Dual-Core Xeon 5000 and Dual-Core Xeon 5100 processors, can be accessed by visiting www.intelstartyourengines.com .
Intel's current price list for 1,000 units includes the Dual-Core Xeon processor 5050 [3GHz, 2x2MB L2, 667MHz FSB] at $177 on the low end, and runs up to the high-end Dual-Core Xeon processor 5080 [3.73GHz, 2x2MB L2, 1066MHz FSB] at $851. Pricing for Intel's Dual-Core Xeon processor 5100 was not quoted.
Speaking in Austin, Intel Corporation President and Chief Executive Officer Paul Otellini gave the first public demonstration of a low-cost notebook PC for students in developing nations [small pic] and announced a plan with the Mexican government to provide PCs to 300,000 teachers.
"We're close to achieving Andy Grove's vision of a billion connected PCs -- and the economic, social and personal gains that come with them," said Otellini, referring to the Intel co-founder and former CEO. "Our job now is to harness the combined potential of full-featured technology, high-speed connectivity and effective education to speed the gains for the next billion people -- and the next billion after that."
In his speech, Otellini said that the predictions by Grove and of another co-founder of Intel, Gordon Moore, form a backdrop for the new World Ahead Program from Intel. The program's 5-year goals are to extend wireless broadband PC access to the world's next billion users while training 10 million more teachers on the effective use of technology in education, with the possibility of reaching another 1 billion students.
Otellini demonstrated one of the PCs developed from Intel's extensive ethnographic research in developing countries, a small notebook PC for students codenamed "Eduwise." Eduwise is designed to provide affordable, collaborative learning environments for teachers and young students and integrate with other non-computing learning tasks such as note taking and handwriting with wireless pen attachments. Because it is a fully featured PC, the Eduwise design can accommodate other standard software and tools [and operating systems].
Otellini also announced that Intel and the Mexican government have reached an agreement to make Intel's new low-cost, fully featured PC available to 300,000 teachers by year's end. The systems, unveiled last month in Mexico by Otellini as part of Intel's Discover the PC initiative, provide an easy-to-use, fully functional PC for first-time users. Intel also plans to extend teacher training to 400,000 teachers in Mexico through the Intel Teach to the Future program by 2010.
The adventure of the Star Wars galaxy comes to the iTunes Music Store (www.itunes.com) as Cartoon Network's Emmy Award-winning animated series STAR WARS: CLONE WARS becomes available for purchase and download. The 20 chapters of STAR WARS: CLONE WARS Volume 1 are available on iTunes, with STAR WARS: CLONE WARS Volume 2 following in late June. Each chapter, or episode, is available for $1.99, with the full season of Volume 1 available for $10.99; episodes can be viewed, after download, on a computer or on an iPod.
The original animated series produced by Cartoon Network Studios, Lucasfilm Ltd. and renowned director Genndy Tartakovsky (Samurai Jack, Dexter's Laboratory) follows the exploits of heroic Anakin Skywalker, Obi-Wan Kenobi, Mace Windu and a legion of Jedi Knights as they fight against the forces of the Dark Side. In 2004 and 2005, Clone Wars was honored with Emmy Awards for Outstanding Animated Program (For Programming One Hour or More).
Hailed for its "fast and furious action" (USA Today) and described as "a thrill ride through the world of Star Wars" (San Jose Mercury News), CLONE WARS Volume 1 picks up where Attack of the Clones left off, while Volume 2 leads directly into Revenge of the Sith. The series originally aired on Cartoon Network and became the No. 1-rated show on basic cable among boys 9 to 17 years old and 12- to 17-year-old teens.
--Survey Finds Americans Want Strong Data Security Legislation
A survey from the Cyber Security Industry Alliance (CSIA) of 1,150 US adults
found 71 percent want the federal government to enact legislation to protect
personal data similar to California's data security law. Of that 71 percent, 46
percent said they would consider a political candidate's position on data
security legislation and "have serious or very serious doubts about political
candidates who do not support quick action to improve existing laws." In
addition, half of those surveyed avoid making online purchases due to security
concerns.
https://www.fcw.com/article94613-05-23-06-Web
https://ww6.infoworld.com/products/print_friendly.jsp?link=/article/06/05/23/78609_HNdatapolitics_1.html
--Millions of Blogs Inaccessible Due to DDoS Attack
A "massive" distributed denial-of-service (DDoS) attack
on Six Apart's blogging services and corporate web site left about 10 million
LiveJournal and TypePad blogs unreachable for hours on Tuesday, May
2. Six Apart plans to report the attack to authorities.
https://www.zdnet.com.au/news/security/print.htm?TYPE=story&AT=39255176-2000061744t-10000005c
--Soon-to-be-Proposed Digital Copyright Legislation Would Tighten Restrictions
Despite efforts of computer programmers, tech companies and academics to get
Congress to loosen restrictions imposed by the Digital Millennium Copyright Act
(DMCA), an even more stringent copyright law is expected to be introduced
soon. The Intellectual Property Protection Act of 2006 would make simply
trying to commit copyright infringement a federal crime punishable by up to 10
years in prison. The bill also proposes changes to the DMCA that would
prohibit people from "making, importing, exporting, obtaining control of or
possessing" software or hardware that can be used to circumvent copyright
protection.
https://news.com.com/2102-1028_3-6064016.html?tag=st.util.print
GMAC Global Relocation Services will conduct a complimentary online webinar to help companies that do business in India better understand that nation's unique culture. Titled "Exploring Indian Culture," the one-hour webinar will begin at 11 a.m. (EDT) Monday, June 5.
"India has what can only be described as one of the world's most complex, fascinating and least understood cultures, with roots dating back thousands of years," said Rick Schwartz, president and chief executive officer of GMAC Global Relocation Services. "For growing numbers of businesses throughout America and the rest of the world, India is evolving into an increasingly important market."
The webinar explores the cultural attributes of India and Indians in the work environment. It also includes an overview of cultural values and recent events that could affect business and expatriate activities in India.
Specifically, the webinar will:
-- Introduce life in India, and detail stereotypes and perceptions of Indian nationals
-- Introduce basic business and social "do's and don'ts" for interacting and working in India
-- Provide a framework for comparing and contrasting cultural differences in India with webinar participants' cultures along 10 research-validated dimensions
-- Suggest strategies for bridging cultural differences between personal and national cultures
Participation is free and limited to the first 100 registrants. To register, go to: https://www.gmacglobalrelocation.com/insight_support/cc_india_reg.asp
Talkback: Discuss this article with The Answer Gang
Howard Dyckoff is a long term IT professional with primary experience at
Fortune 100 and 200 firms. Before his IT career, he worked for Aviation
Week and Space Technology magazine and before that used to edit SkyCom, a
newsletter for astronomers and rocketeers. He hails from the Republic of
Brooklyn [and Polytechnic Institute] and now, after several trips to
Himalayan mountain tops, resides in the SF Bay Area with a large book
collection and several pet rocks.
By Thomas Adam
Configuring FVWM can seem like a chore at times. Indeed, there are certain aspects of it that are easy - and some that are less so. I've been helping people configure FVWM for some time now, and while I have delved into some of the more esoteric regions of FVWM, it seems that many people find the use of Style lines the hardest aspect to grasp. Hopefully this article will help clarify things.
Style lines (in FVWM parlance) are those lines in an FVWM configuration file which apply some specific style to a window. It could be, for instance, that one would want all windows called foobar to be sticky by default. Hence, in one's .fvwm2rc file:
Style foobar Sticky
... would ensure that fact. One can also add multiple properties to a given window [1]. For instance, it might be desirable that the same window, foobar, have no visible title and a border width of eight pixels. This can be expressed as:
Style foobar Sticky, !Title, BorderWidth 8
More style lines can then be added, line by line, with a specific window for each style. Here's an example:
Style amble* !Borders
Style Login? CenterPlacement
Style Gvim Title, !Sticky
Style urlview StartsOnPage 1 1, SkipMapping, Icon wterm.xpm, !Closable
Style irssi StartsOnPage 1 1, SkipMapping, Icon 32x32-kde/chat.xpm, !Closable, \
      StickyAcrossDesks
FVWM also allows for the use of wildcards when matching a window's name, as in the above example. The '*' matches any run of characters (including none), whereas the '?' matches exactly one character.
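As a quick illustration (the window names here are invented):

# Matches "xterm", "xterms", and "xterm on remotehost" - and plain "xterm" too:
Style xterm* SkipMapping
# Matches "xvt1" or "xvta", but neither "xvt" nor "xvt10":
Style xvt? Sticky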
What's even more important is that the matching of Style
lines
is case sensitive. This means that for the
following, both are separate entities:
Style Window1 BorderWidth 23
Style WINdoW1 BorderWidth 23
So far, everything's going great. Window names are being added as style options, and everything's working just fine - until you have a series of lines which look like the following:
Style myapp* NoStick, NoTitle
Style Fvwm* NoBorders, NoTitle, CirculateSkip, Sticky
Style Mozilla* NoTitle
Style Firefox* NoTitle
At first glance there's nothing wrong with them. Sure, FVWM is doing exactly what you asked for... except that '*' is a greedy match, which is what one would expect in using it. In the example above, Mozilla has, in theory, only been told to display no title as a Style directive - but it may also produce entirely unexpected results due to that greediness. As an example, it may match an earlier declaration (e.g., 'Style Fvwm*') if that string exists in the window title.
In all of the problems encountered with style lines, this has to be the
most common. The reason for this isn't that Mozilla or Firefox are
misbehaving, but usually that there's a lack of understanding of
how Style
lines are applied.
With applications such as Mozilla and Firefox, titles are dynamic - they often change as a tab or page loads in them. Assuming that we're using the style lines from above, and that we're looking at, say, a webpage that has the title: "Fvwm: my nice screenshot":
Style Fvwm* NoBorders, NoTitle, CirculateSkip, Sticky
...this matches (in part) some of Firefox's title [1]. If one were to then restart FVWM with this page still showing in Firefox (or issue a Recapture command), then the window would become sticky - annoying, and certainly not what we want. Most people will also try something like this to remedy the situation:
Style *Firefox* NoBorders
...which also has the same problems, and perhaps even more so, since that's matching 'Firefox' anywhere within the title of a window.
To get around this, something unique needs to be used. With dynamically changing titles such as those in a web browser, specifying the full name of the window just won't work. However, FVWM also allows us to match by a window's class.
Take Firefox. That will either have a class of Firefox-bin
or Gecko
- which will provide a unique class match.
The reason one wants to match on a window's class in this instance is that it's less ambiguous than the title of the window, which might be something like this:
Fvwm Forums :: Post a reply - Mozilla Firefox
There are a few ways to obtain a specific window's class. Perhaps the preferred option is using the module FvwmIdent, although window manager-agnostic commands such as xwininfo and xprop can also be used. Using the window class instead of the title, the previous style command would be replaced with:
Style Gecko NoTitle
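If FvwmIdent isn't to hand, xprop works just as well: run it from a terminal, click on the browser window, and look for the WM_CLASS line. The output below is representative only - the exact resource/class pair may differ between builds:

$ xprop WM_CLASS
WM_CLASS(STRING) = "Gecko", "Firefox-bin"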
You can be fairly well assured that the Class of a window tends to be unique to that application (the exceptions are things like RXVT, which has sometimes been known to set its class to that of XTerm). The upside, though, is that the same application will generally always have the same Class.
Indeed, you might be wondering how FVWM knows which attribute a style line matches against. The truth is that it doesn't really know; instead, FVWM cycles through a known series of window attributes. Hence, FVWM will match your window's style line thus:
Title --> Class --> Resource
So, FVWM checks the title of a window first. If a match is unsuccessful, it will then look at the Class, and if that fails, it will then look at the Resource of that window for a match. By and large, where wildcards are used in style lines -- it's normally the window's title that gets matched in the first instance.
There are other considerations that need to be taken into account. Style lines are ANDed. That is, for successive lines that are specified one after the other for the same application, both lines are considered. So for the following:
Style foo Sticky
Style foo !Title
The window 'foo' would be displayed without a title and would become sticky. Because of this, the ordering of style lines is VERY important, to prevent conflicting styles or other oddities that can creep in.
But that's not the entire story, either. Specificity is important. Yes, for the same window title, the styles are ANDed together. The order that the style lines appear within your .fvwm2rc also matters. For those of you who are familiar with the concepts of Object Oriented programming, you can consider style lines as following the rules for inheritance. The rule of thumb for style lines is:
"Always generalise, before you specialise."
That means, aggregate styles for all windows (Style * [...])
before you specify the style lines for specific applications.
FVWM's parsing is quite literal in that sense. When FVWM parses its
configuration file, it reads it line-by-line. This is why it's
important to think of Style lines as an inheritance model.
Hence, if you wanted all windows to be sticky, and a window whose name is 'foofoo' to not be sticky and have no borders, the correct order to write that in would be:
Style * Sticky
Style foofoo !Sticky, !Borders
Note that because we had previously declared a global style in which all windows are sticky, it is necessary to negate that Sticky condition for the specific application. Otherwise it would be "inherited".
Writing that the other way around, however, gets one into trouble:
Style foofoo !Sticky, !Borders
Style * Sticky
The greedy match of "*" for all windows, irrespective of the specific condition for 'foofoo' above, means that the greedy match takes precedence.
It was mentioned earlier that style lines are ANDed. This is
indeed true, and you can see that in operation. But we have another rule
that applies: given two contradictory Style
statements, the latter one always wins. So, for example (and I see
this a lot in people's configs), assume you had written this:
Style * SloppyFocus
Style * FocusFollowsMouse
... because they're both focus policies applied to all windows, FocusFollowsMouse wins, being the last one specified.
Talkback: Discuss this article with The Answer Gang
I used to write the long-running series "The Linux Weekend Mechanic", which was started by John Fisk (the founder of Linux Gazette) in 1996 and continued until 1998. Articles in that format have been intermittent, but might still continue in the future. I currently write occasional articles for LG, whilst doing a few things behind the scenes. I'm also a member of The Answer Gang.
I was born in Hammersmith (London UK) in 1983. When I was 13, I moved to the sleepy, thatched roofed, village of East Chaldon in the county of Dorset. I am very near the coast (at Lulworth Cove) which is where I used to work. Since then I have moved to Southampton, and currently attend University there, studying for a degree in Software Engineering.
I first got interested in Linux in 1996 having seen a review of it in a magazine (Slackware 2.0). I was fed up with the instability that the then-new operating system Win95 had and so I decided to give it a go. Slackware 2.0 was great. I have been a massive Linux enthusiast ever since. I ended up with running SuSE on both my desktop and laptop computers. Although I now use Debian as my primary operating system.
I am actively involved with the FVWM project, writing documentation, providing user-support, writing ad-hoc and somewhat esoteric patches for it.
Other hobbies include reading. I especially enjoy reading plays (Henrik Ibsen, Chekhov, George Bernard Shaw), and I also enjoy literature (Edgar Allan Poe, Charles Dickens, Jane Austen to name but a few).
I am also a keen musician. I play the piano in my spare time.
Some would consider me an arctophile (teddy bear collector).
I listen to a variety of music.
By Thomas Adam
Most window managers have some form of automation that allows the user to 'script' various aspects of their operation. Indeed, the 'kahakai' [1] window manager has long since defined Python as a way of scripting its capabilities.
In FVWM, there are a few ways of scripting events. The use of FvwmPerl is one such way. However, in almost all cases, when people say they want to define actions, what they're really after is some way of conditionally checking windows when they're created, or something similar. The combination of this ability coupled with a series of commands grouped together to form what FVWM calls a function is something that can be quite powerful.
FvwmEvent
is a module - a piece of code that is separate
from the core of FVWM. There are a lot of different modules in FVWM, all of
which share that important distinction; there's no point in loading extra
code which might never be used, or loading it on an ad-hoc basis where the
user never requested it.
FvwmEvent
is a module which allows listening for various events,
and acting upon them when they occur. Originally it was known as FvwmAudio
, since its job
was primarily in playing sounds when various things happened (such as a window
being closed, iconified, etc). FvwmEvent
still retains
that functionality, but now also has the capability of running specific
tasks based on those events.
So what are these events? They're triggers associated with the
operations of windows (many of which are wrappers around various low-level
Xlib library calls). Whenever an event that FvwmEvent
has been
told to listen for occurs, it will look for an associated action and
execute it. A sample valid list of events that FvwmEvent
knows
to listen for can be seen in its man page [2]. The one this article will examine is add_window; note that this is only a working example, and should be thoroughly tested before being used in production.
A generic FvwmEvent
configuration looks like the following (note that the
line numbers have been added as a convenient reference point, and are not
part of the configuration):
1 DestroyModuleConfig FvwmEvent: *
2 *FvwmEvent: <some_event_name> <some_action>
3
4 Module FvwmEvent
The very first thing that happens is that the module config is destroyed
(Line 1). This might seem a little strange at first given that nothing has
been declared yet, but the point of it here is that for any previous
definitions of it (say via multiple parsings of one's '.fvwm2rc' file during
restarts), it gets destroyed and then recreated; otherwise, the module
definition would just be added to continuously - something that is most
undesirable. What follows next (line 2) is the start of the alias
definition that FvwmEvent
will eventually read. Obviously, <some_event_name>
and <some_action> are dependent on the event and the action required.
<some_action> might be a function, or a single command. Line 4 simply
tells FVWM to load the FvwmEvent
module.
Note what's happening here. Most modules have the concept of aliases - that is, an identifier that the module can be told to use (in earlier versions of FVWM, in order for multiple aliases of a specific module to be used, one had to symlink the alias name to the module). In the
case of the generic example above, that's using *FvwmEvent which
is fine until more instances of FvwmEvent
need to be loaded.
It's permissible, of course, to just have one instance of
FvwmEvent
running and declare all the events it will listen
for in there. The problem is that it's often desirable to run different
actions on the same event - something you can't do with one alias. So the
heuristic approach is to define a unique alias to FvwmEvent
,
which isn't *FvwmEvent. Any name can be used, as will become
apparent.
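For instance - the alias and function names here are placeholders - two instances can each act on the same event independently:

DestroyModuleConfig FE-Sound: *
*FE-Sound: Cmd Function
*FE-Sound: add_window PlayNewWindowSound
Module FvwmEvent FE-Sound

DestroyModuleConfig FE-Max: *
*FE-Max: Cmd Function
*FE-Max: add_window HandleNewWindow
Module FvwmEvent FE-Max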
The question at hand is how to make windows start maximised. Of course, in asking it, people often also mean so-called 'full-screen' - which implies the removal of any title bars, borders, and other such window decorations. That's fine, and can be dealt with at a later stage, although the premise of maximisation has to be discussed first of all.
It also seems to surprise many people that FVWM has no 'StartMaximised' style option. The reason for this is that introducing such an option would break the ICCCM [5] - since clients set their own geometry, either by themselves or via user interaction.
The first thing to be done is setting up FvwmEvent
:
DestroyModuleConfig FE-StartMaximised: *
*FE-StartMaximised: Cmd Function
*FE-StartMaximised: add_window StartAppMaximised
Module FvwmEvent FE-StartMaximised
This tells FvwmEvent
a few things. One is that the alias we're using for
it is *FE-StartMaximised. Secondly, we've informed the module
that the command specified for the event is a function. Thirdly,
the event we're listening for is add_window. Then the module is
started.
The function we'll declare is quite simple to start off with (again, line numbers are for point of reference only, and are not part of the syntax):
1 DestroyFunc StartAppMaximised
2 AddToFunc StartAppMaximised
3 + I Maximize
Line 1 destroys the previous function definition. It's generally a good idea to do this when declaring functions, since it removes a previous definition for it. Indeed, the AddToFunc command (line 2) is cumulative. Each time it is used, it just adds to the definition of the function. If it doesn't exist, the function is created. Quite often this cumulative nature isn't wanted, so removing the definition beforehand is advised. Line 3 is the important line since it is the line which defines the action for the function.
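To see that cumulative behaviour in isolation, consider this small sketch (Echo simply prints its argument to FVWM's error stream):

DestroyFunc DemoFunc
AddToFunc DemoFunc
+ I Echo first action
AddToFunc DemoFunc
+ I Echo second action

After those lines, invoking DemoFunc runs both Echo commands; without the initial DestroyFunc, every re-read of the configuration would append still more copies.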
One can define as many actions within a function as is necessary. There are a few prefixes as well which define when and how those actions are to be invoked:
Function Operators
Context | Meaning |
I | Immediate - executed as soon as the function is called. |
C | Click - executed when the mouse button is clicked once. |
D | Double-click - executed when the mouse button is double-clicked. |
M | Motion - executed when the mouse is moved. |
H | Hold - executed when the mouse button is held down. |
Usually the most common operator is I for non-interactive functions, since those commands will always get executed when the function is called. So within this example, the command Maximize is run whenever a window is created. Try it and see; start up an xterm. It will then be maximised. Start up any application in fact, and all of those windows will be maximised. Clearly this is suboptimal, but a start nevertheless.
So far, it's been shown how one can use FvwmEvent
plus a
function to define actions for events. But there will be times in loading
applications (which produce windows mapped to the screen) when some windows
won't get maximised. The reason for this has to do with the context
in which the function is being run.
In most cases, functions are designed to run within a window context. This means that, when they're run, it's known which window or windows the function is to start operating from. Without the proper context, a function will prompt for one, or not run at all. So it's important to ensure a context is forced wherever it's not apparent.
One can achieve this in a number of ways, and a lot of it depends upon the situation the function is likely to be called in. Recall the definition for StartAppMaximised - at the moment, the line looks like:
+ I Maximize
This already assumes a window context. But one can always make sure by using the ThisWindow command, as in:
+ I ThisWindow Maximize
ThisWindow is extremely useful for referring to the operand window directly, without making further assumptions about it. Indeed, there are other conditional commands, such as Current, which is quite a common way to imply context:
+ I Current Maximize
However, its use implies that the window already has focus. Sometimes this is useful to refer to the specific window; however, in the case of the StartAppMaximised function one cannot assume the operand windows will always have the focus - hence, the use of ThisWindow is preferable. Where one is unsure as to the operand window (i.e., it is to be decided when the function runs), one can use the Pick command which will prompt for a window to operate on if it is not already known.
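As a small sketch of Pick supplying the missing context - the binding and function name are invented - the following prompts for a window and then maximises it:

DestroyFunc PickAndMaximize
AddToFunc PickAndMaximize
+ I Pick Maximize
Key F10 A M PickAndMaximize

(FVWM 2.4.x users may need to invoke it as 'Function PickAndMaximize'.) Returning to StartAppMaximised, the next refinement is to restrict the function to particular windows: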
DestroyFunc StartAppMaximised
AddToFunc StartAppMaximised
+ I ThisWindow ("name of window") Maximize
What happens here is that only the window with the name 'name of window' is considered. If it matches the window just created, then it is maximised; otherwise, nothing happens. Of course, the Maximize command has a toggling action to it: if the said window 'name of window' were already maximised at the time it was created (presumably via some command-line flag), then the Maximize command would have the opposite effect, "unmaximising" it. Luckily, FVWM has a conditional test, Maximized, that can be used to test whether the window is maximised. The negation of this is !Maximized:
DestroyFunc StartAppMaximised
AddToFunc StartAppMaximised
+ I ThisWindow ("name of window",!Maximized) Maximize
Looking better, certainly. There's still room for improvement, though. In FVWM 2.5.X, one is able to specify multiple windows to match on, if more than one window need be considered:
DestroyFunc StartAppMaximised
AddToFunc StartAppMaximised
+ I ThisWindow ("name of window|another window", \
  !Maximized) Maximize
The '|' operator acts as a logical OR command, matching either of the titles and applying the maximized condition to the (possibly) matched window. In FVWM 2.4.X, one would have to use multiple lines one after the other:
DestroyFunc StartAppMaximised
AddToFunc StartAppMaximised
+ I ThisWindow ("name of window",!Maximized) Maximize
+ I ThisWindow ("some_window",!Maximized) Maximize
There's still one more condition to consider: different window types. Up until now, the assumption has been that normal windows are considered. Whilst in most cases that's true, FVWM has (at the simplest level) two different window types that it manages; ordinary application windows and transient windows. By its very nature, a transient window is one which is generally only on screen for a short length of time. Also known as 'popup' windows, they're typically used for 'Open' and 'Save' dialogue windows. It's not likely (due to their implementation) that one is going to be able to maximise them anyway, but it's worth excluding them. FVWM allows for this via the Transient conditional check, which can be negated to !Transient:
DestroyFunc StartAppMaximised
AddToFunc StartAppMaximised
+ I ThisWindow ("name of window|another window", \
  !Maximized, !Transient) Maximize
The basis and functionality for the StartAppMaximised function is complete. The last remaining item is to make certain windows borderless and to remove their title, so that they appear to cover the entire viewport. In the simplest case, the window's name or class is known beforehand, and an appropriate style line can be set [6]. For example:
Style "name of window" !Title, !Borders, HandleWidth 0, BorderWidth 0, ResizeHintOverride
That line ought to be pretty self-explanatory. The ResizeHintOverride style makes FVWM ignore the resize increments requested by column-size-aware applications (such as XTerm, GVim, XV, etc.). Without it, some applications would leave a noticeable gap at the bottom of the screen.
This has been a very brief look into how FvwmEvent
can be
used to monitor and react to various events. The most important thing to
remember about the use of FvwmEvent
is specificity: always be
as specific as possible when operating on windows. Where a certain amount
of automation is required, always enforce a given context, unless it's a
requirement that the user is to select an appropriate operand window at the
time the event is triggered.
Some general links that might be of interest:
https://edulinux.homeunix.org/fvwm/fvwmcookbookfaq.html
Talkback: Discuss this article with The Answer Gang
By Edgar Howell
Recently, my wife and I spent a week in a hotel on the Dutch coast, and when we weren't out walking on the beach or riding on one of a near-infinity of bicycle paths, I was using a laptop to try to catch up on some reading. By a fortunate (???) coincidence, I failed to take the mouse along and was forced to use the idiotic touch-pad: apparently designed for the Wimp/OS world, it likes to trigger "events" at random intervals, perhaps at transition from one window to another. In any case, suddenly I was confronted with the following:
Wait a minute - my wife can get by in Dutch, but mine is terrible! I don't have anything in Dutch on this machine... What is going on here?
Well, my Dutch may be bad, but this wasn't too hard to decipher: apparently, the Dutch have a verb that means "to use the Internet", and the 2nd line translates as "use the Internet wirelessly ... as if at home" - and the green heading in the right-most column refers to the hotel where we were staying.
Guess what that means? They have a hot spot - and Knoppix found it without my even asking!
So I made things a bit easier on myself by clicking on the British flag:
When I showed this to my wife, her first question was: can we check on e-mail? Well - sure, why not?
Cautious as I am, though, I wanted to first test the water before jumping in. Under Knoppix, the Panel next to the "K" for KDE has a Tux icon; clicking there, then "Network/Internet" and "KWiFIManager (Wireless LAN Manager)" provided the following information:
Wow! Since Knoppix started DHCP at boot time, it had already obtained an IP-address. I had no idea this was going on. Klaus, you are so cool!
Clicking "File" and then "Disable Radio" was the emergency brake I wanted.
Now that I knew that I wasn't going to turn into the Sorcerer's Apprentice, the ice didn't seem quite as thin as it had a few moments earlier. After all, if it is necessary to pay for a block of time up front, I didn't want to waste any of it.
Something like $7 for 50 minutes isn't exactly cheap, but under the circumstances, it didn't seem unreasonable. More than enough time to check on e-mail a couple of times while we are here, and maybe even surf a bit.
After entering a user name and password as well as a valid e-mail address, it was time to provide a credit card number.
After that, it was just a matter of starting a browser and entering a URL. This initiated contact with the provider requiring a log in, which then provided a couple of options (which I didn't investigate) prior to actually connecting and starting the clock.
Real road warriors certainly don't worry about minor details such as cost: like E.T., they just want to get back in touch. But when you are on vacation and have to pick up the tab yourself, you might not consider the hotel WLAN. I certainly will in the future.
The great advantage for us Linux users is that Knoppix is quite happy to run without access to a hard drive. I didn't have to worry about anything evil. The one or two things that I found important enough to keep, I just wrote to a USB stick. Neat. Clean. No problem.
Do note, however, that my Knoppix 3.4 CD does not seem to support WLAN - at least I didn't find anything like it. Since 5.0 is now out (Mid-May 2006), there is little incentive to want an earlier version on a notebook anyway.
Pure conjecture on my part: this notebook has the Intel Centrino (TM) label and the provider at the hotel, KPN, displayed it as well. Apparently Linux supports this hardware configuration well.
For what it is worth, I sacrificed a SuSE partition and went on-line after booting it ("sacrificed" since it required access to the hard drive and I will now re-format that partition as potentially contaminated). It wasn't particularly difficult to figure out how to do this under SuSE as well.
I haven't been able to replicate this since returning home, but that certainly is just due to the absence of a hot spot in the vicinity. Here, Knoppix shows the following:
This was my very first encounter with WLAN and may not be typical of other providers.
While I would expect a somewhat similar scenario from other providers, perhaps the Dutch are just very good at making it easy on newbies.
[ I do a lot of travelling, and connect to a wide variety of strange WLANs. In my experience, at least, connecting to a wireless LAN with Linux is usually just as simple as Edgar describes. -- Ben ]
Still, it would seem that WLAN access is no longer just for high-tech road warriors. While it's not something I'd do in an airport somewhere waiting for a connecting flight, I definitely will be looking for connectivity in any hotel where a stay of several days is involved.
That will be a major improvement over dragging myself to the hotel lobby, checking in with some sleepy clerk, and getting stuck using some outdated OS that keeps getting in my way - and may even retain things I don't want kept in spite of my clearing the history before leaving the browser and logging off. Knoppix to the rescue!
Talkback: Discuss this article with The Answer Gang
Edgar is a consultant in the Cologne/Bonn area in Germany.
His day job involves helping a customer with payroll, maintaining
ancient IBM Assembler programs, some occasional COBOL, and
otherwise using QMF, PL/1 and DB/2 under MVS.
(Note: mail that does not contain "linuxgazette" in the subject will be
rejected.)
I have always been skeptical of media players and music management software, using a bare-minimum player like XMMS to play my digital music. I have tried quite a few different ones: Rhythmbox, Banshee, etc.; I've also given a fair amount of time to demos of iTunes and the Yahoo Music Engine, recommended by excited Mac and Windows weenies.
So, when a friend came to me and said "try amaroK!", I said, "Yeah, right!" and blew the idea out with a puff of smoke.
Then, yesterday morning, I got stuck in a traffic jam and as a result didn't feel like writing any kind of code - so I decided to install amaroK. :) Installing it on FreeBSD 4.11 could only be covered in a full-fledged technical orientation session; since that might scare lots of people, we will not dwell on those perils. Besides, I have it installed under Linux as well.
So, after I installed amaroK... was I blown? or was I blown?
The really cool thing about amaroK is that you can install it and it's ready to serve you in its full glory within a couple of minutes - at least under Linux. :)
Tracks are imported into a "collection", which is displayed in the left pane in a file manager-like fashion. Tracks must be added to a playlist before one can hear them. There is a playlist tab which has "Smart Playlists", and - of course - you can create your own.
Unlike most applications on Linux, eye-candy is not overlooked here, but is actually given as much importance as usability and features. I especially liked the "on-screen display" which comes up every time amaroK starts playing a new track; mousing over the task bar icon shows the name of the current song and also adds a little album cover. A lot of work has gone into creating such a visually appealing tool, and in my opinion at least, the effort was well worth it.
amaroK copies the winamp/XMMS shortcuts and does a great job of handling them. It also adds XMMS shortcuts to a meta key of your choice and gives you "Global Shortcuts." Don't like the current song? You don't have to switch applications/virtual desktops, search for amaroK, or look for any 'skip' keys - just press WIN+B. Sweet, or what?
And did I tell you it displays the lyrics of the current song in a little box? Did I mention it also gives you a musicminds recommendation? Maybe I forgot about the fact that it also looks up the artist at Wikipedia and shows the information in a little box. Oh, yes... I did forget to mention that it automatically downloads album covers from Amazon.
Sometimes, though, amaroK gets in my way rather than helping. For example: every time you play a song, it changes to the "Current" tab. This is rather irritating; if you were in the 'Playlists' tab and going through a collection - listening to a bit of this, and a bit of that - you'll be forced to the 'Current' screen.
That being said - I really like amaroK and am really looking forward to version 1.4. What I particularly like about it is the cover/lyrics/tag fetching, as well as the potential value of dynamic mode and its iPod support.
I have Linux on my laptop, and it's also running amaroK now. In my opinion, 'gstreamer' is a whole lot more stable than 'aRts'; the latter seems to be unmaintained, and imposes a lot of overhead.
Amarok appears to be written by hackers who themselves are truly fond of music. It is an increasingly attractive and capable tool, and these guys seem to be just getting warmed up. I'm really looking forward to seeing what they'll do next!
Talkback: Discuss this article with The Answer Gang
I work at Yahoo! on global Yahoo! operations and application-level monitoring, writing articles about, speaking on, and researching monitoring and how it helps application performance. In my free time I do testing, documentation, and bug fixes for the free software projects that I use the most.
By Rick Moen
One of the advantages of hanging out in The Answer
Gang is being privileged to hear when one of the Gangsters goes off on a
rant (which is usually lots of fun), or, as often happens in the case of
Rick Moen, fires off a mini-tutorial on some interesting subject. Recently,
in response to a comment of mine, Rick posted a reply that we'd like to
share with you, our readers; not only was it a bit long for our Mailbag,
but it also deserved the status of a full article. Many thanks to Jimmy
O'Regan for formatting the content; as with so many jobs around LG, "many
hands make light work".
-- Ben Okopnik
Ben wrote:
As to spam, you may or may not have noticed, but the incidence of it has gone down (my estimate) by ~99% since Rick Moen took over administering the list. It's been more than a year, and I don't think I've seen a dozen spams get through during that time.
Herewith, a brief report on the state of system spam-rejection.
As a reminder, TAG's setup is challenging, in that list policy precludes requiring subscription before posting. Thus, a prime antispam tool isn't available.
The most-effective antispam tool is use of Exim 4.60 (w/sa-exim 4.2)'s "callout" interface during incoming SMTP sessions, used to verify that the connecting MTA is RFC-compliant (e.g., accepts DSNs and mail to the postmaster and abuse accounts). This blocks an immense percentage of spam, and 55x-rejects those. After that, the mail is subjected to SpamAssassin 3.1 testing: High spamicity mail is 55x-rejected, very high-spamicity mail is 45x-rejected (i.e., teergrubing, which attempts to punish particularly flagrant malefactors by tempting them to continually reattempt delivery).
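By way of illustration - this is a sketch of the general shape, not the actual linuxmafia.com configuration, and the threshold values are invented - the two pieces look roughly like so. In Exim 4's acl_smtp_rcpt ACL:

  # Refuse mail whose envelope sender fails callout verification
  require verify = sender/callout

...and in sa-exim.conf, SpamAssassin score thresholds along these lines:

  SApermreject: 12.0
  SAteergrube: 25.0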
I use some but not all of the optional enhancements provided by J.P. Boggis's "EximConfig" package (https://www.jcdigita.com/eximconfig/), of which I have v. 2.0 installed:
The setup does not (yet) use greylisting, which might be beneficial.
I've been reluctant to make my MTA depend on MySQL.
The setup does not (yet) attempt to SA-test mail with full unescaping and Base64 decoding of the message body text, which might be beneficial.
I've been reluctant to make my MTA depend on embedded Perl and MySQL.
The setup does not (yet) attempt flood protection / duplicate messages / repeat failed deliveries.
I've been reluctant to make my MTA depend on MySQL.
The setup does not (yet) attempt direct detection of malware as distinct from other types of spam.
I've been reluctant to make my MTA depend on detectors of MS-Windows malware, which after all isn't actually harmful, only annoying -- and it seems lame to have to run a virus checker on a Linux box, even ClamAV.
Spam addressed to TAG since March 1 (to date) that evaded my filters came to a dozen messages: five advance-fee frauds, four phishing frauds, one in Russian, one piece of Windows malware, and one UCE.
In each case, I blackholed the delivering IP upon receipt -- but that's ultimately a poor solution: You end up endlessly playing whack-a-mole. Still, it makes sure there are no repeats from that IP.
I may finally overcome my reluctance to enable those above-cited optional EximConfig features, pursuant to my long-deferred server migration to slightly less antique hardware, Real Soon Now. Also, I could attempt to update Exim's and SA's rulesets, to help them pattern-match better on 419 and phishing frauds. (I gather that financial fraud spam has been really taking off.)
The point of the recounting, above, is to highlight where the major weaknesses in the current system lie: non-English-language spam, and financial fraud spam.
I try to be really, really careful in any experimentation with my MTA, since mail is very important to me and my users, and since I can't afford to lavish unplanned debugging time fixing a malfunctioning mail system. Thus, even though I know I really should upgrade to EximConfig 2.2, sa-exim 4.2.1, and SpamAssassin 3.1.1 -- and put in place some custom third-party rulesets for the latter (https://wiki.apache.org/spamassassin/CustomRulesets), I've been slow to do so for fear of breakage and extra time commitment.
[Description of the existing toolset: Exim 4.60, sa-exim 4.2, EximConfig 2.0, and SpamAssassin 3.1 -- along with its design goals, current weaknesses, and reasons why I've been reluctant to implement some optional extensions and in general am very careful about breakage.]
Warning: Following paragraphs include opinions, which you are welcome to either adopt, take home, and admire, or scowl at and hurl imprecations towards, as local cerebral wiring policy dictates. Readers looking for guaranteed objective truth should stick to mathematics -- and even then stay far away from anything touched by Kurt Gödel.
Spam defence is endlessly controversial for lots of reasons, including inherent drawbacks (false positives, false negatives, other types of collateral damage) in even the best regimes. No matter what tactics you use, you annoy someone -- and mailing lists (being glorified mail-forwarding devices) have proven to be Ground Zero for the spam problem and resulting controversies.
The "luv-main" mailing list, of the Linux Users of Victoria (Australia) has recently erupted in one such donnybrook, where the overwhelming majority (and pretty much all of the more-technical members) wish to convert the mailing list to publicly searchable archives, while a vocal minority stand in the way, protesting their right to continue hiding their mailing addresses from spammers. Many innocent electrons have been killed in the resulting discussion, but at least it was established to the satisfaction of most that the only reasons hide-from-spammers tactics "work for many people" (as a proponent put it) are short usage periods and dumb luck: Over time, any address used for mail will be discovered by spammers through any of several diverse means, including exposure on other people's virus-infected Windows boxes.
LUV's likely compromise solution will be creation of a fully public "luv" mailing list alongside the obscured "luv-main" one, with the expectation that the latter ghetto will wither and die. (Naturally, the minority aren't happy with that proposal, either.)
On the other side of the Pacific Ocean, the Silicon Valley Linux User Group's mailing lists are gradually becoming more spammy, despite using (modulo versions) exactly the same MTA and spam-rejection software I use, because SVLUG's Linux server has been completely unmaintained for 2+ years.
I'm not unsympathetic towards hide-from-spammers people: As they often point out, most use mail facilities over which they have no administrative control -- often, in fact, their work mailboxes. A deluge of spam would make their lives miserable and might even interfere with their professional lives. This perceived loss-of-control threat increases their stress levels immensely, and impels them to make sometimes emotional demands on listadmins and others. (I frequently get mail saying "I [/someone else] inadvertently revealed my private e-mail address on mailing list $FOO. Would you mind please removing it from the public archive?" I always do help such people. Even though I don't share their general approach, it's the kind thing to do.)
In any event, I'd been pondering both the slow deterioration of SVLUG's spam-defences (from my front-row seat as its mailing list moderator) and my own system's occasional slippage on TAG mail -- e.g., the aforementioned twelve spams over the last six weeks: 5 advance-fee frauds, 4 phishing frauds, 1 in Russian, 1 Windows malware, and 1 UCE. As with SVLUG's system, I realised that I could probably do better, after a bit of updating.
So, a couple of days ago, I did some. You'll maybe have noticed that we've had absolute, blissful silence on the spam front, since then -- which might be coincidence, or maybe not.
I figured out that one easy path to the low-hanging fruit was: beef up the SpamAssassin rulesets. In an ideal world, this would not be my preferred approach: SA is a very beefy and slow Perl app, and those same heuristics, implemented as improvements to the Exim4 front-end rulesets would, mutatis mutandis, be much more desirable. However, those aren't at hand, while there's a veritable bazaar in third-party ruleset files for SA, right here:
https://wiki.apache.org/spamassassin/CustomRulesets
The ones I dropped into /etc/mail/spamassassin, a couple of days ago, were as follows:
70_zmi_german.cf    Catches German-language spam.
Chinese_rules.cf    Rules to catch spams written in Chinese.
mime_validate.cf    Finds MIME errors common in mails sent by bulk mailers.
blacklist_ro.cf     Catches spams written in Romanian or by Romanian spammers.
evilnumbers.cf      Phone #s, PO boxes, & street addresses harvested from spam.
chickenpox.cf       Looks for words broken up by extraneous symbols.
french_rules.cf     Catches spams written in French.
malware.cf          Detects URLs known to point to malware.
There's a standard cronjob (Rules du Jour) to keep these and others updated; I haven't implemented that yet, as I'm still testing.
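A typical crontab entry for that job would look something like this (the script's path, schedule, and the SpamAssassin restart command all vary by installation):

  # Refresh third-party SpamAssassin rulesets nightly via Rules du Jour
  30 4 * * * root /usr/local/bin/rules_du_jour && /etc/init.d/spamassassin restart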
Initially, I also installed this one:
sa-blacklist: a large set of blacklist entries of domains and IP addresses.
This turned out to be an 11MB(!) ruleset file. For a Perl script. I rather recklessly did give it a try, resulting in the kernel out-of-memory killer going on a shooting spree (on my antique 256MB RAM PIII server) about five minutes later.
In addition - also quite usefully, I'm sure (if not more so than the SA improvements) - I dropped in nine updated ACL files for J.P. Boggis's EximConfig suite of Exim4 rules (and other things): https://www.jcdigita.com/eximconfig/#ACLs
...which brings us to the present, and the (for now) absence of new spam arrivals. Moral of the story: It's possible (at least, if you're an MTA operator) to have a credible, livable alternative to the venerable "hide from spammers" stance: Use better technology, apply it intelligently, and be aware that some ongoing maintenance is required.
Personally, I would regard it as beneath my dignity to do otherwise. Internet presence is my community's core competency, so I'll be damned if I'm going to surrender even an inch of it.
One caveat: Silence isn't necessarily blissful. There will always be collateral damage, and one must keep an eye out for addresses, IPs, hostnames, etc. that should be whitelisted.
Jimmy O'Regan replied to TAG querent Marcin Niewalda:
> [In Polish] I think this is a mistake: you have written to the mailing
> list of an online magazine. Since our magazine is written in English, I
> have translated your e-mail. The address you were looking for is
> Delaveaux@heagmedianet.de, but I think that gentleman speaks only English
> and German; and I don't know whether that address is still current.
Separately, I had commented:
> In each case, I blackholed the delivering IP upon receipt -- but
> that's ultimately a poor solution: You end up endlessly playing
> whack-a-mole. Still, it makes sure there are no repeats from that IP.
And there's another reason why it's a poor solution: Some days, you get trigger-happy, e.g., when an innocent, on-topic query arrives in Polish -- which is a problem if, unlike Jimmy, one is Polish-challenged.
Having seen the original query and (erroneously) concluded that it was spam, I absent-mindedly added the delivering host's IP to /etc/exim4/eximconfig/reject/ip via the vim instance I leave open all the time editing that file. It's a straight listing of "Individual full sender IP addresses to reject", one per line. Today, several days later, I had long ago lost track of which line it was. Fortunately, in this one instance, my error remains fixable:
[rick at linuxmafia] ~ $ dig -t mx okiem.pl +short
10 mail.okiem.pl.
[rick at linuxmafia] ~ $ dig mail.okiem.pl +short
72.232.62.58
Accordingly, I've just now un-blackholed poor Marcin's ISP mail server IP. But in many cases, I'd either not realise I'd misjudged, or be unable to re-find which IP it was.
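For the curious, the reject/ip file itself is nothing fancy - one address per line, e.g. (these particular addresses are invented, from the 192.0.2.0/24 documentation range):

  192.0.2.45
  192.0.2.254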
Anyway, the larger point I wanted to make is that what I said in the three-line quotation, above, wasn't exactly right -- because I actually do try to be mindful of grey areas, and look closely at what the IP really is, before consigning it to permanent oblivion. (One doesn't want to rely on Received headers' hostnames in so doing, except for ones supplied by my own receiving MTA doing reverse-resolves: Spammers lie; "dig -x" is often one's friend.)
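In practice, that check is a one-liner; here the address is a stand-in from the 192.0.2.0/24 documentation range, and the answer is invented to set up the example that follows:

  $ dig -x 192.0.2.7 +short
  mx105.exampleisp.com.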
For example, if the prior-hop IP on a 419-fraud spam corresponds to hostname mx105.exampleisp.com, then blackholing it would be dumb: For one thing, impliedly exampleisp.com has at least 104 other mail exchangers. For another, I'm not necessarily eager to pronounce anathema on exampleisp.com just because of one 419 fraudmail. That could happen to almost anyone operating an MTA. It could happen to me (but only until I tracked down the user on my system who did it and... reasoned with him).
I'm reminded of a passage in my friend Karsten Self's (CC'd) excellent recent analysis paper "CIDR House Rules: Use of BGP Router Data to Identify and Address Sources of Internet Abuse"[1]:
While blocklisting is one possible option, I'd very much like to see the discussion move beyond that point. A preferred approach is what I term "proportionate response". First: you'll likely want rules to expedite known-trusted mail, or high priority mail from remote organizational sites, peers, clients, vendors, or other established relationships. Secondly, many peers will either have small overall volumes, or not have a clearly identifiable nature. This leaves the set of networks that are both high-volume and overwhelmingly spammy in nature. Of course, any such implementation would have to be evaluated in a business and organizational context.
In proportionate response, a certain level of abuse would be met by a proportionate level of response. For example, for a network from which 90% of email was found to be spam, 90% of traffic originating from that network would be denied or dropped, either at the service (protocol) or IP level, at random. If done at the SMTP transaction level, either as a timeout (without 250 OK) or non-permanent rejection, this would mean legitimate mail still has a fighting chance to get through. A 90% reject rate would allow half of mail through on 5 retries, for a typical 2 hour delay. A spam server without retry rules would fail delivery of 90% of its mail; with retries, it would suffer large mail spools and possible other resource starvation.

The site implementing such a policy will receive immediate benefit to itself. Widespread adoption is not necessary to be locally beneficial. As multiple and large sites adopt such measures, impacts on abuse-tolerant networks would be significant.

The approach is to be both non-invasive and non-retaliatory. You are not taking any action that in any way directly changes or affects a remote system: but are subjecting it to a denial of interest. As a proportionate response, reject rates could vary with total traffic volume, abusive traffic percentage, and severity of abuse, as suited specific needs. Fine levels of control are therefore possible; operators are not reduced to all-or-nothing responses to abuse.
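(A quick check of the arithmetic there: if each delivery attempt independently has a 10% chance of getting through, the probability of delivery within the initial attempt plus five retries is 1 - 0.9^6, or about 0.47 - indeed roughly half.)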
I don't yet have the toolsets to implement Karsten's excellent advice, though I admire its judicious approach. Lacking those, I mostly rely on the previously described RFC-compliance checking (implemented via Exim 4.x callouts), SPF-checking, tarpitting, and intentionally very sparse use of other lossy filtering. Playing whack-a-mole on spam-source IPs is mostly a losing game, with too much collateral damage.
Main exceptions are: (1) Some IPs whose owning domains / companies I've just seen much too often in that context, and accordingly have classed as evil. The French division of European broadband ISP Wanadoo is the example that most comes to mind: after seeing way, way too much blatant spam from IPs resolving to *.wanadoo.fr hostnames, I just started adding them, as they spammed me, to /etc/exim4/eximconfig/reject/ip. Legitimate French Wanadoo customers will increasingly be SOL in sending mail to any address on my machine, which is a little unfair, but life's imperfect.
It's basically a moral judgement on my part that Wanadoo should do much better, and that therefore it and its users can go to Hell. This isn't very nice of me, and possibly isn't a wise long-term measure, but it feels good. ;->
(2) IPs whose surrounding facts make them seem like IPs from pools of virus-infected Windows desktop machines being abused to crank out virusgrams, UCE, etc. One can reasonably guess that nothing from those IPs will ever be legitimate, for at least 3-5 year values of "ever".
It would be good if entries in /etc/exim4/eximconfig/reject/ip (and similar files for related purposes) were at least date-stamped and would time out: they aren't and don't. My maintaining such a file manually is not very satisfactory and has obvious drawbacks. Over time, I hope to phase it out -- and maybe even pull Wanadoo out of its oubliette.
[1] https://linuxmafia.com/~karsten/cidr-house-rules.pdf Recommended.
Abstract: "BGP router data may be used to identify contiguous regions of network space from which significant abuse is observed. Experience suggests a strong power-law relationship in ranking such sources. Applying this knowledge in abuse countermeasures may markedly reduce filtering overhead while minimizing inadvertant blocking and increasing total costs to abuse-tolerant networks."
Talkback: Discuss this article with The Answer Gang
Rick has run freely-redistributable Unixen since 1992, having been roped
in by first 386BSD, then Linux. Having found that either one
sucked less, he blew
away his last non-Unix box (OS/2 Warp) in 1996. He specialises in clue
acquisition and delivery (documentation & training), system
administration, security, WAN/LAN design and administration, and
support. He helped plan the LINC Expo (which evolved into the first
LinuxWorld Conference and Expo, in San Jose), Windows Refund Day, and
several other rabble-rousing Linux community events in the San Francisco
Bay Area. He's written and edited for IDG/LinuxWorld, SSC, and the
USENIX Association; and spoken at LinuxWorld Conference and Expo and
numerous user groups.
His first computer was his dad's slide rule, followed by visitor access
to a card-walloping IBM mainframe at Stanford (1969). A glutton for
punishment, he then moved on (during high school, 1970s) to early HP
timeshared systems, People's Computer Company's PDP8s, and various
of those they'll-never-fly-Orville microcomputers at the storied
Homebrew Computer Club -- then more Big Blue computing horrors at
college alleviated by bits of primeval BSD during UC Berkeley summer
sessions, and so on. He's thus better qualified than most, to know just
how much better off we are now.
When not playing Silicon Valley dot-com roulette, he enjoys
long-distance bicycling, helping run science fiction conventions, and
concentrating on becoming an uncarved block.
The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that supports es.comp.os.linux, the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author.
These images are scaled down to minimize horizontal scrolling.
All Ecol cartoons are at tira.escomposlinux.org (Spanish), comic.escomposlinux.org (English) and https://tira.puntbarra.com/ (Catalan). The Catalan version is translated by the people who run the site; only a few episodes are currently available.
These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier, who would appreciate hearing from you.
Talkback: Discuss this article with The Answer Gang
Thu, 27 Apr 2006
From Rick Moen
Quoting Keith Owens (kaos@ocs.com.au):

> A survey of DNS security [1] has this lovely quote:
>
> "A cracker that controls a nameserver at Monash University in
> Australia can end up controlling the resolution of the web site for
> the Roman Catholic Church in Ukraine. Legacy DNS creates a small
> world after all".
>
> It is scary to see how [potentially] insecure the DNS mesh is.
Dan Kaminsky gave an amazingly entertaining and enlightening lecture at the LISA 2005 conference, in part about his own studies of the global DNS, to determine among other things how vulnerable to cache poisoning it is. Answer: a great deal. There are way, way too many vulnerable BIND8, BIND4, and other (e.g., Microsoft) nameservers out there.
Dan was able to set up a machine with sufficient bandwidth and horsepower that it's been able to conduct scans of all IP space, everywhere, doing various tests and mapping out all responding nameservers. He says he "got calls from some very scary places" in so doing (since such scans normally precede a large-scale network attack), but he's been able to placate them. (The IP reverse-resolves to "infrastructure-audit-1.see-port-80.doxpara.com", the Web pages on which explain his probes when they're active, and include his cellular number for any inquiries.)
Also, if you do a "whois" on his netblock, you get return values that include these lines:
Comment: This is a security research project, please send all Comment: abuse and alert requests to dan@doxpara.com.
His summary results (from 50GB of collected data) included:
As an afterthought, he realised that his test harness also enabled him to estimate the penetration of Sony's infamous Windows rootkit, as measured by its effect on the world's nameservers: All infected machines' rootkit software feeds data back to connected.sonymusic.com, reached by hostname (thus entailing resolution at some local nameserver, which thus loads its cache). Dan thus used his census of the world's nameservers to send each a non-recursive "A" query: This returned the matching IP if and only if the value was already cached.
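The probe itself is a one-liner with dig; the server address below is a stand-in for one of the surveyed nameservers:

  $ dig @192.0.2.53 connected.sonymusic.com A +norecurse

With +norecurse set, the queried server answers from its cache or not at all - so getting back an A record proves that some local client had already caused the name to be looked up.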
Result: He found 556 thousand nameserver hosts with the cached value -- a quarter of the world. (This is after massive publicity and large-scale attempts to purge the rootkit.) Oddly, these were spread across 165 countries, which suggests bootlegging of USA-labelled music CDs.
Interesting remaining questions include estimating (through traffic-level studies) how many infected Windows clients this result reflects.
...and he's working on other ways to further exploit his DNS data.
Dan's a maniac. At the prior LISA conference, he'd demonstrated streaming audio over DNS packets -- to illustrate exactly how porous most people's "firewall" strategies are. Because he heard that many people had dismissed that as "Well, that's just low bitrate; couldn't be significant", this year, he demonstrated streaming video over DNS.
You can get the slides and complete MP3 of this talk from USENIX, at https://www.usenix.org/events/lisa05/tech It was called "Network Black Ops: Extracting Unexpected Functionality from Existing Networks". Recommended.
> https://beehive.cs.cornell.edu:9000/dependences?q=<your site name>
> Mine comes out at 282 name servers!
22 for linuxmafia.com, and only the ones that need to be there.
Forgot to mention: The DNS survey Keith Owens refers to is described at https://www.cs.cornell.edu/people/egs/beehive/dnssurvey.html .
Thu, 25 May 2006
From Lodes U. Currying
Buenos dias!
Nuestra compania se llama Magnat Trading Group.
Nuestra especializacion es ayudar a empresarios a vender o comprar el articulo en la subasta mundial Ebay. Como un resultado del trabajo intenso la compania en 4 anos pudo lograr el nivel mundial y segun los expertos ser una de las 20 mas influyentes companias, que proponen los servicios de comercio. En Espana empezamos a trabajar recientemente y en relacion con eso tenemos una vacancia de manager financiero supernumerariom, quien va a ser representante de nuestra compania en Espana. Los requerimientos basicos son los siguientes:
Por buen cumplimiento del deber prometemos alto nivel de beneficio, tiempo de trabajo flexible.
El pago se comete sin retraso. Le pagamos a Usted 150-500 euro por cada operacion .
Si esta Usted interesado en nuestra proposicion, puede recibir mas detalles por e-mail: magnat_group@km.ru
Gracias por la atencion a nuestra proposicion, La adminisracion de Magnat Trading Group
[clarjon1] well, for anyone who cares, this is spam... I like the translation this one web service gave me for it
Good day! Ours compania is called Magnat TRADING Group. Our specialization is to help industralists to sell or to buy I articulate in the world-wide auction Ebay. Like a result of the work intense compania in 4 anuses could manage the world-wide level and segun influential the the 20 experts to be one of but companias, that they propose the services of commerce. In Espana we began to recently work y in relation with that we have a vacancia of manager financial supernumerariom, who is going to be representing of ours compania in Espana. The basicos requirements are the following ones: - computer, Internet, email, I telephone - the banking account in Espana By good fulfillment of having we promise to stop benefit level, flexible working time. The payment is committed without delay. We paid to You 150-500 to him euro by each operation. If this You interested in our proposition, it can receive but details by email: magnat_group@km.ru Thanks for the attention to our proposition, Adminisracion of Magnat TRADING Group
My favorite part is the '4 anuses could manage the world-wide'!
[Ben] I just knew there had to be a Spam Cabal somewhere out there, pulling all the strings... and now, little by little, we're beginning to gather information about the exact composition of this Dark Council.
Tue, 09 May 2006
From Rick Moen
Our cherished Ma Bell reincarnation, SBC (which is in the middle of re-naming itself to AT&T, thus completing the circle), found a creative way to screw up relocation of our telephone service: They unilaterally rescheduled it (for fairly dumb reasons) -- and then decided not to tell us. As a reminder, this matter affects lists.linuxgazette.net's online presence because its Raw Bandwidth Communications DSL service piggybacks atop SBC's line.
What happened, you ask? Here are my wife and me, about 5:20 pm Monday, sitting on the living room floor, calling Ma Bell from our still-active telephone:
Deirdre: "Hey, what happened to the telephone move order you guys agreed with me to perform today for 650-561-9820? It's not happening."
SBC: "Er, that job is listed in our records as rescheduled to Wednesday."
Deirdre: "WHAT? Why?"
SBC: "Well, the outgoing occupants had not released the line, and so it would not be available to you until they did, which would be when they scheduled shutoff, i.e., Wednesday."
Deirdre: "And you were somehow incapable of ascertaining this fact at the time you committed to a _Monday_ due date -- because, what, you're more accustomed to hanging draperies for a living? And you were incapable of calling my contact telephone number to advise us of the delay because you needed, what, smoke signals? Semaphores? Telegraph lines? Carrier pigeons?"
SBC: {mumble} {snivel}
Anyhow, considerations of logic and customer service do not apply, so it's going to be Wednesday -- which means our new house will be sans telephone and DSL service until then. Our old house still has service, and our servers are still camped out on the living room floor -- but I'm removing them tonight in order to turn over the house to the landlords. So, we'll be offline from around 10 PM tonight until SBC/AT&T reconnects telephone service and I reconnect the DSL -- probably around a full day, unless things get further screwed up.
Wed, 03 May 2006
From Jimmy O'Regan
I meant to say "I'll do it, but this is where the files are, should I be hit by a bus/abducted by aliens etc."
[Ben] If the latter, send us a postcard. If the former, be sure to let us know well ahead of time so we can initiate Plan B.
...to send the aliens to pick me up, gotcha. I'm sure that, with their superior technology, they can shrug their shoulders and say "yep, he's a goner" in a room with many more blinking lights.
I'm not clear on what to do if I get hit by an alien bus, though - send a postcard ahead of time?
[Ben] C'mon, Jimmy, trivially-easy answer. Borrow a time machine, come back to when you're about to send the email replying to this one, and include the information - GPS coordinates, exact time, etc. Sheesh, do I have to do the thinking for all the aliens around here?
Hmmm... I must be late - I haven't seen me yet.
[Rick] 'Ah, that takes me back. Or is it forward? That's the problem with time travel; you can never tell.' -- The Doctor
[Martin] Rick, not sure where you are in the world... but have you seen any of the new series?
https://www.bbc.co.uk/doctorwho
Although this is getting too off-topic now, unless the good Doctor uses Linux in his time machine!!
[Rick] How could I possibly miss it? For those of us who grew up on the Doctor (my canonical one being Jon Pertwee, the third Doctor), it's a veritable dream come true. I'm undecided whether "Father's Day" or "Dalek" should get my vote for the short-form Best Dramatic Presentation Hugo Award: They both blew me away.
BTW: I'm about 60km south of San Francisco, in the Silicon Valley Desert -- and am indeed guilty of being a Yank. However, I grew up in 7B Bowen Road flat 19A, Victoria, Hong Kong, Royal Crown Colony, where your only choices in television were Rediffusion Television (sort of a Granada/Carlton ancestor, under the 1968 ITV franchise round) or Cantonese opera.
I saw a great deal of 'Doctor Who' and first-run 'The Prisoner', in consequence -- not to mention coming up with much fine MST3K-style replacement dialogue for Cantonese operas, with my mother. All of which explains a great deal, I'm sure.
> Although this is getting too off-topic now, unless the good Doctor uses
                           ^^^^^^^^^^^^^^^^^
Ah, I see. You've only just joined the list, but we've met you before because you travelled^Wwill travel backwards in time. That's dedication!
> Linux in his time machine!!
Sure. Kernel 6.8 came^Wcomes^Wwill come with Tardis support. Hate to think what'd happen if you get an "Oops!", though.
[Breen] The universe screams "Aieeee!! Killing the interrupt handler"...
You'll forgive me if I don't send flowers[1]
[1] The story's in the Launderette...
[Ben] [laugh] Yeah. I do read things before pubbing them, y'know.
Heh. The next time I saw her, she looked like she wanted to say something difficult - starting to talk, but not; red face; voice cracking... and I laughed, told her what happened, and everything went back to normal. (I suppose, in hindsight, I could have made a bit more effort with the first explanation. Oh well.)
Except then, later that evening, I went to visit some friends: I wanted to bring something with me, and had to tell the story, so I bought flowers. Two bunches, because another couple I know were going to be there... and as I was standing at the checkout, her brother passed me. I've never seen such a look of horror... I guess she didn't tell him the rest of the story :0)
[Jimmy] I later asked if I should try to smooth things over with him... by giving him flowers. It's a good thing she can't make a proper fist.
[Ben] Y'know, Jimmy - I've read French farces that weren't nearly this good. It takes a special talent to invent this kind of low drama and ludicrously embarrassing malapropos acts and misunderstandings... oh, wait, you _didn't_ invent them. Sorry, no credit to you, better luck next semester!
Well, looking on the bright side, at least I got a good story out of the experience, and I will never forget the Polish for 'pint glass' (kufel) -- though I've been told since then that 'waza' (vase) is an acceptable substitute.
It did cause a bit of an awkward silence on Saturday, at my former room-mate's wedding, though, as nobody wanted to try to tell a story to follow mine.
Tue, 30 May 2006
From Thomas Adam
Hello,
I was reading the esoteric hello-world page [1] when it mentioned a very obscure language called Piet [2]. I have to say I had never heard of it until now --- little pictures to represent programs. And they're colourful. Go take a look; it's quite clever.
[1] https://en.wikipedia.org/wiki/Hello_world_program_in_esoteric_languages
[2] https://www.dangermouse.net/esoteric/piet.html
[Kapil] How about "Ook!"? That would make all readers of Terry Pratchett happier.
Nah -- I never did like him. This does look interesting, however:
https://en.wikipedia.org/wiki/Chef_programming_language
[Pedja] If you'd like to see the Web or e-mail as spoken by the Chef, check out the Bork Bork Bork extension for Firefox/Thunderbird:
https://www.snert.com
https://addons.mozilla.org/firefox/507/