...making Linux just a little more fun!

October 2006 (#131):


Mailbag

By Kat Tanaka Okopnik


Mailbag

Piet Programming
[TAG] Locking down a Linux box
Talkback links should CC article author
Confusion about linux fonts
FVWM Kiosk - a different approach
How to convert RedHat9 to Gentoo over SSH on a live system
State of the anti-spam regime, July 2006 edition
Wiring a house with ethernet: Success
Linux driver question
USB Hard Drives
New URL for GLUE - Groups of Linux Users Everywhere
Copyright Notice
Nullmodem
The all new Ubuntu.... Did I say something wrong?
I will nead a little help or more!!!!!!
Kernel tweaking
Invisible Read!!
Port Linux on DSP
Which process wrote that line into syslog?
Which process wrote that line into syslog? [2]

Piet Programming

Thomas Adam (thomas at edulinux.homeunix.org)
Mon May 29 18:46:27 PDT 2006

Answered by: Kapil, Pedja, Thomas

Hello,

I was reading the esoteric hello-world page [1] when it mentioned a very obscure language called Piet [2]. I have to say I had never heard of it until now --- little pictures to represent programs. And they're colourful. :D Go take a look, it's quite clever.

-- Thomas Adam

[1] https://en.wikipedia.org/wiki/Hello_world_program_in_esoteric_languages
[2] https://www.dangermouse.net/esoteric/piet.html

[Kapil] - How about "Ook!"? That would make all readers of Terry Pratchett happier.

[Thomas] - Nah -- I never did like him. This does look interesting however:

https://en.wikipedia.org/wiki/Chef_programming_language

:)

[Pedja] - If you'd like to see the Web or e-mail as spoken by the Chef, check out the Bork Bork Bork extension for Firefox/Thunderbird :)

https://www.snert.com/
https://addons.mozilla.org/firefox/507/


[TAG] Locking down a Linux box

Faber J. Fedor (faber at linuxnj.com)
Sun Jun 11 19:43:05 PDT 2006

Answered by: Francis, Kapil, Thomas

Hey Guys,

Well, maybe "locking down" isn't the right phrase, but I'm not sure what the right phrase is, which is why I'm stumped.

I want to boot up a Linux box, go into X and run my application, let's call it FaberOffice, and run nothing else. Nada. Zip. FaberOffice is to be the only thing running and the only thing that can run.

KDE Kiosk immediately sprang to mind but then I thought it might be overkill. I'd have to turn off all the KDE applications and the KDE Desktop functionality (the latter being the reason to use KDE over straight WMs). I don't need a desktop, I need a window manager. And a light one at that (Ice is Nice :-).

But what am I trying to do with the WM? Am I replacing the root window with FaberOffice? Am I simply maximizing the FaberOffice window and disabling the window decorations and Alt-Tab? Or... :-?

I know what I want to do; I just don't know what I need to do.

Suggestions?

[Francis] - What's your understanding of how a typical Linux system boots?

The short version is that init runs and does whatever it says in /etc/inittab, and that's pretty much it.

For an application to run, it is either explicitly listed in inittab, or started by a program which itself is listed there. Typically, a program called "rc" is run, which searches through a specific directory and runs all scripts found. Changing the contents of that directory is the usual way to adjust what runs on boot, without having to edit inittab.
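For illustration, the mechanism Francis describes looks something like this on a Red Hat-style system (a sketch only; exact entries vary by distribution):

```
# /etc/inittab sketch (SysV init).  Field format is id:runlevels:action:command
id:5:initdefault:                      # default runlevel (5 = graphical)
l5:5:wait:/etc/rc.d/rc 5               # run the scripts for runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon  # restart the display manager if it exits
```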

So, pick your distribution, trace what it does (most things are shell scripts, handily enough), and decide where you want to change things.

As an example, I'll describe what I recall of a version of a redhat-derived distro where the intention was to do something similar to what you describe.

In that version, inittab included a call to "prefdm", which chose the right X display manager to use. It also checked whether any display manager at all should be used; the alternative being to run a script to do a no-interaction login. I'm not avoiding the specific filenames in order to be coy; I'm just not certain I remember them exactly, and anyway you should be able to read through the scripts on your distribution to find how it does the equivalent.

One file "autologin" was a flag to bypass the display manager.

Another file "autologin" was the script, and that contained something like

su - $user -c "$(which startx)"

with $user set to be the non-root user as whom the application should run.

Depending on the X configuration, "-- -nolisten tcp" may be a useful addition to the commandline.

"startx" above is also a script; you can read it to see what it does. The straightforward thing to do is to make a ~$user/.xinitrc file which runs all of the programs you want. When the last one exits so does the X invocation.

> KDE Kiosk immediately sprang to mind but then I thought it might be
> overkill. I'd have to turn off all the KDE applications and the KDE
> Desktop functionality (the latter being the reason to use KDE over

Yes, it's much easier not to turn things on than to go and turn them off afterwards.

> straight WMs). I don't need a desktop, I need a window manager. And a
> light one at that (Ice is Nice :-).

Are you sure you need a window manager? You may well -- if your application uses multiple windows, or if you really want two applications running, or want any of the facilities the window manager provides.

> But what am I trying to do with the WM?  Am I replacing the root window
> with FaberOffice? Am I simply maximizing the FaberOffice window and
> disabling the window decorations and Alt-Tab? Or... :-?

If you're disabling everything the WM provides, then you possibly don't need it. Depending on how much you control the environment, you may be able to configure the app to start full-screen and never show a second window.

> I know what I want to do; I just don't know what I need to do.

Forgive me if I'm talking down to you, but you should be very clear on what the application-startup procedure for your distribution is; and when it comes to X, you should be very clear on what facility is provided by X, what by a WM, and what by your application.

With that understanding, you may be able to phrase "what I want to do" in such a way as to make "what I need to do" obvious.

Within reason, you want to aim for a low "ps -ef | wc -l" score.

And "netstat -pantu" should have no lines you don't understand.
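Those two checks can be wrapped into a quick audit script, for instance (a sketch; what counts as a good score depends entirely on the system):

```shell
#!/bin/sh
# Quick surface-area audit: fewer processes and fewer sockets mean
# fewer things that can go wrong on a kiosk.
nprocs=$(( $(ps -ef | wc -l) - 1 ))   # subtract the ps header line
echo "processes running: $nprocs"
# Every line of this should be explainable; -p needs root to show PIDs.
if command -v netstat >/dev/null 2>&1; then
    netstat -pantu 2>/dev/null
else
    echo "(netstat unavailable)"
fi
```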

> Suggestions?

You may also want to consider options within the X config file like DontZoom, DontZap, and DontVTSwitch. They may limit confusion for the user at the cost of limiting convenience for the maintainer. Of course, depending on the user's experience, they may instead increase confusion for the user.
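In XFree86 4.x those options live in the ServerFlags section of the config file, roughly like so (a sketch):

```
Section "ServerFlags"
    Option "DontZap"      "true"   # disable Ctrl+Alt+Backspace server kill
    Option "DontZoom"     "true"   # disable Ctrl+Alt+Keypad-+/- mode switching
    Option "DontVTSwitch" "true"   # disable Ctrl+Alt+Fn console switching
EndSection
```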

Using ~$user/anything as an important file may be unwise if the user is able to edit files. For single user, the system xinitrc might be more appropriate. You know your system better than I do :-)

Don't forget to allow your maintenance guy some means of getting at a shell prompt without using a screwdriver, if you think that's useful.

[Thomas] - On Mon, Jun 12, 2006 at 11:57:21AM +0100, Francis Daly wrote:

> What's your understanding of how a typical Linux system boots?

https://www.hantslug.org.uk/cgi-bin/wiki.pl?LinuxHints/RunLevels

> In that version, inittab included a call to "prefdm", which chose the

Yes, which is why Redhat sucks. What happens if, for some reason, the display manager couldn't launch? /etc/inittab would continually try to respawn it, hence the spurious error messages to the console one used to find. It's a stupid way of working.

> "startx" above is also a script; you can read it to see what it does.
> The straightforward thing to do is to make a ~$user/.xinitrc file
> which runs all of the programs you want. When the last one exits so
> does the X invocation.

I'd use ~/.xsession here, since startx will read ~/.xsession if ~/.xinitrc is missing, and it has the benefit that some other display managers honour it.

> Forgive me if I'm talking down to you, but you should be very clear
> on what the application-startup procedure for your distribution is;

I disagree -- and that was never the question asked. If Faber had wanted an autologin, he'd have asked for one. The question was about kiosks and restricting modes of applications.

[Francis] - On Mon, Jun 12, 2006 at 01:03:32PM +0100, Thomas Adam wrote:

> On Mon, Jun 12, 2006 at 11:57:21AM +0100, Francis Daly wrote:
> > In that version, inittab included a call to "prefdm", which chose the
> 
> Yes, which is why Redhat sucks.  What happens if, for some reason, the
> display manager couldn't launch?  /etc/inittab would continually try to
> respawn it, hence the spurious error messages to the console one used to
> find.  It's a stupid way of working.

Anything in inittab potentially has the same looping failure mode. I haven't examined the Redhat setup to see if it only adds that entry after it has confirmed that the display manager is currently working.

It's a choice Redhat made. Presumably they decided it was better for them than the alternatives. It does make an autologin setup straightforward :-)

> > The straightforward thing to do is to make a ~$user/.xinitrc file
> 
> I'd use ~/.xsession here, since startx will read ~/.xsession if
> ~/.xinitrc is missing, and it has the benefit that some other display
> managers honour it.

That'll work too. The admin can configure the system to work as they see fit.

> > Forgive me if I'm talking down to you, but you should be very clear
> > on what the application-startup procedure for your distribution is;
> 
> I disagree -- and that was never the question asked.  If Faber had wanted
> an autologin, he'd have asked for one.  The question was about kiosks and
> restricting modes of applications.

Fair enough, I was answering a different question to you.

You read the question as, approximately, "how do I limit the use of the user account". I read it as, approximately, "how do I limit the use of the system". Both are valid. Either or neither could match the original intention. Now the OP can pick an answer, or wait for more, or offer a fuller description of "what I want to do" that will allow us to point at more specific reading material.

To revert to the original problem description, on a straight reading the answer is a simple "so do that".

Do click the FaberOffice icon. Don't click the nethack icon. Don't type "xterm -e wump" into any "run" box. Don't hit control-alt-backspace and wonder what just happened.

But as that is reasonably obvious, and very silly, it probably isn't the answer wanted. And it probably isn't the question intended either.

One guess at the OP's intention is "how do I get *someone else* not to run any other application". Which becomes "tell them 'Do click etc.'". Which is also reasonably obvious, and also probably not wanted.

So there probably needs to be some degree of compulsion or encouragement in the set-up, depending on whether the user is considered hostile or inquisitive, or whether they can be relied on not to try to run anything unwanted.

[Thomas] - It's funny -- I am writing a Kiosk article using FVWM as we speak that's going to appear in LG. This may or may not happen now, in light of this reply.

[ Thomas' post in this thread was indeed turned into an article, in LG 128. -- Kat ]

[Faber] - On 12/06/06 12:40 +0100, Thomas Adam wrote:

> It's funny -- I am writing a Kiosk article using FVWM as we speak that's
> going to appear in LG.  This may or may not happen now, in light of this
> reply.

Based on what you've written here, I'm looking forward to the article.

> OK, so this application of yours (FaberOffice) is proprietary I assume,
> since most people would have given the full name of it.  No matter.

What? You think my ego isn't big enough for me to name applications after myself?! O, yea of little faith! :-)

> In the best case, what you'll probably want to do is something like the
> following:

[ lots of good stuff elided ]

> I hope that at least gives you some ideas to what you can do.  I've
> rambled on a bit -- I hope it helps.

Dude! You did everything but install it for me! Thanks. I'll let you know how it turns out!

[Thomas] - On Mon, Jun 12, 2006 at 03:12:50PM -0400, Faber J. Fedor wrote:

> Based on what you've written here, I'm looking forward to the article.

I probably won't bother with it -- or if I do, it will only be very similar to what I have written here. So your question came as a means for me to write it anyway; it just means it's in the form of TAG, and not an article.

> Dude! You did everything but install it for me! Thanks. I'll let you
> know how it turns out!

I'd appreciate that, since I hadn't actually tested any of what I had written. :)

[Kapil] -

  1. You want to run only one application, and its main window should run in fullscreen mode. That suggests "ratpoison".
  2. I presume you want the transient windows to emerge with focus in the centre of the screen. If your app has transient windows that don't behave well with WM_HINTS then you must exclude ratpoison. (For example GIMP and ratpoison do not get along).
  3. You want to disable all key-bindings. You might be able to configure or hack ratpoison to do that.
  4. Finally you want to disable the running of any other applications. This suggests that the path be restricted using "rbash" as the shell.
  5. Another possibility (besides ratpoison) is "ion" with minimal features and modules loaded. You may be able to configure ICEWM or FVWM to do this as well.

[Thomas] - Eh? Then ratpoison can't be ICCCM compliant in that case -- or if it claims to be, then it's deluded. The ICCCM is quite clear about how transient windows are to be handled. Note also that WM_HINTS (as an XAtom) has nothing to do with a window set as transient -- that's what the "WM_TRANSIENT_FOR(WINDOW)" XAtom details.

> 4. Finally you want to disable the running of any other applications.
> This suggests that the path be restricted using "rbash" as the shell.

Or, as my reply alludes to, just don't allow a terminal emulator to run at all.

[Kapil] - On Tue, 13 Jun 2006, Thomas Adam wrote:
> Eh?  Then ratpoison can't be ICCCM compliant in that case -- or if it
> claims to be, then it's deluded.  The ICCCM is quite clear about how
> transient window are to be handled.  Note also that WM_HINTS (as an XAtom)
> has nothing to do with a window set as transient -- that's what the
> "WM_TRANSIENT_FOR(WINDOW)" XAtom details.

Sorry. This is more a case of my misrepresentation of "ratpoison" rather than any faults of "ratpoison" per se. I used the term "WM_HINTS" without understanding it fully.

However, it is a fact that "ratpoison" and "gimp" do not get along---I do not know enough to assign blame to either or both.

> Or, as my reply alludes to, just don't allow a terminal emulator to run at
> all.

True enough.

I looked at your detailed reply to Faber and it looks far more complete than the half-baked stuff I wrote up. It is interesting to see that FVWM lives up to its promise of being the "one wm to bind them all".

[Thomas] - On Tue, Jun 13, 2006 at 04:20:28PM +0530, Kapil Hari Paranjape wrote:

> Sorry. This is more a case of my misrepresentation of "ratpoison"
> rather than any faults of "ratpoison" per se. I used the term
> "WM_HINTS" without understanding it fully.

That's OK. I won't bore you with the details of how it all works -- that's what the ICCCM attempts to do. :)

> However, it is a fact that "ratpoison" and "gimp" do not get along---I
> do not know enough to assign blame to either or both.

I actually fired up ratpoison in Xnest to see what all the fuss was about -- and it seems you're right. Although interestingly enough, I don't think this is ratpoison's fault -- but GIMP's. For instance, the "Open" dialogue window isn't transient. It ought to be marked as such -- hence, as far as the WM is concerned, it treats it as a normal window. This is probably where half of the issues lie. Because of the way ratpoison works, it likes to completely consume any pane it's started in, and this includes the toolbar, which looks ugly in this way.

> I looked at your detailed reply to Faber and it looks far more
> complete than the half-baked stuff I wrote up. It is interesting to
> see that FVWM lives up to its promise of being the "one wm to bind them
> all".

I suppose. :)


Talkback links should CC article author

Thomas Adam (thomas at edulinux.homeunix.org)
Sun Jun 18 08:28:23 PDT 2006

Answered by: Ben

I missed when the talkback parts were set up, but it seems to me that when the links for it are generated, a CC field for the respective author's email address should be added as well. For those authors who do not provide an email address, or who write under the guise of "Anon", no CC should be used.

Currently, whilst the model in use is quite successful, it does leave any kudos for the author unnoticed unless they're subscribed to TAG, which is unlikely for most of the authors we have.

[Ben] - That would be one of the bits of Python coding that's been on my stack for a LONG time - but has also been put off for a long time. It wouldn't be all that hard, but it would expose the authors' email addresses to spambots, something that is not currently the case. Yes, there would be large benefits to doing this - for the moment, whenever there's a worthwhile Talkback followup, I forward it to the author from my mailbox, screamingly manual and inefficient as it may be - but there's also a detrimental effect, as well as a need for someone to get their hands grimy in Python bowels.

This week's classes are done (*WHEW*... especially since we were doing double the usual, 8 hours of class/welding per day), which means that I'll have a little brainpower available before the systems crash at the end of the day. I'll give this a bit of thought while hoping that a Python-savvy person will volunteer to help with this - as an option, if nothing else - while I'm cogitating. :)
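As a rough illustration of the trade-off being discussed, generating the CC while lightly obfuscating the address might look like this in Python (entirely hypothetical code, not LG's actual scripts; the function names and obfuscation scheme are made up for the example):

```python
def obfuscate(addr):
    """Spell out '@' and '.' -- a common, if weak, anti-harvester trick."""
    return addr.replace("@", " at ").replace(".", " dot ")

def talkback_link(list_addr, author_addr=None):
    """Build a Talkback mailto: URL, CCing the author when one is known."""
    url = "mailto:%s" % list_addr
    if author_addr:
        url += "?cc=%s" % author_addr
    return url

print(talkback_link("tag@lists.linuxgazette.net", "author@example.com"))
print(obfuscate("author@example.com"))
```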

[Thomas] - Cc Mike Orr -- he's not subscribed to TAG at the moment, but I hear he knows a little about Python. ;)


Confusion about linux fonts

J. Bakshi (j.bakshi at icmail.net)
Wed Jun 28 01:54:21 PDT 2006

Answered by: Kapil

Dear list,

I am really confused about where Linux actually searches for fonts. I have flipped through my system (Debian sarge) and found two locations:

1] /etc/fonts and 2] /usr/share/fonts, where location #1 has three html files and #2 has font files.

I have a true-type font server called xfs-xtt. Below is the fonts part of my XF86Config-4:

Section "Files"
        FontPath        "unix/:7100"         #local font server
        # if the local font server has problems, we can fall back on these
        FontPath        "/usr/lib/X11/fonts/misc"
        FontPath        "/usr/lib/X11/fonts/cyrillic"
        FontPath        "/usr/lib/X11/fonts/100dpi/:unscaled"
        FontPath        "/usr/lib/X11/fonts/75dpi/:unscaled"
        FontPath        "/usr/lib/X11/fonts/Type1"
        FontPath        "/usr/lib/X11/fonts/CID"
        FontPath        "/usr/lib/X11/fonts/Speedo"
        FontPath        "/usr/lib/X11/fonts/100dpi"
        FontPath        "/usr/lib/X11/fonts/75dpi"
EndSection

All these locations have font files. Now, when fonts are already there in /usr/share/fonts, why do we need those above-mentioned font directories?

There is also a hidden folder named .fonts in my home directory.

*xset -q | grep font* shows

/usr/lib/X11/fonts/misc,/usr/lib/X11/fonts/100dpi/:unscaled,/usr/lib/X11/fonts/75dpi/:unscaled,
/usr/lib/X11/fonts/Type1,/usr/lib/X11/fonts/Speedo,/usr/lib/X11/fonts/100dpi,/usr/lib/X11/fonts/75dpi,
home/joy.fonts

So my hidden .fonts directory is being searched, but /usr/share/fonts is nowhere to be found. So what is the utility of having /usr/share/fonts?

I am really confused. Can anyone clear this up? Thanks for your time, and kindly CC: me.

[Kapil] - Only two? Where fonts are searched for depends on which program is doing the searching and the device for which the fonts are intended. Hence there are X fonts, TeX fonts, GhostScript fonts, etc. which are further divided according to bitmaps or scalable (vector) fonts. Bitmap fonts are further divided according to resolution.

Hope this clarifies a bit.
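For what it's worth, on a system of that vintage the two worlds can be inspected separately; /usr/share/fonts is typically read by fontconfig (used by Xft-rendering applications), not by the core X server's FontPath (a sketch; paths and output vary by distribution):

```shell
# Fonts the core X server knows about (the FontPath from XF86Config-4):
xset -q | grep -A 8 "Font Path"
# Fonts fontconfig knows about; its /etc/fonts/fonts.conf default
# directories include /usr/share/fonts and ~/.fonts:
fc-list | head -5
```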

[ Thomas' reply was published in LG 128 at Ben's suggestion. His original post and Ben's response have been elided. -- Kat ]


FVWM Kiosk - a different approach

stomfi (stomfi at bigpond.com)
Fri Jul 7 01:06:04 PDT 2006

Answered by: Ben

This is another way of achieving user lock down.

Computerbank Queensland (CBQ) in partnership with a Work for the Dole team implemented a Blue Care Computers for the Third Age project at Salvin Park in the last 3 months of 2003.

This is a short description of the project, which uses oroborus as the kiosk window manager but which could use FVWM in a similar fashion.

I was the coordinator of the project.

#################################################################
The users are aged and have limited faculties, although they can all
see. They could easily do the wrong thing and forget what they
learnt from day to day, so the system has to be relatively fool
proof, while still delivering the required applications in a simple
and straightforward manner.

CBQ donated equipment and built a special Linux system for this
project. The installed systems consisted of two separate facilities
in the park.

They are comprised of an Internet, and print server with a simple
masquerade iptables script to the Internet.

The servers use LPD for the printer and a dialup ppp connection.
The clients use cups and have the Internet nameservers in
/etc/resolv.conf.

One site has a server and three clients connected with a 10Mbps hub,
the other a server/client connected by cross over cable to the client.

The servers bring up ppp0 at boot, which stays up till the system is
closed down or rebooted.

The clients are configured with 4 users, being guestx, help, Internet and
letters.  Letters, Internet and help all use oroborus as the backgrounded
window manager which does nothing except display windows. There are no
active key sequences except CTRL-q which normally kills the window manager
but does nothing since it is running in the background. After backgrounding
the window manager the user .xinitrc file execs the required program, ie
gedit letters.txt, firebird help.html and firebird Internet.html.
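That arrangement would amount to a ~/.xinitrc along these lines for each
restricted user (a reconstruction for illustration, not the project's
actual file):

```shell
#!/bin/sh
# Background the window manager, so that per the setup described above
# its CTRL-q quit binding has no effect on the session, then exec the
# single allowed application; when the application exits, the X session
# ends and the user is returned to GDM.
oroborus &
exec gedit letters.txt
```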

The user is prompted to print typed letters as they are not saved
past the session.

Closing the program sends the user back to the GDM user login screen.

No files are saved on the clients and gconf and other configuration
files have their ownerships changed so that user session changes
cannot be saved and are forgotten with each new logon.

Guestx is a normal user with a few apps such as GIMP and abiword,
and some games.

None of the user login names have passwords, except guestx whose
password is guestx. This is for the "sophisticated" user just in
case they have one or two.

The Internet home page has pictorial links to news, email, search,
net games, and the off line help. Email is web mail only.

Each client has the mouse keypad accessibility option configured.

All systems run on RedHat which was chosen by the team as it was the
most consistent and robust and had a good selection of end user
apps. Due to performance issues on P1s, Gnome is the guestx window
manager and GDM the logon manager.

The servers are P2s. One is running RedHat 7.3 and the P1 clients
and server/client are running a cut down RedHat 8. RedHat kindly
donated the software to CBQ which we used for this project.

Many thanks go to the work for the dole team for learning the
necessary routines needed to complete this project on time and at a
minimal cost to Blue Care.
###################################################################

Hope this shows you a robust way of locking users out of a system.
Kind regards
Tom Russell

[Ben] - The project sounds interesting, Tom, but I'm a little unclear about some of the specifics as well as being curious about some of the results.

> #################################################################
> 
> CBQ donated equipment and built a special Linux system for this
> project. The installed systems consisted of two separate facilities
> in the park.
> 
> They are comprised of an Internet, and print server with a simple
> masquerade iptables script to the Internet.

I'm not clear on what you mean by the above. Do you mean that you had a workstation connected to the Internet via a NAT router, with 'iptables' used to set up the masquerading?
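For reference, the kind of minimal masquerading script common at the time boiled down to something like this (a sketch of a typical setup, not CBQ's actual script):

```shell
#!/bin/sh
# Enable forwarding, and NAT everything leaving via the dialup interface.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
```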

> The servers use LPD for the printer and a dialup ppp connection.
> The clients use cups and have the Internet nameservers in
> /etc/resolv.conf.
> 
> One site has a server and three clients connected with a 10Mbps hub,
> the other a server/client connected by cross over cable to the client.
> 
> The servers bring up ppp0 at boot, which stays up till the system is
> closed down or rebooted.

So, it sounds like you had two dialup lines. Was there a particular reason that the two kiosks were so different (3 workstations plus a server versus 1 workstation plus a server)?

> No files are saved on the clients and gconf and other configuration
> files have their ownerships changed so that user session changes
                              ^^^^^^^

I presume that means "changed to root" or something similar.

> cannot be saved and are forgotten with each new logon.
> 
> Guestx is a normal user with a few apps such as GIMP and abiword,
> and some games.
> 
> None of the user login names have passwords, except guestx whose
> password is guestx. This is for the "sophisticated" user just in
> case they have one or two.

Presumably, this has the window manager "running in the foreground", to use your terminology, so that the apps are accessible and can be individually started - or do you mean something else?

> All systems run on RedHat which was chosen by the team as it was the
> most consistent and robust and had a good selection of end user apps.
  ^^^^^^^^^^^^^^^^^^^^^^^^^^

That is, perhaps, an arguable point. :)

> Due to performance issues on P1s, Gnome is the guestx window
> manager and GDM the logon manager.

This seems odd as well. If I were concerned about performance issues, I would have gone with, say, FVWM or IceWM; Gnome is quite resource-intensive compared with either of those (or many other simple WMs.)

> Many thanks go to the work for the dole team for learning the
> necessary routines needed to complete this project on time and at a
> minimal cost to Blue Care.
> ###################################################################

Indeed; it sounds quite the public-spirited project, and I'd be interested to hear more about it. Tom, I'd like to suggest that you write an article about this for LG - perhaps detailing the larger surrounding issues (i.e., how Work for the Dole and the Third Age project came up with this undertaking, how the project was coordinated, lessons learned, whether there are plans to do more of this kind of thing in the future, etc.) I believe that our readers would be very interested to hear about this; I certainly would like to see this myself.


How to convert RedHat9 to Gentoo over SSH on a live system

Suramya Tomar (security at suramya.com)
Thu Jul 20 10:34:06 PDT 2006

Answered by: Rick, Thomas

Found this HOWTO that describes how to take a stock RedHat9 system and convert it to Gentoo, remotely over ssh, while it is running.

https://www.darkridge.com/~jpr5/doc/rh-gentoo.html

Haven't tried it yet, as I don't have access to a RedHat system that I can experiment on, but it sounds very interesting. I wonder if it's possible to convert other Linux OSs to another flavor of Linux (like Debian, maybe?)

[Thomas] - Google for 'debtakeover'.

[Rick] - Certainly.

https://twiki.iwethey.org/twiki/bin/view/Main/DebianChrootInstall
https://www.hadrons.org/~guillem/debian/debtakeover/
https://trilldev.sourceforge.net/files/remotedeb.html
https://www.starshine.org/SysadMoin/DebootstrapInstallation

[Suramya] - Thanks for a great set of links. Now I have something to waste time on over the week...


State of the anti-spam regime, July 2006 edition

Rick Moen (rick at linuxmafia.com)
Fri Jul 21 11:37:44 PDT 2006

Answered by: Ben, Martin

Quoting Benjamin A. Okopnik (ben at linuxgazette.net):

> That's how I figured you were using it, Rick. If you wanted money, you'd
> have said so - I have this feeling that you got over being all shy and
> retiring a while ago. :) I just thought that you might still be in the
> planning/buying stages.

The newer machine in question's still, to my way of thinking, pretty nice: single-proc PIII/800 or so, VA Linux Systems 2230 2U rackmount chassis, Intel L440GX+ "Lancewood" motherboard, 1.5TB RAM, 2 x 73GB RAID1 pair (Linux software RAID) for the important filesystems, 16GB? boot drive.

That's pretty snazzy -- for 2001. ;->

All the filesystems are built, and it's loaded with Debian "etch" 4.0 and a rough cut of all the necessary software, not yet fully configured. Data files haven't yet been copied over (IIRC).

The last time I worked on it, I'd fetched a new Debian-packaged binary kernel 2.6.x and blithely removed the previous, believed-bootable installed 2.6.x kernel. And then rebooted, found that I'd just shot myself in the foot, lost patience / ran out of time, and quit for the day. I've not yet gotten back to it, and meantime other things have been keeping me away.

You know, it's also possible the old box has developed a bad spot of RAM, or something like that. Look at this kernel "oops" from /var/log/messages, which is typical of process blowouts, lately:

Jul 21 11:20:38 linuxmafia kernel:  <1>Unable to handle kernel NULL pointer dereference at virtual address 00000004
Jul 21 11:20:38 linuxmafia kernel:  printing eip:
Jul 21 11:20:38 linuxmafia kernel: c0153ca5
Jul 21 11:20:38 linuxmafia kernel: Oops: 0000
Jul 21 11:20:38 linuxmafia kernel: CPU:    0
Jul 21 11:20:38 linuxmafia kernel: EIP:    0010:[prune_icache+53/464] Not tainted
Jul 21 11:20:38 linuxmafia kernel: EFLAGS: 00210213
Jul 21 11:20:38 linuxmafia kernel: eax: 74756564   ebx: 00000000   ecx: 00000006
   edx: 00000004
Jul 21 11:20:38 linuxmafia kernel: esi: fffffff8   edi: 00000000   ebp: 00000383
   esp: c4e0dddc
Jul 21 11:20:38 linuxmafia kernel: ds: 0018   es: 0018   ss: 0018
Jul 21 11:20:38 linuxmafia kernel: Process exim4 (pid: 32387, stackpage=c4e0d000)
Jul 21 11:20:38 linuxmafia kernel: Stack: cb7b3e20 00000000 c4e0dde4 c4e0dde400
000009 c1046310 c025ecd8 00001951 
Jul 21 11:20:38 linuxmafia kernel:        c0153e64 00000383 c01353eb 00000006 00
0001d2 ffffffff 000001d2 00000009 
Jul 21 11:20:38 linuxmafia kernel:        0000001e 000001d2 c025ecd8 c025ecd8 c0
1357bd c4e0de50 000001d2 0000003c 
Jul 21 11:20:38 linuxmafia kernel: Call Trace: [shrink_icache_memory+36/64] [
shrink_cache+379/944] [shrink_caches+61/96]
[try_to_free_pages_zone+98/256] [locate_hd_struct+56/160]
Jul 21 11:20:38 linuxmafia kernel:   [balance_classzone+66/480] [__alloc_pages+3
76/640] [do_anonymous_page+92/256] [handle_mm_fault+119/256] [do_page_fault+456/
1337]
[e100:__insmod_e100_O/lib/modules/2.4.27-2-686/kernel/drivers/net+-687130/96]
Jul 21 11:20:38 linuxmafia kernel:   [process_timeout+0/80] [bh_action+34/64] [t
asklet_hi_action+70/112] [do_IRQ+154/160] [do_page_fault+0/1337] [error_code+52/
60]
Jul 21 11:20:38 linuxmafia kernel: 
Jul 21 11:20:38 linuxmafia kernel: Code: 8b 5b 04 8b 86 08 01 00 00 a8 38 0f 84 
1c 01 00 00 81 fb a8 

Off hand, I'm uncertain of the root cause.

[Martin] - On 21/07/2006 Rick Moen wrote:

> The newer machine in question's still, to my way of thinking, pretty 
> nice:
> Single-proc PIII/800 or so, VA Linux Systems 2230 2U rackmount 
> chassis,
> Intel L440GX+ "Lancewood" motherboard, 1.5TB RAM, 2 x 73GB RAID1 pair
> (Linux software RAID) for the important filesystems, 16GB? boot drive

Rick, do you mean 1.5Gb of memory?? ;) Just thinking that terabytes of memory is going a bit OTT... I also thought that P3's usually only go up to something like 2Gb of memory, not sure off hand though.

[Rick] - D'oh! Yes, I only wish I had 1.5 TB of RAM. I'm willing to accept that even in PC-100. Just leave it in a brown bag on my doorstep, please, and nobody need get hurt. ;->

Yes, the old box is back online. Absent-minded members of my household (myself certainly included) had closed up all the doors leading into the garage that is the temporary home of our servers. Today, my town had record high temperatures of 35 degrees (95, if using last millennium's Fahrenheit scale) -- which meant it was probably closer to 40 inside the sealed garage. And the machine simply was unhappy, that way.

There will be a fresh backup, soonish -- and I'll devote serious attention to the long-delayed hardware migration, -and- to creating the planned server-shelf space in the foundation crawlspace, under my house. For now, there's also an electric fan blowing additional air at the server.

For the record, anyway, if you see segfaults and kernel oopses, it may indicate a runaway heat problem. I didn't know that, before.

[Ben] - One of the laptops that I tested before buying the HP that I have as my backup machine ran ridiculously hot - I actually got to see the kernel spit out a "thermal shutdown" message and halt (I didn't realize it had such a goodie in it until it did that.) During the short time that I ran it - and I actually tried two different machines of the same make and model - a number of the sessions terminated either in a thermal cutout or a segfault.

In general, when I see a segfault that wasn't caused by a known factor (e.g., a just-compiled, highly experimental kernel), I immediately suspect either a) bad hardware or b) overtemp conditions. I suppose you could make the case that b) really resolves to a) - I've always considered memory to be analog hardware, anyway... It sorta works within parameters when the moon is in the right phase, but tends to wander outside of them whenever anything (like the price of pork bellies on the commodities market, or the percentage of carbon dioxide on Mars) changes.


Wiring a house with ethernet: Success

Jason Creighton (jcreigh at gmail.com)
Fri Aug 4 23:45:20 PDT 2006

Answered by: Bradley, Jason

Hi Gang,

Way back in October of last year, I asked TAG about how to go about pulling Ethernet in a new house. I pulled all the cable, punched down the ends and hoped for the best. We moved in early this year, but the only networking we needed right away was with two, physically adjacent computers, so I just used a crossover cable.

So the Ethernet built into the house was totally untested until last week, when I needed to tie a computer at the other end of the house into the network. Every single (okay, seven total, but still...) drop worked correctly on the first try. Maybe I just got lucky. :)

Anyway, I just wanted to report success and thank you guys for helping out. Chalk another one up for TAG.

[Bradley] - On 8/5/06, Jason Creighton <jcreigh at gmail.com> wrote:

> [...] Maybe I just got lucky. :)

Maybe, maybe not ;-)

I just finished wiring a house for Ethernet as well - the house was built with three UTP cables running to three of the bedrooms, as well as a single unused box in the kitchen. After wiring up the RJ45 jacks in the bedrooms, and pulling a new cable in the kitchen and punching an RJ45 jack onto that, I now have a decent Ethernet network as well.

What sort of terminal arrangement did you use? I used a double-gang switch box with a 12-port faceplate as a "patch panel" for all of my lines (for expandability).

[Jason] - On Sun, Aug 06, 2006 at 04:16:58PM -0400, Bradley Chapman wrote:

> What sort of terminal arrangement did you use? I used a double-gang
> switch box with a 12-port faceplate as a "patch panel" for all of my
> lines (for expandability).

Very similar to yours, actually. I don't have a proper patch panel, just two single-gang[1] low-voltage wiring boxes in a closet, with a wall-mounted 8-port Netgear switch. Were I to do it again, I think I would:

  1. Do everything in the wiring closet in some sort of flush mounted wiring box. (I think this is called a "structured wiring box")
  2. For futureproofing and flexibility, pull two Cat6 drops to every room in the house, punch them both down to RJ45, and then use one for ethernet and one for phone.

[1] When you mentioned that you used a 2-gang box, I thought "why didn't I do that?" D'oh! (I was hoping there was some actual reason, like not being able to find a faceplate for a 2-gang box, but I can't remember now.)


Linux driver question

J. Bakshi (j.bakshi at icmail.net)
Sat Aug 5 10:11:01 PDT 2006

Answered by: Karl-Heinz, Peter

Hi list,

hope everyone is well here :-)

I have a question about linux usb driver. Is there any driver which allows attaching a serial device to usb port under linux ? My old PI PC has a serial port so I don't have problem to use JDM programmer (PIC Programmer). But my new mother board of AMD doesn't have any COM port. Hence I am looking such an arrangement that I can still use serial devices by a USB-to-Serial adapter. But does linux has any driver/feature to support this ?

Thanks in advance.

[Karl-Heinz] - "J. Bakshi" <j.bakshi at icmail.net> wrote:

> I have a question about linux usb driver. Is there any driver which allows 
> attaching a serial device to usb port under linux ? 
> [...]
> But does linux has any driver/feature to support this ?

seems so....

/lib/modules/2.6.5-7.111-default/kernel/drivers/usb/serial> ls
belkin_sa.ko        empeg.ko        io_ti.ko   keyspan.ko      kobil_sct.ko pl2303.ko       visor.ko
cyberjack.ko        ftdi_sio.ko     ipaq.ko    keyspan_pda.ko  mct_u232.ko safe_serial.ko  whiteheat.ko
digi_acceleport.ko  io_edgeport.ko  ir-usb.ko  kl5kusb105.ko   omninet.ko usbserial.ko

but from that list I would assume there are USB-to-serial adapters which are better supported than others -- but maybe safe_serial and usbserial are a lowest-common-featureset support?

[Peter] - The "Edgeport" boxes from https://www.ionetworks.com/ seem to enjoy good Linux support from the "io_edgeport" and "io_ti" drivers:
https://www.kroah.com/linux/usb/edgeport/

Disclaimer: I haven't tried this myself (yet). Instead I'm using an old "Annex" box from Bay Networks https://www.ofb.net/~jheiss/annex/ to access serial ports over plain old TCP/IP. Not sure if this would be sufficient for your purpose, though.


USB Hard Drives

Bob van der Poel (bvdp at xplornet.com)
Sat Aug 12 19:16:06 PDT 2006

Answered by: Ben, BobV, Faber, Lew

I'm thinking of getting a USB (external) hard drive to use for backup purposes. Is there anything to look out for on these, or should I just try to get the best dollar/meg deal? I'm thinking that something in the 150 to 150 meg size would be perfect.

I'm assuming that all (most?) of these drives will work just fine with Linux :)

Thanks.

[Faber] - On 12/08/06 19:16 -0700, Bob van der Poel wrote:

> I'm thinking of getting a USB (external) hard drive to use for backup 
> purposes. 

I bought a USB enclosure for ~50 USD for a HD I already owned and I recently bought a Maxtor 300G model for ~200 USD. I have had no problems with either. I use them as everyday storage; the former for /home and the latter for my mp3s, VMware files and media files.

> Is there anything to look out for on these, or should I just 
> try to get the best dollar/meg deal? I'm thinking that something in
> the 150 to 150 meg size would be perfect.

My rule of thumb on buying anything: buy the most expensive thing you can afford, but don't buy the cheapest thing on the market and don't buy the most expensive.

> I'm assuming that all (most?) of these drives will work just fine with 
> Linux :)

That's been my experience. One person emailed me about problems he was having with a USB enclosure and Linux. We believe his problem turned out to be the enclosure was just too cheap(ly made).

[Lew] - I wouldn't know about all or most of these drives, but I suspect that they will all work with Linux, especially a recent kernel.

A couple of weeks ago, I bought a Vantec NexStar:GX "USB 2.0 External 3.5 inch Hard Drive Enclosure", which I used with an old 60Gb HD that I had lying around. The drive was correctly recognized by a 2.4.29 kernel, and I had no problems using it at all.

I'm certain that once I upgrade the drive to a more modern 300Gb device, the NexStar:GX will make a good external backup medium.

[Ben] - I didn't want to just jump in with "metoo!", but - me too. After ages of making mental notes to do it Sometime Soon, I finally bought myself a USB-to-HD adapter and shuffled through my ancient hard drive collection (ye ghods... some of these things had capacities in the megabyte range. What year was that, 1920 or so?) Except for several drives that wouldn't even spin up, I was able to read all of them - and I had quite a variety.

Amusing note: I tend to be a pack rat when it comes to information, and when I get rid of a machine, I usually keep the HD. Then, at some point when I've completely forgotten what the hell those things contain, I work my way through them, copying off stuff that looks interesting and throwing away the now-ancient hardware. It has been my experience that whatever drive I'm using at the time is plenty and more than plenty to hold the contents of all the drives I'd saved until then.

[BobV] - Thanks for the comments on this, guys. I was at my not-so-local Staples earlier today and they had 250gig USB "My Book" drives on for $119 US. So, I got one. (I really do remember thinking that my first box of 5.25" floppy disks each holding about 160K would be enough storage for the REST OF MY LIFE).

It works just fine. Reformatted to ext2 FS and got rid of a bunch of windows files :)

Just copied my base file system over: 92,000 files, 24 gig, in about 28 minutes. No screamer, but no slouch either. And, really, for backups, who cares about speed?

Seems to be very little noise or heat on this. And it has an automatic 10 minute power-down built in. So, I'll just leave it plugged in.

I'll have to see if it will auto-mount, etc. But, I don't see any serious problems.


New URL for GLUE - Groups of Linux Users Everywhere

B. E. Irwin (beirwin at shaw.ca)
Sun Aug 13 17:19:10 PDT 2006

Answered by: Rick

I'm trying to track down where "GLUE - Groups of Linux Users Everywhere" lives now. As you can see from my email below, this valuable resource is no longer hosted by LinuxGazette.com. I recall you guys started a new LinuxGazette.net (and linked on my site, btw) a while back. I searched linuxgazette.net and could not find GLUE. Do you know where I might find it? Is it on linuxgazette.net and I missed it? A Google search turned up nothing.

Thanks for your help.

---------- Forwarded message ----------
Date: Fri, 04 Aug 2006 11:41:36 -0600
From: Keith Daniels <keith@ssc.com>
To: Barbara E. Irwin <beirwin@shaw.ca>
Subject: Re: New URL for GLUE - Groups of Linux Users Everywhere => Was hosted
     here: https://newglue.linuxgazette.com/

Barbara E. Irwin wrote:
> I am one of the contributors for the Loads of Linux Links project
> (https://loll.sourceforge.net/linux/links/index.html).  We have a link to
> GLUE - Groups of Linux Users Everywhere however, this url is no longer valid 
> Is this link somewhere on Linux Journal?

No, Glue has been discontinued and will not be put back up.  Sorry about that 
but management decided to kill Glue, linuxgazette.com and linuxresources.com.

> 
> FYI, this is a GPL'd database of 5000+ bookmarks of important URLs about
> Linux and the Open Source movement.  It was originally a project started for
> the Victoria Linux Users' Group and is now hosted by SourceForge.
> 
> Thanks for any info. about your link!

I will tell the newsletter editor about your site and he might put it in one of 
the newsletters.


Keith Daniels
--
Webmaster
SSC Publications, Inc.
www.ssc.com

Publishers of:

Linux Journal - www.linuxjournal.com
TUX Magazine  - www.tuxmagazine.com
Doc's IT Garage - www.docsearls.com
A42 - www.A42.com

[Rick] - Greetings, Barbara!

I'd love to see GLUE resurrected[1], and we at Linux Gazette were every bit as surprised and dismayed to see it disappear (with apparently no advance notice or consultation with anyone) as you were. Heck, if I had the right software for it, I'd even host it on my home aDSL line, though it would require some effort to prevent it being overrun by comment spam.

[1] As opposed to being one of the ever-growing stable of dead Linux-community-Web-site URLs that now redirect to the commercial www.linuxjournal.com site. I'm struggling to be polite, here.


Copyright Notice

Mahesh Aravind (ra_mahesh at yahoo.com)
Tue Aug 15 22:02:32 PDT 2006

Answered by: Ben, Rick

--- Kristian Orlopp <kristianorlopp at web.de> wrote:

> The script https://linuxgazette.net/129/misc/mail/colors.sh may be a
> modification of the public domain (?) 
> https://www.linux-magazin.de/Artikel/ausgabe/1997/08/Tips/ls.html
> (Farbtest)
> 
> Even if that the script published in linux-gazette is simple I think a
> notice should refer to that page.

Kristian,

Thank you very much for pointing out the (striking) similarity between the scripts. I think the page you pointed to was created in Aug 1997.

I really haven't seen/copied that page (hell, I can't even read German), but I assure you that it wasn't my fault.

You can, if you want, drop a line to Ben Okopnik and ask him to include a copyright notice or something. And I'll do that if I release a v2 of the script.

I wasn't aware of the page, and I haven't copied anything. Sorry if I hurt anyone's feelings...

Thank you once again for pointing out the link. I believe it was rather a coincidence.

-- Mahesh Aravind

[Ben] - Hi, Kristian -

I'm probably the last guy in the world to ignore a copyright violation or omit credit for an author - but Mahesh's code, other than the necessary and obvious common points, is clearly different from the code you cite. It's true that the output is much the same, but any script designed to display console colors - and that includes my own version in the Bash tutorial that I wrote, as well as the later versions that I wrote in reply to Mahesh's post in the LG Answer Gang - is going to have a similar-looking output; otherwise, it will have failed in its purpose.

If you take a look at the two scripts, even the programming structures are different - except where they both use the escape codes as detailed in the Bash-Prompt-HOWTO. If there's any credit that should be given, both scripts should be crediting that document. :)

Again, if I considered it a copyright infringement of any sort, or an omission of due credit, I would update our archives, even though - as I often have to explain to people - the change would not propagate to our mirror sites, since they've already downloaded the published issue. In this case, however, it's not a matter of credit or copyright, and making changes in published material without a powerful reason, especially when those changes will only be seen by a vanishingly small percentage of people, does not seem reasonable.

[Rick] - I concur. There's a common misconception that any similarity shows "copying", and that any "copying" constitutes copyright infringement. In fact, by law, copyright arises only in the "expressive elements" of a creative work, first of all. "Functional elements", e.g., portions of code that embody the obvious, or only, or required-for-compatibility way of doing things, are deemed to not be copyrightable at all.

That aside, if one has copied any substantive amount of something, it's simple good manners to give acknowledgement -- but that doesn't seem to apply here, either. (I should hasten to add that all this was implied by your wording; I'm just agreeing with you and making the point more explicit.)

[ Kristian's response is contained below in Ben's next post. --Kat ]

[Ben] - On Thu, Aug 17, 2006 at 12:47:10AM +0200, Kristian Orlopp wrote:

> Hi !
> 
> > If you take a look at the two scripts, even the programming structures
> > are different - except where they both use the escape codes as detailed
> > in the Bash-Prompt-HOWTO. If there's any credit that should be given,
> > both scripts should be crediting that document. :)
> 
> Sorry, I did not want to be captious ;-)

[smile] No problem at all, Kris; I didn't take it that way.

> Oh yes, you are right, I studied both codes in a closer way.
> So I just learned the usage of a sequence in bash
> via your "for j in $(seq 40 47)"-construct.

Fun stuff, isn't it? There's a bunch of cool tiny utilities in the 'coreutils' (used to be 'shellutils') package that most people aren't even aware of; 'seq' is only one of them. Most of them make a shell programmer's life much, much easier.
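The `for j in $(seq 40 47)` construct Kristian mentions, expanded here into a runnable console-color sampler; this is just an illustrative sketch in the same spirit, not the script from issue 129:

```shell
#!/bin/bash
# Walk the ANSI background codes (40-47) and foreground codes (30-37)
# with seq, printing each combination rendered in its own colors.
for b in $(seq 40 47); do
    for f in $(seq 30 37); do
        printf '\e[%s;%sm %s;%s ' "$b" "$f" "$b" "$f"
    done
    printf '\e[0m\n'    # reset attributes at the end of each row
done
```

Run it in any ANSI-capable terminal to see the full 8x8 grid of combinations.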

> We in german say: "lots of ways lead to Rome" :)

We have the same thing in English - except we say "Alle Wege führen nach Rom". :) [*]

In fact, I figure that's what Alaric said back in 410AD...

> I am happy to read lots of scripting-examples, as I am not a programmer.
> Here I want to say "thank you" for your work at linux-gazette.
> I read it since 1999. Very good job !

Thank you! I blame The Answer Gang and our staff and authors. :)

[*]
[Rick] - A mere 401 years after Publius Quinctilius Varus said "D'oh!" (That was on the occasion of Augustus Caesar losing the ability to count beyond XVI.)


Nullmodem

cssutto at attglobal.net
Wed Aug 16 14:29:11 PDT 2006

Answered by: Kapil, Rick, Thomas

Rick:

I looked at your recommendations and the list was long enough to be confusing.

Since I operate from a laptop, it looked like this one might be the right one for me.

nbSMTP (no-brainer SMTP) 

Is this OK or do you have a better suggestion?

[Kapil] - Hello,

On Wed, 16 Aug 2006, cssutto at attglobal.net wrote:

> Since I operate from a laptop, it looked like this one might be the
> right one for me.
> 
> nbSMTP (no-brainer SMTP) 
> 
> Is this OK or do you have a better suggestion?

Here is an alternative (which I probably learned from the LinuxMafia Knowledge Base) which I use:

1. Setup a local MTA (aka sendmail alternative) using any simple to
   configure MTA. Ensure that it is configured not to send mail
   to the internet. This is so the system can send you messages if
   it notices some configuration problems etc.

2. In your user account setup msmtp to send mail via different "accounts"
   depending on which network neighbourhood you find yourself in.

This works well for me.
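For step 2, a ~/.msmtprc along these lines picks a sending "account" per location; every host name and account label below is illustrative, not taken from Kapil's actual setup:

```
# ~/.msmtprc -- illustrative names throughout
defaults
tls on

account home
host smtp.home-isp.example
from me@example.net

account office
host smtp.office.example
from me@example.net

# select explicitly with "msmtp -a office"; otherwise the default applies
account default : home
```

The mail client is then pointed at msmtp in place of /usr/sbin/sendmail.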

[Rick] - Quoting Kapil Hari Paranjape (kapil at imsc.res.in):

> Here is an alternative (which I probably learned from the LinuxMafia
> Knowledge Base) which I use:

Heh, what do those guys know? ;->

> 1. Setup a local MTA (aka sendmail alternative) using any simple to
>    configure MTA. Ensure that it is configured not to send mail
>    to the internet. This is so the system can send you messages if
>    it notices some configuration problems etc.

Reminds me of something that I used to be confused about: I used to mistakenly believe that you needed an MTA daemon running in order to process purely-local mail such as system automated notices, logfile analysis, etc. It turns out you don't: Just having any MTA able to run as an on-demand mailer, only long enough to process the mail and then terminate, is more than good enough.

On Debian, this program by default is Exim. (On 2.2 "potato", it was Exim3; in recent versions, it's the Exim4 rewrite-from-scratch series.)

As is traditional on Unixes, this program is callable by the name "sendmail", even though it's not actually the sendmail program at all:

  # ls -l $(which sendmail)
  lrwxrwxrwx 1 root root 5 2006-04-22 08:47 /usr/sbin/sendmail -> exim4
  #

Non-sendmail MTAs such as Exim3, Exim4, Postfix, and Courier-MTA all honour all of sendmail's command-line options as well. Dan Bernstein's qmail of course honours only a subset of them, because this is the easygoing Dan we all know and love. ;->

[Rick] - Quoting cssutto at attglobal.net (cssutto at attglobal.net):

> I looked at your recommendations and the list was long enough to be
> confusing.

It's a bit of an exaggeration to call that recommendations. That page is just a compendium of descriptions of all known examples. I've never personally run any of them, and so can't really speak to choice of nullmailer.

> Since I operate from a laptop, it looked like this one might be the
> right one for me.
> 
> nbSMTP (no-brainer SMTP) 
> 
> Is this OK or do you have a better suggestion?

Looking from a distance, it looks as good as any of the others -- which is a fancy way of saying "I really don't know, but it's probably worth trying."

My history with MTAs is as follows:

1.  I started with sendmail, because I was young and foolish.  ;->

2.  I switched my personal mail machines from sendmail to Exim3
    because it was an emergency rebuild, and Exim3's what came by
    default in then-current releases of Debian, and was dead-easy 
    to configure.

3.  While working as chief sysadmin at $FIRM, a briefly famous 
    professional Linux support and services company in San Francisco 
    that shall go nameless ;-> , I was obliged to administer qmail, 
    and didn't enjoy the experience much.

4.  Upon another rushed rebuild under (again) emergency conditions,
    found myself with a mostly well-functioning Exim4 installation, 
    and tend to not fool with it much because it's a production mail
    system and the Cost Plus rule applies.  ("You break it; you buy it.")

None of those are nullmailers; they're all very full-service MTAs.

[Thomas] - I've always used 'nullmailer' per se ( https://lists.suse.com/archive/suse-linux-uk-schools/2005-Jan/0049.html ).


The all new Ubuntu.... Did I say something wrong?

clarjon1 (clarjon1 at gmail.com)
Wed Aug 23 11:22:59 PDT 2006

Answered by: David

Hey gang!

I'm going to keep this as short, and concise, as I can. Here goes:

I got some Ubuntu CDs yesterday! 10 Dapper Drake 6.06 LTS, with 2 sheets of 4 bumper stickers! Got them from shipit.ubuntu.com

Of course, I booted it up... :D They really improved the look and feel of it. Of course, I want to install it, right? I'm done with gaming for now, now that most of the games I play (emulators) work well under Linux, natively or via Wine. So, I start the install program. Those who have used Breezy Badger would expect me to use the install CD, but Dapper Drake allows you to install from the live CD -- only one CD required to ship now. So, I started the install. It asked me for a language, my timezone, and then my keyboard layout. I thought to myself, "This is so easy!" Boy, did I let Murphy and his law come in there. After the correct (i.e. default) keyboard layout was selected, I clicked Next. And that's the end of the story. Not a single error message; the non-greyed-out buttons and text input boxes still allow input, but everywhere on that window other than the text inputs shows the little "I'm thinking" mouse pointer. And nothing happens: the CD isn't being read from, and the swap partition isn't showing any signs of being used. Any help would be appreciated.

Thanks!

[David] - "Over 300 post-release updates have been pre-applied, so that fewer updates will need to be downloaded after installation, and a number of bugs in the installation system have been corrected."

I suspect that you may have been bitten by one of the installer bugs. May I suggest trying a 6.06.1 install?

[clarjon1] - Yeah, I think I may have been bitten by one of the bugs, all right. I've tried reinstalling with a different keyboard layout, and that seems to have worked.


I will nead a little help or more!!!!!!

Nico Teiu (nicoleta_teiu at hotmail.com)
Fri Aug 25 08:55:11 PDT 2006

[ Nico's initial post was embedded in a lot of html gobbledygook; I hope that the readership will forgive my taking the easy way out and starting the thread with Neil Youngman's initial response. As a note to future querents to TAG, please send your inquiries in plain text, not html. -- Kat ]

On or around Friday 25 August 2006 16:55, Nico Teiu reorganised a bunch of electrons to form the message:

> Hello!
>  
> i'm Nicoleta from Roumania!

Hi Nicoleta

I'm Neil from England

> I have a little problem, i wanted to make to partition from C and now it's
> not working anything, my laste operated system does not work, i dont know
> the bios password because i will want to reinstall my windows. 

I'm not sure what the BIOS password has to do with reinstalling Wind0ws, unless you're saying it won't boot from CDROM?

> I have a Acer Laptop Travel Mate 552tx.
>  
> I dont know what to do

OK. Start with a clear explanation.

Why were you trying to repartition your disk? Were you trying to install Linux?

What tools did you use to repartition the system?

What error messages do you get when booting the system?

Do you want to reinstall Wind0ws alongside Linux, or just restore it to the way it was?

> Please help me!!!

We'll try, but this is the Linux Gazette and reinstalling Wind0ws isn't our speciality. A Wind0ws support forum may prove more productive?

Neil Youngman

[ Nico additionally followed up with e-mail sent solely to Neil. Note to future querents: Please do ensure that your e-mail is sent to tag@lists.linuxgazette.net! -- Kat ]

On or around Friday 25 August 2006 19:03, Nico Teiu reorganised a bunch of electrons to form the message:

Nico, please use "reply all" to keep emails going to the whole gang. You've got a better chance of getting the help you need if everyone's involved.

> I wanted to make another partition for instaling Linux

Good idea.

> Now i had format my pc with a boot diskete of Wind0ws 98 and i dont know
> how i can install the operating system. I had made the primary partition c,
> so i dont know how to do more from this point I have on a dvd a version of
> Red hat linux

That's still not very clear.

Are you saying you used Wind0ws 98's version of fdisk to repartition it?

Did you defragment the disk first?

Did you reformat any partitions with the format command?

Did it have Wind0ws 98 on to start with or a different Wind0ws version?

It would have been better to use the tools on the Linux disk to do this. The Wind0ws tools aren't very good.

Assuming that C: took up the whole disk beforehand, or you know what the partitions were, it may be that putting the partitions back the way they were will allow you to boot the old system and start again. If other data has been written to the partitions since, then that may not be possible.

> I want to install it because i want to learn linux
> you can help me, on this dvd i have the 3 images of red hat linux
> How can i install it?

Do you want to install just RedHat, or RedHat and Wind0ws? I can offer a little help, but the best thing to do is to read the installation documentation on the RedHat site and follow the instructions carefully.

[ Again, I've used Neil's response rather than Nico's html-ful post to TAG. -- Kat ]

On or around Friday 25 August 2006 19:42, Nico Teiu reorganised a bunch of electrons to form the message:

> From:  Neil Youngman <ny at youngman.org.uk>
> >Are you saying you used Wind0ws 98's version of fdisk to repartition it?
>
> yes
>
> >Did you defragment the disk first?
>
> i dont know
>
> >Did you reformat any partitions with the format command?
>
> i just fallow the steps

Without knowing what steps you followed, it's going to be hard to figure out what happened and how best to fix it.

> >Did it have Wind0ws 98 on to start with or a different Wind0ws version?
>
> I had window 2000

You probably had an NTFS file system, which the Wind0ws 98 tools wouldn't recognise. That may complicate things.

> >It would have been better to use the tools on the Linux disk to do this.
> >The Wind0ws tools aren't very good.
>
> i dont know how to get at the linux tools
>
> i can not boot from the cd because i dont have the bios password, and
> floppy is put first there, on my laptopt is working anything in this moment

OK. It sounds as though you need to install Linux with a boot floppy. There are instructions at

https://www.redhat.com/docs/manuals/linux/RHL-9-Manual/install-guide/s1-steps-install-cdrom.html#S2-STEPS-MAKE-DISKS

> >Do you want to install just RedHat, or RedHat and Wind0ws? I can offer a
> >little help, but the best thing to do is to read the installation
> >documentation on the RedHat site and follow the instructions carefully.
>
> Yes i will want to insall both, but i will need more Linux

I'm afraid you'll need to ask someone else for help with reinstalling Wind0ws.


Kernel tweaking

Benjamin A. Okopnik (ben at linuxgazette.net)
Mon Aug 28 08:46:41 PDT 2006

Answered by: Pedro

----- Forwarded message from Joris Lambrecht <jl_post at telenet.be> -----

Hello tag,

As I've been a Debian user for some years and feel at home with this distro, I recently took on the challenge of moving to the testing distro (Etch), thus reviving a desktop PC which I had barely used for about 6 months.

Admittedly, I took the gorilla approach, but hey, it works. And it looks to be one of the better releases the Debian team and community is about to deliver (Dec 2006).

But of course, the kernel image 2.6.16-2-k7 is preventing the proprietary nvidia module from loading; using m-a to rebuild it from scratch failed on a 'rivafb enabled' message. After reinstalling some previously removed versions of gcc (sigh), that message (rivafb ...) disappeared, but the module still wouldn't compile.

So I tried to figure out how to disable this part of the kernel without rebooting. Since I couldn't find any information related to this matter, I figured it is not possible; still, somewhere in my memory the idea persists.

As such, I'd like to ask you people for a final opinion: can a root user disable certain (external or compiled-in) modules in a running kernel, or in this kernel at boot time?

And, should you be able to spare the resources: why the plunk won't my nvidia kernel module compile correctly, without giving any other reason than 'could not be built'?

Best of Regards,

Joris

[Pedro] - Hi Joris,

Talking about modules, if what you are trying to achieve is preventing a module from loading automatically at boot time, I think there are at least two things you may do:

1) Add a file under /etc/modprobe.d (or modify an existing one), with the following content:

blacklist <your module name here>

If I have understood correctly, this line prevents the automatic loading of a module based on its internal alias list. However, the module may still be manually loaded with a "modprobe <module name>".

2) Again, under /etc/modprobe.d, put the following line in a file:

install <your module name here> /bin/true

This effectively disables the module load.

Both keywords ("install" and "blacklist") are explained in more detail in the manual page of modprobe.conf.
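As a concrete sketch of Pedro's two directives, using rivafb (the module from Joris's report) as the example name; the output would be redirected, as root, into a file under /etc/modprobe.d:

```shell
# Emit both modprobe.d directives; to apply them, run as root e.g.:
#   sh thisfile > /etc/modprobe.d/disable-rivafb
printf '%s\n' \
    'blacklist rivafb' \
    'install rivafb /bin/true'
```

The "blacklist" line stops alias-driven auto-loading; the "install" line makes even an explicit modprobe a no-op.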


Invisible Read!!

Vikas Mohan (vikas-m at chintech.org)
Wed Aug 30 15:16:48 PDT 2006

Answered by: Ben, Neil, Thomas

Sir,
iam trying to emulate a login session.

login:<some login>
NOTE----------------------->password:<the text typed here should not be visible> how to do this with shell script BASH.

Please this is my assignment and iam a student.

your's faithfully,
Vikas Mohan.

[Ben] - It is indeed your assignment, and you are a student.

That being the case, why aren't you *studying*? The idea of going to school is for you to gain knowledge; when you try to cheat instead of actually studying, you waste money, time, and effort - and that includes other people's as well as your own.

Go study, Vikas Mohan from chintech.org [1]. I hope that your professor reads Linux Gazette and gives you a poor grade in this class for attempting to cheat; perhaps that will turn out to be the most valuable part of your current education.

[1] Chinmaya Institute of Technology Govindagiri, Chala PO Thottada, Kannur 670007

[Neil] - Ben, maybe a small credit is due for honesty here? Or am I being too nice?

Vikas -

We don't do people's homework here, but we do occasionally point them in the right direction. May I suggest that you read the man page for bash, specifically the part relating to the "read" builtin command and its options. (RTFM is good advice in these situations.)

[Ben] - On Thu, Aug 31, 2006 at 01:28:55PM +0100, Neil Youngman wrote:

> Ben, maybe a small credit is due for honesty here? Or am I being too nice?

You are a nice guy, Neil; me, well, I'm afraid that I see no redeeming qualities in his request. Instead, I see either 1) a student going down a bad path, or 2) a proto-skript-kiddie who wants to learn how to fake a login session in order to steal login info. He clearly knows that it's all about Bash - but just as clearly, he hasn't troubled himself to look it up (a Net search with the keywords he used in his email is quite instructive, BTW.)

I'm always happy to help someone with a real question - students included - but basic stuff like this, where the OP just has to lift a hand? Nope, no credit from me.

There are a bunch of routes to getting the necessary information; heck, a net search alone turns up thousands of hits. My diagnosis is acute grade-chasing, severely exacerbated by a laziness infection.

[Thomas] - I assume you're referring to something like the following:

echo "Input secret: "
stty -echo
read -k key
[ "$key" == "$SomethingElse" ] && stty echo

Have fun.

[Ben] - Actually, I think it was more like

echo "Be vewy vewy quiet - I'm hunting GWADES!"
stty --turn_off_the_noise
read --please_please_keep_it_quiet dont_let_anyone_find_out
stty --not_crazy
echo "I didn't study..."|mail -s 'Please fail me!' professor at chintech.org

[Neil] - Just "read -s key" should do it, if I've read the man page correctly.
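[ Putting Neil's suggestion into a runnable sketch (bash-specific; the canned stdin is only for demonstration, since a real login prompt would read from the terminal): -- Ed. ]

```shell
#!/bin/bash
# Minimal "invisible" password prompt: read -s suppresses echo, -r takes
# the input raw; the final printf restores the newline that the suppressed
# Enter keypress would normally have produced. (On shells without read -s,
# the stty -echo / stty echo pair does the same job.)
ask_login() {
    printf 'login: ' >&2
    read -r user
    printf 'password: ' >&2
    read -rs pass
    printf '\n' >&2
    echo "user=$user pass_length=${#pass}"
}

# Demo with canned answers on stdin (a real run would use the terminal):
printf 'alice\nsecret\n' | ask_login
```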


Port Linux on DSP

Neil Youngman (ny at youngman.org.uk)

[ My thanks again to Neil Youngman for his quoting messages otherwise unreadable to me. -- Kat ]

On or around Wednesday 30 August 2006 13:59, [,,,] reorganised a bunch of electrons to form the message:

> Dear sir:
>         I want port uclinux on freescale's 56800 DSP,can you give some
> advice on how to deal with it and please give some materials on that.
>                                                           Thank you!

That's a very specialised question and I doubt that this list has the expertise to answer it. I would suggest trying the ucLinux mailing lists, see https://www.uclinux.org/maillist/


Which process wrote that line into syslog?

Ville (v+tag at iki.fi)
Tue Jun 27 06:37:26 PDT 2006

Answered by: Ben, Thomas

Hi,

On an odd day, these began to pop into /var/log/messages:

                Jun 25 05:01:15 servername out of memory [19164^N\213^M at out]
                Jun 25 05:49:06 servername out of memory [25038^N\213^M at out]
                Jun 25 07:01:53 servername out of memory [10600^N\213^M at out]
                Jun 25 07:51:05 servername out of memory [16145^N\213^M at out]
                Jun 25 09:05:53 servername out of memory [24702^N\213^M at out]
                Jun 25 09:56:24 servername out of memory [30349^N\213^M at out]
                Jun 25 11:13:14 servername out of memory [7752^N\213^M at out ]
                Jun 25 12:05:04 servername out of memory [14101^N\213^M at out]
                Jun 25 13:23:53 servername out of memory [23758^N\213^M at out]
                Jun 25 14:17:00 servername out of memory [29815^N\213^M at out]
                Jun 25 15:37:52 servername out of memory [9325^N\213^M at out ]
                Jun 25 16:32:25 servername out of memory [16081^N\213^M at out]
                ....

(where 'servername' is the hostname of the server.)

Notice the absence of a colon (':').

[Ben] -

That's pretty odd. It looks like a hand-crufted message sent by "logger" or something similar, not an actual system report (which makes me very, very suspicious of where it may have come from.) In fact, an error message from the kernel that dealt with this kind of issue would look more like this:

Jun 25 05:01:15 localhost kernel: oom-killer: gfp_mask=0x1d6

[Ville] -

Exactly. I've seen those pesky kernel oom-rambo messages more than I wanted. This was not it.

I had a couple of "usual" suspects, a closed-source UPS monitoring program which is not exactly robustly coded and a closed-source virus scanner. I haven't been able to conclusively link them to this problem, though.

[Ben] -

That is, the kernel knows what to do about "out of memory" conditions; it's not just going to tell you and wait for you to do something about it. :)

[Ville] -

Well, I've found that out the hard way and several other ways. It's just too easy to have an oom problem. The kernel used to be even worse-behaved; nowadays it seems to sometimes axe the actual culprit process, not just innocent bystanders (like in early 2.4 days.) The overcommit setting in /proc also helps.

No cronjob (my usual suspect) seemed to fit the bill. Google didn't give an easy answer.

The first number could have been the PID, but then again, it could also be pure junk. It was probably a shortlived process, or at least it probably kept /dev/log open only for a short while.

This one almost^W drove me nuts. Which process and executable was littering my syslog? Was there a real emergency somewhere? It was almost like receiving bottle mail - pretty hard to answer...

[Ben] -

Heh. Yeah, it's sorta like trying to troubleshoot intermittent problems in electronics. There's more than one e-tech in a padded room due to those.

[Ville] -

I've done some programming, and it tends to happen there as well. The most tenacious ones only happen once a month, with a 3GB data set, in another country; and just when you have it in a reproducible state, waiting for a remote debugger to get set up, the machine must be rebooted...

I did:

        [1] grep -rsHU "out of memory \[" /usr/{sbin,bin,local/bin,local/sbin}

==> "out of memory \[" only matched one binary which I was able to rule out. "out of memory " matched 93 files. I couldn't find "@out".
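[ A related trick: strings(1) pulls the printable runs out of a binary, so you can hunt for a log message's format string and see its neighbours. A sketch on a fabricated "binary" rather than a real one: -- Ed. ]

```shell
# Build a fake binary containing a printf-style format string, then fish
# it back out; on a real system you'd run strings(1) over the binaries
# under /usr/bin, /usr/sbin, and so on.
printf 'ELFjunk\0out of memory [%%s]\0trailer\0' > /tmp/fakebin
strings /tmp/fakebin | grep -F 'out of memory'
```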

        [2] strace -p $(pidof syslogd) -o /root/logi

and

            tail -f /root/logi | grep "out of memory" | grep writev | while
            read line; do
                  date >> /root/oom-trace
                  fuser -uv /dev/log >> /root/oom-trace
                  POSPID=$(echo $line|sed 's,.*y \[,,; s,\^N.*,,')
                  ps $POSPID >> /root/oom-trace
            done

==> Does not work, since the second | buffers the pipeline so heavily that the payload (fuser et al.) didn't even trigger within the same minute (it required several lines of input to trigger at all).

[Thomas] -

Of course that won't work. You want --line-buffered with grep (if your version is GNU and supports it), or you can use the 'unbuffer' expect program; or, had you used awk (which would have greatly reduced your entire ugly pipeline above), it has an fflush() function.
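[ A sketch of the --line-buffered fix, using finite input in place of 'tail -f /var/log/messages' so it terminates; assumes GNU grep: -- Ed. ]

```shell
# With --line-buffered, GNU grep flushes each matching line as it
# arrives instead of waiting for a full stdio buffer to fill, so the
# loop body fires immediately for each match.
printf 'ok\nout of memory [123]\nok\n' \
  | grep --line-buffered 'out of memory' \
  | while read -r line; do
        echo "matched: $line"
    done
```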

[Ville] -

Hmm, how's awk any better than the perl I used in the second version (which did get rid of the buffering problem)? I anticipated that something like --line-buffered existed, but it was quicker to redo it with perl, which I knew was going to work.

[Thomas] -

YMMV on this. Choice is such a wonderful thing.

[Ville] -

The first option got ugly and lengthy incrementally as I added more filters. Doesn't that ever happen to you? I never meant it to be pretty, I just wanted to solve the problem...

        [3] strace -p $(pidof syslogd) -o /root/logi
              and
            tail -f /root/logi |
            perl -nle '
              next unless /^writev.*out of memory \[(\d+)/;
              print `date`;
              $pospid = $1;
              print "Possible pid = $pospid";
              print `echo \$\$`;
              print `fuser -uv /dev/log`;
              print `ps le $pospid`;
            ' | tee /root/oom-trace

==> This one triggered, but as I suspected, fuser was executed too late (the process had already closed /dev/log) and ps likewise (and there's no evidence that $pospid was really the pid.). [4] Encouraged by https://www.linux.com/howtos/Secure-Programs-HOWTO/sockets.shtml I hacked the following

--- syslogd.c~  Tue Jun 27 09:38:03 2006
+++ syslogd.c   Tue Jun 27 10:06:02 2006
@@ -1104,8 +1104,33 @@ int main(argc, argv)
 #ifdef SYSLOG_UNIXAF
                for (i = 0; i < nfunix; i++) {
                    if ((fd = funix[i]) != -1 && FD_ISSET(fd, &readfds)) {
+                       struct ucred cr;
+                       int cl=sizeof(cr);
+                       int ret;
+                       
+                       ret = getsockopt(fd, SOL_SOCKET, SO_PEERCRED, &cr, &cl);
+                       
                        memset(line, '\0', sizeof(line));
                        i = recv(fd, line, MAXLINE - 2, 0);
+                       
+                       dprintf("ret=%i  Peer's pid=%d, uid=%d, gid=%d\n",
+                               ret, cr.pid, cr.uid, cr.gid);
+                       if (ret == 0 && strstr(line, "out of memory"))
+                       {
+                            char tmp[1024];
+                            int t;
+                            snprintf(tmp, sizeof(tmp), "fuser -vu /dev/log >> /root/syslog.log");
+                            system(tmp);
+                            snprintf(tmp, sizeof(tmp), "ps le %d >> /root/syslog.log", cr.pid);
+                            system(tmp);
+                            memset(tmp, 0, sizeof(tmp));
+                            t = snprintf(tmp, sizeof(tmp),
+                                "ret=%d   Peer's pid=%d, uid=%d, gid=%d\n",
+                                ret, cr.pid, cr.uid, cr.gid);
+                             printchopped(LocalHostName, tmp, t + 1, fd);
+                       }
+                                     
+                                     
                        dprintf("Message from UNIX socket: #%d\n", fd);
                        if (i > 0) {
                                line[i] = line[i+1] = '\0';

into sysklogd-1.4.1.

I thought this was easier than writing a unix-domain reader/writer proxy to read /dev/log and feed the real syslogd.

==> That one triggered, but I only got

                ret=0   Peer's pid=0, uid=-1, gid=-1

and naturally, fuser & ps showed nothing interesting. This also happened when I tested the hack with

                initlog -s "out of memory [foo"

Later, the messages stopped appearing.

Some notes:

  - This was a production server => the most reckless stunts were out of the question
  - 2.4.32-rc1 kernel => hence no dnotify/inotify goodness (nevermind dprobes)
  - getsockopt(SO_PEERCRED) was introduced in 2.2, so that one WAS supposed to 
    work

[Ben] -

Well, SO_PEERCRED depends on the socket being created by socketpair() (see 'man 7 socket'); I'm not willing to dig through the syslog code to find out if that's the case, but it should be easy enough to hack up a short test prog to see if it works or not.

[Ville] -

Sure. The syslogd end just does

  sunx.sun_family = AF_UNIX;
  strcpy(sunx.sun_path, "/dev/log");
  fd = socket(AF_UNIX, SOCK_DGRAM, 0);
  bind(fd, (struct sockaddr*) &sunx, sizeof(sunx.sun_family)+strlen(sunx.sun_path));

Somewhat like https://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp?topic=/rzab6/uafunix.htm -- no socketpair().

The problem of course is that I don't know what the client does - because I don't know who the client is in this case...

[Ben] -

[Nod] I suspect that you're out of luck with that approach, then.

[Ville] -

  - sysklogd-1.4.1-14.legacy.7x (the source code I hacked was sysklogd-1.4.1-19 or
    so, from the Debian archive - the first I was able to find)
  - syslogd was NOT accepting crap from network (verified several times, and supported
    by strace - the message came from the /dev/log unix domain socket.)

The odds are, I'm never going to find out where the heck they came from. I'm still rather curious how a real linux admin is supposed to solve this sort of thing... I know Solaris has dtrace, but unfortunately I couldn't transfer the problem onto Solaris.

[Ben] -

I suppose you could try attaching 'strace' to the syslog process, if you prefer that approach.

[Ville] -

If you mean "strace -p $(pidof syslogd)", I did that in alternatives [1] and [2]. The only thing I found out was that someone writes the message to /dev/log and syslogd picks it up from there.

[Ben] -

Whoops! Sorry, I don't know how I managed to miss that.

[Thomas] -

Looking at your initial log from /var/log/messages (and assuming that was an accurate verbatim copy, as opposed to having your editor or some other program mangle it)

[Ville] -

I did change the server name to protect the innocent, but nothing else.

[Thomas] -

I would have said it was some weird error message from a program that perhaps logger(1) obligingly shunted there -- this could explain the lack of the rigid structure (cf. the missing colon) you noticed.

[Ville] -

logger(1) and initlog(1) both seem to add ':' there, but I did not try all the command line options.

At this point the alternatives I can think of are

  - writing a kernel module to override the connect() syscall with a logging variant
  - modifying glibc to do the same (provided the culprit is not statically linked)
  - instrumenting the kernel by some other means (perhaps via /dev/kmem)

[Thomas] -

Overkill and completely unnecessary.

[Ville] -

I do realize that (if nothing else) :).

[Thomas] -

If you can't reliably reproduce it, it can't be much of a problem.

[Ville] -

What ever happened to the pioneer spirit of finding out the solution just for the sake of it? ;-)

[Thomas] -

Depends how much time you have on your hands, I suppose. The output is indeed not similar to any output I would expect a standard process to generate -- so either it's a faulty process, or more likely some rogue process was running on your system.

[Ville] -

... which is why I initially got curious.

What I meant to ask TAG is, in more general terms, how does one find out which process/executable is responsible for an odd syslog line. I didn't mean to ask TAG to solve all my problems; sorry if it came out that way.

[Thomas] -

"With difficulty" is the answer. PIDs change each time a process is spawned, and any kind of persistent data one observes within the log files is usually just a snapshot in time.

[Ben] -

The question is, a snapshot of what time? Ville is definitely trying the right thing, since part of the SO_PEERCRED definition is "The returned credentials are those that were in effect at the time of the call to connect(2) or socketpair(2)" - not at the time of the call to 'getsockopt'.

[Ville] -

Trying, perhaps, but probably not having enough knowledge to do so :) (See my other message.)

[Thomas] -

Since this output is dubious at best, and points to something which is non-standard (it has to be since it's not easily identifiable) then you have two choices:

- Sit at your computer endlessly rotating logfiles to see if it happens again. (Comes with a health-warning though).

[Ville] -

Hmm, I find myself sitting in front of the computer anyway, and come to think of it, I'm not sure about my health either. But again, that's a separate question.

[Thomas] -

- Use a reporting tool that monitors your /var/log/messages file for any occurrences of a regexp, such that it also generates other data and emails you the report.

[Ben] -

Too slow, as his experience so far has shown - and would tell him nothing useful, since the message itself does not contain a pointer back to the process. As is indeed the case, sometimes.

[Ville] -

Actually, I've been using logcheck and LogWatch. No matter how delirious this question might sound, it didn't exactly come to me as a revelation in a premonition dream...

LogWatch does generate other data, and logcheck is mostly a filter. Both of these scan the log files periodically - woefully late to dig any further information of the then-gone mysterious process.

There might be other tools that scan the log file continuously, but I think they still suffer from the latency I faced in try [2] in the original problem report - you just can't execute ps(1) or anything else quite fast enough; the process is already gone by then. Remember that both ps and fuser scan /proc on linux - not exactly lightning fast.

[Thomas] -

I'd go with the second choice. :) I can't remember off the top of my head the names of programs which do that, but they do exist (that's some homework for you to do). Whether the reporting mechanism suffers from any latency between the message appearing in /var/log/messages and any subsequent data you might rely on thereafter (such as a 'ps' snapshot) is unclear; you'd have to see.

[Ville] -

Well, I'll have a look, but I do fear that's a dead end.

I think the answer (if any) would have to be something that reliably gets the information when the process is still connected to /dev/log. That's exactly what I tried to achieve by hacking syslogd to save the PID of the process that's at the other end of the /dev/log unix domain socket.

[Thomas] -

Going down the root-kit avenue is probably the better option still, even if you do consider it another question in its own right.

Note that you might want to install some root-kit detection tools, to be sure it's not some h4x0r.

[Ville] -

Thanks, I did consider that (but that's a separate question...)

[Thomas] -

How do you mean? It's something you should look into.

[Ville] -

I mean "I am naturally looking into that, but that is a separate question."

None of these seem even remotely feasible (except for, perhaps, the glibc alternative.)

Please, hit me with a cluestick!

[Ben] -

Ville, I don't think that such a cluestick exists; you've definitely got a Very Large Clue, and have done a bunch of right things in pursuit of that elusive beast. You may, in fact, win the "most clued querent ever" award in TAG - [ ... ]

[Ville] -

... the crucial thing to realize, of course, is that - the MOST clued querents never get to ask TAG, because they already have the problem solved - the SECOND MOST clued ones know where to stop and don't bang their heads against the wall endlessly.

[Ben] -

Sure; my point was that, out of the querents we get, we're most likely to see a) the completely lost, b) the moderately clued looking for "the next step", and c) highly clued but with a complex, subtle problem. a) is fine, b) is interesting, and we don't get enough of it, and c) can be frustrating by its nature but the search to find the answer is usually fascinating (where it's not so application/situation/querent specific that it's of no help to anyone other than the querent.) Your question manages to pass that hurdle without even ticking the top bar - I see the answer to it as something that would be very useful to admins and other system people everywhere.

[Ben] -

[ ... ] with the oak leaf cluster and the maple leaf. It's just that, given your bug's intermittent (and now completely absent) nature, there's nothing left to trace.

[Ville] -

Exactly. That's the problem.

But when you've already walked such a long way, and the rainbow vanishes and there's no longer a pot of gold to hunt -- the only thing you can do is to look around and try to learn. That's what I was trying to do here. The actual problem might never show up again, but perhaps I'll be just a little better prepared for the next one. I already got to know about grep --line-buffered (although it is not present in grep-2.4.2-5 I have on the server), getsockopt(SO_PEERCRED) and its socketpair() limitation. And it got me thinking about Solaris dtrace, which actually now does sound useful.

[Ben] -

I agree with you, and I appreciate that motivation highly - since I believe that this is how the best types of learning happen. Being able to fix the problem is very, very important - but gleaning the knowledge of how to fix that whole category of problem is miles better, especially if it can be propagated to others.

[Ville] -

It did appear for about two days, perhaps once an hour on average.

The system was not that short of memory at the time (it's been much worse at times), so it might have had something to do with the data the program was chewing (for example, an email spam/virus scanner trying to bite too big a mail.)

[Ben] -

As to how I would go about tracing such a bug if it was present on my system - I think I'd note how often it occurred (often, hopefully), and start killing all non-essential processes to narrow down the list of what it could be. Next, I'd see if I could replace the essential processes with similar programs, one at a time, and look for the messages to disappear.

[Ville] -

The problem with an oldish production server is that not all the programs can be replaced. Thankfully, this box doesn't run Oracle; but, for example, the UPS monitoring program has been kind of shaky, and it's the only thing that talks to this brand of UPSes. A few years ago I did trace an unexplained log message back to this very software.

I actually did something remotely like this. I weeded some spam from the mail queue, plus one large mail that was generated by an overly keen logger. That might have caused the problem to disappear, but then again it might be completely unrelated.

[Ben] -

You did mention that it's a production server... yeah, but what do you do with a production server if it's got (e.g.) a rootkit on it? The answer is the same as with any other system: you take it off-line (in the case of the server, hopefully by replacing it with a working machine) and fix it. That part of the scenario doesn't change regardless of how "critical" that machine is; the problem you're describing supersedes that critical need, since it implies a far more dangerous problem than the one on the surface.

[Ville] -

This is a very tough call to make. Obviously, even given infinite time for system administration, you can't just format and redo a server each time something unexplained happens. Many times I've traced unexplained events for hours and eventually found a perfectly understandable (if not valid) reason for them. In the case of security problems there's usually also been a strong clue, having looked hard enough. That doesn't mean there's a decisive clue in all cases, and that's when it's hard to decide what to do. Wipe out and reinstall? Forget about it?

[Ben] -

It depends on your security policy, of course. Most places don't care enough about it to do something like that; those that do set up systems that make the "wipe/reinstall" cycle a trivial, nearly-automatic procedure and don't consider it a problem to do so.

[Ville] -

I don't agree 100%. I still think it implies the possibility of a far more dangerous problem. This is about risk management. Obviously the magnitude of the threat here is very severe, but so is, say, crashing a car on a highway. If you hear strange sounds from a wheel, you might stop and investigate, but if you (and the mechanics) can't spot the problem and it doesn't happen again, you might forget about it and go on instead of buying a new car.

[Ben] -

Not if you're driving a Formula1 car, though. :) At that point, swapping in a new steering system is essentially a "standard" task; the stakes are very high, the car is made for it, and "heck, I dunno" is not an acceptable answer.

[Ville] -

Now, the magnitude and the likelihood of the threat vary, but you still must draw the line somewhere. Even though the Right Thing to do would be to re-install.

(Or at least tell me why getsockopt(SO_PEERCRED) failed...)

[Ben] -

Write a test program and let us know. :)

[Ville] -

I'll try getsockopt(SO_PEERCRED) on socketpair() vs. socket()-bind()-listen() a la

https://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp?topic=/rzab6/uafunix.htm

and let you know.

[Ben] -

Better yet, if you discover a bug, let the developers know.

[Ville] -

I'm sure it's in my code - this was my first getsockopt(SO_PEERCRED) hack, after all. Although there have been bugs: https://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/release-notes/as-s390/RELEASE-NOTES-U1-s390-en.html

[Thomas] -

Most likely it was syslogd trying to use that call to ascertain the user/process who opened the logfile in the first place.

[Ville] -

Umh, come again?

I couldn't find getsockopt(... SO_PEERCRED ...) anywhere in the sysklogd source in the first place [1] -- that's the very reason I added it there.

Thank you for your insights.

[1] Most likely because getsockopt(SO_PEERCRED) is linux-only, and sysklogd is older than getsockopt(SO_PEERCRED) in the linux kernel.

[ Some time passes... ]

[Ville] -

Adding the https://iki.fi/v/tmp/syslogd-peercred.patch snippet to the (small and well-commented) server example from above:

      sd2 = accept(sd, NULL, NULL);
      if (sd2 < 0)
      {  
         perror("accept() failed");
         break;
      }
      
+      {   
+          struct ucred cr;
+          int cl=sizeof(cr);
+          int ret;
+          
+          ret = getsockopt(sd, SOL_SOCKET, SO_PEERCRED, &cr, &cl);
+          printf("ret=%i  Peer's pid=%d, uid=%d, gid=%d\n",
+                  ret, cr.pid, cr.uid, cr.gid);
+          ret = getsockopt(sd2, SOL_SOCKET, SO_PEERCRED, &cr, &cl);
+          printf("ret=%i  Peer's pid=%d, uid=%d, gid=%d\n",
+                  ret, cr.pid, cr.uid, cr.gid);
+      }


% ./server& ./client     
Ready for client connect().
[3] 12836
ret=0  Peer's pid=12836, uid=1414, gid=100
ret=0  Peer's pid=12837, uid=1414, gid=100
250 bytes of data were received

so it works with socket()-bind()-listen()-accept(), not just with socketpair(). (To be completely frank, I found socket(7) and unix(7) a tad vague about these things.) The output does reveal that you need to getsockopt() the accepted connection, not the listening socket.

But syslogd.c doesn't do accept(), it just select()'s and then recv()'s.

I believe the crucial difference is explained in an syslogd.c comment

    * Changed: unixm is gone, since we now use datagram unix sockets.
    * Hence we recv() from unix sockets directly (rather than
    * first accept()ing connections on them), so there's no need 
    * for separate book-keeping.  --okir

That's probably why it doesn't work.

More precisely, the boulder.ibm.com example does

  sd = socket(AF_UNIX, SOCK_STREAM, 0);
whereas syslogd.c does

  socket(AF_UNIX, SOCK_DGRAM, 0);

[Ville] -
On Wed, Jun 28, 2006 at 10:53:45AM +0300, you [Ville]  wrote:
> 
> Yes .../lib/ likewise. I actually tried to grep more directories, but I
> concluded it couldn't 100% solve the problem, since "out of memory" string
> occurs in so many places, and the rest might just as well be a random
> argument to sprintf("%s").

But looking closer, I noticed f-prot (one of my usual prime suspects) has " [%s]" just above "out of memory" in strings(1) output.

One of the first things I tried was in fact something like

 cat > /usr/local/f-prot/f-prot.WRAP <<END 
 #!/bin/sh

 TMP=/var/tmp/fprot.last.$$
 strace -o $TMP /usr/local/f-prot/f-prot $*

 if grep -q "out of memory" $TMP; then
   free | mutt -a $TMP -s "f-prot out of memory" root
 fi

 rm $TMP
 END

 mv /usr/local/f-prot/f-prot /usr/local/f-prot/f-prot.REAL
 ln -s /usr/local/f-prot/f-prot.WRAP /usr/local/f-prot/f-prot

I trust you realize why that was not a great idea (hint: that's when I last had an appointment with the kernel oom killer...) Eventually I got it right, of course, but it didn't trigger soon, so I took it off.

(I didn't list this step in the original message, because it was just a shot in the dark, and the results were not so great.)

Trying to make f-prot run out of memory _on purpose_ turns out to be surprisingly difficult. Without purpose, it really has not been a problem in the past. Larger .ZIPs have caused it to wake up kernel oom rambo, which in turn has killed a lot of innocent daemons. Now, no matter what I try, I can't seem to get it run oom.

[Ben] -

Well, if they have somehow managed to handle the "huge ZIP" problem, great for them - but, just in case, see the mime-encoded chunk below (it's a 238-byte long result of double-bzipping 1 terabyte of nulls.) That tends to break most AV filters, so I'm not sending it as an attachment; you can always decode it and send it to yourself, though. :)

---------------------------------------------------------------------
--vkogqOf2sHV7VnPd
Content-Type: application/octet-stream
Content-Disposition: attachment; filename="1TB_of_nulls.bz2.bz2"
Content-Transfer-Encoding: base64

QlpoOTFBWSZTWcaRHXYC7Jt//2LxQgjDAWCkcQIIMMBAQABEEUSAYCEACFAAAAC0ADABcABg
NGQ0GEA0A00AAJqqoADQDQA0AYj0JoYDRkNBhANANNAAYFKIWwUohYilELZ65SiFyaZSiF5b
m6UohYpSiF+ylELblKIXjm7tXMUohYYSlELeKUQtHEUohaJSiFkx5ceEpRC9SlELNzylEL6K
UQvwpRC9ilELKUohfZSiFl4ZSiF8ZsMxSiF8ylELRrwlKIWfPyylELjlKIWnThKUQsmnzKUQ
tWjXKUQtf9KUQtf+LuSKcKEhjSI67A==

--vkogqOf2sHV7VnPd--
---------------------------------------------------------------------

[Ville] -

Good idea. I recall ridiculing f-prot with such bombs a few years ago, when they got some publicity. Back then, f-prot failed miserably (had a meeting with the kernel oom rambo). But a few years ago, f-prot had trouble even with 'normal' large .zips. Now I'm surprised to see how small a footprint it has chewing large archives. They must have done something, although I admit I had lost hope.

To be frank, I was unable to base64-decode your attachment. I fed it to 'base64 -d', which said

 base64 1.2
 Copyright 2004, 2005 Simon Josefsson.
 Base64 comes with NO WARRANTY, to the extent permitted by law.
 You may redistribute copies of Base64 under the terms of the GNU
 General Public License.  For more information about these matters,
 see the file named COPYING.
 BZh91AY&SY?v??b???0?@@DD^P?0p^Base64: invalid input

The starting (BZ) is at least right, but bzip2 says:

 base64 -q -d < oo |bzip2 -d > /dev/null
 base64: invalid input

 bzip2: Compressed file ends unexpectedly;
         perhaps it is corrupted?  *Possible* reason follows.

Anyway, I did:

  f=foo
  mkdir $f
  for i in $(seq 1 10); do 
       touch $f/$f$i; 
       perl -e 'truncate "'$f/$f$i'", 1024**2'; 
  done 
  cp ~test-virus.gz foo/foo.gz

  for j in bar zot urf goo zik; do 
      mkdir $j
      zip -9 $j/$j.zip $f/$f* 
      for i in $(seq 2 10); do ln $j/$j.zip $j/$j$i.zip; done
      f=$j
  done

(test-virus.gz contains the standard Eicar test virus in gzipped form.)

That should contain 100GB of zero and 10000 virii, if I counted right.

Surprisingly enough, it found the test virii:

  test2/zik/zik.zip->goo/goo5.zip->urf/urf8.zip->zot/zot4.zip->bar/bar9.zip->foo/foo.gz->test-virus
  Infection: EICAR_Test_File
  (...)

  f-prot zik/zik.zip | grep -c Infection:
  10000

and only consumed a tad over 6000k while doing so;

12296 test      20   0  6044 6044   400 R    39.7  0.6   0:09 f-prot
                        ^^^^

Since adding more zeros seemed to add more cpu time than memory use, I next went ballistic with the recursion, making it 20 branches wide and 30 levels deep.

  f=foo
  mkdir $f
  for i in $(seq 1 10); do
       touch $f/$f-$i;     
       perl -e 'truncate "'$f/$f-$i'", 1024';
  done
  cp ~/test-virus.gz foo/foo.gz
  
  for j in $(seq 1 30); do
      mkdir $j                    
      zip -9 $j/$j-1.zip $f/$f*
      for i in $(seq 2 20); do ln $j/$j-1.zip $j/$j-$i.zip; done
      f=$j                                                   
  done
  zip -9 all.zip $f/$f*

Now, that must hurt... (Okay, 20^30 is far less than a googol, still "much".)

With the largest of these, it segfaulted until I gave it 8m of memory. With 8m, it happily churned through the all.zip (not all of it, since I only have finite time, but still):

  test2/all.zip->30/30-10.zip->29/29-10.zip->28/28-10.zip->27/27-10.zip->26/26-10->25/25-10.zip
  Infection: EICAR_Test_File
  (...)

  20534 test    18   0  7168 7168   412 R    64.5  0.7   5:33 f-prot

But limiting the memory caused it to segfault rather than print an OOM message.

So if it's ever going to OOM in a "normal" situation (with the normal ulimits), that's due to an odd bug, not a pathology in handling recursion.

Now, compare that to

   2393 haldaemo  16   0 72136 3364  636 S  0.0  0.7  14:54.79 hald                                                       
  23738 user      20   5 86908 9736 2504 R  0.3  1.9  14:27.96 xmms                                                       
  15806 user      15   0 30804 1604 1096 S  0.0  0.3  35:14.98 gnome-settings-daemon 
  15866 user      15   0 32168 4016 2820 S  0.7  0.8 744:28.97 gkrellm                                                    

And so on and so on. Not bad from f-prot, I'd say.

[Ben] -

You could also try setting 'ulimit' to squeeze f-prot down to a small footprint and see what that does.
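
For instance, a sketch of how that might look (the 65536 kB cap here is an arbitrary demo value so the shell itself has room to run; for the experiment above you'd use 6144 kB, i.e. 6m):

```shell
# Apply the limits in a subshell so they don't stick to the login
# shell; ulimit sizes are in kB.
(
  ulimit -v 65536    # cap the virtual address space
  ulimit -d 65536    # cap the data segment
  # replace ':' with the real scanner invocation, e.g. f-prot zik/zik.zip
  :
) && echo "scan finished under the cap"
```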

[Ville] -

I've been trying to do that. As I said, with 'addressspace', 'memoryuse' and 'datasize' limited to 5m, f-prot fails to even load the virus db - if I up it to 6m, f-prot churns through almost everything.

[Ville] -

The per-process 'addressspace', 'memoryuse' and 'datasize' limits seem to have megabyte granularity: with 5m, f-prot can't even read the virus db (it fails with a different message - I wouldn't be surprised if it doesn't properly check every malloc()), and with 6m, it happily churns through everything I throw at it. And it may be just one allocation that triggers the log message, not every allocation that might fail. But I'm still trying. (Oh, the joy of having source code vs. a closed-source program...)

Thank you for your insights! Much appreciated!


Which process wrote that line into syslog? [2]

Ville (v+tag at iki.fi)
Sat Jul 1 13:01:10 PDT 2006

Answered by: Ben

[ This thread resulted from an earlier one of the same name, but covers rather different ground - so it got an entry of its own. -- Ben ]

[Ben] -

Sure; my point was that, out of the querents we get, we're most likely to see a) the completely lost, b) the moderately clued looking for "the next step", and c) highly clued but with a complex, subtle problem.

a) is fine,

[Ville] -

I can imagine it is - especially when the querent shows some respect to the ones who answer, and a willingness to learn rather than just to get rid of the problem.

[Ben] -

b) is interesting, and we don't get enough of it,

[Ville] -

An optimist would perhaps presume that this is because the Linux system and its documentation are in such good shape that a clued person rarely runs into a dead-end. A pessimist might find other reasons...

[Ben] -

and c) can be frustrating by its nature but the search to find the answer is usually fascinating (where it's not so application/situation/querent specific that it's of no help to anyone other than the querent.) Your question manages to pass that hurdle without even ticking the top bar - I see the answer to it as something that would be very useful to admins and other system people everywhere.

[Ville] -

If you are in a dead-end, I think you can often (maybe not always) find a pattern or a generalization of the problem if you step back and look at it from a distance.

In this case it is "how do I find which process wrote that line into syslog?", which in turn divides into several other, more and less general, questions. While exploring the alternatives to get a grip on the problem, one surely finds a set of general patterns that can be useful in situations other than the one at hand. If they are not familiar yet, looking closer at them might be valuable - the next time, you'll know their possibilities and limitations right from the start.

[Ben] -

I agree with you, and I appreciate that motivation highly - since I believe that this is how the best kinds of learning happen. Being able to fix the problem is very, very important - but gleaning knowledge of how to fix that whole category of problem is miles better, especially if it can be propagated to others.

[Ville] -

Yes, the 'category' metaphor describes quite accurately what I tried to say.

[Ben] -

I assume you know about Perl's "$|" variable, then. If you don't, 'perldoc perlvar' will be highly enlightening. :)

[Ville] -

Actually, I do.

It was just that with the perl variation, buffering was not a problem, since there was only one pipe in the equation. In hindsight, it might have been, and if I had come to think of $|, I would have used it proactively. Good idea.

[Ben] -

I will say that Solaris tools for this kind of thing do come to my mind a bit more readily than anything similar in Linux. For one thing, in Solaris, you could always just enable BSM (be sure to have LOTS of disk capacity for logging, though!) and beat on that machine with every software hammer you've got until it does produce one of those warnings - then, examine that microsecond-by-microsecond log that BSM produces. You will definitely know who and what did X at Y time. I know that there's got to be something like that for Linux, since I recall hearing that Linux can pass the DoD "C2" certification - but I don't know what that app would be.

[Ville] -

BSM sounds fascinating, but perhaps overkill where Solaris DTrace is available...

https://www.sun.com/bigadmin/content/dtrace/
https://users.tpg.com.au/adsln4yb/dtrace.html
https://daemons.net/~matty/articles/solaris.dtracetopten.html
https://www.sun.com/software/solaris/howtoguides/dtracehowto.jsp

I think something like the following DTrace script:

      dtrace -n 'syscall::connect:entry 
                 / copyinstr(arg1 + 2) == "/dev/log" /
                 { printf("%Y %s[%d] connect(\"/dev/log\")",
                          walltimestamp, execname, pid); }'

(Completely untested, as I have no Solaris installation around. Note that in connect(2), arg0 is the file descriptor and arg1 the sockaddr pointer; the "+ 2" skips the sun_family field to reach sun_path.)

would have solved the problem.

There's something like that for Linux: kprobes & systemtap:
https://sourceware.org/systemtap/
https://www.redhat.com/magazine/011sep05/features/systemtap/
(And Frysk, Oprofile and LTT).

Systemtap wasn't available for the problem server, though - the kernel was too old.

I gather it would have been something like

   stap -p2 -e 'probe kernel.function("sys_connect") { 
       log "connect /dev/log called: " . 
           execname() . string(pid()) . 
           " at " . string(gettimeofday_s()) }'

to solve the problem with systemtap.

I haven't yet been able to test that, since I couldn't get systemtap working on the first box I installed it on.

[Ben] -

It depends on your security policy, of course. Most places don't care enough about it to do something like that; those that do set up systems that make the "wipe/reinstall" cycle a trivial, nearly-automatic procedure and don't consider it a problem to do so.

[Ville] -

Yes, the cost of reinstall is one crucial variable.

[Ben] -

Not if you're driving a Formula1 car, though. :) At that point, swapping in a new steering system is essentially a "standard" task; the stakes are very high, the car is made for it, and "heck, I dunno" is not an acceptable answer.

[Ville] -

For the Mercedes F1 team it seems to be ;) (I don't know if you follow the F1 series, but the Finnish driver Kimi Räikkönen has been plagued with mechanical problems for the past few years...)

Anyway, I so agree with you on that.

One more thing I tried was the LD_PRELOAD trick:

--8<-----------------------------------------------------------------------
/* 
   gcc -fPIC -shared -ldl -o libwrap_connect.so wrp.c
   LD_PRELOAD=/path/to/libwrap_connect.so initlog -s "Test"
*/
#include <stdlib.h>
#include <stdio.h> 
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <sys/stat.h>
#include <unistd.h>
#include <time.h>    /* time(), ctime() */
#include <limits.h>  /* PATH_MAX */

/* glibc exports the original symbol as __connect */
extern int __connect(int sockfd, const struct sockaddr *sa, socklen_t addrlen);


static int same_inode(char* file1, char* file2)
{
    struct stat st1, st2;
    if (stat(file1, &st1) != 0 ||
        stat(file2, &st2))
        return 0;

    return st1.st_dev == st2.st_dev && 
           st1.st_ino == st2.st_ino;
}

static int read_file(char* file, char* buf, int sz_buf)
{
    int ret;
    FILE* f = fopen(file, "r");
    if (!f) return 0;
    ret = fread(buf, 1, sz_buf, f);
    fclose(f);
    
    return ret;
}
       
int connect(int sockfd, const struct sockaddr *sa, socklen_t addrlen)
{
    struct sockaddr_un* serv_addr = (struct sockaddr_un*)sa;
    if (serv_addr->sun_family == AF_UNIX &&
        same_inode(serv_addr->sun_path, "/dev/log"))
    {
        /* Note: should use file locking, or at least something like
           /tmp/LOG.<pid> here */

        FILE* f = fopen("/tmp/LOG", "a");
        if (f)
        {
             int s;
             time_t t;
             char exename[PATH_MAX] = { 0 };
             char cmdline[512] = { 0 };
             readlink("/proc/self/exe", exename, sizeof(exename));
             s = read_file("/proc/self/cmdline", cmdline, sizeof(cmdline));
             for (--s; s > 0; s--) 
                 if (cmdline[s] == '\0')
                     cmdline[s] = ' ';

             time(&t);
             fprintf(f, "%s\t%s [%d] calling connect(\"/dev/log\")\n\t%s\n",
                     ctime(&t),
                     exename,
                     getpid(),
                     cmdline);
             fclose(f);
        }
    }

    /* N.B. Simply replacing <symbol> with __<symbol> to access the original 
            function doesn't always work. See the output of "nm -D <lib>" to
            check if __<symbol> is available.

            You could do something like

               typedef int (*connect_t)(int sockfd, const struct sockaddr *sa, socklen_t addrlen);
               static connect_t real_connect;
        
               if (!real_connect)
               {
                   real_connect = (connect_t)dlsym(RTLD_NEXT, "connect");
                   if (!real_connect) exit(EXIT_FAILURE);
               }
               }

            to reach the original symbol more reliably. */

    return __connect(sockfd, sa, addrlen);
}
--8<-----------------------------------------------------------------------

I got

  Fri Jun 30 09:02:11 2006
          /sbin/initlog [10383] calling connect("/dev/log")
          initlog -s Test

That was actually surprisingly easy to do (for a quick and dirty hack, you could shorten the above a bit; same_inode -> strcmp etc).

It gives you a lot of possibilities for debugging, but it also has downsides:

- doesn't work with statically linked executables
- hard to enable globally
- doesn't affect daemons that are already running

I did have a look at syscall wrapper kernel modules, but sys_call_table hasn't been exported since 2.4, so that's no longer feasible (and was never encouraged.)

I also tried inotify-tools (https://rohanpm.net/inotify-tools), but inotify doesn't seem to note unix domain socket connect() as file access.

As a side-note, a friend of mine tested the viral test.zip I created to stress F-Prot with a handful of anti-virus programs and sent it to a couple of anti-virus vendors. While F-Prot actually did pretty well with it, not all anti-virus programs fared that well. According to him, at least Norman already fixed up their product somewhat :)

[Ben] -

On Thu, Jul 06, 2006 at 12:02:12PM +0300, Ville wrote:
> On Sat, Jul 01, 2006 at 11:01:10PM +0300, you [Ville] wrote:
> > 
> >    stap -p2 -e 'probe kernel.function("sys_connect") { 
> >        log "connect /dev/log called: " . 
> >            execname() . string(pid()) . 
> >            " at " . string(gettimeofday_s()) }'
> 
> I actually tried this on a Fedora 5 system, and systemtap seems pretty cool
> indeed.
> 
> It requires these:
>    % yum install systemtap kernel-devel
> And 'kprobes' enabled in kernel (Fedora and RHEL have that by default.)
> 
> and after that, you can do 
> 
>    % (sleep 10; mkdir test)&
>    [1] 9086                                         
>    % stap -v -e 'probe kernel.function("sys_mkdir") 
>                  { log("mkdir() called: "); 
>                    log(execname()); 
>                    log(string(pid()));
>                    log(string(gettimeofday_s())); }'

Actually, this is quite similar to the BSM config file - except for the 'log()' syntax. You'd just tell it what to log - reads, writes, etc. Thanks for writing this up, by the way: that's an end of Linux that I have never explored, myself, and I'm very, very chuffed to hear that there's good tools available for it.

> So I definitely could have solved the problem with systemtap, if the kernel
> and distro had been new enough.
> 
> The only downside is this:
> 
>    % rpm -qi kernel-debuginfo
>    Name        : kernel-debuginfo             Relocations: (not relocatable)
>    Size        : 1730037086                       License: GPLv2
>                  ^^^^^^^^^^
>    % rpm -qi kernel-devel
>    Name        : kernel-devel                 Relocations: (not relocatable)
>    Size        : 13954129                         License: GPLv2
>                  ^^^^^^^^

Yikes. Well, that's generally the case with leaving all the debug info in an executable - and, of course, doing it with the kernel verges on the ridiculous.

2GB versus 14MB, wow. Pretty impressive. Well, if you go hunting elephants, you definitely need a big-bore rifle...

[Ville] -

On Thu, Jul 06, 2006 at 10:48:27AM -0400, you [Benjamin A. Okopnik] wrote:
> >    % (sleep 10; mkdir test)&
> >    [1] 9086                                         
> >    % stap -v -e 'probe kernel.function("sys_mkdir") 
> >                  { log("mkdir() called: "); 
> >                    log(execname()); 
> >                    log(string(pid()));
> >                    log(string(gettimeofday_s())); }'
> 
> Actually, this is quite similar to the BSM config file - except for the
> 'log()' syntax. 

That's very similar to Solaris DTrace (which I gather is slightly newer?)

I haven't actually tried it (I have no access to Solaris), but I have drooled over several praising articles about it (see my earlier mail for links). It is definitely nice that Linux is gaining something similar.

> You'd just tell it what to log - reads, writes, etc. Thanks for writing
> this up, by the way: that's an end of Linux that I have never explored,
> myself, and I'm very, very chuffed to hear that there's good tools
> available for it.

Great :) I thought I was not the only one who hadn't yet explored it. I won't paste the systemtap URLs again, since you probably spotted them in my earlier mail.

> >    % rpm -qi kernel-debuginfo
> >    Size        : 1730037086                       License: GPLv2
> >    % rpm -qi kernel-devel
> >    Size        : 13954129                         License: GPLv2
> 
> Yikes. Well, that's generally the case with leaving all the debug info
> in an executable - and, of course, doing it with the kernel verges on
> the ridiculous.
>
> 2GB versus 14MB, wow. Pretty impressive. 

But you need to install both for systemtap. :-P

> Well, if you go hunting elephants, you definitely need a big-bore rifle...

Sure, but it might feel like hunting for a fruit-fly riding a mammoth...

[Ben] -

On Thu, Jul 06, 2006 at 09:21:28PM +0300, Ville wrote:
> On Thu, Jul 06, 2006 at 10:48:27AM -0400, you [Benjamin A. Okopnik] wrote:
> > >    % (sleep 10; mkdir test)&
> > >    [1] 9086                                         
> > >    % stap -v -e 'probe kernel.function("sys_mkdir") 
> > >                  { log("mkdir() called: "); 
> > >                    log(execname()); 
> > >                    log(string(pid()));
> > >                    log(string(gettimeofday_s())); }'
> > 
> > Actually, this is quite similar to the BSM config file - except for the
> > 'log()' syntax. 
> 
> That's very similar to Solaris DTrace (which I gather is slightly newer?)

Well, BSM is actually a full-time "recorder" that tracks exactly who does what and at what time; it's one of the reasons that full-spec C2 systems are such a huge admin load. DTrace has similar capabilities, as I understand it, but is more of a specific-case troubleshooting tool.

> I haven't actually tried it (I have no access to Solaris), but I have
> drooled over several praising articles about it (see my earlier mail for
> links). It is definitely nice that Linux is gaining something similar.

Yeah, DTrace was quite a leap in the state of the art when it first came out. Again, I don't usually deal with that end of administration myself, but I agree - it's wonderful to see similar tools made available for Linux.

> > 2GB versus 14MB, wow. Pretty impressive. 
> 
> But you need to install both for systemtap. :-P

[laugh] Well, disk space is cheap nowadays. I still recall the days when a 21MB hard drive cost over $200 - i.e., ~$10/MB. At those prices, the above would have been a problem indeed.

> > Well, if you go hunting elephants, you definitely need a big-bore rifle...
> 
> Sure, but it might feel like hunting for a fruit-fly riding a mammoth...

Ah - the "use a tiny-caliber rifle but DON'T miss" scenario. Very familiar. :)

[Ville] -

On Thu, Jul 06, 2006 at 09:21:28PM +0300, you [Ville] wrote:
>  
> > >    % rpm -qi kernel-debuginfo
> > >    Size        : 1730037086                       License: GPLv2
> > >    % rpm -qi kernel-devel
> > >    Size        : 13954129                         License: GPLv2
> > 
> > Yikes. Well, that's generally the case with leaving all the debug info
> > in an executable - and, of course, doing it with the kernel verges on
> > the ridiculous.
> 
> > 2GB versus 14MB, wow. Pretty impressive. 
> 
> But you need to install both for systemtap. :-P

Well, as it happens, it seems Roland McGrath, Dave Jones and the other Fedora kernel fellows have managed to slim down the elephant:

https://kernelslacker.livejournal.com/43037.html

--8<-----------------------------------------------------------------------
                 -debuginfo, now with 86% more awesome!   
Before:

-rwxr-xr-x 1 48 48 693332940 Jun 18 02:53  kernel-debuginfo-2.6.17-1.2136_FC5.i686.rpm

After:

-rwxr-xr-x 11 48 48 163280651 Jul 13 22:24 kernel-debuginfo-2.6.17-1.2396.fc6.i686.rpm
-rwxr-xr-x 11 48 48 27167808 Jul 13 22:07  kernel-debuginfo-common-2.6.17-1.2396.fc6.i686.rpm
-rwxr-xr-x 11 48 48 172196950 Jul 13 22:12 kernel-kdump-debuginfo-2.6.17-1.2396.fc6.i686.rpm
-rwxr-xr-x 11 48 48 163326708 Jul 13 22:18 kernel-PAE-debuginfo-2.6.17-1.2396.fc6.i686.rpm

We'd all be downloading a lot more bits if it wasn't for the efforts of
Roland McGrath on this one.
The amount of change in the kernel specfile isn't as much as I'd feared,
which was one reason I had procrastinated over this (besides constantly
seeming to find something more important to be tackling, like "my kernel
doesn't boot").

It may still need some slight tweaks, but it's getting there. Longterm,
hopefully we can bring down the size of the individual rpms further too.

[Ville] -

On Sun, Jul 16, 2006 at 09:26:31PM +0300, you [Ville] wrote:

> > But you need to install both for systemtap. :-P
> 
> Well, as it happens, it seems Roland McGrath, Dave Jones and the other
> Fedora kernel fellows have managed to slim down the elephant

Not just that, but now there's a GUI / IDE for systemtap as well:
https://stapgui.sourceforge.net/features.shtml

[Ben] -

On Fri, Jul 21, 2006 at 02:12:47PM +0300, Ville wrote:
> On Sun, Jul 16, 2006 at 09:26:31PM +0300, you [Ville] wrote:
> > > But you need to install both for systemtap. :-P
> > 
> > Well, as it happens, it seems Roland McGrath, Dave Jones and the other
> > Fedora kernel fellows have managed to slim down the elephant
> 
> Not just that, but now there's a GUI / IDE for systemtap as well:
> https://stapgui.sourceforge.net/features.shtml

I've just been offered an opportunity to be certified as a Solaris-10 "Operating System Internals" instructor (it would require my going to a week-long, open-skull/install-firehose type of training seminar in Boston.) A part of the training involves lots of heavy-duty work with 'mdb', 'kmdb', and 'dtrace'. I don't think I'm going to do it - it's not really my cuppa tea, even though there's theoretically a bunch of money in it - but having someone who teaches this course and knows Linux take a look at 'systemtap' would make for a very interesting comparison. I'm going to see if a quiet word in the right ear will result in anything publishable... at least eventually, since this class is still three weeks away.

Talkback: Discuss this article with The Answer Gang


Bio picture

Kat likes to tell people she's one of the youngest people to have learned to program using punchcards on a mainframe (back in '83); but the truth is that since then, despite many hours in front of various computer screens, she's a computer user rather than a computer programmer.

When away from the keyboard, her hands have been found full of knitting needles, various pens, henna, red-hot welding tools, upholsterer's shears, and a pneumatic scaler.


Copyright © 2006, Kat Tanaka Okopnik. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006

Talkback

By Kat Tanaka Okopnik


Talkback

Talkback:124/smith.html (10)
Talkback:124/smith.html (11)
Talkback:124/smith.html (12)
Talkback:126/howell.html (3)
Talkback:128/adam.html
Talkback:128/ramanathan.html
Talkback:129/okopnik1.html
Talkback:130/tag.html
Talkback:130/neville.html

Talkback:124/smith.html (10)

[ In reference to Build a Six-headed, Six-user Linux System in LG#124 ]

Amber Sanford (amber at modernspaces.com)
Thu Aug 17 11:41:25 PDT 2006

Followed up by: Ben

Non-linux machines: any recommendations for this set-up though running on Windows XP?

[Ben] - You bet: switch to Linux. :)

Amber, I'm not the world's top expert when it comes to Windows, but I've got quite a lot of experience and expertise with it under my belt. Speaking from that perspective and to the best of my knowledge, there are a few minor things that you can do in this direction - i.e., a pair of video cards and even multiple keyboards attached to a single CPU - but all of this hardware is still tied to one session, i.e. one person using all of it. There are some uses for this type of configuration, but it's a completely different kettle of fish.

As far as creating a robust, serious piece of software that will do this under Windows, or even a project toward that end, it's simply not going to happen: Microsoft would need to open-source their OS, and that's not in the cards. This is one of the many reasons that so many people and so many companies are switching to Linux: in the world of Open Source, if you have this kind of a requirement, you can either find someone somewhere who has already done it (and perhaps fund them to tweak it to your exact specs) - or you can do it yourself (perhaps by hiring a little programming muscle if the capability does not exist in-house.) It's a different - and we believe, better - approach to computing.

Best of luck in your search, and we'll be happy to hear from you if you decide to make the switch at some point in the future.


Talkback:124/smith.html (11)

[ In reference to Build a Six-headed, Six-user Linux System in LG#124 ]

José Antonio (jap1968 at yahoo.es)
Tue Sep 12 15:39:45 PDT 2006

Followed up by: Ben

Hi there,

I would like to invite you to have a look at another tutorial on creating a multihead computer (two seats). In this case, the base distribution is Ubuntu. The hardware used is a dual-head nVidia AGP card.

You can find the article here:

https://netpatia.blogspot.com/2006/09/multiseat-computer-with-ubuntu.html

Regards,

José Antonio

[Ben] - That's a good article, José - thank you. I'm going to CC the author of the piece you're responding to; perhaps you two can discuss ways in which you can help each other, or share the knowledge you've gained in the process of doing these projects.


Talkback:124/smith.html (12)

[ In reference to Build a Six-headed, Six-user Linux System in LG#124 ]

tkalenko_ma (tkalenko_ma at sibadi.org)
Sun Sep 17 19:41:09 PDT 2006

Followed up by: Ben, BobS

Hello Comrad,

I have read your article about 6-user system. How possible configuring similar system on two doubleheaded PCI-E cards (SLI motherboards) + one doubleheaded PCI card?

Best regards,
Maxim, Russian Federation

[Ben] - (Comment for Mike Orr: Oh boy! This is your big chance to practice some of your "Russian humor" on me! Go ahead, I'm ready. :)

> I have read your article about 6-user system.
> How possible configuring similar system on two doubleheaded PCI-E cards
> (SLI motherboards) + one doubleheaded PCI card?

You have very interesting timing, Maxim; one of our readers, José Antonio, just wrote in describing a system that's built around dual-video cards. I suspect that what you're asking is possible; take a look at the link below and experiment. We would appreciate being kept apprised of your progress - judging from the amount of mail we've received about Bob's article, many people seem to be interested in this and similar issues.

https://netpatia.blogspot.com/2006/09/multiseat-computer-with-ubuntu.html

Good luck!

[BobS] - I'm sorry that I can not give you any useful advice in setting up your dual head PCI-E card. Ben is right, the article he references may help.


Talkback:126/howell.html (3)

[ In reference to From Assembler to COBOL with the Aid of Open Source in LG#126 ]

S.K.Goel (skgoel at omlogistics.co.in)
Wed Sep 20 21:32:31 PDT 2006

Followed up by: Neil

Dear Sir,

At present, I am using Microfocus cobol on RHEL-AS-4. I am interested to use open-cobol. Please advise me.

[Neil] - My advice is to install it and get stuck in.

There are forums at https://www.opencobol.org/, which will no doubt be better placed to help you with the transition than the members of this list.


Talkback:128/adam.html

[ In reference to How Fonts Interact with the X Server and X Clients in LG#128 ]

Thomas Adam (thomas.adam22 at gmail.com)
Tue Sep 12 05:16:54 PDT 2006

It's always nice when I get indirect feedback. A friend of mine sent me this:

https://lwn.net/Articles/189901/

Posted Jul 3, 2006 17:41 UTC (Mon) by subscriber otaylor

Probably not news to most LWN readers, but the font article should be
ignored: it is describing technologies that are no longer in common
use. Goodbye and good riddance to the XLFD (X Logical Font
Description). Contemporary applications, toolkits, and desktops use the
fontconfig library instead.

What I will say to that is that it's true in part -- but it's only very recent applications which aren't using the XLFD (that's the long-form font names). Many applications still use XLFD (think xterm, rxvt, etc.).

I certainly don't agree that the article should be ignored.


Talkback:128/ramanathan.html

[ In reference to Subversion: Installation, Configuration — Tips and Tricks in LG#128 ]

Nathaniel Ye (Nye at airliteplastics.com)
Tue Jul 18 09:22:40 PDT 2006

Followed up by: Ramanathan

I found the Subversion article by Muthaiah outstanding. It covered topics that the Subversion book did not. I ran into the scenario (regarding library mismatch) described in the "Post-installation tips" section and have been struggling with it. Any help will be greatly appreciated.

This is my output:

[root at localhost test]# ldd /usr/lib/httpd/modules/mod_dav_svn.so | grep apr
        libaprutil-0.so.0 => /usr/lib/libaprutil-0.so.0 (0x00cb2000)
        libapr-0.so.0 => /usr/lib/libapr-0.so.0 (0x006a5000)
[root at localhost test]# ldd /usr/sbin/httpd | grep apr
        libaprutil-0.so.0 => /usr/lib/libaprutil-0.so.0 (0x002ba000)
        libapr-0.so.0 => /usr/lib/libapr-0.so.0 (0x005ca000)

For some reason, these apr libraries point to the older versions, while I have newer versions compiled and installed during the Apache 2 installation on Red Hat Linux 4. (By default RH does not have apxs, and I could not uninstall the non-conventionally installed Apache; I had to reinstall Linux leaving Apache out.)

[root at localhost lib]# ls -al libapr*
lrwxrwxrwx  1 root root     17 Jul 17 10:22 libapr-0.so.0 -> libapr-0.so.0.9.4
-rwxr-xr-x  1 root root 139868 May 17  2005 libapr-0.so.0.9.4
lrwxrwxrwx  1 root root     21 Jul 17 10:23 libaprutil-0.so.0 -> libaprutil-0.so.0.9.4
-rwxr-xr-x  1 root root  83260 Jun 16  2005 libaprutil-0.so.0.9.4

Changing the soft links to point to newer versions would cause Apache not to start. Any suggestions on how to upgrade these apr libraries and force both Apache and Subversion to use them?

[Ramanathan] - You should not change the soft links to newer versions.

You can try to recompile Subversion and use the --with-apr switch in the configure script to point it at the version of APR that Apache uses. apxs can also be a problem.


Talkback:129/okopnik1.html

[ In reference to Low-Fat Linux - Now with Less Cruft! in LG#129 ]

Mark Baldridge (mbaldrid at us.ibm.com)
Sat Sep 2 18:09:48 PDT 2006

Followed up by: Ben, Faber, Kapil, Rick

With a host of UNIX O/S there are often a lot of admin logs that get kept, and grow until you do something about them. Any of these in Linux?

[Ben] - Sure - in '/var/log', just like many other *nixen. Those aren't much of a problem, though: most programs that create logs also create an entry in '/etc/cron.{d,daily,monthly,weekly}' which rotates those logs at that specified interval.

There's also the fact that this is one of the classic reasons for having multiple partitions: even if '/var' does get filled up, '/' isn't affected.

[Faber] - Or, on Red Hat derived boxen, the application puts an entry in /etc/logrotate.d and lets logrotate do the heavy lifting.
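
Such an entry is just a small stanza of directives; a minimal sketch (the application name and log path here are invented for illustration):

```
# hypothetical /etc/logrotate.d/myapp
/var/log/myapp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```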

BTW, I've had the joy recently to work on a Debian box. A non-standard one at that. Boy, you guys do things weirdly!

[Ben] - Why, Faber... you've never struck me as a religious type before. Wanna argue about Emacs vs. Vim next? :)

Pot. Kettle. #000000. :)))

[Faber] - Why bother? Everyone knows that pico rules!

:-)

[Kapil] - Actually "nano" is better than "pico" (smaller is better).

(Imagine devilish grin; I have forgotten the emoticon)

[Rick] - Wimp.

$ ls -l /etc/alternatives/editor
lrwxrwxrwx 1 root root 12 2006-03-10 14:28 /etc/alternatives/editor -> /bin/cat

[Ben] - Wuss.

https://ars.userfriendly.org/cartoons/?id=19990508&mode=classic

[Faber] - I thought this was appropriate (wait for the nag screen to go away)

https://ars.userfriendly.org/cartoons/?id=20060904

[Ben] - [laugh] Great minds think alike, of course. Two great cartoons...

[Kapil] - Quoting Faber Fedor (faber at linuxnj.com):
> BTW, I've had the joy recently to work on a Debian box. A non-standard
> one at that.  Boy, you guys do things weirdly!

On a Debian system /etc/logrotate.d also works the same way so I am mystified by your complaint---if it was one!

[Rick] - Maybe some Debian-oriented BOfH is pulling a prank on you?

$ ls -l etc/logrotate.d
total 13
-rw-r--r-- 1 root root  366 2005-01-19 19:32 apache
-rw-r--r-- 1 root root  240 2006-01-16 02:15 apache2
-rw-r--r-- 1 root root   79 2004-09-28 11:44 aptitude
-rw-r--r-- 1 root root  384 2004-12-03 14:25 base-config
-rw-r--r-- 1 root root  111 2005-09-26 00:04 dpkg
-rw-r--r-- 1 root root  170 2005-01-05 02:07 exim4-base
-rw-r--r-- 1 root root 1272 2005-01-14 01:23 mailman
-rw-r--r-- 1 root root 1072 2005-09-29 15:19 mysql-server
-rw-r--r-- 1 root root 1020 2005-01-18 14:44 mysql-server.dpkg-old
-rw-r--r-- 1 root root  128 2004-11-08 14:18 super
-rw-r--r-- 1 root root  134 2004-07-11 21:08 vsftpd
$

[Faber] - I'm leaning towards "incompetent"; there are kernels, initrds and log files in /, there's a directory called /images that holds CSS files, etc.

[Ben] - It may have started life as a Debian system, but that's not the right description for it any more. I think that somewhat stronger terms are much more applicable. :) Debian follows the FHS these days (https://qa.debian.org/fhs.html), and has for a while now.


Talkback:130/tag.html

[ In reference to The Monthly Troubleshooter: Installing a Printer in LG#130 ]

Steve Brown (steve.stevebrown at gmail.com)
Sun Sep 3 14:07:10 PDT 2006

Followed up by: Ben

Hi Gang!

Another fantastic issue, maintaining this level of excellence must be difficult, but please keep it up.

I was just reading the section regarding printers (fantastic idea - disturbing the pool) and the description of the user dragging the My Documents folder to the printer. Made me think of a guy I work with.

I have worked with him for four years, we have to use Windows (Blechh) and use Citrix as a rule. I have patiently tried to teach this guy to use cut and paste. For four years. Four LONG years. The other day I sat with him for an hour copying and pasting, from lots of apps into lots of other apps, just so he got the hang of it. He could actually do it in the finish. Friday I caught him. Only one app open (IE) writing stuff down a sheet of paper, he closed IE and opened Word and typed it in. He believes - very strongly, and contrary to the demonstrated evidence - that you can only have one app at a time open at once.

I no longer work in the same office as him, and I am sure that has led to an increase in my expected lifespan. My hair has greyed and thinned because of this man, and I should feel resentment, but I just feel pity. I hear you cry "Why do you still help him?" well it's a bit like that tricky puzzle in a text adventure, the one you just can't get past? It just pulls you in, one day it will stick and he will understand. I am not the only one to try, many have failed before me, but I am the most persistent.

I spend my days yearning for my linux box. I should get another job.

Be well all,

Steve.

[Ben] - Hi, Steve -

Great story - much in the vein of our erstwhile "Foolish Things We Do With Our Computers" column, although in this case, it's "Foolish Things Other People Do With Their Computers." As to the guy that you're talking about - I've met him! Or, well, people who are exactly like him... lots of them. Tech support will do that to^Wfor you. :)

Thanks for writing!


Talkback:130/neville.html

[ In reference to DNS techniques in LG#130 ]

Blizbor (tb670725 at ima.pl)
Thu Sep 7 06:50:08 PDT 2006

Followed up by: Ben, Blizbor, Ed, Rick, Thomas

There are at least one principal mistake done - in bind you can control caching nameserver. Telling the trueth - problem is in complexity of used tools rather than one is incapable for something. I strongly suggest reedition of this article in context: "how it could be done using bind, and how uisng djb" leaving choice of solution to users. To the author - in the bind manual is a chapter about "view" keyword, I appreciate your work in writing this article - well done about djb, but you missing trueth about bind. In actual form article is unacceptable and should be removed.

I wish you Linux Gazette never again put such poor quality material on your pages.
Fix this asap - only those doing nothing arent doing mistakes.

(This is my personal opinion.)

[Thomas] - On 07/09/06, Blizbor <tb670725 at ima.pl> wrote:

> I wish you Linux Gazette never again put such poor quality material on
> your pages.

You're absolutely right -- why, when I look back at the hundreds of issues, I can see several mistakes, all of which have large arrows which point to the nasty people of LG. What awful creatures we are at LG. Tut.

> Fix this asap - only those doing nothing arent doing mistakes.

No -- to do so is pointless, since the mirrors of LG would already have taken the tarball themselves. To change it at lg.net (which may or may not happen -- it's not my say in such matters) is the best we can do.

[Ben] - [ Forwarded to author. ]

On Thu, Sep 07, 2006 at 03:50:08PM +0200, Blizbor wrote:

> There are at least one principal mistake done - in bind you can 
> control caching nameserver. Telling the trueth - problem is in
> complexity of used tools rather than one is incapable for something.
> I strongly suggest reedition of this article in context: "how it could
> be done using bind, and how uisng djb" leaving choice of solution to users.

You're welcome to write such an article.

> To the author - in the bind manual is a chapter about "view" keyword, I
> appreciate your work in writing this article - well done about djb, but
> you missing trueth about bind.  In actual form article is unacceptable
> and should be removed.

Thank you for your opinion. I take it you're volunteering as a technical editor to the Linux Gazette? If you do, and then succeed in establishing some credentials for your technical knowledge, your opinion will be considered along with others here.

> I wish you Linux Gazette never again put such poor quality material on
> your pages.

Then I suggest you get cracking on that application. I'll be waiting for it with bated breath.

> Fix this asap - only those doing nothing arent doing mistakes.

The answer, then, is for you to stop doing nothing and contribute your time and effort to helping LG and the Linux community.

> (This is my personal opinion.)

Really? In that case, here's mine: if you haven't contributed, don't be so quick with the harsh criticism.

[Rick] - Quoting Blizbor (tb670725 at ima.pl):

> There are at least one principal mistake done - in bind you can
> control caching nameserver.  Telling the trueth - problem is in
> complexity of used tools rather than one is incapable for something.

Greetings, Blizbor. I can perhaps comment as the editor who did the technical edit of Ed Neville's article (whom I am cc'ing). You might recognise my name from the several footnotes I added.

On the matter of controlling BIND9's caching nameserver functionality, please note that Mr Neville correctly and commendably qualified his statement by saying that open access to caching is a problem of BIND9's _default installation_. That is absolutely correct as stated, and a very valuable point that should be heeded by all BIND9 users.

> I strongly suggest reedition of this article in context: "how it could
> be done using bind, and how uisng djb" leaving choice of solution to
> users.  To the author - in the bind manual is a chapter about "view"
> keyword, I appreciate your work in writing this article - well done
> about djb, but you missing trueth about bind.

That would have been a very different article from the article Mr Neville chose to write; _Linux Gazette_ has no wish to dictate the scope of articles to authors. We merely ask that they be clear and accurate, so as to increase the wealth of understanding among our readers, and are delighted to assist authors in hitting that target.

Along those lines, you seem to have missed the fact that BIND9's 'view' keyword actually is described within Mr Neville's article, courtesy of the additional material I added in footnote number 1, referring any interested readers to Rob Thomas's 'Secure BIND Template' for good examples.

> In actual form article is unacceptable and should be removed.  I wish
> you Linux Gazette never again put such poor quality material on your
> pages.  Fix this asap - only those doing nothing arent doing mistakes.
> 
> (This is my personal opinion.)

I say this as a sysadmin familiar both with DJBware and with its open-source alternatives, who chooses to use BIND9 for nameservice on various servers, and who has published in LG a primer on simple DNS setup modes in BIND9 (https://linuxgazette.net/121/moen.html): Bosh. Bollocks. Mr Neville's piece was technically adept and well written.

(It also may be of interest that I'm a major devil figure for many of the less civil of DJB's coterie, because of critiques I have written online. E.g., Prof. Bernstein's Web pages call me names on account of those critiques.)

[Ed] - Rick Moen <rick at linuxmafia.com> wrote:

> Quoting Blizbor (tb670725 at ima.pl):
> 
> > There are at least one principal mistake done - in bind you can
> > control caching nameserver.  Telling the trueth - problem is in
> > complexity of used tools rather than one is incapable for something.

Can you explain any other errors you feel need resolving, if there are technical mistakes it might be possible to alter it, but I think the only point you want emphasised is that small mention of BIND.

> please note that Mr Neville correctly and commendably qualified his
> statement by saying that open access to caching is a problem of
> BIND9's  _default installation_.  That is absolutely correct as
> stated, and a very valuable point that should be heeded by all BIND9
> users.

I knew there would be some flack if I did not! I've had this same old discussion all over the place. The article is in no way about "my NS is better than your NS", what I wrote is my experience of running a large ISP's NS, it's what works well for us and our customers. The introduction was to explain why I wrote it, if nothing else, the various components helps the reader to understand a little about lookups in the process.

> I say this as a sysadmin familiar both with DJBware and with its
> open-source alternatives, who chooses to use BIND9 for nameservice on
> various servers, and who has published in LG a primer on simple
> DNS setup modes in BIND9 (https://linuxgazette.net/121/moen.html):  
> Bosh.  Bollocks.  Mr Neville's piece was technically adept and well
> written.
> 
> (It also may be of interest that I'm a major devil figure for many 
> of the less civil of DJB's coterie, because of critiques I have
> written online.  E.g., Prof.  Bernstein's Web pages call me names on
> account of those critiques.)

Oh! You're that /Rick Moen/ -- I would not have known unless you had pointed it out. Quite a mean thing DJB did there, putting your mail address and name on the FAQ.

[Rick] - Quoting ed (ed at s5h.net):

> I knew there would be some flack if I did not! I've had this same old
> discussion all over the place. The article is in no way about "my NS is
> better than your NS", what I wrote is my experience of running a large
> ISP's NS, it's what works well for us and our customers. The
> introduction was to explain why I wrote it, if nothing else, the various
> components helps the reader to understand a little about lookups in the
> process.

If I may say so, I've learned a great deal from study of djbdns and from reading technical pieces written by knowledgeable members of the DJBware community. In particular, I'll treat Jonathan deBoyne Pollard and Russ Nelson to tall quaffs from their favourite beverages any day of the week, out of gratitude. You follow in their footsteps, and I'm glad to have "met" you.

> Oh! You're that /Rick Moen/ -- I would not have known unless you had
> pointed it out. Quite a mean thing DJB did there, putting your mail
> address and name on the FAQ.

Well, it gives me a rare distinction, actually: I can ask people who else they know who's mentioned by name in a major software licence? ;-> The mean-spiritedness doesn't bother me, in fact, but Dan's non-sequitur evasion of my substantive critique did. (Please pardon this URL, as I was a bit annoyed at the time:) https://linuxmafia.com/~rick/faq/just-another-djb-groupie.html

(Sometimes, I get asked on IRC 'Are you really Rick Moen?', to which my traditional answer is 'No, just someone else of the same name.')

[Blizbor] - Rick Moen wrote:

> Quoting Blizbor (tb670725 at ima.pl):
>
>   
>> There are at least one principal mistake done - in bind you can
>> control caching nameserver.  Telling the trueth - problem is in
>> complexity of used tools rather than one is incapable for something.
>>     
>
> Greetings, Blizbor.  I can perhaps comment as the editor who did the
> technical edit of Ed Neville's article (whom I am cc'ing).  You might
> recognise my name from the several footnotes I added.
>
> On the matter of controlling BIND9's caching nameserver functionality, 
> please note that Mr Neville correctly and commendably qualified his
> statement by saying that open access to caching is a problem of BIND9's 
> _default installation_.  That is absolutely correct as stated, and a
> very valuable point that should be heeded by all BIND9 users.

Greetings,

I'm a bit not precise in saying what I mean. Sometimes what I want to say are a bit unkind. The point of my mail is: nobody on the world is using _default configuration_. Actually I think that default configuration of any network demons should be crippled to the extent they are do start and do extremely limited functionality on the loopback interface. I found referring to default configuration as primary source of principal mistake of the article. It can be done and should be said "it can be, but we will keep focus on how to do that using djbdns because ...". Wait ? I read that article and why exactly it's worth to use DJB ... I still don't know. I must tell it again (emphasize) - from technical and editorial (samples, config quotes, etc) article is good, however mistake on the beginning makes it sounds different. GIGO...

[Rick] - Quoting Blizbor (tb670725 at ima.pl):

> I'm a bit not precise in saying what I mean. Sometimes what I want to
> say are a bit unkind.

It's OK. We try to work through to the substance, which is the important thing.

> The point of my mail is: nobody on the world is using _default
> configuration_.

I wish that were true, but for example a presentation by Dan Kaminsky at the 2005 LISA Conference in San Diego revealed the interim results of his project to study the world's Internet-reachable DNS servers -- including the fact that a frighteningly large percentage are vulnerable to cache poisoning: I think the figure was well over 25% (would have to check).

You can also spot-check domains you know: I think you'll find the problem to be widespread. For example, I just tried https://www.dnsreport.com/tools/dnsreport.ch?domain=cocacola.com . Notice that both of the Coca-Cola Company's nameservers are open to public recursive queries. Nameserver software versions were not available in that case, but Kaminsky's results suggest that BIND8 versions for Unixes and Windows still predominate.

> Actually I think that default configuration of any network demons
> should be crippled to the extent they are do start and do extremely
> limited functionality on the loopback interface.

I tend to agree with you, but that is not Mr Neville's responsibility, especially since his piece wasn't about BIND9 in the first place. To the extent he referred to that software, his point was correct, well stated, and properly qualified.

> I found referring to default configuration as primary source of
> principal mistake of the article.

We will have to agree to disagree, since I see no mistake. To the contrary, Mr Neville was commendably careful -- and, incidentally, much more helpful to BIND9 users than are many who actually write on that subject.

> It can be done and should be said "it can be, but we will keep focus
> on how to do that using djbdns because ...".

I've read advocacy pieces, and this wasn't one.

> Wait ? I read that article and why exactly it's worth to use DJB ... I
> still don't know.

I'd say the piece was more how to effectively use djbdns. It should be read in that spirit.

Talkback: Discuss this article with The Answer Gang


Bio picture

Kat likes to tell people she's one of the youngest people to have learned to program using punchcards on a mainframe (back in '83); but the truth is that since then, despite many hours in front of various computer screens, she's a computer user rather than a computer programmer.

When away from the keyboard, her hands have been found full of knitting needles, various pens, henna, red-hot welding tools, upholsterer's shears, and a pneumatic scaler.


Copyright © 2006, Kat Tanaka Okopnik. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006

2-cent Tips

By Kat Tanaka Okopnik


2-Cent Tips

2-cent tip: Finding clunky files
2-cent tip: ethereal became wireshark
2-cent Tip: Annotating PDF
2-cent Tip: Real editing of PDF Forms
2-cent tip: Renaming music files

2-cent tip: Finding clunky files

Teal (teal at mailshack.com)
Sun 3 Sep 2006 13:58:28 PDT

Followed up by: Ben, Neil, Rick

What's eating up your hard-drive?

Most linux'ers familiar with the bash shell know that df is good for finding out just how much space is being taken up in a partition. They may also know that du lists each folder in the current dir, and the size of all that folder's contents.

Those are neat commands, but not that informative. The latter inspired me to come up with a more helpful shell one-liner that points out clear as day the files which are sucking up your space. I keep it handy to clean out my tiny 40GB hard drive every now and then. I also shared it with someone who runs a 160GB personal server, and they were very thankful. So if it's useful for me, and useful for him, I can be moderately sure that it'll be useful for you, too. Here it is:

cd ~; du -Sa --block-size=MB | sed -r '/^0/d' | sort -nr | less

You may have to wait a minute for it to get the size of all the files (with my small HD, takes me about 20 seconds).

This is only to scan your home directory for big files. To scan your root directory, change the ~ at the beginning to / ... and while it's scanning, press Ctrl+C, and then 'q' to quit. Or after it's done and the results are shown, just press 'q' to leave the pager program and go back to your prompt.
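On a system whose coreutils is recent enough to have `sort -h`, the same idea can be wrapped up with human-readable sizes. This is only a sketch of the tip's approach, and `clunky` is just an illustrative name:

```shell
#!/bin/sh
# Sketch of the tip's idea with human-readable sizes. Assumes GNU
# coreutils with "sort -h"; "clunky" is just an illustrative name.
clunky() {
    # List the largest files/directories under $1 (default: .),
    # showing at most $2 entries (default: 20).
    du -Sah "${1:-.}" 2>/dev/null | sort -hr | head -n "${2:-20}"
}
# Example:  clunky ~ 10   # ten largest entries under your home directory
```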

[Neil] - That's an interesting variation on the usual approach. Most people use 'find' to pick out large files, which I find preferable, e.g.

  find ~ -size +250k -ls

will list every file under your home directory larger than 250kB. If you want it sorted

  find ~ -size +250k -ls | sort -nr -k 7 

will do that.

As the saying goes "there's more than one way to do it" and your approach works just fine.

[Ben] - It may be that one solution is significantly faster than another (although I rather doubt it); I'd certainly like to find out. I wish I knew how to flush the page cache that 'find', etc. use to keep the relevant info ('du' uses the same one); I'd have liked to compare the speed of the two solutions, as well as perhaps 'ls -lR|sort -nrk5'. However, no matter what, Teal's is a good, useful approach to solving (or at least reporting) a common problem. Heck, I just cleaned out a bunch of thumbnails (187MB!) going back to... umm, given that I've been just carrying my '~' structure forward all along, back to when I started using Linux, probably.

ben at Fenrir:~$ time find ~ -size +250k -ls | sort -nr -k 7 > /dev/null
real    0m45.453s
user    0m0.120s
sys     0m0.500s

Maybe I'll remember to test one of the others when I next turn this laptop on.

[Rick] - Here's my own favourite solution to that problem:

:r /usr/local/bin/largest20


#!/usr/bin/perl -w
# You can alternatively just do:  
# find . -xdev -type f -print0 | xargs -r0 ls -l | sort -rn +4 | head -20
use File::Find;
@ARGV = $ENV{ PWD } unless @ARGV;
find ( sub { $size{ $File::Find::name } = -s if -f; }, @ARGV );
@sorted = sort { $size{ $b } <=> $size{ $a } } keys %size;
splice @sorted, 20 if @sorted > 20;
printf "%10d %s\n", $size{$_}, $_ for @sorted

[Ben] - [smile] Why, thank you. Nice to see it making the rounds. Original credit to Randal Schwartz, of course, but I've mangled the thing quite a bit since then.

[Neil] - The advantages of the find solution are

  1. It is somewhat more portable, the options to du used in teal's solution aren't available on some old distros I can't escape from.
  2. It's easier to fine tune the file size threshold.
  3. When sorted, it sorts in exact file size (but not exact disk usage). The du based solution won't sort a set of 1.2MB, 1.8MB and 1.6MB files into order of size.

In terms of speed, there may be an advantage in not having to remove small files from the initial list, but I would expect that difference to be lost in the noise.

[Nate (Teal)] - Hrm... the 'du' tool can sort on a finer granularity -- you'd just have to set the block-size to, say, kB, or stick with bytes like find does -- and you can fine-tune which files 'du' shows based on size with grep. But of course, neither of those is as intuitive or easy to use as the find solution, so 'du' is still worse in that respect.

I have to say, I'm pretty humbled. It'd probably be better to just include the 'find' solution, or Moen's perl-based solution in the Gazette than my 'du' cruft.

[Ben] - Heck no, Nate. The point of all those tools in Linux is well represented by the motto of Perl, "TMTOWTDI": There's More Than One Way To Do It. It was nice to see someone else applying some brainpower to solving a common problem in a useful way.

[Nate (Teal)] - Good stuff, there.

[Ben] - Yep. Yours included.

[Rick] - As Ben reminded me, he's one of the most recent people to polish up that Perl gem ('largest20'): I'm merely one of the many people passing around variations of it -- and grateful for their craftsmanship.


2-cent tip: ethereal became wireshark

Peter Knaggs (peter.knaggs at gmail.com)
Thu Sep 7 19:02:15 PDT 2006

Old news to frequent ethereal users I guess, but back in July 2006 ethereal became "wireshark". It seems that the company Ethereal, Inc. is keeping the old name.

If you've been using the command line version tethereal, you're probably wondering what to call it now. Well tethereal has become "tshark".
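For old habits (or old scripts), a tiny compatibility shim can map the old names to the new ones. The name pairs below are the renames from this tip; the shim itself is just one way to go about it:

```shell
#!/bin/sh
# Map the pre-rename Ethereal command names to their Wireshark-era
# equivalents; the pairs are the renames described in the tip.
for pair in ethereal:wireshark tethereal:tshark; do
    old=${pair%%:*}        # part before the colon: old name
    new=${pair##*:}        # part after the colon: new name
    echo "$old is now $new"
    # In an interactive shell you could add, e.g.:
    #   command -v "$new" >/dev/null 2>&1 && alias "$old"="$new"
done
```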


2-cent Tip: Annotating PDF

Kapil Hari Paranjape (kapil at imsc.res.in)
Tue Sep 12 20:07:35 PDT 2006

Followed up by: Ben

Hello,

If you have ever wanted to do the Guardian sudoku and not wanted to waste trees then you need to find a way to annotate PDF files on your computer.

"flpsed" (FL toolkit PostScript EDitor) to the rescue.

Install "flpsed" and import any PDF file for annotation. The interface is simple and intuitive.

This can also be used to fill forms which are not quite in the PDF form format. More about that in the next tip.

It can also be used to annotate PS files of course.

Regards,

Kapil.

[Ben] - That's a great tool, Kapil. I've needed something like that for ages - many of the contracts that I get sent by my clients are in PDF, and up until now, I've been converting them to PS, editing them in Gimp, and reconverting them to PDF before shipping them back. This will save me tons of time - thanks! I hope others will find it at least as useful.

[Kapil] - Don't shoot (as in photograph) the messenger :)

I too am extremely grateful to the author (Morten Brix Pedersen, morten at wtf.de) of "flpsed".

Glad to have been of help.


2-cent Tip: Real editing of PDF Forms

Kapil Hari Paranjape (kapil at imsc.res.in)
Tue Sep 12 23:49:11 PDT 2006

Hello,

"Real" PDF forms are quite common nowadays. How does edit them with a "Real" editor like vi (OK also emacs :))?

"pdftk" (PDF ToolKit) to the rescue.

Suppose that "form.pdf" is your PDF form.

1. Extract the form information:

	pdftk form.pdf generate_fdf output form.fdf

2. This only extracts the text fields; to get an idea of all the fields, do:

	pdftk form.pdf dump_data_fields output form.fields

3. Sometimes the field names are cryptic. It helps to also view the form:

	xpdf form.pdf

or

	pdftotext -layout form.pdf; less form.txt

(if you insist on text-mode)

4. You can now edit the file form.fdf and fill in the fields marked with the string '\n%%EOF\n'.

Once you have edited form.fdf you can generate the filled in form with:

	pdftk form.pdf fill_form form.fdf output filled.pdf

or

	pdftk form.pdf fill_form form.fdf output filled.pdf flatten

to get a non-editable PDF.

Some additional hints:

1. If your form.fdf file contains no '\n%%EOF\n' strings then you are out of luck---it means your PDF form is only a printable form and cannot be filled on the computer (but see the hint about "flpsed").

2. Checkboxes/buttons will not appear in the fdf file. You can use form.fields to find out what these fields are called and introduce entries in the fdf file as (here replace FN by the field name)

   	 <</V (Yes) /T (FN) >> 

or

   	 <</V (Off) /T (FN) >> 

3. It helps to have three windows open. One for editing, one for viewing the form.fields and one for viewing the filled pdf file.

4. You may also want to periodically update the filling of the form to see whether the filling works.

Remarks:

Clearly this is crying out for someone to write a nice interface---why don't I, you ask? I will ... but don't hold your breath.

You can skip all of this and use Adobe's Distiller, but most readers should be able to guess why I don't want to use that!
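The fill step is easy to wrap in a small script. A sketch only: `fillform` is a made-up name, not a real tool, and the function merely *prints* the pdftk command (drop the `echo` to actually run it):

```shell
#!/bin/sh
# Hypothetical wrapper around the fill step above; "fillform" is not a
# real tool. It only prints the pdftk command so you can check it
# first; remove the "echo" to execute it for real.
fillform() {
    form=$1 data=$2 out=$3
    echo pdftk "$form" fill_form "$data" output "$out" flatten
}
fillform form.pdf form.fdf filled.pdf
```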


2-cent tip: Renaming music files

Benjamin A. Okopnik (ben at linuxgazette.net)
Wed 27 Sep 2006 11:24:37 PDT

Much of the available CD-ripping software out there produces files with names like 'trackname_01.wav' or '01_track.wav' instead of actual song names. Yes, there's software available that will look up CDDB entries... but what if your CD isn't in the CDDB, or you don't have a net connection readily available?

'wavren' to the rescue. :)

This script, when executed in a directory containing the 'standard' track names, takes the name of a file that contains the names of the songs on that album and returns a paired list of the current track name and the line in the file that it will be renamed to. It will exit with an error message if the lists aren't the same length, and it will not actually rename anything until you specify a '-rename' argument. Example:

ben@Fenrir:/tmp/foo$ ls
01.wav  02.wav  03.wav  04.wav  05.wav  06.wav  07.wav  08.wav
09.wav 10.wav names
ben@Fenrir:/tmp/foo$ cat names
01. Hells Bells
02. Shoot To Thrill
03. What Do You Do For Money Honey
04. Given The Dog A Bone
05. Let Me Put My Love Into You
06. Back In Black
07. You Shook Me All Night Long
08. Have A Drink On Me
09. Shake A Leg
10. Rock And Roll Ain't Noise Pollution
ben@Fenrir:/tmp/foo$ wavren names
"01.wav" will be "01. Hells Bells.wav"
"02.wav" will be "02. Shoot To Thrill.wav"
"03.wav" will be "03. What Do You Do For Money Honey.wav"
"04.wav" will be "04. Given The Dog A Bone.wav"
"05.wav" will be "05. Let Me Put My Love Into You.wav"
"06.wav" will be "06. Back In Black.wav"
"07.wav" will be "07. You Shook Me All Night Long.wav"
"08.wav" will be "08. Have A Drink On Me.wav"
"09.wav" will be "09. Shake A Leg.wav"
"10.wav" will be "10. Rock And Roll Ain't Noise Pollution.wav"

If the lineup isn't exactly how you want it, you can either renumber the original files, or change the order of the lines in the "names" file. Also note that you can rename mp3 files, etc., just by changing the 'ext' variable at the top of the script to reflect the extension that you're looking for.
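The script itself isn't reproduced here, but the core of a 'wavren'-style renamer might look something like this sketch; the details (including skipping missing tracks rather than erroring on mismatched lists) are guessed from the description above, not taken from the actual script:

```shell
#!/bin/sh
# Rough sketch of a "wavren"-style renamer, reconstructed from the
# description above (NOT the actual script). It pairs NN.$ext files
# with lines from a names file, prints the renaming plan, and only
# renames when DO_RENAME=yes is set. Unlike the real script, it skips
# missing tracks instead of checking that the lists match in length.
ext=wav
wavren_sketch() {
    i=0
    while IFS= read -r line; do
        i=$((i + 1))
        src=$(printf '%02d.%s' "$i" "$ext")
        [ -e "$src" ] || continue
        echo "\"$src\" will be \"$line.$ext\""
        if [ "$DO_RENAME" = yes ]; then
            mv -- "$src" "$line.$ext"
        fi
    done < "$1"
}
# Example:  wavren_sketch names                  # dry run
#           DO_RENAME=yes; wavren_sketch names   # actually rename
```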

Talkback: Discuss this article with The Answer Gang


Bio picture

Kat likes to tell people she's one of the youngest people to have learned to program using punchcards on a mainframe (back in '83); but the truth is that since then, despite many hours in front of various computer screens, she's a computer user rather than a computer programmer.

When away from the keyboard, her hands have been found full of knitting needles, various pens, henna, red-hot welding tools, upholsterer's shears, and a pneumatic scaler.


Copyright © 2006, Kat Tanaka Okopnik. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006

Apache2, WebDAV, SSL and MySQL: Life In The Fast Lane

By Dominique Cressatti

Introduction

As part of my work I had to set up an upload/download site for our customers with the following brief:
We chose to use WebDAV with a patched version of the Apache 2 WebDAV module, since we needed quota functionality.

Additionally, all the WebDAV traffic and authentication was to be done via HTTPS. This was required because Windows XP clients simply refuse to authenticate against a WebDAV directory over plain HTTP; in any case, HTTPS has the benefit of making management of the site far more secure, regardless of the WebDAV client and platform used.

One thing to bear in mind is that I had to use Apache 2 instead of Apache 1.3.x, as there is no such patch for Apache 1.3.x's WebDAV module.

This article explains how to set up and configure Apache 2, HTTPS, and WebDAV. In addition, it demonstrates how to recompile the WebDAV module to support quotas, and how to use MySQL to provide authentication and arbitrary access control to the various parts of the site.

Configuration overview

The configuration will be done in the following order:
  1. installing Apache 2 and the WebDAV module
  2. creating an SSL certificate (including setting up the CA) for the Web server
  3. configuring Apache to serve HTTPS pages
  4. enabling WebDAV and configuring simple authentication
  5. recompiling the Apache WebDAV module to provide quotas
  6. configuring Apache to use MySQL for more complex authentication
  7. configuring the site to provide all the functionality mentioned in the introduction
  8. footnote, comments/suggestions

Installing Apache 2 and the WebDAV module

# apt-get install apache2 libapache-mod-dav
Reading Package Lists... Done
Building Dependency Tree... Done
The following extra packages will be installed:
  apache-common apache2-common apache2-mpm-worker apache2-utils libapr0 libxmltok1 openssl ssl-cert
Suggested packages:
  apache apache-ssl apache-perl apache2-doc lynx www-browser ca-certificates
The following NEW packages will be installed:
  apache-common apache2 apache2-common apache2-mpm-worker apache2-utils libapache-mod-dav libapr0 libxmltok1 openssl
  ssl-cert
0 upgraded, 10 newly installed, 0 to remove and 0 not upgraded.
Need to get 3141kB of archives.
After unpacking 10.1MB of additional disk space will be used.
Do you want to continue? [Y/n] y

Creating a certificate/CA setup for the Web server

To serve Web pages with the HTTPS protocol, the Web server requires a certificate. If you are already familiar with certificate management on Linux, skip ahead to configuring Apache to serve HTTPS pages; otherwise, the following steps explain how to set up your own certificate authority and then create a certificate for your Web server.

Creating a certificate authority

Note: most of the following was copied and slightly modified from Nate Carlson's excellent IPSec Web page (https://www.natecarlson.com/linux/ipsec-x509.php).

Edit the file "/usr/lib/ssl/misc/CA.sh", and change the line that reads 'DAYS="-days 365"' to a much higher number; this sets how long the certificate authority's own certificate is valid, and the CA certificate must outlast the Web server certificates it signs. I generally set it to 3650 (roughly ten years). Then run 'CA.sh -newca' and follow the prompts, as below. Example input is in red, and my comments are in blue. Be sure not to use any non-alphanumeric characters, such as dashes, commas, plus signs, etc.; these characters may make things more difficult for you.

# /usr/lib/ssl/misc/CA.sh -newca
CA certificate filename (or enter to create)
(enter)
Making CA certificate ...
Using configuration from /usr/lib/ssl/openssl.cnf
Generating a 1024 bit RSA private key
.............................................................................+++
........................................+++
writing new private key to './demoCA/private/./cakey.pem'
Enter PEM pass phrase:(enter password) This is the password you will need to create any other certificates.
Verifying password - Enter PEM pass phrase:(repeat password)
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US(enter) Enter your country code here
State or Province Name (full name) [Some-State]:State(enter) Enter your state/province/county here
Locality Name (eg, city) []:City(enter) Enter your city here
Organization Name (eg, company) [Internet Widgits Pty Ltd]:ExampleCo(enter) Enter your company name here (or leave blank)
Organizational Unit Name (eg, section) []:IT(enter) OU or department, if you like. You can leave it blank if you want.
Common Name (eg, YOUR name) []:CA(enter) The name of your Certificate Authority
Email Address []:ca@example.com(enter) E-Mail Address
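If you prefer plain openssl to the CA.sh wrapper, the equivalent CA certificate can be created non-interactively. The following is only a sketch: the file names and subject fields are examples, and -nodes leaves the CA key unencrypted (CA.sh protects it with the PEM pass phrase), so drop -nodes for real use.

```shell
# Illustrative one-step CA creation (assumed file names; -nodes skips
# the pass phrase that CA.sh would normally set on the CA key).
openssl req -new -x509 -days 3650 -newkey rsa:1024 -nodes \
    -keyout cakey.pem -out cacert.pem \
    -subj "/C=US/ST=State/L=City/O=ExampleCo/CN=CA/emailAddress=ca@example.com"

# Confirm what was created.
openssl x509 -in cacert.pem -noout -subject
```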

Create the certificate for your Web server

# /usr/lib/ssl/misc/CA.sh -newreq
Using configuration from /usr/lib/ssl/openssl.cnf
Generating a 1024 bit RSA private key
...................................+++
...............................+++
writing new private key to 'newreq.pem'
Enter PEM pass phrase:(enter password) Password to encrypt the new cert's private key with - you'll need this!
Verifying password - Enter PEM pass phrase:(repeat password)
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US(enter)
State or Province Name (full name) [Some-State]:State(enter)
Locality Name (eg, city) []:City(enter)
Organization Name (eg, company) [Internet Widgits Pty Ltd]:ExampleCo(enter)
Organizational Unit Name (eg, section) []:(enter)
Common Name (eg, YOUR name) []:host.example.com(enter)The host name of  your Web server
Email Address []:user@example.com(enter) (optional)

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:(enter)
An optional company name []:(enter)
Request (and private key) is in newreq.pem

What we just did is generate a Certificate Request - this is the same type of request that you would send to Thawte or Verisign to get a generally-accepted SSL certificate. For our uses, however, we'll sign it with our own CA:

# /usr/lib/ssl/misc/CA.sh -sign
Using configuration from /usr/lib/ssl/openssl.cnf
Enter PEM pass phrase:(password you entered when creating the ca)
Check that the request matches the signature
Signature ok
The Subjects Distinguished Name is as follows
countryName :PRINTABLE:'US'
stateOrProvinceName :PRINTABLE:'State'
localityName :PRINTABLE:'City'
organizationName :PRINTABLE:'ExampleCo'
commonName :PRINTABLE:'host.example.com'
emailAddress :IA5STRING:'user@example.com'
Certificate is to be certified until Feb 13 16:28:40 2012 GMT (3650 days)
Sign the certificate? [y/n]:y(enter)

1 out of 1 certificate requests certified, commit? [y/n]y(enter)
Write out database with 1 new entries
Data Base Updated
(certificate snipped)
Signed certificate is in newcert.pem
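At this point it is worth double-checking what was signed: openssl can print a certificate's subject, issuer, and validity dates. Run it against newcert.pem; the demo.pem below is a throwaway self-signed certificate generated only so that this snippet is self-contained.

```shell
# Generate a throwaway certificate purely for illustration;
# substitute newcert.pem to inspect the certificate signed above.
openssl req -new -x509 -days 3650 -newkey rsa:1024 -nodes \
    -keyout demo.key -out demo.pem -subj "/CN=host.example.com"

# Print subject, issuer, and validity period.
openssl x509 -in demo.pem -noout -subject -issuer -dates
```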
Next, move the output files to names that make a bit more sense for future reference, where "host.example.com" in the example below is the name of your Web server.

# mv newcert.pem host.example.com.pem
# mv newreq.pem host.example.com.key

Configuring apache to serve HTTPS pages

Copy the above two certificate files into /etc/apache2/ssl and make sure you change the files to be readable only by root.

# chmod 400 /etc/apache2/ssl/host*
For convenience, you may want to remove the passphrase from the RSA private key, so that apache does not prompt for it every time it starts or restarts. However, if you prefer to keep the passphrase, skip to the next step.

First make a backup of the encrypted key:

# cp /etc/apache2/ssl/host.example.com.key /etc/apache2/ssl/host.example.com.key-bkp
Then re-write the key without encryption. You will be prompted for the original key's passphrase.

# openssl rsa -in /etc/apache2/ssl/host.example.com.key-bkp -out /etc/apache2/ssl/host.example.com.key
Enter pass phrase for /etc/apache2/ssl/host.example.com.key-bkp:
writing RSA key
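To see the passphrase-removal step in isolation, here is the same round trip on a throwaway key; the file names and the 'secret' passphrase are examples, not the host.example.com files above.

```shell
# Create a key with no passphrase, make an encrypted copy, then strip
# the passphrase again - exactly what the openssl rsa step above does.
openssl genrsa -out demo-plain.key 1024
openssl rsa -in demo-plain.key -aes256 -passout pass:secret -out demo-enc.key
openssl rsa -in demo-enc.key -passin pass:secret -out demo-dec.key

# Verify the decrypted key is intact; prints "RSA key ok".
openssl rsa -in demo-dec.key -check -noout
```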
Edit the file "/etc/apache2/mods-available/ssl.conf" and add the following at the bottom.

SSLCertificateFile    /etc/apache2/ssl/host.example.com.pem
SSLCertificateKeyFile /etc/apache2/ssl/host.example.com.key

Listen 443
Enable HTTPS by creating symbolic links to the "SSL" files:
# ln -s /etc/apache2/mods-available/ssl.conf /etc/apache2/mods-enabled/ssl.conf
# ln -s /etc/apache2/mods-available/ssl.load /etc/apache2/mods-enabled/ssl.load
Create a site configuration file:
# touch /etc/apache2/sites-available/testwebdav
Copy the following stanza into it:
<VirtualHost WEB_server_IP_address:443>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav
    CustomLog /var/log/apache2/access.log combined

    <IfModule mod_ssl.c>
        SSLEngine on
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    </IfModule>

</VirtualHost>
replacing WEB_server_IP_address with the IP address of your own server.

Last of all, "enable" the site by creating a symbolic link to the configuration file:
# ln -s /etc/apache2/sites-available/testwebdav /etc/apache2/sites-enabled/testwebdav

WebDAV directory

Now we need to create the WebDAV directory and create a test file in it, so we can test HTTPS and WebDAV functionality.
# mkdir /var/www/webdav
Change its ownership to the account and group under which apache is running (under Debian the apache user and group are both www-data).

# chown www-data:www-data /var/www/webdav/
Then create a test file in the WebDAV directory.

# echo "hello world" > /var/www/webdav/test.txt
# chown www-data:www-data /var/www/webdav/test.txt
# chmod 640 /var/www/webdav/test.txt
Now reload apache to make the changes effective.

# /etc/init.d/apache2 reload
You may want to check that apache is listening for both HTTP and HTTPS by using the "netstat -anpt" command:

# netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1870/portmap
tcp        0      0 127.0.0.1:692           0.0.0.0:*               LISTEN      2636/famd
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      2455/exim4
tcp6       0      0 :::80                   :::*                    LISTEN      5395/apache2
tcp6       0      0 :::22                   :::*                    LISTEN      2470/sshd
tcp6       0      0 :::443                  :::*                    LISTEN      5395/apache2
If everything is OK, you should see that apache2 is listening on port 80 and, more importantly, on port 443, as in the above example. If not, consult the apache log files (in /var/log/apache2/) for problem reports.

Now open the following URL in your Web browser:

https://WEB_server_IP_address/test.txt
replacing "WEB_server_IP_address" with either the IP address of your server or its FQDN. This should ask you to accept a certificate, and once you have accepted, it should display:

hello world

Enabling WebDAV and configuring simple authentication

Enable WebDAV and apache simple authentication by, once again, creating symbolic links from the modules in /etc/apache2/mods-available to /etc/apache2/mods-enabled:

# ln -s /etc/apache2/mods-available/auth_anon.load /etc/apache2/mods-enabled/auth_anon.load
# ln -s /etc/apache2/mods-available/dav_fs.conf /etc/apache2/mods-enabled/dav_fs.conf
# ln -s /etc/apache2/mods-available/dav_fs.load /etc/apache2/mods-enabled/dav_fs.load
# ln -s /etc/apache2/mods-available/dav.load /etc/apache2/mods-enabled/dav.load
Create a simple authentication file and set its permissions so that only the apache account can read it.

# htpasswd -c /etc/apache2/passwd.dav test
# chown root:www-data /etc/apache2/passwd.dav
# chmod 640 /etc/apache2/passwd.dav
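Incidentally, the entries htpasswd writes with its MD5 scheme (htpasswd -m) use Apache's apr1 format, which openssl can also produce. If htpasswd is not at hand, a hand-rolled equivalent looks like the sketch below; the user "test", the password "secret", and the salt are example values.

```shell
# Build a one-line password file by hand: "user:apr1-hash".
hash=$(openssl passwd -apr1 -salt abcdefgh secret)
echo "test:$hash" > passwd.dav
cat passwd.dav
```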
Now you need to modify the site configuration file to enable WebDAV and authentication.
Do it according to the example below (added directives in red).

<VirtualHost WEB_server_IP_address:443>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav
    CustomLog /var/log/apache2/access.log combined

    <IfModule mod_ssl.c>
        SSLEngine on
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    </IfModule>

    <Directory /var/www/webdav>
            DAV On
            AuthType Basic
            AuthName "test"
            AuthUserFile /etc/apache2/passwd.dav
            <Limit PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Require user test
            </Limit>
    </Directory>

</VirtualHost>
Again we need to reload apache to make the changes effective:
# /etc/init.d/apache2 reload
Now to test WebDAV you need a "WebDAV client".

I will be using Windows' built-in WebDAV support through Internet Explorer, as it is adequate for the remainder of this article.

In Internet Explorer select "File" => "Open" and open this URL:

https://WEB_server_IP_address/
Make sure to tick the check box next to "Open as Web Folder" (this is very important; otherwise, the folder won't be opened using the WebDAV protocol, and all you'll be able to do is view the files, not modify them).

Once again you will be prompted to accept a certificate, but now you will additionally be prompted to provide a user name and password.

Note: If you are constantly prompted for the user name and password, either they are incorrect or there is a problem with authentication. It could be that apache cannot read the password file because there is a syntax error, or that apache doesn't have permission to read the file. Look in "/var/log/apache2/error.log" to find out more.

To test WebDAV functionality in Internet Explorer, right-click on "test.txt" (the test file we created earlier), select "Rename", rename the file, and press the "Enter" key.

Enabling public browsing access to the site

You may want to have your site "browsable", or at least one part of it (see later on for more granular access). To do so, add a plain-HTTP virtual host to the site configuration file, according to the example below (added directives in red):

<VirtualHost WEB_server_IP_address:80>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav

</VirtualHost>


<VirtualHost WEB_server_IP_address:443>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav
    CustomLog /var/log/apache2/access.log combined

    <IfModule mod_ssl.c>
        SSLEngine on
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    </IfModule>

    <Directory /var/www/webdav>
            DAV On
            AuthType Basic
            AuthName "test"
            AuthUserFile /etc/apache2/passwd.dav
            <Limit PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Require user test
            </Limit>
    </Directory>

</VirtualHost>
Once again reload apache to make the changes effective:

# /etc/init.d/apache2 reload
Open the following URL in your Web browser (but not as a Web Folder).

https://WEB_server_IP_address/test.txt
replacing "WEB_server_IP_address" with the IP address of your server or its FQDN.

This should display:

hello world

Recompiling the apache WebDAV module to provide quota

As a quick overview, in order to provide quota capabilities with WebDAV, you will need the apache2 source. You will also need to patch the WebDAV modules and recompile them.

Recompiling the apache WebDAV modules

On Debian Sarge, the apache version I used was 2.0.54.
  1. Obtain the Apache 2.0.54 source from https://archive.apache.org/dist/httpd/httpd-2.0.54.tar.gz
  2. Obtain the Apache WebDAV modules patch from https://www.geocities.jp/t_sat7/webdav/webdav.html
    (I used https://leche.goodcrew.ne.jp/webdav/webdav-2.0.54-quota-2.3any.txt)
Note: if you are using a version of apache 2 later than 2.0.54, obtain the source from https://httpd.apache.org/ and the corresponding WebDAV patch from https://www.geocities.jp/t_sat7/webdav/webdav.html.

# tar -xvzf httpd-2.0.54.tar.gz
(snip)
# cd httpd-2.0.54
# patch -p2 < /location/where/the/patch/is/webdav-2.0.54-quota-2.3any.txt
patching file modules/dav/main/mod_dav.c
patching file modules/dav/main/quotachk.h
patching file modules/dav/main/quotachk.c
patching file modules/dav/main/config5.m4
patching file configure
# ./configure --enable-modules=most --enable-mods-shared=all
(snip)
# make

Once the compilation has completed, verify that you have the two WebDAV module binaries (they should be located under httpd-2.0.54/modules/dav/).
 
# ls -l ./modules/dav/fs/.libs/mod_dav_fs.so
-rwxr-xr-x  1 root root 217493 2006-03-24 10:10 ./modules/dav/fs/.libs/mod_dav_fs.so
# ls -l ./modules/dav/main/.libs/mod_dav.so
-rwxr-xr-x  1 root root 417579 2006-03-24 10:09 ./modules/dav/main/.libs/mod_dav.so
Make a backup of your current apache modules:
# mv /usr/lib/apache2/modules/mod_dav.so /usr/lib/apache2/modules/mod_dav.so-bkp
# mv /usr/lib/apache2/modules/mod_dav_fs.so /usr/lib/apache2/modules/mod_dav_fs.so-bkp
and copy the new modules to the apache module directory:
# cp ./modules/dav/main/.libs/mod_dav.so /usr/lib/apache2/modules/mod_dav.so
# cp ./modules/dav/fs/.libs/mod_dav_fs.so /usr/lib/apache2/modules/mod_dav_fs.so

Enabling quota for the site

To enable quota for the site, use the "DAVSATMaxAreaSize" directive, with the size limit specified in kB. Again, the example below shows the added directives in red.

<VirtualHost WEB_server_IP_address:80>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav

</VirtualHost>

<VirtualHost WEB_server_IP_address:443>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav
    CustomLog /var/log/apache2/access.log combined

    <IfModule mod_ssl.c>
        SSLEngine on
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    </IfModule>

    <Directory /var/www/webdav>
            DAV On
            # DAVSATMaxAreaSize: the size is specified in kBytes
            # since blocks are 4 kB each,
            # add about 50 kB above the limit wanted
            DAVSATMaxAreaSize 150
            AuthType Basic
            AuthName "test"
            AuthUserFile /etc/apache2/passwd.dav
            <Limit PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Require user test
            </Limit>
    </Directory>

</VirtualHost>

A few words about the quota limit before moving on

One important thing you should be aware of is that the quota accounting depends on the block size of your Web server's file system, which may cause the quota limit to be reached sooner than you would expect. Here is an example:

Say you have set the quota limit to 50 kB, your file system has a block size of 4 kB, and the files in the WebDAV directory already amount to a total of 48 kB. When you copy a 1 kB text file from a Windows system to the WebDAV directory, that file will occupy a full 4 kB block, thus exceeding the limit even though you thought you had 2 kB free.
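The arithmetic behind this example can be sketched in shell: round the incoming file's size up to a whole number of 4 kB blocks before adding it to the usage.

```shell
# Quota accounting with a 4 kB block size (values from the example).
block=4      # file system block size, in kB
used=48      # kB already stored in the WebDAV directory
filesize=1   # size of the incoming file, in kB

# Round the file size up to a whole number of blocks.
on_disk=$(( (filesize + block - 1) / block * block ))
echo $(( used + on_disk ))   # 52 - over a 50 kB quota
```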

An easy workaround is to set the limit a little higher than required: for a strict quota of 150 kB, for example, you could set it to 152 kB. In practice, I personally add 50 kB above the required limit.
This time, in order to use the "quota-enabled" WebDAV modules and the quota directives for the site, you have to restart apache (a reload is not enough to load the new modules).

# /etc/init.d/apache2 restart
After that, restart your WebDAV session with Internet Explorer or your favorite WebDAV client, and copy a file large enough to exceed the 150 kB limit.

At the same time you may want to see what is happening on the server in real time:

# tail -f /var/log/apache2/error.log
Once you reach or exceed the quota limit, Internet Explorer will report the message "An error occurred copying some or all of the selected files" (NetDrive will properly report that the file copy failed because the storage space was exceeded), and an error like the following example will appear in /var/log/apache2/error.log:

[Fri Mar 24 12:26:13 2006] [error] [client 10.44.10.1] File does not exist: /var/www/webdav/impunx.log
[Fri Mar 24 12:26:13 2006] [error] WebDAV-Quota: Directory `/var/www/webdav/' size `404KB' is over `150KB'!

Configuring Apache to use MySQL for more complex authentication

Installing MySQL

# apt-get install mysql-server libapache2-mod-auth-mysql
Reading Package Lists... Done
Building Dependency Tree... Done
The following extra packages will be installed:
  libdbd-mysql-perl libdbi-perl libmysqlclient12 libnet-daemon-perl libplrpc-perl mysql-client mysql-common
Suggested packages:
  dbishell mysql-doc
The following NEW packages will be installed:
  libapache2-mod-auth-mysql libdbd-mysql-perl libdbi-perl libmysqlclient12 libnet-daemon-perl libplrpc-perl mysql-client
  mysql-common mysql-server
0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded.
Need to get 5233kB of archives.
After unpacking 12.6MB of additional disk space will be used.
Do you want to continue? [Y/n] y

Creating a MySQL database

The following steps will outline how to create a MySQL database. This is the first of several databases that need to be created.

The default database and table names expected by apache are, respectively, http_auth and mysql_auth; however, you can use any database name you want, provided that you specify it in the site configuration file (more on this later). In my case, I called the first database "sysadmins".

# mysqladmin -uroot -p create sysadmins
Enter password:
Now log into mysql and create the table.

# mysql -uroot -p

mysql> use sysadmins
Database changed
mysql> create table mysql_auth
    -> (
    -> username char(50) not null,
    -> passwd char(25),
    -> groups char(25)
    -> );
Query OK, 0 rows affected (0.01 sec)

mysql> create unique index mysqlauthix1 on mysql_auth(username);
Query OK, 0 rows affected (0.01 sec)
Records: 0  Duplicates: 0  Warnings: 0
Now create a user which will be used by apache to read the database:

mysql> grant select on sysadmins.* to apache@localhost identified by '1pach2';  (apache@localhost is the user and '1pach2' is the password)
Query OK, 0 rows affected (0.00 sec)
Now add a user (to be used for WebDAV authentication later).

mysql> insert into mysql_auth (username, passwd, groups) values ('admin','1dm3n','sysadmins');
Query OK, 1 row affected (0.01 sec)
Log out of mysql and check that everything is OK: try to log into mysql as the apache user that was just created, and verify that you can read the "mysql_auth" table. You should be able to log on (otherwise, you have made an error with the user name or password), select the database (otherwise, there is a grant problem with the database), and see the details of the user that was inserted.

delldebian:/etc/apache2/mods-enabled# mysql -uapache -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 36 to server version: 4.0.24_Debian-10sarge1-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use sysadmins
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from mysql_auth;
+----------+--------+-----------+
| username | passwd | groups    |
+----------+--------+-----------+
| admin    | 1dm3n  | sysadmins |
+----------+--------+-----------+
1 row in set (0.00 sec)

Configuring apache to use MySQL authentication

Create the apache mysql configuration file:

# touch /etc/apache2/mods-available/auth_mysql.conf
In it, specify the host, the user, and the password of the apache user created earlier to read the databases:

Auth_MySQL_Info localhost apache 1pach2
Now enable the apache mysql authentication modules.

# ln -s /etc/apache2/mods-available/auth_mysql.load /etc/apache2/mods-enabled/auth_mysql.load
# ln -s /etc/apache2/mods-available/auth_mysql.conf /etc/apache2/mods-enabled/auth_mysql.conf

Reconfiguring the site to use MySQL authentication

Once again, in the example below I have added in red the extra directives that enable MySQL authentication and select which database to use for that particular directory; choosing the database is done with the Auth_MySQL_DB parameter followed by the database name. Also, pay attention to the line in blue, which I have commented out (in the production version, I actually deleted it). Failure to comment it out or delete it will result in apache still using the file "/etc/apache2/passwd.dav" for authentication instead of the MySQL database, and any attempt to authenticate as a user in the database will fail.

<VirtualHost WEB_server_IP_address:80>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav

</VirtualHost>

<VirtualHost WEB_server_IP_address:443>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav
    CustomLog /var/log/apache2/access.log combined

    <IfModule mod_ssl.c>
        SSLEngine on
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    </IfModule>

    <Directory /var/www/webdav>
            DAV On
            # DAVSATMaxAreaSize: the size is specified in kBytes
            # since blocks are 4 kB each,
            # add about 50 kB above the limit wanted
            DAVSATMaxAreaSize 150
            AuthType Basic
            AuthName "test"
            #AuthUserFile /etc/apache2/passwd.dav
            <Limit PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Auth_MySQL_DB sysadmins
                Auth_MySQL_Encrypted_Passwords off
                Require user admin
            </Limit>
    </Directory>

</VirtualHost>
Now restart apache to load the MySQL authentication modules and take the MySQL directives into account.

# /etc/init.d/apache2 restart
After that, restart your WebDAV session and log in with the "admin" user account created above.

Configuring the site to provide all the functions mentioned in the introduction

So far, I have covered the basics. All the extra functions are nothing more than variations of what has already been covered; the following sections add the remaining capabilities to the site.

Installing phpmyadmin

# apt-get install phpmyadmin libapache2-mod-php4
Reading Package Lists... Done
Building Dependency Tree... Done
The following extra packages will be installed:
  apache2-mpm-prefork libapache-mod-php4 php4 php4-mysql
Suggested packages:
  php4-pear php4-gd php5-gd
The following packages will be REMOVED:
  apache2-mpm-worker
The following NEW packages will be installed:
  apache2-mpm-prefork libapache-mod-php4 libapache2-mod-php4 php4 php4-mysql phpmyadmin
0 upgraded, 6 newly installed, 1 to remove and 0 not upgraded.
Need to get 1815kB/6220kB of archives.
After unpacking 17.3MB of additional disk space will be used.
Do you want to continue? [Y/n] y 
Now that phpmyadmin has been installed, it is a good idea to secure and restrict access to it. The first thing I do is move the phpmyadmin symbolic link from the Web root directory into the WebDAV directory, so I can control how it is accessed from the site configuration file.
# mv /var/www/phpmyadmin /var/www/webdav/phpmyadmin
Then I modify the site configuration file to restrict the access. Again, the following is the updated site configuration file with the added directives in red, which I'll explain afterwards (though the comments should make them mostly self-explanatory).

<VirtualHost WEB_server_IP_address:80>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav

     # Hide restricted access to phpmyadmin
        <Directory /var/www/webdav>
             IndexIgnore phpmyadmin
        </Directory>

     # redirect http://Site_Name/phpmyadmin to https://Site_Name/phpmyadmin
        redirect /phpmyadmin https://Site_Name/phpmyadmin

</VirtualHost>

<VirtualHost WEB_server_IP_address:443>
    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav
    CustomLog /var/log/apache2/access.log combined

    <IfModule mod_ssl.c>
        SSLEngine on
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    </IfModule>

   # restrict access of phpmyadmin to the sysadmins group
   <Directory /var/www/webdav/phpmyadmin>
       Order Deny,Allow
          Deny From all
          allow from IP_address, IP_address_range, etc...
        AuthType Basic
        AuthName "restricted access to phpmyadmin"
          Auth_MySQL_DB sysadmins
          Auth_MySQL_Encrypted_Passwords off
         require group sysadmins
    </Directory>

    <Directory /var/www/webdav>
            DAV On
            # DAVSATMaxAreaSize: the size is specified in kBytes
            # since blocks are 4 kB each,
            # add about 50 kB above the limit wanted
            DAVSATMaxAreaSize 150
            AuthType Basic
            AuthName "test"
            <Limit PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Auth_MySQL_DB sysadmins
                Auth_MySQL_Encrypted_Passwords off
                Require user admin
            </Limit>
    </Directory>

</VirtualHost>
The following prevents indexing of the phpmyadmin symbolic link, thus hiding it from the root of the WebDAV site. Mind you, if people know the URL, they can still access it; however, as you'll see below, further levels of restriction on access to the phpmyadmin URL lie ahead.

     # Hide restricted access to phpmyadmin
     <Directory /var/www/webdav>
             IndexIgnore phpmyadmin
     </Directory>
For further security, we restrict phpmyadmin to be accessible only through HTTPS rather than plain HTTP, and redirect any HTTP access to HTTPS:

     # redirect http://Site_Name/phpmyadmin to https://Site_Name/phpmyadmin
        redirect /phpmyadmin https://Site_Name/phpmyadmin
Then access to phpmyadmin is further restricted to a specific IP address or range (see https://httpd.apache.org/docs/ for further details about the apache "Allow" and "Deny" directives). We also require authentication, and only members of the "sysadmins" group are allowed to access the phpmyadmin Web page (and, of course, they still have to log into phpmyadmin itself).

After having passed the authentication, you will have to use the mysql root account to log into phpmyadmin. Later, however, we will create an "operators" group whose members will be allowed to log into phpmyadmin using their own accounts, which is much safer than using the root account.

   # restrict access of phpmyadmin to the sysadmins group
   <Directory /var/www/webdav/phpmyadmin>
       Order Deny,Allow
          Deny From all
          allow from IP_address, IP_address_range, etc...
        AuthType Basic
        AuthName "restricted access to phpmyadmin"
          Auth_MySQL_DB sysadmins
          Auth_MySQL_Encrypted_Passwords off
         require group sysadmins
    </Directory>

Creating the operators and customers groups

The purpose of the "operators" group is to provide a segmented part of the site that its members can manage, but not the whole site (which can only be managed by the "sysadmins"). You don't have to create it if you don't want to.

The procedure for creating the "operators" and "customers" groups is almost identical to the procedure for creating the "sysadmins" group.

# mysqladmin -uroot -p create operators
Enter password:

# mysqladmin -uroot -p create customers
Enter password:

delldebian:/home/dom# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 181 to server version: 4.0.24_Debian-10sarge1-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> use operators
Database changed
mysql> create table mysql_auth
    -> (
    -> username char(50) not null,
    -> passwd char(25),
    -> groups char(25)
    -> );
Query OK, 0 rows affected (0.01 sec)

mysql> create unique index mysqlauthix1 on mysql_auth(username);
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> grant select on operators.* to apache@localhost identified by '1pach2';
Query OK, 0 rows affected (0.00 sec)

mysql> insert into mysql_auth (username, passwd, groups) values ('operator','4p2r1t4r','operators');
Query OK, 1 row affected (0.00 sec)

mysql> use customers
Database changed
mysql> create table mysql_auth
    -> (
    ->  username char(50) not null,
    ->  passwd char(25),
    ->  groups char(25)
    ->  );
Query OK, 0 rows affected (0.01 sec)

mysql> create unique index mysqlauthix1 on mysql_auth(username);
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> grant select on customers.* to apache@localhost identified by '1pach2';
Query OK, 0 rows affected (0.00 sec)

mysql> insert into mysql_auth (username, passwd, groups) values ('joe','bl4g','customers');
Query OK, 1 row affected (0.00 sec)
Once you have created the "operators" and "customers" databases, we need to grant the members of the "sysadmins" group the privileges to administer the operators and customers databases, and also grant the members of the "operators" group the privileges to administer the customers database.

mysql> use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> grant all on operators.* to admin@localhost identified by '1dm3n';
Query OK, 0 rows affected (0.01 sec)

mysql> grant all on customers.* to admin@localhost identified by '1dm3n';
Query OK, 0 rows affected (0.00 sec)

mysql> grant all on customers.* to operator@localhost identified by '4p2r1t4r';
Query OK, 0 rows affected (0.00 sec)
Repeat the same grant process for every member of the "sysadmins" or "operators" group you add to either database. This grant process is also what allows members of the "sysadmins" and "operators" groups to log into phpmyadmin.

Now that we have the "operators" and "customers" databases, we can allow the members of the "operators" group to manage their segmented part of the site and the list of customers who are allowed to download files.

Allowing the "operators" to manage part of the site

As always, see the example below for the updated site configuration file, with the added directives in red and the removed/commented-out ones in blue. Pay particular attention to the "Auth_MySQL_DB" directive, making sure that the correct database is specified; otherwise, logons will fail.

<VirtualHost WEB_server_IP_address:80>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav

     # Hide restricted access to phpmyadmin
        <Directory /var/www/webdav>
             IndexIgnore phpmyadmin
        </Directory>

     # redirect http://Site_Name/phpmyadmin to https://Site_Name/phpmyadmin
        redirect /phpmyadmin https://Site_Name/phpmyadmin

</VirtualHost>

<VirtualHost WEB_server_IP_address:443>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav
    CustomLog /var/log/apache2/access.log combined

    <IfModule mod_ssl.c>
        SSLEngine on
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    </IfModule>

   # restrict access to phpmyadmin
   <Directory /var/www/webdav/phpmyadmin>
       Order Deny,Allow
          Deny From all
          allow from IP_address, IP_address_range, etc...
        AuthType Basic
        AuthName "restricted access to phpmyadmin"
          # Auth_MySQL_DB sysadmins
          Auth_MySQL_DB operators
          Auth_MySQL_Encrypted_Passwords off
          # require group sysadmins
          Require group operators
    </Directory>

     # give admins full access to the WebDAV root directory
     # upload is unlimited
    <Directory /var/www/webdav>
        DAV On
        # DAVSATMaxAreaSize: the size is specified in kBytes
        # since blocks are 4 kB each,
        # add about 50 kB above the limit wanted
        # DAVSATMaxAreaSize 150
        AuthType Basic
        AuthName "test"
            <Limit PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Auth_MySQL_DB sysadmins
                Auth_MySQL_Encrypted_Passwords off
                # Require user admin
                Require group sysadmins
            </Limit>
    </Directory>

    # Give operators full access to the operator directory
    # but not to the parent directory
    # upload is limited with quota (DAVSATMaxAreaSize)
    <Directory /var/www/webdav/downloads>
        DAV On
        # since blocks are 4K each add
        # about 50K above the limit
        # limit upload size to 2 Gigs (2 000 000K)
        DAVSATMaxAreaSize 2000050
        AllowOverride None
        Options None
        AuthType Basic
        AuthName "Restricted access to the downloads directory"
            <Limit GET PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Auth_MySQL_DB operators
                Auth_MySQL_Encrypted_Passwords off
                # !! if you copy this, make sure the correct DB is used (Auth_MySQL_DB)
                Require group operators
            </Limit>
     </Directory>

</VirtualHost>
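The DAVSATMaxAreaSize quota above follows the rule of thumb stated in the config comments: the desired limit in kB plus roughly 50 kB of slack for 4 kB block rounding. A quick shell sanity check of the 2 GB figure (the variable names are mine, purely for illustration):

```shell
# Quota rule of thumb from the config comments above:
# desired limit in kB, plus ~50 kB slack for 4 kB block rounding.
limit_kb=2000000   # the 2 GB limit for the downloads directory
slack_kb=50
echo "DAVSATMaxAreaSize $((limit_kb + slack_kb))"
```

which prints the 2000050 used in the directive.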
Once you have updated the site configuration file, reload apache 2 and restart your WebDAV session. In your browser, select "File" => "Open" and open this URL:

https://WEB_server_IP_address/downloads
replacing "WEB_server_IP_address" with the IP address of your server or its FQDN (and don't forget to tick the check box next to "Open as Web Folder").

This time when you are prompted for a user name and password, use the "operator" account that was created earlier. Once in your WebDAV session as an "operator", create the directory "restricted" (right-click and select "New => Folder"). We will use that directory to host files which only authorized customers will be able to download.

Allowing customers to download files from the restricted part of the site.

Restricting access to the "restricted" directory takes a much-simplified version of the WebDAV directive section that was created earlier (added directives shown below in red).

<VirtualHost WEB_server_IP_address:80>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav

    # Hide the phpmyadmin directory from directory indexes
    <Directory /var/www/webdav>
        IndexIgnore phpmyadmin
    </Directory>

    # redirect http://downloads.lansa.co.uk/phpmyadmin to https://downloads.lansa.co.uk/phpmyadmin
    redirect /phpmyadmin https://delldebian.lansa.co.uk/phpmyadmin

    # restricted access to "/downloads/restricted" directory
    # require authentication against list of customers
    <Directory /var/www/webdav/downloads/restricted>
        AuthType Basic
        AuthName "Restricted download access"
        Auth_MySQL_DB customers
        Auth_MySQL_Encrypted_Passwords off
        require group customers
    </Directory>


</VirtualHost>

<VirtualHost WEB_server_IP_address:443>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav
    CustomLog /var/log/apache2/access.log combined

    <IfModule mod_ssl.c>
        SSLEngine on
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    </IfModule>

   # restrict access to phpmyadmin
   <Directory /var/www/webdav/phpmyadmin>
       Order Deny,Allow
          Deny From all
          allow from 10.44.10.1
        AuthType Basic
        AuthName "restricted access to phpmyadmin"
          # Auth_MySQL_DB sysadmins
          Auth_MySQL_DB operators
          Auth_MySQL_Encrypted_Passwords off
          # require group sysadmins
          Require group operators
    </Directory>

     # give admins full access to the WebDAV root directory
     # upload is unlimited
    <Directory /var/www/webdav>
        DAV On
        # DAVSATMaxAreaSize: the size is specified in kBytes
        # since blocks are 4K each
        # add about 50K above the limit wanted
        # DAVSATMaxAreaSize 150
        AuthType Basic
        AuthName "test"
            <Limit PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Auth_MySQL_DB sysadmins
                Auth_MySQL_Encrypted_Passwords off
                # Require user admin
                Require group sysadmins
            </Limit>
    </Directory>

    # give operators full access to the operator directory
    # but not to the parent directory
    # upload is limited with quota (DAVSATMaxAreaSize)
    <Directory /var/www/webdav/downloads>
        DAV On
        # since blocks are 4K each add
        # about 50K above the limit
        # limit upload size to 2 Gigs (2 000 000K)
        DAVSATMaxAreaSize 2000050
        AllowOverride None
        Options None
        AuthType Basic
        AuthName "Restricted access to the downloads directory"
            <Limit GET PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Auth_MySQL_DB operators
                Auth_MySQL_Encrypted_Passwords off
                # !! if you copy this, make sure the correct DB is used (Auth_MySQL_DB)
                Require group operators
            </Limit>
     </Directory>

</VirtualHost>
Once you have updated the site configuration file and reloaded apache 2, open the following URL (replacing "WEB_server_IP_address" with the IP address of your server or its FQDN):

https://WEB_server_IP_address/downloads/restricted
Then when you are prompted for a user name and password, use the "customer" account that was created earlier.

Note: as you may have noticed from the above URL, you need to specify the full path, including the name of the directory for which the authentication is performed, because the restriction directive has the effect of hiding that directory.

Allowing customers to upload files

Allowing customers to upload files is fairly easy to achieve and uses more or less the same configuration as the "operators" section. First, create the upload directory:
# mkdir /var/www/webdav/upload
# chown www-data:www-data /var/www/webdav/upload
(you can also easily do this via a WebDAV session if you're logged in as admin). Next, modify the site configuration file (added directives shown below in red).

<VirtualHost WEB_server_IP_address:80>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav

    # Hide the phpmyadmin directory from directory indexes
    <Directory /var/www/webdav>
        IndexIgnore phpmyadmin
    </Directory>

    # redirect http://downloads.lansa.co.uk/phpmyadmin to https://downloads.lansa.co.uk/phpmyadmin
    redirect /phpmyadmin https://delldebian.lansa.co.uk/phpmyadmin

    # restricted access to "/downloads/restricted" directory
    # require authentication against list of customers
    <Directory /var/www/webdav/downloads/restricted>
        AuthType Basic
        AuthName "Restricted download access"
        Auth_MySQL_DB customers
        Auth_MySQL_Encrypted_Passwords off
        require group customers
    </Directory>


</VirtualHost>

<VirtualHost WEB_server_IP_address:443>

    Servername testwebdav.lansa.co.uk
    Documentroot /var/www/webdav
    CustomLog /var/log/apache2/access.log combined

    <IfModule mod_ssl.c>
        SSLEngine on
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
    </IfModule>

   # restrict access to phpmyadmin
   <Directory /var/www/webdav/phpmyadmin>
       Order Deny,Allow
          Deny From all
          allow from 10.44.10.1
        AuthType Basic
        AuthName "restricted access to phpmyadmin"
          # Auth_MySQL_DB sysadmins
          Auth_MySQL_DB operators
          Auth_MySQL_Encrypted_Passwords off
          # require group sysadmins
          Require group operators
    </Directory>

    # give operators full access to the operator directory
    # but not to the parent directory
    # upload is limited with quota (DAVSATMaxAreaSize)
    <Directory /var/www/webdav/downloads>
        DAV On
        # since blocks are 4K each add
        # about 50K above the limit
        # limit upload size to 2 Gigs (2 000 000K)
        DAVSATMaxAreaSize 2000050
        AllowOverride None
        Options None
        AuthType Basic
        AuthName "Restricted access to the downloads directory"
            <Limit GET PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Auth_MySQL_DB operators
                Auth_MySQL_Encrypted_Passwords off
                # !! if you copy this, make sure the correct DB is used (Auth_MySQL_DB)
                Require group operators
            </Limit>
     </Directory>

    # allow customers full access to the upload directory
    # but not to the parent directory
    # upload is limited with quota (DAVSATMaxAreaSize)
    <Directory /var/www/webdav/upload>
        DAV On
        # since blocks are 4K each add
        # about 50K above the limit
        # limit upload size to 200 Megs (200 000K)
        DAVSATMaxAreaSize 200050
        AllowOverride None
        Options None
        AuthType Basic
        AuthName "Restricted access to the upload directory"
            <Limit GET PUT POST DELETE PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK>
                Auth_MySQL_DB customers
                Auth_MySQL_Encrypted_Passwords off
                # !! if you copy this, make sure the correct DB is used (Auth_MySQL_DB)
                Require group customers
            </Limit>
     </Directory>

</VirtualHost>
Once you have updated your site configuration file, reload apache 2 and restart your WebDAV session. In your browser, select "File" => "Open" and open the following URL:

https://WEB_server_IP_address/upload
replacing "WEB_server_IP_address" with the IP address of your server or its FQDN (and don't forget to tick the check box next to "Open as Web Folder").

Then, when you are prompted for a user name and password, use the "customer" account again.

Note: with the above configuration, any customers with a valid user name and password will be able to upload, download, rename and delete files in the upload directory.

Using phpmyadmin to allow the people in the operators group to manage the list of allowed customers

Managing the list of customers with phpmyadmin is fairly easy. However, for those of you not familiar with phpmyadmin, here is a quick tutorial. Open the following URL in your Web browser:

https://WEB_server_IP_address/phpmyadmin
replacing "WEB_server_IP_address" with the IP address of your server or its FQDN.

Then, when you are prompted for a user name and password, use the "operator" account to get past the authentication at the Web server level. You will then be presented with the phpmyadmin login screen; once again, use the "operator" account to log in.

Once logged into phpmyadmin, click the "customers" database, then the "Browse" icon (first from the left) under the "Action" menu. From there you can add a customer by clicking "Insert new row", edit a customer's details (password change, etc.) by clicking the "Edit" (pencil) icon, or delete a customer by clicking the "Delete" (red cross) icon. To disable an account, either edit its details or delete it.

Lastly, remember the following (either of which, incidentally, can be used to disable an account):
  1. user names, passwords, and groups are case-sensitive
  2. make sure that every customer is part of the "customers" group and, likewise, that every operator is part of the "operators" group.
Getting either of these wrong will prevent successful logins.

Credits

I should give credit and thanks to the following; without them I would never have been able to achieve my project:

Footnote, comments/suggestions

I invite all comments and suggestions for improving this HOWTO. Please feel free to drop me a line at dom2319@yahoo.co.uk.

Talkback: Discuss this article with The Answer Gang


[BIO]

I was born in France and moved to the UK in 1993 - and, believe it or not, have loved it ever since.

Back in 1998, a work colleague suggested I look at Linux when it was just barely a buzz. Something to do in my spare time. I got myself a book with a copy of RH 5.0. A few weeks later Mandrake came out, and I've been hooked on it ever since.

In recent years I switched to Debian, and these days Debian boxes pop up everywhere in the company whose network I admin. I use it for Web servers, VPN firewalls, routers, etc. - but that's never enough, as it has even made it onto our iSeries (also known as AS/400).

In my free time I like snowboarding, listening to house music, and getting a foot into the paranormal.


Copyright © 2006, Dominique Cressatti. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006

Sharp does it once again: the SL-C3200

By Edgar Howell

Sharp's latest PDA -- I'll use that term although it really doesn't do it justice -- is a real killer, irresistible to incorrigible Linuxers. I'm not even going to try to pretend to be objective about it.

A bit over $500 for a PDA may seem pricey, but this is far more than a mere PDA. After all, the system is Linux - 2.4.20, but Linux. And this gem has more hard disk storage and RAM/ROM than all but the last two machines I've purchased.

Hardware

I've had the SL-5500 for a bit over 3 years and it has functioned perfectly all that time. But when I heard that the SL-C3200 has a 6GB hard drive, I had to have one.

It has an infrared port, basically used to transfer data between two Zauri. When my wife got her 5500, I gave her much of the contents of my address book via infrared.

The USB port is quite interesting. If you use the extremely short cable, you can connect things like USB sticks to the Zaurus. With the considerably longer cable you can connect the Zaurus to a PC and it will function as a storage device; this allows access to the hard drive, although as best I can tell, not to RAM or ROM. And for those so inclined, the USB interface works fine with Windows.

After copying the persistent settings for Knoppix from a USB stick to the Zaurus hard drive, I removed the USB stick and booted the notebook under Knoppix telling it to scan for the settings.
It worked, of course.
This beats just Knoppix and a USB stick because of the amount of storage and the ability to actually do something away from a PC. Remember, this is a mere PDA... or is it?

The keyboard is English with a few Japanese characters. The alphabetics are where one expects, but many of the other characters are somewhere else. As I recall, on a standard keyboard the '*' is with the '8', but here it is an alternate on the 'J'. That's a minor irritant, really -- the size precludes touch-typing anyhow.

There are slots for SD Card and Compact Flash. Given the size of the hard drive, it is not backed up with the Zaurus backup application; in fact, until I get a larger SD card, backing up takes place to the hard drive - so the contents of the HD and the backup file need to be transferred somewhere else. Not a problem via USB.

The screen is quite nice. Much like a notebook, it folds open above the keyboard. But it can then be pivoted 180 degrees and closed on top of the keyboard to enable use while held with one hand rather than having to be placed on some surface. There is a scroll wheel and buttons for OK and Cancel that can be operated with your thumb. The display is easily changed from landscape to portrait. It works, but the need is dependent on the application.

The CF network card I have used with the 5500 works flawlessly in this device as well. Oddly, I couldn't configure it with 'ifconfig' - but the network application was easy enough to use. You can either give it a fixed IP address or -- if your LAN has a DHCP server -- let it get one automatically. Well... more or less automatically: once you tell it to do so, it will.

Although the screen isn't much larger than on the 5500, the resolution is much better - and in landscape mode, it was quite easy to use the browser application (not Opera as on the 5500) to read documents on the Apache server on the LAN.

The battery seems to hold up quite well. After first being charged it worked for well over 4 hours, although I made no effort to make it easy on the battery. One week I used it well over an hour every day and didn't have to re-charge it until the weekend.

Software

The Zaurus doesn't offer multiple virtual terminals or a choice of GUI, but one of the applications is a command line. The keyboard is too small for touch-typing, but I still prefer it to using the stylus with an on-screen keyboard. The usual functionality is there: history (arrow keys, not "!") and tab completion (although not for root). I do miss "less", though; "more" just isn't "less".

The 5500 included a number of applications that could be installed from CD-ROM. There were a few apps for the C3200 as well, but nothing that seemed of interest to me; however, there was a comment in the documentation that software for the 5500 would also run on the C3200.

Regarding additional software, I did check out my favorite trivial test case: solitaire. Search engines didn't help -- far too many stale links. But a reference in the documentation to www.elsix.org (Embedded Linux Software Index) put me on the right track. Under Zsolitaire there were references to 4 varieties, including one for OpenZaurus. The installation was simple: download to PC, copy to SD card, and run the installation application - a piece of cake. This works equally well when the package has been copied via USB to the hard drive. De-installation worked as advertised.

The 3200 comes with a rather useful collection of applications, at least when considered as a PDA: calendar, address book, to-do list, text editor, e-mail, music and video players, spreadsheet, calculator, PDF viewer.

Apparently the music software is quite efficient. During one 5-minute piece the hard drive was only accessed about every minute - no need to stress the battery. Well done, as I have come to expect from Linux and Sharp.

Problems

Did I just say "well done"?
It struck me as odd that an application to play music under Linux apparently can't handle the OGG Vorbis format - only MP3s. For those who would like to see what it takes to freely flip between the various formats, as well as the CD-to-HD-to-Zaurus transfers (assuming that this is legal where you live), you should take a look at my other article this month.

If you get one, be sure to back it up as soon as you have taken care of initial set-up. It may have been something I did, or failed to do, but I ended up with the same problem as I had with the 5500: I couldn't log on as root. The early back-up lets you restore the 3200 to a relatively pristine condition. It is possible to set a password for root - but not for long. Or at least so it would seem; your mileage may vary.

Although the infrared port can be used to transfer entries in the address book or to-do list from one machine to another, this doesn't work with spreadsheets and other text files (it isn't offered in the menu). This was only a minor inconvenience, since 'find' and 'cp' on the command line along with an SD card worked fine for everything else; the lack just seems an odd gap.

While sniffing around the 'net I was unable to find an application to display HTML available on the Zaurus. The 5500 included Opera - but that wasn't the case here. The network application works well to connect to a server over a LAN but can't do anything with a simple HTML file on the Zaurus. This seems like an odd omission.

Then there was the time when the Zaurus locked up and would not turn on - or at least so it seemed at first. It was on; there was just no information being transferred to the display. Following the directions for a command-line boot, I was able to back up a couple of things that would have been a bit of a pain to reconstruct, and after that it worked again without trouble. The supplier hadn't encountered the problem, and it isn't something I can make happen. They are waiting for more information that I hope I won't be able to supply.

Given that a reboot resolves the problem, it seems much like initial problems with usbfs, when too many insertions and removals of USB devices led to confusion as to what was there and wasn't. Reboot helped there as well. On the Zaurus it does require a bit of manual dexterity but otherwise isn't difficult to perform. Fortunately, it does not result in any data loss.

Once you're looking at anything outside the Zaurus itself, Sharp seems to be very Windows-centric. The CD-ROMs have the usual junk needed to communicate with that environment. It was much the same for the 5500; e.g., software installation was via an interface that only talked with Windows. On this machine, installation is from hard drive or external media. On the other hand, given network connectivity, what more does one really need under Linux?

The biggest problem is the fact that Sharp some time ago withdrew from the international market and only makes the Zaurus line for their domestic market in Japan. The company from which I bought the C3200, www.trisoft.de, has been distributing the Zaurus for a number of years and referenced a partner company in the States, www.streamlinecpus.com.

They import them, convert the software from Japanese to English, and sell them to the incorrigible. The modest documentation they can provide here in German is OK but might not be enough for someone not yet used to the Zaurus. I really couldn't do a great deal with 2 manuals and 3 CD-ROMs in Japanese.

Conclusion

As I sit here writing this while listening to the Brandenburg Concertos off the Zaurus, life is good. Every serious Linuxer deserves a Zaurus.

I just wish someone could convince Sharp to get back into the international market.

Talkback: Discuss this article with The Answer Gang


Bio picture Edgar is a consultant in the Cologne/Bonn area in Germany. His day job involves helping a customer with payroll, maintaining ancient IBM Assembler programs, some occasional COBOL, and otherwise using QMF, PL/1 and DB/2 under MVS.

(Note: mail that does not contain "linuxgazette" in the subject will be rejected.)

Copyright © 2006, Edgar Howell. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006

Ogg, WAV, and MP3

By Edgar Howell

Disclaimer

What is described in the following may be illegal where you live. Where I live, the law permits making a small number of copies of the contents of legally acquired media for personal use as long as this does not involve circumventing copy protection. Strangely enough, "personal use" includes giving a copy to close relatives. And "small number" certainly is well under double-digits, but to my knowledge the courts have not yet made it explicit. You didn't expect the legislators -- politicians! -- to do that in the course of creating the legislation, did you?

For obvious reasons I would have much preferred to use the Ogg Vorbis format. Unfortunately the music player on the Zaurus couldn't handle Ogg, so MP3 it was. IANAL (I Am Not A Lawyer), I may be wrong, but as best I can determine, conversion to MP3 is permitted.

(It is interesting to consider the obligation of those subject to the law to observe it. Reasonable enough, right? When they can't even understand it without professional help?!)

From these considerations, I conclude that it would be prudent to remove all files created as described below before taking any storage medium across any international border. As usual, your call.

CD to Hard Disk

The first step in getting some music onto the Zaurus was to copy it to the hard disk on a PC - but never having done anything of the sort in the past, it was time to consult the oracle:

web@lohgoDELL:/tmp/CD> apropos CD
rsyncd.conf (5)      - configuration file for rsync in daemon mode
reader.conf (5)      - configuration file for pcscd readers' drivers
pcscd (8)            - PC/SC Smart Card Daemon
cdparanoia 9.8 (Paranoia release III libcdio) (1) [cd-paranoia] - an audio CD reading utility which includes extra data verification features
Encode::EBCDIC (3pm) - EBCDIC Encodings
...

This went on for a while but fortunately the reference to cd-paranoia was early in the list.

The man-page even included an example for an entire CD, "cd-paranoia -B", which is basically what I used. This simply puts the entire contents of the CD into the current directory with names like "track01.cdda.wav".

WAV to Ogg

WAV files are, well, sizeable. That made it worthwhile looking into conversion into another format.

So here is a script to convert from WAV to Ogg:

#!/bin/bash

# Encode one ripped track as NN_Title.ogg, then drop the WAV.
/usr/bin/oggenc "track$1.cdda.wav" -o "$1_$2.ogg"
/bin/rm "track$1.cdda.wav"

It is executed as follows:

./wav2ogg 01 Melanie=The_Nickel_Song

to convert 'track01.cdda.wav' into '01_Melanie=The_Nickel_Song.ogg'.

In fact, if you start cd-paranoia in one window and wait until the first track is available, in another window you can almost keep up with it converting formats and assigning reasonable names. It takes about 15 to 20 minutes to do a CD.
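If you would rather not type each conversion by hand, the renaming step can also be scripted around a dry run. The sketch below is my own illustration, not from the article: `title_for` is a hypothetical number-to-title lookup you would fill in per CD, and the `echo` prints the oggenc command each rip would get instead of running it.

```shell
#!/bin/bash
shopt -s nullglob    # a non-matching glob expands to nothing

# Hypothetical track-number-to-title lookup -- fill in your own CD.
title_for() {
    case $1 in
        01) echo "Melanie=The_Nickel_Song" ;;
        *)  echo "untitled" ;;
    esac
}

# Print (dry run) the oggenc command a given rip would get.
wav2ogg_dryrun() {
    local f=$1 n
    n=${f#track}         # "track01.cdda.wav" -> "01.cdda.wav"
    n=${n%.cdda.wav}     # "01.cdda.wav"      -> "01"
    echo oggenc "$f" -o "${n}_$(title_for "$n").ogg"
}

for f in track*.cdda.wav; do
    wav2ogg_dryrun "$f"
done
```

Once the printed names look right, drop the `echo` (and add the `rm`) to do the real conversion.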

[ Conversely, you could automate the process with this timely 2-cent tip. :) -- Ben ]

There are "better" approaches. You can create playlists and go out to the Internet to get track information. But my goal was to have a convenient way of occasionally copying a CD to the Zaurus to be listened to for a while before erasing it to recover space.

Ogg to WAV

Unfortunately the Zaurus doesn't handle Ogg - at least I haven't been able to figure it out. Even more unfortunately, because of the size of the WAV files, the script erases them after conversion - but I didn't want to have to read the CD again and then go through the process of assigning titles.

Well, careful reading of the documentation pointed me at 'ogg123' and resulted in the following script:

#!/bin/bash

# Decode an Ogg file back to a WAV with the same base name.
/usr/bin/ogg123 -d wav -f "$1.wav" "$1.ogg"

It is executed as follows:

./ogg2wav 01_Melanie=The_Nickel_Song

to convert '01_Melanie=The_Nickel_Song.ogg' into '01_Melanie=The_Nickel_Song.wav'

Interestingly enough, this is extremely fast.

WAV to MP3

This is where things got a bit complicated. MP3 is subject to certain restrictions and SuSE no longer includes anything dealing with it. But there was a reference to LAME, which I obtained from Sourceforge.

The directions provided by SuSE seemed a bit complicated to me -- construct an RPM out of a tar-ball and then install it? As root I just did the following:

cd /usr/src/packages/SOURCES/lame-3.93.1
./configure
make
make install

Having done this, the following script

#!/bin/bash

# Encode one ripped track as NN_Title.mp3, then drop the WAV.
/usr/local/bin/lame "track$1.cdda.wav" -o "$1_$2.mp3"
/bin/rm "track$1.cdda.wav"

can be executed as follows:

./wav2mp3 01 Melanie=The_Nickel_Song

to convert 'track01.cdda.wav' into '01_Melanie=The_Nickel_Song.mp3'

For the sake of completeness I should mention that there is an inconsistency in the names used in all that fumbling about: sometimes 'cdda.wav' and sometimes just 'wav'. This merely reflects its status as a work in progress. Ultimately I used the following two scripts ('ogg2wav2mp3' and 'wav2mp3_2') to convert an entire CD, in its own directory, from Ogg to MP3 all at once:

#!/bin/bash

for i in *.ogg; do /tmp/CD/ogg2wav "$(basename "$i" .ogg)"; echo $? "$i"; done
for i in *.wav; do /tmp/CD/wav2mp3_2 "$(basename "$i" .wav)"; echo $? "$i"; done
rm *.wav

and

#!/bin/bash

/usr/local/bin/lame "$1.wav" -o "$1.mp3"

File Sizes

It is interesting to note the differences in file sizes.

web@lohgoDELL:/tmp/CD> ll *01*
-rw-r--r-- 1 web users  3682219 2006-07-29 15:47 01_Melanie=The_Nickel_Song.wav.2.mp3
-rw-r--r-- 1 web users  3087829 2006-07-29 15:45 01_Melanie=The_Nickel_Song.wav.2.ogg
-rw-r--r-- 1 web users 40581452 2006-07-29 15:51 01_Melanie=The_Nickel_Song.wav.2.ogg.2.wav
-rw-r--r-- 1 web users 40581452 2006-07-29 14:34 track01.cdda.wav
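Taking the byte counts from that listing at face value, the compression works out to roughly 11:1 for MP3 and 13:1 for Ogg; a quick check with awk (the variable names are mine):

```shell
# Byte counts from the ls listing above.
wav=40581452
mp3=3682219
ogg=3087829

awk -v w="$wav" -v m="$mp3" -v o="$ogg" \
    'BEGIN { printf "mp3 %.1f:1  ogg %.1f:1\n", w/m, w/o }'
```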

And interestingly enough, converting back to WAV produces a file of the same size but it isn't identical.

web@lohgoDELL:/tmp/CD> diff 01_Melanie\=The_Nickel_Song.wav.2.ogg.2.wav track01.cdda.wav
Files 01_Melanie=The_Nickel_Song.wav.2.ogg.2.wav and track01.cdda.wav differ

Presumably such manipulations will have an impact on the quality of the recording, but a mobile music player certainly can't claim to be high-fidelity - and I am extremely happy with the sound on the headphones.

Talkback: Discuss this article with The Answer Gang


Bio picture Edgar is a consultant in the Cologne/Bonn area in Germany. His day job involves helping a customer with payroll, maintaining ancient IBM Assembler programs, some occasional COBOL, and otherwise using QMF, PL/1 and DB/2 under MVS.

(Note: mail that does not contain "linuxgazette" in the subject will be rejected.)

Copyright © 2006, Edgar Howell. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006

On Qmail, Forged Mail, and SPF Records

By Rick Moen

Here's an object lesson in how not to do e-mail, with the role of global village idiot played by Prof. Daniel J. Bernstein's (DJB's) popular proprietary SMTP server package, qmail.

I. Qmail Follies

The trail starts with a virus-infected MS-Wind0ws box in Ireland on an Eircom dynamic IP, which evidently has been energetically pumping out copies of the MyDoom worm and sending them everywhere its tiny little mind can think of, via Eircom's "smarthost" MTA boxes.

In one case, the forged malware mail was sent out addressed to TAG, with a claimed sender address of "MAILER-DAEMON@lists.linuxgazette.net", i.e., my own MTA's mail-housekeeping mailbox. The Eircom host duly accepted this forgery and attempted delivery to my machine (in its guise as lists.linuxgazette.net) -- which said very emphatically "No" in its "550 Unsolicited spam" permanent-reject message.

Eircom tossed this result around its network, and eventually a different Eircom host's "qmail-send" process was thus left holding the bag, and had to decide whom to notify. It decided to lob my MTA's reject notice across the Internet to the forged claimed sender, "MAILER-DAEMON@lists.linuxgazette.net", wrapped up inside an e-mail from the null sender ("<>").

I can't really complain about the latter phase of this process, in fairness: Eircom's second MTA had no clue about the message's real provenance, and had only forged header data to act on. In that context, it did the right thing.

The actual problem was at the first qmail instance, when that program's qmail-smtpd module accepted the forgery in the first place. qmail-smtpd is a deliberately stupid program capable only of accepting all incoming port-25 SMTP traffic without prejudice, handing the received bytes over to the qmail-queue module for instantiation on-disk, making an entry in a logfile, and closing the connection.

DJB's admirers call this deliberate stupidity a feature, pointing out that qmail's sparsity and modularity make it easier to deal with public data safely.[1] This is true, but it comes at an excessive cost, namely the qmail-smtpd module's designed-in helplessness to detect and reject forged and otherwise aberrant mail: all incoming mail gets accepted and the connection closed before its header or contents can even be looked over at all. This is in stark contrast to its main open-source competitors, Postfix, Exim, and sendmail -- all of which can intelligently inspect incoming mail before saying yes or no to delivery.

II. Forgeries

qmail is thus distinctively guilty of one of this decade's leading technological sins: generation of "backscatter spam" -- unwanted mail sent to innocent parties whose sending addresses were forged in bulk e-mail sent out by spammers and/or malware. It's important to stress that this is a design deficiency, one that's not curable without redesigning and rewriting qmail's qmail-smtpd module (at minimum). (And, if you're going to go to all that trouble, it's smarter to just switch to a good open-source MTA such as Postfix, instead.)

As I think I've mentioned before, mailing lists (as the leading modern example of mail forwarders) are ground zero in the Internet's junkmail wars, being frequently caught between spam/malware senders trying to pump out junk and vigilant mail admins trying to reject it. The MyDoom malware e-mail concerned here was a case in point, having been deliberately crafted to implicate "MAILER-DAEMON@lists.linuxgazette.net" as an apparent sender -- a ploy called "Joe Job" mail after its first known target, Joe Doll of joes.com (Joe's Cyberpost), against whom a Chicago-area spammer launched a pernicious and clever "revenge spam" attack on January 2, 1997, trying to punish Doll for the commendable act of throwing the spammer off his free hosting service.[2]

If Eircom had used a less deliberately stupid MTA, it could in theory have taken steps to determine that "194-125-179-235.as1.mgr.mullingar.eircom.net" (one of Eircom's own dynamic-IP addresses!) was an extremely doubtful source host for "linuxgazette.net" mail. However, there are also things we can do to help such efforts:

III. SPF

We at Linux Gazette really need to get around to limiting the potential for abuse by adding SPF (Sender Policy Framework) [3] records to the linuxgazette.net DNS for domain "linuxgazette.net" and subdomain "lists.linuxgazette.net" -- records that specify for all other mail systems which specific MTA hosts are solely authorised to originate SMTP mail from our mail-handling hostnames (aka authorised to be our MX or mail exchanger hosts). Basically, an SPF record is a reverse-MX record, in the same way that a PTR (reverse DNS) record is the mirrored half of a forward-lookup "A" record -- which, if checked during mail delivery, permits rejecting forgeries.

To my knowledge, only kayos's host and mine legitimately handle our outbound mail.[4] Thus, a suitable pair of SPF records would be as follows -- with each record comprising a hostname, an SPF version number, and a list of "mechanisms" (suggestions on what to do with mail from specified sets of hosts):

linuxgazette.net. IN TXT "v=spf1 a mx a:lists.linuxgazette.net -all"
lists.linuxgazette.net. IN TXT "v=spf1 a mx -all"

Parsing the first one by parts:

  linuxgazette.net.  #Identifies domain this is for.

  IN TXT  #INternet-style record of type TXT = freeform text, that being 
          #where SPF records are stored for lack of a dedicated SPF
          #reference record type assigned by the IANA, so far.  
          #(A dedicated record type has been applied for.)

  v=spf1  #Implementing SPF protocol version 1.

  a       #Honour as valid any "linuxgazette.net" mail arriving from our "A"
          #record host, which happens to be genetikayos.com, IP 64.246.26.120.

  mx      #Honour as valid "linuxgazette.net" mail arriving from any other 
          #host listed as type "MX" for that domain.  There aren't any, 
          #but this will allow for future ones.

  a:lists.linuxgazette.net  #Accept as valid "linuxgazette.net" mail 
          #from my host, too (just in case it's ever necessary for
          #my host to handle some).
  
  -all    #"all" is a catchall mechanism that matches if none of the foregoing
          #keywords do.  Modifier "-", here, advises receiving MTAs that they 
          #should hardfail all matching mail ostensibly from the 
          #"linuxgazette.net" domain, i.e., reject any such mail received from 
          #any SMTP host not enumerated above.

(The second SPF line, published for subdomain/host lists.linuxgazette.net, should be pretty easy to parse, being slightly simpler but generally similar.)
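The term-by-term breakdown above can be walked mechanically. Below is an illustrative shell sketch: the spf_explain function name is my own invention, and it labels only the handful of mechanisms and qualifiers used in this article, not the full RFC 4408 grammar.

```shell
# Illustrative sketch only: label each term of an SPF record string.
# "spf_explain" is a made-up name; this covers just the common terms
# discussed in this article, not full RFC 4408 syntax.
spf_explain() {
  set -f    # disable globbing, so a literal "?all" term isn't expanded
  for term in $1; do
    case $term in
      v=spf1)  echo "version: SPF protocol version 1" ;;
      -all)    echo "catchall: hardfail mail from all other hosts" ;;
      '~all')  echo "catchall: softfail mail from all other hosts" ;;
      '?all')  echo "catchall: neutral about all other hosts" ;;
      a|mx|ptr|a:*|mx:*|ip4:*|ptr:*)
               echo "mechanism: accept mail matching '$term'" ;;
      *)       echo "unrecognised term: $term" ;;
    esac
  done
  set +f
}

# The proposed linuxgazette.net record; once published, the live copy
# can be fetched with:  dig -t TXT linuxgazette.net +short
spf_explain 'v=spf1 a mx a:lists.linuxgazette.net -all'
```

Run against the proposed record, this prints one line per term, ending with the hardfail catchall; fed google.com's "v=spf1 ptr ?all" instead, it would end with the neutral line.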

There are two alternatives to "-all": The wishy-washy "?all" (neutral recommendation) is what, for example, aol.com and google.com publish, and means "We're sort of considering SPF deployment, but for now are saying nothing definitive about which hosts should be considered valid mail exchangers."

  $ dig -t TXT aol.com +short
  "v=spf1 ip4:152.163.225.0/24 ip4:205.188.139.0/24 ip4:205.188.144.0/24
  ip4:205.188.156.0/23 ip4:205.188.159.0/24 ip4:64.12.136.0/23
  ip4:64.12.138.0/24 ptr:mx.aol.com ?all"
  "spf2.0/pra ip4:152.163.225.0/24 ip4:205.188.139.0/24
  ip4:205.188.144.0/24 ip4:205.188.156.0/23 ip4:205.188.159.0/24
  ip4:64.12.136.0/23 ip4:64.12.138.0/24 ptr:mx.aol.com ?all"
  $
  $ dig -t TXT google.com +short
  "v=spf1 ptr ?all"
  $

The slightly more confident "~all" (softfail recommendation) implies "We're in transition to SPF, so please consider doubting the authenticity of mail from mail exchangers other than the ones we're listing here -- but we advise against rejecting it out of hand." sendmail.com is using that model:

  $ dig -t TXT sendmail.com +short
  "v=spf1 ip4:209.246.26.40 ip4:209.246.26.45 ip4:63.211.143.38
  ip4:209.246.26.36 ip4:209.246.26.12 ip4:209.246.26.24 ip:209.246.26.25
  ip4:209.246.26.10 ~all"
  $

The fully unequivocal "-all" (hardfail recommendation) catchall (which I recommend) means "Please consider this list of allowed hosts truly definitive; we'd prefer that you summarily reject all mail purporting to be ours but arriving from anywhere we don't list here." The District of Columbia-area tux.org LUG collective publishes that sort of record, for example:

  $ dig -t TXT tux.org +short
  "v=spf1 mx ptr -all"
  $

Assuming I hear no serious objections, I'll be asking kayos to add the hardfail-type lines specified above to our domain records, going forward.

[ Objections? What objections? If I had a complaint, it would be something like "why didn't I think of this a year ago?" Thanks for taking care of it, Rick! -- Ben ]

----- Forwarded message from MAILER-DAEMON@eircom.net -----

Return-path: <>
Envelope-to: MAILER-DAEMON@lists.linuxgazette.net
Delivery-date: Tue, 29 Aug 2006 09:20:25 -0700
Received: from mail00.svc.cra.dublin.eircom.net ([159.134.118.16]:21343)
	 by linuxmafia.com with smtp   (Exim 4.61 #1 (EximConfig 2.0))
	 id 1GI6Hw-0005DL-Oa   
	for <MAILER-DAEMON@lists.linuxgazette.net>; Tue, 29 Aug 2006 09:20:24 -0700
Received: (qmail 42778 messnum 6402494 invoked for bounce); 29 Aug 2006 16:18:01 -0000
Date: 29 Aug 2006 16:18:01 -0000
From: MAILER-DAEMON@eircom.net
To: MAILER-DAEMON@lists.linuxgazette.net
MIME-Version: 1.0
X-EximConfig: v2.0 on linuxmafia.com (https://www.jcdigita.com/eximconfig)
X-SA-Exim-Connect-IP: 159.134.118.16
X-SA-Exim-Mail-From: 
X-Spam-Checker-Version: SpamAssassin 3.1.1 (2006-03-10) on linuxmafia.com
X-Spam-Level: 
X-Spam-Status: No, score=-1.6 required=4.0 tests=BAYES_00,NO_REAL_NAME 
	autolearn=no version=3.1.1
Content-Type: multipart/mixed; boundary="1156868198eircom.net6401928"
Subject: failure notice
X-SA-Exim-Version: 4.2.1 (built Mon, 27 Mar 2006 13:42:28 +0200)
X-SA-Exim-Scanned: Yes (on linuxmafia.com)

Hi. This is the qmail-send program at eircom.net.
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.

<tag@lists.linuxgazette.net>:
198.144.195.186 failed after I sent the message.
Remote host said: 550 Unsolicited spam:  score=6.7 required=4.0 trigger=4.2  -  Sorry, your message has been rejected by our filtering software due to achieving a high spam score.  We apologise if you have sent a legitimate message and it has been blocked.  If this is the case, please re-send adding verified- to the beginning of the E-mail address of each recipient.  If you do this, your message will get through successfully, e.g:  verified-fred.bloggs@domain.tld

--- Enclosed is a copy of the message.

Return-Path: MAILER-DAEMON@lists.linuxgazette.net
Received: (qmail 30458 messnum 6401928 invoked from network[194.125.179.235/194-125-179-235.as1.mgr.mullingar.eircom.net]); 29 Aug 2006 16:15:59 -0000
Received: from 194-125-179-235.as1.mgr.mullingar.eircom.net (HELO lists.linuxgazette.net) (194.125.179.235)
  by mail00.svc.cra.dublin.eircom.net (qp 30458) with SMTP; 29 Aug 2006 16:15:59 -0000
From: MAILER-DAEMON <MAILER-DAEMON@lists.linuxgazette.net>
To: tag@lists.linuxgazette.net
Subject: Delivery failed
Date: Tue, 29 Aug 2006 17:13:14 +0100
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="----=_NextPart_000_0009_B34925E4.84D3BAB0"
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 6.00.2600.0000
X-MIMEOLE: Produced By Microsoft MimeOLE V6.00.2600.0000

Dear user tag@lists.linuxgazette.net,

We have found that your email account was used to send a huge amount of junk e-mail during the recent week.
Obviously, your computer had been infected by a recent virus and now runs a hidden proxy server.

We recommend that you follow our instruction in order to keep your computer safe.

Have a nice day,
The lists.linuxgazette.net team.


[RM note:  The MyDoom worm binary was file-attached at this point.]

----- End forwarded message -----

[1] See "The Security Architecture of qmail", https://hillside.net/plop/2004/papers/mhafiz1/PLoP2004_mhafiz1_0.pdf

[2] For more, see the Joe Job entries in my Linuxmafia.com Knowledgebase, at https://linuxmafia.com/kb/Mail/. I was among the many anti-spam activists on the Usenet news.admin.net-abuse-email newsgroup whom the spammer attempted to lure into revenge-attacking Joe Doll, through flamebait mail sent to me directly, forged so as to fool me into thinking Doll had sent it.

[3] SPF is an oft-misunderstood technical mechanism, very easy to add to one's published DNS records and, with a bit more difficulty, to SMTP servers, that makes it possible for mail servers to determine at the moment of receipt whether a piece of mail's claimed sender is forged. The idea is for you as a domain owner to declare (in special SPF reference records in your DNS) which specific IPs/hostnames are the sole authorised sources of mail from your domain. Thereafter, any system receiving mail claimed to be from your domain has the means to verify that assertion, checking the "envelope From" and Return-Path headers' domain against your published SPF record.

If you care about your, your domain's, and its users' reputations, then you should add an SPF record to your DNS. It's that simple -- and it's something you can easily do once, and never have to revisit unless you move, add, or retire SMTP servers.

Objections to SPF divide generally into "It doesn't achieve [additional desired goal foo]" and "It might interfere with my favourite way of relaying mail through multiple SMTP servers" categories (see whitepaper, below) -- all of which miss the point that publishing an SPF record is absolutely in the interest of any domain owner. If you haven't created one yet, what are you waiting for?

Details at:
https://www.openspf.org/whitepaper.pdf
https://www.securityfocus.com/infocus/1763
https://new.openspf.org/SPF_Record_Syntax
https://new.openspf.org/svn/project/specs/rfc4408.html
Note also the "SPF Setup Wizard" CGI at https://www.openspf.org/, that you can use to write prototype SPF records for domains.

How best, if at all, to implement the MTA (SMTP server) end of SPF, i.e., the checking of SPF records at the time of receiving mail, is a separate discussion, another point commonly missed in discussions of this subject. You as the sysadmin of an MTA always have within your sole control whether your server will act at all on SPF information, and whether it should follow particular SPF records' recommendations or not. SPF is just published information: you can ignore it, implement it, do the exact opposite of its suggestions, or anything else you can dream of. SPF does not break mailing lists (because they rewrite the "envelope From" and Return-Path headers) -- and there do exist ways to implement other forms of mail-forwarding that don't trigger unauthorised-MX suspicions. (Publishing SPF records isn't useful if, for some strange reason, you cannot determine what SMTP hosts are supposed to be legitimate senders of your outgoing mail -- but then, I'd say you have bigger problems.)

[4] Hosting of Linux Gazette's Web pages, svn archive, and main administrative e-mailboxes is kindly donated by T.R. "kayos" Fullhart on his genetikayos.com server, masquerading as "linuxgazette.net". All of the magazine's mailing lists, however, are hosted separately at my linuxmafia.com server, masquerading as "lists.linuxgazette.net".

Talkback: Discuss this article with The Answer Gang


Bio picture Rick has run freely-redistributable Unixen since 1992, having been roped in by first 386BSD, then Linux. Having found that either one sucked less, he blew away his last non-Unix box (OS/2 Warp) in 1996. He specialises in clue acquisition and delivery (documentation & training), system administration, security, WAN/LAN design and administration, and support. He helped plan the LINC Expo (which evolved into the first LinuxWorld Conference and Expo, in San Jose), Windows Refund Day, and several other rabble-rousing Linux community events in the San Francisco Bay Area. He's written and edited for IDG/LinuxWorld, SSC, and the USENIX Association; and spoken at LinuxWorld Conference and Expo and numerous user groups.

His first computer was his dad's slide rule, followed by visitor access to a card-walloping IBM mainframe at Stanford (1969). A glutton for punishment, he then moved on (during high school, 1970s) to early HP timeshared systems, People's Computer Company's PDP8s, and various of those they'll-never-fly-Orville microcomputers at the storied Homebrew Computer Club -- then more Big Blue computing horrors at college alleviated by bits of primeval BSD during UC Berkeley summer sessions, and so on. He's thus better qualified than most, to know just how much better off we are now.

When not playing Silicon Valley dot-com roulette, he enjoys long-distance bicycling, helping run science fiction conventions, and concentrating on becoming an uncarved block.

Copyright © 2006, Rick Moen. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006

The Geekword Puzzle

By Ben Okopnik

This month's Geekword Puzzle is a bit simpler than the ones in the previous issues - intentionally so; even the clues have been made a little easier (in fact, the standard informational tools in Linux, if used properly, would almost give away the answers.) The reason is a simple one: there are new people joining the world of Linux every day - and they, unlike those of us who have been around for a while, need a Big Clue to help them. And what better place for clues than a crossword?

So, to help any new Linux users who may have discovered LG along their search path in this new world, we have this month's puzzle. Most of the answers are simple, every-day Linux utilities (not all, of course; creating a crossword is hard enough without additional restrictions of that sort!), and I advise those who aren't familiar with them to use the 'man' command wisely.

Oh, and - welcome. It's nice to have you here.

Good luck, and enjoy!


 

[ The interactive crossword grid appears here in the Web version. ]


[ Crossword formatting and JavaScript via Alon Altman's cwd. The ASCII-art, printable version is available here. ]

 

Across

1: An editor for X resources
6: Shows library dependencies
7: Displays the differences between two files
8: File locator
10: World-wide water clock?
14: Shows the interesting part of, e.g., log files
15: Executes command under different group ID
16: Displays data in octal and other formats
Down

1: If not if and not else
2: Copies and converts data
3: Finds RCS keywords
4: An antique editor - the restricted version
5: Daemon that answers the queries in this article
9: Displays nice menus in shell scripts
11: Quits the current shell or script
12: Sets file permissions
13: Controls terminal options

 


Solution to last month's Geekword (ASCII version here):

 

[ The solved grid appears here in the Web version; legible across entries include ORACLE, FOO, IDENT, GPM, GCC, ISPELL, TUNELP, and EGREP. ]

 


Talkback: Discuss this article with The Answer Gang


picture

Ben is the Editor-in-Chief for Linux Gazette and a member of The Answer Gang.

Ben was born in Moscow, Russia in 1962. He became interested in electricity at the tender age of six, promptly demonstrated it by sticking a fork into a socket and starting a fire, and has been falling down technological mineshafts ever since. He has been working with computers since the Elder Days, when they had to be built by soldering parts onto printed circuit boards and programs had to fit into 4k of memory. He would gladly pay good money to any psychologist who can cure him of the recurrent nightmares.

His subsequent experiences include creating software in nearly a dozen languages, network and database maintenance during the approach of a hurricane, and writing articles for publications ranging from sailing magazines to technological journals. After a seven-year Atlantic/Caribbean cruise under sail and passages up and down the East coast of the US, he is currently anchored in St. Augustine, Florida. He works as a technical instructor for Sun Microsystems and a private Open Source consultant/Web developer. His current set of hobbies includes flying, yoga, martial arts, motorcycles, writing, and Roman history; his Palm Pilot is crammed full of alarms, many of which contain exclamation points.

He has been working with Linux since 1997, and credits it with his complete loss of interest in waging nuclear warfare on parts of the Pacific Northwest.


Copyright © 2006, Ben Okopnik. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006

Songs in the Key of Tux: KGuitar

By Jimmy O'Regan

It's been quite a while since my last "Songs in The Key of Tux" article, and, to be honest, I had given up on using Linux-based programs for my tablature needs.

The main problem is that, as a metal guitarist, I need to be able to represent palm muting in my tablature — I've tried in the past to give tablature to people that lacked palm muting information, and ended up having to show them how to play the riffs, which defeats the purpose of tablature.

Fortunately, I finally got around to trying out KGuitar, which does support palm muting.

KGuitar

As well as palm muting, KGuitar has a number of great features. It's able to import Guitar Pro files, up to version 4; it has a visual fretboard, so you can see how a chord would look on the guitar neck; a chord analyser (like Guitar Pro's) that gives you a choice of fingerings based on a chord name, or provides you with chord names for a chord you input; and, a feature that seems to be unique to KGuitar: the ability to tap the rhythm of a bar using the mouse or keyboard.

Virtual fretboard

KGuitar's virtual fretboard

If you've ever used a Windows-based tablature editor, KGuitar will seem familiar to you. Most editing is done with the keyboard: use the arrow keys to go to the section of the tablature you want to edit, and type the number of the fret of the note you want to add.

The Chord Constructor

The Chord Constructor is one of KGuitar's most interesting features. For those who are interested in music theory, it allows you to construct a chord by selecting the root note, and editing the type of the chord using the three lists on the top left of the window. It then gives a list of icons of possible fingerings on the bottom of the window.

To launch the Chord Constructor, either select Note->Insert->Chord, or type 'Shift-C':

Chord Constructor

For those of us who don't know music theory, it allows us to fake it: simply click on the relevant frets in the chord box in the middle of the window, and it will give you a list of possible chord names on the right. You can then use it to find alternate fingerings: you might just find an easier way to play your music!

Using the Rhythm Constructor

The Rhythm Constructor is an interesting idea, and quite useful for the majority of guitarists, who don't read music: simply enter the notes for a bar (or several bars), select Note->Insert->Rhythm (or 'Shift-R'), and tap out the rhythm using the mouse: tap on the Tap button, then press Quantize when you're finished.

Rhythm Constructor

I tried it out using a simple sample file (MIDI). The result (MIDI) was different, but my mouse clicking is more to blame than KGuitar is. I'm sure that, with practise, it could be quite a useful tool.

Importing from TuxGuitar

TuxGuitar is a Java-based tablature program which can import Guitar Pro 5 files. Although it can't represent several features of Guitar Pro tablature (since it lacks those features itself), it does an admirable job of importing files (though I have encountered one bug: see below).

It also claims to be able to export files in Guitar Pro 3 and 4 formats, although my every attempt to import those files into KGuitar gave me screens full of this output:

kguitar: WARNING: readDelphiString - first word doesn't match second byte
kguitar: WARNING: Chord INT1=-1, not 257
kguitar: WARNING: Chord INT2=-1, not 0
kguitar: WARNING: Chord INT4=-1, not 0
kguitar: WARNING: Chord BYTE5=255, not 0

I also encountered a strange bug when importing tablature of one of my exercises (right-click on these links to download either TuxGuitar's GP4 version or the KGuitar version):

TuxGuitar bug

To explain that for those who read neither tablature nor music, the tablature doesn't match the music notation. It should have looked something like this (using KGuitar's ASCII export, slightly corrected):

E|--------------------------|-------------------------|
B|--------------------------|-------------------------|
G|--------------------------|-------------------------|
D|--------------------------|-------------------------|
A|-------7-----------10-----|------7-----------10-----|
D|-0-7-8---8-7-0-7-8----8-7-|0-7-8---8-7-0-7-8----8-7-|

You could be forgiven for thinking that this is a font problem (that a 0 with a line through it and an 8 simply look similar), but TuxGuitar's playback follows the tablature line, and plays D (0) instead of A sharp (8).

I'll take a closer look at TuxGuitar (and DGuitar, a Java-based Guitar Pro viewer) in a future article, when I've set up Java on my system. Until then, here are some sample files to try with TuxGuitar. Some of them are quite complicated, so they should give a good indication of what TuxGuitar can do.

Talkback: Discuss this article with The Answer Gang


Bio picture Jimmy is a single father of one, who enjoys long walks... Oh, right.

Jimmy has been using computers from the tender age of seven, when his father inherited an Amstrad PCW8256. After a few brief flirtations with an Atari ST and numerous versions of DOS and Windows, Jimmy was introduced to Linux in 1998 and hasn't looked back.

In his spare time, Jimmy likes to play guitar and read: not at the same time, but the picks make handy bookmarks.

Copyright © 2006, Jimmy O'Regan. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006

SVN Hackery: Versioning Your Linux Configuration

By Stephen Shirley

Basic Premise

"So, let's assume that you're already familiar with SVN..." This might have made a good start for an article that deals with refining and sharpening your SVN-Fu and making you into an SVN Guru. Stephen's article, however, started life as a tip posted to ILUG - and gained a new lease on life when Rick Moen, a member of LG's staff and an ILUG regular, forwarded it to me for possible inclusion in LG. At first, I didn't see this idea as being all that revolutionary - but after an email exchange with Stephen where he explained the finer points to me [1], I came to appreciate it for what it is.

SVN (Subversion, also known as "a compelling replacement for CVS") is a central part of the LG publication process; after using it for a couple of years and exploring its various features, I'm definitely a fan. However, SVN is not an end in itself but an important building block, a flexible tool. In the case described here, it's used to provide a series of versioned snapshots of your system - something that would allow you to "roll it back" to a known-good version (creating a cron job that would take those snapshots on a regular basis would, of course, be an obvious complement to this.) In my opinion, that makes a very nice addition to a system administrator's toolkit - one that's likely to save you untold grief in a number of situations. That's the kind of ideas and systems I like to expose to our readers; that, to me, is what makes Linux "just a little more fun".
-- Ben Okopnik, Editor-in-Chief


So you have all of /etc stored in your SVN repository (say, at file:///root/svn/etc). That works nicely: it's trivially easy to see if any files have changed, you can revert to any previous version, sleep well at night - just great. Then you decide that you should also store /boot in SVN (say, at file:///root/svn/boot). Also great. However, now in order to check if anything has changed, you have to do:

   cd /etc; svn status
   cd /boot; svn status

If you add any more top-level directories to SVN, it quickly becomes a pain to do this. So, you want to store / in SVN also (with the appropriate "svn:ignore" properties set so that all the non-SVN'd top-level dirs don't show up in 'svn status'). If, however, you now try

   svn co file:///root/svn/ /

it won't work, as SVN will try to check out /etc and /boot as well, and will complain that those directories already exist. Hurm. If you try a non-recursive checkout

   svn co -N file:///root/svn/ /

it will check out file:///root/svn/ into / alright, but /etc and /boot will show up as unknown in "svn status":

   fluff# svn status /
   ?    /boot
   ?    /etc

Fixing this requires some minor hackery of the .svn/ special directory. If you open /.svn/entries in a text editor, you'll see something like:

<?xml version="1.0" encoding="utf-8"?>
<wc-entries
  xmlns="svn:">
<entry
  committed-rev="226"
  name=""
  committed-date="2006-08-17T22:34:00.835159Z"
  url="file:///root/svn"
  last-author="root"
  kind="dir"
  uuid="18f6e95b-a6ff-0310-910f-8823210a8ec4"
  revision="226"/>
</wc-entries>

It contains a single <entry ... /> tag, with the name attribute set to "". This is the SVN entry for '/' itself. If you open '/etc/.svn/entries' in a text editor, you'll find a very similar <entry... name=""... /> tag:

<entry
  committed-rev="25"
  name=""
  committed-date="2005-09-22T20:33:37.949298Z"
  url="file:///root/svn/etc"
  last-author="root"
  kind="dir"
  uuid="18f6e95b-a6ff-0310-910f-8823210a8ec4"
  revision="25"/>

So, to make SVN consider /etc to be part of the / working directory, what you need to do is this: copy the above <entry.../> tag from /etc/.svn/entries into /.svn/entries, placing it just before the closing </wc-entries> tag. Then, change the 'name' attribute from "" to "etc". The complete finished /.svn/entries file should look like this:

<?xml version="1.0" encoding="utf-8"?>
<wc-entries
  xmlns="svn:">
<entry
  committed-rev="226"
  name=""
  committed-date="2006-08-17T22:34:00.835159Z"
  url="file:///root/svn"
  last-author="root"
  kind="dir"
  uuid="18f6e95b-a6ff-0310-910f-8823210a8ec4"
  revision="226"/>
<entry
  committed-rev="25"
  name="etc"
  committed-date="2005-09-22T20:33:37.949298Z"
  url="file:///root/svn/etc"
  last-author="root"
  kind="dir"
  uuid="18f6e95b-a6ff-0310-910f-8823210a8ec4"
  revision="25"/>
</wc-entries>

Now when you check the SVN status of /, it no longer considers /etc to be a foreign object:

   fluff# svn status /
   ?    /boot

Bingo. If you repeat the process for /boot (i.e., copy the <entry... name=""... /> tag from /boot/.svn/entries into /.svn/entries before the closing </wc-entries> tag, and set the name attribute to "boot") and any other applicable dirs, SVN will start treating them all as proper checked-out subdirs of the working dir.

As your reward, a single 'svn status /' will now check all of the top-level dirs that are stored in Subversion, making it much easier to keep track of things.
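If you have more than a couple of top-level dirs to graft in, the hand-edit can be scripted. Here's a rough sketch under the same assumptions as the walkthrough above: the XML entries format shown earlier, with the directory's own name="" entry as the first <entry> tag in the file. The graft_entry name is mine, not a Subversion command, and you should back up your .svn/entries files before experimenting.

```shell
# Illustrative sketch: splice a subdirectory's own <entry> tag into the
# parent working copy's .svn/entries file, renamed, exactly as done by
# hand above.  Assumes the XML entries format shown in the article,
# with the name="" entry appearing first.
graft_entry() {
  child=$1    # e.g. /etc/.svn/entries
  parent=$2   # e.g. /.svn/entries
  name=$3     # e.g. etc
  tmp=$parent.tmp

  # Everything in the parent file except the closing tag...
  grep -v '</wc-entries>' "$parent" > "$tmp"
  # ...then the child's first <entry .../> block, with name="" renamed...
  awk '/<entry/ {f=1} f {print} f && /\/>/ {exit}' "$child" |
    sed "s/name=\"\"/name=\"$name\"/" >> "$tmp"
  # ...and the closing tag restored.
  echo '</wc-entries>' >> "$tmp"
  mv "$tmp" "$parent"
}

# On a real system:  graft_entry /etc/.svn/entries /.svn/entries etc
```

Repeating graft_entry for each top-level dir leaves /.svn/entries in the same state as the manual edit described above.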


[1] Our exchange went something like this:

Ben > > Why not just do 'svn status /etc /boot', or even a little script that
Ben > > would read the list of directories in SVN and report on them? E.g.,
Ben > > 
Ben > > -------------------------------------------------------------------
Ben > > #!/bin/bash
Ben > > 
Ben > > cd /root/svn
Ben > > svn status `echo *|sed 's|\<|/|g'`
Ben > > -------------------------------------------------------------------
>
Steve > For what i explicitly stated, yes, that's a neat enough solution.
Steve > Other solutions that were posted in response on the ILUG list involved
Steve > using the "svn:externals" property on / to achieve roughly the same
Steve > thing. The downside to doing this is that you lose the benefit of a
Steve > cohesive working directory. You can't do atomic commits (you have to
Steve > do one per /, /etc and /boot instead). It also doesn't work with
Steve > branching etc. And of course you have to remember to run a script
Steve > rather than just 'svn status' when you're working on stuff.
Steve > 
Steve > All in all, my solution is probably overkill for most people, i just
Steve > thought i'd document it seeing as i discovered it was possible with a
Steve > moderate amount of effort, and it makes things nice and elegant for
Steve > me, even when i set up svn for /etc before realising that i want it
Steve > for other top-level dirs too.
Steve >
Steve > By all means publish it if you think it's genuinely useful to others,
Steve > but i'd perfectly understand if you don't -)

Talkback: Discuss this article with The Answer Gang


[BIO]

Stephen 'Captain Pedantic Pants' Shirley is a quiet student[0] with an appetite for sanely configured, maintainable systems. He was first introduced to Linux in 1996 with RedHat 4.1, and has since progressed through SuSE, Mandrake and Debian, seeking enlightenment. He is now involved in the Ubuntu project, and is the head admin of the University of Limerick computer society (skynet.ie). Currently finishing a MSc through research in the aforementioned UL, he's looking forward to a fulfilling life of bending Unix boxen to his will, whilst striking down inaccuracies wherever he finds them.

[0] It's always the quiet ones...


Copyright © 2006, Stephen Shirley. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006

The LG Backpage

By Ben Okopnik

Dear readers:

I'm going to get straight to the point - the Linux Gazette needs your help. No, I'm not going to ask you for money; what we really, really need are volunteers. People who are willing to commit a few hours of their time and effort every month to doing some of the work that's necessary to get LG out. Without that help, I see the Linux Gazette slowly grinding to a halt in the not-too-distant future - and that's a future I'd prefer to avoid.

I'll be honest with you: I love LG, and would like to see it continue, even after I've moved on to something else (as hard as I find that to imagine.) I see it as a terrific resource for Linuxers, both new and experienced - a place where people can ask questions about Linux and explore the answers along with The Answer Gang; a place for articles, humor, and challenges; a way to make Linux "Just A Little More Fun"; most importantly, a way for our community to communicate. Behind the scenes, though, there's a lot of hard work going on - and just a few people carrying the load. If one of us has a problem, or is tied up by large amounts of Life Happening in a given month, then... there's no one left to carry the ball. That, my friends, is a serious problem - or so says the engineering part of my brain. It's a system with lots of vulnerabilities and no backup.

What I'm trying to do is create a system in which there's not only a significant amount of safety but also less strain - a system in which we, the staff of LG, get to participate in the fun as well as create it. Producing LG, making it come alive can be really enjoyable... but not if it's a chore. More hands would make it light work - and add to the general level of fun.

What We Need

At the simplest, most basic level, we need people who are communicative and punctual. Punctual, because LG has fixed deadlines; communicative, because the rest of the team relies on you to either deliver or notify. That is, you're expected to either 1) do what you've committed to do, or 2) tell us, as early as possible, that you've run into a problem. Either one is fine - but making an explicit commitment and not doing either of the above is a disaster in the making. (These skills would make you a valuable employee or business partner anyway - so get 'em while they're hot.)

Going beyond that, there is a process to publishing LG; consequently, there is a need for people who can carry that process forward. I'll try to detail that process here, but do note that it's constantly evolving (comments on improving the process itself are also welcome, particularly if they come with an offer of help attached).

Of course, we're always looking for authors and translators.

Friends... help me - help us - keep LG running. It's a good way to "pay forward" for the value that you get from reading the Gazette, for the value and power you gain from being a part of the Linux community. Join us today by emailing me at editor@linuxgazette.net.

Sincerely,

Benjamin A. Okopnik
Editor-in-Chief, Linux Gazette

Talkback: Discuss this article with The Answer Gang



Ben is the Editor-in-Chief for Linux Gazette and a member of The Answer Gang.

Ben was born in Moscow, Russia in 1962. He became interested in electricity at the tender age of six, promptly demonstrated it by sticking a fork into a socket and starting a fire, and has been falling down technological mineshafts ever since. He has been working with computers since the Elder Days, when they had to be built by soldering parts onto printed circuit boards and programs had to fit into 4k of memory. He would gladly pay good money to any psychologist who can cure him of the recurrent nightmares.

His subsequent experiences include creating software in nearly a dozen languages, network and database maintenance during the approach of a hurricane, and writing articles for publications ranging from sailing magazines to technological journals. After a seven-year Atlantic/Caribbean cruise under sail and passages up and down the East coast of the US, he is currently anchored in St. Augustine, Florida. He works as a technical instructor for Sun Microsystems and a private Open Source consultant/Web developer. His current set of hobbies includes flying, yoga, martial arts, motorcycles, writing, and Roman history; his Palm Pilot is crammed full of alarms, many of which contain exclamation points.

He has been working with Linux since 1997, and credits it with his complete loss of interest in waging nuclear warfare on parts of the Pacific Northwest.


Copyright © 2006, Ben Okopnik. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 131 of Linux Gazette, October 2006
