...making Linux just a little more fun!


The Mailbag


General Mail
Submit comments about articles, or articles themselves (after reading our guidelines) to The Editors of Linux Gazette, and technical answers and tips about Linux to The Answer Gang.


And Now, A Message From The Crypt...

by Ben Okopnik, Editor-in-Chief

Greetings, all. This month's Mailbag is getting hammered out by yours truly: Heather Stern, our Editor Gal and Scissor Wielder ne plus ultra, is away at Baycon (as I understand it, she's working just as hard on keeping their networks running as she would on editing LG, which at least implies enviable connectivity for the attendees.) "Hammering" may, in fact, be just the right word for what I'm doing here - I have wielded a large mallet, a 12HP chainsaw, and a rusty pipe wrench where she might have used a fine scalpel and a jeweler's screwdriver - but somehow, the Mailbag got done. The point is, if you hate the look of it this month, blame me; it's not Heather's fault. If you love it, credit her - I've tried to follow her basic layout, although my version is built by hand rather than via her high-tech scripts.

In other - but still related - news, whereas Heather keeps (and processes) all the Answer Gang mail for the month, I... umm... don't. That is, I do read it, but as far as keeping it, well, on my system, it's chiefly noticeable by its absence. :) In other words, TAG will be back next month. Meanwhile, for your reading pleasure and (hopefully) gain in Linux knowledge, we have a series of discussions garnered from Usenet by Rick Moen of our own Answer Gang. Rick participated in these discussions, and thought that our readers might benefit from seeing them - and I find that I agree with him: "there's gold in them hills!" So, enjoy. As always, comments are welcome; send them to The Answer Gang, and you may see your name appear in lights - or at least in our pages.


Reset root password

partze <eron(at)lowzo(dot)com> wrote:
I lost my root password. Running Fedora Core 2 and I'm trying to figure out how to reset it. I can login locally as a regular user. Is there a way that I can reset the root password from the system without using 3rd party tools?
Bit Twister <BitTwister(at)mouse-potato(dot)com> wrote:
On Thu, 19 May 2005 11:21:46 -0400, partze wrote:
> I lost my root password. Running Fedora Core 2 and I'm trying to
> figure out how to reset it. I can login locally as a regular user.
> Is there a way that I can reset the root password from the system
> without using 3rd party tools?

It depends on which boot loader you use.

----------------- using grub ----------------------

When at the grub menu, hit 'e' to get to the edit mode.
You should be on the kernel line containing /vmlinuz.
Hit 'e' again, add a space followed by the word "single" to the end of
the line, and hit the Enter key. Example:
    kernel (hd0,4)/vmlinuz 1 root=/dev/hda9 mem=128M single
Then hit 'b' to boot.

You then come to the sh-nnn# prompt.

When you "exit" the system will continue booting.

If single does not work, try a 1 instead.

---------------- using lilo  -----------------------

Hit the Esc or Tab key at the lilo prompt, then enter:
linux 1    or    linux single
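
Once you have that root shell (by either route), the reset itself is
just a couple of commands. A minimal sketch (the remount step is only
needed if single-user mode leaves the root filesystem read-only):

    # mount -o remount,rw /    # make / writable, if necessary
    # passwd                   # set a new root password
    # sync; exit               # flush to disk and continue booting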

test <test(at)example(dot)com> wrote:
Bit Twister wrote:
> [ Snip - the grub and lilo single-user instructions, quoted above. ]

Another method, which forces a password on all consoles during any
boot, is to simply modify inittab and add/edit these two lines:
id:3:initdefault:
~~:S:wait:/sbin/sulogin
Note the double tilde. Basically, what this does is require a valid
login to access the system even when in single-user mode. You still
need to lock down the BIOS and restrict physical access to the case in
order to prevent a system boot from removable boot media.
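
After editing /etc/inittab, init must be told to re-read it; a minimal
sketch of that step, on a typical sysvinit system:

    # telinit q    # ask init to re-read /etc/inittab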

That gets you involved in using an Encrypted File System, which is
something I've yet to learn enough about to even think of implementing.


Rick Moen <rick(at)linuxmafia(dot)com> wrote:
test <test(at)example(dot)com> wrote:
             ^^^^^^^^^^^
Hey, look! IANA's staff are on Usenet. (I'm kidding, I'm kidding!
Yes, I know about RFC 2606.)

> You still need to lock down the bios and restrict physical access to
> the case in order to prevent a system boot with boot media.

Which, for reasons already cited, actually doesn't work:
https://labmice.techtarget.com/articles/BIOS_hack.htm

But password-encumbering all access to single-user modes, _plus_
restricting physical access to the removable-bootable-media ports
(floppy and CD/DVD slots, USB ports) would do it.

> That gets you involved in using an Encrypted File System, which is
> something I've yet to learn enough about to even think of
> implementing.

Worth considering including a small one on laptops that must house
sensitive data, given the theft threat. You'll have to deal with
crypto overhead and a key-management problem.

Jack Masters <jackm.abc(at)starplace(dot)com> wrote:
Rick Moen wrote:
> test <test(at)example(dot)com> wrote:
>> You still need to lock down the bios and restrict physical access to
>> the case in order to prevent a system boot with boot media.
>
> Which, for reasons already cited, actually doesn't work:
> https://labmice.techtarget.com/articles/BIOS_hack.htm

If (big if) you can restrict access to the case, it is possible. Add
some PCI card that just contains one EPROM, some NVRAM, and bus
interface logic, and program the EPROM as a BIOS extension. Let that
EPROM ask for the password, and only allow the boot process to continue
once it gets a valid password.

I saw this in a computer lab around the mid-80s; in that case the EPROM
did a bit more: it caught all disk access and encrypted it. This could
have worked quite well, if the sysadmin who made the EPROM hadn't
installed a backdoor. (To make it more exciting, you had to hit the
spacebar twice during boot, use his initials as the password, and the
system would boot from track 27, sector 5, or something like that, off
a floppy 8-) ). One could implement this without a backdoor, and then
the only way to tamper with the machine is to physically remove the
card. Anyone who can do that can also physically remove the HD and
mount it somewhere else.

> But password-encumbering all access to single-user modes, _plus_
> restricting physical access to the removable-bootable-media ports
> (floppy and CD/DVD slots, USB ports) would do it.
>
>> That gets you involved in using an Encrypted File System, which is
>> something I've yet to learn enough about to even think of
>> implementing.
>
> Worth considering including a small one on laptops that must house
> sensitive data, given the theft threat. You'll have to deal with
> crypto overhead and a key-management problem.

You don't want to be the sysadmin whose boss has to give an important
presentation in some far-away city, one hour from now, and has
forgotten his password 8-)

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
Jack Masters <jackm.abc(at)starplace(dot)com> wrote:
> Rick Moen wrote:
>> Which, for reasons already cited, actually doesn't work:
>> https://labmice.techtarget.com/articles/BIOS_hack.htm
>
> If (big if) you can restrict access to the case, it is possible.

If you restrict access to the case, it's also not _necessary_. (E.g.,
preventing people from putting the CD drive first in boot order becomes
an unnecessary parlour trick if users cannot get to the CD slot.)

Unruh <unruh-spam(at)physics(dot)ubc(dot)ca> wrote:
partze <eron(at)lowzo(dot)com> writes:
> I lost my root password. Running Fedora Core 2 and I'm trying to
> figure out how to reset it. I can login locally as a regular user. Is
> there a way that I can reset the root password from the system
> without using 3rd party tools?

That would be pretty useless, would it not, if anyone logged on could
reset the root password? However, IF you have physical access to the
machine, you could boot up in single user mode. You will now be in as
root. Run passwd and change the password. This illustrates a) that
physical access negates all security, or b) that the LILO password is
another password to remember. Then you have to use your rescue disk,
which will again emphasize a).

muxaul(at)lenta(dot)ru wrote:
I would argue that there are ways to strengthen security even in case
users have physical access to the machine.

1) In the BIOS, allow boot from the first HDD only and protect the
   BIOS settings with a password. (One then needs to open the case to
   change the BIOS settings without you.)

2) In lilo.conf, exclude "prompt" and use only one kernel.

In addition, single user mode doesn't necessarily mean that you are
logged in as root. In Slackware,

    # telinit S

(or, equivalently, # telinit 1) takes you to single user mode (as
expected) and ... presents "login:"

Mikhail
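
For reference, a minimal lilo.conf along the lines Mikhail suggests
might look like the sketch below. The device names and password are
invented; "restricted" and "password" are the usual companions to
dropping "prompt":

    boot=/dev/hda
    # No "prompt" line, so lilo boots the default image immediately.
    # "restricted" means the password is demanded only if boot-time
    # options (e.g. "single") are typed at a prompt:
    restricted
    password=some-secret
    # (the password is stored in cleartext: chmod 600 /etc/lilo.conf)
    image=/vmlinuz
        label=linux
        root=/dev/hda1
        read-only

Remember to re-run /sbin/lilo after editing the file.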

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
muxaul(at)lenta(dot)ru wrote:
> I would argue that there are ways to strengthen security even in case
> users have physical access to the machine.
>
> 1) In the BIOS, allow boot from the first HDD only and protect the
> BIOS settings with a password. (One needs to open the case then to
> change the BIOS settings w/o you.)
>
> 2) In lilo.conf, exclude "prompt" and use only one kernel.

These measures are dumb, and are only going to piss off your system
administrators when they find they have to resort to service-password
lists (like https://labmice.techtarget.com/articles/BIOS_hack.htm), or
draining the BIOS electrical charge, or inserting a second hard drive
-- just in order to help you when you forget your root password. I
know, because I've _been_ that pissed-off sysadmin.

So, quit playing around with gadget-freak local-console tricks, and put
a lock on the damned door.

muxaul(at)lenta(dot)ru wrote:
I agree with both statements. The second idea is not always easy to implement, is it? Imagine a university lab ... ;-) Mikhail

ibuprofin(at)painkiller(dot)example(dot)tld (Moe Trin) wrote:
In article <1116678422.947909.17740(at)g43g2000cwa.googlegroups(dot)com>,
muxaul(at)lenta.ru wrote:

[Please learn to quote for context.]

> I agree with both statements.

Which were? I think you meant

> huge(at)ukmisc.org.uk (Huge) wrote:
>> muxaul(at)lenta.ru writes:
>>> I would argue that there are ways to strengthen security
>> You forgot the word "minimally".

Actually, it can be a good bit more than "minimally".

>>> even in case users have physical access to the machine.
>>
>> You're better off preventing physical access in the first place.
>
> The second idea is not always easy to implement, is it?
> Imagine a university lab ... ;-)

1. Remove floppy and CD drives - users can't bring in removable media,
   which makes installing windoze virus/trojans and *nix rootkits much
   harder.
2. Boot loader restricted and password protected. Same for BIOS.
3. Students save files to a central file server, which is running
   'quotas'.
4. The cases of the computers are physically locked, and the computers
   and monitors are secured by security cables.
5. Internet access _severely_ restricted - FTP/web access to proxy
   server only.
6. Students guilty of transgressions lose computer privileges. This
   probably means they fail the course - and perhaps the
   quarter/semester. Second offenders are expelled.

Not foolproof (fools are constantly discovering new ways to be a more
complete fool), but also more than 'minimally' strengthened. And this
is not just for educational facilities - I know a number of companies
that have essentially the same setup, except for step 6. Instead, they
may simply be fired.

Old guy

"Mikhail Zotov" <muxaul(at)lenta(dot)ru> wrote:
Moe Trin wrote:
> In article <1116678422.947909.17740(at)g43g2000cwa.googlegroups(dot)com>,
> muxaul(at)lenta.ru wrote:
>
> [Please learn to quote for context.]

Sorry. My current ISP doesn't have COLS at its news server, and I thus
have to use Google Groups (GG). GG only provide an opportunity to
quote for context when you "sign in" for a single session (which I am
doing now). If one signs in for two weeks (as took place in my previous
postings), this opportunity disappears.

>> The second idea is not always easy to implement, is it?
>> Imagine a university lab ... ;-)
>
> 1. Remove floppy and CD drives - users can't bring in removable
> media, which makes installing windoze virus/trojans and *nix rootkits
> much harder.

Done, except for floppies, because some students need them to bring in
work they have done at home.

> 2. Boot loader restricted and password protected. Same for BIOS.

Isn't this what I said before? (and was flamed for by Rick Moen) :-)

> 3. Students save files to a central file server, which is running
> 'quotas'.

Absolutely agree. Unfortunately, it's not always easy to make the HQs
understand even simple things. :-)

> 4. The cases of the computers are physically locked, and the
> computers and monitors are secured by security cables. 5. Internet
> access _severely_ restricted - FTP/web access to proxy server only.
> 6. Students guilty of transgressions lose computer privileges. This
> probably means they fail the course - and perhaps the
> quarter/semester. Second offenders are expelled.

Thank you. I completely agree with your suggestions and believe this is
what must be done in ideal conditions.

Mikhail

ibuprofin(at)painkiller(dot)example(dot)tld (Moe Trin) wrote:
In article <1116754011.850441.50970(at)g47g2000cwa.googlegroups(dot)com>,
Mikhail Zotov wrote:

> Moe Trin wrote:
>> 1. Remove floppy and CD drives - users can't bring in removable
>> media, which makes installing windoze virus/trojans and *nix
>> rootkits much harder.
>
> Done, except for floppies, because some students need them to bring
> in work they have done at home.

The solution at the local colleges I'm familiar with is to have the
students send mail to themselves. I understand this may not be as easy
in Russia. Still, those using computers at home are more likely to be
able to have such mail access, even if it means dialing in to a server
at the university.

>> 2. Boot loader restricted and password protected. Same for BIOS.
>
> Isn't this what I said before? (and was flamed for by Rick Moen) :-)

I'm not Rick, but with the cases locked, it's a reasonable solution
for a school, where the admins have the password, not the users. For
home use, it's a pain in the butt when you forget your password, and
you are the one who has to fix it. Pay me enough, and I'll fix that,
but my rates (like Rick's) are going to be quite high.

> Thank you. I completely agree with your suggestions and believe this
> is what must be done in ideal conditions.

It's only a slight variation on how things were in the 1970s and early
1980s. More than one student was kicked out of places like UCB, CMU,
and MIT, and more than that were threatened with such actions.

Old guy

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
Moe Trin <ibuprofin(at)painkiller(dot)example(dot)tld> wrote:
>> Isn't this what I said before? (and was flamed for by Rick Moen) :-)
>
> I'm not Rick, but with the cases locked, it's a reasonable solution
> for a school, where the admins have the password, not the users. For
> home use, it's a pain in the butt when you forget your password, and
> you are the one who has to fix it. Pay me enough, and I'll fix that,
> but my rates (like Rick's) are going to be quite high.

As you say, with this model, you do indeed need to have the case locked
(or have a person in the room, monitoring the machines sufficiently
well). After all, you not only want to ensure that nobody cracks root,
but also that visitors don't (e.g.) unplug hard drives and take them
home.

This is exactly the model we followed at The CoffeeNet, a now-closed
100% Linux Internet cafe in San Francisco (mirror of the cafe's site
at: https://linuxmafia.com/coffeenet/). BIOS passwords were set, cases
were locked, and one of the food-service staff kept a lackadaisical eye
on things.

However, the main protection was that the machines were set up so that,
if you cracked root on any of the workstations, you enjoyed _less_
privilege than before, rather than more: All significant data were on
NFS exports from a locked room upstairs (/home, /var, /tmp). We used
NIS for single sign-on. Dan Farmer came for a visit, poked around using
a workstation and his laptop, and to our satisfaction pronounced our
security model "devious". ;->

If someone happened to crack root and make a nuisance of himself --
e.g., using the BIOS service passwords on the Web page I cited earlier
-- the food-service staff possessed a floppy they could use to rebuild
any impaired workstation from network-fetched disk images.

After a while, it dawned on us to extend the NFS/NIS server's monitor
and keyboard cables downstairs to serve as a (heavily locked down)
text-only e-mail workstation that you were welcome to use even without
buying any food or coffee.

I would not consider a situation such as _that_ to be a poorly thought
out security model -- having of course used it myself. But what I do
find annoying is people (often business executives who've skim-read too
many Kurt Seifried columns) applying it _generally_, and thinking that
they're somehow improving security just by implementing a BIOS password
-- and then of course standing there looking stupid, unable to produce
that password, when the sysadmin makes a service call and can't get in.
As I said, I've been that sysadmin.

Menno Duursma <pan(at)desktop(dot)lan> wrote:
On Mon, 23 May 2005 03:11:16 -0400, Rick Moen wrote:

[ Snip - Internet cafe configuration. ]

> [ ... ] But what I do find annoying is people (often business
> executives who've skim-read too many Kurt Seifried columns) applying
> it _generally_,

Probably this is because they can get machines (at least from HP) at 50
a pop, imaged with the company-modified GNU/Linux of choice, booting
from HD per default _and_ having the BIOS password set as shipped. At
no extra cost for setting the password (they use a network tool to do
that?)

> and thinking that they're somehow improving security just by
> implementing a BIOS password --

Well, they are. If against nothing else, at least against people
(managers from different departments) saying they don't care for
security at all. And if a user cracks/resets the BIOS password and the
manager gets to know about that, they have a case/argument against them
(as the user has to have done more than just wander around and learn
about the system.)

> and then of course standing there looking stupid, unable to produce
> that password, when the sysadmin makes a service call and can't get
> in.

Of course, the sysadmin should know the password beforehand. If not,
they should know about there being an (unknown) password, and hence
would come prepared to the scene.

> As I said, I've been that sysadmin.

So have I.

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
Menno Duursma <pan(at)desktop(dot)lan> wrote:

[BIOS Setup password implemented by some jackass executive who then
forgets it:]

> Of course, the sysadmin should know the password beforehand. If not,
> they should know about there being an (unknown) password, and hence
> would come prepared to the scene.
>
>> As I said, I've been that sysadmin.
>
> So have I.

Just to clarify: I was annoyed and inconvenienced by being hit with
this _once_, as an unexpected impediment in 2001, at $PRIOR-PRIOR-FIRM,
a Linux company where I was chief sysadmin at the time -- and I _was_
"prepared at the scene" after getting over the idiot's pointless waste
of my time.

One of the executives arranged for me to fix something he'd screwed up
on his Linux workstation, and told me the alleged root password --
which I didn't rely on, knowing that such information is best assumed
unreliable. I arrived at the appointed time; the executive was not
there as arranged, but rather off on some sudden errand, which seemed
just as well. Tried the alleged root password: No go, as expected.
Fell back on booting Tom's Root/Boot; found out that removable drives
were locked out of the boot order. Hurled maledictions on the
executive's biological antecedents, went to fetch a screwdriver and
grounding cable, partially disassembled the workstation, jumpered the
BIOS battery out of the circuit, used the grounding cable to drain the
BIOS Setup CMOS. Booted into BIOS Setup, set the boot order, supplied
the IDE drive geometry yet again, booted Tom's R/B, mounted the HD
filesystem, chrooted, changed the system root password. Fixed the
original problem. Additional time added to the original task: about
1/2 hour.

(I didn't have the hyperlink to that collection of "skeleton key" BIOS
passwords, and didn't at the time have leisure to search for it. But I
knew such pages existed.)

The executive arrives back, expresses surprise that it's taken me an
hour. I politely suggest that setting a BIOS password, not telling the
sysadmin, and then leaving the scene wasn't very helpful on his part.
His take: It's "necessary for security", and I should just deal with
it.

That evening, I "deal with it" by setting all the executive
workstations' BIOS passwords, myself, to a string subtly unflattering
to that one guy. ;->

Menno Duursma <pan(at)desktop(dot)lan> wrote:
On Mon, 23 May 2005 17:05:35 -0400, Rick Moen wrote:

[ 'bout a (PHB) pinhead executive. ]

> Tried the alleged root password: No go, as expected. Fell back on
> booting Tom's Root/Boot;

I used to have some Slackware boot/color floppies laying around SERs,
heh.

> found out that removable drives were locked out of the boot order.

Which wouldn't be bad security practice, should the password work, IMO.

> Hurled maledictions on the executive's biological antecedents,

LOL.

> went to fetch a screwdriver and grounding cable, partially
> disassembled the workstation, jumpered the BIOS battery out of the
> circuit, used the grounding cable to drain the BIOS Setup CMOS.
> Booted into BIOS Setup, set the boot order, supplied the IDE drive
> geometry yet again, booted Tom's R/B, mounted the HD filesystem,
> chrooted, changed the system root password. Fixed the original
> problem. Additional time added to the original task: about 1/2 hour.

I hope your salary was good, at least...

> (I didn't have the hyperlink to that collection of "skeleton key"
> BIOS passwords, and didn't at the time have leisure to search for it.
> But I knew such pages existed.)

Well, actually, that might have gotten you a stunning reaction
otherwise. Some printer I had to fix was barfing errors I had never
seen before (or couldn't remember the meaning of), so I went and
yahoo'd for them... They go: WTF? You just STFW for that stuff? How
much do you make?

> The executive arrives back, expresses surprise that it's taken me an
> hour. I politely suggest that setting a BIOS password, not telling
> the sysadmin, and then leaving the scene wasn't very helpful on his
> part. His take: It's "necessary for security", and I should just
> deal with it.

That's bad; hopefully (more often than not) that isn't the reaction
there, gees.

> That evening, I "deal with it" by setting all the executive
> workstations' BIOS passwords, myself, to a string subtly unflattering
> to that one guy. ;->

Understandable. One time I accidentally had local echo on and kind of
typed the password to some server with an (intern admin) luser looking
over my shoulder; it was getFuc17up or some such ... Ouch. Echo off,
and changed ... (Those kinds of passwords I _do_ seem to remember
better, though. :-))

ibuprofin(at)painkiller(dot)example(dot)tld (Moe Trin) wrote:
In article <706c8$42918214$c690c3ba$11994(at)TSOFT(dot)com>, Rick Moen
wrote:

> As you say, with this model, you do indeed need to have the case
> locked (or have a person in the room, monitoring the machines
> sufficiently well). After all, you not only want to ensure that
> nobody cracks root, but also that visitors don't (e.g.) unplug hard
> drives and take them home.

Absolutely. In the early 1990s, we had a rash of RAM thefts that
finally convinced the bean counters that case locks were cheaper than
replacing 4Meg by 9 SIMMs four at a time. This, in a "secure" facility.
The thieves were smart, as they generally hit during lunch break, and
only took half the memory out of the workstations. They turned off the
monitors, so it looked as if the screen savers had kicked in, and the
boot messages were not obvious - the display being cleared by
/bin/login.

> However, the main protection was that the machines were set up so
> that, if you cracked root on any of the workstations, you enjoyed
> _less_ privilege than before, rather than more: All significant data
> were on NFS exports from a locked room upstairs (/home, /var, /tmp).
> We used NIS for single sign-on.

I like it! If you are using a switched network, rather than coax or
hubs, sniffing NIS becomes a lot harder.

> If someone happened to crack root and make a nuisance of himself --
> e.g., using the BIOS service passwords on the Web page I cited
> earlier -- the food-service staff possessed a floppy they could use
> to rebuild any impaired workstation from network-fetched disk images.

Harder for us, as our workstations and servers generally lack floppies
or CDs. However, we only need the system to be able to boot and find
the network, and we can reload from that. If the system won't boot
(which is pretty rare for us), we normally just drop in a spare system,
and take the broken system in to the service room for a cleaning and
reload.

> After a while, it dawned on us to extend the NFS/NIS server's monitor
> and keyboard cables downstairs to serve as a (heavily locked down)
> text-only e-mail workstation that you were welcome to use even
> without buying any food or coffee.

We'd be against that. Our servers are servers, and the only people who
can log in are staff. We don't even let the janitorial staff into the
server rooms, after one guy decided to plug the (industrial sized)
floor buffer into the UPS - borrowing the socket where a file server
was plugged in. All kinds of fun, as the Magic Smoke(tm) escaped from
the UPS, and the file server (a Sparc2 with two separate external SCSI
chains - can you say 14 hours to fsck?) was down for a while. All of
the wall outlets had stuff plugged in, and had security retainers to
keep the stuff plugged in... what was the guy to do, right? Sigh.

> I would not consider a situation such as _that_ to be a poorly
> thought out security model -- having of course used it myself. But
> what I do find annoying is people (often business executives who've
> skim-read too many Kurt Seifried columns) applying it _generally_,
> and thinking that they're somehow improving security just by
> implementing a BIOS password -- and then of course standing there
> looking stupid, unable to produce that password, when the sysadmin
> makes a service call and can't get in.

We avoid that by not handing out the root password (and having the
systems locked). If the hell-desk staff can't get in remotely to "fix"
some problem they're complaining about, someone will be by shortly to
have a look. If there is a problem that can't be solved on the spot, we
can have a spare system in within a half hour. Luckily, most of the
PHBs are dependent on their secretaries, and we only have a few of
those that need to be shot.

> As I said, I've been that sysadmin.

I'm trying to remember where that tee-shirt is.

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
Moe Trin <ibuprofin(at)painkiller(dot)example(dot)tld> wrote:
> In article <706c8$42918214$c690c3ba$11994(at)TSOFT(dot)com>, Rick
> Moen wrote:
>> However, the main protection was that the machines were set up so
>> that, if you cracked root on any of the workstations, you enjoyed
>> _less_ privilege than before, rather than more: All significant data
>> were on NFS exports from a locked room upstairs (/home, /var, /tmp).
>> We used NIS for single sign-on.
>
> I like it! If you are using a switched network, rather than coax or
> hubs, sniffing NIS becomes a lot harder.

Yes, indeed. This was back when switches were still rare, so the entire
building had only hubs. The CoffeeNet's proprietor and I both lived in
our respective apartments upstairs -- so that had the healthy
side-effect of hammering into me the habit of never trusting the
network, if I can possibly help it.

I've retained that habit to this day; on our residence LAN, none of the
hosts trusts any of the other hosts, or trusts the network: When we
deployed 802.11b wireless and friends asked how we dealt with the
security exposure, our answer was "What security exposure? Do you think
we ever trusted the _wired_ network?" (To his credit, the querent
recovered nicely, saying we still had to worry about a drive-by spammer
parking at the end of our driveway. I countered that I'm prepared to do
baseball-bat DoSes, in such cases.)

I should probably also explain, in case it wasn't apparent, that the
main reason you had _less_ access after cracking root than before is
that all the significant filesystems were NFS-exported with the
root_squash flag in effect. So, it actually sucked to be UID=0 on any
of the client workstations.

> Harder for us, as our workstations and servers generally lack
> floppies or CDs. However, we only need the system to be able to boot
> and find the network, and we can reload from that.

Sure. Pretty easy in these days of PXE support built into everything.
It would have made things simpler, at The CoffeeNet.

>> After a while, it dawned on us to extend the NFS/NIS server's
>> monitor and keyboard cables downstairs to serve as a (heavily locked
>> down) text-only e-mail workstation that you were welcome to use even
>> without buying any food or coffee.
>
> We'd be against that. Our servers are servers, and the only people
> who can log in are staff.

I wish I could remember exactly what sets of permissions and ownership
The CoffeeNet's proprietor, my friend Richard Couture, set such that he
felt it rational to assume that risk on an NFS/NIS master. I can't
quite remember. I do remember raising an eyebrow double-plus high, when
he originally mentioned the idea.

Gratuitous plug: Richard decamped in 1996 to Guadalajara, Jalisco
state, Mexico, where he established "LinuxCabal". If you're in the
area, drop in on him and say "Hi". https://www.linuxcabal.com/
(English pages at: https://www.linuxcabal.org/index.en.html)

Note that only the keyboard and monitor were public: The system case
enclosure (and all drives and other ports) remained under strong
physical protection.
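
For the curious: root squashing is the default in modern NFS servers,
so an /etc/exports along these lines (hostnames and paths invented for
illustration) is all it takes to make a client's root account map to
the unprivileged "nobody" user on the server:

    # /etc/exports on the NFS server -- root_squash (the default) maps
    # UID 0 on clients to the anonymous user; ordinary NIS-known UIDs
    # pass through unchanged.
    /home   workstation*.example.net(rw,root_squash)
    /var    workstation*.example.net(rw,root_squash)
    /tmp    workstation*.example.net(rw,root_squash)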

Debian's policy regarding security updates

Robert Glueck <rglk(at)web(dot)de> wrote:
I can't quite figure out the policy of Debian with regard to security
updates for their OS. From what I understand, it is as follows. Please
correct me if I'm wrong.

When a security vulnerability is discovered in a Linux package that's
part of the Debian distribution, Debian will attempt to prepare a fix
for it, first for stable (for all supported architectures) and perhaps
later for unstable, and announce the fixes in a DSA. If they managed to
prepare a fix for unstable, it will be posted as such and then after
two days migrate automatically into the testing distro, "after all
dependencies have been fulfilled" (?).

For example, all of the 98 vulnerabilities that Debian issued DSAs for
so far in 2005 have been fixed for stable, and the great majority have
also been fixed for unstable. By now, all packages in the latter group
would have migrated into testing. Hence, I assume that the current
versions of all packages in the latter group in the testing distro have
received the security fix. For the rest, i.e. a small fraction of the
98 packages, the DSA states that "for the unstable distribution (sid)
these problems will be fixed soon."

The situation is thus fairly clear for stable: a vulnerability is
discovered, a fix is prepared, new deb packages are made for all
supported architectures, they are tested to make sure they don't break
any dependencies, and if everything is fine, they are released to the
public.

For unstable and testing, the situation is less clear. If the Debian
developers have time, they will prepare a fix for the most recent
version of the affected package, which would be in unstable, release it
(as source only?), and after a short quarantine it would become part of
the testing distro. Are these updated packages in the testing distro
then tested with regard to breaking dependencies? Are they available as
deb packages, e.g. for the Intel x86 architecture?

With regard to the packages about which the DSA said that "for the
unstable distribution (sid) these problems will be fixed soon", does
that mean that Debian still hasn't fixed them for unstable (and
testing)? Or did they fix them and they are now in the testing distro,
but Debian simply failed to update the advisory about this fact?

If this newsgroup isn't quite the right place to post this query, which
Debian newsgroup, forum or mailing list would be the appropriate place?

Robert

---------------------------------------------------------------------------
Pertinent sections of the Debian Security FAQ:

Q: How is security handled in Debian?

A: Once the security team receives a notification of an incident, one
or more members review it and consider its impact on the stable release
of Debian (i.e. if it's vulnerable or not). If our system is
vulnerable, we work on a fix for the problem. The package maintainer is
contacted as well, if they didn't contact the security team already.
Finally, the fix is tested and new packages are prepared, which are
then compiled on all stable architectures and uploaded afterwards.
After all of that is done, an advisory is published.

Q: How is security handled for testing and unstable?

A: The short answer is: it's not. Testing and unstable are rapidly
moving targets and the security team does not have the resources needed
to properly support those. If you want to have a secure (and stable)
server you are strongly encouraged to stay with stable. However, the
security secretaries will try to fix problems in testing and unstable
after they are fixed in the stable release.

Q: How does testing get security updates?

A: Security updates will migrate into the testing distribution via
unstable. They are usually uploaded with their priority set to high,
which will reduce the quarantine time to two days. After this period,
the packages will migrate into testing automatically, given that they
are built for all architectures and their dependencies are fulfilled in
testing.

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
Robert Glueck <rglk(at)web(dot)de> wrote:
> For unstable and testing, the situation is less clear.

Indeed. (Disclaimer: I'm a Debian-using sysadmin, but speak only for
the guy I shave, and then only on a good day, following application of
sufficient caffeine.)

> If the Debian developers have time, they will prepare a fix for the
> most recent version of the affected package, which would be in
> unstable, release it...

Please note that by "the Debian developers", here, you're referring to
the individual package maintainers, not the Security Team. As I
understand it, the Security Team make no promises as a general rule
about releasing updates to fix holes in any branch other than
Debian-stable.

The "Debian developers" you refer to, above, will probably apply new
security fixes incidentally during the course of releasing into
Debian-unstable (and thus, after quarantine, into Debian-testing)
sundry upstream revisions / new versions. But they're not guaranteed to
be diligent about security _per se_: They're just 1000+ run-of-the-mill
package maintainers. So, they might apply timely security fixes, or
they might screw it up. The Security Team might backstop them if they
screw up -- or not.

So, I hear you ask, what's a body to do -- if that body is inclined to
run a branch other than Debian-stable? Here's my solution: Put lines
for both -testing and -unstable into /etc/apt/sources.list, and then
use apt's "pinning" feature to declare -testing my default branch.
Subsequently, I can request the other branch's current package at any
time by including "-t unstable" on the apt-get (or aptitude) command
line. And, I subscribe to the security alerts mailing list, so I can
skim DSAs[1] as they come out.

Why is this approach useful? Because I can normally just fetch
-testing-branch packages by default, and -- if a DSA says there's a
security problem -- can fetch the -unstable branch's new release of
that package without waiting for the quarantine period, if the DSA
suggests that would be useful. (A sketch of this setup follows, below.)

The disadvantage, such as it is, is that one has to actually _read_ the
DSA, and then be prepared to manually fetch, apply, or otherwise
implement whatever fix suffices to address the indicated problem.
Usually, the (default) -testing package suffices. Failing that, most
often the -unstable one does. Or in rare cases (can't think of any)
not, and you have to do something else. The point is that it's less
automated -- the burden's on you to pay attention -- but it's still
pretty darned automated.

> If the Debian developers have time, they will prepare a fix for the
> most recent version of the affected package, which would be in
> unstable, release it (as source only?),

No, not just source only.

> and after a short quarantine it would become part of the testing
> distro.

Yes. Here's an old FAQ on the quarantining process. (It may be
outdated: Caveat lector.) "Testing FAQ" on
https://linuxmafia.com/kb/Debian/

> Are these updated packages in the testing distro then tested with
> regard to breaking dependencies?

Yes. That's part of Debian Policy. If they aren't, it's a bug. In the
-unstable branch, and rarely in -testing, on rare occasions a new
package will want to overwrite a file already owned by a different
package. I figure this is just the price you pay for being on a
development branch, and indicates a graceless one-time transition of
the file between packages.
apt-get will halt and refuse to let newly arrived package A overwrite that file that's owned by package B, and will tell you so just before shutting down. At that point you do:
# cd /var/cache/apt/archives
# dpkg -i --force-overwrite A
...then resubmit the apt command, and you're back on your way.
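
Here, for reference, is a minimal sketch of the pinning arrangement
described above (the mirror hostname and branch names are illustrative;
adjust to taste):

    # /etc/apt/sources.list -- make both branches available
    deb http://ftp.debian.org/debian testing main contrib non-free
    deb http://ftp.debian.org/debian unstable main contrib non-free

    # /etc/apt/apt.conf -- declare -testing the default branch
    APT::Default-Release "testing";

Then a plain "apt-get install foo" pulls foo from -testing, while
"apt-get -t unstable install foo" pulls it (and any dependencies it
drags in) from -unstable.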

> Are they available as deb packages, e.g. for the Intel x86
> architecture?

Goodness gracious yes.

> With regard to the packages about which the DSA said that "for the
> unstable distribution (sid) these problems will be fixed soon", does
> that mean that Debian still hasn't fixed them for unstable (and
> testing)?  

Impliedly, that's what it means.  Of course, the person who wrote that
DSA might not have bothered to check the -unstable package carefully: 
That's not his job.  Upstream may have already done the fix, and the
package maintainer duly ground out packages containing it, without the
Security Team being fully aware.   Or not.  If you're on -unstable or
-testing, making sure you _get_ security fixes -- or shut off / remove
vulnerable packages and maybe use something else for the duration -- is
your responsibility.

As a rough heuristic, one might generally assume that, if either
upstream or the package maintainer (or both) are minimally on the job,
and the security problem is significant, then new versions will be
quickly in the pipeline.  Remember, some alleged security holes are
speculative and may not be realistic threats, some apply only for very
unlikely deployment configurations, etc.

Of course, if upstream _and_ the package maintainer are functionally
comatose, there could be a problem.  In theory, the other Debian
developers will eventually notice and compensate for this, if necessary
doing NMUs (non-maintainer uploads) of fixed packages, or other
remedies.

> If this newsgroup isn't quite the right place to post this query,
> which Debian newsgroup, forum or mailing list would be the appropriate
> place?

Try the debian-security mailing list.

[1] Debian Security Advisories.


Robert Glueck <rglk(at)web(dot)de> wrote:
... see preceding post ...

Thanks a lot, Rick, for going to the trouble of posting a lengthy reply
to my queries. It certainly went a long way to clarify for me the
rather obscure matter of Debian's handling of security updates for
unstable and testing. Also, your policy and procedure for keeping your
sarge system secure seem eminently reasonable and ought to be
applicable to variants of Debian as well.

So from what I gather, once a patched package has moved into sarge,
it's rather safe to install it to replace the older, vulnerable
package, i.e. the likelihood that dependencies will be broken is low or
nil. But for it to show up in sarge will take at least two days after
the DSA is posted, and in many cases may take much longer. E.g. in one
of the articles on the Linuxmafia website, someone referred to 1700
packages queued up in sid at one time because dependencies hadn't been
resolved in some of them on which they were all cross-dependent, thus
holding up the movement of the entire lot into sarge.

So if the security risk is high and one doesn't want to wait for the
patch to appear in sarge, one may have to install the patched sid
package. Is there any way of assessing the likelihood of breaking
packages when one installs such a freshly patched package from
unstable? In particular, if apt-get warns you about potentially
breaking packages and you force-overwrite the existing package anyway
and cause damage, can the damage be reversed and the system be restored
to its previous state? How often has it happened to you or others you
know that you installed a security fix from sid and caused major
damage? Has it ever been necessary to reinstall Debian from scratch
after an untested sid security update busted your system?

Further, is there any forum in which folks post their experiences with
installing specific sid security updates? I see many references to or
comments on DSAs in linux.debian.security, but I'm not sure how many of
these are indeed user reports about success or failure in installing
sid security updates.

Finally, where can I find an up-to-date general assessment of the
security status of a single-user home desktop system that runs Debian
sarge and that's used in a typical fashion, i.e. principally for
Internet access, and that's also moderately well defended (broadband
connection with NAT router, iptables/netfilter firewall with pretty
strict reject rules, no services running, good passwords, fairly good
awareness of Internet security and privacy risks on the part of the
user, i.e. paranoia above average)? And is there a clearinghouse
somewhere that would guide this mythical non-pro non-sysadmin
security-conscious home user of Debian in this matter, i.e. alerting
him to DSAs that apply to his system, along with explicating the
specific nature and degree of risk?

Many thanks for your help!

Cheers, Robert

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
Robert Glueck <rglk(at)web.de> wrote:
> So from what I gather, once a patched package has moved into sarge,
> it's rather safe to install it to replace the older, vulnerable
> package, i.e. the likelihood that dependencies will be broken is low
> or nil. But for it to show up in sarge will take at least two days
> after the DSA is posted, and in many cases may take much longer.
> E.g. in one of the articles on the Linuxmafia website, someone
> referred to 1700 packages queued up in sid at one time because
> dependencies hadn't been resolved in some of them on which they were
> all cross-dependent, thus holding up the movement of the entire lot
> into sarge.

Yes, exactly. That was immediately preceding the release of 3.0/woody
as the new Debian-stable branch, by the way -- and I'm pretty sure a
new and possibly-problematic libc6 package in -unstable was the one in
question.

The possibility of packages not currently being installable in -testing
-- because new versions of packages needed to satisfy dependencies are
still held up in quarantine -- is part of the reason I add -unstable
sources to my /etc/apt/sources.list and specify -testing as default:
If getting a new version of some package seems really important, and
that sort of dependency gotcha seems to apply, then adding "-t
unstable" will generally fix that. (That option causes not only the
specified package to be fetched from the named branch, but also any
others required for dependency reasons.) That seems, a priori, most
likely to happen for several notorious dependency hairballs: GNOME,
KDE, Mozilla and related browsers.

> So if the security risk is high and one doesn't want to wait for the
> patch to appear in sarge, one may have to install the patched sid
> package. Is there any way of assessing the likelihood of breaking
> packages when one installs such a freshly patched package from
> unstable?

Hmm. The methods that come immediately to mind:

o Do a spot-check on the debian-devel mailing list.
o Do a spot-check on the #debian IRC channel.

> In particular, if apt-get warns you about potentially breaking
> packages and you force-overwrite the existing package anyway and
> cause damage, can the damage be reversed and the system be restored
> to its previous state?

Just to be ultra-clear on this: I wasn't talking about a warning of
"potentially breaking packages", exactly. It's just that apt-get is
ultra-cautious and will refuse to let any newly fetched package
overwrite any file "owned" by any existing package. I've never seen any
situation where the explanation wasn't that the file in question was
merely transitioning from package A to package B. And thus I've never
seen breakage result from that. But you could certainly just reinstall
A if putting in B seems to create problems.

> How often has it happened to you or others you know that you
> installed a security fix from sid and caused major damage?

Personally, not at all. But if you have concerns about that, you should
ask more broadly, perhaps on the debian-user or debian-security mailing
list.

> Has it ever been necessary to reinstall Debian from scratch after an
> untested sid security update busted your system?

Nope. I should mention that I was very skeptical, when I first deployed
-testing on a couple of non-critical boxes. It's proven its worth over
time. (Note that I'm not a GNOME or KDE guy, and am a very long-time
Linuxer.)

> Further, is there any forum in which folks post their experiences
> with installing specific sid security updates?

See above.

> Finally, where can I find an up-to-date general assessment of the
> security status of a single-user home desktop system that runs Debian
> sarge and that's used in a typical fashion, i.e. principally for
> Internet access, and that's also moderately well defended (broadband
> connection with NAT router, iptables/netfilter firewall with pretty
> strict reject rules, no services running, good passwords, fairly good
> awareness of Internet security and privacy risks on the part of the
> user, i.e. paranoia above average)?

I doubt it. If you bother the developers, they'll hit you with a
standard line, that (translated) means "Please don't bother us":

1. If you want automatic Security Team coverage, run Debian-stable.
2. If you decide to run -unstable, please don't complain if it breaks.
   You were warned, and get to keep both pieces.
3. If you decide to run -testing, don't complain about any shortfalls
   in Security Team coverage, because the Debian Security Team FAQ
   clearly states that they don't promise any. And don't complain
   about possible dependency snarls (temporarily uninstallable
   packages) because of differential rates by which packages clear
   quarantine: Again, you were warned, and that's the way it works.

It should be noted that this situation has created an ecological niche
for such things as Ubuntu / Kubuntu, which you might consider to meet
your needs exactly.

> And is there a clearinghouse somewhere that would guide this mythical
> non-pro non-sysadmin security-conscious home user of Debian in this
> matter, i.e. alerting him to DSAs that apply to his system, along
> with explicating the specific nature and degree of risk?

Not that I know of -- but, honestly, skim-reading DSAs really isn't
very difficult or time-consuming. Really. And do have a look at Ubuntu
(cutting-edge GNOME-based desktop system, forking off a copy of
Debian-unstable every six months) and Kubuntu (same system, except with
KDE). You might like 'em. I run Ubuntu on my G3 iBook -- except that I
de-GNOMEified the thing, pronto.

Robert Glueck <rglk(at)web(dot)de> wrote:
Thanks a lot again, Rick, for all your effort to dispel my confusion
about Debian's security updates for unstable/testing. I've got a pretty
clear idea now about how this is being handled by Debian. And I've got
a straightforward procedure to follow for any sarge packages that I
wish to update with security patches. It turns out the whole affair
isn't all that complicated and hazardous. If one proceeds carefully and
knows what one is doing, it seems nothing can really go seriously wrong
and any damage conceivably caused can be readily reversed.

I'm getting a sense that Debian is a well-crafted distribution. It's
been very stable on my system for more than a year of running it, more
so than Mandrakelinux v.9.1 and 10.0, which I was running for about 6
months before I switched to Debian. Although MDK had a lot going for
it, it was much more fickle than Debian.

I'd downloaded and checked out the live CDs of Ubuntu and Kubuntu, and
I finally got the new versions 5.04 working properly on my system. They
do seem to work well and have a nice polish. With the financial muscle
of a multimillionaire supporting a very energetic team of developers,
and with their large and enthusiastic user groups, these two may well
become the best supported cutting-edge Debian distributions. I'll have
to check out how the Ubuntu and Kubuntu teams are handling security
vulnerabilities.

Thanks again.

Robert

Y2k type problem for linux!!!! how true????

"sree" <sreeramkoneru(at)gmail(dot)com> wrote:
Hi, my fellow Linux lovers,

I received an email from a friend of mine regarding the date problem
that may arise with Linux OS. I am giving the full mail below. I am
just wondering if anybody here is aware of this problem? Is it really
true?

"Tuesday, January 19 2038. Time: 03:14:07 GMT. If Linux programmers get
nightmares, it's about this date and time. Immediately after that
second is crossed, current computer systems running on Linux will grind
to a halt or go into a loop. This will trip up a lot of databases. No,
this is not another hoax raised by some anti-Linux lobby. It is Linux's
own Y2K nightmare, says Businessworld.

If you ask what this 2038 bug is, you will have to put up with some
technical argot. The bug has its origins in the way the C language,
which has been used to write Linux, calculates time. C uses the
'time_t' data type to represent dates and times. ('time_t' is an
integer that counts the number of seconds since 12.00 a.m. GMT, January
1 1970.) This data is stored in 32 bits, or units of memory. The first
of these bits is for the positive or negative sign, and the remaining
31 are used to store the number. The highest number that these 31 bits
can store works out to 2147483647. Calculated from the start of January
1 1970, this number would represent the 2038 time and date given at the
top. Problems would arise when the system times of computers running on
Linux reach this number. They can't go any further, and their value
actually would change to -2147483648, which translates to December 13
1901! That will lead many programs to return errors or crash
altogether.

It's more damaging than the Y2K bug. That's because Y2K mostly involved
higher-level applications such as credit card payment and inventory
management. The 2038 bug, on the other hand, affects the basic
time-keeping function. "I would guess the biggest issue would be in the
embedded field, where software isn't changed all that often, if at all.
Process controllers, routers, mobile phones, game consoles, telecom
switches and the like would be the biggest victims," says Raju Mathur,
GNU and Linux consultant and president of the Linux Delhi Users Group.
He, however, adds that at the rate at which we are changing technology,
most systems are unlikely to use 32-bit processing by the time we get
to 2038.

But what about the present? Many applications running on Linux could
soon be making calculations for dates 30 years away -- say, for
mortgage and insurance calculations -- and could start giving out error
messages well before D-day. The problem could be widespread because
more and more corporates today are migrating to Linux because of the
better security it offers. "The problem is not on the radar of most
people, except the techies," says Charles Assissi, editor, Chip
magazine.

How can the problem be sorted out? Modern Linux programs could use
64-bit or longer time_t data storage to overcome the problem. As for
the existing systems, the way the C language stores time_t data could
be changed and then all the programs could be recompiled. All this is
easier said than done. "There must be millions, if not billions of
lines of C code floating around that use the time_t value. Locating
them, changing them, managing programs for which source isn't
available, updating embedded systems, redeploying, is, in my opinion,
an impossible task," says Mathur."

thanx
sree

Larry I Smith <larryXiXsmith(at)verizon(dot)net> wrote:
sree wrote:
> [time_t rollover:]

This affects all 32 bit OSes, including MS-Windows. I don't think I'll
be sleepless over this...

Larry

"R.F. Pels" <spamtrap(at)tiscali(dot)nl> wrote:
sree wrote:
> I received an email from a friend of mine regarding the date problem
> that may arise with Linux OS. I am giving the full mail below. I am
> just wondering if anybody here is aware of this problem? Is it really
> true?

Ah. A journalist without a story! I've read the article. And it's
hogwash. As I said here
(https://braincore.blogspot.com/2005/05/y2038.html):

<quote>
Hello!!! Reality check!!! Software does not have an average lifetime
expectancy of 33 years. Neither do operating systems. Or hardware, for
that matter. Databases already use different storage formats for dates.
Last but not least, time_t is a long int. Guess what that means on a
64-bit architecture... As Bob Robertson put it, time_t on 64-bit
architectures will 'cover the heat death of the universe'...
</quote>

'nuff said.

Andrew <yogig(at)nospam.hotmail(dot)com> wrote:
mjt wrote:
> ("sree" <sreeramkoneru(at)gmail(dot)com>) scribbled:
>> I received an email from a friend of mine regarding the date problem
>> that may arise with Linux OS. I am giving the full mail below. I am
>> just wondering if anybody here is aware of this problem? Is it
>> really true?
>
> ... is this all you're worried about? by the time
> we get there, it'll be fixed.

Yeah. That is true. I think it's 2036, not 2038, though. I wrote the
test plan for Y2K that the whole Windows Team used back in early '99. I
remember sitting in my office on New Year's Eve at 3pm, waiting for New
Zealand to flip over to Y2K. The whole team was on call, in case issues
arose. Around 8pm, we started in on the champagne! No issues.
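
Pels's point about 64-bit architectures is easy to check for yourself;
a minimal sketch, in C:

#include <stdio.h>
#include <time.h>

int main (void)
{
    /* On typical 32-bit Linux systems time_t is 4 bytes; on 64-bit
       ones it is 8, pushing the rollover billions of years out. */
    printf ("time_t is %u bytes here\n", (unsigned) sizeof (time_t));
    return 0;
}

Compile and run it with "cc x.c" and "./a.out", just as Rick does with
his example below.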

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
Andrew <yogig(at)nospam.hotmail(dot)com> wrote:
> Yeah. That is true. I think it's 2036, not 2038, though.

Er... To recap: The maximum positive value of time_t in C/C++ as a
32-bit signed int is 7FFFFFFF hex, or 2^31 - 1 (the remaining bit being
reserved for the sign). By convention, the starting time (time zero) is
the beginning of January 1, 1970, UTC (aka GMT).

Create this file as "x.c":
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <time.h>

int main (int argc, char **argv)
{
    time_t t;

    t = (time_t) (0x7FFFFFFF);
    printf ("%d, %s", (int) t, asctime (gmtime (&t)));
    return 0;
}
Now, do "cc x.c" followed by "./a.out".  You'll see:
2147483647, Tue Jan 19 03:14:07 2038

(That's the decimal number of seconds elaped, followed by the exact 
date / timestamp in ASCII.)

<exasperated>
Please note that this is _not_ a "Linux problem", but rather a design
limitation of any and all old-style 32-bit C and C++ time-handling code
on _any_ OS platform.  
</exasperated>
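
[Editor's note: the flip side is just as easy to demonstrate. On a platform where time_t is 64 bits wide, changing one line of Rick's x.c steps right past the old boundary -- a minimal variation, assuming a 64-bit time_t:]

            t = (time_t) 0x7FFFFFFF + 1;    /* one second past the 32-bit limit */
            printf ("%lld, %s", (long long) t, asctime (gmtime (&t)));

...which prints "2147483648, Tue Jan 19 03:14:08 2038".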

Suse 9.X and the SOBER worm

"chrisv(at)texas(dot)net" <chrisv(at)texas(dot)net> wrote:
How does Suse protect against it in email attachments? Is AntiVir enough protection? Is Linux inherently protected from this type of virus? Please inform... thanks

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
[Followups have been set to comp.os.linux.security.]

chrisv(at)texas(dot)net <chrisv(at)texas(dot)net> wrote:
> How does Suse protect against it in email attachments? Is AntiVir
> enough protection? Is Linux inherently protected from this type of
> virus? Please inform..

"Sober" is a Microsoft Visual BASIC executable that arrives attached to an e-mail. The payload has a .zip or .exe filename extension. For activation, it relies on recipients having an environment supporting such executables, and users stupid enough to execute binary program attachments received from nobody in particular.

If activated on a Win32-supporting machine, it forks off an SMTP engine process to propagate further, using e-mail addresses scanned from certain sorts of files on local disk volumes; makes some changes to the local Win32 registry (if this is an MS-Windows machine); displays some sort of lying message to the local user; and commences mailing itself to every address it can find. I have no doubt that some variants differ slightly from that description, but the details really don't matter.

The minor point, of two, to note is that it's an MS-Windows executable, and thus can natively run -- assuming someone's stupid enough to run it -- on MS-Windows machines and OSes with some sort of close compatibility.

The _major_ point to note is the one about requiring a recipient stupid enough to go out of his way to run the executable. Let's assume for the sake of discussion that a Linux system emulates MS-Windows's structures closely enough that it _could_ support running Sober. OK, fine: now consider the other part -- what's required to get the user to run it.

There are 123 e-mail programs that run on Linux.[1] Not a single one of them will run a received attached executable for no better reason than the user "clicking on" it. The standard Unix default treatment is that you could save that file, e.g., to /tmp, and then, if you _really_ thought it wise to execute it, could do "chmod u+x /tmp/savedprogram" or some equivalent, and only _then_ run it. The chmod command is necessary because, by universal convention embedded in the system call used, the file will _not_ get saved with the executable bit set. Thus, the user has to use "chmod" (or equivalent) to enable it manually.

The culture and structure of Linux (which reinforce each other) are such that performing so reckless an action is made non-routine: the user has to go out of his way to make it possible. The intent, in part, is to make the user wary when the system is suddenly, somewhat, in the way of his doing it. It gives him an opportunity to stop and think: "Wait, do I want to do this? Is this in my interest? Or am I laboriously taking aim and shooting at my own foot?" Which is a Good Thing.

It is also inherent in Linux (as in Unixes generally) that the system doesn't prevent you from doing stupid things, because that would also prevent you from doing clever things. So, ultimately, if the user is determined to blow up his system, the system won't stop him, and will barely slow him down. But at that point, if he's that reckless, there are about a thousand other ways he's more likely to blow up his system first.

So, in at least two separate senses, Sober and kin are no threat at all.
If you're passing mail _through_ a Linux box, and want to artificially protect vulnerable downstream MS-Windows boxes where that mail will be read -- and where (with good reason) you probably don't trust the users, the mail-reading software, or the user culture / system architecture -- then you can run Linux software to detect and strip the MS-Windows viruses. If you don't have vulnerable downstream systems, don't bother.

Big-picture essays about Linux and "virus threats" (long):
https://linuxmafia.com/~rick/faq/index.php?page=virus

[1] See "MUAs" on https://linuxmafia.com/kb/Mail/
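
[Editor's note: the executable-bit behavior Rick describes is easy to see for yourself. A hypothetical session -- the file name, size, owner, and exact shell messages here are illustrative only:]

$ ls -l /tmp/savedprogram
-rw-r--r--  1 ben  users  34816 Jun  1 12:00 /tmp/savedprogram
$ /tmp/savedprogram
bash: /tmp/savedprogram: Permission denied
$ chmod u+x /tmp/savedprogram    # the deliberate, manual step
$ /tmp/savedprogram              # only now will the shell run it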

peter <apvx95(at)dsl(dot)pipex(dot)com> wrote:
Rick Moen wrote:
<snip />
> "Sober" is a Microsoft Visual BASIC executable that arrives attached
> to an e-mail. The payload has a .zip or .exe filename extension. For
> activation, it relies on recipients having an environment supporting
> such executables, and users stupid enough to execute binary program
> attachments received from nobody in particular.
<snip />
> The _major_ point to note is the one about requiring a recipient stupid
> enough to go out of his way to run the executable. Let's assume for the
> sake of discussion that a Linux system emulates MS-Windows's structures
> closely enough that it _could_ support running Sober. OK, fine: now
> consider the other part -- what's required to get the user to run it.
>
> There are 123 e-mail programs that run on Linux.[1] Not a single one of
> them will run a received attached executable for no better reason than
> the user "clicking on" it. The standard Unix default treatment is that
> you could save that file, e.g., to /tmp, and then, if you _really_
> thought it wise to execute it, could do "chmod u+x /tmp/savedprogram" or
> some equivalent, and only _then_ run it. The chmod command is necessary
> because, by universal convention embedded in the system call used, the
> file will _not_ get saved with the executable bit set. Thus, the user
> has to use "chmod" (or equivalent) to enable it manually.
<snip />

Very interesting, Rick.

Just out of interest -- and not in relation to this program in particular -- would, say, WINE run an attachment if you saved it to the filesystem and then clicked on it to execute it, in Konq, for example? If this were possible, wouldn't it execute even without execute permissions set (because WINE is the executable)? Would it be possible for, say, Mono to do the same thing?

Just thinking out loud (and perhaps not too logically) about possible vectors.

Cheers,
Peter

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
peter <apvx95(at)dsl(dot)pipex(dot)com> wrote:
> Very interesting, Rick.
>
> Just out of interest -- and not in relation to this program in
> particular -- would, say, WINE run an attachment if you saved it to the
> filesystem and then clicked on it to execute it, in Konq, for example?
> If this were possible, wouldn't it execute even without execute
> permissions set (because WINE is the executable)? Would it be possible
> for, say, Mono to do the same thing?
>
> Just thinking out loud (and perhaps not too logically) about possible
> vectors.

Excellent questions.

I'm going to be smart, for a change, and not speculate on matters I know nothing about, such as WINE. ;-> (That is, I've never used any of the Win32 emulation environments, so I really can't say.) Googling on

    "executable bit" wine attachment virus

...would seem likely to be promising, but I'm not finding much that's relevant.

Also: even if you do find written claims about this subject on the Net, please take care to try to replicate them before believing what the author says. There's something about this topic that seems to draw out the cranks, and those who shoot off their mouths first and verify later. I found this to be true even on the Linux Weekly News talkbacks, which otherwise tend to be generally clueful.

(Konq would not do what you mention by itself, by the way.)

Michael Heiming <michael+USENET(at)www(dot)heiming(dot)de> wrote:
In comp.os.linux.security, Rick Moen <rick(at)linuxmafia(dot)com> wrote:
> peter <apvx95(at)dsl(dot)pipex(dot)com> wrote:
>> Just out of interest -- and not in relation to this program in
>> particular -- would, say, WINE run an attachment if you saved it to the
>> filesystem and then clicked on it to execute it, in Konq, for example?
>
> Excellent questions.
>
> I'm going to be smart, for a change, and not speculate on matters I know
> nothing about, such as WINE. ;-> (That is, I've never used any of the
> Win32 emulation environments, so I really can't say.)

IIRC, I did try it out for the fun of it ages ago; the results, however, were as disappointing as the ones this fellow encountered:

https://os.newsforge.com/article.pl?sid=05/01/25/1430222&from=rss

It just doesn't work out, even with a recent WINE -- for sure, an area where Linux really lacks. Even if you can get IE up under WINE, it doesn't really help you catch all the great mal-/spyware you'd get on 'doze soon after connecting to the Internet. ;(

Alas, it looks as if we'll get nowhere until we have "great" stuff like IE and Outcrap in native Linux versions -- and even then, without ActiveX and similar helpers, it's likely we won't ever enjoy collecting malware until the system groans under the immense load...

Sorry, but for now we'll have to content ourselves with things like the BSOD screensaver, since this damn Linux won't even crash. ;)

Rick Moen <rick(at)linuxmafia(dot)com> wrote:
Michael Heiming <michael+USENET(at)www.heiming(dot)de> wrote:
>> I'm going to be smart, for a change, and not speculate on matters I know
>> nothing about, such as WINE. ;-> (That is, I've never used any of the
>> Win32 emulation environments, so I really can't say.)
>
> IIRC, I did try it out for the fun of it ages ago; the results, however,
> were as disappointing as the ones this fellow encountered:
>
> https://os.newsforge.com/article.pl?sid=05/01/25/1430222&from=rss

I loved that piece. To my knowledge, I'm not related to the author (Matt Moen), but I sent him fan-mail and welcomed him to the clan anyway. ;->

This page edited and maintained by the Editors of Linux Gazette

 

Copyright © 2005. Released under the Open Publication license unless otherwise noted in the body of the article. Linux Gazette is not produced, sponsored, or endorsed by its prior host, SSC, Inc.

Published in Issue 115 of Linux Gazette, June 2005

next -->