Hello, everyone, and welcome to issue 60... that means the whole 'zine has been here for 5 years now? That's just amazing to me. In fact I'm coming up on 3 years as the HTML wizardess for TAG in only a few months. Y2K is almost over and all the usual questions are still here. The only thing different is that politics grow more boneheaded every year. I don't care; I have my own plans for the season - what a fun Xmas this is going to be!
This seems to be the season that I get to help my friends who are only now getting into Linux (computing at all, in one case) get themselves all tucked in and snug in their distros. With any luck enough of you out there are doing the same, and we'll see a new blush on some HOWTOs in the LDP project which have gotten a bit dusty. (If you might want to work on that, see the thread "newbie installation question" below) For one of these pals, I'm not even sure which distro we are going to end up using... only that she can't bear to see a poor old 486 trapped under the yoke of Redmond any longer... (For a dissertation on selecting distros, see the thread "Best Linux Distro for a Newbie" where I recycled more than my fair share of electrons babbling about it.)
We just got a sweet little toy for ourselves here in the Starshine network, specifically, an NIC (New Internet Computer) from ThinkNIC.com. It comes with a CD, Linux based, and you just plug it in (power, modem or ether, it comes with speakers and keyboard), add a monitor and off you go. Errr, it didn't like our really old VGA monitor. I wonder just how long it's been since any of our machines have used that monitor for graphics at all... um, where was I? Oh yeah. It took a little while to find ssh and VNC in there, but it's a pretty useful setup. Nonetheless, we're going to see if we can run any other CD based distros on it too. This will make for hours of fun.
Now I suppose it's possible that you would be thinking of candied yams and duck dinners and the large fellow with the sack of toys about now. In our household it's more likely to be Bootable Business Card stocking stuffers (er, after we shave the contents down a bit - the RW business cards Jim got me are a bit small). I'm sure you can find a dozen places selling 'em if you go to Google with the search keys "business card" and "CDRW". Depending on the nerdiness factor in your household, the CD-RWs might make a great stuffer even if left blank.
As for the meal of the season, since Jim and I are heading out to LISA 2000 in New Orleans, the annual sysadmin's conference, we are going to enjoy some jazz and jambalaya. We'll also have a chance to hear Illiad as a keynote speaker. BOFH meets Dust Puppy? Oh my. This is gonna be fun...
Wherever the season takes you, and whatever it happens to bring, remember
we're all here to make Linux a little more fun!
From Caldera
As a followup to the LDAP discussions that have been answered here:
Caldera Systems' Linux management solution (formerly code-named "Cosmos") has been named Caldera Volution. The product, currently in open beta, is available for download from Caldera's Web site at
https://www.calderasystems.com/beta
More details can be found in our News Bytes (Distribution section).
Answers by: Dmitriy M. Labutin, César A. K. Grossmann, Niek Rijnbout
Hi,
You can dump the NT event log into a flat file with the dumpel utility (it comes with the Windows NT Resource Kit).
Cheers
[Cesar] To do this I must "be" at the NT computer - not something I can schedule from a crontab on the Linux box. I was thinking of some utility I could use to dump the log remotely, from the Linux box, where I have some freedom and tools to do nasty things such as reporting unusual activity from the users...
- [Niek] See
- https://www.eventreporter.com/en
...for a $25 application to send the NT log to a syslog host.
Regards
The app Niek mentions also appears to deal well with Win2000 and offers email as well as syslog transfer of the events. -- Heather
From Juan Pryor on Tue, 7 Nov 2000
Answered by: Heather Stern
I'm pretty new to Linux and I was wondering if there is a way in which I
can have two OSes working at the same time. I mean, I've had some trouble with the people at my house since they want to go back to Win98 and I only have one PC. Is there any win98 program that reboots and starts in Linux and then when the computer reboots it starts in win98 again? Any help will do.
Juan,
It's very common for Linux users to have their systems set up as dual-boot, sometimes up in MSwin, sometimes running Linux. Some distributions even try to make it easy to turn a box which is completely Windows into a half and half setup (or other divisions as you like).
There is a DOS program named LOADLIN.EXE which can easily load up a Linux kernel kept as a file in the MSwin filesystem somewhere - my friends that do this like to keep their Linux parts under c:\linux so they can find them easily. Loadlin is commonly found in a tools directory on major distro CDs. Of course, you do have to let Windows know that Loadlin needs full CPU control. In that sense, it's no different than setting up a PIF for some really cool DOS game that takes over the box, screen and all. Anyways, there's even a nice GUI available to help you configure it, called Winux, which you can get at https://www.linux-france.org/prj/winux/English ... which, I'm pleased to add, comes in several languages.
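For instance, once the kernel is copied somewhere under c:\linux and you know which partition holds your Linux root, the Loadlin command line is just a sketch like this (the file name and root partition here are examples - substitute your own):

  loadlin c:\linux\vmlinuz root=/dev/hda2 ro

You can tuck that into a batch file or a PIF so the family only has to double-click it.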
It's also possible to setup LILO so that it always prefers to boot MSwin (the option is often called 'dos') instead of Linux... in fact, I recommend this too, unless you want to not be able to boot Linux from anything but a floppy if MSwin should happen to mangle its drive space too far.
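If you go that route, the relevant part of /etc/lilo.conf looks something like this - a sketch, assuming MSwin on /dev/hda1 and Linux on /dev/hda2 (remember to run lilo afterwards to make it take):

  default=dos
  prompt
  timeout=100
  image=/boot/vmlinuz
    label=linux
    root=/dev/hda2
    read-only
  other=/dev/hda1
    label=dos

With "default=dos" the box comes up in Windows unless somebody asks for linux at the prompt.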
Now this is kind of different from "two OSes working at the same time"... It is possible to run VMware, and have a couple of different setups running together, but doing this might be rather confusing to family who are not used to anything but Windows. They might accidentally hit some key combination that switches to the other environment that's running, and think they broke something even if it's all running perfectly.
To finish off - it's also possible to find really friendly boot managers; I've been looking over one named GAG (don't laugh, it's just initials for Spanish words meaning "Graphical Boot Manager") that looks like it might be fun, at https://www.rastersoft.com/gageng.htm. It was just updated, too. Anyways, it can boot up to 9 different choices and has nice icons to use for a lot of different OSs you may have on a system. Unlike LILO and some other boot managers that only replace the DOS "master boot record", though, it takes over a fair chunk of track 0.
From Michael Lauzon to tag on Tue, 14 Nov 2000
Answers by: Dan Wilder, Ben Okopnik, Heather Stern
I am wondering, what is the best Linux distro for a newbie to learn on (I have been told never to ask this question or it would start a flame war; I of course don't care)...so in your opinion: what is the best Linux distro for a newbie?
--- Michael Lauzon
[Dan] <troll>
Slackware. Because by the time you really get it installed and running, you know a lot more about what's under Linux's hood than with any other common distribution!
</troll>
--
Dan Wilder
Darn those trolls anyway. They're eating the dahlias now!
[Ben] <Grumble> Sure, you don't care; we're the ones that need the asbestos raincoats!
[Heather] Well yeah, but I usually put out the flame with a Halon canister labelled "waaay too much information." It does make me popular in the mailing lists though.
[Ben] Spoilsport.
[Ben] To follow on in the spirit of Dan's contribution:
<Great Big Troll With Heavy Steel-Toed Boots>
Debian, of course. Not only do you get to learn all the Deep Wizardry, you get all the power tools and a super-easy package installer - just tell it which archive server you want to use, and it installs everything you want!
</GBT>
(The Linux Gazette - your best resource for Linux fun, info, and polite flame wars...
[Heather] Of course it helps if you know which archive server you want to use, or that the way to tell it so is to add lines to /etc/apt/sources.list ...
[Ben] Oooh, are you in for a pleasant surprise! (I was...) These days, "apt" (via dselect) asks you very politely which server you want to use, and handles the "sources.list" on its own. I still wish they'd let you append sources rather than having to rewrite the entire list (that's where knowing about "/etc/apt" comes in handy), but the whole "dselect" interface is pretty slick nowadays. It even allows you to specify CD-based (i.e., split) sources; I'm actually in the process of setting up Debian 2.2 right now, and my sources are a CD-ROM and DVD drive - on another one of my machines - and an FTP server for the "non-free" stuff. Being the type of guy who likes to read all the docs and play with the new toys, I used "tasksel" for the original selection, "dselect" for the gross uninstallation of all the extraneous stuff, and "apt-get" for all subsequent install stuff. It's worked flawlessly.
[Heather] I did write a big note on debian-laptops a while back about installing Debian by skipping the installer, but I think I'll let my notes about the handful of debian based distros stand.
[Ben] I agree with your evaluation. It's one of the things I really like about Debian; I was able to throw an install onto a 40MB (!) HD on a junk machine which I then set up as a PostScript "server", thus saving the company untold $$$s in new PS-capable printers.
[Heather] There is rpmfind to attempt to make rpm stuff more fun to install, but it's still a young package. I think the K guys have the right idea, writing a front end that deals with more than one package type.
[Ben] Yep; "alien" in Debian works well, but I remember it being a "Catch-22" nightmare to get it going in RedHat. I've got package installation (whatever flavor) down to a science at this point, but it could be made easier.
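For reference, a minimal /etc/apt/sources.list for Debian 2.2 ("potato") might read something like this - the mirror name here is just an example, so pick one near you:

  deb http://http.us.debian.org/debian potato main contrib non-free
  deb http://security.debian.org potato/updates main

...then an "apt-get update" and you're off.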
[Heather] It's really a matter of requirements analysis. Most of the flame wars arise from people stating their own preferences, and fussing over those instead of trying to figure out which would work best for you.
"Learning Linux" is a big definition: some people mean learning the unixlike features that they've never encountered before; some people mean learning to use the same things in Linux that they already know how to use in other systems. These are, to say the least, rather opposite needs...
If you want to goof off learning Linux but are very afraid of touching your hard drive's data, there are a few distributions designed to run off of a CD, or out of RAM. One pretty good one that runs directly from a RAMdisk is Tom's rootboot (https://www.toms.net/rb). While a lot of people use it merely as a rescue disk, Tom himself lives in it day to day. But, it's not graphical. And, it's libc5 based, so it's a little strange to get software for. It uses a different shell than most major distributions, but the same kernels. It's not exactly aimed at "just surfing the web and doing email" which I often hear newbies say that they'd be happy with. Linux Weekly News (https://www.lwn.net) has recently sorted their distributions, so you could find a CD based distro that meets these more mainstream desires fairly easily there.
If you want to learn about things from their raw parts, the way some kids like to learn about cars by putting one together themselves, there is a Linux From Scratch HOWTO stored at the LDP site (https://www.linuxdoc.org).
If the newbie's native language isn't English, he or she probably wants a localized distro, that is, one that installs and whose menus, etc. are in their language. (I'm guessing that such a newbie wouldn't be you - your .sig links were to purely English websites.) You can find a bunch of those at LWN too, but you'll have to go looking at home pages to be sure what languages are covered.
Otherwise, you probably want a "normal" linux, in other words, a major distro. Newbies generally want to be able to ask their local gurus for help, rather than wonder if some random wizard on the internet will ever answer them. If your local techie pals have a favorite, try that - they'll be better at helping you with it than stuff they don't know as well. I could be wrong of course - some techie folks prefer to learn stuff the same time you do, and you can get a great sense of energy by sometimes figuring out a thing here and there faster than they do. But by and large, gaining from someone else's experience will make things smoother, a smooth start will generally be more fun, and enjoying your first experiences will make you more willing to experiment later.
If you like to learn from a book, there are a fair number of books that are about a specific distro, and have a CD of that distro in the back. These are good, but not usually aimed at people who want to dual boot. Just so you know.
The big commercial brands usually try to push that they're an easy install. What they don't push so much is their particular specialty, the market they are aiming for. I've heard good things about Corel (esp. for dual boot plans), I've seen good things with both SuSE and Storm. Mandrake and Debian have both been a little weird to install - not too bad, but I'm experienced, and enjoy wandering around reading the little notes before doing things ... if you want the computer to be bright enough to do it all by itself, these might not be for you. (Note, my Mandrake experience is a version old. And they compile everything Pentium optimized, so if things go smoothly, it will usually be a lot faster system.) Several of the brands are now pushing a "graphical installer" which is supposed to be even easier. However, if you have a really bleeding edge video card, it would also make the distro a real pain to install. Storm and RedHat favor graphical over non-graphical installs. LibraNet has a nongraphical install that still gives Debian a somewhat friendlier setup. I hear that Slackware is fairly friendly to people who like to compile their own software, and I never hear anything about their installer, so maybe it is really incredibly easy. Or maybe my friends don't want to tell me about their install woes once they get going, I dunno.
If RedHat (6.2, I have to say I haven't tried 7 yet) is where you're going, and their graphical install is a bummer for you, use their "expert" mode. Their "text" mode is almost useless, and they really do have lots of help in expert mode, so it's not as bad as you would think.
In any case, I would recommend backing up your current system if there's anything on it you want to keep, not because the installs are hard - they're nothing like the days before the 1.0 kernel - but because this is the most likely time to really mangle something, and you'll just kick yourself if you need a backup after all and don't have one.
The next thing to consider is your philosophy. Do you want to be a minimalist, only adding stuff that makes sense to you (or that you've heard of), and then add more later? If so, you want a distro that makes it really easy to add more later. Debian and its derivatives are excellent for this - that includes Corel, Libranet, and Storm. SuSE's YaST also does pretty well for this, but they don't update as often... on the other hand, they don't get burned at the bleeding edge a lot, either. If most of the stuff you'll add later is likely to be commercial, RedHat or a derivative like Mandrake might be better - lots of companies ship RedHat compatible rpm's first, and get around to the other distros later, if at all.
If you have a scrap machine to play on, try several distros, one at a time; most of them are available as inexpensive eval disks from the online stores.
If you'd rather install the kitchen sink and take things back out later, any of the "power pack" type stuff, 3 CDs or more in the set, might work for you. Most of these are still based on major distros anyway, there's just a lot more stuff listed, and you swap a couple of CDs in. Umm, the first things you'll probably end up deleting are the packages to support languages you don't use...
A minimal but still graphical install should fit in a gigabyte or so - might want 2. A more thorough setup should go on 6 Gb of disk or so (you can, of course, have more if you like). It's possible to have usable setups in 300 to 500 Mb, but tricky... so I wouldn't recommend that a newbie impose such restrictions on himself.
To summarize, decide how much disk you want to use (if any!) and whether you want to go for a minimal, a mostly-normal, or a full-to-the-brim environment. Consider what sort of help you're going to depend on, and that might make your decision for you. But at the end, strive to have fun.
[Ben] Heather, I have to say that this is about the most comprehensive answer to the "WITBLD" question yet, one that looks at a number of the different sides of it; color me impressed.
WITBLD = "What Is The Best Linux Distro"
[Heather] The key thing here is that there are several aspects of a system. When one is "easiest" for you it doesn't mean all the others are. So, you have to decide what parts you care the most about making easy, and what parts you consider worth some effort for the experience you'll get. Once you know that, you are less of a newbie already. I hope my huge note helped, anyway.
Well, I bought Caldera OpenLinux eDesktop 2.4, so I am looking for people who have had experience with OpenLinux. I still haven't installed it on a computer yet, as I need to upgrade the computer; but once I do that I will install it (though I do plan on buying other distros to try out).
--- Michael Lauzon
From vinod kumar d
Answers by: Heather Stern, Ben Okopnik
Hello, I'm about to install Red Hat Linux as a dual boot on my machine running Win98, which came preconfig'd to use my 30 gigs all for Windows. For all the browsing I did through Red Hat's online docs, I couldn't figure out one basic thing: should I have an unallocated partition to begin installation, or will Disk Druid/FIPS do the "non-destructive repartitioning" as part of the install?
[Heather] I do not remember if RedHat will do the right thing here or not. CorelLinux will (in fact, made a great PR splash by being one to make this pleasant). Um, but CorelLinux is a debian-type system, not a rpm type system. I'm not sure what requirements had you pick RedHat, maybe you need something a bit more similar.
[Ben] Having recently done a couple of RH installations, I can give you the answer... and you're right, it's not the one you'd like to hear.
No, RedHat does not do non-destructive repartitioning. Yes, you do need to have another partition (or at least unallocated space on the drive) for the installation - in fact, you should have a minimum of two partitions for Linux, one for the data/programs/etc., and the other one for a swap partition (a max of 128MB for a typical home system.) There are reasons for splitting the disk into even more partitions... unfortunately, I haven't found any resources that explain it in any detail, and a number of these reasons aren't all that applicable to a home system anyway.
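For a concrete picture, a simple dual-boot layout on a single drive might end up looking like this - the sizes and device names are only an example:

  /dev/hda1  FAT32       Windows, shrunk down from the whole disk
  /dev/hda2  Linux ext2  mounted as /
  /dev/hda3  Linux swap  up to 128MB

The installer's partitioning tool can create hda2 and hda3 in whatever space you free up.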
if i do need the unallocated partition, which is the best partition software to use cos i have stuff that i dont want to lose.
[Heather] If you feel up to buying another commercial product, PartitionMagic is very highly regarded. Not just amongst us linux-ers, but also for people who wanted to make a new D:, give half a server to Novell, or something like that. It's very smart.
It's also what comes in CorelLinux...
If you're more into Linux than MSwin and comfortable with booting under a rescue environment, I'm pleased to note that parted (the GNU partition editor) deals well with FAT32 filesystems. Tuxtops uses that.
If you're feeling cheap, FIPS is a program that can do the drive division after booting from a DOS floppy, which you can easily make under the MSwin you already have. I'm pretty sure a copy of FIPS is on the redhat CD as a tool, so you could use that. It doesn't do anything but cut the C: partition into two parts. You'd still use disk druid later to partition the Linux stuff the way you want.
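In rough strokes, and assuming the Red Hat CD is drive D: (the path is from memory, so check your disc):

  format a: /s
  copy d:\dosutils\fips.exe a:\
  copy d:\dosutils\fips.doc a:\

...then boot from the floppy and run FIPS. Read fips.doc first; it walks you through saving a copy of your boot sector.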
(Of course mentioning buying a preloaded dual boot from one of the Linux vendors like Tuxtops, VA Linux, Penguin, or others is a bit late. I'm sure you're fairly fond of your 30 Gb system with the exception of wanting to set it up just a bit more.)
None of these repartitioners will move your MS Windows swap file, though. In the initial setup, MS is as likely to have put the swap file near the beginning of the drive as near the end. I recommend that you use the Control Panel's advanced system options to turn off the swap file, run your favorite defragmenter, and then make a nice solid backup of your Windows stuff before going onwards.
This isn't because Linux installs might be worse than you think (though there's always a chance) but because Windows is fragile enough on its own, and frankly, backups under any OS are such a pain that some people don't do them very often, or test that they're good when they do. (I can hardly imagine something more horrible than to have a problem, pat yourself on the back for being good enough to do regular backups, and discover that the last two weeks of them simply are all bad. Eek!) So now, while you're thinking:
"cos i have stuff that i dont want to lose."
is a better time than most!
[Ben] Following on to Heather's advice, here's a slightly different perspective: I've used Partition Magic, as well as a number of other utilities to do "live partition" adjustment (i.e., partitions with data on them.) At some point, all of these, with one exception, have played merry hell with boot sectors, etc. - thus reinforcing Heather's point about doing a backup NOW. The exception has turned out to be cheap old FIPS; in fact, that's all I use these days.
FIPS does indeed force you to do a few things manually (such as defragmenting your original partition); I've come to think that I would rather do that than let PM or others of its ilk do some Mysterious Something in the background, leaving me without a hint of where to look if something does go wrong. Make sure to follow the FIPS instructions about backing up your original boot sector; again, I've never had it fail on me, but best to "have it and not need it, rather than need it and not have it."
In regard to the Windows swap file, the best way I've found to deal with it is by running the defrag, rebooting into DOS, and deleting the swapfile from the root directory. Windows will rebuild it, without even complaining, the next time you start it.
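Something like this from the DOS prompt, assuming the swap file landed in C:\WINDOWS on your system (it's normally hidden, hence the attrib):

  cd \windows
  attrib -s -h win386.swp
  del win386.swp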
i really tried a lot of faq's before asking you, so could you go easy if you're planning to: a) flame me about rtfm'ing first.
[Heather] Oboy, a chance to soapbox about doing documentation! I promise, no flame!
If we should do this we generally are at least kind enough to say which F'ing M's to R. Which brings another thought to mind. FAQs and HOWTOs are okay, but they are sort of... dry. Maybe you could do an article for Linux Gazette about your experience, and "make linux a little more fun" (our motto) for others who are doing the dual boot install their first time out.
It's really sad that the FAQs and HOWTOs aren't as useful to everyone as they could be.
If one of them was pretty close and just plain wasn't quite right, or wasn't obvious until you already went through it, give a shot at improving it a little, and send your notes back to the maintainer. If he or she doesn't answer you in a long time (say a month or two) let us know, maybe get together with some friends and see if you can become its new maintainer.
To be the maintainer of a Linux project doesn't always mean to write everything in it, just sort of to try and make sure it stays with the times. Linus himself doesn't write every little fragment of code in the kernel - though maybe he reads most of it :D - he maintains it, and keeps it from falling apart in confusion. This is really important. Documents need this too.
Because these things are not meant to be set in stone; they're written to be useful, and yeah, sometimes it happens that the fella who first wrote a given doc has moved on to other things. Meanwhile folks like you join the Linux bandwagon every month and still need them, but Linux changes and so do the distros.
But, it's ok if you personally can't go for that. It's enough if we can find out what important HOWTOs could stand some improvement, since maybe it will get some more people working on them.
b) ignoring me totally.
[Heather] Sadly, we do get hundreds and hundreds of letters a month, and don't answer nearly that many. But hopefully what I described above helped. If it isn't enough, ask us in more detail - there's a whole Gang of us here, and some of us have more experience than others.
[Ben] Well, OK - you get off scot-free this time, but if you ever ask another question, we'll lock you in a room with a crazed hamster and two dozen Pokemon toys on crack. The Answer Gang in general seems to have taken its mandate from Jim Dennis, the original AnswerGuy: give the best possible answers to questions of general interest, be a good information resource to the Linux community, and eschew flames - incoming or outgoing. <Grin> I like being part of it.
btw really liked your answers in the column (well, here's hoping some old fashioned flattery might do the trick)
thanks in advance...
vinod
[Heather] Thanks, vinod. It's for people like you (and others out there who find their answer and never write in at all) that we do this.
[Ben] If you scratch us behind the ears, do we not purr? Thanks, Vinod; I'm sure we all like hearing that our efforts are producing useful dividends. As the folks on old-time TV used to say, "Keep those letters and postcards coming!"
From David Wojik
Answered by: Heather Stern, Paul MacKerras
I need to modify the PPP daemon code to enable dynamic requests to come in and renegotiate link parameters. I also need to make it gather packet statistics. Do you know of any textbooks or other documentation that explain the structure of the PPP protocol stack implementation? The HowTos only explain how to use Linux PPP, not how to modify it.
Thanks,
Dave
[Heather] Once the ppp link is established, it's just IP packets like the rest of your ethernet, so you should be able to get some statistics via ifconfig or other tools which study ethernet traffic, I'd think.
Still, renegotiating the link sounds interesting (I'm not sure I see what circumstances should cause it ... your modem renegotiating a speed is not at all the same thing). Anyways, if for some reason the source code of the PPP daemon itself isn't enough, your best bet would probably be to start a conversation with Paul Mackerras, the ppp maintainer for Linux. After all, if you really need this feature, there are likely to be others out there who need it too. I've cc'd Paul, so we'll see what he has to say.
Hi Heather,
Thanks for responding so promptly. My problem is that the product I'm working on uses Linux PPP to communicate between routers not modems. My software needs to be able to do things dynamically like take down the link, start an echo test, or change the mru.
[Heather] It sounds like you want to create a router-handler to do that part, that looks like a serial interface as far as the ppp functions are concerned. Then, these can remain separated off.
The PPP protocol provides for dynamic renegotiation of link parameters but since Linux PPP was written primarily for modems connecting to ISPs, the PPP daemon is designed to take all of the parameters on the command line when it is invoked; after that it locks out any new input. My software also needs to count all of the different LCP packet types (Config-Ack, Config-Nak, etc.) and provide an interface to retrieve them.
[Heather] And logically the router-handler would do these too? (Sorry, I'm not up on whether these are internal to the PPP protocols, they look like higher level stuff to me.)
The PPP Protocol Stack implementation consists of thousands of lines of code. So what I am hoping to find is some high level documentation that will help me to determine how to modify only the parts I need. Even better would be to find some software that already does this as you suggest.
[Heather] Hmm. Well, best of luck, and we'll see if Paul can point us to something good.
Thanks again,
Dave
[Paul] David,
As you say, the Linux pppd doesn't currently let you change option values and initiate a renegotiation (not without stopping pppd and starting a new one). It should however respond correctly if the peer initiates a renegotiation. I have some plans for having pppd create a socket which other processes can connect to and issue commands which would then mean that pppd could do what you want. I don't know when I'll get that done however as I haven't been able to spend much time on pppd lately. As for counting the different packet types, that wouldn't be at all hard (you're the first person that has asked for that, though).
-- Paul Mackerras, Senior Open Source Researcher, Linuxcare, Inc.
Linuxcare. Support for the revolution.
Between Bryan Henderson and Mike Orr
In answering a question about the role of an ISP in making one's cable-connected computer vulnerable to hackers, Mike Orr makes a misstatement about the Internet that could keep people from getting the big picture of what the Internet is:
The cableco or telco connects you to your ISP through some non-Internet means (cable or DSL to the cableco/telco central office, then ATM or Frame Relay or whatever to the ISP), and then the ISP takes it from there. Your ISP is your gateway to the Internet: no gateway, no Internet.
[Bryan] The copper wires running from my apartment to the telephone company's central office are part of the Internet. Together with the lines that connect the central office to my ISP, this forms one link of the Internet.
The Internet is a huge web of links of all different kinds. T3, T1, Frame Relay, PPP over V.34 modem, etc.
The network Mike describes that all the ISPs hook up to (well, except the ones that hook up to bigger ISPs), is the Internet backbone, the center of the Internet. But I can browse a website without involving the Internet backbone at all (if the web server belongs to a fellow customer of my ISP), and I'm still using the Internet.
I would agree that you're not on the Internet if you don't have some path to the Internet backbone, but that path is part of the Internet.
[Mike] It depends on how you define what the Internet "is". My definition is, if a link isn't communicating via TCP/IP, it's not Internet. (IP isn't called "Internet Protocol" for nothing.) This doesn't mean the link can't function as a bridge between Internet sites and thus hold the Internet together.
Internet hops can be seen by doing a traceroute to your favorite site. The listing doesn't show you what happens between the hops: maybe it's a directly-connected cable, maybe it's a hyperspace matter-transporter, or maybe it goes a hundred hops through another network like ATM or Frame Relay or the voice phone network. Traceroute doesn't show those hops because they're not TCP/IP--the packet is carried "somehow" and reconstructed on the other side before it reaches the next TCP/IP router, as if it were a direct cable connection.
Of course communicating with another user at your ISP is "Internet communication", provided the ISP is using TCP/IP on its internal network (as they all do nowadays, not counting a parallel token ring network at an ISP I used to work at, where the mailservers were on the token ring). And of course, the distinction is perhaps nitpicky for those who don't care what precisely the network does as long as it works.
[Bryan] I'm with you there. But the link between my house and my ISP (which is quite ordinary) is TCP/IP. I have an IP address, my ISP's router has an IP address and we talk TCP/IP to each other. In the normal case that my frame's ultimate destination is not the router, the router forwards it, typically to some router in the backbone. Traceroute shows the hop between my house and the ISP.
All of this is indistinguishable from the way frames get from one place to another even in the heart of the Internet.
The layers underneath IP might differ, as you say, but you seem to be singling out protocols used in the home-ISP connection as not real TCP/IP, whereas the links between ISPs are real TCP/IP. There's no material difference between them. If not for the speed and cost disadvantage, the Internet backbone could be built on PPP over 28.8 modems and POTS lines.
One way we used to see that the home-ISP connection really _wasn't_ the Internet was AOL. You would talk AOL language to an AOL computer which was on the Internet and functioned as a gateway. The AOL computer had an IP address but the home computer did not. But now even AOL sets up an IP link between the AOL computer and the home computer. It's via a special AOL protocol that shares the phone line with non-IP AOL communications, but it's an IP link all the same and the home computer is part of the Internet whenever AOL is logged on.
From Shane Welton
Answered by: Ben Okopnik, Heather Stern, Mike Orr
As you know, the world has gone wild for Linux, and the company I work for is no exception. We work with classified data that can be somewhat of a hassle to deal with. The only means of formatting a hard disk is the analyze/format command that comes with Solaris. That method has been approved as a declassification method.
[Ben] Actually, real low-level formats for IDE hard drives aren't user-accessible any more: they are done once, at the factory, and the only format available is a high-level one. This does not impact security much, since complete data erasure can be assured in other ways - such as multiple-pass overwrites (if I remember correctly, a 7-pass overwrite with garbage data is recognized as being secure by the US Government - but it's been a while since I've looked into it.)
I was hoping you could tell me if Linux offers a very similar low-level format that would ensure complete data loss. I have assumed that "dd if=/dev/zero of=/dev/hda" would work, but I need to be positive. Thanks.
[Ben] Linux offers something that is significantly more secure than an "all zeroes" or "fixed pattern" overwrite: it offers a high-quality "randomness source" that generates output based on device driver noise, suitable for one-time pads and other high-security applications. See the man page for "random" or "urandom" for more info.
Based on what you've been using so far, here's something that would be even more secure:
dd if=/dev/urandom of=/dev/hda
If you're concerned about spies with superconducting quantum-interference detectors <grin>, you can always add a "for" loop for govt.-level security:
for n in `seq 7`; do dd if=/dev/urandom of=/dev/hda; done
This would, of course, take significantly longer than a single overwrite.
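If the wait bothers you, telling dd to use a bigger block size cuts down on the syscall overhead a bit; the 64k here is just a reasonable guess, not a magic number:

  for n in `seq 7`; do dd if=/dev/urandom of=/dev/hda bs=64k; done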
[Mike] Wow, seven-level security in a simple shell script!
[Ben] <Grin> *I've* always contended that melting down the hard drive and dumping it in the Marianas Trench would add just that extra touch of protection, but would they listen to me?...
[Heather] Sorry, can't do that, makes the Marianas Trench too much of a national security risk. Someone could claim that our data has been left unprotected in international waters. Or, why security is a moving target: what is impossible one year is a mere matter of technology a few years or a decade later.
[Heather] You wish.
[Mike] My point being, that a one-line shell script can do the job of expensive "secure delete" programs.
[Heather] /dev/urandom uses "real" randomness, that is, quanta from various activities in the hardware, and it can run out of available randomness. We call its saved bits "entropy" which makes for a great way to make your favorite physics major cough. "We used up all our entropy, but it came back in a few minutes."
[Ben] Hey! If we could just find the "/dev/random" for the Universe...
[Heather] When it's dry I don't recall what happens - maybe you device wait on it, that would be okay. But if you get non-randomness after that (funny how busy the disk controller is) you might not really get what you wanted...
[Ben] That's actually the difference between "random" and "urandom". "random" will block until it has more 'randomness' to give you, while "urandom" will spit up the entire entropy pool, then give you either pseudorandomness or a repeat (I'm not sure which, actually), but will not block.
[Ben] You're welcome to experiment - by which I mean, try it and study the results, check that they're what you want or not (confirm or refute the hypothesis).
I'm not clear from the original request if they're trying to clear the main drive on a system, or some secondary data drive. If it's the main, I'd definitely want to boot from Tom's rootboot (a RAM based distro) so there'd be no chance of the system resisting getting scribbled upon, or failing to finish the job. Also continuing to multitask (Toms has 4 virtual consoles, you can read some doc files or something) will give /dev/urandom more noise sources to gather randomness from.
/dev/random would be faster - not as random, but at 7 times, it's (wince now, you know what I'm going to say) good enough for government work. MSwin doesn't have a /dev/urandom, it only has pseudorandomness. At least, last I looked.
[Ben] Again, the other way around: "urandom" would be faster but marginally less secure (after 7 overwrites? The infinitesimal difference croggles my mind...), while "random" is slower but has the true /gelt/. Given that "/dev/hda" was used in the original example, Tom's RootBoot would be an excellent idea.
[Mike] I thought /dev/urandom was the faster but less random one.
[Heather] I just looked in the kernel documentation (/usr/src/linux/Documentation) and you are correct. /dev/random (character major 1 minor 8) is listed as nondeterministic, and /dev/urandom (character major 1 minor 9) is listed as faster and less secure. Anyways our readers will have to decide for themselves whether they want 7 layers of pseudo-random, or if their system will be busy enough in different ways to get a nice batch of true randomness out of the "better" source.
[Heather] I hear that the i810 motherboard has a randomness chip, but I don't know how it works, so I don't know how far I'd trust it for this sort of thing.
Thanks for the help and the humor, I shall pass the information on to our FSO in hopes that this will suffice. Again, thanks.
Shane M. Walton
From Dave
Answered By: Ben Okopnik
Hello Answerguy,
Since installing Debian a few days ago, I've been more than pleased with it.
However, I have run into a wee problem which I was hoping you could help me
with. Yesterday, I realised I hadn't installed GPM. I immediately got round
to installing it using apt (a lovely painless procedure when compared to RPM).
All went great until I started to run X, at which point my mouse went insane
- just flying round the desktop of its own free will every time I so much
as breathed on the hardware that operated it. I immediately killed GPM using
the GPM -k command, but to no avail. Then I shut down X, and restarted it
with no GPM running - the mouse refused to move at all. I then proceeded to
uninstall GPM, and yet the pointer remains motionless :(. I'm using a PS/2
mouse.. Any suggestions?
I thank you for your time
-Dave-
Yep; it's a bad idea to kill or uninstall GPM.
In the Ages Long, Long ago (say, 3 years back), it used to be standard practice to configure two different ways to "talk" to the mouse: GPM for the console, and the mouse mechanism built into X. Nowadays, the folks that do the default configuration for X in most distributions seem to have caught on to the nifty little "-R <name>" switch in GPM. This makes GPM pass the mouse data onto a so-called "FIFO" (a "first in - first out" interface, like rolling tennis balls down a pipe) called "/dev/gpmdata" - which is where X gets _its_ mouse info. By removing GPM, you've removed the only thing that pays any attention to what the mouse is doing.
So, what's to do? Well, you could configure X to actually read the raw mouse device - "/dev/psaux" in most computers today, perhaps "/dev/ttyS0" if you have a serial mouse on your first serial port (or even "/dev/mouse", which is usually a symlink to the actual mouse device.) My suggestion is, though, that you do not - for the same reason that the distro folks don't do it that way. Instead, reinstall GPM - in theory, your "/etc/gpm.conf" should still be there, and if it isn't, it's easy enough to configure - and make sure that it uses that "-R" switch (hint: read the GPM man page.)
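By way of illustration, the repeater setup amounts to something like this - assuming a PS/2 mouse on /dev/psaux, and relying on "msc" being gpm's usual default repeater protocol:

  gpm -m /dev/psaux -t ps2 -R

...and then in the "Pointer" section of XF86Config:

  Section "Pointer"
    Protocol "MouseSystems"
    Device   "/dev/gpmdata"
    Emulate3Buttons
  EndSection

On Debian, those same gpm settings live in /etc/gpm.conf, which the package's init script reads at boot.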
Once you've done all that, you'll now need to solve the "jumping mouse" problem. In my experience, that's generally caused by the mouse type being set to the wrong value (usually "PS/2" instead of "Microsoft".) Here's the easy way to do it: from a console, run "XF86Setup"; tell it to use your current configuration when prompted. Once X starts up and you get the "Welcome" screen, tab to the "Mouse" button and press "Enter". Read the presented info page carefully: since you'll be using the keyboard to set the options, you'll need to know which keys do what. If you forget, "Tab" will get you around.
Make sure that the "Mouse Device" is set to "/dev/gpmdata", and try the various mouse protocols - these are obviously dependent on your mouse type, but the most common ones I've seen have been PS/2 and Microsoft. Remember to use the "Apply" button liberally: the changes you set won't take effect until you do.
Once you have the right protocol, the mouse should move smoothly. I suggest that, unless you have a 3-button mouse, you set the "Emulate3Buttons" option - you'll need it to copy and paste in X! Also, play with the resolution option a bit - this will set the mouse response. I've seen high resolution "lock up" a mouse - but by now you know how to use that "Tab" key...
Once you're done, click "Done" - and you're ready to fly your X-fighter.
From G David Sword
Answered By: Ben Okopnik, Mike Orr
I have a text file full of data, which I would like to turn into a bunch of fax documents for automated faxing. I could simply parse the file in perl, and produce straight text files for each fax.
Instead of this, I would like to be able to build up something which resembles a proper purchase order, or remittance, containing logos, boxes for addresses etc. Could I have an expert opinion (or six) on what would be the best method to use to achieve this - I have read a bit about LaTeX and groff, but I am not sure if they are the best solution or not.
Thanks in advance
G. David Sword
[Ben] Since you have already implied that you're competent in Perl, why not stick with what you know? Parse the data file (which you will have to do anyway no matter what formatting you apply to it afterwards), then push it out as HTML - Perl is excellent for that. I can't imagine an order form so complex that it would require anything more than that.
As a broader scope issue, learning LaTeX or groff is, shall we say, Non-Trivial. In my !humble opinion, neither is worth doing just to accomplish a single task of the sort that you're describing. SGML, on the other hand, is an excellent "base" format that can be converted to just about anything else - DVI, HTML, Info, LaTeX, PostScript, PDF, RTF, Texinfo, troff-enhanced text, or plaintext (as well as all the formats that _those_ can be converted into.) You can learn enough to produce well-formatted documents in under an hour (no fancy boxes, though) - "/usr/share/doc/sgml-tools/guide.txt.gz" (part of the "sgml-tools" package) will easily get you up to speed. If you want the fancy boxes, etc., check out Tom Gordon's QWERTZ DTD <ftp://ftp.gmd.de/GMD/sgml/sgml2latex-format.1.4.tar.gz>, or the LinuxDoc DTD (based on QWERTZ.) I haven't played with either one to any great extent, but they're supposed to do mathematical formulae, tables, figures, etc.
[Mike] Let me second this. If you need to get the reports out the door yesterday, stick with what you know. Get them to print in any readable text format now and then worry about enhancements later. The code you use to extract the fields and calculate the totals will still be useful later, whether you plug it into the new system directly or convert it into a new language.
TeX and troff both have a learning curve, and you have to balance this against how useful they will be to your present and future purposes. At best, they make a better temporary "output format" nowadays than a document storage format. SGML or XML is a much better storage format because it's more flexible, given the unpredictable needs of the future.
Actually, your "true" storage format will probably remain your flat file or a database, and then you'll just convert it to SGML or XML and then to whichever print format you want (via a generic SGML-to-something tool or your own home-grown tool).
I would look at XML for the long term, even if you don't use it right away. Perhaps someday you'll want to store your data itself in XML files rather than in the text files you're using. This does allow convenient editing via any text editor, and for new data, a program can create an empty XML structure and invoke an editor on it. And as time goes on, more and more programs will be able to interpret and write XML files. On the other hand, it is darn convenient to have that data in a database like MySQL for quick ad-hoc queries...
If you just want to learn a little bit of formatting for a simple document, troff is probably easier to learn than TeX.
You can always use the "HTML cop-out" one of my typesetting friends (Hi, johnl!) tells people about when they ask him what's an easy way to write a formatted resume. Write it in HTML and then use Netscape's print function to print it to PostScript.
From Bob Glass (with a bonus question from Dan Wilder)
Answered by: Ben Okopnik
Hi, everyone. I'm a newbie and need help with a linux machine that goes to sleep and has to be smacked sharply to wake it up. I'm trying to run a proxying service for user authentication for remote databases for my college. That's all the machine is used for. The Redhat installation is a custom, basically complete, installation of Redhat Linux 6.2. The machine is a 9-month old Gateway PIII with 128MB of RAM. The network adapter is an Intel Pro100+. My local area network is Novell 5.x and my institution has 4 IP segments. I have not configured my linux installation beyond defining what's needed to make the machine available on the local network (machine name, hard-assigned IP address, default gateway etc).
<Snip>
The problem I'm unable to deal with is: my proxy machine disappears from the network or 'goes to sleep.' At that point, I can't use a web browser to contact the proxy service machine, I can't telnet to the machine, and I can't ping the machine. However, if I go across the room to the proxy machine, open the web browser, go to an weblink (i.e., send packets out from the machine), then go back to my computer and test a link, ezproxy responds and all is well. However, usually in an hour or so, the proxy machine is unreachable again. Then much later or overnight, it will begin to respond again, usually after a 5-7 second delay.
[Ben] First, an easy temporary fix: figure out the minimum time between failures and subtract a couple of minutes; run a "cron" job or a backgrounded script that pings a remote IP every time that period elapses. As much as I hate "band-aid fixes", that should at least keep you up and running.
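As a sketch, the /etc/crontab entry for a 15-minute ping would look something like this - the address is a placeholder (aim it at your gateway or some other box that's always up), with the output thrown away so root doesn't get mail every run:

  */15 * * * *  root  ping -c 3 192.168.0.1 > /dev/null 2>&1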
Second: I've encountered a similar problem twice before. Once with sucky PPP in an older kernel (2.0.34, if I remember correctly), and once with a flaky network card on a Novell network (I've sworn off everything but two or three brands of cards since.) Perhaps what I'd learned from troubleshooting those may come in useful.
[Dan] If you don't mind saying, which brands have you had the best luck with under Linux?
[Ben] Intel EE Pro 10/100Bs have been faultless. I've used a stack of those to replace NE2K clones, and a number of problems - some of which I would have sworn were unrelated to hardware - went away. I can't say the same for the various 3Coms I've tried; whether something in the driver software or in the cards themselves (under Linux and Windows both), I could not get consistent performance out of them. My experience with LinkSys has been rather positive, although I've never had the chance to really beat up on them; perhaps this has to do with the quality of Donald Becker's driver, as they have been very friendly to the Linux community from the start (this was the reason I decided to try playing with them in the first place.)
For consistently high throughput, by the way, I have not found anything to beat the Intels.
[Ben] Note that I'm not trying to give you The One True Solution here; this seems to be one of those problems that will require an iterative approach. The way I'd heard this put before is "when you don't understand the problem, do the part that you do understand, then look again at what's left."
A good rule of thumb is that if the problem is happening at regular intervals, it's software; if it's irregular, it's hardware. Not a solution, but something to keep in mind.
I have turned off power management in the BIOS. I have stopped loading the apm daemon. I have tried a different network adapter, a 3Com 3C509B. I have even migrated from another computer to the machine described above. And still the machine goes to sleep ...!?$#@
[Ben] When it goes to sleep, have you tried looking at the running processes (i.e., "ps ax")? Does PPP, perhaps, die, and the proxy server restart it when you send out a request? Assuming that you have two interfaces (i.e., one NIC that talks to the LAN and another that sees the great big outside world), are both of them still up and running ("ifconfig" / "ifconfig -a")?
What happens if you set this machine up as a plain workstation? No proxy server, minimum network services, not used by anyone, perhaps booted from a floppy with an absolutely minimal Linux system - with perhaps another machine pinging it every so often to make sure it's still up? If this configuration works, then add the services (including the proxy server) back, a couple at a time, until something breaks.
This is known as the "strip-down" method of troubleshooting. If it works OK initially, then the problem is in the software (most likely, that is: I've seen NICs that work fine under a light load fall apart in heavy traffic.) If it fails, then the problem is in the hardware: NICs have always been ugly, devious little animals... although I must admit they've become a lot better recently; I can't say that I've had any problems with Intel Pros, and I've abused them unmercifully.
(A related question: When you moved from one machine to the other, did you happen to bring the NICs along? This could be important...)
[Ben] My bad, there; I missed the part about the different NIC in the original request for help, even though I quoted it (blame it on sleep- deprivation...) - ignore all the stuff about the Evil NICs; it's certainly starting to sound like software.
On Tue, Nov 07, 2000 at 11:37:46AM -0500, Bob Glass wrote:
Dear Mr. Okopnik,
Thanks so much for your suggestion about creating a cron job which pings a network device. I did just that, and now the problem is 'solved.' (finding a source which detailed how to set up a cron job to run every 15 minutes _and_ not e-mail the output to the root account was a bit of a challenge!) It's a measure of what a newbie I am that this didn't occur to me on my own!
I've talked to many people about this problem and have come to the conclusion that there's a weird mismatch between hardware and software at both the machine and network level (routers, switches, NICs, Linux, Novell, who knows!@#$ I wish Novell would write network clients for Linux and Solaris. I have a Solaris machine which very occasionally has this same problem.) Having tussled with this for over a month and been shown a workaround which both works and causes no problems, I'm satisfied. And as director of my library, I've got to move on to other tasks.
Again, many thanks.
Bob Glass
[Ben] You're certainly welcome; I like being able to "pay forward" at least some of the huge debt I owe to the people who helped me in my own early struggles with Linux.
Pinging the machine is a workable solution, and I'm glad that it mitigated the problem for you - but let me make a suggestion. If you do not have the time to actually fix it now (or even in the foreseeable future), at least write down a good description of the problem and the workaround that you have used. The concept here is that of a shipboard "deficiency log" - any problems aboard a ship that cannot be immediately resolved go into this log, thus providing a single point of reference for anyone who is about to do any kind of work. ("I'll just remove this piece of wire that doesn't look like it belongs here... hey, why are we sinking???") That way, if you - or another director/admin/etc. - have to work on a related problem, you can quickly refresh yourself on exactly why that cron job is there. A comment in "crontab" that points to the "log" file would be a Good Thing.
As I understand it, Caldera's OpenLinux promises full Novell compatibility/connectivity. I can't comment on it personally, since I have no experience with OpenLinux, but it sounds rather promising - Ray Noorda is the ex-CEO of Novell, and Caldera is one of his companies.
From John Hinsley
Answered by: Mike Orr
I want a web site, but it looks like I'll have to put together my own server and put it on someone's server farm because:
What do you mean by server farm? You're going to colocate your server at an ISP? (Meaning, put the server in the ISP's office so you have direct access to the ISP's network?)
I need to run Zope and MySQL as well as Apache (or whatever) in order to be able to use both data generated pages via Zope and "legacy" CGI stuff (and it's far easier to find a Perl monger when you want one rather than a Python one!). If this seems remotely sensible, we're then faced with the hardware spec of this splendid server.
I set up one Zope application at Linux Journal (https://www.linuxjournal.com/glue). It coexists fine with our Python and Perl CGI scripts.
<ADVOCACY LANGUAGE="python"> While it may be easier to find a Perl monger than a Pythoneer, us Python types are becoming more common. And anybody who knows any programming language will find Python a breeze to snap up. The programming concepts are all the same, the syntax is very comprehensible, and the standard tutorial is excellent. </ADVOCACY>
So, proposed spec:
Athlon 700; 3 x 20 GB IDE hard drives, 2 of which are software-raided together and the third of which is for incremental backup; 256 MB of RAM (at least); 1 100 Mbps NIC. OpenSSH as a mode for remote administration, but otherwise a lean kernel with an internal firewall.
Does this sound like a remotely viable spec?
You didn't say how many hits per month you expect this site to receive. Our server has less capacity than that, and it runs Linux Journal + Linux Gazette + some small sites just fine. And yes, our servers are colocated at an ISP. You get much better bandwidth for the price by colocating.
I discussed your spec with our sysadmin Dan Wilder (who will probably chime in himself) and concluded:
** An Athlon 700 processor is way overkill for what you need. (Again, assuming this is an "ordinary" web server.) An AMD K6-2 or K6-3 running at 233 MHz should be fine (although you probably can't get a new one with less than 500 MHz nowadays...) Web servers are more I/O intensive than they are CPU intensive. Especially since they don't run GUIs, or if they do, the GUI is idle at the login screen most of the time! And if you really want the fastest chip available, an Athlon 700 is already "slow".
** Your most difficult task will be finding a motherboard which supports the Athlon 700 adequately. One strategy is to go to the overclocking web pages (search "overclocking" at www.google.com) and see which motherboards overclock best with your CPU. Not that you should overclock, especially on a production server! But if a motherboard performs OK overclocking your CPU, it should do an adequate job running your CPU at its proper speed.
** 256 MB RAM may or may not be adequate. Since memory is the cheapest way to increase performance at high server load, why not add more?
** 3 x 20 GB IDE (1 primary, 1 for RAID, 1 for backup) should be fine capacity-wise. Are you using hardware RAID or software RAID? Software RAID is pretty unreliable on IDE. Will you have easy access to the computer when you have to work on it? Or does the ISP have good support quality, and will they handle RAID problems for you? One thing we want to try (but haven't tested yet) are the 3Ware RAID I cards.
** IDE vs SCSI. SCSI may give better performance when multitasking. Of course, it's partly a religious issue just how big that performance gain is. Given that a web server is by nature a disk-intensive application, SCSI is at least worth looking into. Of course, SCSI is also a pain to install and maintain because you have to make sure the cables are good quality, ensure they are properly terminated, etc.
** 100 Mbps Ethernet card. Are you sure your ISP's network is 100 Mbps? 10 Mbps should be fine. If your server saturates a 10 Mbps line, you're probably running video imaging applications and paying over US$7000/month for bandwidth. Make sure your Ethernet card operates well at 100 Mbps; many 10/100 Mbps "auto-switching" cards don't auto-switch that well. (There's a quick way to check, below this list.)
** OpenSSH for remote admin. Sure.
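On the NIC question, here's a quick sanity check you can run once the machine is up. Just a sketch -- mii-tool ships with recent net-tools, not every driver supports it, and "eth0" is whatever your interface really is:

mii-tool eth0                     # report negotiated speed and duplex
mii-tool -F 100baseTx-FD eth0     # force 100 Mbps full duplex if autonegotiation misbehaves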
The biggest FTP site in the world, ftp.cdrom.com, runs on an ordinary PC with FreeBSD. And the Zopistas at the Python conference in January said Zope can handle a million hits per day on an ordinary PC.
** There are several ways to integrate Zope with Apache. We chose the "proxy server" way because it allows Zope's web server (Zserver) to multitask. You run Apache at port 80, Zserver at 8080, and use Apache's ProxyPass directive to relay the request to Zserver and back. You have to do some tricky things with mod_rewrite and install a scary Zope product, but it works.
(Scary because it involves modifying the access rules for the entire Zope site, which can lock you out of your database if you're not careful, and because it makes Zope think your hostname/port is what Apache publishes them as, rather than what they really are, and this can also lock you out of your database if Apache isn't running or the rewrites or proxying aren't working. I refused to implement virtual hosts on our Zope server--because they also require playing with access rules--until a safer way comes along. Why not let Apache handle the virtual hosting since Apache is good at it? You can use a separate Zope folder for each virtual site, or even run a separate Zope instance for each.)
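For the curious, the Apache side of the proxying boils down to only a few lines of httpd.conf. This is a minimal sketch, not our exact configuration -- it assumes Zserver on port 8080 of the same machine, Apache with mod_proxy (and mod_rewrite for the second form), and it leaves out the Zope-side access rule changes described above:

ProxyPass        /   http://localhost:8080/
ProxyPassReverse /   http://localhost:8080/

# or, via mod_rewrite, which allows fancier mappings:
RewriteEngine On
RewriteRule ^/(.*)$ http://localhost:8080/$1 [P]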
In the end, we decided not to go ahead with wide-scale deployment of Zope applications. This was because:
- Adequate Zope documentation was missing. Most documentation was geared for the through-the-web DTML content manager rather than the application programmer. It was a matter of knowing a method to do X must exist, then scouring the docs to find the method name, then guessing what the arguments must be.
- Zope wants to do everything in its own private world. But text files and CGI scripts can handle 3/4 of the job we need.
- Zope's main feature--the ability to delegate sections of a web site to semi-trusted content managers who will write and maintain articles using the web interface--was not really what we needed. Our content managers know vi and know how to scp a file into place. They aren't keen on adjusting to a new interface--and having to upload/download files into Zope's database--when it provides little additional benefit for them.
We decided what we really needed was better CGI tools and an Active Server Pages type interface. So we're now deploying PHP applications, while eagerly waiting for Python's toolset to come up with an equivalent solution.
Disclaimers: yes, Zope has some projects in development which address these areas (a big documentation push, Mozilla-enhanced administration interface, WebDAV [when vi supports it] for editing and configuring via XML, built-in support for virtual hosts, a "distributed database" that an ordinary filesystem directory can be a part of), but these are more or less still in the experimental stages (although deployed by some sites). And yes, Python has Poor Man's Zope and Python Server Pages and mod_python, but these are still way in alpha stage and not as optimized or tested as PHP is. I also want to look into AOLserver's embedded Python feature we read about in October (https://linuxgazette.net/issue58/washington.html), but have not had the chance to yet.
[Mike again] I forgot to mention MySQL.
Our web server runs MySQL alongside Apache and Zope. MySQL is called by CGI applications as well as Zope methods.
It took a while to get MySQLdb and ZMySQLDA (the Zope database adapter) installed, but they're both working fine now. I spent a couple of weeks corresponding with the maintainer, who was very responsive to my bug reports and gave me several unreleased versions to try. These issues should all be resolved now.
One problem that remained was that ZMySQLDA would not return DateTime objects for Date/DateTime/Timestamp fields. Instead it returned a string, which made it inconvenient to manipulate the date in Zope. Part of the problem, of course, is that Zope uses its own DateTime module, which has the same name as, but is incompatible with, the superior one the rest of the Python world uses (mxDateTime). I finally coded around it and just had the SQL SELECT statement return a pre-formatted date string and separate month and year integers.
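To give the flavor of that workaround (the table and column names here are invented for illustration -- ours were different), the SELECT ended up looking something like:

SELECT DATE_FORMAT(posted, '%d %b %Y') AS posted_str,
       MONTH(posted) AS posted_month,
       YEAR(posted)  AS posted_year
FROM articles;

Zope then gets plain strings and integers, which DTML can use directly without any DateTime conversions.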
Dear Mike,
thank you so much for a really comprehensive answer to my questions. Of course, it raises a few more questions for me, but I think the view is a bit clearer now.
Yes, I did mean colocation (co-location?). It's a term I have some problems with as it seems to suggest putting something in two places at one time.
We might be fortunate in that the funding for this is unlikely to come through before kernel 2.4, about which I hear "around Christmas, early New Year". And even more so in that we could probably get away with hiring some server space for a month or two while we played around with the new server and tried to break it. Of course, this might well mean doing without much in the way of interactivity, let alone a database-driven solution, but we can probably survive on static pages for a while and get some kind of income dribble going.
My inclination would be to go with software RAID and IDE (hence the attempt to break it!), but I will consider the other alternatives.
Ultimately whether we go with Zope (and in what context vis-a-vis Apache, or Zap) is going to have to depend on whether I can get it up and running to my satisfaction at home, but it's good to be reminded that PHP is a good alternative.
Once again, many thanks.
From Alex Kitainik
Answered by: Heather Stern
Hi!
I've found the 'neighbour table overflow' question in your Gazette. The explanation for this case seems to be incomplete, though. The nastiest case can happen when there are two computers with the same name on the LAN. In that case the neighbour search enters an endless loop, and thus 'neighbour table overflow' can occur.
Actually, the arp cache doesn't care about names - it cares about MAC addresses (those things that look like a set of colon-separated hex values in your ifconfig output). But it is a good point - some cards are dip-switch configurable, and ifconfig can change an interface's 'hw ether' (MAC) address if you ask it to.
Between arpwatch and tcpdump, it should be possible to track down whether you have some sort of "twins" problem of either type. At the higher levels of protocol, having machines with the same name can cause annoying problems (e.g. half the samba packets going to the wrong machine), so it's still something you want to prevent.
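If you want to go hunting for twins, something along these lines will do it (assuming your interface is eth0; adjust to taste):

arpwatch -i eth0            # logs new MAC/IP pairings - and flip-flops - via syslog
tcpdump -e -n -i eth0 arp   # watch the ARP traffic, MAC addresses and all, in real time

Two boxes answering ARP for the same IP, or one IP bouncing between two MACs, shows up very quickly in either.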
PS. I apologize for my English (it isn't my mother tongue...)
Regards -- Alex.
Your English is fine.
From Kopf
Answered by: Ben Okopnik
Hi,
I want to set up a home network, with 2 machines - workstation & server. The problem is, I want to configure Linux so that if I use the workstation, nothing is saved on the local drive, everything is kept on the server, so that if I shut down the workstation, and I go up to the server, I can work away there, without any difference of environments between the 2 boxes.
Another problem is, I'm a bit strapped for cash, so I don't want to buy a server & networking equipment until I know what I want to do is possible.
Thanks!
Kopf
Not all that hard to do; in fact, the terms that you've used - workstation and server - point to a solution.
In the Windows world, for example, those terms have come to mean "basic desktop vs. big, powerful machine." With Linux, the meanings come back to their original sense: specifically, a server is a program that provides a service (and in terms of hardware, the machine that runs that program, usually one that is set up for only - or mainly - that purpose.)
In this case, one of a number of possible solutions that spring to mind is NFS - or better yet, Coda (https://www.coda.cs.cmu.edu). Either one of these will let you mount a remote filesystem locally; Coda, at least in theory (I've read the docs, haven't had any practice with it) will allow disconnected operation and continuous operation even during partial network failure, as well as bandwidth adaptation (vs. NFS, which is terrible over slow links.) Coda also uses encryption and authentication, while NFS security is, shall we say, problematic at best.
Here is how it works in practice, at least for NFS: you run an NFS server on the machine that you want to export from - the one you referred to as the "server". I seem to remember that most distributions come with an NFS module already available, so kernel recompilation will probably not be necessary. Read the "NFS-HOWTO": it literally takes you step-by-step through the entire process, including in-depth troubleshooting tips. Once you've set everything up, export the "/home/kopf" directory (i.e., your home directory) and mount it as "/home/kopf" on your client machine. If you have the exported directory listed in your "/etc/fstab" and append "auto" to the options, you won't even have to do anything different to accommodate the setup: you simply turn the machine on, and write your documents, etc. Your home directory will "travel" with you wherever you go.
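To make that concrete, here is a minimal sketch - the hostnames are examples, and the NFS-HOWTO explains the options properly. On the server, put a line like this in /etc/exports:

/home/kopf   workstation(rw)

then have the NFS daemons re-read it (with nfs-utils, "exportfs -ra"; on older setups, restart the NFS daemons). On the workstation, the matching /etc/fstab line would be:

server:/home/kopf   /home/kopf   nfs   rw,auto   0 0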
Since you mention being strapped for cash, there's always another option: put together a minimal machine (say, a 486 or a low-end Pentium) that does nothing more than boot Linux. Telnet to your "big" machine, work there - run a remote X session, if you like. Other advantages of this setup include the need for only one modem (on your X/file server), the necessity of securing only a single machine, and, of course, the total cost. I would suggest spending a little of the money you save on memory and a decent video card, though - not that X is that resource-intensive, but snappy performance is nice to have. 32-64MB should be plenty.
I also suggest reading the "Thinclient-HOWTO", which explains how to do the NFS "complete system export" and the X-client/server setup.
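Just to show how simple the "X terminal" idea can get in practice: assuming the big machine runs an XDMCP-capable display manager (xdm, kdm, gdm) with remote queries enabled, the workstation needs little more than:

X :0 -query bigbox

where "bigbox" is a made-up name for your server. You get the server's graphical login screen on the workstation, and every application actually runs on the server.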
Ben Okopnik
Hi! Thanks for all the great info!
What you've said has really enlightened me - I had never thought of remote mounting and stuff like that. Just one question: if I were to mount "/" on the server as "/" on the workstation, how much disk space would I need on the workstation to start up Linux until it mounts all the drives? Or would I use a bootdisk to do this, and have absolutely no partition for Linux on the workstation?
You could indeed boot from a floppy, but it's a longish process, and floppies are rather unreliable; I would think that scrounging around can get you a small HD for just a few dollars. One of the things I really appreciate about Debian is that you can do a "base setup" - a complete working Linux system with networking and tons of system tools - in about 10 minutes, on about 20MB worth of drive space. I seem to remember that Slackware does much the same thing.
As to how much disk space: you really don't need any. You could even set your machine up as a terminal (a bit more of a hassle, but it eliminates the need for even a floppy.) An HD is nice to have - as I've said, booting from one is much more convenient - but start with the assumption that it's a luxury, not a necessity. From there, everything you do is just fun add-ons.
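(If you do want to go all the way to zero local disk, the kernel can even mount its root filesystem over NFS. Very roughly - the kernel's nfsroot documentation and the Thinclient-HOWTO are the real references here - it takes a kernel built with NFS-root support and boot parameters along these lines, the IP and path being examples:

root=/dev/nfs nfsroot=192.168.1.1:/exports/ws1 ip=bootp

but that's a project in itself.)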
The point to this is that there are almost infinite possibilities with Linux; given the tremendous flexibility and quality of its software, the answer to networking questions that start with "Can I..." is almost always going to be "Yes."
Also - I know the risks associated with allowing "(everyone)" to mount "/" or even "/home/" on Linux... Would I be able to restrict this to certain users, or even certain computers on the network?
Thanks for all the help!
Kopf
"Would I be able to..." qualifies; the answer is "Yes". The "NFS-Howto" addresses those, and many other security issues in detail.
Ben,
by the way, you talked about putting in about 32mb of Video memory into one of the computers to enhance X performance.. Which computer would I put it in, the X Server or Client?
Thanks!
Perhaps I didn't make myself clear; I believe I'd mentioned using a decent video card and 32MB of system memory. In any case, that's what I was recommending. Not that X is that hungry, but graphics are always more intensive than console use - and given the cost/performance gain of adding memory, I would have a minimum of 32MB in both machines. As to the video card, you'd have to go far, far down-market to get something that was less than decent these days. A quick look at CNet has Diamond Stealth cards for US$67 and Nvidia Riva TNT2 AGPs for US$89, and these cards are up in the "excellent" range - a buck buys a lot of video bang these days!
Ok, well, you've answered all questions I had!
Now 'tis time to make it all work.
Thanks again!
Kopf
Answer by Robert A. Uhl
I've some brief information on DSL for Linux.
Several phone companies do not officially support Linux since they do not have software to support our favoured platform. Fortunately I have found that it is still possible to configure a DSL bridge and have had some success therewith.
Let me note ahead of time that my bridge is a Cisco 675. Others may vary and may, indeed, not work.
The programme which you will use in place of the Windows HyperTerm or the Mac OS ZTerm (an excellent programme, BTW; I used it extensively back in the day) is screen, a wonderful bit of software which was included with my distribution.
To configure the bridge, connect the maintenance cable to the serial port. First you must su to root, or in some other way be able to access the appropriate serial port (usually /dev/ttyS0 or /dev/ttyS1). Then use the command
screen /dev/ttySx
to start screen. It will connect and you will see a prompt of some sort. You may now perform all the tasks your ISP or telco request, just as you would from HyperTerm or ZTerm.
One quits screen simply by typing control-a, then \. Control-a ? is used to get help.
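For instance, on the first serial port:

screen /dev/ttyS0

If the default speed (9600 bps, which these maintenance ports commonly use) isn't right, append it: "screen /dev/ttyS1 38400".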
Hope this is some use to some of the other poor saps experiencing DSL problems.
-- Robert Uhl
If I have pinged farther than others, it is because I routed upon the T3s of giants. --Greg Adams
... so mike asked ...
Hmm, I have a Cisco something-or-other and it's been doing DSL for Linux for almost two years. The external modems are fine, because there's nothing OS-specific about them, you just configure them in a generic manner.
It's the configuration that can be trouble. When I've called the telco, they've wanted to start a session to get various settings. 'Pon being informed that I'm using Linux, it has generally been `Terribly sorry, sir, but we don't support that.'
There are two ways to configure it: via the special serial cable that came with it, or via the regular Ethernet cable using telnet. I tried telnet first, but I couldn't figure out the device's IP number (it's different for different models, and that information was hard to get ahold of). So I plugged in the serial cable and used minicom as if it were an ordinary null-modem connection. That worked fine.
I had a good deal of difficulty with minicom. Screen seems to be doing a right fine job, at least for the moment. Figured I'd let others know.
Enjoy your magazine.
-- Robert Uhl
[Mike] Guess what. I had to configure a router at work last week. On DSL with a Cisco 675 bridge. Minicom didn't work. Screen did. And I never would have thought of using screen if it hadn't been for this TAG thread.
I pulled out the serial cable inside the box and reseated it before using screen, just in case it was loose, so perhaps it wasn't minicom's fault. But at least now I have more alternatives to try.
-- Mike Orr
Answer from Roy
Want to set a sticky note reminder on your screen? Create the tcl/tk script "memo"
#!/usr/bin/wish
# display the message (the command-line arguments) as a button;
# clicking the button dismisses the memo
button .b -textvariable argv -command exit
pack .b
make it executable (chmod +x memo), put it somewhere on your PATH, and call it with
sh -c 'memo remember opera tickets for dinner date &'
Want to make a larger investment in script typing? Then make "memo" look like this:
#!/usr/bin/wish
if {[lindex $argv 0] == "-"} {
    set argv [lrange $argv 1 end]
    exec echo [exec date "+%x %H:%M"] $argv >>$env(HOME)/.memo
}
button .b -textvariable argv -command exit
.b config -fg black -bg yellow -wraplength 6i -justify left
.b config -activebackground yellow
.b config -activeforeground black
pack .b
and the memo will appear black on yellow. Also, if the first argument to memo is a dash, the memo will be logged in the .memo file. The simplicity of the script precludes funny characters in the message, as the shell will want to act on them.
In either case, left-click the button and the memo disappears.
Precede it with a DISPLAY variable,
DISPLAY=someterm:0 sh -c 'memo your time card is due &'
and the note will pop up on another display.