"The Linux Gazette...making Linux just a little more fun!"


(?) The Answer Guy (!)


By James T. Dennis, tag@lists.linuxgazette.net
Starshine Technical Services, https://www.starshine.org/


(?) setting up an ISP to serve email

From chris smith on Wed, 30 Dec 1998

Jim: Thanks for your response

In checking out my system with the command 'ps', I find that there is no POP daemon running, so I guess I will have to find that.

(!) in.popd (and most other POP daemons, such as qpopper) wouldn't show up in 'ps' unless someone was accessing the service at the same time you were running the 'ps' command.
The whole point of 'inetd' is that it monitors all of the TCP/UDP ports (on all of your interfaces) and dynamically launches the service daemons (in.popd, in.ftpd, in.telnetd, etc.) on demand.
So, check your /etc/inetd.conf --- and make sure that inetd is running. Then try to run a POP client.
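On many distributions the relevant entry in /etc/inetd.conf looks something like the following --- the exact daemon name and path depend on which POP package you have installed (this one assumes a tcpd-wrapped ipop3d, so treat it as illustrative):

    pop-3   stream  tcp     nowait  root    /usr/sbin/tcpd  ipop3d

If you have to add or uncomment a line like that, send inetd a HUP signal (e.g. 'killall -HUP inetd') so it re-reads its configuration. A quick 'ps ax | grep inetd' will confirm that inetd itself is running.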
Another trick is to use telnet to connect to the POP-3 port (110). You can then issue USER and PASS commands -- followed by a QUIT command. If those work then your POP daemon is responding.
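A session like the following is what you'd hope to see (the exact greeting and response texts vary from one POP daemon to another, and 'chris' and the password are just placeholders):

    $ telnet localhost 110
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    +OK POP3 server ready
    USER chris
    +OK password required
    PASS yourpassword
    +OK maildrop has 2 messages
    QUIT
    +OK goodbye
    Connection closed by foreign host.

Every reply begins with +OK or -ERR, so it's easy to see at a glance whether the daemon is answering.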
As with most Unix TCP services, the control messages in the protocol are implemented as a set of short commands and standardized responses. This is the way that SMTP, FTP, POP, IMAP and several others work. (There are also services that use binary and null terminated strings for their protocol elements --- those generally can't be "spoofed" or "debugged" using just plain old 'telnet').

(?) As for my comments about the DOS\Windows directory structures, let me clarify: in DOS\Windows, when you go to a folder for, say, Netscape, you will find all of the files (for the most part) to run that program under that folder and in directories directly under that folder (excepting perhaps some common system .dll files, autoexec.bat, config.sys, and three or four other common system files, and ignoring the system registry for a while). It seems to me that the programs under Linux are scattered all over the place. I understand that mostly all of the files are text based (makes sense to me for setup reasons), but why are they everywhere? No one has been able to tell me just what the major directories mean (or represent) --- just why is stuff where it is?

(!) First of all, "folders" are a completely different abstraction than "directories." Folders don't exist in MS-DOS. They are a Windows thing. (Terminology borrowed from the MacOS paradigm).
I think that your belief that Linux and Unix files are "scattered all over the place" (a complaint you've repeated twice now) is largely a matter of perception. As you say, some DLLs, fonts, and other elements of Windows programs are put outside of the folders and directories that are associated with them.
In any event, Unix (and Linux) provide "mechanisms" --- they don't set "policy." So each programmer is free to use whatever conventions best suit their needs. Most Unix/Linux programmers follow a fairly complex set of conventions --- which have evolved over the course of about 30 years.
That's ten times longer than Windows '95 has been around, and twice as long as MS-DOS.
As for what the different directories "mean" --- read the FHS (Filesystem Hierarchy Standard), which is part of the Linux Documentation Project.
It sounds like you're spending more time fighting the conventions than understanding or accepting them. Some of them are a bit silly (/etc for configuration files --- why isn't it /conf?) and some of the file names are historical (which is why we store user account names, shells, home directories, and other info in the /etc/passwd file --- and we store the password hashes in the /etc/shadow file).
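For instance, a line from /etc/passwd might look like this (the user name, numbers, and paths here are just an example):

    chris:x:500:500:Chris Smith:/home/chris:/bin/bash

The colon-separated fields are the login name, a placeholder where the password hash used to live (the real hash now sits in /etc/shadow), the numeric user and group IDs, the "GECOS" comment field, the home directory, and the login shell.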
/usr is the home of "user space" programs and resources, while /var is the tree for /usr-type files that are expected to differ between systems (things that used to be in /usr until people started trying to share /usr over NFS). /home is common on Linux and less common on other Unix platforms --- most of which use a set of filesystems like /u1, /u2, etc.
/proc is a "virtual" filesystem --- a representation of the kernel's process status as a tree of nodes. This allows programs and shell scripts to access process status and other kernel data without requiring special interfaces into the kernel. The /dev directory is for "device nodes" (filenames through which programs can access and control devices).
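You can see the last two of these at work from any shell prompt; the output below is just a sample and will differ on your system:

    $ cat /proc/loadavg
    0.04 0.03 0.00 1/38 2153
    $ ls -l /dev/ttyS0
    crw-rw----   1 root     uucp       4,  64 Dec 30 12:00 /dev/ttyS0

The /proc entry reads like an ordinary text file even though the kernel generates it on the fly, and the 'c' and the "4, 64" in the /dev listing mark a character device node with its major and minor numbers rather than a regular file.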
It would take a rather lengthy book to go over all of these conventions. You could read "Linux Installation and Getting Started" for some of this. Most of it is more of an "oral" tradition (carried mostly over netnews, on mailing lists, in user group meetings, and at technical conferences like USENIX, SANS, and the IETF workshops).

(?) There must be a philosophy behind this system that I don't understand yet. Can you shed a little light on this?

(!) Read Peter Salus' "A Quarter Century of Unix" if you want to understand the background of Unix (and thereby the heritage of Linux). There is also another book whose title escapes me --- but it's something like: "the philosophy of Unix" --- which is more for programmers.

(?) Thanks, Chris


Copyright © 1999, James T. Dennis
Published in The Linux Gazette Issue 37 February 1999




