LINUX GAZETTE
...making Linux just a little more fun!
More 2¢ Tips!
By The Readers of Linux Gazette

See also: The Answer Gang's Knowledge Base and the LG Search Engine


Two Sound Cards Under Linux

Tue, 14 Jan 2003 03:02:07 -0500
N4FWD - Tom Kocourek (tko from atempest.net)


The Need

As an Amateur Radio Operator, I wanted to use "QSSTV" under Linux. This program uses the DSP in a sound card to decode pictures being transmitted on Amateur Radio. However, I did not wish to give up the basic sound ability available under KDE. Thus I started reading about dual sound cards.


Research

Searches via Google did not turn up much information on dual sound cards, just the usual HOWTO references on getting one sound card running. But one key piece of information did turn up: multiple sound drivers can coexist!


Some experimentation and...

Multiple sound cards can work together provided:

  1. Each additional sound card must use a different chip set (i.e., a different driver)
  2. Each sound card must have its own IRQ and a distinct control register address space


Installation checkup

At this point, you have physically installed the additional sound card and have verified that the BIOS has assigned different IRQs to the cards.

Now you have booted Linux and have logged in. In Mandrake Linux there is an integrated program called the MCC (or Mandrake Control Center). You can either use MCC or you can execute in a term window:

	$ /sbin/lsmod | less

You are verifying that different drivers have been assigned to each Sound Card. If you are not using one of the more recent distributions of Linux (such as Red Hat, Mandrake, or SuSE), you may have to alter the configuration files by hand to achieve the necessary loading of the proper Sound Card drivers.
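
With the 2.4-era OSS drivers, that hand configuration usually comes down to a couple of lines in /etc/modules.conf. The following is only a sketch under assumed hardware (an on-board VIA chip plus a SoundBlaster Live!); substitute the driver names that match your own cards:

	alias sound-slot-0 via82cxxx_audio
	alias sound-slot-1 emu10k1

After a reboot (or a "modprobe sound-slot-1"), /sbin/lsmod should then show both drivers loaded.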

Next, you run a mixer setting program, like KMIX. If all is ok, the program should display 2 distinct mixers. If not, then you need to recheck the configuration files.


Now for the tough part...

Many sound programs are not well written. That is to say, the program assumes that only one sound card exists in your system. These sloppy programs will lock up Linux and require using the reset button.

Well written programs allow you to set which sound card is to be used. XMMS is a well written program. While it assumes that sound card 0 is the only sound card in the system, it does not lock up Linux. QSSTV is an even better written program in that it allows you to configure which sound card is to be accessed.

"ARTSD" is a poorly written program and MUST be disabled when you run dual sound cards in your system. Otherwise, you will be reaching for the reset button!


Lastly...

I am able to play my music via XMMS on sound card 0 while QSSTV simultaneously decodes pictures using sound card 1, all under Linux!


rpm in debian ?

Tue, 7 Jan 2003 14:17:47 +0530
Kapil Hari Paranjape (kapil from imsc.res.in)
Question by Joydeep Bakshi (joy12 from vsnl.net)

Hi, I am a Debian user and am interested in installing RPM packages (from RH or MDK CDs) in Debian. Is it possible to do so? If yes, how?

[Kapil] A debian package:
Package: alien
Section: admin
Architecture: all
Description: install non-native packages with dpkg
 Alien allows you to convert LSB, Red Hat, Stampede and Slackware Packages
 into Debian packages, which can be installed with dpkg.
 .
 It can also generate packages of any of the other formats.
 .
 This is a tool only suitable for binary packages.
This suggests that "apt-get install alien" would do the trick for you.
This works as follows. You run
   fakeroot alien -r <RPM>
This produces a .deb which can be installed.
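A complete hypothetical run (the package name and version here are placeholders) would look something like:
	$ fakeroot alien -r someprogram-1.0-1.i386.rpm
	$ su -c "dpkg -i someprogram_*.deb"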
It is a good idea to read the documentation first. In particular, please heed the warning about not installing any critical packages this way. IF (and this is a big if) some mission critical package you absolutely must have is not in Debian (stable or testing or unstable), then it is generally better to run "debmake" on the unpacked source tree to build the relevant debian package. (of course to do this you should generally have installed "build-essential").
[JimD]
... and created a debian/rules file (a makefile starting with
#!/usr/bin/make -f).
[Kapil] The "alien" package is largely for (boo-hiss) non-free stuff that is only available as binaries packaged as RPMs.
[JimD] It is also possible to install the debian rpm package. You can then directly use RPM commands. However, there won't be any dependency database (dbm files) so all dependency checks will fail.
At some point someone may come up with a very clever (and probably difficult to maintain) adapter that will generate a reasonable RPM/DBM database set from a Debian /var/lib/dpkg/info tree. Alas, that is not in the cards for now.
'alien' is probably the best way to go in most cases.

Thanks a lot for your valuable hints. alien is excellent, but the *alien -i* command didn't check any dependencies when I installed OpenOffice (making a .deb from the Mandrake CD); hence it could not be started due to missing libraries.

[Kapil] Dependencies are certainly a problem for alien. The way I understand it, if you have the correct libraries installed then the dependencies are included in the .deb package produced by "alien". Otherwise "alien" only produces error messages about unmet dependencies...
... a bit of a catch 22 alright!
But if you create the .deb files and install them in the "correct" order (and assuming that there are no cross dependencies!) the binary dependencies should work out correctly. What "alien" does (I'm guessing here) is it runs "ldd" on the executables and looks for the package that supplied the relevant library. This is how it is often done during .deb creation.
Non-binary dependencies are probably unresolvable unless you can lay your hands on an LSB package---whatever that is.
The Linux Standards Base is an industry-wide effort to make life easier for companies that want to produce commercial shrinkwrap products. If they adhere to the filesystem layout and principles described there, then the package should be able to be installed on any Linux distro which also claims to be LSB compliant.
The installers haven't quite perfected this as far as handling everybody's slight differences in initscript setup, but other than that it's not too bad. At the very least a knowledgeable system admin has no problem grafting such applications into the company-wide server. -- Heather

1) Is it possible to let kpackage handle these converted .deb packages and their dependencies?

[Kapil] I don't know anything about kpackage but I would guess that if the information is not in the .deb file there is not much kpackage can do.

2) If I have a particular directory for storing all these converted .deb packages, how can I get kpackage to display those packages in its tree view? (If that is possible at all.)

[Kapil] There are some debian packages that allow you to create your private repositories - there is a sledge-hammer called "apt-move" but there may be something simpler for your requirement.
When the deb file is installed, if it has no section it will be placed in the "Obsolete and Locally Created Packages" section under aptitude. I assume kpackage has a similar feature, although I've been a bit shy of the X-windows based apt front-ends, since I prefer to have a minimum of processes running when updating my systems. -- Heather

Once again, thanks for your solution.

[Kapil] As far as openoffice and other such packages are concerned your best bet is the "unofficial apt repositories" (which I forgot to mention in my list of stable/testing/unstable). You can find these unofficial repositories at:
http://www.apt-get.org
I seem to remember that this site lists a site for openoffice. You can add that site to the list in /etc/apt/sources.list and you should be able to then use apt-get (or probably kpackage) to install openoffice with dependencies resolved.
Be warned that the unofficial repositories are un-signed packages and could contain trojans and other such!
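For illustration only (the host and path here are placeholders; copy the exact line from the listing on apt-get.org), such an entry in /etc/apt/sources.list takes this shape:
	deb http://unofficial.example.org/debian woody main
followed by the usual "apt-get update" and "apt-get install <package>".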

Thanks for all your technical info.

best regards


propagating ownership and permissions

Mon, 30 Dec 2002 08:30:09 -0500
Ben Okopnik (the LG Answer Gang)

A while back, I wrote a utility that propagates ownership and permissions from a sample file to a group of files. Imagine a situation where you have, say, several dozen documents with a scattershot list of permissions and owners/groups (since they were created by different people.) The way to "bring them into line" would be to pick a file that already has The Right Stuff - it doesn't even have to be in the same directory - and say:

cpmod /path/to/example/file *

Note that this utility is self-documenting. Its internal "man page" can be read (as long as "cpmod" is somewhere in your path) with

perldoc cpmod

If you want an actual man page, one can be easily created with

pod2man cpmod|gzip -c>cpmod.1.gz

Put the resulting file somewhere in your man directory structure (/usr/share/man/man1, perhaps).

See attached cpmod.pl.txt

[JimD] In newer GNU utils you can use something like:
	#!/bin/sh
	reference="$1"; shift
	for i in "$@"; do
		chown --reference="$reference" "$i"
		chmod --reference="$reference" "$i"
	done

[Ben] Very cool, Jim! I hadn't seen that one before; I was only familiar with the older versions.

[JimD] (Technically I think you can just make that for i; do ... since I think that for loops default to being in "$@" if you don't specify an explicit list. I know they default, but I'm not sure if they default to $* or "$@" --- if you care about the distinction; as usual the subtleties of soft-quoting are there to protect degenerate filenames containing whitespace!).
In other GNU utils you can use a little trickery like:
	#!/bin/sh
	reference="$1"; shift
	# use the saved reference file, and avoid the name UID (read-only in bash)
	REF_UID=$(find "$reference" -maxdepth 0 -printf "%U")
	MODE=$(find "$reference" -maxdepth 0 -printf "%m")
	for i in "$@"; do
		chown "$REF_UID" "$i"
		chmod "$MODE" "$i"
	done
Ben, am I missing some subtleties here? (Other than the obvious argument counting, error checking and messages, and some getopts to provide --help, --owner-only, --mode-only etc.)

[Ben] Not so far as I can see. However, the Perl version is shorter (if you ignore the included man page.) :)


boot to windows by default

9 Jan 2003 05:16:50 -0000
David Mandala, Jim Dennis (the LG Answer Gang)
Question by anurag sahay (anuragsahay from rediffmail.com)

Hi Answer Guy, I have two questions.

1. I have Linux and Windows both loaded on my system. I want to boot to Windows by default. How can I change the lilo.conf file? What are the changes to be made there?

[David] The answer to your question about lilo is to edit the /etc/lilo.conf file.
Your file might look something like this:

See attached linux-and-dos.lilo-conf.txt

Cheers, Davidm
[JimD] Essentially, add a default= directive to your /etc/lilo.conf (or edit your /boot/menu.lst file if you're using GRUB). Read the lilo.conf man (and/or GRUB info) pages for more detail on that.
The Linux Documentation Project (http://www.tldp.org ) has an entire section of HOWTOs on boot loaders and related topics (about a dozen of them):
http://www.tldp.org/HOWTO/HOWTO-INDEX/os.html#OSBOOT
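As a sketch only (device names and labels are assumptions; keep the ones already in your file), a dual-boot /etc/lilo.conf that defaults to Windows might look like the following. Remember to re-run /sbin/lilo after any change:
	boot=/dev/hda
	prompt
	timeout=50
	default=windows

	image=/boot/vmlinuz
		label=linux
		root=/dev/hda2
		read-only

	other=/dev/hda1
		label=windows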


network programming - accepting data

9 Jan 2003 05:16:50 -0000
Kapil Hari Paranjape, Jim Dennis (the LG Answer Gang)
Question by anurag sahay (anuragsahay from rediffmail.com)

Hi Answer Guy, I have two questions.

2. This is about UNIX network programming: how do I accept data from any given port?

thanking you
yours anurag

[Kapil] Have a look at the utilities "netcat" and "socat".
[JimD] You could use netcat (often named /usr/bin/nc) or socat directly (from shell scripts, etc.) to listen on arbitrary TCP or UDP ports. Note: the process has to have 'root' privileges to listen on "privileged" ports --- those below 1024, i.e. 1 to 1023 inclusive.
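For instance, to catch whatever arrives on an arbitrary TCP port with the traditional netcat (the port number here is arbitrary; anything above 1023 avoids the root requirement):
	$ nc -l -p 5000 > received.dat
Another machine can then feed it data with something like "nc thathost 5000 < somefile".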
More to the point, you can read the source code to netcat or socat (included with most distributions on the "Source Code" disc, or readily downloadable from many archive sites on the net). As a Debian user I find it most convenient to get most sources with a simple 'apt-get source' command. Debian tracks, indexes, and automatically fetches, unpacks and patches the sources for me. With an 'apt-get build-dep' command I can also have Debian fetch and install all of the packages that are required to build almost any other package from its sources (they're still working on that feature).
It makes me reluctant to hunt down the upstream sources, which would be suitable for other distros and other forms of UNIX.
These things change far too frequently, but Google is our friend. It appears that the current canonical location for finding Hobbit's netcat sources is at:
http://www.atstake.com/research/tools/network_utilities
... where he (Hobbit) seems to have an e-mail address. Perhaps he works at @Stake.
As for socat, its author, Gerhard Rieger, conveniently lists the package's home page in the man page that comes with the package (at least with the Debian package): http://www.dest-unreach.org/socat
Reading the sources to these will teach you a lot about UNIX network programming. In particular, netcat has been around for a very long time and has had VERY FEW bugs reported against it. It's been scrutinized by thousands, probably tens of thousands, of programmers.
You should also buy Richard Stevens' seminal textbook on UNIX Network Programming (Prentice Hall). Read more about that at:
http://www.kohala.com/start


Key bindings in X

Wed, 22 Jan 2003 07:51:49 +0800
jamie sims (jaymz from operamail.com)

Here's the fix I finally hit upon to get those F keys working in xterm. I edited a copy of /usr/X11R6/lib/X11/app-defaults/XTerm and added the following:

See attached XTerm.app-defaults.txt

I then saved it as .Xdefaults and it works very well.

You can use the .Xdefaults file in your home directory to add or override X internal resources for any application - so make sure that if you already have some features stored there, that you add this into it, instead of replacing it. -- Heather
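
The attached resource file isn't reproduced here, but an XTerm translation of the general kind jamie describes binds a function key to the escape sequence the application expects. A hypothetical fragment (the keys and sequences shown are only an illustration, not the attachment's actual contents) would look like:

	XTerm*VT100.translations: #override \
		<Key>F1: string(0x1b) string("[11~") \n\
		<Key>F2: string(0x1b) string("[12~")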


alsa in debian

Sun, 19 Jan 2003 12:52:21 +0530
Kapil Hari Paranjape (kapil from imsc.res.in)
Question by Joydeep Bakshi (joy12 from vsnl.net)

Hi there, as you know ALSA is not built into Debian 3.0 by default, but the ALSA utils, driver and header files are present in the 7-CD set. Could anyone please tell me how to build the ALSA modules in Debian, and the packages required for this?

Note: there are some alsa-modules packages (on the CDs) based on the 2.4.16 kernel, but mine is 2.4.18.

Wherever you got kernel-image-2.4.18, you should also find the relevant alsa-modules-2.4.18. Anyway, here is the procedure to build the ALSA modules for Debian.

1. Use apt-get to install the relevant alsa-source package. You could also download the sources from the alsa ftp site --- I haven't tried that but it should work.

2. Install the relevant kernel source package, and the package kernel-package.

3. Unpack the kernel source and alsa-modules in /usr/src.

4. Run "make-kpkg --config=menuconfig" configure in the kernel source directory.

5. Run make-kpkg kernel-image and make-kpkg modules-image.

6. This should build a pair of compatible kernel-image and alsa-modules package files which you can install with dpkg.

7. Of course you need to edit your grub menu or lilo conf file and so on to run this kernel.

8. You can then configure ALSA with alsaconf, alsa-base and so on.

Remember to set and save the mixer settings so that the /etc/init.d/alsa script (which is part of alsa-base) can restore them.
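
Spelled out as shell commands, the sequence looks roughly like the following. The tarball names are assumptions (use whatever the kernel-source and alsa-source packages actually dropped into /usr/src), and the make-kpkg target spellings vary a little between kernel-package versions:

	# cd /usr/src
	# tar xjf kernel-source-2.4.18.tar.bz2
	# tar xzf alsa-driver.tar.gz
	# cd kernel-source-2.4.18
	# make-kpkg --config=menuconfig configure
	# make-kpkg kernel-image
	# make-kpkg modules-image
	# dpkg -i ../kernel-image-2.4.18*.deb ../alsa-modules-2.4.18*.deb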


pppd

Fri, 3 Jan 2003 11:24:26 -0800
Mike Iron Orr, Ben Okopnik (the LG Answer Gang)
Question by Joydeep Bakshi (joy12 from vsnl.net)

The pppd command shows a few character strings in RH, but in Debian it shows the error

" remote system needs to authenticate itself" & discontinue

[Ben] Ah, I'd missed this part. Neil is right - you don't have the "noauth" option defined in your "/etc/ppp/peers/provider" or whatever options file you're using.
[Iron] I haven't used ppp for years (but I will soon, when I set up my mom's computer), but yes, if you're dialing into an ISP you want "noauth". Otherwise your Linux box will require authentication from the server, which the server won't do. The server thinks *it's* trusted and *you're* the one who has to authenticate yourself. And even if it were willing to authenticate itself, how could it? It doesn't have a password to authenticate itself with. The (nonexistent) password the servers would authenticate themselves with is different from the user password you authenticate yourself with.
If people are dialing into your Linux system, then you want authorization for those calls.
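For reference, a minimal /etc/ppp/peers/provider of the kind Ben means might read as follows. The serial device, speed, chat script path and account name are assumptions; the important line for this problem is noauth:
	/dev/ttyS1
	115200
	connect "/usr/sbin/chat -v -f /etc/chatscripts/provider"
	noauth
	defaultroute
	user myispusername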

Thanks for the solution, it is working now.


Is that your FIN_WAIT Answer?

Mon, 13 Jan 2003 19:00:25 -0800
Jim Dennis (the LG Answer Guy)

I am using RedHat Advanced Server 2.1, Kernel 2.4.9 and am having the following problem:

If I log on as userA via a telnet session and run Test_pgm and then disconnect the telnet session by closing the window instead of properly logging out, this is what is shown from the ps command:

UID    PID  PPID  C STIME TTY          TIME CMD
userA 8505     1  0 14:00 ?        00:00:00 login -- userA
userA 8506  8505  0 14:00 ?        00:00:00 -bash
userA 8540  8506 87 14:00 ?        00:00:42 Test_pgm

Notice that there is no longer a TTY associated with the running program or the original login, and the PPID of the login is now process ID 1. Furthermore, if I do a top command, the results show that the CPU idle % is zero, with Test_pgm using up all of the CPU. The load average goes through the roof; I've seen it up close to 30.0. However, the system's performance does not seem to be affected, either for me or for any of the users. These processes are not listed as zombies and are never cleaned up by the system unless I kill the login process or restart the server.

Most of this seems normal (for a program that's ignoring SIGHUP). The loadavg number seems odd.

This scenario happens whether the user is running an in-house 'C' program or an operating system utility such as Red Hat's setup. Within our own 'C' programs, I have tried to capture a terminating signal, using the signal() call, but I am not seeing any of the signals that I would expect to see, such as SIGTERM or SIGHUP.

Does anyone have any ideas as to how to tell RedHat to take down the processes associated with a telnet when a tty disappears?

Thanks in advance.
DP

in.telnetd should be sending a SIGHUP to the process when the TCP connection is closed (including when the keepalive fails?).
Run 'netstat -na' and see if the TCP connection is lingering in FIN_WAIT state. This could be a case where your (probably MS-Windows) telnet client is failing to properly perform the three-way disconnection handshaking that's required of TCP. (I recall problems with some MS Windows FTP clients resulting in similar symptoms on high volume public FTP servers).
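For example:
	# netstat -na | grep FIN_WAIT
will list any connections stuck in FIN_WAIT1 or FIN_WAIT2.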
Try it with a UNIX telnet client.
Try it with ssh.
If it works with ssh, perhaps you can use that as leverage with your users and management to abandon this insecure and deprecated protocol! (PuTTY is a very good, and free, ssh client for MS Windows operating systems. There are many others.)
Other than that, I would try upgrading the kernel (2.4.9 was pretty miserable under memory load) and watch one of these sessions with tcpdump and strace (so you can correlate what's happening on the wire with what's happening in the process). Upgrading to RH 7.3 might also be good since the compilers and libraries in 7.1 and 7.2 had ... issues.
Without knowing more about what Test_pgm is supposed to do, I can't immediately suggest any other workarounds.


direct rendering for nvidia RIVA 128

Sun, 19 Jan 2003 00:13:51 +0100
Yann Vernier (yann from algonet.se)
Question by tag@lists.linuxgazette.net, Scott Frazier (rscottf from ieee.org)

I have an nVidia Velocity 128 video card, which uses the RIVA 128 accelerator chip. I'm running Mandrake 9.0, which sets it up with glx (3D capability), but with no direct rendering (it uses software rendering). Needless to say this REALLY slows it down for games. Does anyone know how I might resolve this? I've tried changing an entry in the XF86Config file, in the MODULES section: I added the line Load "dri", to no avail. I'm pretty sure the card is DRI capable, as it is able to do bus mastering, which is a must for this.

Sorry to disappoint you, but last time I checked there was no DRI driver for the Riva 128. It's among the earliest nVidia chips, and nVidia's own binary-only driver only supports TNT or later (two models newer). There was a partly accelerated Mesa-based GLX implementation for XFree86 3 that supported it, however, called Utah-GLX. You may be able to run that, but you'd obviously lose out on all other new features of XFree86 4.


xcdroast post cdrom mount problem

Fri, 10 Jan 2003 17:32:51 -0500
Question by Brian (bbertsch from surfside.net)

Hello, I'm a recovering OS/2 user. I used it today, and I may have to tomorrow... but I can stop any time I want to... but my modem....

Anyway, after I use xcdroast (which I am getting used to, under RH8/KDE) I am unable to check the CD-ROM just made, because the CD-ROM will not mount. (IDE double-cheapo brand 48x, works great.) I have to use the newly-made CD on my OS/2 machine to check it. My friends laugh at me.

thanks, brian

[JimD] You probably need to change /dev/cdrom to be a symlink to /dev/scd0 or something like that.
Linux normally handles your ATAPI CD-R drive via a SCSI emulation layer. Once this layer is active (possibly via a loadable module) then all access to the CD has to go through the SCSI device nodes (/dev/sg* for writing, and /dev/scd0 for mounting CDs).
Try that. Try this command first:
mount -t iso9660 -o ro /dev/scd0 /mnt/cdrom
... from a root shell prompt.
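If that mount works, a lasting fix is to repoint the symlink so your usual mount command and fstab entry keep working; the device name here is an assumption (use whichever device the mount above succeeded with), and the second command assumes the usual /etc/fstab entry mapping /dev/cdrom to /mnt/cdrom:
	# ln -sf scd0 /dev/cdrom
	# mount /mnt/cdrom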
[John] Greetings from another former OS/2 user - although I used it for about 2 yrs or so, and switched to Linux.
Anyway, have you read CDs made from that cooker before? Could be a hardware issue. Some of those really cheap devices lack some features. But the chances of that would seem a bit slim if it's a 48x drive, because those compatibility problems are usually more common with the older drives. But I wouldn't rule it out as a possibility.


iptables: What They Are and What They Do

Tue, 7 Jan 2003 04:18:33 -0800
Jim Dennis (the LG Answer Guy)
Question by peter collins (collin_sq2003 from yahoo.com)

Could you please explain to me what iptables are and what they do?

IPTables are tables (lists) of packet filtering rules in the Linux kernel. They are added (passed into the kernel's address space) and manipulated using a command named: 'iptables' and they are interpreted by various kernel modules written to the "netfilter" APIs (primarily by Paul "Rusty" Russell).

Each rule is a pattern matching some sort of network traffic based on many criteria: IP source or destination addresses, TCP or UDP source and destination ports, ICMP type, IP or other options (flags), connection status (correlated from other, previous packets), even MAC addresses, which interface and direction they're coming from or destined to, which local processes are generating them, etc. Part of each rule is a "disposition" like: DROP, REJECT, ACCEPT, "jump" to another ruleset (table), etc.
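
As a small illustration of such rules and dispositions on the 'iptables' command line (the interface names and addresses here are placeholders, not a recommended policy):

	# refuse packets arriving on the outside interface that claim a private source address
	iptables -A INPUT -i ppp0 -s 192.168.1.0/24 -j DROP
	# allow inbound ssh, politely refuse ident lookups
	iptables -A INPUT -p tcp --dport 22 -j ACCEPT
	iptables -A INPUT -p tcp --dport 113 -j REJECT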

The ability to conditionally process different packets in various ways, and even to conditionally "call" on some rulesets, makes iptables into a very specialized programming language. IPChains was a somewhat different, simpler packet filtering language (also by Rusty), and ipfwadm was a much simpler packet filtering system back in the 2.0 kernel days.

It looks like the 2.6 kernel, probably due out sometime this year, will be the first one since 1.3 that hasn't had a major overhaul in the packet filtering language. IP Tables was released with 2.4 and has only undergone minor bug fixes and refinement since then.

Note that most of the packet filtering rules relate to whether to allow a packet through the system, to DROP it (with no notice) or REJECT it (providing an ICMP error back to its sender, as appropriate), to MASQUERADE or TRANSLATE it (change its apparent source address and port, usually setting up some local state to dynamically capture and re-write any response traffic related to it), to REDIRECT it (change its destination address and/or port), or to change its "ToS" (type of service) bits. It's also possible to attach an FWMARK to a packet, which can be used by some other parts of the Linux TCP/IP subsystem.

What IPTables is NOT:

There is another subsystem, similarly complex and seemingly related --- but distinct from netfilter (the kernel code that supports IP Tables). This is the "policy routing" code --- which is controlled with the tersely named 'ip' command (the core of the iproute2 package).

Policy routing is different from packet filtering. Where packet filtering is about whether packets go through, and whether some parts of a packet are re-written, policy routing is purely about how they are sent towards their destination. Under normal routing every outbound and forwarded packet is sent to its next hop based exclusively on its destination address. Under policy routing it's possible to send some traffic through one router based on its source address, port, or other protocol characteristics. This is different from the IP Tables "REDIRECT" because it doesn't change the packet --- it just sends it to a different router based on the policy rules.

The two subsystems can interact, however. For example, policy routing includes options to match on the ToS or FWMARK that might be attached to a packet by the iptables rules. (These FWMARKs are just identifiers that are kept in the kernel's internal data structure about the packet --- they never leave the system and can't go over the wire with the packet. ToS is only a couple of bits in the header, hints that traditionally distinguish between "expedited" (telnet) and "bulk" (ftp) traffic.)
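
A sketch of that interaction, with an invented mark value, routing table number, and gateway address: iptables attaches the FWMARK, and policy routing then steers the marked packets through an alternate router:

	# tag web traffic passing through this router with an FWMARK
	iptables -t mangle -A PREROUTING -p tcp --dport 80 -j MARK --set-mark 1
	# route anything carrying that mark via a second gateway
	ip rule add fwmark 1 table 100
	ip route add default via 192.168.1.254 table 100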

The iproute2 package and the 'ip' command replace the ifconfig command and provide considerable control over interfaces. They also allow one to attach "queueing disciplines" to interfaces, which determine which packets get to "go first" when there is more than one of them waiting to be sent over a given interface.

There is a lot more I could tell you about Linux routing and network support. For example, none of this relates to dynamic routing table management. There are user space programs like routed, gated, and various GNU Zebra modules, that can listen to various dynamic routing protocols such as RIP, RIPv2, OSPF, BGP, etc. to automatically add and remove entries in the kernel's routing tables. Some of these might be able to also dynamically set policies as they do so. There is also a Linux compile time option called "Equal Cost Multi-path" which is not part of policy routing. Normally if you added two static routes of "equal cost" then the first one (of the lowest cost) would always be used, unless the system was getting "router unavailable" ICMP messages from somewhere on the LAN. However, with Equal Cost Multipath the system will distribute the load among such routes. This can be used to balance the outbound traffic from a very busy system (such as a popular web server or busy mail gateway) among multiple routers (connected to multiple ISPs over multiple T1s or whatever).

(This is similar to a trick with policy routing --- assigning a couple of IP "aliases" --- different IP addresses --- to one interface; one from one ISP, another from a different one, and using policy routing to ensure that all response/outbound packets from one of these sources go through the appropriate router. DNS round robin will balance the incoming load, and policy routing will balance the response load. Equal Cost Multipath will balance traffic initiated from that host).

Again, all of these last paragraphs are NOT IP Tables. I'm just trying to give you a flavor of other networking stuff in Linux apart from it, and to let you know that, if you don't find what you need in the iptables documentation, it might be somewhere else.

To learn more about Netfilter and IP Tables, please read through the appropriate HOWTOs:

http://www.tldp.org/LDP/nag2/x-087-2-firewall.future.html
http://www.netfilter.org


Code folding in Vim

12 Jan 2003 23:53:53 +0530
Ashwin N (ashwin_n from gmx.net)

Vim versions 6.0 and later support a new feature called code folding. Using code folding, a block of code can be "folded" up into a single line, thus making the overall code easier to grasp.

The Vim commands to use code folding are quite simple.

To create a fold just position the cursor at the start of the block of code and type : zfap

To open a fold : zo

To close a fold : zc

To open all the folds : zR

To close all the folds : zM

For more commands and information on code folding in Vim, query the built-in help feature of Vim : :help folding
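
If creating folds by hand with zfap gets tedious, Vim can also compute them automatically. For example, these optional settings (typed as commands, or put in ~/.vimrc without the leading colon) fold by indentation and start with everything open:

	:set foldmethod=indent
	:set foldlevel=99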

[John Karns] You're quite right. Folding is particularly useful for long sections of code that contain loops, etc. I use it extensively in this context.
Other uses include long paragraphs of prose.
But make sure you are in command mode! If you are in text entry mode, just typing in "zfap" would literally embed that string into your text!
If you're in text entry mode, press Escape to get back into command mode.
Vi has two command modes and a text entry mode. When you come in you are at ordinary command mode. When you type a colon (such as what precedes the word "help" above) then you end up with a small colon prompt. The above commands are NOT colon mode commands, except for help. But you do need your cursor at the right location.
The colon prompt is also called "ex mode" by old hands at vi, but I'm not entirely sure that all the commands that use it are really old commands at all. Some are surely long words allowing you to access some enhanced features, too, because there are only so many letters in the alphabet.
To get out of the help mode you may need to type :q to quit the extra window it created. Your original textfile is still around, don't worry. -- Heather


Debian "Woody" boot error

Tue, 21 Jan 2003 16:30:32 -0600
Robos (the LG Answer Gang)
Question by Rich Price (rich from gandalf.ws)

After installing the Woody release of Debian using the idepci kernel I noticed the following boot message

modprobe: Can't locate module char-major-10-135

Some Google searching led me to the following factoid:

"char-major-10-135" refers to the character device, major 10, minor 135,

which is /dev/rtc. It provides access to the BIOS clock, or RTC, the Real Time Clock.

[Robos] OH MY GOSH! REINSTALL! (Just kidding)
This doesn't actually mean that your computer has no sense of time at all; it just means you won't be able to access the additional precision it has available, without extra code in the kernel. If you have SMP, the kernel docs warn that it's important to compile this in. Otherwise, very few things actually care.
But in a new enough kernel, with devfs support, any app which is curious about it (that is, would use the extra support if you have it, but ignore it if you don't) will provoke a complaint when the userland devfsd attempts to autoload the module. You can tell it to ignore that stuff, detailed in devfsd's man page. -- Heather

So, fine, I want it.

[Robos] Hmm, ok

I looked around in the distro CDs, but I couldn't find the char-major-10-135 module. No luck at the Debian site either. Where can I find a copy of this module compiled for the Debian Woody idepci kernel?

[Robos] Actually it has to be compiled into the kernel, either built in directly or loadable as a module. It seems as if they (the Debian kernel package maintainers) did neither. So either you bake your own kernel and tick the appropriate field in make xconfig, or you look (grep) through the configs of some packaged kernels to find one which has RTC set to y or m. BTW, I have this message too on all my machines with hand-made kernels, and it hasn't bothered me a bit till now...
[Iron] char-major-10-135 is a generic name; the module itself won't be called that. Take a look in /etc/modules.conf . The "alias" lines map the generic name to a specific module that provides it, for instance:
alias char-major-10-175 agpgart
In this case, some program or daemon is trying to access the real time clock. You can also create your own aliases; e.g., I name my Ethernet cards according to their interfaces:
alias eth0 3c59x
alias eth1 eepro100
So when my network initialization script does:
modprobe eth0
modprobe eth1
I know eth0 will connect to the 3C905 card (DSL) and eth1 will connect to the EE PRO card (LAN). And if I have to change cards later, I can just change the alias lines and leave everything else alone. (The only thing I can't do is use two cards of the same brand, because then I would have no control over their initialization order except by seeing which PCI slot has the lowest base address: "cat /proc/ioports". If eth0 and eth1 get reversed, the network won't work because packets will get routed to the wrong network.)
Anyway, the easiest way to "fix" your problem is to add an alias:
alias char-major-10-135 off
That tells modprobe to shut up because there is no module for that service. So whatever is asking for that module will abort or do without. Whether that's a problem or not depends on what the program is trying to do and whether you need it. I have never run into problems aliasing char-major-*-* modules off.
Of course, the "correct" solution is to find out what's using the module and disable it if you don't need it.
In my Linux 2.4.17 source, "make menuconfig", "character devices", "Enhanced Real Time Clock support", "Help" (press Help while the cursor is on the RTC line) says the module file is "rtc.o". You can also guess that from the config option name at the top: CONFIG_RTC. That's the file you want from your distribution disk. On Debian it would be in a kernel modules package.
Note that Debian has a configurator for /etc/modules.conf. Instead of editing that file directly, edit /etc/modutils/aliases and then run "update-modules". See "man 8 update-modules".
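Putting Iron's suggestion and the Debian configurator together, the whole fix (assuming you decide you can live without /dev/rtc support) boils down to:
	# echo "alias char-major-10-135 off" >> /etc/modutils/aliases
	# update-modules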


Proxying with MAC address

Sun, 12 Jan 2003 05:00:20 -0800
Jim Dennis (the LG Answer Guy)
Question by Ganesh M (gansh from rediffmail.com)

Thanks to Karl-Heinz Herrmann for bearing with me; just one little question, please.

Is it possible to restrict Internet access by private LAN PCs based on their MAC address instead of their IP address, by any means (i.e., masquerading, proxying, etc.)? Can masquerading and proxying co-exist, and if so, what is the advantage?

Thanks
M Ganesh

It should be possible (though very cumbersome) to configure your networks so that only registered MAC addresses are routed from one internal network to another (including via the border router to the Internet).

Under Linux you could write scripts to do this using the MAC Address Match option/module in the Linux kernel configuration (*) (named: CONFIG_IP_NF_MATCH_MAC in the .config file).


*(Networking Options --> Netfilter Configuration --> IP Tables)
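
With that option compiled in (or built as a module), the rules themselves are ordinary iptables commands. A sketch, with a made-up MAC address and interface name:

	# forward traffic only for this known NIC, drop everything else from the LAN
	iptables -A FORWARD -m mac --mac-source 00:50:BA:85:11:22 -j ACCEPT
	iptables -A FORWARD -i eth1 -j DROP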

However, it's probably an ill-advised strategy. Many people try to limit access in this way by setting up their DHCP servers with known MAC addresses and refusing to give out IP addresses to unknown systems. They then might couple this with monitoring, using the 'arpwatch' package to detect new ARP (MAC/IP address) combinations and 'snort' to warn them of other suspicious network activity.

As for co-existence of IP Masquerading and application-layer proxying: yes, they can co-exist --- and are even sensible in some cases. In fact it's common to use something like IP Masquerading with the Squid caching web proxy --- in its "transparent proxy" configuration.

In general you might use proxies for those protocols that support it, and for inbound connections, while letting systems fall back on IP masquerading for other work (subject to your packet filtering, of course).
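
The usual glue for the "transparent proxy" arrangement mentioned above is a single REDIRECT rule on the masquerading box. The interface name and Squid's port 3128 are conventional assumptions, and Squid itself also needs its transparent-proxy options turned on:

	iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128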

The advantages of application proxy systems fall largely along three dimensions: they can be quite simple, and run in user space, often as a non-privileged process (security and simplicity); they can reflect higher level policies because they have access to the application and session layers of the protocol being proxied (flexibility and control); and they may be able to provide better performance, especially via caching (performance).

However, any particular proxy might not provide real advantages in all (nor even ANY) of these areas. In particular, the Delegate proxy system seems to be riddled with buffer overflows, for example. Squid is a nice caching proxy for web and some other services --- and it has some security and policy management features and optional modules. However, Squid configuration and administration can be quite complicated. It's far too easy to inadvertently make your Squid system into a free anonymizing proxy for the whole Internet, or to make it into an unintentional inbound proxy to your own intranet systems.

While a proxy might have access to the application/session layer data (in the payloads of the IP packets) --- it might not have a reasonable means for expressing your policies regarding acceptable use of these protocols.

Also there are always those new protocols for which no proxies have been written. There will frequently be considerable demand by your users and their management to provide access to the latest and greatest new toys on the Internet (Pointcast was an historic example, Internet radio is, perhaps, a more recent one).

These issues are very complex, and I can't do them justice at 5am after staying up all night ;)


fwd: Re: [TAG] wrestling with postfix...

Sun, 19 Jan 2003 09:01:44 -0800
Dan Wilder (the LG Answer Gang)
Question by Radu Negut (rnegut from yahoo.com)

Hi! After going through the postfix documentation twice, I still couldn't figure out whether it is possible to configure mail for groups (e.g. sales_managers@domain.com) other than by aliasing all group members to that address in /etc/postfix/aliases. Does postfix reread the aliases as well if 'postfix reload' is issued, or only the .cf file? Does 'service postfix restart' reset all mail queues, resulting in dropped/lost mail? I've looked

For alias lists, add stuff to /etc/aliases then run

postalias /etc/aliases

If you don't care whether the new aliases are effective instantly, you're done. Very shortly Postfix will notice the aliases file is updated and will reload it.
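
For example, the group address from the question is just an entry of this shape in /etc/aliases (the member names here are invented):

	sales_managers: alice, bob, carol

followed by another run of "postalias /etc/aliases".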

You may keep aliases in additional files. See the

alias_maps =

parameter in main.cf. You can add as many alias files as you like.

For bigger lists, or frequently changing ones, investigate mailing list software. I use Mailman or majordomo myself. See the URL below.

around but couldn't find whether postfix can be configured to use accounts other than those from /etc/passwd (and I'm not talking about aliases). What I mean is normal mail spools, but for users who are specified in a separate file and who do not have any permissions on the system whatsoever.

Briefly, you can't do normal UNIX mail delivery except to users from /etc/passwd. However, you can do POP3/IMAP delivery to software that maintains its own list of users. You're looking for something like Cyrus. You'll find it under the POP3/IMAP servers section of

http://www.postfix.org/addon.html

Take the time to browse the other pages of the postfix.org site.

-- Dan Wilder


This page edited and maintained by the Editors of Linux Gazette
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/
Copyright © 2003
Copying license http://www.linuxgazette.net/copying.html
Published in Issue 87 of Linux Gazette, February 2003
