While most people know to turn off any services they don't want to offer the world, many do not realize this applies at the interface level as well as the service level.
[Kapil] Besides editing the configuration files of the individual daemons that open the listening sockets, you can also use iptables/ipchains to block the (ir)relevant address/port pairs.
Here is the relevant portion of a file called "iptables.save" on a machine that runs a public web server and also accepts ssh connections.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -d 127.0.0.1 -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
COMMIT
You can enable this with
iptables-restore < iptables.save
You can add or remove ports according to which connections you wish to accept. You should probably also accept some ICMP traffic in order to avoid losing routing and error information.
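For example, rules along these lines could go just before the COMMIT line (exactly which ICMP types to allow is a judgement call; these three are only an illustration):

-A INPUT -p icmp -m icmp --icmp-type destination-unreachable -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type time-exceeded -j ACCEPT
-A INPUT -p icmp -m icmp --icmp-type parameter-problem -j ACCEPT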
A typical networked computer has two interfaces: lo (the loopback) and eth0 (the Ethernet). Most daemons listen on all interfaces unless you tell them otherwise. Obviously, your web server, mail server, and CUPS (printer) server must listen on the public interface if you want other computers to access them. But if you're running mail or CUPS only for your own computer, you should make them listen only on the localhost. This eliminates a bunch of security vulnerabilities, because inaccessible programs can't be exploited.
There are several portscanners available but good ol' netstat works fine if you're logged into the target computer.
# netstat -a --inet
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 *:631                   *:*                     LISTEN
tcp        0      0 *:https                 *:*                     LISTEN
udp        0      0 *:bootpc                *:*
udp        0      0 *:631                   *:*
Add the "-n" option to bypass domain-name resolution. Here we see the secure web server listening on all interfaces (*:https), good. But CUPS is also listening on all interfaces (*:631, both TCP and UDP), bad. (We know port 631 is CUPS because that's what we type in our web browser to access the admin interface.) To make CUPS listen only on the localhost, I edited /etc/cups/cupsd.conf, commented the "Port 631" line and added "Listen localhost:631". (I like Listen better than Port because it shows in one line exactly which host:port combinations are in effect.) Note that you can't specify an interface directly, you have to specify the domain/IP attached to that interface.
Then I restarted the server and checked netstat again:
# /etc/init.d/cupsd restart
 * Stopping cupsd...                      [ ok ]
 * Starting cupsd...                      [ ok ]
# netstat -a --inet
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 localhost:631           *:*                     LISTEN
tcp        0      0 *:https                 *:*                     LISTEN
tcp        0      0 10.0.0.1:32775          example.com:imaps       ESTABLISHED
udp        0      0 *:bootpc                *:*
udp        0      0 *:631                   *:*
Good, the TCP line changed to "localhost:631". The UDP line is still "*:631". I searched the config file and "man cupsd" for "udp" but found nothing. I guess that means you can't turn it off? I decided not to worry about it.
There's a new line in netstat: "10.0.0.1:32775 to example.com:imaps". It looks like Mozilla Thunderbird is automatically checking for mail. 10.0.0.1 happens to be my public IP. (IPs/domains changed to protect the innocent.) It connected to the secure IMAP port on example.com. 32775 is an ephemeral port the kernel chose for the outgoing connection, as always happens when you connect to an external server.
There's still one suspicious line, "*:bootpc". I'm not running a diskless workstation or doing any exotic remote booting, so what is this? "lsof" is a very nifty program that tells you which process has a file or socket open.
# lsof -i :bootpc
COMMAND  PID USER   FD   TYPE DEVICE SIZE NODE NAME
dhcpcd  3846 root    4u  IPv4   5398      UDP  *:bootpc
I am using DHCP, which runs this daemon while you're leasing an IP. I ran "man dhcpcd" and searched for "bootpc" and "port". Nothing. I guess it uses that port for some unknown reason. I decided not to worry about it.
[Kapil] Not quite. You shouldn't be running the dhcp-server (which is what the dhcpd program is). You are using dhcp in client mode so you should disable dhcpd from starting up.
[Peter] True, but the program in question listening on UDP port 68 (bootpc) is "dhcpcd", not the DHCP server, which is indeed named "dhcpd". When a client requests a DHCP address, a process (either "dhclient" or "dhcpcd") listens on UDP port 68.
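A quick way to check which of the two you actually have running (the names map to 67/udp and 68/udp in /etc/services):

# lsof -i :bootps    # UDP 67 - only a real DHCP *server* (dhcpd) should show up here
# lsof -i :bootpc    # UDP 68 - the DHCP *client* (dhcpcd or dhclient) shows up here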
It's eleven o'clock. Do you know which services your computer is running?
[Pedja] OK, what about this?
pedja@deus:~ ]$ netstat -a --inet
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 *:6000                  *:*                     LISTEN
That's X server, right?
root@deus:/home/pedja# lsof -i :6000
COMMAND PID USER   FD   TYPE DEVICE SIZE NODE NAME
X      3840 root    1u  IPv6   9813      TCP  *:6000 (LISTEN)
X      3840 root    3u  IPv4   9814      TCP  *:6000 (LISTEN)
I should add something like '-nolisten tcp' to the options that are passed to X when it starts (I use startx to, well, start X). My question is: where?
[Thomas]
/etc/X11/xinit/xserverrc
is the file you're looking for. By default (on most distros, anyway), '-nolisten tcp' is already set.
[Pedja] There's no xserverrc in Crux, so I made one with
#!/bin/sh
exec /usr/X11R6/bin/X -dpi 100 -nolisten tcp
in it. I've put it in my home folder.
[Pedja] Should I make an alias in .bashrc, like
startx () { /usr/X11R6/bin/startx -- -dpi 100 ${1+"$@"} 2>&1 | tee $HOME/.X.err ; }
or modify .xinitrc in ~, or... What's The Right Thing(tm) to do?
[Thomas] No alias. See above.
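Once X has been restarted with '-nolisten tcp', a quick check should come back empty:

$ netstat -an | grep 6000    # no output means nothing is listening on the X port any more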
Hi,
I am a beginner with GNU/Linux. I am using kernel 2.4.20-8 on Red Hat Linux 9.
I have written a TFTP client and server. I have created a UDP socket and, as per the RFC, I am sending a structure with the proper TFTP header and then the data.
It is working fine and I am able to send and receive files.
My problem is that when I use Ethereal and tell it to capture TFTP on the specified port, it shows the packets as UDP + data. I think I should see a UDP header, then a TFTP header, and then the data. But this is not happening in my case; my TFTP header also shows up as data.
How can I solve this problem?
[Breen] You're not by chance using a non-standard port for your tftp server, are you? If the traffic isn't on port 69/udp, ethereal won't know to decode it as TFTP.
[Ben] I think that your best bet would be to look at a standard TFTP conversation and compare it to yours. There may be some subtle difference that you're missing, or perhaps a part of the RFC that you're misinterpreting.
I don't have any guide... hoping to get a reply from you people.
[Ben] I have not read it myself, but I understand that Richard Stevens' "UNIX Network Programming" series is the classic reference for this kind of work.
Hi Breen,
You are right... I had used a non-standard port, so it was not showing up as TFTP.
[Breen] Hi Deepak --
I've got two requests:
1) Please don't post html. Email is a text medium.
2) When you ask a question on a mailing list, you should follow up on the mailing list. That allows all subscribers to benefit from the answer you receive. I've added The Answer Gang back to the recipients of this email.
Glad we were able to help you!
Is there any way to have multiple HTTPS domains on the same IP/port? The mod_ssl FAQ says name-based virtual hosts are impossible with HTTPS [1]. I've got two sites currently on different servers. Each is distinguished by a path prefix ("/a" and "/b"), so they aren't dependent on the domain name and can be installed in the same virtual host. The boss wants them consolidated on one server, and to plan for additional sites in the future. The problem is the certificates. A certificate is domain-specific, and it looks like you can have only one per virtual host.
So person A types https://a.example.com/a/ and it authenticates fine, but person B types https://b.example.com/b/ and gets a "domain does not match certificate" dialog. (I have seen this in some cases, but haven't gotten it in my tests. But it may be because we're still using unofficial certificates and getting the "unknown certificate authority" dialog instead.) The only solutions seem to be using a general domain for all the sites, getting a separate IP for each one, or running them on nonstandard ports.
[1] https://www.modssl.org/docs/2.8/ssl_faq.html ("Why can't I use SSL with name-based/non-IP-based virtual hosts?")
[Jay] Correct. You can't have more than one SSL server per IP address, because the certs are IP based, not domain name based.
They have to be, if you think about it, because you can't spoof IP [1] the way you can spoof DNS.
[1] unless you manage a backbone.
[Brian] I think, if your example is true, then [IIRC, you'll have to do more research] you can spend the bucks to get a wildcard cert that will handle [a-g].example.com/blah just fine. Alternatively, get extra IP addresses, alias the eth as needed, and multiple single-host certs can be applied. That works just fine. A separate set of SSL stanzas in each virtual host section, virtual host by number, not by name.
You may, in that case, actually want to run a separate invocation of apache for the SSL side of things, so that you can do IP-based virtual hosts for SSL, and name-based virtual hosts for port 80.
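A rough sketch of that layout (the IPs, hostnames, and certificate paths below are made up):

<VirtualHost 10.0.0.1:443>
    ServerName a.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache/ssl/a.example.com.crt
    SSLCertificateKeyFile /etc/apache/ssl/a.example.com.key
</VirtualHost>

<VirtualHost 10.0.0.2:443>
    ServerName b.example.com
    SSLEngine on
    SSLCertificateFile    /etc/apache/ssl/b.example.com.crt
    SSLCertificateKeyFile /etc/apache/ssl/b.example.com.key
</VirtualHost>

# The plain-HTTP side can stay name-based on a single address:
NameVirtualHost *:80
<VirtualHost *:80>
    ServerName a.example.com
    DocumentRoot /var/www/a
</VirtualHost>
<VirtualHost *:80>
    ServerName b.example.com
    DocumentRoot /var/www/b
</VirtualHost>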
[Ramon] Because encryption is set up before any HTTP headers are sent, name based vhosting with multiple certificates is not possible.
The only thing that does work is multiple vhosts with one certificate that validates all of them. I've done that successfully with a project vhost server on ssl for multiple software development projects. You can get a wildcard certificate from rapidssl https://www.rapidssl.com for $199.
They're a dirt-cheap certificate provider, BTW: $69 for a two-year standard webserver certificate accepted by most (if not all) browsers.
If it were a small organization that would be a possibility. But we're part of a large organization and can't monopolize the entire domain (*.example.com). At the same time the sites are for multiple departments, and we haven't been able to come up with a *.subdomain.example.com that would satisfy all of them.
Oh wait, you're talking about wildcard IPs rather than wildcard domains? (checking rapidssl website) No, it is domains.
Hmm, getting a wildcard certificate would obviate the need for multiple certificates but that's actually the lesser of our problems. The greater problem is getting more IPs, justifying them, and putting in a new subnet for them. But I guess I'll tell management that if they really want these multiple domains on one computer, they'll have to factor a new block of IPs into the price.
Has anybody had experience with https://cert.startcom.org/ ? It appears to be a nonprofit project geared toward free certificates.
"The StartCom Certification Authority is currently undergoing an initial self and successive third party audit as required by various software vendors, such as Microsoft and Mozilla. This will lead to the natural support of the StartCom CA by the most popular browser and mail clients. Right now you still have to import our CA certificate into your browser, but chances are, that, during the life-time of your certificate (one year), your certificate will be supported without the need of the CA import."
Probably not an option for us yet, but it looks worth watching.
Duh, our netadmin pointed out that when the second site is moved over, we can take the IP from that computer. And my other site will replace seven other servers so we can take their IPs too. That'll last us past the foreseeable future. Anybody got a few HTTPS sites they need hosting for a year or two? (Just kidding.)
Mozilla has started hogging my screen. I can select other windows, but if Mozilla is maximised it remains in front of them. There is presumably a setting somewhere that is causing this behaviour, but the only setting I can find I can't seem to change. FYI, this is in KDE.
If I right-click the Mozilla title bar and select advanced->special window settings->preferences, there is a checkbox on either side of the "keep above" setting. The checkbox on the right is checked and greyed out. With a little fiddling I can get it unchecked, but if I click OK and then reopen the window to check, I find that it is selected again.
I don't know if that setting is the source of the problem, but the other windows don't have it checked, so it's a good candidate.
Any ideas how to fix this one?
OK. Going down into the "special window settings" wasn't necessary. If I just use "advanced->keep above others" it toggles that checkbox. It's annoying and a little confusing that it can't be changed from "special window settings".
[Ben] Hmm. Perhaps one or two - my Firefox started doing some ugly thing a while back, so I whacked it over the head a couple of times, and will happily relate what LART I used. Mind you, this is in the nature of shotgunning rather than troubleshooting (I can hear the sounds of retching from the other techies here, but, hey, it works - and I didn't feel like pulling down a hundred meg or so of code and wanking through it.)
- Move your ~/.mozilla to, say, /tmp/DOTmoz.
- Start Mozilla.
- If $UGLY_BEHAVIOR is still present, uninstall the mozilla package (making sure to blow away, or at least _move_ away all the stuff in "/usr/lib" and "/etc") and reinstall from scratch. If it's still there, curse life and file a bug. Otherwise -
- Make a copy of your new ~/.mozilla (as, say, /tmp/DOTmoz_default.) Start replacing the subdirectories in the one in $HOME, one at a time, from /tmp/DOTmoz until the problem reappears. Narrow it down to the specific file, then diff that file against the default one. The line causing the problem should be relatively obvious - since Mozilla uses more-or-less sensible, descriptive names for their config variables.
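In shell terms, the first few steps boil down to something like this:

mv ~/.mozilla /tmp/DOTmoz                # stash the old profile
mozilla &                                # start with a fresh profile - still ugly?
cp -a ~/.mozilla /tmp/DOTmoz_default     # keep a copy of the pristine config
# ...then copy the old subdirectories back from /tmp/DOTmoz into ~/.mozilla
# one at a time, restarting Mozilla after each, until the problem reappears;
# finally, diff the offending file against its copy in /tmp/DOTmoz_default.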
To (mis)quote the folks at the Mozilla Project, "it worked for me."
I'd say this was starting from the wrong end. Possibly my fault because I flagged it as Mozilla hogging the screen. With window behaviours like this, it's far more likely to be a window manager issue.
I have solved the problem now. You should have seen a followup email on the list.
[Ben] I've had similar problems (back in Netscape days, actually), and thought that it was the WM originally - it just made sense. Turned out to be that Netscape was doing some of its own craziness, at least in that case; I can definitely see where it could just as easily be the WM.
Hi Everyone, I have a couple of questions for the perl experts that seem to lurk around the TAG mailing list.
[Ben] Never heard of any around here. However, I do play one on a center stage once in a while, so I'll try to help.
I was playing around with the Yahoo Search API and decided to write a program that uses it to search for images based on user input and creates a collage from the results. I actually managed to get it to work (https://scripts.suramya.com/CollageGenerator) but need some help in fine tuning it.
The program consists of two parts: the frontend, which is a PHP page, and the backend, which is a Perl script. The PHP frontend writes the user input to a MySQL DB, which another Perl script I call wrapper.pl checks frequently; when it finds a new row, it calls collage.pl, which creates the collage.
[Jimmy] Um... is there any reason why the information has to be in a database? It seems like you're overcomplicating things: PHP is able to download files (IIRC, fopen can open from URLs), and Perl is well able to do CGI (use CGI), and can be embedded in HTML like PHP using HTML::Embperl (https://search.cpan.org/~grichter/HTML-Embperl-1.3.6/Embperl.pod). This page (https://www.cs.wcupa.edu/~rkline/perl2php) has a Perl to PHP 'translation', but it's also good for the other direction.
You can also directly embed Perl in PHP (https://www.zend.com/php5/articles/php5-perl.php), and PHP in Perl (https://search.cpan.org/~karasik/PHP-0.09/PHP.pm https://search.cpan.org/~gschloss/PHP-Interpreter-1.0/lib/PHP/Interpreter.pm), and Perl can read some PHP directly (https://search.cpan.org/~esummers/PHP-Include-0.2/lib/PHP/Include.pm).
The original machine where my site was hosted was not a very powerful machine so the collage creation took ages.
So I decided to use a client-server model where I could run the backend on multiple machines and have each of them process a small portion of the requests the system got. That's why there's a DB involved: so I can keep track of who's working on which query, and the backend can run on my home machine or a different, more powerful system.
Right now I am running just one backend process, but once I get most of the bugs worked out I will probably put them on the other systems I have. (Just to decrease the wait time.)
Thanks for the links, though. They will be useful in other programs I am thinking about.
Now my first problem is that I am using the following function to download the images to the local system for processing, and I am not comfortable with it:
sub download_images
{
    my $url = shift;

    $url =~ s/\"/\%22/g;
    $url =~ s/\&/\%26/g;
    $url =~ s/\'/\%27/g;
    $url =~ s/\(/\%28/g;
    $url =~ s/\)/\%29/g;
    $url =~ s/\*/\%2A/g;
    $url =~ s/\+/\%2B/g;
    $url =~ s/\;/\%3B/g;
    $url =~ s/\[/\%5B/g;
    $url =~ s/\]/\%5D/g;
    $url =~ s/\`/\%60/g;
    $url =~ s/\{/\%7B/g;
    $url =~ s/\}/\%7D/g;
    $url =~ s/\|/\%7c/g;

    # print "Getting " . $url . "\n";
    `wget -T 1 -t 1 -q $url`;
}
Is there a way I can download the images quickly to my computer without having to use wget? I download up to 10 images each time to create a collage. I don't like passing results I get from the net directly to a shell, but this is the only way I could get it to work. Another disadvantage of wget is that if it can't download an image, it takes forever to time out and go to the next URL in the list.
[Ben] Take a look at the LWP toolkit at https://cpan.org ; it contains support for any kind of HTTP/FTP/NNTP/etc. usage you might want from within Perl. The above can be done this way:
use LWP::UserAgent;
use HTTP::Request;

# Create user agent
my $u = LWP::UserAgent -> new;
# Configure the UA however you want - e.g.,
$u -> timeout( 10 );
# Create request
my $r = HTTP::Request -> new( GET => "$url" );
# Pass request to UA
my $ret = $u -> request( $r );
print "Error fetching $url" if $ret -> is_error();
There are much simpler ways to do it - i.e.,
perl -MLWP::Simple -we 'mirror "https://foo.bar.com"'
does the whole thing in one shot - but it's not nearly as flexible as the above approach, which allows tweaking any part of the interaction.
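(As an aside, the hand-rolled escaping in download_images could be handled by the URI::Escape module instead - a sketch; exactly which characters you treat as unsafe is your call:)

use URI::Escape;

# Escape roughly the same set of shell-unfriendly characters the original
# substitutions handled; the list is passed as a character class.
my $unsafe  = q{"&'()*+;\[\]`{}|};
my $escaped = uri_escape( $url, $unsafe );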
Thanks for the info. I will check out this package. It looks like it does what I want. How is this package speed wise/resource usage wise?
[Ben] Forgot to mention: this is untested code, just off the top of my head - but stands a reasonably good chance of working. See 'perldoc LWP::UserAgent' and 'perldoc HTTP::Request' for the exact public interface/usage info.
Ha ha, don't worry, I had guessed that this was the case. After all, I can't expect you to do all the work for me... I will try out the code and let you know how it went.
The second problem is that my mysql connection seems to drop at random times during execution. What can I do to prevent the mysql server from going away?
[Ben] 1) Stop shelling out. If in doubt, read "perldoc perlsec" (Perl security considerations) - and then stop shelling out. This includes command substitution (backticks) as well as the 'system' call.
2) In any interaction involving file system calls the timing of which could affect the Perl functions, force the buffers to autoflush by setting the '$|' variable to non-zero. Oh, yeah - and stop shelling out.
Below is the code I use in wrapper.pl to check the DB for changes:
See attached wrapper.pl.txt
The script usually dies around the last $update->execute. What I think might be happening is that collage.pl is taking too long to run and the DB connection times out - is that possible? Can I force the connection not to time out? (I did try searching on Google but didn't find any way of changing the keep-alive variable from a script.)
Any ideas/suggestions? Thanks in advance for the help.
PS: Any suggestions on improving the script would be most welcome as I am using this to try to learn perl.
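One thing I am thinking of trying - just a sketch assuming DBD::mysql, with made-up table and column names since the real ones are in the attached script - is to ping the handle after collage.pl finishes and reconnect before the final execute:

use DBI;

# Sketch only: the DSN, credentials, table and column names are placeholders.
my $dsn = "DBI:mysql:database=collage;host=localhost";
my ( $user, $pass ) = ( 'collageuser', 'secret' );

my $dbh = DBI->connect( $dsn, $user, $pass, { RaiseError => 1 } );

# ... fetch the new row, then run collage.pl - this can take a long time ...

# If the server dropped the connection in the meantime, reconnect and
# re-prepare before the final UPDATE.  (DBD::mysql also has a
# mysql_auto_reconnect attribute that does roughly this for you.)
unless ( $dbh->ping ) {
    $dbh = DBI->connect( $dsn, $user, $pass, { RaiseError => 1 } );
}
my $request_id = 42;    # placeholder
my $update = $dbh->prepare( "UPDATE requests SET status = 'done' WHERE id = ?" );
$update->execute( $request_id );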
I'm trying to get rsync access to an OS X server with a paranoid sysadmin who doesn't know much about Unix programs. (He's a GUI kind of guy.) He's offered me FTP access to one directory, but I'd really like to use rsync due to its low-bandwidth nature and auto-delete feature (delete any file at the destination that's been deleted at the source). His main desire is not to grant a general-purpose account on the server, so if I can convince him that rsync+ssh can be configured to grant access only for rsync in that directory, I may have a chance. But since they're two separate programs (as opposed to *ftpd and mysqld, which can have private password lists for only their program), I'm not sure how to enforce that. Would I have to use rsyncd alone, which I guess means no encryption? (Granted, ftp has no encryption either, but I think he's just using that due to lack of knowledge of alternatives.)
(And when is ssync going to arrive, to avoid this dual-program problem?)
[Benjamin] Take a look at rssh (https://www.pizzashack.org/rssh/index.shtml) or scponly (https://sublimation.org/scponly) - both can be used together with ssh to restrict access to just rsync.
However, access to a single directory would probably require a user jail -- all is explained in the rssh and scponly docs, but it's not really for your "GUI" types.
[Kapil] I suppose you mean something that combines ssh and rsync. In any case, your particular problem might be solved by means of an authorized_keys file entry that looks like this (it is all one logical line, wrapped here for readability):
from="202.41.95.13",command="rsync -aCz --server --sender $SRCDIR .", no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-dss AAAAB3NzaC1kc3MAAACBAMzBqJtkx52y3IL9g/B0zAna3WVP6fXDO+YZSrb8GsZ2Novx+1sa X/wBYrIXiKlm0LayvJpz7cF17ycWEyzANF9EivYbwMXPQWecxao82999SjKiM7nX2BUmoePN iUkqpnZuilBS2fPadkTZ7CSGevJ2Y9ryb1LOkFWkdBe2c4ETAAAAFQCUk+rB5HRYj0+KIn5H fiOF0dQtvwAAAIA7ezUaP6CpZ45FOJLunipNdkC0oWj3C5WgguwiAVEz3/I5ALAQ9hdmy+7+ Bi0hUGdkTcRoe4UPMgsahdXLZRNettMv+zdJiQqIiEnmzlQxlNY2LBlrfQyRwVU1SW3QWGog tssIoeOp9GRx7N5H2ZAzMoyGBaUDfHVQueI2BeJiGwAAAIEAhV0XWIm8c2hLAGRyYZeqi5G3 yDAB0DZPyiuYyfnBav3nqjkO65faDuajKUv6as90CGYVlixWO+tjOL+/D9c0DoqdMllPEiZw aGBxd96o1pVrsFw/Ff0jJOtxzj+/Tzgaw9AdI0Usgw1cfXwWS1kJlhXqR00O/ow/XATejWpW 0i8= kapil@neem
Here you must put the appropriate source directory in $SRCDIR.
The authorized_keys file can be put in a dummy user's directory. This dummy user should have appropriate read/write permissions for the directory in question.
As an alternative you can use a configuration file "--config=$FILE" in place of $SRCDIR.
Once this is done, the owner of the SSH private key associated with the public-key (which is the bit that starts ssh-dss AAA....) can connect to the ssh server and start the above command and only the above command.
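From the client end, the transfer would then be started with something like this (the host and paths are placeholders):

rsync -aCz -e ssh dummyuser@server.example.com:/path/to/srcdir/ /local/copy/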
Hi there,
I'm not a TAG subscriber, so I can't see the list archives to verify, but hopefully this mail isn't repeating something that you've already had a dozen times this month.
[Thomas] So far, you're the first.
From September's gazette: "my machine only boots from floppy, and I want it to boot from cd" might be addressed with a smart boot manager, such as sbm. The debian (sarge) one credits James Su and Lonius as authors, and says it was downloaded from https://www.gnuchina.org/~suzhe , but it looks like the useful content can now be found at https://btmgr.sourceforge.net
[Thomas] Indeed. It has been mentioned in the LG in the past (twice by me, and once by Ben, I believe.)
[Ben] Wasn't me; I hadn't run across SBM until now.
[Thomas] It's OK, and provides a lot of elaborate features that can be quite interesting on certain types of hardware, it has to be said.
[Ben] As is often the case, Debian already has it as a package (pretty amazing folks, those Debian maintainers!) -
ben@Fenrir:~$ apt-cache search smart boot
bmconf - The installer and configurator of the Smart Boot Manager
sbm - Smart Boot Manager (SBM) is a full-featured boot manager
As Francis has already mentioned, though, it won't boot USB devices. Too bad; that would make it quite useful, especially given that modern kernels are too big to fit on a floppy anymore.
By the way - the fact that they are too big annoys the hell out of me. There are plenty of folks out there who need floppy-based booting - troubleshooting and booting weird hardware configurations are two situations where that capability can be critical - and "new systems all come with a CD-ROM" is NOT equivalent to "all existing systems have a CD-ROM". Yeah, older kernels, whatever; as time goes on, those become less and less useful - and support less and less common hardware. I'll admit that I'm coming from ignorance here, but - there should have been a way to make the kernel modular enough to provide the "compile small kernel" option instead of just losing this important capability.
Thanks for the reply. Oops -- I hadn't spotted that. I did try searching for "sbm", and all I found was a (presumably) mis-spelled samba config file. But now that I try again, searching for "smart boot manager", I see that it does appear in the archives.
No harm done.
"sbminst" it to a floppy to confirm that it can use your hardware, then consider putting it in your primary disk mbr, consigning lilo or other boot loader to a partition or secondary disk. Of course this last bit presumes that "my machine only boots from floppy" really means "my machine only boots from floppy or one hard disk", but that's probably a reasonable assumption.
Worked for me with an ATAPI cd drive that the BIOS didn't like. I suspect it won't work with the SCSI cd in the original problem, sadly. And am almost certain that it also won't work with the USB stick in the original original problem. So it isn't a full solution -- or even a useful solution in these specific cases -- but it might help someone with a slightly different problem.
Hi there,
I'm wondering if it's wise to allow a remote user within the LAN to log in as root, by adding that user's public key to root's "authorized_keys" for that machine.
[Kapil] There is an "sudo"-like mechanism within SSH for doing this. In the authorized_keys file you put a "command=...." entry which ensures that this key can only be used to run that specific command.
All the usual warnings a la "sudo" apply regarding what commands should be allowed. It is generally a good idea to also prevent the agent forwarding, X11 forwarding and pty allocation.
Here is an entry that I use for "rsync" access. (I have wrapped the line and AAAA.... is the ssh key which has been truncated).
from="172.16.1.28",command="rsync -aCz --server --sender . .", no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-dss AAAA..... rsyncuser
I'm writing some scripts to back up data on our small business network here. One option is to get each machine to periodically dump its data on a specific machine using NFS. The option I'm interested in is to get a designated machine to remotely login to each machine and transfer the files over a tar-ssh pipe.
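Roughly, what I have in mind for each machine is something along these lines (the hostname and paths are just placeholders):

ssh root@client.example.lan 'tar czf - /root /var/lib/rpm' | tar xzf - -C /backups/client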
The only reason to be using root access is because some directories (/root, some in /var/lib) can only be read by root. Would changing permissions (e.g. /var/lib/rpm) affect anything, if I chgrp the directories to a "backup" usergroup?
I'm concerned with one machine, a web server, that will be included in the backup scheme. All machines here use Class A private network addresses and are behind a NAT firewall, but the web server can be accessed from the Internet. Will allowing root login over ssh on that machine pose a huge security risk, even by allowing ssh traffic from only the local network?