A personal web server. Today, almost any Linux user has one. Some folks really do serve content with them; others use theirs to develop PHP or CGI programs. Others, like me, just keep one around to read documentation in the browser and to play with. I decided that running the Apache web server is overkill for my personal applications. I currently have access to a CGI- and PHP-capable provider, so I do not need support for those on my own machine. Plain serving of files is enough, without a huge Apache binary running in the background.
As a result, I decided to stop running my own Apache web server in favor of a simple micro web server that only runs when there is a request to answer. It saves me some disk space and RAM, although that wasn't really a significant factor since my computer has plenty of both. Mostly I wanted to play around with new software and with nifty, small but usable solutions.
Just a few ordinary things, nothing involving PHP or CGI:
This leads to another important point: the web server must support at least some sort of directory indexing. That is, if the final URL component is a directory, it should redirect to that directory (adding the trailing slash) and then serve the index.html in that directory. (The redirect is important so that relative links on the page work correctly.) This could also be done with automated scripts run from cron jobs, but I prefer a simple built-in solution. It doesn't have to be as elaborate as the Apache indexing function, although that one is very nice indeed.
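On the wire, the redirect looks roughly like this (the path is only an example, and the exact status code and headers differ between servers):

    GET /docs HTTP/1.0

    HTTP/1.0 302 Found
    Location: http://localhost/docs/

The browser then requests /docs/, so a relative link such as "manual.html" on that page resolves to /docs/manual.html instead of /manual.html.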
In short: almost any web server that speaks the HTTP protocol will do; it doesn't need many fancy features.
In fact, no. All of this can be accomplished by symlinking external pieces into the web server's root directory; there is no need for "Alias" directives or other complicated options. Just the web server root and I'm happy, perhaps plus a way to change the port the web server listens on.
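To pull in, say, locally installed documentation, a single symlink into the document root does the job (the paths here are only examples):

    ln -s /usr/share/doc /var/httpd/wwwroot/doc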
But nothing more. A simple command line like "binary /path/to/webserver/root" should be sufficient for my purpose.
I decided on a TCP wrapper solution: the web server binary only gets called when there really is a request. There is no need to mess around with init scripts; a single line in /etc/inetd.conf and off we go.
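Such a server can stay tiny because inetd does all the network work: by the time the program starts, the client's connection is already attached to its standard input and output. The following is only a minimal sketch of that idea in C, not micro_httpd itself:

    #include <stdio.h>

    int main(void)
    {
        char line[1024];

        /* inetd hands us the connected socket as stdin/stdout,
         * so we simply read the request line from stdin ... */
        if (fgets(line, sizeof(line), stdin) == NULL)
            return 1;

        /* ... and write an HTTP response to stdout.  A real server
         * would parse the path and send the requested file. */
        printf("HTTP/1.0 200 OK\r\n");
        printf("Content-Type: text/html\r\n\r\n");
        printf("<html><body>Hello from inetd</body></html>\n");
        return 0;
    }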
However, such a solution does not perform well. If you expect more than a few sporadic accesses to your server, go for a standalone server that runs all the time.
Aside from a few really exotic solutions (there are web servers written in Java, bash, or awk out there), I decided to go for a compiled one.
I found a web server called micro_httpd at https://www.acme.com/software/micro_httpd/. It is written in plain C, takes just around 150 lines of code, and does exactly what I want: runnable from the TCP wrapper, no CGI or PHP, plain serving of files with indexing capability.
I just added a few more MIME types in the code and it worked out of the box.
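For the curious, the MIME handling boils down to mapping a file-name extension to a content type, so adding a type is a one-line change. The sketch below only illustrates the idea; the function name, its structure, and the type strings in the real micro_httpd source differ in detail:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: map a file-name extension to a MIME type.
     * Adding a type means adding one more comparison. */
    static const char *figure_mime(const char *name)
    {
        const char *dot = strrchr(name, '.');

        if (dot == NULL)
            return "text/plain";
        if (strcmp(dot, ".html") == 0 || strcmp(dot, ".htm") == 0)
            return "text/html";
        if (strcmp(dot, ".jpg") == 0 || strcmp(dot, ".jpeg") == 0)
            return "image/jpeg";
        if (strcmp(dot, ".png") == 0)   /* an added type */
            return "image/png";
        if (strcmp(dot, ".css") == 0)   /* another added type */
            return "text/css";
        return "text/plain";
    }

    int main(int argc, char **argv)
    {
        /* Quick test: print the MIME type guessed for a file name. */
        printf("%s\n", figure_mime(argc > 1 ? argv[1] : "index.html"));
        return 0;
    }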
Grab the sources of micro_httpd and unpack them.
Become root and edit /etc/inetd.conf with your favorite editor. Add a line
    http stream tcp nowait wwwrun /usr/sbin/tcpd /usr/local/sbin/micro_httpd /var/httpd/wwwroot/

to it and restart the Internet super-server inetd.
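For reference, the fields of that line mean the following:

    http                          the service name, i.e. port 80 as listed in /etc/services
    stream tcp                    socket type and protocol
    nowait                        start a new instance for every connection
    wwwrun                        the user account the server runs under
    /usr/sbin/tcpd                the TCP wrapper that inetd actually executes
    /usr/local/sbin/micro_httpd /var/httpd/wwwroot/
                                  the real server and its document root, passed on by tcpd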
On my SuSE 7.2 Linux, I type "/etc/init.d/inetd restart" as root.
Make sure to substitute "/var/httpd/wwwroot/" in the example above with the correct path to your new document root.
Substitute wwwrun with any valid user account, preferably one that has almost no rights on the system, for security reasons.
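Creating the document root can be as simple as this (the path is only an example; the server account merely needs read access, so world-readable permissions are enough):

    mkdir -p /var/httpd/wwwroot
    chmod -R a+rX /var/httpd/wwwroot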
Now try it out: put a few HTML files in your new WWW root and make them readable by the user account you specified. Then point your favorite browser to http://localhost/. You should get either an automated index or your index.html file.
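If you want to watch the raw HTTP exchange instead, you can talk to the server by hand; the response shown here is only an example of what such a server might send:

    $ telnet localhost 80
    GET / HTTP/1.0
                                  (followed by an empty line)
    HTTP/1.0 200 OK
    Content-Type: text/html

    ... your page or the generated index follows ...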
Got this far? Great, your small and micro web server is up and running.
Note: The TCP wrapper logs all connects to the server to /var/log/messages. But don't expect a complete Apache-style log from it, just plain lines like this:
    micro_httpd[886]: connect from x.x.x.x (x.x.x.x)

However, with some knowledge of the HTTP protocol and of the code, it should be possible to add a more advanced logging facility. I leave that one up to you.
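To pull just those entries out of the syslog, a simple grep is enough:

    grep micro_httpd /var/log/messages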
In general, any web server that can be run from inetd can be set up like this one, so look around at Freshmeat.
If your needs are as simple as that, it takes a few minutes to switch from Apache to such a minimalistic solution.
It works pretty well, although I'm aware that this solution will fail if there are too many requests. For a simple personal web server without heavy traffic it should be sufficient.
At least I'm a bit happier now. Perhaps such a solution would suit your needs as well?
[There's also Tux, a micro web server implemented as a Linux kernel module. It works similarly to micro_httpd, and can chain to a bulkier web server for URLs it can't handle (e.g., CGI scripts). But note that Tux and micro_httpd serve different niches. Tux is for high-traffic sites that serve lots of simple files (e.g., images) and must keep per-request overhead low to avoid overloading the system. micro_httpd via inetd is for sites with light web traffic, where the greater overhead of running a separate process for each request is outweighed by having no overhead at all when there are no requests. Of course, both micro_httpd and Tux serve a third niche: nifty small usable solutions you can play with. Or as LG contributing editor Dan Wilder would say, "small sharp tools that each do one thing well in the honorable UNIX toolbox tradition." For more information about Tux, see Red Hat's Tux 2.1 manual. I thought Tux was in the standard kernel but I can't find it in 2.4.17, so you'll have to look around for it. -Iron.]