...making Linux just a little more fun!
[ In reference to the article Stepper motor driver for your Linux Computer in LG#122 ]
jovliegen at gmail.com
Sun Apr 16 05:20:36 PDT 2006
Hello, First of all, I'd like to thank you guys for this great "Gazette". I really do enjoy it !!!
I'm trying to build the stepper motor driver that you published in issue 122. While reading the code, I have a little remark. I'm not quite sure that I'm right, but ... what do I have to lose :D
On line 20 : "static int pattern[2][8][8] = {"
Shouldn't that be pattern[2][2][8] ?
Like I said before, I'm not sure of this ... just wondering ...
Thanks again for your nice work, all of you
[Jason] - Thanks! It's great to hear from readers.
I went and looked at that author's code, and it sure looks like you're right. Note that it doesn't hurt anything to declare the array as larger than you need it, but it's probably not the best style.
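(Purely to illustrate the sizing point - the identifier names below are made up, not taken from the LG#122 article - here is what the two declarations cost if the middle index is only ever 0 or 1:)

	/* sizes only - the actual initializers are in the LG#122 article */
	static int pattern_as_declared[2][8][8];   /* 2*8*8 = 128 ints, mostly unused        */
	static int pattern_as_needed[2][2][8];     /* 2*2*8 =  32 ints, if the middle index  */
	                                           /* only ever takes the values 0 and 1     */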
There's a few other things in that code that I might have done differently. For instance, his "step" function looks like this:
int step()
{
	if(k<8) {
//		if(pattern[i][j][k]==0) {
//			k=0;
//			printk("%d\n",pattern[i][j][k]);
//			k++;
//		}
//		else {
			printk("%d\n",pattern[i][j][k]);
			k++;
//		}
	}
	else {
		k=0;
		printk("%d\n",pattern[i][j][k]);	/*#####*/
		k++;					/*#####*/
	}
	return 0;
}

Note that this does not have the exact same behavior as the original function. In the original function, after step() returns, k is in the range [1,8], inclusive. In the version I give, k is kept in the range [0,7], inclusive, which is the correct range if k is being used as the index of an array of length 8.
Unless I'm really missing something, I don't think the actual code to write the parallel port was included in that article.
Thanks for taking the time to point this out.
[[Ben]] - As I recall, the author used a shell script to prod the device.
[ In reference to the article uClinux on Blackfin BF533 STAMP - A DSP Linux Port in LG#123 ]
Robin Getz (rgetz at blackfin.uclinux.org)
Sat May 6 09:09:25 PDT 2006
In https://linuxgazette.net/123/jesslyn.html Jesslyn Abdul Salam wrote:
> The Blackfin processor does not have an MMU, and does not provide any
> memory protection for programs. This can be demonstrated with a simple
> program:
This is only true if you leave this feature turned off.
The Blackfin processor does include hardware memory protection, which the uClinux kernel supports - but since it affects performance, most embedded developers only turn it on when doing development, and it is turned off by default.
https://docs.blackfin.uclinux.org/doku.php?id=operating_systems#introduction_to_uclinux
When you take:
#include <stdio.h>

int main ()
{
	int *i;

	i = 0;
	*i = 0xDEAD;	/* deliberately writes through a NULL pointer */
	printf("%i : %x\n", i, *i);
}
and run it on the Blackfin/uClinux, you get:
root:~> uname -a
Linux blackfin 2.6.16.11.ADI-2006R1blackfin #4 Sat May 6 11:38:53 EDT 2006 blackfin unknown
root:~> ./test
SIGSEGV
Which is pretty close to my desktop...
rgetz at test:~/test> uname -a
Linux test 2.6.8-24.18-smp #1 SMP Fri Aug 19 11:56:28 UTC 2005 i686 i686 i386 GNU/Linux
rgetz at test:~/test> ./test
Segmentation fault
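(For anyone who wants to play along at home: the test program builds with any ordinary C compiler. The cross-compiler name below is an assumption - the exact prefix depends on which Blackfin toolchain you have installed:)

	bfin-uclinux-gcc -Wall -o test test.c    # cross-build for the Blackfin board (prefix assumed)
	gcc -Wall -o test test.c                 # native build for the desktop comparison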
[ In reference to the article Interfacing with the ISA Bus in LG#124 ]
Abhishek Dutta (thelinuxmaniac at gmail.com)
Mon May 8 01:11:49 PDT 2006
Hi,
Could you please change the address of my website in the article https://linuxgazette.net/124/dutta.html published in the March 2006 issue of linuxgazette.net?
You can get more details and photos related to this project at https://www.myjavaserver.com/~thelinuxmaniac/isa
The website address has changed from mycgiserver.com to myjavaserver.com
--
Abhishek
[Thomas] - I wouldn't have thought so. Most of the mirrors have probably synched by now, not to mention copious other sources. By changing your article now, that'd be known as a fork -- something not desirable at all.
[[Ben]] - Even more importantly - since 'creating a fork' for already-published content doesn't bother me much - it would be essentially pointless; we currently have nearly 60 mirrors and several translation sites, and the only site that we can change is ours (i.e., the root site.) In other words, even if we change it - which I'll do, since it's trivial - anyone searching for your article will still be about 70 times more likely to find the old address rather than the new one.
For anyone contemplating writing an article, this is why I encourage including the pertinent images, sources, etc. as part of the submission: external links die. The author's details - including the URL in their bio - are flexible, since the bios are reloaded monthly by all the mirrors; the article content, once published, is essentially frozen in time.
[Thomas] - The best you can hope for is that I/we/someone publishes this request in the gazette for next month. ;) Of course, given that the talkback feature points here, and not to your personal address, you can at least be assured that any correspondence will reach us and not some void.
[[Ben]] - Excellent point, Thomas! I hadn't considered that particular benefit of it.
[ In reference to the article Build a Six-headed, Six-user Linux System in LG#124 ]
Eric Pritchett (eric at gomonarch.com)
Mon Apr 24 15:07:31 PDT 2006
How would you configure this setup to use usb speakers for each user. I think the ideal solution would be to have a usb keyboard with a built in usb hub that has two usb slots, so you can plug in your mouse and usb speakers. Any thoughts?
[Jimmy] - That would solve the simpler problem: that of having the speakers close to the user; it wouldn't address the other difficulties of using multiple sound cards: of ensuring that the intended sound is directed to the correct sound card, in particular.
Ob. mailing list requests: please don't send HTML email (there's a not-so-tenuous connection between postal workers and the frequenters of mailing lists: you have been warned :), and should you reply, please don't top post: top posting makes the Baby {Jesus, Buddha, Xenu, etc.} cry. :)
If you understood my first paragraph, feel free to stop reading, as (bearing in mind that e-mail to this list is intended for publication) there is a fairly common misconception here; otherwise: USB is smart, but not that smart - the fact that devices are located close to each other physically is not communicated to the OS (that is, the system can't tell that soundcard A is connected to keyboard A, soundcard B to keyboard B, etc.), which is therefore unable to act accordingly.
[[Karl-Heinz]] - If the keyboard is its own hub -- the usb tree hierarchy should be quite able to mirror the physical proximity. I've not read the beginning of this thread -- but I can imagine a setup with two heads and two keyboards where "usb speakers" are dedicated to sound produced on one of the heads.
Just don't expect a standard distribution to do this for you -- this is manual configuration, and it cannot be saved in the user's settings (like KDE) if the users could be on either head.... PAM extensions, like group setting on login for devices like audio, might be able to help with the setup.
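(A minimal sketch of that kind of manual configuration, assuming the USB speakers for a given seat enumerate as ALSA card 1 - check the real index with 'cat /proc/asound/cards':)

	pcm.!default {
	    type hw
	    card 1    # assumed index of this seat's USB audio device
	}
	ctl.!default {
	    type hw
	    card 1
	}

Dropped into that user's ~/.asoundrc, this points ALSA's default device at the USB speakers for that user only.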
[[[Ben]]] - Interesting idea, that. I can imagine a system - something like LDAP in general scope if not in implementation - that assigns configured user resources from the available pool... hm, wish I knew a bit more about creating a project like that. It wouldn't even have to involve kernel programming, since everything could be done at the USB level.
[ In reference to the article Build a Six-headed, Six-user Linux System in LG#124 ]
Claude Ferron (cferron at gmail.com)
Thu Apr 27 05:30:25 PDT 2006
any update on the bugs you were experiencing?
[BobS] - Not really. There is a new nVidia driver which does not fix the problem. Two members of the Xephyr project have written to say that the problem I'm seeing is due to a reset problem in the nVidia driver. Their solution is to run several X sessions inside one big X session. This way the boards only get reset once. The URL for Xephyr is at: https://www.c3sl.ufpr.br/multiterminal/index-en.php .
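(For the curious, the nested-server idea looks roughly like this - a minimal sketch, not the multiterminal project's actual configuration:)

	Xephyr :1 -screen 1280x1024 &    # nested X server, running as a window on the outer display
	DISPLAY=:1 xterm &               # clients are then pointed at the inner display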
[ In reference to the article Build a Six-headed, Six-user Linux System in LG#124 ]
Izzy esteron (izzyesteronjr at yahoo.com)
Sun May 21 06:58:01 PDT 2006
To whom it may concern,
This article by Mr. Smith is amazing. In my opinion, this is the solution all 3rd world countries are looking for to keep hardware cost down. I know the technology is not perfect yet, but it is a great start.
Anyway, does this setup allow each workstation to have their own audio? How about network/LAN gaming?
Thank you for your time,
izzy
[Ben] - Well, the former was noted as a problem in the article; there has also been some discussion on the issue. What it looks like to me is that it would be possible to make this available with a little work in the kernel - but it would either take someone experienced in audio and kernel hacking, or convincing the current maintainers that this is necessary. Conversely, individual external (USB) sound devices might be a solution.
The latter does not seem like it would be a problem at all as long as the CPU and the memory is up to it; most of the load in gaming, as I understand it, is on the video card - and every user has one of their own in that configuration.
[[Izzy]] - Thanks for the reply, it was really helpful.
Last February Mr. Smith posted a message on https://blog.chris.tylers.info . He mentioned that
"The system is VERY unstable. I get a kernel oops fairly often when a user logs out. Has anyone seen this problem before? Any ideas on how to fix it? Also, any suggestions to improve my article? thanks, Bob Smith"Also, there was a couple of replies after this message and showed how to solve the problem by downgrading the nvidia drivers. Has Mr. Smith tried the "fixes"?
Thanks again.
[ In reference to the article HelpDex in LG#125 ]
Diego Roversi (diego.roversi at gmail.com)
Tue Apr 4 01:49:33 PDT 2006
I can see the comics. What's wrong with the good old image formats? Why not PNG or JPG?
[Rick] - Guys, plainly we need to FAQ this.
Diego, this was discussed in the prior issue (#124). Please see: https://linuxgazette.net/124/misc/nottag/flash.html
Note open-source (reverse-engineered) Flash implementations in various degrees of completion, detailed at https://osflash.org/ . Note also excellent and authoritative analysis by Kapil Hari Paranjape at the first URL cited. (Kapil views HelpDex using the open-source swfplayer software, which he finds adequate to the task. I recommend that plus the Flashblock browser extension, https://flashblock.mozdev.org/, so that you retain the ability to view Flash only when you wish, not when some advertiser so dictates.)
Diego, you're welcome to work with us every month to convert the Ecol comic strip to formats you prefer -- without the results sucking, please -- or become a cartoonist and start sending us good cartoons in a graphics format you prefer. Otherwise, your answer would seem to be as above.
[[Ben]] - I don't know that an FAQ entry would be of much use; in my estimate, the percentage of our readers who have looked at our FAQ is very small.
[[[Rick]]] - Interestingly, one of the less-appreciated but substantial benefits of a FAQ has nothing to do with whether people ever bother to read it before asking its questions or re-raising its tired old subjects. That is: When somebody does go there for the 1000th time, you can definitively answer, with near-zero effort, with just a URL.
[[[[Ben]]]] - Which presumes remembering the URL in question... but I do see your point. That "secondary" use of the FAQ has even more benefits than I thought - all natural consequences, to be sure, but my mind wasn't quite twisty enough to see all the rami. In short, kudos.
[[[[[Rick]]]]] - Why, thank you, sir.
You might enjoy the slightly cranky bit of documentation that I posted to the prototype wiki replacement for Silicon Valley Linux User Group's Web site (on one of Heather's machines), starting with header "Standards, Nits, Peeves" on this page:
https://gemini.starshine.org/SVLUG/Teams/Web_Team
I was definitely getting in touch with the curmudgeon within, when I wrote this entry:
website/Website: No such word. Correct to "Web site". No, there's nothing wrong with inventing new words, especially where they make our language more expressive and fill a need not met by existing words. This one doesn't qualify, and is really just the result of people being sloppy. Some would call our cleaning it and similar things off our pages "prescriptivism"; I prefer the term "leadership".
[[[Rick]]] - Over the years, I've used this trick to dispose of innumerable topics after getting tired of discussing them. My first-level expectation is that hardly anyone will read my answers until I send their URLs. Note: I do take advantage of that expectation to expose readers to adjoining text on related questions.
[[[[Ben]]]] - Sneaky. And useful.
[[Ben]] - However, it might make sense to add some sort of a note regarding this issue to the top of the strip. I'll give the wording some thought and glue it in.
[[[Rick]]] - My opinion, yours with a small fee and disclaimer of reverse-engineering rights: Less is better. E.g.,
'' Format: Flash ''

...where "Flash" is anchor text hyperlinking to a FAQ, for which https://linuxgazette.net/124/misc/nottag/flash.html might serve nicely.
Metacomment: Computerists tend to overestimate public willingness to deal with explanatory text. The public at large tends to react badly to verbosity, assuming its contents to be
- unimportant, and/or
- open to debate
This is why we have "STOP" signs, rather than "Stop, though of course there are lots of exceptions including the need to do anything required to avoid dangerous situations" signs.
[[[[Ben]]]] - [laugh] Kat just finished her Power Squadron course (scored 100% on her test, woo-hoo!); this is almost exactly the phrasing used for right-of-way rules in navigation. Which is why, I suspect, so many people screw up in that regard.
[[Suramya]] - Umm... I have a low tech approach to converting the Ecol comic strips from flash:
Click on the link to view the flash file, take a screen shot and save it in the format you want. Sample output:
https://www.suramya.com/Temp/HelpDex.jpg
:)
If I get the files early enough I volunteer to do this. But let me know what you think of this...
[[[Ben]]] - 1) At the moment, there's no way to do actual thumbnails linked to full-size images in the cartoon section. Changing that would require twiddling Mike's Python scripts - and I'd have no idea how to do that.
2) JPG files are significantly larger than the equivalent SWFs - and go all jaggy when viewed at anything but "native" size (unless, of course, you go with an even larger file size.) E.g.:
:-r!ls -l /tmp/Help*
-rw-r--r-- 1 ben root 80715 2006-04-06 12:50 /tmp/HelpDex.jpg
-rw-rw-rw- 1 ben root 18546 2005-12-06 21:21 /tmp/HelpDex.swf

Add a thumbnail to that, and we've got a big chunk'o'download - for no gain (and, in fact, image quality loss.)
3) The issue is - except for the very rare complaint - settled. There's Open Source viewing software available, which seems to work fine, and those who complain will now be redirected (either from here or from the link on the page itself) to the discussion covering it.
So - thank you for the offer, Suramya, but there's no need. It's all good to go.
[ In reference to the article A Brief Introduction to IP Cop in LG#125 ]
André Fellows (andrefellows at hotmail.com)
Tue Apr 11 13:43:21 PDT 2006
Hi folks!
This IP Cop is another leaf of the Linux Firewall/Router/DHCP server distros.
I have to tell you that this IP Cop seems to me not as good as BrazilFW.
I used Coyote (https://www.coyotelinux.com/ ) but it was discontinued. The BrazilFW (www.brazilfw.com.br) is the Coyote continuation.
Try there, it's free, fast and painless!
Cheers
Fellows
[Ben] - Then the obvious thing to do is a bunch of research comparing the two and presenting your results in an article. Otherwise, thank you for telling us how it "seems" to you... but I'm not sure why you'd want to share this information. [1]
I've used Coyote in the past on several occasions, and liked it a lot; for basic routing and firewalling, it was quite a nice gadget. However, IPCop has many more features than Coyote did - at least so I gathered from the article.
Different pieces of software, different purposes. One is not "better" than the other - just different. Implying otherwise, especially without supporting data, is rather inconsiderate toward the person/people who took the time to write the software that you don't like.
[1] This is a (perhaps somewhat testy) way of pointing out that saying "program $X is good" does NOT require adding "because program $Y sucks" for the purpose of emphasizing the former; there's no subordinate clause implied. Since this is not the world of proprietary software, in which vendors are fighting tooth and nail to sell their latest crap-a-riffic bugware, *we do not have to denigrate competing software* - it brings us, as a community, no benefit, and may do damage to a fellow programmer's reputation. So, yeah, this hits the rant button for me.
[[André]] - My sincere excuses!!!
I forgot to mention that my "seems" is not to consider!
[[[Ben]]] - Relax - my mini-rant was a generalized one. I replied to your statement, but it's about more than that; it's about an important consideration for the Linux community in general. We should always be striving to support each other, not tear each other down.
Often, operating in the world of Open Source requires re-thinking the "standard" ways of interacting with the other people involved. This differs from what we're generally used to, but is usually easy to adapt to: substituting cooperation for competition, especially when the benefits are obvious and immediate, presents little difficulty for most people.
It does, however, require thinking about it ahead of time - which is what I'm trying to get people to do with my rant. Thanks for the opportunity. :)
[[André]] - My intention was to tell you guys about BrazilFW, the Coyote successor. Like Coyote, BrazilFW has many add-ons to improve its functionality. The "seems" part was because I thought you guys could make an article comparing both and their functionalities.
[[[Ben]]] - Well, we don't generally write articles - but we do accept well-written ones on Linux-related topics. If you'd like to write one about BrazilFW, I'd be happy to look at it.
[[André]] - This is the point of the information. Really sorry if I wasn't clear (and I really wasn't ).
Very very very sorry to make you, Benjamin, spend your time on my nonsense comparison...
[[[Ben]]] - Don't worry - there was nothing particularly awful in your email, and I wasn't upset.
[[André]] - Oh, by the way, Linuxgazette is a GREAT SITE with GREAT information!!!
[[[Ben]]] - Thank you, André! We do our best. Glad to hear that you're finding it useful.
[ In reference to the article Preventing DDoS attacks in LG#126 ]
Thomas Adam (thomas at edulinux.homeunix.org)
Tue May 2 13:49:25 PDT 2006
Hello,
I was intrigued by this article about DDoS. It's something one hears about more and more, and so I thought I'd give it a go, and play along at home. Ben, I don't know just what percentage of the original article remains, having done the rewrite on it (something that I agree with, especially in terms of attribution), but I would have liked it if the original author could have expanded on a few things...
* Check if your CPU load is high and you have a large number of HTTP processes running
Heh. Yes, I quite agree. But so what? I mean, how are you supposed to distinguish that from most other processing that goes on? A simple 'w' isn't going to help you much there. It's pretty normal for most CPUs to peak.
As for the HTTP process, this is typically mitigated by setting the following in /etc/apache/httpd.conf to a reasonable value:
MaxClients 30
Now, the Apache folks make it quite clear that this should not be set too low, blah, blah. But for a small-end server (perhaps even one running on a home ADSL-connection) setting this to something perhaps lower than 30 in this case would help in stopping Apache from consuming everything.
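(For reference, MaxClients lives alongside the other prefork knobs; something like the following - the numbers are purely illustrative, not a recommendation - keeps a small box from forking itself to death:)

	<IfModule prefork.c>
	    StartServers          5
	    MinSpareServers       5
	    MaxSpareServers      10
	    MaxClients           20
	    MaxRequestsPerChild 500
	</IfModule>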
Then there's the following command:
ps -aux|grep -i HTTP|wc -l
I know I can be pedantic at times, but on non-BSD systems the leading '-' causes issues, not to mention the classic "You'll return grep too" syndrome, as well as the useless use of 'wc' (this isn't for your benefit, Ben, more the original author):
ps aux | grep -ic '[h]ttp'
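(Or, sidestepping grep entirely - assuming procps' pgrep is on the box:)

	pgrep -c 'apache|httpd'    # -c just prints the count of matching processes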
* Determine the attacking network
This section was quite good. Note that, as mentioned before, the number of concurrent connections via Apache can be capped. Again, the command:
netstat -lpn|grep :80|awk '{print $5}'|sort
Can be reduced to:
netstat -lpn | awk '/:80/ {print $5}' | sort
(Too many people seem to think Awk only understands columns in terms of quoting printed output. Such a shame.)
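(A common next step - not from the article - is to tally connections per source address, so the busiest ones float to the top:)

	netstat -ntu | awk '/:80/ {print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head
	# anything with hundreds of concurrent connections from a single IP is worth a closer look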
It's worth mentioning here the use of PAM and chroots. A slight side-note is that in any DDoS attack, the targeted process or thread is generally going to expand as the load increases on it. This is generally undesirable, and so one can tell PAM to employ memory usage limits via /etc/security/limits.conf (think ulimit for processes). As for the chroots, running various processes inside those can encapsulate their environment away from everything else.
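(A sketch of what that looks like in /etc/security/limits.conf - the user name and the numbers here are placeholders, pick your own:)

	# <domain>  <type>  <item>  <value>
	www-data    hard    as      262144    # address-space cap, in KB
	www-data    hard    nproc   150       # maximum number of processes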
I also couldn't see what the following code was supposed to do:
for f in /proc/sys/net/ipv4/conf/*/rp_filter
do
   echo 1 >
done
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
I assume that's meant to read:
for f in /proc/sys/net/ipv4/conf/*/rp_filter
do
   echo 1 > "$f"
done

echo 1 > /proc/sys/net/ipv4/tcp_syncookies
Hmm. This reads as though I've picked the hell out of the article for no apparent reason -- far from it. I hope these little tidbits of information will prove useful. By all means feel free to forward these comments on to the other author.
[Ben] -
On Tue, May 02, 2006 at 09:49:25PM +0100, Thomas Adam wrote:

> * Check if your CPU load is high and you have a large number of HTTP
>   processes running
>
> Heh. Yes, I quite agree. But so what? I mean, how are you supposed to
> distinguish that from most other processing that goes on? A simple 'w'
> isn't going to help you much there. It's pretty normal for most CPUs to
> peak.

I'm a fan of iostat/vmstat, myself - but 'w' will give you the standard 1/5/15 load averages.
That's not much to do with peaks. (Just FYI, I avoided changing the actual content of the article as much as possible except where it was clearly wrong. Different usage, or even suboptimal usage, I left alone. Ditto the idiom, which is why it still sounds mostly like Blessen rather than me.)
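(As a small illustration of that point - not something from the article - watching the run queue over time says more than a one-shot load figure:)

	vmstat 5    # one summary line every 5 seconds; a consistently large 'r' (run queue)
	            # column with little idle time is a better sign of sustained load than one peak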
> As for the HTTP process, this is typically mitigated by setting the
> following in /etc/apache/httpd.conf to a reasonable value:
>
>    MaxClients 30
>
> Now, the Apache folks make it quite clear that this should not be set too
> low, blah, blah. But for a small-end server (perhaps even one running on
> a home ADSL-connection) setting this to something perhaps lower than 30 in
> this case would help in stopping Apache from consuming everything.

Right, but he's talking about a server in a data center - I believe that was mentioned right at the beginning.
> Then there's the following command:
>
>    ps -aux|grep -i HTTP|wc -l
>
> I know I can be pedantic at times, but on non-BSD systems the leading '-'
> causes issues, not to mention the classic "You'll return grep too"
> syndrome, as well as the useless use of 'wc' (this isn't for your
> benefit, Ben, more the original author):
>
>    ps aux | grep -ic '[h]ttp'

[Nod] Agreed.
> It's worth mentioning here the use of PAM and chroots. A slight
> side-note is that in any DDoS attack, the targeted process or thread is
> generally going to expand as the load increases on it. This is generally
> undesirable, and so one can tell PAM to employ memory usage limits via
> /etc/security/limits.conf (think ulimit for processes). As for the
> chroots, running various processes inside those can encapsulate their
> environment away from everything else.

Again, chrooting is something I'm a big fan of. Heather told me about it ages back, and I glommed onto the idea like a drowning man onto a log...
> I also couldn't see what the following code was supposed to do:
>
>    for f in /proc/sys/net/ipv4/conf/*/rp_filter
>    do
>       echo 1 >
>    done
>    echo 1 > /proc/sys/net/ipv4/tcp_syncookies
>
> I assume that's meant to read:
>
>    for f in /proc/sys/net/ipv4/conf/*/rp_filter
>    do
>       echo 1 > "$f"
>    done
>
>    echo 1 > /proc/sys/net/ipv4/tcp_syncookies
>
> Hmm. This reads as though I've picked the hell out of the article for no
> apparent reason -- far from it. I hope these little tidbits of
> information will prove useful. By all means feel free to forward these
> comments on to the other author.

Yeah, I missed that in the welter of the other stuff I'd corrected. Well spotted! I'll pass it on - and it should go into the Mailbag as well, as a followup to the article.
[ In reference to the article Preventing DDoS attacks in LG#126 ]
Ulrich Alpers (ulrich.alpers at ub.uni-stuttgart.de)
Thu May 4 03:00:01 PDT 2006
Hi,
if I am the 10000th to mention, sorry for that ...
The last script snippet looks a bit weird:
-----------------------------------------------------
for f in /proc/sys/net/ipv4/conf/*/rp_filter
do
   echo 1 >
done
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
-----------------------------------------------------
How about this:
-----------------------------------------------------
for f in /proc/sys/net/ipv4/conf/*/rp_filter
do
   echo 1 > $f
done
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
-----------------------------------------------------
As to the syncookies thing: I am not quite sure, but if you have already put the line
net.ipv4.tcp_syncookies = 1
into the sysctl.conf - what is the syncookies line in the rc.local for?
Regards,
Ulrich
[Ben] - Come to think of it, it's only a couple of days after publication - and the error is egregious enough that it shouldn't be propagated. [sounds of axes, sawing, hammering, copying, and pasting [1] in the background] All fixed now.
How about this instead:
-----------------------------------------------------
#!/bin/bash

for f in /proc/sys/net/ipv4/{conf/*/rp_filter,tcp_syncookies}
do
   echo 1 > $f
done
-----------------------------------------------------

Love those curly-brace Bash expansions. :)
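(For anyone who prefers the sysctl.conf route Ulrich mentions, a sketch of the equivalent entries - assuming the usual conf.all/conf.default keys cover the interfaces you care about - would be:)

-----------------------------------------------------
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
-----------------------------------------------------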
[[Thomas]] - Horses for courses -- both ways'll work.
[Ben] -
> As to the syncookies thing: I am not quite sure, but if you have
> already put the line
>
>    net.ipv4.tcp_syncookies = 1
>
> into the sysctl.conf - what is the syncookies line in the rc.local
> for?

Right; that should have been (and now is) worded as
Conversely, you could add this code to your '/etc/rc.local':
[[Thomas]] - But this is still wrong. /etc/rc.local is not honoured across all Linux distributions. SuSE and RH use it (although SuSE for a time used to only honour /etc/local.rc). RH uses /etc/rc.boot. Debian, for instance, has /etc/bootmisc.sh, but one shouldn't go changing that arbitrarily anyway (it'll get overwritten when 'initscripts' is updated). For the Debian people, one can create the following directory:
/etc/rc.boot

... and place any scripts in there, ensuring that the directory and the scripts themselves have octal permissions '755'.
[[[Ben]]] - That would be the reason for the word "conversely", so I wouldn't call it "wrong" - just different. Is there a distro out there that doesn't have an "/etc/sysctl.conf"? I thought that was pretty much universal in Linux.
On the gripping hand, there's also the fact that we don't purport to cover all distros - although I like to encourage authors to broaden those "$DISTRO_OF_CHOICE-only" articles into something a bit more useful. The way I figure it is, mentioning a couple of different methods should suffice to clue people in about adapting it to their specifics. If you have a better suggestion that will, without fail, cover any and every distro (including the one that I'm creating right now for the specific purpose of evading whatever solution you come up with :), I'd like to hear it.
[Ulrich] - SuSE uses /etc/init.d/boot.local (among a lot of other boot.* files).
BTW, as we are collecting the locations of the script to put the code into - Gentoo uses /etc/conf.d/local.start
[Ben] - Between Thomas Adam (who first noted the problem) and yourself, you two have managed to get me off my lazy butt and fix it. Consider yourselves promoted, and entitled to wear The Grand and Sublime Order Of The Linux Gazette [2]. Thank you, congratulations, and may you remain worthy of this high honor! :)
[1] For those who are not familiar with the BOFH Saga, the sound of these last two operations is precisely like the sound that will immediately precede the End of The World:

CLICKETY-CLICK
[2] This looks just like a red stick-on Post-It dot, but is imbued with special powers; for this week only, we've secretly replaced the Post-It dots in your local stores with these GaSOOTLGs, which will be activated as soon as you buy and apply them.
[ In reference to the article Preventing DDoS attacks in LG#126 ]
René Pfeiffer (lynx at luchs.at)
Sun May 7 15:01:20 PDT 2006
Hello, Linuxgazette!
I read Ben's introduction to the "Preventing DDoS attacks" article. I can imagine that you get lots of submissions like this. I've given the idea of having stand-by co-authors some thought. I like the suggestion very much, especially if it helps to "rescue" content which could interest a wide audience. However this is clearly much more than proofreading. It involves communication with the author, and it requires a change of style, obviously. I think it is a challenge to walk the fine line between helping someone and completely changing the original idea into something very different.
I am teaching at a technical academy here in Vienna. I supervised and helped several students working on their graduation. Last year a student submitted a piece of work that we could not accept, even with all the best intentions. We gave the student a long list of improvements and had him rewrite the whole thing from scratch. Of course this is a different approach, because no one can assume co-authorship for a diploma our students are supposed to write alone. Nevertheless it illustrates the point of enabling someone else to improve something.
Provided you have a pool of stand-by co-authors and co-authorship is welcome, it would be nice to have a simple set of rules how to proceed in such a case. I don't think that it is a good idea to advertise this as a feature and to encourage people to send half-written articles. Maybe postponing an article with a list of suggestions to the author is an option then.
Well, that's my brainstorming. English is not my native language, but in case you want to create a pool of helping hands that co-author and improve articles, count me in.
Best wishes, René.
[Ben] - [Quoting René]
> Hello, Linuxgazette!

Hi, René! Good to hear from you again.
> I read Ben's introduction to the "Preventing DDoS attacks" article. I
> can imagine that you get lots of submissions like this.

These days, it's averaging a bit more than one per month.
> I've given the idea of having stand-by co-authors some thought. I like
> the suggestion very much, especially if it helps to "rescue" content
> which could interest a wide audience.

The original suggestion came from my wife, Kat, some months ago. Ever since then, I'd been planning to write a Back Page (i.e., get up on my soapbox), propose the idea, and invite participation - but somehow never got around to it. Except, of course, now - in a fit of desperation.
> However this is clearly much more than proofreading. It involves
> communication with the author, and it requires a change of style,
> obviously. I think it is a challenge to walk the fine line between
> helping someone and completely changing the original idea into
> something very different.

Indeed. Many of these articles express an important viewpoint that's just poorly-enough stated, or address an interesting technical point but fail technical review by an amount just beyond trivial. If it's a case of pointing out the errors to the author and having them do a rewrite, or perhaps directing them to try restating their point a bit better, fine - as I keep telling people, "there's always the next issue". But what if the problem is caused by the author lacking just that tiny bit of knowledge, or that edge of ability to express themselves well in what may be a foreign language? I hate to turn those down because I can see many of these authors trying their best, and missing by just ->that<- much. If we can find some volunteers, this would be a great resource.
> I am teaching at a technical academy here in Vienna. I supervised and
> helped several students working on their graduation. Last year a
> student submitted a piece of work that we could not accept, even with
> all the best intentions. We gave the student a long list of
> improvements and had him rewrite the whole thing from scratch.

I almost never simply turn an article down; if I reject one, I'll have a long list of suggestions and examples that an author could use to improve their article and learn to write better articles overall. This is, as I see it, one of the duties of an editor.
> Of course this is a different approach, because no one can assume
> co-authorship for a diploma our students are supposed to write alone.

What??? "Alone", as in "by themselves"? Shocking. If it wasn't for those famous co-authors, Messieurs Ibid, Opcit, and Anon, I suspect that fully 90% of the modern "original contributions" would never have seen the light of day. :)
> Nevertheless it illustrates the point of enabling someone else to
> improve something.
>
> Provided you have a pool of stand-by co-authors and co-authorship is
> welcome, it would be nice to have a simple set of rules how to proceed
> in such a case. I don't think that it is a good idea to advertise this
> as a feature and to encourage people to send half-written articles.

[Nod] All joking aside, those are wise suggestions. I think that half-baked articles have a substantially different quality from the ones that I find to be this sort of a moral dilemma, and I have no problem bouncing those back to the author - but, yes, co-authors would be a resource to be assigned with care and forethought.
> Well, that's my brainstorming. English is not my native language, but
> in case you want to create a pool of helping hands that co-author and
> improve articles, count me in.

René, I'll take your diction and clarity of expression over many a native English speaker I've known - no joke. Your offer is gladly accepted. I don't know that I'll have anyone to send to you anytime soon (I'll be filtering submissions very carefully in this regard), but I'll definitely keep this possibility in mind. Thank you!
[ In reference to the article From Assembler to COBOL with the Aid of Open Source in LG#126 ]
(trevor at haven.demon.co.uk)
Tue May 2 02:19:42 PDT 2006
Tag guys 'n' gals,
Converting 6000 lines of old assembly code to, err, cobol ? Why ? I would estimate that since a simple line of say c#/c++/pascal/basic compiles down to at least 6 lines of assembly, and probably many more, then we are looking at less than 1000 lines of 'high' level code. So it strikes me it would be quicker and less bug-prone to just re-implement the program in a modern language and gain a few advantages along the way, readability included.
It would have been nice if the original code/cobol code had been linked to, or a summary or explanation of the program's function was included.....
Trev
[Ben] - [quoting Trev] "Converting 6000 lines of old assembly code to, err, cobol ? Why ? "
I'd imagine that it's due to The Golden Rule: He Who Has The Gold, Makes The Rules.
> I would estimate that since a simple line of say c#/c++/pascal/basic
> compiles down to at least 6 lines of assembly, and probably many more,
> then we are looking at less than 1000 lines of 'high' level code. So
> it strikes me it would be quicker and less bug-prone to just
> re-implement the program in a modern language and gain a few
> advantages along the way, readability included.

There are many mainframes out there that don't run "modern" (i.e., PC-based) languages. Since Edgar is a mainframe consultant - at least so I gather from his bio - I can see why he'd be constrained to COBOL and such.
> It would have been nice if the original code/cobol code had been
> linked to, or a summary or explanation of the program's function was
> included.....

Well, I seriously doubt that Edgar's free to release his customer's code for everybody's perusal. I agree, it would be nice if the world ran on Open Source principles by default, and that it would indeed be interesting to see what the original programming task was - just so people can snark at the complexity and re-do it all in three lines of Perl or whatever - but that's not how things are. Until we take over, at least. :)
[[Jimmy]] - Ah, Cobol. Not the reason I dropped out of college (that was RPG), but a damn close second place.
[[Edgar]] - I won't try to one-up Ben's responses to the substantive issues. We readers know why we keep him around...
[[[Ben]]] - Thanks, Edgar - I appreciate the implied compliment.
[[Edgar]] - But essentially you and I have no dispute. Your objections/questions are all very reasonable. As I remarked to Ben, given a choice, I would prefer C over any other programming language I have ever used. Regarding the article, in retrospect it might have made sense to do things a bit differently. Some background.
Although I have been using Linux since SuSE 5.0, I still consider myself somewhat a newbie. Several decades of mainframe experience have inured me against the daily slings and arrows. But even a casual reading of my articles likely elicits no more than a polite "ho, hum" from the members of TAG. A while back I discovered how easy it is for even a newbie to do things that might sound intimidating, to a newbie. I'm addressing newbies.
[[[Ben]]] - Just to expand on that point a bit - that's one of the types of articles I'm always looking for. In fact, one of my larger challenges in vetting, editing, and writing articles is to keep or access that "beginner's mind" - and it's not an easy one. I'm always appreciative of new Linux users who can write well about their experiences - this is one of the most important types of articles in LG.
[[Edgar]] - As I was writing the article, I was concerned the problem itself might be getting too much "air time" as it were. But I doubt most newbies have done anything in Assembler. Any Assembler. And COBOL?
When I submitted the article, Ben properly chastised me for mentioning that 5-letter-word.
[[[Ben]]] - Ah-ah-ah! I did not. I did, however, poke some gentle fun at you about it. :)
[[Edgar]] - But I felt it necessary to motivate what I went through and to explain it in enough detail to make the problem understandable.
The point I really wanted to make -- and it may not have come across well -- was that Open Source might be of help where one wouldn't necessarily expect it and that it needn't be terribly difficult to install a package not part of the distribution you are using.
[[[Ben]]] - This is, in fact, how I took it; that's been a wonderful part of my Linux experience as well. The Open Source nature of it seems to spawn effort and creativity in people who wouldn't normally give a rat's ass about writing software, and motivates others who would otherwise let their ideas die without expression. As an example, some years ago I ran across a Linux tool that translated Norton Guide databases - something that made me very, very happy, since it allowed me to convert some very useful data I'd thought of as lost. Another author had rewritten the first IDE for C/C++ that I'd ever used under DOS (Borland's Turbo C++) as a Linux prog (RHIDE), which was responsible for a wonderful bout with nostalgia. :) There are lots of examples, many people doing wonderful things simply because the environment is available, there's nearly infinite room to play in, and you get full credit for your work.
[[Edgar]] - To that extent it might as well have been Pascal to C. Except that that wouldn't even deserve "ho, hum".
For precisely the reasons mentioned by Ben the code in the article is not even the code dealt with, although functionally equivalent. And long-term it is not a question of one, single program. This is far more than merely proof of concept. The people I am doing this with long ago left the starting blocks. And there is reason to believe that many other companies need quickly to get away from Assembler into something else main-stream. Anything! Before their last remaining Assembler programmer retires.
But, Trevor, you have given me a very interesting idea.
I need to check this out, but I am fairly confident that C is an intermediary step in the compilation process, under Linux. I'm the only one in this crowd regularly using Linux, but not C. The code might be ugly. I need to look into it. But down the road...?
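(If memory serves, OpenCOBOL's compiler driver can be told to stop after the COBOL-to-C translation step, which would let you inspect the generated C directly - the exact option is worth double-checking against 'cobc --help', and 'hello.cob' here is just a stand-in name:)

	cobc -C hello.cob    # translate only: leaves the generated C source for inspection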
Thanks for your comments.
[[[Ben]]] - I'll be looking forward to that article, too. :)
As to package installation - heck, most interesting things are already available as packages, especially in distros that make a point of it. When I find something on the Net that I'd like to install, I usually check for an existing Debian package containing it - and find it in better than half the cases.
ben at Fenrir:~$ apt-cache show open-cobol
Package: open-cobol
Priority: optional
Section: devel
Installed-Size: 464
Maintainer: Bart Martens <bart.martens at advalvas.be>
Architecture: i386
Version: 0.32-1
Depends: libc6 (>= 2.3.5-1), libdb4.3 (>= 4.3.28-1), libgmp3c2, libltdl3 (>= 1.5.2-2), libncurses5 (>= 5.4-5), libcob1 (= 0.32-1), libcob1-dev (= 0.32-1)
Filename: pool/main/o/open-cobol/open-cobol_0.32-1_i386.deb
Size: 175364
MD5sum: e939fd76f9592030eabd051e8f168ba4
Description: COBOL compiler
 OpenCOBOL implements substantial part of the COBOL 85 and COBOL 2002
 standards, as well as many extensions of the existent compilers.
 OpenCOBOL translates COBOL into C and compiles the translated code
 using GCC.
 .
 Homepage: https://www.opencobol.org/

So for me, it's just a matter of typing "sudo apt-get install open-cobol". Debian takes care of all the dependencies, etc.
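(And once it's installed, a quick smoke test is just a matter of feeding cobc a trivial program - 'hello.cob' is a stand-in name, not anything from Edgar's project:)

	sudo apt-get install open-cobol
	cobc -x -o hello hello.cob    # -x builds an executable; cobc translates to C and hands the result to gcc
	./hello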
[[[[Trevor]]]] - Well guys, I'd better explain what went through my mind when I read the article; it went something like "This guy's creating a problem". My experience and current nightmare is legacy code. Where I'm at (I work for IBM on UK government outsourcing projects), we have a number of older systems that need to be modified, updated, and in many cases replaced. The cost of hiring people with 'legacy' skill sets, which these days increasingly includes COBOL, is high.
One day that piece of COBOL will need either its function modified or the hardware replaced (nothing lasts for ever) with an off-the-shelf x86/Opteron commodity server (hopefully running Linux), and someone will have to re-do the work. That might mean having to hire an expensive contractor or re-write the whole system in some other language, and that is a cost that could, at least in part, have been avoided.
I have seen huge amounts of time and money spent dealing with this type of 'patching up'; it's my pet hate.
[[[[[Ben]]]]] - Sure; anyone who's been a programming consultant for any length of time has seen this, and it's always the same kind of double-bind. However, this is also exactly the situation I had in mind when I quoted the Golden Rule - what most of us neglect to consider, at least when we first run into the situation, is the financial/operations end of the problem from the client's point of view.
When a client (whose entire codebase consists of, say, COBOL) discovers that he needs to patch/modify/fix/tweak/whatever that codebase, the first questions that arise will inevitably be these:
1) What will it cost?
2) How much of an interruption of business will it create?
In the case of the first question, a patch - or even a large series of patches or mods - is usually much cheaper than a complete rewrite, especially when that codebase is large. So, the decision to patch, and keep patching some obscure horror that can only be fixed by one doddering wizard (all others having either croaked or switched to Java, which is essentially the same thing :) is pretty much the default - absent some immediately-pending and obvious (or already extant) catastrophe. Trying to convince clients that it's going to happen before that point will mark you as a doom-sayer with a sly second agenda, and will not result in happy relations with that client.
In the case of the second question, the problem is even worse - far worse. A patch - even if it's relatively major - can usually be rolled back fairly easily in case of problems; a complete replacement - which usually involves system changes (hardware, configuration, OS change) cannot. Certainly not easily. Can you, or any programmer, guarantee that bugs - even major ones - will not show up after the replacement, and take the business out of operation for some length of time? The answer is, of course, no - and so, a complete rewrite is very risky.
In short, clients usually hate rewrites, and love patches - and if you keep wishing that it wasn't otherwise, you'll only get frustrated. I learned, a long time ago, to take my satisfaction in charging all the market will bear for overtime, etc. when the crash does come; verbal "I told you so"s bring no more than transient satisfaction, and put no money in your pocket. :) By that time, I tend to be uniquely qualified as The Right Programmer to do the job - since I've used the patching process to study the client's entire business flow (as it relates to the software) and can not only write the new code but know how and where the system can be improved.
Looking at this kind of thing as an opportunity rather than a problem has, over time, proven to be very rewarding. :)
[[[[[Edgar]]]]] - Hi Trevor, I hope you don't feel that "we" are picking on you. This wasn't orchestrated. I still have essentially your attitude but a few years ago resigned myself to the fact that the real world is as described by Ben. About 5 years ago I quit telling everyone who might understand that we need to get away from Assembler. As an outside consultant I don't have access to the appropriate managers and they wouldn't understand the problem anyhow.
Basically I'm as frustrated as you are. But I have learned to live with it and don't lose any sleep over the situation. That the managers are incompetent is clear when you consider that I am now the only one left with Assembler skills. An outsider! Unexpectedly, I spent a couple of weeks off work this year due to skin cancer (taken care of) and during that time absolutely nothing happened at work, in spite of the fact that we had a deadline of 31 March. How can they sleep at night if they understand the situation?!
That is my major customer at the moment, government service. I can't speak about the UK but I am now convinced that the government service systems in Germany and the USA cannot be reformed and need to be junked. I'm not going to start a campaign and it of course won't happen. But if the tax-payers were aware of the money wasted and the virtual sabotage on the part of too-highly-rated civil servants...
In the case of conversion to COBOL, that is a project for someone I have twice worked for and have known for well over 30 years. The customer, for whatever reasons, wants to move from Assembler to COBOL. My guess is that the rest of what they have is in COBOL and if that in fact is the case, the decision is not completely wrong. At least it is a simplification of their computing environment.
By the way, in both cases I am dealing with financial computing, i.e. BIG mainframes. Until discovering Open COBOL and GMP I wasn't aware that decimal arithmetic was readily available outside the mainframe world. Well, back in 8080 and Z-80 days I used DAA, 'nuff said.
I'm afraid, as much as I don't want to admit it, Ben once again is right. The world just ain't the way we'd like it. When I did my military service there was an expression: shit floats. Basically Peter was an optimist, if only incompetence were readily recognizable at the level where it first floats! But somehow, perhaps as it rots and begins to stink, it seems to take up more space and float even higher.
Sadly, too many people, like Billy, could sell SNOW to Eskimos.
Don't give up trying to do things right. But don't let it get to you psychologically.