...making Linux just a little more fun!
Scott Bicknell (sbicknel at cox.net)
Tue Jul 4 09:33:45 PDT 2006
RE: https://linuxgazette.net/128/lg_tips.html#2-cent-tips.1
> from the Control Center select Appearance & Themes. Choose
> Background from the sub-menu. Yeah, Background. Who knew?
It's even easier than that. Just right-click the desktop background and click "Configure Desktop...." It takes you directly to the Background configuration screen without having to navigate the K menu and the Control Center interface. From there, just follow the same procedure outlined in your tip.
Edgar Howell (Edgar_Howell at web.de)
Sun Jul 2 06:30:45 PDT 2006
Just found the time (unusual mid-week) to glance at Novell's latest newsletter, Novell Linux News, of 28 June, and there is a potentially interesting article on the use of VMware. It involves compilation and references OES Linux, which I am not familiar with. Nonetheless it might be worth pointing out to readers as an alternative approach, or at least another source of information, for SuSE users.
Running a VMware Virtual Machine as a Service in OES Linux
https://www.novell.com/coolsolutions/appnote/17414.html
Hmmm, having just read the reader comments at the end, dunno. But nothing wrong with letting other people make up their own minds.
[Rick] - The intended reference is obviously to Novell Open Enterprise Server. A minority of us old-timers have some lingering fondness for Novell NetWare, and might have lost at least a little sleep wondering what happened to it. Well, OES is where it went.
NetWare historically started from DOS on a tiny FAT partition, with the booting NetWare core then seizing control from DOS and then mounting the NetWare-native filesystems, and starting various network services. As reimplemented in OES, the base OS is a full-blown SUSE Linux Enterprise Server (SLES) installation instead of DOS -- and the NetWare core, ZenWorks, and other stuff are implemented as (I gather) regular SysVInit services.
The referenced Novell application note explains how to launch VMware within the Novell environment that's in turn running on SLES. That's not actually useful for SUSE users as such, though it is indeed a "cool solution" (in the wording of that app note) for NetWare / OES admins.
Benjamin A. Okopnik (ben at linuxgazette.net)
Tue Jul 18 06:55:20 PDT 2006
Interesting bit of discussion on the subject:
vsesto (vsesto at adelphia.net)
Sat May 20 16:37:43 PDT 2006
Hello Thomas
I have an interesting issue that I wanted to ask you about regarding FvwmCommand.
I have some apps that I "exec" in the InitFunction in the .fvwm2rc file.
One of these apps issues a FvwmCommand. Does it take time for the FvwmCommandS FIFO server to start up before being reasonably able to begin issuing commands?
I ask this because I noticed that the FvwmCommand fails ... but then if the app issues it some time later it succeeds. Kinda strange ... should my app actually wait some time to give Fvwm time to finish its initialization?
[Thomas] - As an aside to your question: unless you're stuck on FVWM 2.4.X, InitFunction isn't something you should use. Thanks to the Test command, you can now conditionally check which state FVWM is in (restarting, loading, etc.). FVWM always reads the ''StartFunction'' across both inits and restarts, hence:
DestroyFunc StartFunction
AddToFunc StartFunction
+ I Test (Init) Exec exec my_application

If you just had:
+ I Exec exec my_application

... in your StartFunction, that would get run regardless of whether FVWM was restarting or initialising.
> One of these apps issues a FvwmCommand. Does it take time for the
> FvwmCommandS FIFO server to start up before being reasonably able to
> begin issuing commands?

That depends (note that the use of "server" is slightly erroneous, given that it does nothing more than create the FIFO). It isn't so much that it takes time as it is a matter of when it gets run. Consider for a moment the following derived function:
DestroyFunc StartFunction
AddToFunc StartFunction
+ I Module FvwmEvent some_alias
+ I Test (Init) Exec exec feh --scale-bg some_image.jpg
+ I Test (Init) Exec exec xteddy -wm
+ I Module FvwmCommandS
+ I FvwmCommand 'Function myfunction'

One might expect those commands to run synchronously (one after the other, in the order they're specified). By and large that's often the case; however, there are times when it isn't. If you consider that some commands might have a certain latency about them, it may well be that some commands are started out of sync with the order listed in the function. FVWM makes no attempt at synchronisation within commands started in the 'I'mmediate context.
So what are your options? In FVWM 2.5.X you can use the ''Schedule'' command which waits a certain number of milliseconds before starting a command. This is non-blocking to anything else, hence:
DestroyFunc StartFunction
AddToFunc StartFunction
+ I Module FvwmEvent some_alias
+ I Test (Init) Exec exec feh --scale-bg some_image.jpg
+ I Test (Init) Exec exec xteddy -wm
+ I Module FvwmCommandS
+ I Schedule 900 FvwmCommand 'Function myfunction'

Here, the Schedule command will wait 900 milliseconds (almost a second) before running the specified command.
In FVWM 2.4.X, however, you can't use the Schedule command, as it hadn't been introduced yet. What you would need to do is use PipeRead if it was something you needed to send to FVWM:
+ I PipeRead 'sleep 5 && echo "Function myfunction"'

Or if you wanted to just spawn a separate process:
+ I Exec sh -c 'sleep 3 && exec my_application'

> I ask this because I noticed that the FvwmCommand fails ... but then
> if the app issues it some time later it succeeds. Kinda strange ...
> should my app actually wait some time to give Fvwm time to finish its
> initialization?

See above. Note that your use of FvwmCommand within a function which FVWM is already evaluating is quite superfluous. Consider what FvwmCommand does -- its job is to send commands to FVWM from an external source (such as a shell script, or a terminal). It works fine in those situations, but do you need it from within a function? No, of course not -- FVWM is already interpreting those commands for itself. Why go the long way around, spawning a command externally only to send it back to FVWM?
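For its intended use -- driving a running FVWM from the outside -- a shell script along these lines works; this is only a sketch, and it assumes "Module FvwmCommandS" is already loaded inside FVWM (the function name is hypothetical):

#!/bin/sh
# Send commands to a running FVWM from a terminal or script.
FvwmCommand 'Beep'
FvwmCommand 'Function myfunction'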
What you probably are angling for (I'm guessing here, since you haven't said what it is you're trying to do) is interpolating various conditions and then getting FVWM to react to certain things based on the outcome. Since such "processing" is typically done at the shell level, this is where the ''PipeRead'' command comes in useful. Not only that, but its operation is synchronous. :)
Where PipeRead shines is being able to script various things at the shell, and then send back commands to FVWM to act upon. Take a derived example. Suppose you wanted to evaluate an environment variable -- or perhaps more importantly set its value. FVWM has the ability to export environment variables into its own environment space by way of the ''SetEnv'' command. Hence:
SetEnv myvariable 200

... will declare and export a variable called 'myvariable' with a value of 200. In doing so, other FVWM modules, when they are spawned, will inherit that variable (because all processes created from FVWM inherit information from its process space -- just like the parent/child relationship between shells). This is really their main use -- despite the fact that some people think it a good idea to overuse them. (1)
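As a small illustration of that inheritance (the scratch-file path here is hypothetical), any process spawned from FVWM can read the variable directly:

SetEnv myvariable 200
# The spawned shell inherits it; this writes "200" to a scratch file:
Exec exec sh -c 'echo "$myvariable" > /tmp/fvwm-check'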
Now let's assume that we wanted to create a new variable which holds the result of performing some mathematical function on our ''myvariable'' variable. FVWM has no builtin capability to perform numeric operations (yet). The only way you're going to be able to do that is at the shell level. PipeRead will help us achieve that, as in:
PipeRead 'echo SetEnv mynewvar $(($[myvariable] + 1))'

It's important to realise what this line is doing (and how FVWM operates in this context). As soon as you open up a PipeRead command, you're at the shell level. That means shell-quoting techniques apply. The next thing to realise is that it's an evaluative mechanism. Typically the whole point of using PipeRead is to evaluate a condition and react to it, so FVWM expects a reply in return. FVWM knows you're asking it to do something by your echoing back responses. These responses are then read by FVWM as though they were typed in an FVWM configuration file, or in FvwmConsole.
Variable interpolation happens first of all: $[myvariable] (which is FVWM's way of interpolating a variable) is expanded. The shell then performs the increment. When that's done, the value is echoed back to FVWM along with the "SetEnv mynewvar" part, hence FVWM sees and executes:
SetEnv mynewvar 201

... cool, huh? Note that I have seen some people try to achieve the same thing by doing this:
SetEnv y PipeRead "echo $[2*$[x]]"

I can see the logic behind this, but consider why it doesn't work (based on what you now know). FVWM will run the PipeRead command and get back some number (whatever 2*$x evaluates to) -- but FVWM has no knowledge of what that number is for. Because of where the expansion occurs, it has no way to connect the result to the SetEnv part before it. To resolve that issue, it's important to remember that everything has to go inside the PipeRead command. I've explored this issue in the following (2)
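Put the whole exchange inside the PipeRead and it works: FVWM expands $[x], the shell does the arithmetic, and the echoed SetEnv comes back to FVWM to be executed. A minimal sketch, following the same pattern as the example above:

PipeRead 'echo SetEnv y $((2 * $[x]))'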
The other great thing about PipeRead (as I have said) is that it's synchronous: FVWM will wait for a PipeRead command to finish. So this is probably what you really want to use, since it ensures that commands are run one after the other.
I hope that helps, and I apologise for the somewhat rambling nature of my replies -- I have no idea how many other people on this list care for the information (or the readership this will reach), but it can't hurt. :)
MNZ (mnzaki at gmail.com)
Fri Jun 9 05:57:04 PDT 2006
Hi, I'm a Linux enthusiast who would like to "get better". I just dunno what to do next (don't laugh!). I have a Debian system working quite well (after a long struggle), and everything's alright. I just want to know what to do/learn/try next?
This is not a linux-unrelated question. In fact it's very Linux related!
[prays not to get flamed]
--
//MNZ
[Kapil] - Ah. "I enter the shop of the-101-flavours-try-any-number-for-free and I don't know what to do next!"
[MNZ] - That's exactly my situation....... : - /
[Kapil] - Take a step back and think about why you wanted to use the computer in the first place:
1. Write nice documents.
2. Calculate some things.
3. Play games.
4. Write/design games.
5. Create graphics.
6. Play audio/video.
7. Compose audio/video.
8. Impress your friends.
....
[MNZ] - Interesting question....... What do I use a computer for? [At this point, I got very conffuzled @_@]
[Kapil] - This is the garden of infinitely forking paths and each one leads somewhere. Pick one. After that you can pick applications specific to that activity---look at "tags" under Debian as a way of looking at the choices for applications or ask at TAG for more info.
[MNZ] - Thanks, I'll look at the tags
[Neil] - How about picking some of the items you struggled with and volunteering to improve the documentation?
[Kat] - Your question's sort of vague, and thus hard to answer, but I suspect someone will be along shortly with some responses that might shake out some clarity all around in that regard.
Your question is Linux related, and I think it'd be the start of an excellent article!
[MNZ] - My Stupid question can start an article? I amaze myself sometimes......
[Kat] - * grin * OH, all sorts of things can spark articles.
[Kat] - I'm meddling here, but it strikes me that your Debian-configuration struggle sounds like it was recent, and that your recollection of it might be fresh. There are lots of people for whom that would be old hat, but I know that I am interested in hearing that sort of story.
[MNZ] - Actually my "struggle" is just 5 months old, normal stuff for a beginner. Mainly OpenGL and ALSA. But don't take me lightly! I might be a beginner but I have learnt a lot in this time. I used wind0ze for a loooong time (7 years) but it was my only choice, and I never actually knew that there was another OS (just to avoid bedazzlement, I'm 14. I guess that explains a lot)
[Kat] - You're 14? I think I'm more prepared to believe that you're a typing & talking dog. You could have told me you were 44 and I'd have believed that.
[MNZ] - I do know a lot now, good C++ and good PHP , linux in general. From here I just dunno where to go next.
[Kat] - Write up what you remember about how you got to where you are now. This phase usually doesn't get captured, and I think it's the sort of thing that can really inspire other people.
[Kat] - Oh, and...there's not really a standard curriculum for what comes next. It seems to me that the real question is, what do you want to do next? Is it a matter of being utterly at a loss to know what the next options are, nevermind choosing one? (I've been there in various forms, knowing that I was ignorant and not knowing how to start fixing it.)
[Jason] - If you're interested in programming, you might want to take a look at Eric S. Raymond's "How To Become A Hacker" essay .
[MNZ] - I have read that before, and also parts of the jargon
[Jason] - One of ESR's suggestions is to learn many different languages. This is a great idea because of the different programming approaches it exposes you to. Every programming language is biased towards some way of solving problems, but you won't be able to appreciate this fact unless you are familiar with how another language does things.
For instance, recursion is a powerful concept. It's also somewhat of a second class citizen in languages like C++ and PHP. So if you're coding along in a language like that, and you run into a problem that's a natural fit for recursion, you might not realize it unless you know a language where recursion is used extensively, like Lisp or a functional programming language, such as Haskell or Ocaml.
By the way, for learning Lisp, "The Little Schemer" is a great introduction to Scheme[1]. It's actually less an introduction to Scheme than a guide on how to think recursively. It uses an unusual teaching style (the Socratic method, actually) which I find to be effective because it forces you to think about hard concepts.
[1] "Lisp" is not actually not particular language, but rather a family of languages, to which Scheme (and many other languages, of course) belongs.
[MNZ] - Thanks for the suggestion! That's what I'll do now, Try to learn Scheme from that book. In fact I'll start right away.
[Lew] - Funny, but I was going to reply (and still am, :-) ) similarly, but with a difference. I'm a programmer by trade (an "IT Specialist", if you believe my job description), and I spend a lot of time at my computer. However, I don't spend a lot of time developing on my systems at home; that would be too much like work, I guess.
I find myself "learning many different applications", rather than many different languages. I concur with ESR's opinion on the benefits of learning many different languages (I know at least 8 or 9 programming languages, p'haps more, if I dig a bit), and I have found that such knowledge helps me do my job. Extending ESR's opinion a bit, I find that the more I know about the tools and applications that I use, the more alternatives I can find for methods and practices (the how and why) of building applications.
So, my suggestion would be to do something like
- install and configure a caching name server, and learn how DNS works (see the sketch after this list)
- install and configure a Wiki on your system and learn how web services work,
- install QEMU and MSWindows and learn how virtualization technology works
- put together a VPN using StrongSwan and learn how VPNs work
- build a tough firewall from scratch, and learn how firewalls work

These will give you the breadth of experience to pick and choose techniques and solutions when you do write your next programming project.
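As a starting point for the first suggestion, here's a minimal sketch on Debian; it assumes the bind9 package (whose default configuration acts as a caching resolver) and the dig tool from dnsutils:

# Install BIND as a local caching resolver
sudo apt-get install bind9 dnsutils

# Query it twice; the second answer should come back much faster,
# because it's served from the cache:
dig @127.0.0.1 www.debian.org
dig @127.0.0.1 www.debian.org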
[Thomas] - I can't thank you enough for asking this question. Ok, so you have Debian installed (which flavour was it, by the way?). Stable's probably good to start off with -- that is guaranteed more or less not to go wrong (from a package management point of view). You haven't said how far along with things you are, so here's some ideas/experiences for you.
Get to know the command-line. I cannot stress that enough. I realise that in the sugar-coated world of KDE and GNOME there's now a GUI that can wipe your arse, but in the $REAL_WORLD, being able to rely on KDE and GNOME for the rest of your linuxy life is just not likely. If X11 breaks (it can do), you'll be left at the command-line.
[MNZ] - I forgot to say how far I got, quite sorry, it could have saved you a lot, but I did say that in another reply. Actually I've been more than 5 months with Debian (testing) now. I already got used to the command-line, because X11 did break! It was a real pain getting OpenGL up properly, and ALSA too.
[Thomas] - No -- I wasn't really replying with your own skills in mind.
[Thomas] - What do I mean when I say "command-line"? I don't mean learn shell programming (which in your case, like most linux distros, will be bash). That can come in time. I mean learn some of the basics about the commands that affect your system, such as package management. So, here's a little overview of how to do that, along with some history and examples, just to bore you.
Back in the Debian 2.1 days (Slink and the like), there was apt-get. It has been around for a long time, and it's one of the best package-management resolution tools there is. Debian pretty much uses the "one tool only, and that tool will do its job well" philosophy so evident throughout UNIX history (read Eric Raymond's "The Art of Unix Programming" if you're interested). Example: you want to search for packages that relate to circuits:
$ apt-cache search circuits

... would search the descriptions and names to match the word 'circuits'. If you just wanted to search the package names for that word, you can use:
$ apt-cache --names-only search circuits

If you like the look of a package, you can install it simply by issuing the command:
$ apt-get install <name>

... where "<name>" expands to some package name. Now, there's one thing you should always do before installing new packages, and that's:
$ sudo apt-get update

(If you don't know what sudo is: it runs programs as the root user -- hence where I use sudo in my examples, you will have to ensure you're the root user.)
What that command does is pull down all the new package and dependency information for the branch of Debian you're using. How does it know where to look? Simple: it uses the URLs listed in /etc/apt/sources.list.
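A typical /etc/apt/sources.list entry looks something like this (the mirror and branch shown here are only examples):

deb http://ftp.debian.org/debian stable main contrib non-free
deb-src http://ftp.debian.org/debian stable main contrib non-free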
It's this information that apt-get then uses to determine not only which packages are available to you, but also their dependencies, etc. Apt-get does it all for you, and it's not something you need to worry about. So apt-get is responsible for dependency resolution in the grand scheme of things. Once it has worked all of that out and downloaded the packages, the next tool in the chain is "dpkg". This tool's job is simply to install the packages and manage them at the lower level. It doesn't understand or care about dependencies.
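To see dpkg working at that lower level, a couple of commands worth knowing (the package name is just an example):

$ dpkg -l bash          # query the installed state of a package
$ dpkg -L bash | head   # list the files that package installed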
Removing packages is much the same, although "removing" comes in two flavours. "Remove" removes all files of that package except its configuration files, whereas "purge" removes all files including the configuration files, hence:
$ sudo apt-get remove <name>

... removes all but the configuration files, whereas:
$ sudo apt-get --purge remove <name>

... removes all files, plus the configuration files. It's an important distinction, and it's done that way so that a package can be removed during an upgrade (say) without any loss of the custom configuration in those files.
As to which one you as the user want to use: if you know it's a non-critical package (and that you have some form of configuration files stored in your $HOME directory), using --purge won't hurt you, since dpkg can only remove the files that belonged to the package initially, and not ones created by the user which still -relate- to it (this is one thing the RPM package manager also does, and that I agree with to an extent).
Following so far? Good. Because that's the very, very, basics of it. That will get you installing and removing packages that you perhaps don't want.
What's next? Hmm. Mess about in some of the GUIs. That's very important, since you need to find the one you'll be most productive in, I suppose. Install KDE. Install GNOME. Do whatever with them -- both have their advantages and disadvantages. If you want to know what others there are, then see:
... if you want to see which ones are packaged for Debian, a list of package names can be had by running the following command:
$ apt-cache showpkg x-window-manager

(Note that I used to maintain this webpage, which you might find useful: https://www.hantslug.org.uk/cgi-bin/wiki.pl?LinuxHints/DebianTips )
[MNZ] - I have checked out the page. There's lots of stuff I didn't know; that's because I usually use aptitude. But I'm trying to learn apt-get now. It somehow seems...... stronger.
[Thomas] - Debian is pushing aptitude. I resist it because it's broken -- especially in that it does things differently depending on whether you're in its interactive (ncurses) mode or on the command-line.
[MNZ] - Thanks for your help. By the way the page you sent me is VERY useful! great site!
[Thomas] - After that -- I am not sure. What interests do you have that you could apply to your learning?
[MNZ] - It's the "after that" part that I don't know. But Anyway, the url and the part that I snipped here were very helpful Thanks alot Thomas.
[Raj] - If you want to become a sysadmin (or a power user), here is my list of things to learn https://rajshekhar.net/content/view/26/26/ .
[MNZ] - Thanks! Now I Have a lot to do. I'm going to try to do everything listed on that page. I already know/did some of those, but there's a lot I never tried. Anyway I'm trying to learn Scheme now, completing these will be my next goal.
[Rick] - To be a sysadmin, it also helps to have a bad attitude -- though it's not absolutely essential to arrive with one. (It'll be issued to you.)
[Kat] - 1/2;) - why does it help to have a bad attitude? A bit of cynicism, that I can see being helpful. Outright bad attitudes toward lUsers, I've never understood that.
[Rick] - Because the BOfH's four main weapons are a bulk eraser, an etherkiller, an electrified doorknob..., and a fanatical devotion to Simon Travaglia. And surprise. Surprise and fear.... nice black uniforms. No, no, our five main weapons are...
...I'll come in again.
[Martin] - :) Made me grin...
Rick do you know his adventures are here: https://www.theregister.co.uk/odds/bofh/
[Rick] - I did, indeed. (The early ones are still the best.)
[Jimmy] - The guy who introduced me to Linux said this of users: "Always assume stupidity. You'll rarely be wrong." :)
[Rick] - Alternative, equally unkind formulation is in .signature block.
--
Cheers,                     The Technical Support Credo:
Rick Moen                   Remember, there are no stupid questions,
rick at linuxmafia.com      only stupid people asking questions.
[MNZ] - One more thing, I like that part that says " in infinite wisdom MNZ spoke thus On 06/09/2006 06:27 PM:" XD
[Rick] - MNZ, you might actually want to work on your own attribution string, which I'm guessing must be the default of
> User-Agent: KMail/1.7.2

Meaning no personal criticism, the attribution phrase "you wrote" is a bit less than informative when used on mailing lists. You also may or may not want to change your GECOS field to something approximating your name. If nothing else, "MNZ" is a bit challenging to pronounce. ;->
[Kat] - Oh, "emenzee" and "menzee" seem like possibilities...
[Jimmy] - I'd have gone with 'minz', but that's probably because I've recently been introduced to a popular Polish activity: making fun of the Czech language. :)
[Ben] - Articulate, polite, computer-savvy, and ambitious - all at 14 years old. MNZ, I'm impressed. :)
[MNZ] - Thanks!
[Ben] - As has already been suggested here, you should indeed write an article about your experiences with Linux; I suspect that it would make a very interesting exposition for many people who are, perhaps, too timid to dive into this big ocean. If you're interested in doing so, take a look at our New Author Guide (appended), and email me if you have any further questions. It would be a pleasure to see one from you.
[MNZ] - Mr. Neil Youngman has suggested documentation, I know. But I don't feel very comfortable writing something (although I'm considering giving it a shot; I already have a little website on my localhost that I might try to enlarge later). I think I'll just go with learning something new for now.
[Ben] - Cool beans. :) If you change your mind, you know where we live (articles go to articles@linuxgazette.net, BTW.)
The New Author Guide (excerpted from https://linuxgazette.net/faq/author.html ) that Ben appended at the end of his e-mail has been clipped. - Kat
[MNZ] - I have read the whole article (New authors guide), very nice.
One last thing, it's a little off topic but I just had to share it :) I found this on my google personal homepage under jokes:
"There was once a young man who, in his youth, professed his desire to become a great writer.
When asked to define "great" he said, "I want to write stuff that the whole world will read, stuff that people will react to on a truly emotional level, stuff that will make them scream, cry, howl in pain and anger!"
He now works for Micr0s0ft, writing error messages."
[Ben] - [laugh] I hadn't heard that one before; it's cute. Thanks, MNZ!
clarjon1 (clarjon1 at gmail.com)
Mon Jun 12 09:01:26 PDT 2006
Hey, Ben
Do you remember those Frink and Woomert stories you had? I enjoyed them: Humour, and education! Wish my school did that sorta thing... Anyways, think you could write one once in a while? Thanks!
[Thomas] - https://linuxgazette.net/126/lg_mail.html#gaz.1
[Ben] - Well, thanks for the encouragement. Life's been pretty full lately, but I've actually managed to toss a few ideas into a file destined to eventually become a Perl One-Liner of the Month. I'm not sure of when I'll have a chance to write it (might even be this month, since the article stream has been pretty thin), but it's definitely on the boards.
[Clarjon1] - Hey, ben, as to that one idea, with the spaceship and all? what if their language could be translated by decoding it with some perl modules? u know, like with morse and i dunno, mebbe uuencode or something? a few different formats maybe? just an idea to throw past you...
[Ben] - It's a cute idea - thanks! - but then I'd have to define their language. Hmmm... maybe it's similar to Klingon, or Igpay-Atinlay... :)
[Clarjon1] - PS: I wrote my first perl program! it's very simple, based mostly on one of the things in the perl tutorial, and it searches through lines in a file for the criteria...
[Ben] - Ah, good old "perl -wne'print if /criterion/' file.txt". :)
[Clarjon1] - I use it to keep track of appointments and important stuff. Right now i have to use a text editor to add stuff, but I'm sure I can get an input worked in eventually... I used that example of how to call up a filename, and then search for keywords. Works for me thus far!
[Ben] - Which Perl tutorial is this? There are a lot of them out there.
[Clarjon1] - Umm, one of the man pages...
[Ben] -
ben at Fenrir:~$ cat /usr/share/perl/5.8.8/pod/*|grep -c $
156874
Somewhere in the over 150k lines of'em. Oh well... :)
(BTW, do me a favor - insert a blank line between the text you're responding to and your response. This makes it much easier to read.)
[Ben] - As to reading and writing the sort of scheduler you're talking about, there's a little complexity involved; if you want to write to a file, you need to learn about the 'open' call and filehandles. Be careful swimming in those waters; if you're not careful, you'll get hooked and end up as a Perl programmer. Sure, people will throw lots of money at you, you'll be able to do what you want in a tenth of the time that you used to spend, and you'll have lots of fun, but who wants to be bothered with all that stuff? :)
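In the same one-liner spirit as above, a minimal sketch of that 'open' call, appending one entry to the file (the filename and entry are hypothetical):

perl -we 'open my $fh, ">>", "calendar" or die "open: $!"; print $fh "18/Jul: dentist\n"'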
[Clarjon1] - You're right. Enjoying oneself does NOT have a place, or time, in this society. What was i thinking??
Anyways, I've attached the script as it is right now...
[Ben] - Except that, of course, Mailman scrubs out attachments.
[Rick] - It actually doesn't. I'm guessing Jonathan accidentally omitted the attachment.
[Clarjon1] - Eep! Oh, shoot, i did, didn't I? And i brought my PC into the school to hog -- err, i mean make use of the high speed internet.... Might have been a good thing... Actually, the whole thing is a total of 34 lines long, and only about 8 lines are the actual program <blush>. I've heard so many complaints about people not documenting, I guess I went a little overboard... Ah, well.
Ok, i've pasted the actual bits of code, with a very minimalistic version of comments below. calprog.pl:
#!/usr/bin/perl
open(INFILE, "calendar") or die "Can't open calendar: $!";
# outta curiosity, can INFILE be changed to another name, like CALFILE
# or something?
$searchme = shift;
# This just grabbed the search term from the command line. How do you
# get more than one term? must find out...
while (<INFILE>) {
    print if /$searchme/; # searches thru the file for the searchterm
}
print "\n";
# this just spits out a newline when all is done, so that the last
# result doesn't immediately precede the commandline.
[Ben] - [quoting Clarjon1] -
> # outta curiosity, can INFILE be changed to another name, like CALFILE
> # or something?

Certainly. You should 1) construct the name using [A-Za-z_] characters, preferably not beginning with an underscore, 2) make the filehandle name either Sentence-capped or ALL-capped, 3) preferably make it indicative of what you're opening (e.g., 'In' for an input file, 'Out' for output, 'Data' for reading a data file, etc.), and 4) make it short so you don't wear your fingers out typing. :)
Most people just tend to use 'F' or 'Fh' in most cases.
> $searchme = shift;
> # This just grabbed the search term from the command line. How do you
> # get more than one term? must find out...

# Actually removes the elements from @ARGV
$a = shift;
$b = shift;
$c = shift;

or
# Does not remove, just copies them
( $a, $b, $c ) = @ARGV[0, 1, 2];
[Clarjon1] - Not much, but it's a start.
[Ben] - Not a bad start at all!
[Thomas] - I use 'tdl' to do this for me: https://www.rpcurnow.force9.co.uk/tdl/
[Rick] - The main Mailman queue process actually doesn't even know what an attachment is, let alone strip them out. The semi-separate Pipermail Web archiver module does present them, if present, on the archive Web pages as separate selectable links, rather than as an inline mess, as was formerly the case.
Pipermail used to be one of the parts of Mailman that notoriously sucked, and was included over superior competition such as MHonArc solely because somebody had (half-assedly) coded it in Python (before orphaning it). Fortunately, the Mailman guys eventually got around to making it Suck Less <tm>, bringing Mailman as a whole up to a wholesome mediocrity.
(There: I just eliminated myself from their Christmas card list, I think.)
[Ben] - Feel free to paste code bits into the body of your post - unless, of course, it's huge (at which point, it's best to toss it some place where it's downloadable and provide a link.)
[Rick] - What he said. FYI, this mailing list is configured to reject any message over 40 kB -- solely because that's the Mailman default. It can be adjusted as TAGgers wish, but of course any setting will displease somebody. ;->
[Clarjon1] - /me gets a bit back on topic... As for the language of the aliens, how about a mixture of:
1) igpay atinlay
2) morse
3) that code one of the previous adventures came up with for the spam-proof emails? (I think it was the one with the government officials who extracted the sysadmin, or someone like that...)
Suramya Tomar (security at suramya.com)
Thu Jun 15 15:35:10 PDT 2006
Hi Everyone, Couldn't think of a better title than that, so if it didn't make sense I blame the lack of caffeine.
This is a script that I wrote so I can give ssh access to my server to a friend whose computer's IP address changes every day, without any manual intervention on my side. I have been thinking about this for a while and this seemed like the easiest solution with the least work on my side. Plus it was an interesting challenge for me to get this to work :)
[Some of you might remember that I had asked a question about this topic a while back. I switched ISP's so my IP address doesn't change that often anymore but my friend is stuck with my old ISP and wanted to get access to my system. The result is this script.]
Now it's working, but since I would have to run this as root for it to work, I wanted to share it with you, so that if it has some security implications I haven't thought of, I can fix them before I start using it...
I had my friend create a DynDns account and had him configure his system to keep the IP address updated using one of their update clients.
On the server side I use the host command to get the latest IP address for his hostname and then give that IP address access to my system. Below is the script I made, if you see something wrong/unsafe let me know and I will try to fix it:
------------- Start get_ip.sh ---------------
#!/bin/bash

IP=`host blah.dyndns.com`

# This reads in the OLD IP address he had
read OLD < OLD_IP.dat

# Get the current IP address
if [[ $IP =~ '(.*)has address (.*)' ]]
then
    if [ "$OLD" != "${BASH_REMATCH[2]}" ]   # Didn't match so the IP changed
    then
        # So we log the current date/time and the new IP to a file
        echo `date` ": Removing access for" $OLD "and giving access to" ${BASH_REMATCH[2]} >> access_log.log

        # Set the static firewall rules
        Result=`/home/suramya/bin/S41firewall`

        # Create a new rule to allow the current IP address access
        Result1=`iptables -A INPUT -s ${BASH_REMATCH[2]} -p tcp -m tcp --dport 22 -j ACCEPT`

        # Drop all other connections to port 22
        `iptables -A INPUT -p tcp --dport 22 -j DROP`

        # Replace the old IP address with the new one
        echo ${BASH_REMATCH[2]} > OLD_IP.dat
    fi
fi
---------------- End get_ip.sh --------------
Any comments/feedback on this would be appreciated.
Thanks, Suramya
PS: If this looks ok and doesn't cause a major security hole I will release this under the GPL, and if you feel its worth it/fits then you can include it in LG.
[Thomas] - Bash 3-specific, eh? Nasty. That's going to be portable to about 0.1% of computers. Fine if you're running it on your own machine, but you sent it in here, to users who might not be so fortunate. Better to use egrep:
egrep '(.*)has address (.*)' <<< "$IP"
> if [ "$OLD" != "${BASH_REMATCH[2]}" ] > # Didn't match so the IP changed > then > > # So We log the current date/time and the new IP to a file > > echo `date` ": Removing access for" $OLD "and giving access to" > ${BASH_REMATCH[2]} >> access_log.log
Again, with the above, you'd have to ensure you separated out the matched clauses. You could use an array for this, or use awk.
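For instance, assuming host output of the usual "blah.dyndns.com has address 192.0.2.1" form, the awk route might look like:

IP=`host blah.dyndns.com | awk '/has address/ {print $4}'`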
> # Set the Static firewall rules
> Result=`/home/suramya/bin/S41firewall`
You should use $HOME here.
[Ben] - [Quoting Suramya]:
> Hi Everyone,
> Couldn't think of a better title than that, so if it didn't make sense
> I blame the lack of caffeine.

Many evils have been perpetrated due to that factor, yes...
> ------------- Start get_ip.sh ---------------
> #!/bin/bash
>
> IP=`host blah.dyndns.com`
>
> # This reads in the OLD IP address he had
> read OLD < OLD_IP.dat
>
> # Get the current IP address
> if [[ $IP =~ '(.*)has address (.*)' ]]

This would always fail on my system, since my 'host' output does not contain anything like that.
ben@Fenrir:~$ host www.dyndns.com
www.dyndns.com A 63.208.196.66

Perhaps something like "host blah.dyndns.com|awk '{print $3}'" would work better.
> then > if [ "$OLD" != "${BASH_REMATCH[2]}" ]# Didn't match so the IP changed > then > > # So We log the current date/time and the new IP to a file > > echo `date` ": Removing access for" $OLD "and giving access to" > ${BASH_REMATCH[2]} >> access_log.log > > # Set the Static firewall rules > > Result=`/home/suramya/bin/S41firewall`I notice that you didn't use $Result anywhere after this, so there's no reason for creating it - right? Also, since no one except you knows what is in that 'S41firewall' script, this script isn't going to be very useful - unless running 'S41firewall' is not a requirement.
> # Create a new rule to allow the current IP address access
> Result1=`iptables -A INPUT -s ${BASH_REMATCH[2]} -p tcp -m tcp --dport 22 -j ACCEPT`

Ditto for 'Result1' - although, presumably, you need to inform 'iptables' of your new IP, so it needs to be run.
> # Drop all other connections to port 22
> `iptables -A INPUT -p tcp --dport 22 -j DROP`

Why are you using command substitution (i.e., those backticks) anyway? Is this a cargo-cult programming dagger I see before me?

> # Replace the old IP address with the new one
> echo ${BASH_REMATCH[2]} > OLD_IP.dat
> fi
> fi
>
> ---------------- End get_ip.sh --------------

Here's my version:
#!/bin/bash
# Created by Ben Okopnik on Fri Jun 23 11:14:59 EDT 2006
old=
new=`host www.dyndns.com|awk '{print $3}'`

[ "$old" != "$new" ] && {
    $HOME/bin/S41firewall
    /sbin/iptables -A INPUT -s "$new" -p tcp -m tcp --dport 22 -j ACCEPT
    /sbin/iptables -A INPUT -p tcp --dport 22 -j DROP
    /bin/sed -i "s/^old=.*/old=$new/" $0
}

This one has the advantage of not requiring an external file for the current IP; that gets saved within the script file itself (note that the sed expression rewrites the entire 'old=' line, so the previous value is overwritten on each run). I'm assuming that you're clever enough to do 'chown root:root foo; chmod 0700 foo', which would set the right ownership and permissions, and you should be all happy and like that. :)
Set it up as a cronjob, and you'll never have to think about it again.
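For example, a root crontab entry to run it every ten minutes might look like this (the script path and schedule are hypothetical):

# crontab -e (as root)
*/10 * * * * /root/bin/get_ip.sh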
David Martin (davidmartin1 at gmail.com)
Sun Jun 18 00:15:09 PDT 2006
Hi Gurus,
I need urgent help with grep/awk please; I've spent over 17 hours pulling my hair out over this question. I hope you can help a Linux newbie.
My lecturer has asked me to do the following with this file, but I cannot get this onto one line:
grep -o "[[:digit:]]*[.][[:digit:]]*[.][[:digit:]]*[.][[:digit:]]*" httpdaccess.log > ipaddresses.log
One thing I did not do was to remove duplicate entries from the output. I should have run this from the shell instead.
grep -o "[[:digit:]]*[.][[:digit:]]*[.][[:digit:]]*[.][[:digit:]]*" httpdaccess.log | sort -u > ipaddresses.log | rzip ipaddresses.log
Here is the question:
Q5
Using an httpd log (which will be provided on the subject forum)
write a single command line using a pipeline of commands mentioned in
The complete guide to Linux system administration: Chapter 5 to
determine the top ten busiest dates (most objects accessed).
I've reformatted this thread, but I've left in Rick's suggestion/request for the general enlightenment of future querents. - Kat
[Rick] - Hi, David! I'm the listadmin, and notice in the logs that you seem to have withdrawn your posting, which was held for listadmin approval because its 1 MB attachment greatly exceeded the 40 kB ceiling on message size.
However, I'm sure the TAG gang would indeed like to help you. Is there a way you could cut the attachment size to just the log portion relevant to your problem -- or to put the logfile on a Web or ftp site and send us the URL rather than a big honkin' file? ;->
If 40kB seems tiny to you, please be aware that many TAG members are on slow dial-up lines. A sudden unplanned 1 MB download is, alas, not OK.
I'm going to forward your post, without the attachment, to TAG, by the way.
[Thomas] - This is NOT a "we do your homework" club. However, you're lucky: this question is of sufficient interest that it crops up from time to time, so I am going to answer it. Do you have your lecturer's email address? I'd like to know how many marks I'd get. :P
Did your lecturer give you that command, or is that your attempt to match an IP address? If your lecturer gave you that command he's an idiot. Because you're using "*", that could potentially match almost anything -- the only requirement is that the periods (.) match, with something vague in between.
Of course, it does work, but only because you've told grep to print the explicit match (via its -o switch). We can improve upon this a bit, though. I'm going to use egrep(1) here -- since you seem to be using grep(1), you're going to have to escape the character classes in the following:
egrep -o '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' ./access.log

This is much better. This now matches numbers, since the "+" operator ensures that each tuple delimited by a period (.) must contain at least one digit. This is true of an IP address. Note though that you can write that a different way:
egrep -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ./access.log

This is a little more explicit in that it matches between one and three digits, followed by a period (.). In the example before it, we were reliant entirely on there being a number (occurring at least once) of any length followed by a period (.); with the above example, each field is constrained to between 1 and 3 digits inclusive.
Note that there are many more complex regexps one can use to match IP addresses (which essentially look for validity in terms of IP class structures, etc.), although I shall leave those for you to pursue at your leisure.
[Jason] - Does the "IP class" structure really exist anymore? I had thought that when the number of hosts on the internet exploded, the solution was basically "Okay, we need all the address space we can get. Let's throw out IP classes and require explicit netmasks." How far off am I?
[Thomas] - You're not too far out. That is indeed the theory behind it, but for many organisations, their internal networks still use a class-based approach -- they more or less have to, otherwise how else is organised chaos supposed to work when you're a BOFH? :P
[Barry] - You're right on the nail actually. "Classful Notation" has not existed since 1993. It used three classes:
C - (now /24) 256 addresses (the first three octets fixed - 111.111.111.xxx)
B - (now /16) ~65k addresses (the first two octets fixed - 111.111.xxx.xxx)
A - (now /8) ~16.7m addresses (the first octet fixed - 111.xxx.xxx.xxx)

Using classful addressing, only fixed-size ranges could be assigned, thus wasting address space.
This problem was solved with "Classless Interdomain Routing" (CIDR) (pronounced Cider for the drinkers!). With CIDR, the network is determined with the "slash notation" where the slash indicates the number of significant bits if the IP address was written in binary. e.g.
192.168.1.0/24 - 256 addresses - 24 significant bits (eight bits per octet), leaving the first three octets fixed, as with a class C subnet.
192.168.1.0/25 - 128 possible addresses
192.168.1.128/25 - 128 possible addresses

So we can assign two subnets of 128 addresses instead of two class C's of 256 each.
/24's are still commonly used and (incorrectly) referred to as class C's for many internal LANs. This is not really an issue as these use private IP space from one of the three ranges:
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16

The most common of course being 192.168.1.0/24.
[Thomas] -
> One thing I did not do was to remove duplicate entries from the
> output. I should have run this from the shell instead.
>
> grep -o "[[:digit:]]*[.][[:digit:]]*[.][[:digit:]]*[.][[:digit:]]*"
> httpdaccess.log | sort -u > ipaddresses.log | rzip ipaddresses.log

I'd have used "uniq" here, but it's up to you. Note that the command above as you have it doesn't work -- at least not as far as rzip is concerned. What you probably wanted was something like:
egrep -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ./access.log | \
sort -u > ./ipaddresses.log && rzip ipaddresses.log

Assuming of course that rzip is capable of reading and writing to the same file. I don't know -- I've never used it, so it's something you'd have to check. Note what was wrong with your command initially: you were redirecting the output from sort(1) to a file, and then somehow expecting that output to still be available in the pipeline for rzip to interpret. That's not the case -- the contents of the pipeline from sort had been dumped to a file, hence the need to point rzip at the ipaddresses.log file. Again, because I don't know what rzip is like, you might be able to use:
egrep -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ./access.log | \
sort -u | rzip ipaddresses.log

> Here is the question: Q5 Using an httpd log (which will be provided
> on the subject forum) write a single command line using a pipeline
> of commands mentioned in The complete guide to Linux system
> administration: Chapter 5 to determine the top ten busiest dates (most
> objects accessed).

What a crap question. I have no idea why your lecturer thinks there needs to be a pipeline of commands -- one can implement this easily enough in awk or ruby without using anything else. Here's a start to what you want:
tail -n 500 ./access.log | \
egrep -o '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | \
sort -n | uniq -c | sort -nr -t ' ' -k 1

I've used tail so I only see the last five hundred or so lines (my access.log file is huge). I've used uniq to count and display the number of unique occurrences of each matched IP address. The sort command (overly superfluous with its options here, but you get the idea, I'm sure) sorts on the first column for frequency and reverses the order.
Using AWK, here's the same thing:
tail -n 500 ./access.log | \
awk '$1 ~ /[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/ {print $1}' | \
sort -n | uniq -c | sort -nr -t ' ' -k 1

You then want to match the date. Hmm. That depends on the format of the log file your lecturer has given you. Of course, what has been matched above is an accumulation of IP-address frequencies across all dates, and not anything specific. For the access.log file I am looking at, the date entry looks like this:
[15/Nov/2005:00:18:46 +0000]You can then extract the date and construct an array based on IP addresses that date entry matched in your file. Example:
tail -n 500 ./access.log | awk 'BEGIN {FS="["}; {print substr($2,1,11)};'

That matches all the dates within the 500 lines returned from tail. I suppose crudely you could come up with something like this:
tail -n 500 ./access.log | \
awk 'BEGIN {FS="["} {a=split($2,array,":")};{for (i in array) $1 ~ /array[1]/; print $1, array[1]}'

Then it's up to you to put the results into an array and count the occurrences. I've given you more of a head start than perhaps you deserve on this one. :P
[Ben] - As Thomas said, we don't normally help with homework questions - but it sounds like you actually have put some time into this one rather than just dumping it in our laps, which I suppose deserves some consideration. Again, like Thomas, I'm not going to give you a direct answer - it is, after all, supposed to be your homework, and you're supposed to ask your instructor if you just get stuck on an assignment - but I'll be happy to give you a hint.
> My lecturer has asked me to do the following with this file, But I
> cannot get this on one line:

I've read the specification (i.e., the question you were asked), and it doesn't say "on one line"; it says "a single command line". Since Linux (or, more precisely, the shell CLI) allows you to chain processes and use statement separators, you could (theoretically) write a 100kB-long program "on one line" - so it's not much of a problem.
> grep -o "[[:digit:]]*[.][[:digit:]]*[.][[:digit:]]*[.][[:digit:]]*" > httpdaccess.log > ipaddresses.logIs there a reason that you want to use character class names instead of explicit character classes? '[0-9]' works just as well as '[[:digit:]]' (barring some vague mutterings about the $LANG variable, which doesn't apply in shell-based scenarios anyway.) As well, the above expression isn't very useful; if you're trying to match an IP, then something like
egrep '\<([0-9]{1,3}\.){3}[0-9]{1,3}\>'

is probably much more useful. On the other hand, matching an IP has nothing to do with the solution to the stated problem - which, I suppose, is why I'm giving you a complete answer here. :)
> One thing I did not do was to remove duplicate entries from the
> output. I should have run this from the shell instead.

If you're trying to figure out the busiest dates, then removing duplicate entries is definitely NOT what you want to do - at least not initially.
What I'll do here is give you a general idea of how the task is done. I'm assuming that you understand the available tools well enough to implement a solution once you understand how to look at the problem (if you don't, then you're beyond any help that I'm willing to provide.)
The task essentially comes down to creating a frequency counter. This is a fairly standard programming methodology, used a lot in - ta-daa! - log analysis. What you need is a list of unique dates, and a number of hits for each of those dates - essentially a line count of anything that matches them.
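The shape of such a frequency counter in a pipeline, shown here counting words rather than your log's dates so it doesn't hand you the answer outright (the filename is hypothetical):

# count word frequencies in a file, most frequent first
tr -s ' ' '\n' < somefile.txt | sort | uniq -c | sort -rn | head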
I've taken a look at your log (being one of the listadmins has its privileges :), and it's nothing more than Apache's CLF (Common Log Format) - i.e.
210.49.49.147 - - [18/Apr/2004:22:59:44 +1000] "GET /ASGAP/gif/forest2.gif HTTP/1.1" 200 1857
203.40.195.112 - - [18/Apr/2004:23:01:33 +1000] "GET /ASGAP/gif/bguide.gif HTTP/1.1" 200 288
134.115.68.21 - - [18/Apr/2004:23:03:42 +1000] "GET /ASGAP/gif/forest.gif HTTP/1.0" 304 -
150.214.167.133 - - [18/Apr/2004:23:04:54 +1000] "GET /AFVL/tagasaste.htm HTTP/1.0" 200 3266
203.40.195.112 - - [18/Apr/2004:23:06:03 +1000] "GET /ASGAP/jpg/styphels.jpg HTTP/1.1" 200 5318

in which fields are defined as
IP identd user [dy/mon/year:hh:mm:ss zone] "request" status size

Matching the date is very easy: it consists of the six characters following a square bracket. You can isolate those - think of what tool you need to do that, since that's the main "processing" you need to do! - and get a unique list of them. Once you've got that unique list, you can loop over it and simply count anything that matches a square bracket followed by those characters, then sort the counted output. If you want to get really fancy, you can report only the first line of the count, which will give you the largest count - i.e., the busiest day.
There is at least one standard Unix program that allows you to do all that in one pass; however, using it is probably a bit complex for where you are at the moment. Implementing it as I described above should work fine for you, and only requires relatively basic tool knowledge.
(bvdp at xplornet.com)
Thu Jun 22 09:30:42 PDT 2006
Hi all. I'm having some "odd" connection problems and wondering how to track it down.
First off, I'm on a 2way satellite connection with xplornet (same as wildblue in the US). Not ideal, but beats my other "choices". Rural life is wonderful, but not perfect.
The satellite comes to me though a modem which is pretty much the same as cable modem. I just plug the ethernet cable in and it runs. From what I've found out, I can not get any info from the modem. Only the ISP can do that.
The modem does have some status lights: POWER, CONNECTED, IN and OUT. The last 2 indicate computer <> modem traffic. The CONNECTED light is solid when there is a connection to the satellite.
Now, for the most part all this works just fine. But at other times I seem to lose the connection ... When that happens the CONNECTED light is still on, but my computer doesn't appear to have a connection to the internet. The IN/OUT lights do blink, so does that tell me that the modem/computer link is up?
I blame the modem/satellite. The ISP tells me to reboot the computer (too much time debugging Windows?). Quite often this does work. But this morning it didn't; repowering the modem did the trick.
So, I could call the ISP ... and after a long hold they will tell me that it has to be down from them to debug, that it isn't the modem, must be me, etc.
What I'm after is some debugging ideas. What can I run on my computer to see if there is a problem here.
I figure that we're down to a few possibilities:

1. The modem is flaky.

2. The satellite connection is flaky (my friend has the same system and is NOT having these problems).

3. I'm having a software problem. Buffer overflows or something. Which means that a reboot would fix it (sometimes it does).

4. I'm having a hardware problem with the on board ethernet. Don't know if a soft reboot would effect this, but the power cycle today didn't.

Maybe I should have a router in the chain? Not sure what that would prove.
[Thomas] - Let's assume for the moment the fact that you're using a satellite as a means of connecting to the Internet is a red-herring. And now let's assume that the issue lies with your modem and more likely your computer.
[BobV] - From my understanding, the fact that it is satellite should make no difference. Well, expect that sat is not as robust as wired solutions.
[Thomas] - I've had issues (still do) exactly as you describe, save for the fact that my main server does internal NATing, and everything else. Sometimes this too will stop forwarding requests to the outside world despite the fact the router is still connected.
In my case, I suspect it's ip_conntrack filling up its bit bucket. In your case I suspect the software on the modem is faulty. Get your ISP to get off their virtual arse and fix it. Keep a log of when it "freezes", and then wave that at them the next time they tell you it couldn't possibly be the modem.
[BobV] - I'm confused. You're saying it is my computer at fault or the modem or the ISP?
[Thomas] - The modem most likely. If you can't ping anything once your connection freezes then it must be the modem.
[BobV] - If ip_conntrack is filling up ... ummm, can I do anything with this by reducing txqueuelen?
Of course, since posting it has been 100% :)
[John] - Well, I have zero experience with satellite equipment, but I have had my share of connection issues with my ADSL setup over time, so I'll mention some of the things I do when confronted with such problems.
[quoting BobV]
> I figure that we're down to a few possibilities:
>
> 1. The modem is flaky.
>
> 2. The satellite connection is flaky (my friend has the same system and
> is NOT having these problems).

Could be attributable to differences in your respective environments. I would assume that a satellite link is somewhat line of sight. I could be wrong, but if you're surrounded by a lot of trees, for example, where there is dense foliage between the dish and the satellite, that could affect signal strength.
[BobV] - Well ... the friend was running his system direct to an XP box, but has since had a Linux box installed which he is using as a router. He's distributing his connection to some neighbours.
[John] - The thought later occurred to me that, your friend willing, you might consider swapping modems with him for a few days to compare performance.
[BobV] - Can we call a sat link at 22,500 miles "line of sight"? But seriously, the trees etc. should not be the issue. According to the ISP my SNR (signal-to-noise ratio) is "as good as it gets". No magic trees jumping up in the way. Now, really bad weather can affect all this, but the times I'm talking about are not bad weather.
[John] - Maybe not the best choice of words, and I rather doubt it myself - just another point to consider, along with my disclaimer that I have no experience with a satellite link :). Probably better said that a satellite link would be to some degree vulnerable to changes in atmospheric conditions.
> 3. I'm having a software problem. Buffer overflows or something. Which
> means that a reboot would fix it (sometimes it does).
[John] - Depends on what kind of connection protocol - if that's the right terminology - your modem uses. Perhaps one of the easiest to deal with is just a plain old ethernet connection, where the port of your modem provides a routable IP address.
[John] - Many ISPs these days have moved to setups using some variant of ppp, such as PPPoE, which requires an additional daemon running on your connected host to support it. On a Linux host using PPPoE, the most popular software for that is from Roaring Penguin. Depending on your Linux distro, that can be tricky to set up. But since you didn't mention it, I'll assume that it's not an issue here.
[BobV] - Yes. I think that is the case here. The IP is dynamic, but I don't think it changes very often. Hmmm, could this be a DHCP issue?
[John] - Certainly not out of the realm of possibility.
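If you want to rule out a stale lease, forcing a renewal is a cheap test -- a sketch assuming the interface is eth0 and the ISC dhclient is installed:

$ sudo dhclient -r eth0    # release the current lease
$ sudo dhclient eth0       # request a new one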
> 4. I'm having a hardware problem with the on board ethernet. Don't know
> if a soft reboot would effect this, but the power cycle today didn't.
[John] - Always a possibility, although usually of low probability. The way to test for that is with a different host and / or NIC.
[BobV] - Yes, that was one of the ISP's suggestions. I'll have to cobble another box together to see. If I have time ..
> Maybe I should have a router in the chain? Not sure what that would
> prove.
[John] - Principal merits of a router would be in the realm of connection sharing (NAT, etc) for multiple hosts and firewall setup.
Just a WAG on my part, but I would guess that your modem is rather similar to ADSL, in that it uses a synchronous connection. I would further guess that the CONNECTED light indicates whether or not the modem is synchronized. In my experience, that's usually where my connection problems arise. Re-initializing the modem usually takes care of that issue by forcing connection renegotiation with the port on the ISP side.
To summarize, my procedure is something like this:
1) From a terminal command prompt: "ifconfig" to see the status of the host network interfaces. You should see a response something like:
    wlan0     Link encap:Ethernet  HWaddr 00:E0:98:49:85:6D
              inet addr:192.168.1.116  Bcast:192.168.1.255  Mask:255.255.255.0
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:6042 errors:10 dropped:0 overruns:0 frame:10
              TX packets:6281 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:5052344 (4.8 MiB)  TX bytes:955796 (933.3 KiB)

The wlan0 above is the name that the Linux host is giving to the interface. Yours could be different, such as 'eth0'. One can also ping the local interface to see if it's talking to the host. In this case, "ping 192.168.1.116".
[BobV] - Question: when pinging the local host like this, does the chain leave the local box?
[John] - No, it doesn't. The whole point of that is to confirm that your NIC, as part of the link, is functional at that particular time. The general approach I use is to start the diagnostic process at the closest point of origin to the host and work outward. The link to the outside world obviously includes things other than your modem.
Although you say "I blame the modem/satellite", a comprehensive troubleshooting process should, IMO, include looking at the other components involved as well. Although rarely, I have experienced situations where the link problem was due to the NIC, and a reload of the NIC driver resolved the issue.
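For the driver-reload case, the shape of it is something like this (run as root; the module name e100 is only an example -- check lsmod's output for what your NIC actually uses):

    lsmod            # list loaded modules; find your NIC's driver
    rmmod e100       # unload it (example module name)
    modprobe e100    # load it again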
[BobV] - I'm not sure what this proves other than the fact that the IP address is valid and correct? Or does it have to go to the modem first? Which would be an indication that the modem is "active".
[John] - No, just that the card is responding to the host.
[John] - 2) "netstat -nr" should show the gateway address, indicated with a 'UG' flag on the same line. Depending on whether your modem is running bridged or as a router (depends on the equipment and the ISP's choice of configuration), this could be an interface on the modem (functioning as a router / DHCP server) or an interface that the ISP is providing. A successful ping (a response is echoed to your terminal screen, with a time in milliseconds indicating the latency of the response) to that IP would indicate that the problem is outside of your host / modem environment.
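Concretely, the exchange looks something like this (output abridged, and the addresses invented to match the earlier ifconfig example -- yours will differ):

    $ netstat -nr
    Kernel IP routing table
    Destination    Gateway        Genmask         Flags  Iface
    192.168.1.0    0.0.0.0        255.255.255.0   U      wlan0
    0.0.0.0        192.168.1.1    0.0.0.0         UG     wlan0
    $ ping -c 3 192.168.1.1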
[BobV] - Funny that you mention ping (again). Last time I had the problem, I tried to ping my ISP:
    bob$ ping xplornet.com
    PING xplornet.com (207.179.143.226) 56(84) bytes of data.

    --- xplornet.com ping statistics ---
    7 packets transmitted, 0 received, 100% packet loss, time 6000ms

At the time I thought this "showed something". Hmmm, later I find that the ISP is blocking.
[John] - It may not be conclusive, however, because in some cases the ISP will block pings (as mine does). In that case, you can pick some known Internet address. Use a number rather than a name, though (such as 209.73.186.23 rather than www.yahoo.com), to eliminate DNS issues from this step. If you succeed with the ping to the IP address, but not to the name address, then the problem is not a connection issue, but is in name resolving - most likely a problem in the ISP's domain.
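In other words, using the same example addresses:

    ping -c 4 209.73.186.23    # by IP address: exercises only the connection
    ping -c 4 www.yahoo.com    # by name: additionally exercises DNS

If the first succeeds where the second fails, suspect the resolver rather than the link.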
[BobV] - Yup. Interesting that I can ping the IP gateway from netstat -nr
[John] - 3) Re-init the modem. Even with a steady-on sync light, the connection status could be flaky, such as might be caused by a short interruption in the power to the modem.
Step 3) is important, particularly if your power line has noise spikes, brown-outs, or occasional interruptions (fractions of a second of AC cut-out can leave your modem in an undetermined state, without it being apparent from looking at the status lights). I'm just giving broad suggestions here of potential problem issues. Obviously, I have no direct knowledge of the particulars of your AC power conditions.
[John] - 4) Depending on your distro (Slackware being an exception, as it uses a BSD-style init rather than SysV - could be different now, though), you can try re-initializing the networking subsystem on your host:
/etc/init.d/networking restart
[BobV] - Mandrake. Yup, tried that without success.
[John] - That often does it for me, especially after resuming from a suspend, if my link dies.
[John] - Works on most Debian-derived distros, as well as SuSE, IIRC.
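On the Red Hat / Mandrake lineage, the script is usually named 'network' rather than 'networking' -- worth a try, though the exact name varies by release:

    /etc/init.d/network restart
    # or cycle a single interface:
    ifdown eth0 && ifup eth0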
This procedure is not exhaustive, but should help get you started.
[BobV] - Question: would pulling the ethernet cable in/out prove or do anything?
[John] - Not for any reason that I can cite, except for the possibility of a flaky cable - not to be dismissed out of hand. Swapping the cable would be the tack to take there. Dis/re-connecting just stirs it up a bit, but would be quite hit or miss.
John H (anonymous at null.com)
Tue Jul 11 11:04:46 PDT 2006
TAG brilliantly covered this problem in LG57: A pipe implicitly forks a subprocess. In olden days, the right side of the pipe was forked. This caused stuff piped into read statements to not work properly. So, modern ksh forks the left side of the pipe.
Here's a stripped down version of what I am trying to do:
    #!/bin/ksh
    y=0
    for w in 1
    do
      for z in 1 2 3
      do
        for x in 1 2 3 4 5
        do
          y=$(( ${y} + 1 ))
          echo ${y}
        done
      done
    done | grep "somedata" >> /Some/output/file
    for x in 1 2 3
    do
      echo ${y}
    done
At first glance, one would expect the above to work like this: The for w (and nested for loops) increment ${y} 15 times, then the for x loop at the bottom prints out the value of ${y} (in this case, 15) three times.
However, this is not the way it works. Applying TAG's answer from LG57, everything left of the pipe (in this case the entire for w loop) is forked as a separate process. All the incrementing of ${y} is done in this subprocess, and it all "goes away" when that process completes. ${y} is still 0 in the parent process (script).
My question is: Is there a way to explicitly tell ksh which side of the pipe should get forked? In the above example, I want the right side of the pipe forked, but ksh defaults to forking the left side. I've Googled "+ksh +pipe +subprocess" (I like the old AltaVista Syntax), with no luck. The book UNIX in a Nutshell has been no help. I tried putting the right side in parens:
done | (grep "somedata" >> /Some/output/file)
But that does not appear to work, either. ${y} still comes out as 0. In a worst-case scenario, I could simply dump the output to a temp file, and have a separate line of code pick up the temp file and work with it, avoiding the pipe altogether. But, that would be messy, and I don't want to clean up the temp file afterwards. If there is a more elegant solution, I cannot find it.
If you have any ideas, please advise. Thank you for your time.
[Thomas] -
> My question is: Is there a way to explicitly tell ksh which side of
> the pipe should get forked? In the above example, I want the right

No, there isn't.
> side of the pipe forked, but ksh defaults to forking the left side.
> I've Googled "+ksh +pipe +subprocess" (I like the old AltaVista
> Syntax), with no luck. The book UNIX in a Nutshell has been no help. I
> tried putting the right side in parens:
>
> done | (grep "somedata" >> /Some/output/file)

The best thing you can do is either do all of your work within a subprocess, or, perhaps more conveniently, avoid it altogether. Here's a contrived example:
    foobar=0
    while read line; do
        foobar="$((foobar + 1))"
    done < /etc/passwd
    echo $foobar

It doesn't manipulate anything you've asked, but it does demonstrate the principle you can use within your own example.
[Ben] - Doing your work within the subprocess, as Thomas mentions, is the right answer. You could, say, echo your output to STDERR (since your STDOUT is being redirected):
    ...
        do
          y=$(( ${y} + 1 ))
          echo ${y}
        done
      done
      for x in 1 2 3
      do
        echo ${y} >&2
      done
    done | grep '[0-9]' >> output_file

Another, perhaps more "honest" version would be to save your output in a variable for later use:
    ...
        do
          y=$(( ${y} + 1 ))
          echo ${y}
          out=`echo "$out $y"`
        done
      done
    done | grep '[0-9]' >> output_file
    echo $out
[Francis] - I don't think this last variant will work -- it's the same problem the original poster raised.
Essentially, it is
    $ y=$(($y + 1))
    $ echo $y
    $ y=$(($y + 1)) | cat
    $ echo $y

or even
    $ t=7
    $ echo $t
    $ t=6 | cat
    $ echo $t

and spot the (lack of) difference with the outputs. (Both echos print 7: the 't=6' runs in a subshell, so the assignment never reaches the parent shell.)
[Ben] - Note, by the way, that this is not KSH-specific; Bash does the same thing.
[Francis] - zsh 4.2.5 (i686-pc-linux-gnu) is the only bourne-alike I have here that gives me the hoped-for output with the above examples. Using that may or may not be an acceptable workaround.
...but now that I test the original script with that zsh, I see that it doesn't print 15.
Sorry for the noise...
[Ben] - As best as I can reconstruct it, I must have been looking at an edited but unsaved version of the script while executing it in another xterm (meaning that the previous version, one that does print, was getting executed.) What makes it really silly is that I have the F5 key in Vim set up to save, chmod, and execute the file I'm looking at, so there's no reason to have been doing that.
I always either test the scripts that I discuss or mention that they're untested, and I thought I was doing that this time as well. Oh well, better next time.
[JohnH] - Thank you, all, for your assistance.
[Ben] - Glad it was helpful, John.
[JohnH] - Thomas - My original thought was the same as yours: avoid the pipe (and the need for interprocess communication) completely. That is what I meant by the "worst-case" scenario in my original post: write to a temp file, then make grep a separate step. However, the data being piped was very long and complex, and would have generated a big file.
[JohnH] - Ben - Your idea of feeding stuff through standard error is brilliant beyond evil! According to Linus, you are not truly a hacker until someone else calls you one. With that kind of "Evil Genius" thinking, I'll say it: you're a hacker. We'll get to your "more honest approach" in a moment.
[Ben] -[grin] Thanks. Using STDERR for messages is something that should be done more often than it is, anyway; the standard Un*x idea of making every program a filter doesn't work without it (if you're doing something like 'foo|bar|xyz', and your error messages from 'foo' go to STDOUT, it's going to mess up your whole concept something fierce.) It also provides a nice bit of granularity: the ability to do 'foo > foo_out.txt 2>foo_err.txt' (i.e., capture the output stream in one file and the errors in another) can be very useful on occasion.
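A two-line illustration of that split (the file names are arbitrary):

    # "data" lands in out.txt; "oops" lands in err.txt
    { echo "data"; echo "oops" >&2; } > out.txt 2> err.txt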
[Thomas] - This is where I use 'exec' (portable, too):
    foobar=0
    exec 4<&0            # Best to duplicate STDIN, else the change is permanent.
    exec 0< /etc/group
    while read a; do
        ....
    done
    exec 0<&4            # Restore STDIN.
    exec 4<&-            # Close FD 4.
[Ben] - Sorry, I'm missing the utility of duplicating STDIN here. I understand why you'd want to do it if you've got, say, some specialized processing you want applied to STDIN before using it, or if you want to launch a child process and have access to its input - but what are you trying to do with '&4'?
[JohnH] - Francis - I made the same assessment as you of Ben's "more honest approach." But one thing struck me, and got me going in a different direction: why was he using an echo in backticks to do what a simple out=${y} would accomplish?
[Ben] - The shell doesn't have a concatenation operator, so I was building a list ('out=$y' would simply replace whatever was already in '$out'.) I was also trying to replicate the output that you would have had if things worked as you thought they should (there, parse that three times fast) - meaning that each number got printed on a line by itself. So, I used 'echo' and the backticks to stick a newline between each element. Saying 'out="$out\n$y"' and 'echo -e $out' at the end would have worked just as well.
[JohnH] - That unusual use of the syntax led me to my final solution, which is as follows:
      (echo "#!/bin/ksh"
       echo "y=${y}") > /a/temp/file
    done | grep "somedata" >> /Some/output/file
    . /a/temp/file
    rm /a/temp/file
    for x in 1 2 3

It's an ugly hack, but effective. In my production code, ${y} is a series of variables keeping statistics on what is happening in the loop. This is a small, finite number of variables which can be written out to a small file and read back in on the other side of the done. True, I still have temp file maintenance to deal with, but not the large, complex file I would have to deal with if I were to write my output out and then grep.
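(A small refinement, where mktemp is available, would be to let it pick a unique file name so two runs can't collide -- a sketch, with the loop-and-grep machinery elided as "...":

    tmpfile=$(mktemp /tmp/stats.XXXXXX) || exit 1
    ...
    . "$tmpfile"
    rm -f "$tmpfile"

)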
Thanks again, to all of you for your assistance.
Tommyjanie (Tommyjanie at aol.com)
Fri Jul 14 17:56:47 PDT 2006
This originally had no subject line, but I've taken the liberty of entitling it "Excel and Serial Port" as an example of the sort of subject it should have had. - Kat
I am looking for a way to import and export data from Excel cells (8 of them) using the serial port. Also, I would like to use Linux; four inputs and four outputs for control, where inputs will be 0 to 12 volts or 0 to 5 volts. My basic problem is: how do I get data in and out of Excel or a similar spreadsheet?
[Brian] - I presume you don't mean that you want serial port access to excel, but that you want to send spreadsheet cell values out the serial port, and read values therefrom back into a spreadsheet. Yes?
Googling for "linux excel csv" yielded some interesting results, as did this command on my Kubuntu workstation:
    bilbrey@vimes:~$ apt-cache search excel | grep excel
    w3m - WWW browsable pager with excellent tables/frames support
    libdbd-excel-perl - provides an SQL interface (via DBI) for accessing Excel files
    libspreadsheet-parseexcel-perl - access information from Excel Spreadsheets
    libspreadsheet-parseexcel-simple-perl - A simple interface to Excel data
    libspreadsheet-writeexcel-perl - create Excel spreadsheets
    plopfolio.app - Clone of Serence's excellent KlipFolio for GNUstep
    w3mmee - WWW browsable pager with excellent tables/frames, MB extension

It looks like libspreadsheet-* may be useful for you, at least on a Debian-based system. You don't mention which distribution of Linux you're favoring.
Good luck. Note: You may also just want to export your excel spreadsheets to csv, and work with them in that format, then pull csv back into excel if that's what is needed for reporting or whatnot.
[Ben] - Tommy, you need to decompress your questions a bit. You can't "import and export data from excel cells using the serial port" - as far as I know, Excel has nothing to do with serial ports (or any other kind.) Perhaps what you're saying is that you'd like to connect your computer to some peer or a network via a serial connection, and you would also like to be able to exchange data, which is stored in an Excel spreadsheet, between your computer and another node.
I'm going to proceed on that assumption, just in case I got it right, since some of the answers may be useful to our readers. However, in the future, I would appreciate it if you'd save us all from having to guess, and perhaps end up answering the wrong question.
As to communicating over a serial port, the traditional and common way to do that is a modem. I won't belabor the obvious for that case, since you can get information on configuring one in many places. On the other hand, if you're trying for a direct serial-to-serial connection, I'd suggest taking a look at the end of the PPP-HOWTO (section 29, "Using PPP across a null modem (direct serial) connection"); I found it very helpful several years ago when I was doing exactly that. Do make sure that you use a null-modem cable instead of a standard serial cable; it will not work with the latter.
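(If what you actually need is raw byte traffic rather than a network link, the serial port can also be driven directly from the shell. A minimal sketch -- the device name, line settings, and data format here are all assumptions; adjust them to your port and peer:

    stty -F /dev/ttyS0 9600 cs8 -cstopb -parenb raw   # 9600 8N1, raw mode
    echo 'C8=42' > /dev/ttyS0    # send something; this format is hypothetical
    head -c 64 /dev/ttyS0        # read back up to 64 bytes of reply

)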
Once you have the hardware portion of this configured and working, the next question is: how will you transport the data between the machines? Since you didn't state any requirements here, it's impossible to answer the question in technical terms - except perhaps to present a range of options. You can copy the file back and forth between the two machines; you can send just the differential data and apply it on the appropriate machine; you could set up a server-client pair and have the data modified on the fly.
The first option requires the least knowledge to set up but is the most "manual" of the three. The second one requires just a little less interaction, but still requires a human to bang on the keyboard. The last one can be completely automatic - the data comes in, the client pings the server, the server modifies the data and notifies the client that it's been done, and all the human has to do is check the log file for failures once in a while. It does, however, require a fair bit of programming expertise.
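For the middle option, rsync is the usual tool for shipping just the differences -- a sketch, assuming the link is up and the far machine answers to the (hypothetical) name 'peer':

    # copy only the changed parts of the file across the link
    rsync -av spreadsheet.xls user@peer:/some/path/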
Zohaib Najeeb (37zohaib at niit.edu.pk)
Tue Jun 20 01:17:04 PDT 2006
This was originally entitled "Need help". - Kat
Hi, I want to know if there is a way I can run my applications at Linux startup. I have Fedora Core 2, and I have written an application in Java. I want the Java application to run at startup. Could you please tell me an easy way?
Regards, Zohaib Najeeb
[Thomas] - Is this Java application using Swing or AWT such that it's going to need a GUI?
For the case of making X11 boot automatically without a username or password, you should probably use runlevel 4 (assuming you're not using Slackware, which uses a BSD-style init). Historically, runlevel 4 has been reserved for people's own nefarious uses -- although on Debian it doesn't matter, in that runlevels 2 - 5 inclusive are all the same.
So... how do you go about that? /etc/inittab is read by init each and every time the system boots -- indeed, it is this file which determines what the default runlevel will be, as in:
    id:2:initdefault:

... Changing that number to 4 should suffice. Then you'll want to add a line which looks something like this:
    T4:respawn:/bin/su - some_user -c startx

Hence, for whichever user you want to have logged in automatically, you would replace 'some_user' above with that username. This line changes to that user and runs startx. Probably setting that user's password to nothing would help here.
Telling startx what to load is easy. So many people have been brainwashed with the crap that is GNOME and KDE that they've forgotten how all of this works. Startx(1) will read ~/.xinitrc by default, or ~/.xsession if ~/.xinitrc does not exist. You most likely want ~/.xinitrc in this case, hence:
    #!/bin/sh
    java /path/to/my_java_application &   # background it, so fvwm can start
    exec fvwm

might look like something you want. That will start your Java application and load the window manager up with it -- you should change that to suit your own needs, ensuring that once you've saved the file, you run:
    chmod 700 ~/.xinitrc

... since it should be treated like any other shell script. If, on the other hand, you're already using GDM (ugh!), then that has an option to enable automatic logins in the file /etc/gdm/gdm.conf:
    AutomaticLoginEnable=true
    AutomaticLogin=your-user

That's simple, eh? If all you wanted was to have this Java application start without any automatic logging in of a user, then just add your necessary details to ~/.xsession, and read in part the following:
https://edulinux.homeunix.org/fvwm/fvwmchanfaq.html#cf14

If this Java application doesn't use Swing or AWT, and/or you just want to run it at the console, then you will need to add something like the following to /etc/init.d/rc.local, or /etc/init.d/rc.boot, or some other startup file your distribution defines:
    su - my_user -c "java /path/to/java_application" &

(Note the quotes: su's -c option takes a single command string.)