...making Linux just a little more fun!
Dr. Parthasarathy S [drpartha at gmail.com]
I have come across this thing called FIFO named pipe. I can see how it works, but I find no place where I can put it to good use (in a shell script). Can someone give me a good application context where a shell script would need to use FIFO ? I need a good and convincing example, to be able to tell my students.
Thanks TAG,
partha
-- 
---------------------------------------------------------------------------------------------
Dr. S. Parthasarathy                  | mailto:drpartha at gmail.com
Algologic Research & Solutions        |
78 Sancharpuri Colony                 |
Bowenpally P.O                        | Phone: + 91 - 40 - 2775 1650
Secunderabad 500 011 - INDIA          | WWW-URL: https://algolog.tripod.com/nupartha.htm
---------------------------------------------------------------------------------------------
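[ One classic context, offered here as a minimal sketch rather than as the thread's answer - the path /tmp/myfifo and the log format are made up for illustration: a FIFO lets a long-running consumer and any number of independently started producer scripts talk to each other without temporary files. ]

mkfifo /tmp/myfifo

# consumer: each read blocks until some producer writes a line
while true; do
    if read line < /tmp/myfifo; then
        echo "$(date '+%F %T') $line" >> /tmp/central.log
    fi
done &

# producers: any other script can now "log" with a plain redirect
echo "backup started" > /tmp/myfifo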
[ Thread continues here (13 messages/23.98kB) ]
Henry Grebler [henrygrebler at optusnet.com.au]
Hi,
[I wrote what follows nearly a month ago. Since I started working full-time-plus, I have not been able to respond as promptly as I would prefer. For many, I suspect, the subject has gone cold. Nevertheless, ...]
As often happens, I have not phrased myself well. [sigh]
I apologise for trying to respond quickly, when clearly a fuller and more considered response was indicated.
Let me try again.
As I understand it, this is the original statement of the problem:
If you've ever tried to delete Emacs backup files with
rm *~
(i.e. remove anything ending with ~), but you accidentally hit Enter before the ~ and did "rm *", ...
Silas then goes on to suggest a solution.
Perhaps I took the problem statement too literally. When I wrote cleanup.sh, it was aimed exclusively at emacs backup files. When I talked about it, I was thinking exclusively of emacs backup files. I am not aware of any other place where tildes are used.
If it's a file I've been editing with emacs, I've usually created it. So I get to choose its name.
The message I am trying to convey to TAG's broad readership is this. If you are happy with Silas's solution, that's fine. If you are happy with any of the other solutions provided, that's fine too. I myself have felt the need to deal with this problem. I offer my approach to dealing with emacs backup files for your consideration.
Part of my approach is the cleanup script; this is the technical outcome, the tangible I can offer to others.
More important is a holistic approach to the problem as originally stated. I took it to mean that certain behaviours are more risky than others. This is again in the eyes of the beholder. Some people always alias 'rm' to 'rm -i'. If they do this, they will always be prompted before deleting any file. If they specify multiple files, there will be a prompt for each file individually.
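In bash, that alias is a one-line addition to ~/.bashrc:

alias rm='rm -i'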
Such an approach would have saved Silas.
Again, if such an approach appeals to you, by all means adopt it.
It seems to me ill-advised to adopt a safe behaviour in one circumstance and a risky behaviour in another. The nett effect will not be as safe as your safest behaviour, but rather as risky as your riskiest behaviour. (A chain is no stronger than its weakest link.)
So, first of all, according to my philosophy, don't go creating files with dangerous characters in them. It's really not important to me to call a file 'a b' when I can just as easily call it 'a_b'. If I download a file with spaces in the filename AND I plan to edit it with emacs, it's worth renaming the file, replacing spaces with underscores (or any other innocuous character).
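A minimal sketch of such a rename in bash (the filename is invented):

f='some downloaded file.txt'
mv "$f" "${f// /_}"    # replace every space with an underscore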
And I am strongly recommending this behaviour of mine to others.
If you live largely in a Linux world, you should rarely encounter filenames containing dangerous characters. If you are creating files, you can choose to avoid dangerous characters. Or not. It's up to you.
[ ... ]
[ Thread continues here (2 messages/9.65kB) ]
Jimmy O'Regan [joregan at gmail.com]
Since I got commit access to Tesseract, I've been getting a little more interested in image recognition in general, and I was pleased to find a Java-based 'face annotation' system on Sourceforge: https://faint.sourceforge.net
The problem is, it doesn't support face detection on Linux, but it does have a relatively straightforward way of annotating the image using XMP tags. Perl to the rescue - there's a module called Image::ObjectDetect on CPAN... it's a pity the example in the POD is incorrect.
Here's a correct example that generates a simple HTML image map (and nothing else):

#!/usr/bin/perl
use warnings;
use strict;
use Image::ObjectDetect;

# OpenCV's stock frontal-face classifier
my $cascade = '/usr/share/opencv/haarcascades/haarcascade_frontalface_alt2.xml';
my $file    = $ARGV[0];
my @faces   = Image::ObjectDetect::detect_objects($cascade, $file);

my $count = 1;
print "<map name='mymap'>\n";
for my $face (@faces) {
    print "  <area shape='rect' alt='map$count' href='https://www.google.com' coords='";
    print $face->{x}, ", ";
    print $face->{y}, ", ";
    print $face->{x} + $face->{width}, ", ";
    print $face->{y} + $face->{height}, "'/>\n";
    $count++;    # advance the label for the next area
}
print "</map>\n";
print "<img src='$file' usemap='#mymap'/>\n";
(to install Image::ObjectDetect, you first need OpenCV. On Debian, that's
apt-get install libcv-dev libcvaux-dev libhighgui-dev
cpan Image::ObjectDetect
and you're done.)
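Assuming the script is saved as, say, facemap.pl (an arbitrary name), running it against a photo would look something like:

perl facemap.pl photo.jpg > faces.html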
-- <Leftmost> jimregan, that's because deep inside you, you are evil. <Leftmost> Also not-so-deep inside you.
[ Thread continues here (4 messages/6.48kB) ]
Ben Okopnik [ben at linuxgazette.net]
From: John Hedges <john@drystone.co.uk>
To: TAG <tag@lists.linuxgazette.net>
Subject: RE: Testing new anti-spam system

Hi Ben
I've read with interest the thread on C/R antispam technique.
The gmail solution sounds promising except in one use case - where you open an account through a website. Unfortunately this often results in a confirmation email - sometimes even with a username and password - exactly the kind of information you would not want sniffed. One solution, as suggested by Martin Krafft (https://madduck.net/blog/2010.02.09:sign-me-up-to-social-networking/), is to use a unique receiver address, such as myname-googlesignup at mydomain.co.uk, for these signups - one that you can whitelist in anticipation of a response, with the advantage that you can blacklist the address in the future if it gets abused.
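[ For anyone wanting to implement this: if the domain's MTA is Postfix - an assumption, since John doesn't say what he runs - a single setting in main.cf enables such sub-addresses: ]

# deliver myname-googlesignup@mydomain.co.uk to the myname mailbox
recipient_delimiter = -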
Please share with the list if you see fit.
I've enjoyed LG for many years, thanks.
John Hedges
Ben Okopnik [ben at linuxgazette.net]
On Fri, Jun 04, 2010 at 12:22:51PM +0800, bayu ramadhan wrote:
> dear linuxgazette,
>
> i've read about migrating mail server to postfix/cyrus/openldap, , ,
>
> i've tried u'r tutorial, , ,
Who's "u'r"? I'm pretty sure it was Rene Pfeiffer who wrote that article, not "u'r".
HINT: please use standard English, and preferably standard punctuation as well. As you already know, your English isn't of the best; please don't make it any more difficult by adding more levels of obscurity to it.
> but i've some error when execute the perl script,,
>
> i've attach the screenshoot of the error and the script, , ,
Next time, please just copy and paste the error. It's plain text, so there's no need for screenshots. The error - which I've had to transcribe from your screenshot (an unnecessary waste of time) - was
Communications Error at ./cgate_migrate_ldap.pl line 175, <DATA> line 225.
Taking a look at line 175 of the script shows a pretty good possibility for where the problem might be:
173 --> # Bind to servers
174 --> $mesg = $ldap_source->bind($binddn_source, password => 'ldapbppt');
175 --> $mesg->code && die $mesg->error;
That is, you're trying to bind to a server using the same password that Rene had used in his article. Unless you're using that password, the connection is going to fail.
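In other words, the fix is simply to substitute your own credentials in that line - a sketch, with a placeholder value:

$mesg = $ldap_source->bind($binddn_source, password => 'your_actual_password');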
-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *
[ Thread continues here (2 messages/3.37kB) ]
Ben Okopnik [ben at okopnik.com]
Hi, Evaggelos -
On Sun, Jun 27, 2010 at 08:44:01PM +0300, Evaggelos Balaskas wrote:
> Hi,
>
> First of all let me introduce my self:
>
> My name is: Evaggelos Balaskas, or ebal for short, and i dont know perl!
Welcome! That's a good Alcoholics-Anonymous-style introduction.
> I wanted to add many subjects to my search/clear function,
> so i used a list.
Not a bad idea so far...
> while ( my $msg = $mb->next_message ) {
>     my $flag = 0;
>     my $s = $msg->header->{subject};
>
>     if ( !$s ) {
>         $s = "empty_subject";
>     }
If you're trying to set a default value for a variable, Perl has a nifty way to do that by using "reflexive assignment":
$s ||= "empty_subject";
This essentially says "if $s is true - i.e., contains anything - then leave it alone; otherwise, assign this value."
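One refinement worth knowing: '||=' also overwrites values that are defined but false, such as "0". On Perl 5.10 and later, the defined-or assignment replaces only undefined values:

$s //= "empty_subject";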
> foreach (@subjects) {
>     if ( $s =~ $_ ) {
Although that will work, it's not a good idea: the binding operator (=~) is intended to work with the match operator (/foo/ or m/foo/), and omitting the latter makes your code fragile (i.e., since the usage is non-standard, your code could fail with a later version of Perl.) Making the match operator explicit preserves the meaning; you could just write

if ( $s =~ /$_/ ) {

and be done with it.
However, there's a larger problem here: you don't need to build a state machine to determine whether your messages contain one of the above subjects - you can just use 'grep', or (my preferred method) a hash lookup. I also noticed that you don't use the default value in "$s" anywhere after defining it - and if you're just throwing it away, you don't need to define it in the first place.
# Using grep
my @subjects = ("aaaaaa", "bbbbbb", "ccccc", "ddddd");
while ( my $msg = $mb->next_message ) {
    # method calls don't interpolate into a regex, so grab the subject first
    my $subject = $msg->header->{subject};
    print "$msg\n" unless grep { $subject =~ /$_/ } @subjects;
}
Alternate method, faster lookup with less grinding:
# Using a hash lookup
my %subjects = map { $_ => 1 } ("aaaa", "bbbb", "cccc", "dddd");
while ( my $msg = $mb->next_message ) {
    print "$msg\n" unless defined $subjects{$msg->header->{subject}};
}
This latter version will also deal gracefully with undefined subject lines.
> PS: Plz forgive me for my bad english - not my natural language.
[smile] I'll be sure to "forgive" you - as soon as my Greek is as good as your English (which, incidentally, is excellent as far as I can see.) Again, welcome!
-- 
OKOPNIK CONSULTING
Custom Computing Solutions For Your Business
Expert-led Training | Dynamic, vital websites | Custom programming
443-250-7895    https://okopnik.com
Jimmy O'Regan [joregan at gmail.com]
A friend just sent me a Sierpiński triangle generator in 14 lines of C++ (https://codepad.org/tD7g8shT); my response is 11 lines of C:
#include <stdio.h>
int main()
{
    int a, b;
    for (a = 0; a < 64; ++a) {
        for (b = 0; b < 64; ++b)
            printf ("%c ", ((a + b) == (a ^ b)) ? '#' : ' ');
        printf("\n");
    }
    return 0;
}
...but I'm wondering if Ben has a Perl one-liner?
(BTW, CodePad is awesome!)
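One plausible candidate (not necessarily the answer Ben gives later in the thread), using the same a+b == a^b trick:

perl -e 'for $a (0..63) { print map { ($a + $_) == ($a ^ $_) ? "# " : "  " } 0..63; print "\n" }'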
-- <Leftmost> jimregan, that's because deep inside you, you are evil. <Leftmost> Also not-so-deep inside you.
[ Thread continues here (15 messages/15.96kB) ]
Deividson Okopnik [deivid.okop at gmail.com]
Hello TAG
I have an Ubuntu Linux machine here that's got a Samba shared folder, writable by anyone.

Is there any way I can find out from what IP a certain file came?

One of the windoze machines on my network is spreading a virus, and I can't figure out which machine it is. It creates an autorun.inf and a .exe in my Ubuntu shared folder, which is why I want to trace where they came from.
Thanks Deividson
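[ One approach, sketched here rather than drawn from the thread that follows: Samba's full_audit VFS module can log every file operation together with the client's IP address (the %I substitution). Roughly, in smb.conf, on the share in question (the share name below is a placeholder): ]

[shared]
    # log which user (%u), from which IP (%I), on which machine (%m)
    vfs objects = full_audit
    full_audit:prefix = %u|%I|%m
    full_audit:success = open write rename unlink
    full_audit:failure = none

[ The audit records go to syslog; the entry that creates autorun.inf will carry the offending machine's IP. ]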
[ Thread continues here (4 messages/5.60kB) ]
By Deividson Luiz Okopnik and Howard Dyckoff
Please submit your News Bytes items in plain text; other formats may be rejected without reading. [You have been warned!] A one- or two-paragraph summary plus a URL has a much higher chance of being published than an entire press release. Submit items to bytes@linuxgazette.net. Deividson can also be reached via twitter.
After years of litigation, all of SCO's claims have been denied, and the company will be going into bankruptcy hearings in July; the hearings had been delayed until the final judgment.

As noted on the Groklaw web site, Judge Ted Stewart ruled for Novell and against SCO. Furthermore, SCO's claims for breach of the implied covenant of fair dealing were also denied in the June judgment.
Here is the major part of Novell's statement from June 11:
"Yesterday, United States District Court Judge Ted Stewart issued a Final Judgment regarding the long standing dispute between SCO Group and Novell. As part of the decision, the Court reaffirmed the earlier jury verdict that Novell maintained ownership of important UNIX copyrights, which SCO had asserted to own in its attack on the Linux computer operating system. The Court also issued a lengthy Findings of Fact and Conclusions of Law wherein it determined that SCO was not entitled to an order requiring Novell to transfer the UNIX copyrights because "Novell had purposely retained those copyrights." In addition, the Court concluded that SCO was obligated to recognize Novell's waiver of SCO's claims against IBM and other companies, many of whom utilize Linux."
In an agreement that could provide advanced, affordable tablet computers to classrooms throughout the world, One Laptop per Child (OLPC), a global organization helping to provide every child in the world access to a modern education, and Marvell, a leader in integrated silicon solutions, will jointly develop a family of next-generation OLPC XO tablet computers based on the Marvell Moby reference design. This new partnership will provide designs and technologies to enable a range of new educational tablets, delivered by OLPC and other education industry leaders, aimed at schools in both the U.S. and developing markets. Marvell is also announcing today it has launched Mobylize, a campaign to improve technology adoption in America's classrooms.
The new family of XO tablets will incorporate elements and new capabilities based on feedback from the nearly 2 million children and families around the world who use the current XO laptop. The XO tablet will require approximately one watt of power to operate compared to about 5 watts necessary for the current XO laptop. The XO tablet will also feature a multi-lingual soft keyboard with touch feedback, and will also feature an application to directly access more than 2 million free books available across the Internet.
The first tablets in the line will be based closely on the Moby, not the XO-3 design, and will focus more on children in the developed world. These will be on display at CES 2011 in January, and available next year for under $100. The original XO-3 design is still planned for 2012 and will benefit from the experience of both the XO-1.75 and the Moby efforts.
"The Moby tablet platform - and our partnership with OLPC - represents our joint passion and commitment to give students the power to learn, create, connect and collaborate in entirely new ways," said Weili Dai, Marvell's Co-founder and Vice President and General Manager of the Consumer and Computing Business Unit. "Marvell's cutting edge technology - including live content, high quality video (1080p full-HD encode and decode), high performance 3D graphics, Flash 10 Internet and two-way teleconferencing - will fundamentally improve the way students learn by giving them more efficient, relevant - even fun tools to use. Education is the most pressing social and economic issue facing America. I believe the Marvell Moby tablet can ignite a life-long passion for learning in all students everywhere."
Powered by a high-performance, highly scalable, and low-power Marvell ARMADA 610 application processor, the Moby tablet features gigahertz processor speed, 1080p full-HD encode and decode, intelligent power management, power-efficient Marvell 11n Wi-Fi/Bluetooth/FM/GPS connectivity, high performance 3D graphics and support for multiple platforms including full Adobe Flash, Android, Windows Mobile and Ubuntu. The Moby platform also features a built-in camera for live video conferencing, multiple simultaneous viewing screens and Marvell's 11n Mobile Hotspot which allows Wi-Fi access that supports up to eight concurrent users connected to the Internet via a cellular broadband connection. The ultra low power mobile tablet has a very long battery life.
Moby is currently being piloted in at-risk schools in Washington, DC, and Marvell is investing in a Mobylize campaign to improve tech adoption within US classrooms. This should help OLPC in the developed world.
For more information, visit https://www.mobylize.org.
In June, One Laptop per Child (OLPC) was awarded a bid by Plan Ceibal to provide 90,000 updated XO laptops for high school students in Uruguay. This is the first time the OLPC XO laptops have been specifically designed for high school-aged students and represents a major expansion in its global learning program.
The XO high school laptop (XO-H) is built on the XO-1.5 platform and based on a VIA processor. It will provide 2X the speed of the XO 1.0, 4X the DRAM, and 4X the flash memory. The XO-H is designed with a larger keyboard that is better suited to the larger hands and fingers of older students. It will feature the learning-focused Sugar user interface on top of a dual-boot Linux operating system, with a GNOME Desktop Environment that offers office productivity tools.
The XO-H will be delivered with age-appropriate learning programs adapted to the scholastic needs of secondary schools. A new color variation for the laptop's case (light and dark blue) will be an option for the high school model. The XO, specially designed for rugged environments, is well-suited for remote classrooms and daily transportation between home and school. The XO uses three times less electricity than other laptops and is built as a sealed, dust-free system.
The government of Uruguay through Plan Ceibal has completely saturated primary schools with 380,000 XOs and will now begin to expand the highly successful One Laptop per Child learning program to its high schools. There are 230,000 high school students in Uruguay.
"Until now, the 1.5 million students worldwide using XO laptops had no comparable computer to 'grow up' to," said OLPC Association CEO Rodrigo Arboleda. "The XO high school edition laptop demonstrates how the XO and its software can easily adapt to the needs of its users."
One Laptop per Child (OLPC at https://www.laptop.org) is a non-profit organization created by Nicholas Negroponte and others from the MIT Media Lab to design, manufacture and distribute laptop computers that are inexpensive enough to provide every child in the world access to knowledge and modern forms of education.
Swype is offering a narrowcast beta of its updated Android keyboard application. The application interprets word choices in real time from a dictionary database as a user slides a finger between letters in a continuous motion.
The Swype software is very tightly written with a total memory footprint of under 1 MB.
The patented technology enables users to input words faster and more easily than with other data input methods - over 50 words per minute. The application is designed to work across a variety of devices such as phones, tablets, game consoles, kiosks, televisions, virtual screens and more.
A key advantage to Swype is that there is no need to be very accurate, enabling very rapid text entry. Swype has been designed to run in real time on relatively low-powered portable devices.
The beta software has built-in dictionaries for English, Spanish, and Italian. The beta software supports phones with HVGA (480h x 320w), WVGA (800h x 480w) and WVGA854(854h x 480w) screen sizes (screen size is detected by the software).
Here are some details on the beta:
* It will be open for a limited time;
* Initially, only English, Spanish, and Italian - more languages to
come;
* Some key features of Swype require OEM integration;
* Limited End User Support - mostly forum.
If your phone came pre-installed with Swype, DO NOT download this beta (it won't work).
The beta is open to all phones using Android which do not already have Swype pre-installed. For more information on Swype and the current beta test, go to: https://beta.swype.com/.
The OpenOffice.org Community recently released OpenOffice.org 3.2.1, the newest version of the world's free and open-source office productivity suite.
OpenOffice.org 3.2.1 is a so-called micro release that comes with bugfixes and improvements, with no new features being introduced. This release also fixes major security issues, so users are encouraged to upgrade to the new version as soon as possible.
This version is the first to be released with the project's new main sponsor, Oracle, and comes with a refreshed logo and splash screen. Following OpenOffice.org's usual release cycle, the next feature release of OpenOffice.org is version 3.3, expected in Autumn 2010.
The OOo Community celebrates its 10th anniversary this year at the annual OOoCon in Budapest, Hungary, from August 31 to September 3.
OpenOffice.org 3.2.1 is available in many languages for all major platforms at: https://download.openoffice.org.
Linus Torvalds will speak at the first-ever LinuxCon Brazil, where the country's developer, IT operations, and business communities will come together. LinuxCon Brazil will take place August 31 - September 1, 2010 in Sao Paulo, Brazil.
Brazil has long been recognized as one of the fastest growing countries for Linux adoption. The Brazilian government was one of the first to subsidize Linux-based PCs for its citizens with PC Conectado, a tax-free computer initiative launched in 2003. Brazil's active and knowledgeable community of Linux users, developers and enterprise executives bring an important perspective to the development process and to the future of Linux.
"Brazil leads many other countries in its adoption of Linux and is a growing base of development. The time is right to take the industry's premier Linux conference to Brazil," said Jim Zemlin, executive director at The Linux Foundation.
Confirmed speakers for LinuxCon Brazil include Linux creator Linus Torvalds and lead Linux maintainer Andrew Morton, who will together deliver a keynote about the future of Linux.
Novell released SUSE Linux Enterprise 11 Service Pack 1 (SP1) in May, which delivers broad virtualization capabilities, high availability clustering, and more flexible maintenance and support options.
SUSE Linux Enterprise 11 SP1 is optimized for physical, virtual, and cloud infrastructures and offers numerous advancements, including:

* Broad virtualization support, including the latest Xen 4.0 hypervisor with significantly improved virtual input/output performance, support for KVM, an emerging open source virtualization hypervisor, and Linux integration components in Hyper-V - an industry first.

* The best open source high-availability solution, with clustering advances such as support for metro area clusters, simple node recovery with ReaR, an open source disaster recovery framework, and new administrative tools including a cluster simulator and a web-based GUI.

* The first enterprise Linux distribution with an updated 2.6.32 kernel, which leverages the RAS features in the Intel Xeon processor 7500 and 5600 series.

* New technology on the desktop, including improved audio and Bluetooth support, as well as the latest versions of Firefox, OpenOffice.org, and Evolution, which includes MAPI enhancements for improved interoperability with Microsoft Exchange.
With SP1, Novell is implementing more flexible support options that will allow customers to remain on older package releases during the product's life cycle, and will significantly lower the hurdle to deploy upcoming service packs. While current releases deliver the most value with proactive maintenance updates and patches, for customers that place a premium on stability, this program delivers superior control and flexibility by allowing them to decide when and how to upgrade.
Long Term Service Pack Support will continue to be available for customers that want full technical support and fixes backported to earlier releases.
IBM and other component, system, and software vendors like AMD, Broadcom, Brocade, Dell, Emulex, Fujitsu, HP, Intel, LSI, Microsoft, nVidia, QLogic and SGI leverage Partner Linux Driver Program tools to support their third-party technology with SUSE Linux Enterprise. Novell's partners provide more than 5,000 certified applications today for SUSE Linux Enterprise, twice that of the next closest enterprise Linux distribution provider.
A new beta version of the popular Opera web browser is available for testing, with added support for new HTML 5 features. Opera 10.6 will include support for HTML 5 video and the WebM codec that Google released as open source last month. It also includes support for the Appcache feature of HTML 5, which allows web-based applications to run even when a computer is no longer attached to the Internet.
Opera 10.6 is also faster than Opera 10.5 on standard web tests and is informally reported by testers to edge out the current versions of Firefox and Chrome. Additionally, there are enhancements to the menus and tab previews.
There is also support for Geolocation applications. Opera allows you to share your location with apps like Google Maps and can find your location on mobile devices with a GPS.
Download the 10.6 beta for Linux and FreeBSD here: https://www.opera.com/browser/next/.

The Apache Software Foundation (ASF) - developers, stewards, and incubators of 138 Open Source projects - has announced Apache Cassandra version 0.6, the Project's latest release since its graduation from the ASF Incubator in February 2010.
Apache Cassandra is an advanced, second-generation "NoSQL" distributed data store that has a shared-nothing architecture. The Cassandra decentralized model provides massive scalability, and is highly available with no single point of failure even under the worst scenarios.
Originally developed at Facebook and submitted to the ASF Incubator in 2009, the Project has added more than a half-dozen new committers, and is deployed by dozens of high-profile users such as Cisco WebEx, Cloudkick, Digg, Facebook, Rackspace, Reddit, and Twitter, among others.
"The services we provide to customers are only as good the systems they are built on," said Eric Evans, Apache Cassandra committer and Systems Architect at The Rackspace Cloud. "With Cassandra, we get the fault-tolerance and availability our customers demand, and the scalability we need to make things work."
Cassandra 0.6 features include:
* Support for Apache Hadoop: this allows running analytic queries with
the leading map/reduce framework against data in Cassandra;
* Integrated row cache: this eliminates the need for a separate
caching layer, thereby simplifying architectures;
* Increased speed: this builds on Cassandra's ability to process
thousands of writes per second, allowing applications to cope with
increasing write loads.
"Apache Cassandra 0.6 is 30% faster across the board, building on our already-impressive speed," said Jonathan Ellis, Apache Cassandra Project Management Committee Chair. "It achieves scale-out without making the kind of design compromises that result in operations teams getting paged at 2 AM."
Twitter switched to Apache Cassandra because it can run on large server clusters and is capable of taking in very large amounts of data at a time. Storage Team Technical Lead Ryan King explained, "At Twitter, we're deploying Cassandra to tackle scalability, flexibility and operability issues in a way that's more highly available and cost effective than our current systems."
Released under the Apache Software License v2.0, Apache Cassandra 0.6 can be downloaded at https://cassandra.apache.org/.
Paving the way to a Web 3.0 Internet, Primal announced its Primal Thought Networking platform and beta version of its newest service, Pages, from the 2010 Semantic Technology Conference in June.
Delivered on the Software as a Service (SaaS) model, the Primal Thought Networking platform, using semantic synthesis, helps content producers to expand their digital footprint and improve engagement with their consumers, at lower cost. For consumers, the platform delivers a more personalized experience that is based on content relevant to their thoughts and ideas. Rather than sifting through page after page of content, the Thought Networking platform's software assistants automate the filtering processes, delivering more relevant content.
In addition to the launch of the Thought Networking platform, Primal also introduced the beta version of its newest offering, Pages. With Pages, users can now quickly generate an entire Web presence based on a topic relevant to their interests, which then may be explored, customized and shared with friends or the general public. The Pages beta website generator utilizes Semantic Web technology that goes beyond the simple facilitation of content discovery, pulling information from sites like Wikipedia, Yahoo!, and Flickr, delivering relevant and useful content in a centralized view in real-time.
"Semantic synthesis is the ability to create in real time a semantic graph of what I need. With our growing portfolio of IP and patents, we have spent the past several years developing an intuitive technology that annotates a person's thoughts and ideas," said Primal's Founder and Co-President Peter Sweeney, speaking live from the Semantic Technology Conference. "On the content manufacturing plane, we provide content developers with a completely new market of content but will also enable an entirely untapped group of people who would not otherwise have the technical aptitude, but have big ideas to share, to quickly and easily become content developers."
Using Primal's semantic synthesis technology, teachers will be able to seamlessly build a website of course materials for their students; hobbyists can create a one-stop, definitive source of information for their pastime; politicians can build hyper-local, targeted sites featuring local activities and news in support of their constituents.
Join the Primal community at https://www.primal.com or on Facebook at https://www.facebook.com/primalfbook.
TRENDnet announced in June a first-to-market 450Mbps Wireless N gigabit router, its model TEW-691GR. Designed for high performance and quality of service, this router uses three external antennas to broadcast on the 2.4GHz spectrum.
Three spatial streams (one per antenna) produce a record 450Mbps theoretical wireless throughput. Multiple Input Multiple Output (MIMO) antenna technology boosts wireless coverage, signal strength, and throughput speed. One gigabit Wide Area Network port and four gigabit Local Area Network ports also offer very fast wired throughput performance.
Wi-Fi Protected Setup (WPS) integrates other WPS supported wireless adapters at the touch of a button. Instead of entering complicated encryption codes, simply press the WPS button on the TEW-691GR, then press the WPS button on a compatible wireless adapter, confirm you would like to connect, and the devices automatically exchange information and connect. The router is compatible with IEEE 802.11n and backward compatible with IEEE 802.11g/b/a devices.
Advanced wireless encryption and a secure firewall offer protection for digital networks. Router setup is fast and intuitive. WMM Quality of Service (QoS) technology prioritizes gaming, Internet calls, and video streams.
"TRENDnet's ability to launch this ground breaking 450Mbps product ahead of other brands says a lot about our recent growth," stated Pei Huang, President and CEO of TRENDnet. "We are ecstatic to set a new performance threshold in the consumer wireless revolution."
The MSRP for the TEW-691GR is U.S. $199.00.
TRENDnet in June announced the availability of PC cards that add USB 3.0 ports to both desktops and laptops. The adapters add two super-speed 5Gbps USB 3.0 ports that can transfer content from USB 3.0 storage devices, flash drives, and camcorders at 5Gbps - roughly 10 times faster than USB 2.0.
TRENDnet's 2-Port USB 3.0 ExpressCard Adapter, model TU3-H2EC, plugs into any available ExpressCard port on a laptop computer to provide two USB 3.0 ports. A power adapter provides full power to all connected USB 3.0 devices. Backward compatibility allows users to connect USB 2.0 devices to the USB 3.0 ports.
The 2-Port USB 3.0 PCI Express Adapter, model TU3-H2PIE, connects to an available PCI Express port on a tower computer. A 4-pin power port connects to an internal power supply in order to provide full power to all connected USB 3.0 devices. Backward compatibility allows users to connect USB 2.0 devices to the USB 3.0 ports.
"USB 3.0 is like HD television - once you experience it, you won't go back," stated Zak Wood, Director of Global Marketing for TRENDnet. "The tremendous speed advantage of USB 3.0 (5Gbps) versus USB 2.0 (480Mbps) is driving strong consumer adoption. TRENDnet's user friendly adapters upgrade existing computers to high speed 5Gbps USB 3.0."
TRENDnet's 2-Port USB 3.0 ExpressCard Adapter, model TU3-H2EC, will be available from online and retail partners shortly. The MSRP for the TU3-H2EC is U.S. $59.
TRENDnet's 2-Port USB 3.0 PCI Express Adapter, model TU3-H2PIE, will be available from online and retail partners shortly. The MSRP for the TU3-H2PIE is U.S. $49.
Talkback: Discuss this article with The Answer Gang
Deividson was born in União da Vitória, PR, Brazil, on 14/04/1984. He became interested in computing when he was still a kid, and started to code when he was 12 years old. He is a graduate in Information Systems and is finishing his specialization in Networks and Web Development. He codes in several languages, including C/C++/C#, PHP, Visual Basic, Object Pascal and others.
Deividson works in Porto União's Town Hall as a Computer Technician, and specializes in Web and Desktop system development, and Database/Network Maintenance.
Howard Dyckoff is a long term IT professional with primary experience at
Fortune 100 and 200 firms. Before his IT career, he worked for Aviation
Week and Space Technology magazine and before that used to edit SkyCom, a
newsletter for astronomers and rocketeers. He hails from the Republic of
Brooklyn [and Polytechnic Institute] and now, after several trips to
Himalayan mountain tops, resides in the SF Bay Area with a large book
collection and several pet rocks.
Howard maintains the Technology-Events blog at
blogspot.com from which he contributes the Events listing for Linux
Gazette. Visit the blog to preview some of the next month's NewsBytes
Events.
Keywords: NFS
In Part 1 I outlined my plans: to build a server using network install. However, I got sidetracked by problems. In Part 2 I made some progress and dealt with one of the problems. In Part 3 I detailed the first part of the network install -- from start to PXE boot. This part details the rest of the procedure.
If you've been following me so far, you have a target machine which uses the PXE process to boot into the PXE boot Linux kernel. Importantly, you've tried it out and confirmed that this much of the exercise behaves as desired.
The PXE boot process comes to the point where you see the prompt:
boot:
We have not yet set up enough to proceed to the next stage, for which I used NFS.
First decide on an install method. My exercise was with Fedora 10. The possible install media are:
CD or DVD drive
Hard Drive
Other Device (presumably USB)
HTTP Server
FTP Server
NFS Server
Only the last 3 are network installs, which is all we are interested in here. I chose NFS Server.
Further, I happened to have a machine which was PXE-capable. So my plan was to turn on the machine and (as near as possible) have it install all the software without my intervention.
If your machine is not PXE-capable in hardware, it may still be possible to perform an unassisted install - but you will need to create a CD or DVD to achieve the PXE part. It might be possible to create a PXE floppy.
You will of course need a server to serve PXE data and other info over the network. I chose my desktop as the PXE server.
To recap, the ingredients:

PXE client (see previous article)
PXE server (see previous article)
tftp
dhcp
pxelinux
kickstart
NFS (see below)
Since the network method I chose was NFS, I had to set up NFS. I judged that it would be easier to set up NFS than to set up HTTP; I was probably a wee bit foolish not going for FTP -- at the time I had security reservations, but I think for an internal private network like mine, there is no difference in security between FTP and NFS.
Arguably, TFTP would have been even better, because it was already in place, having been needed at an earlier step. But it was not available as an option.
# create the directory that will hold the install tree, and give it a short alias
mkdir -p /Big/FedoraCore10/NFSroot
ln -s /Big/FedoraCore10/NFSroot /NFS
Create /etc/exports:
# exports
/NFS            192.168.0.0/24(ro,no_subtree_check,root_squash)
/NFS/CD_images  192.168.0.0/24(ro,no_subtree_check,root_squash)
All I've done here is allow any machine on my local subnet to access NFS on the server (my desktop -- where I'm typing this).
NB exports do not "inherit". If you export "/NFS" that won't allow clients to mount "/NFS/CD_images".
mkdir /var/lib/nfs/v4recovery
/etc/rc.d/init.d/nfs start
exportfs -a    # export everything listed in /etc/exports
exportfs -v    # verify what is actually being exported
I had all sorts of problems with NFS. I found it useful to test locally using:
mkdir /mnt/nfs
mount 192.168.0.3:/NFS /mnt/nfs
And if you want to snoop network traffic, you'll need:
tshark -w /tmp/nfs.tshark -i lo
This local mount should behave in a way akin to what the client PC will see when it tries to NFS-mount directories from the server.
The layout of the NFS directory:
ls -lA /NFS
lrwxrwxrwx 1 root staff 25 Nov 26 22:12 /NFS -> /Big/FedoraCore10/NFSroot

ls -lA /NFS/.
total 44
drwxrwxr-x 3 root   staff  4096 Nov 28 17:59 CD_images
lrwxrwxrwx 1 root   staff    13 Nov 27 10:31 b2 -> ks.b2.cfg.sck
-rw-rw-r-- 1 henryg henryg 3407 Dec  7 18:28 ks.b2.cfg.sck
The NFS directory also contains many other files left over from numerous false starts. (Hey, I'm human.)
ls -lA /NFS/CD_images
total 3713468
-rw-rw-r-- 2 henryg henryg 720508928 Nov 26 07:43 Fedora-10-i386-disc1.iso
-rw-rw-r-- 2 henryg henryg 706545664 Nov 26 08:03 Fedora-10-i386-disc2.iso
-rw-rw-r-- 2 henryg henryg 708554752 Nov 26 08:38 Fedora-10-i386-disc3.iso
-rw-rw-r-- 2 henryg henryg 724043776 Nov 26 09:17 Fedora-10-i386-disc4.iso
-rw-rw-r-- 2 henryg henryg 720308224 Nov 26 09:18 Fedora-10-i386-disc5.iso
-rw-rw-r-- 2 henryg henryg  83990528 Nov 26 10:15 Fedora-10-i386-disc6.iso
-rw-rw-r-- 2 henryg henryg 134868992 Nov 26 10:17 Fedora-10-i386-netinst.iso
It is not necessary to mount the CD images; it seems that anaconda (the program which performs the actual install) knows how to do that. Sharp-eyed readers will have spotted that the iso images are hard-linked in the /NFS/CD_images directory. That's because I first downloaded the images to /Big/downloads thinking I would need to mount them; later when I was setting up the NFS directory I finally discovered that I needed the images as not-mounted files. Rather than copy the files and waste a heap of disk space, or move the files and risk making something else fail, I chose to hard-link them and get "two for the price of one".
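The hard links themselves are one command per image, along these lines (using the paths described above):

ln /Big/downloads/Fedora-10-i386-disc1.iso /NFS/CD_images/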
I used system-config-kickstart to generate a first kickstart file. I edited it to get more of the things I wanted and then decided to try it out. After several go-arounds I got to the point which got me into trouble (as described in Part 1).
Here is the final kickstart file:
# ks.b2.cfg.sck - created by HMG from system-config-kickstart for b2
## - - debugging - - - - - - - - - -
## :: uncomment the following to debug a Kickstart config file
## interactive
#platform=x86, AMD64, or Intel EM64T
# System authorization information
auth --useshadow --enablemd5
# System bootloader configuration
bootloader --location=mbr
# I guess the "sda" will prevent trashing my USB stick
clearpart --all --initlabel --drives=sda
part /boot --asprimary --ondisk=sda --fstype ext3 --size=200
part swap --asprimary --ondisk=sda --fstype swap --size=512
part / --asprimary --ondisk=sda --fstype ext3 --size=1 --bytes-per-inode=4096 --grow
# Use graphical install
graphical
# Firewall configuration
firewall --enabled --http --ssh --smtp
# Run the Setup Agent on first boot
firstboot --disable
# System keyboard
keyboard us
# System language
lang en_AU
# Use NFS installation media
nfs --server=192.168.0.3 --dir=/NFS/CD_images
# Network information
network --bootproto=static --device=eth0 --gateway=192.168.99.1 --ip=192.168.99.25 --nameserver=127.0.0.1 --netmask=255.255.255.0 --onboot=on
network --bootproto=static --device=eth1 --gateway=192.168.25.1 --ip=192.168.25.25 --nameserver=198.142.0.51,203.2.75.132 --netmask=255.255.255.0 --onboot=on --hostname b2
# Reboot after installation
reboot
#Root password
rootpw --iscrypted $1$D.xoGzjz$kMojNQR7KFddumcLlQPEs0
# SELinux configuration
selinux --enforcing
# System timezone
timezone --isUtc Australia/Melbourne
# Install OS instead of upgrade
install
# X Window System configuration information
xconfig --defaultdesktop=GNOME --depth=8 --resolution=640x480
# Clear the Master Boot Record
#zerombr
%packages
@development-tools
@development-libs
@base
@base-x
@gnome-desktop
@web-server
@dns-server
@text-internet
@mail-server
@network-server
@server-cfg
@editors
emacs
gdm
lynx
-mutt
-slrn
%end
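A worthwhile sanity check, assuming the pykickstart package is available on the server (a suggestion of the editor's, not a step in the procedure above): its ksvalidator tool catches kickstart syntax errors before you boot the target.

ksvalidator ks.b2.cfg.sck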
That's it. Here's a walk-through of what happens for an install.
1. User connects the network cable to the target machine and powers up.
2. PXE gains control and asks the network for an IP address and other information the server has for this machine (at this stage identified by MAC address).
3. The server sends the requested info.
4. PXE configures the NIC with the received IP address.
5. PXE uses tftp to download a Linux kernel.
6. The Linux kernel announces itself with the prompt "boot: ".
7. User enters b2. The user is no longer needed.
8. Still using tftp, the target machine downloads another Linux kernel. In accordance with the label b2, it then uses NFS to download the kickstart file.
9. The kickstart file specifies that the install should also use NFS. The installer uses the parameters of the kickstart file to govern the installation.
10. When the install is complete, the target machine (now a shiny new server) reboots.
11. PXE gains control as before and the steps above are followed. However, at the "boot: " prompt, either the user simply presses Enter, or, more likely, because the user is not there, the boot loader times out. Either way, the default label lhd is taken: the target machine boots off the recently installed hard drive.
12. Some time later, the user reboots and disables the PXE boot, which is no longer needed.
https://docs.fedoraproject.org/install-guide/f10/en_US/sn-automating-installation.html
https://fedoranews.org/dowen/nfsinstall/
https://www.instalinux.com/howto.php
https://ostoolbox.blogspot.com/2006/01/review-automated-network-install-of.html (open source toolbox: "Review: automated network install of suse, debian and fedora with LinuxCOE")
https://nfs.sourceforge.net/ (Linux NFS Overview, FAQ and HOWTO Documents)
https://docs.fedoraproject.org/mirror/en/sn-server-config.html
https://docs.fedoraproject.org/mirror/en/sn-planning-and-setup.html
Talkback: Discuss this article with The Answer Gang
Henry has spent his days working with computers, mostly for computer manufacturers or software developers. His early computer experience includes relics such as punch cards, paper tape and mag tape. It is his darkest secret that he has been paid to do the sorts of things he would have paid money to be allowed to do. Just don't tell any of his employers.
He has used Linux as his personal home desktop since the family got its first PC in 1996. Back then, when the family shared the one PC, it was a dual-boot Windows/Slackware setup. Now that each member has his/her own computer, Henry somehow survives in a purely Linux world.
He lives in a suburb of Melbourne, Australia.
Henry and I were having a conversation via e-mail, and happened upon the subject of difficult clients. As in many things, he and I see eye to eye on this issue - but he managed to do me one better: while I was still fuming just a bit, and thinking about how to make things go better the next time, Henry sent me this story, which captured my (and clearly his) experience perfectly and ticked off all the checkboxes I was trying to fill. In fact, I was bemused and stunned by how well it did so... but that is traditionally the case with parables. They're timeless, and their lessons endure.
This tale may not have much to do with Linux, or Open Source in general;
however, part of LG's function, as I see it, is to educate and entertain -
or better yet, to educate while entertaining. For those of you who, like
myself, work as consultants, or plan to go on your own and become
consultants, I recommend this story heartily and without reservation: read
it, understand it, follow it. Someday, it may save your sanity; for the
moment, I hope that it at least tickles your funny bone.
-- Ben Okopnik, Editor-in-Chief
"Here is our cottage," said Tacco. "We'd like it painted nicely."
"Of course," said Ozzy. "We always try to do a good job. Our workers are very diligent. I'm sure you will be satisfied."
Tacco showed Ozzy all around the cottage. Ozzy asked the occasional question and made some notes.
"We usually provide an estimate of how long the job will take and charge time and materials," said Ozzy.
Tacco looked horrified. "No! We cannot agree to that!" he exclaimed. "We will only proceed with you if you give us a fixed-price quote."
Ozzy looked uncomfortable. "We might be able to provide a fixed-price quote. But we would need to have everything specified fairly precisely. We wouldn't be able to permit ad hoc renovations once we start the work."
"That's all right," said Tacco.
"And I'll bring in a couple of my estimators so that you and they can agree exactly what the job is," continued Ozzy.
"When can you start? We want this job finished very soon because people will be wanting to live in this cottage in the very near future. We expect to have finished refurbishing in a couple of weeks."
They discussed dates for a while and finally agreed that the painting would be finished in three months, at the end of May. They made an appointment for Ozzy's estimators to go over the cottage with Tacco.
A few days later, Ozzy asked Tacco to go over the cottage in some detail and tell the estimators and Ozzy the details of the painting job: number of coats of paint, colour scheme and so on. The estimators took measurements of the room sizes and layout.
Back in the office, Ozzy pored over the figures to come up with a quote for Tacco. He had his secretary type it up and then he checked all the details. He even had his estimators check his calculations. At last, he mailed off the quote to Tacco.
A few days later, Tacco rang Ozzy to tell him the good news. Ozzy had won the business. Tacco was eager to get the work under way. Ozzy's team of painters would start the following week.
When the painters arrived, Tacco was there to meet them. He led them to a building.
"This isn't the cottage that we looked at last time," said David, one of the painters (who was also one of the estimators).
"No," agreed Tacco. "But its very similar to the one you saw. I'm sure the differences are not important."
David was a bit doubtful, but he was reluctant to appear uncooperative. So he bit his tongue and led his work-mate, Charlie, into the cottage.
"It doesn't look like the renovators have finished this cottage," said Charlie.
"No, they haven't," agreed Tacco. "But they'll be finished by the time you are ready to start painting."
"But I'm ready to start right now," wailed Charlie. "David and I were going to start preparing the walls today."
"Well, can't you prepare the walls in those 2 rooms over there and come back and prepare the walls in here another day?" asked Tacco.
"I can do that," agreed Charlie. "But I was counting on doing all the preparation in one day."
Tacco gave him a look that suggested that Charlie was being most unreasonable.
Charlie and David prepared the walls in the two rooms specified and returned to the office. It was too late to start another job, so they sat around playing darts.
A few days later, Charlie received a phone call from Tacco saying that the renovators had finished all their modifications and so the rest of the cottage was ready.
"When will you be here?" Tacco demanded.
Charlie was in the middle of some work for another customer. But Tacco was so insistent and Charlie tried so hard to please his customers that he agreed to go out that same afternoon.
When Charlie got to the cottage, Tacco's twin-brother Racco was there obviously very busy performing renovations on the cottage.
"How come you're here?" asked Racco abruptly. "I said I'd ring you when the renovations were finished."
"B-but Tacco rang me this morning," pleaded Charlie.
"Anyway, it shouldn't bother you if the renovations aren't entirely finished. There must be other things you can go on with because we finished 4 other rooms yesterday. When you're ready to start work on the new rooms, we will have finished them."
"But the cottage we first saw had only 4 rooms altogether. It sounds like there are now 6 or 7 rooms. This is no longer a cottage, it's a house."
"Stop complaining," demanded Racco. "The rooms we added on aren't very big and they are basically exactly the same as the ones we showed you."
"I'm not supposed to do any more work until all the renovations are finished," said Charlie.
"Oh well. Since you're here I'll get Jocco to finish these renovations. It'll only take about 5 minutes. Can you wait?"
Charlie felt trapped. 5 minutes didn't seem like such a long time. It would be churlish of him to walk out now.
"All right. I'll wait," he replied.
Charlie tried to go on with some other work that needed to be done. He noticed that some of the walls that he had prepared last time had had some further renovations carried out on them. I guess I had better redo the preparations on these walls, he thought.
Just then Racco came back. "Look, is it all right if I just take some measurements in here?" he asked, and without waiting for a reply began to measure the walls.
Charlie was non-plussed. Now there was nothing he could go on with. I guess I'll get myself a coffee, he decided.
As he was coming back with the coffee, he overheard one of Racco's workers saying, "Those workers of Ozzy's are bludgers. All they ever do is sit around and drink coffee."
She betrayed no embarrassment at being overheard. Charlie was almost in tears and turned away so as to hide his distress. He made a production of drinking his coffee to calm himself.
An hour later, Racco came back and informed Charlie that the renovations were now complete. He gave Charlie a few scribbled pages.
"These are the room numbers and the labels on the corresponding keys," he explained to Charlie. "All the rooms are, of course, locked. Check that each key works in the corresponding lock. Let me know if there are any problems and I will have our Service Department provide a new key."
Charlie looked at the list in dismay. There were 136 keys!
He put the list aside and decided he would deal with that another day. For now, he would try to finish the preparation on 2 of the original rooms.
After half an hour, he had nearly finished the first room. Charlie found it most satisfactory to divide a room into sections. Each section consisted of an area of wall 2 metres wide, from floor to ceiling. He was just starting the last section when he heard a crashing noise and found that the plaster sheet had fallen to the ground and smashed to pieces.
He went to see Racco. Racco was very annoyed. "You must have hit it with a sledge hammer - our plasterers guarantee that our walls can withstand normal wear and tear."
They went to inspect the damage; all the while Racco muttered about incompetent fools. "I'm going to have to get someone to rebuild this wall," he complained.
"Look how strong these walls are," he announced, banging on another section with his fist. "They're all built to the same standard," he continued, banging on another wall. This time he banged a little harder. "We've been guaranteed that they can withstand even the heaviest pounding by the strongest fist. Of course, they're not intended to withstand a sledge hammer."
Proudly, he banged on each panel in turn. Suddenly, there was a loud crack, and the panel disintegrated. Racco swore loudly.
"Those bloody cretins in Building Maintenance," he yelled, inspecting the underlying frame. "I told them to check the frames and replace any rotten ones. It's not our fault. We only attach the plaster. If the frames are rotten, the nails don't hold and the plaster falls off. It happens all the time.
"Shouldn't bother you though. You can continue working in one of the other rooms. They're all the same so it shouldn't matter which room you work on next."
Charlie tried hard to regroup. Things seemed to have gotten completely out of hand. When had this exercise first left the rails, he wondered. Which was the exact moment when things had started to go wrong? It was an impossible question.
He left the building, thoroughly dejected.
The next day, Charlie and David went back to the house. "I'll start on some of the new rooms," said David. "You see if you can't finish the 2 original rooms."
Charlie started mixing paints. This is better, he thought. He wasn't too keen on preparation, although he understood the need for it. He felt there was little to show for an awful lot of hard work. But painting was dead easy, and the customers were always cheered when they saw how nice a room looked once the painting had been finished. They never seemed to understand the connection between fastidious preparation and a stunning final result. But Charlie understood.
As he worked, he started to hum and his spirits soared. Then David came into the room where Charlie was painting. He looked pale.
"What's up?" asked Charlie.
"I've just tried the doors on half-a-dozen rooms," replied David. "None of the keys fit!"
"What do you mean?" asked Charlie. "You must have to jiggle them a bit. Keys get stuck sometimes."
"No," cried David. "The keys DON'T FIT - I can't even get them in the keyhole! They're the wrong sort of key for the lock!"
Charlie went back with David to check. Sadly David was right. They went to see Racco.
"I'm not responsible for keys - that's Security Services. We have no control over keys and locks." Racco paused for a moment. "Look, the renovations are finished. We'll get Security Services to cut new keys. They shouldn't take long. You'll have the new keys and you can try them out," he concluded with some satisfaction.
"Gimme a break," cried Charlie. "Are you expecting me to try 136 keys?"
"Well, I'm going to arrange to get them cut!" shouted Racco. "You don't expect me to test them as well, do you? Look, by the time you need to start work, all the keys will be done. Just tell me which room you want to start work on and I'll get that key immediately. I can't be fairer than that, can I?"
On their return to the office, Charlie complained to Ozzy, who rang Racco. The response was staggeringly quick: there would be a Meeting. Not just a meeting or a Meeting, but in fact A MEETING.
There was movement at the station, for the word had passed around...
...So all the cracks had gathered to the fray.
All the tried and noted riders from the stations near and far
Had mustered at the homestead overnight,
For the bushmen love hard riding where the wild bush horses are,
And the stock-horse snuffs the battle with delight.
    - 'The Man from Snowy River', A. B. Paterson
Everyone from Tacco's organisation was at THE MEETING. David and Juliette were the only two representatives from Ozzy's office.
David tried to explain the situation.
Racco jumped to his feet. "That's bullshit!" he shouted. "It is not our fault. Those guys from Ozzy's office don't know their job. They do everything wrong. They couldn't even cut the keys for the locks. We had to do that!"
Flacco was Tacco's boss. When she heard this outburst from Racco, her eyebrows shot up. She seemed to be in a conciliatory mood.
"I suggest that Ozzy's people give us an Impact Statement," she proposed. Turning to Juliette, she continued, "Produce a list detailing things like the time the plaster fell off the walls, and how much time was lost. We'll look at it."
Back at Ozzy's office, the staff went into a huddle.
Finally, Borodin spoke.
"We are not going to produce a detailed Impact Statement. Here is the impact. We were asked to start this job in February. It was understood that for us to do our painting, the renovations had to be finished. It is now June. The renovations are still not complete. The impact is the total time from February to June.
"Further, the impact continues until the renovations are complete. That means 'completely complete'. Not complete in these 5 rooms and 80% in the remaining 10. We want to go in on one day, start at one end and finish the preparation all the way through to the other. Without bits of plaster hitting the floor. Without discovering that the keys don't work on Friday. Without discovering that the 7th of July is a Buddhist holiday.
"We reject entirely the question, 'But isn't there something you can go on with?' The issue is not whether some part of the building can be accessed. The only issue is whether the building is completely accessible. If it isn't, if renovations are still being performed, people cannot start occupying the building, let alone us painting it."
He folded down a second finger.
"We were asked to give a fixed-price quote on painting a four-room cottage. We were shown one cottage; we are being asked to paint an entirely different cottage. And it isn't a cottage any more. It's been renovated into a 16-room bloody mansion! And the renovations have still not been completed.
Borodin brought down a third finger.
"Further, there does not appear to have been any design involved in producing the mansion. The reality is, that we have been the guinea pigs: when we have pointed out a problem, like the fact that there were 4 bathrooms upstairs but no toilets, they have gone away, scratched their heads and come up with a "solution". Which is why there are so many extra rooms now. The building has grown, but it was never planned.
"So we have no guarantee that it will ever be finished. Because I don't think anyone knows what the building is meant to do. Since there appears to be no definition of "finished", how will anyone recognise the endpoint if it should ever miraculously appear?"
Borodin reached for his fourth finger.
"There is a serious problem to do with size. Tacco's crew are hoping to accomodate 120 people in that mansion. But we have checked with council. They will not permit more than 46 people on that block. We have pointed that out to Macco. He says he knew there was a limit. In fact, his understanding was that there was a by-law limiting occupancy to 32 people.
"What is Tacco's response? Is there any point in us painting enough rooms for 120 people when only 46 can be accommodated? We could argue that that is Tacco's problem. But, ..." Borodin left the thought hanging.
He opened his hand and took hold of his thumb.
"We have told Tacco that we expect to be able to finish the painting in eight days. Originally, we quoted 39 days. Tacco might say to us, 'How can you claim an impact of the entire time when you originally quoted 39 days and now you claim you are only 8 days from finishing?'
"The answer is simple. We are claiming an impact of the entire time because we cannot claim an impact of more than the entire time. If we could, we would. The real impact is in fact 90 days (or thereabouts): from February to June. That is the time we have wasted due to Tacco's incompetence. It may show that we were remiss in our estimation: we allowed 31 days for Tacco's delays; there were actually 3 times as many.
"We have actually worked (or tried to) during that time. We have expended more hours than we budgeted. We have been prevented from painting other houses. It has cost us even more than it appears on paper. It is one thing to wear travel time of one hour if one actually performs 6 hours of useful work at the site. How does one account for the travel time if one goes out, discovers that one can't work usefully and comes back an hour later? What about the case where one goes out, performs 6 hours of hard work doing preparation on a wall, only to discover the next time we arrive at the site that the wall has collapsed in the meantime, has been rebuilt, and has to be prepared again?"
He paused and looked around the room.
"None of what I have said up till now helps to fill the biscuit tin.
"Today is the first day of the rest of the project. We are basically at the start of the project. There is a difference: usually, at the start of a project, we don't know how well we can work with the customer. We tend to assume that the customer will be moderately co-operative and moderately competent.
"We have now had 3 months' experience with the customer. We know him to be below par in competence and co-operation. We know that he attributes absolutely no firmness to deadlines. Worse, he announces deadlines and, in the same breath, assures us that these will NOT be met.
"Whatever estimates we made 3 months ago, we now know them to be naively optimistic. Back then, we expected to be finished in 39 mad-, er, I mean, man-days. Consequently, we can expect this job to take much longer than another 39 man-days.
"My guess is that it would be better value for us to cut our losses. We were barely making money when we quoted 39 man-days. We sure in hell are gonna lose money if the job takes another 39 plus, say 30%, ... what's that, um, ... 52 man-days.
"Can we expect the job to go better from now on? Let me tell you, guys: I can't see it; I just don't think that it's likely.
"However, I think there is one slim possibility (even though I can't see it happening). If, somehow, it can be arranged that someone important from Tacco's place is in a position where his rooster is on the solid piece of wood, there might be a chance.
"Then, there might be some resources allocated to this project. So that, when we come into a room and discover that a wall has fallen down, we can find someone to tell; and that person has been given the job of keeping us going as his highest priority; and he has the ability to either fix the wall himself or he has the resources at his disposal to get the wall fixed.
"So that, when we get a little lost because there are so many rooms, we can ask someone to help us navigate around the castle.
"So that there is a full-time person whose responsibility is to learn how we prepare and paint rooms and who can help us to prepare and paint rooms. Not because we can't do the job without him - on the contrary, we can do the job much more quickly without him! But, the existence of such a person serves a number of goals. First it shows a commitment on Tacco's part. Secondly, it causes Tacco to feel some of our pain when things get bogged down (because his person is also out of action until the project is finished). Finally, one of Tacco's requirements was that he wanted his staff to be able to perform touch-up painting after we had finished.
"At the moment, we are the only one's whose roosters are vulnerable."
He stopped. Was there anything else?
"It's a huge shame, really. We started this project with a number of technical concerns (could our painting machine handle some of the tricky shapes in Tacco's cottage?). We passed all those tests with flying colours. We adapted our painting machine and it is now bigger and better than it used to be. It turns out that we haven't been able to paint the cottage-house-mansion-castle because it isn't finished. Technical problems were overcome with breathtaking alacrity.
"So, the last issue is: how will we look if we walk away from this job? I can't answer categorically. I do know that we have walked away from other customers who were unreasonable and it doesn't appear to have hurt us. Generally, it has had a therapeutic effect on our sanity! Because customers like that mess with your mind. They create the suspicion that we have not given our all. They attempt to drag us into the same pit of mediocrity and incompetence in which they lurk.
"My vote would be to call it quits.
"Can we take anything away from this debacle? I think so. I think there is a lesson here for us for next time.
"Moral: Everyone must stand to lose a rooster."
Talkback: Discuss this article with The Answer Gang
Henry has spent his days working with computers, mostly for computer manufacturers or software developers. His early computer experience includes relics such as punch cards, paper tape and mag tape. It is his darkest secret that he has been paid to do the sorts of things he would have paid money to be allowed to do. Just don't tell any of his employers.
He has used Linux as his personal home desktop since the family got its first PC in 1996. Back then, when the family shared the one PC, it was a dual-boot Windows/Slackware setup. Now that each member has his/her own computer, Henry somehow survives in a purely Linux world.
He lives in a suburb of Melbourne, Australia.
By Krishnaprasad K., Shivaprasad Katta, and Sumitha Bennet
The intent of this article is to exhaustively capture every step of booting the Knoppix Live CD from a PXE server. Booting Knoppix from PXE is a straightforward task, but here we would like to pinpoint some areas where we faced difficulties, so that this may be useful for anyone facing the same issues. We were attempting to automate system maintenance tasks such as performing system BIOS updates and updating the firmware of system peripherals such as storage controllers, NICs, etc. We planned to boot Knoppix from PXE and then customize it to execute the .BIN upgrade packages (for upgrading system component firmware). We came across the Knoppix Live CD and its HOWTOs, but we still hit roadblocks that had to be sorted out before we could boot Knoppix successfully from PXE on a Broadcom NIC-based system.
Knoppix is an amazing "one CD does it all" distro that you can customize and use for running your own programs: about 1900MB of software compressed onto a 700MB CD, with everything from sed to OpenOffice. The Knoppix 6.2 DVD comes with many more packages, so most of the programs you will need are covered by the DVD. Making it network-boot capable is an advantage, since we don't need to run around for a CD/DVD. This article describes how to make the Knoppix Live CD/DVD PXE-boot capable, and the points to note if you use a Broadcom card as the PXE-bootable NIC.
This article does not cover how to create a PXE setup; we assume that readers have the knowledge to set up a PXE server. Refer to the following link for setting up a PXE server.
Requirements:
- A system with a CD/DVD drive to boot the Knoppix Live CD (preferably Knoppix 6.2.1, which is the latest release).
- A PXE server configured in your network.
- A system running an NFS server. This holds the contents of the image that we are going to PXE-boot. The same PXE server can be used as the NFS server holding the root filesystem.
- The Knoppix Live CD ISO image burnt onto media. If you have downloaded a Knoppix version in a language other than English, you need to pass the parameter lang=us while booting the live CD. (Hardly anyone will download a different-language version, but it can happen. :-))
That's it. You are ready to start...
Knoppix has the built-in ability to auto-configure itself for network booting, and this makes the process rather easy. Boot into the Live CD (runlevel 5, the default one), then follow the steps to create the necessary files:
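The details of generating the kernel and initrd vary between Knoppix releases, so what follows is only a hedged sketch of getting the two files onto the PXE server once they exist on the live system; the host name and target directory are assumptions:

# Copy the Knoppix kernel and initrd to the PXE server's TFTP root
# (host "pxeserver" and the /tftpboot/knoppix/ directory are illustrative)
scp vmlinuz miniroot.gz root@pxeserver:/tftpboot/knoppix/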
Create an entry for Knoppix in your PXE server menu (/tftpboot/pxelinux.cfg/default) as below:

label Knoppix
  kernel /path/to/vmlinuz
  append nfsdir=<NFS Server IP>:/path/to/knoppix-CD-DVD-contents/ nodhcp lang=us ramdisk_size=100000 init=/etc/init apm=power-off nomce vga=791 initrd=/path/to/miniroot.gz

Here, vmlinuz is the kernel image that we took from the Knoppix Live CD and miniroot.gz is the initrd image. Note that the complete Knoppix CD/DVD contents must be available on the NFS server mentioned in the Requirements list.
Installing an NFS server is beyond the scope of this document; however, this will give you what's needed to configure your NFS server.
In /etc/exports on the NFS server, make sure that the share noted below is exported:

/path/to/Knoppix-CD-DVD-contents *(ro,no_root_squash,async)

Note how this matches up with the nfsdir parameter in the knoppix pxelinux.cfg/default entry:

...nfsdir=<NFS Server IP>:/path/to/knoppix-CD-DVD-contents/...

Run 'exportfs -a' to make the share visible to the outside world.
As mentioned above, copy the entire Knoppix CD/DVD contents to the above-mentioned NFS share. Knoppix expects to mount a directory over NFS and see the folder "KNOPPIX" right there. So, in our NFS server setup, we changed the nfsdir line to match our server's IP address, and then added an export on our NFS server as above.
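A minimal sketch of those server-side steps, assuming the CD is mounted at /media/cdrom and using an illustrative export path:

# Copy the full CD/DVD contents into the export directory;
# the KNOPPIX folder must end up at the top level of the share
mkdir -p /export/knoppix
cp -a /media/cdrom/. /export/knoppix/

# Export it read-only and publish the export table
echo '/export/knoppix *(ro,no_root_squash,async)' >> /etc/exports
exportfs -a
showmount -e localhost   # verify that the share is now visible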
That's it. Your NFS Server is ready with Knoppix contents and now it's time to boot the client boxes with Knoppix.
This is the heart of this article. Until now, you may have seen other documentation that talks about booting Knoppix via PXE on Intel-based NICs. If your setup has a Broadcom-based PXE-enabled NIC, the same steps won't boot your system successfully. This article highlights why you run into those issues and how you can get rid of them.
The Knoppix initrd image (miniroot.gz) is going to fail on client boxes that use Broadcom cards... why? :-( Here is the solution. :-)
As far as Broadcom drivers (e.g., bnx2) are concerned, a firmware image must be loaded when the bnx2 driver module is loaded into the kernel. Under some circumstances, as explained below, it is necessary to keep firmware images in non-swappable kernel memory or even in the kernel image (probably within the initramfs).
So, to make Knoppix PXE-bootable in this case, you will have to have the firmware images available in the initial ramdisk image (initrd), and then make them available at the time the driver loads. Here are the steps/commands, with a small piece of code, for doing so:
- If the device that needs the firmware is itself needed to access the filesystem, and due to some error the device has to be reset and the firmware reloaded, it won't be possible to get the firmware from userspace.
Example: a diskless client with a network card that needs firmware, or a filesystem stored on a disk behind a SCSI device that needs firmware. This is exactly our case: the real Knoppix image sits on an NFS share, which must be accessed via the very NIC over which we boot the Knoppix kernel and initrd image.
- Note that the extraction step unpacks the miniroot cpio archive, so it is better to create a temporary directory and extract the contents into it.
- Note that it's better to copy the whole lib/firmware directory into the initrd, since it contains the firmware images of many different drivers; in the future, this will help if you want to reuse the same initrd for different devices. A sketch of these steps follows.
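Here is a minimal sketch of the unpack/repack sequence, assuming miniroot.gz is a gzipped cpio archive and that the firmware images live under /lib/firmware (all paths are illustrative):

# Extract the existing initrd into a temporary directory
mkdir /tmp/miniroot && cd /tmp/miniroot
zcat /tftpboot/path-to/miniroot.gz | cpio -idmv

# Copy the whole firmware tree so any driver can find its image
mkdir -p lib/firmware
cp -a /lib/firmware/. lib/firmware/

# Repack as a gzipped cpio archive in the 'newc' format that initramfs expects
find . | cpio -o -H newc | gzip -9 > /tmp/miniroot.gz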
#!/bin/sh
# Firmware-loading helper: the kernel's firmware class invokes this with
# $DEVPATH and $FIRMWARE set in the environment.
[ -z "$FIRMWARE" ] && exit
# Signal the start of the upload, feed in the image, then signal completion.
echo 1 > /sys/$DEVPATH/loading
cat /lib/firmware/bnx2/$FIRMWARE > /sys/$DEVPATH/data
echo 0 > /sys/$DEVPATH/loading
Refer to the kernel's firmware-class documentation to learn more about the need to load the firmware image while loading the driver.
Now replace the old initrd at /tftpboot/path-to/miniroot.gz with the newly created miniroot.gz from /tmp/.
There you go - you are done with the configuration and recreation of initrd!
Boot your client boxes to the PXE menu and choose Knoppix as the boot option. This will load vmlinuz and miniroot.gz. There you go! You can modify the miniroot.gz initrd image to add your own programs and executables, effectively creating your own live CD that boots from the PXE server.
There are a lot of advantages: you can test your own programs during and after bootup without installing them on any OS - everything is driven from the NFS share! With the addition of ample custom settings, this setup can be used extensively in academic institutions, schools, and for hands-on sessions in workshops.
We would like to convey our gratitude and thanks to the Knoppix User Forum for helping us to solve this problem!
Talkback: Discuss this article with The Answer Gang
I am a strong follower and a big fan of GNU/Linux from India. I really admire the freedom, stability and strength offered by Linux. I must thank my guru, Mr. Pramode C. E., for introducing me to the wonderful world of Linux. I completed my B.Tech in Computer Science from Govt. Engineering College, Sreekrishnapuram, Palakkad (Kerala, India). Currently I work with Dell Inc. Bangalore, India, as a Software Engineer. My passion and ambition is to provide some useful things to the community, be it code or articles.
I work with Dell Inc. Bangalore, India as a Software Engineer. I did my B.Tech in Computer Science from JNTU (Hyderabad, India) and am currently working on my MS in Software Systems from BITS Pilani through the Work Integrated Learning Program. I am a strong believer in and a follower of Linux. My first exposure to Linux was during my graduation days, and I was impressed with its effective utilization of resources. I love to work on Linux and am a great fan of its stability, freedom, and its capabilities over other operating systems. My passion for Linux steered me toward work in the areas of Grid Computing and High Performance Computing Clusters. I can proudly say that these areas are dominated by Linux. I love to share the knowledge and experience that I've gained with the Open Source community through articles, tips, and code.
I hold a Bachelor's degree in Electrical and Electronics Engineering from Bharathiyar University, Coimbatore, India and am currently serving as a software Engineer at Dell Inc. in Bangalore, India.
I'm a strong proponent and follower of free and open-source software, which promotes and encourages the development of enhanced software capability through a community of like-minded and self-less folks with a passion for making software affordable and accessible to all.
I look forward to learning from the rich community of enlightened folks & wish to give back in any humble capacity I can. :)
Wanna call out my buddy, mentor, and colleague, Krishnaprasad K who instilled in me the same passion that he holds for Linux and free/open-source software in general.
By Ben Okopnik
In the past month or so, I've had a chance to develop and polish a simple but highly effective anti-spam system. My requirements, and my reason for developing it, form a rather narrow and specific niche - my network connectivity situation is quite unusual, rather similar to what a lot of road warriors encounter - but the solution is nicely generalized and usable by anyone with a GMail account and running Linux. It's very fast and not at all CPU-intensive (unlike most anti-spam solutions), and so far, it has an excellent track record for accuracy (zero false positives, very small number of false negatives once past the initial test cycle.) At this point, it looks stable enough that I feel like sharing it with the Linux community is worthwhile; it also has enough flexibility for any experimentation that you may want to do.
Some time ago, I posted a question on the Linux Gazette's Answer Gang list; I was looking for a solution to my somewhat unusual situation which had left me stumped for a number of years. In essence, it came down to this: given that I travel and move around a lot, and thus have unpredictable and often fragile and/or slow connectivity, how do I filter spam effectively?
The general gist of that conversation came down to exactly what I had learned and expected over the years of struggling with this topic:
I had been using SpamAssassin for several years - but in the last couple of months, the frequency of spam mails that it let through became intolerable, despite the best tuning I could do. In addition, it filtered out a number of valid emails - i.e., false positives - which was a much worse problem, with a much greater hassle attached to it: every few days, I had to do a visual scan of my spam inbox hoping to spot valid emails before my eyes glazed over from zooming over thousands of messages (and I'm convinced that I've lost at least a few due to those factors.) All of this added up to a simple imperative: I had to either change my spam-filtering approach or resign myself to my email becoming progressively less useful and less reliable. The latter was not an option, since most of my business is either done via, or at least partially involves, email.
Initially, I started experimenting with a challenge-response system. The basic premise of such a system is that it depends on two lists containing email addresses - a whitelist (i.e., all emails from that address are accepted) and a blacklist (all emails from that address are discarded.) Anything in between gets tagged and held, while a one-time confirmation message is sent back to the sender's address; if they reply, that address gets added to the whitelist and their message is released.
This was an OK solution - but I was unhappy about the additional load that it generated, both in the number of emails necessary as well as the necessity of taking the sender's time. The latter, by the way, is usually used as the standard reason for not implementing C-R more widely: it is ostensibly "offensive" to people to get and answer confirmation messages. The standard scenario portrays the outraged receiver deleting the confirmation email in a huff (or perhaps printing it out, throwing it on the ground, and jumping up and down on it in a rage until it's all shredded or they have a stroke due to apoplexy.) Personally, I strongly disagree with that so-called "reasoning" and find it offensive: anyone who does not consider communicating with me to be of enough value that they can't hit 'Reply' once is unwelcome in my mailbox in the first place. However, as a personal preference, I dislike adding to anyone's workload - no matter in how miniscule the fashion - without a good reason, and if it can be avoided at no cost to me, and no reduction of functionality in my spam filtering, I'll be happy to do it.
In that light, one of the responses from The Answer Gang really piqued my interest: Steve Brown's idea of using GMail as an external filter (thanks, Steve!) I decided to combine the best features of C-R and external filtering to create my ultimate solution, which eliminated the response requirement. Although it took quite a bit of experimenting initially, the results have been excellent.
For comparison purposes, here's how the new system stacked up against my tweaked-to-the-max SpamAssassin system. It may be relevant to note that, as the Editor-in-Chief of LG, I'm in a rather exposed position, spam-wise: my email address is out there in hundreds of thousands of places, and I usually make no attempt to disguise it. As a result, I get ~1000 messages per day, with 98-99% of those being spam. Pretty ugly... but on the other hand, it makes for a great test bed: either my solutions work really well, or they fail abysmally. That's a test environment that's really meaningful!
| | False positives (real emails treated as spam) | False negatives (spam emails treated as real) |
|---|---|---|
| Procmail/Gmail (first week/subsequent weeks) | 4/0 | 22/6 |
| SpamAssassin (recent weekly averages) | 1-2 | 36-70 |
Again, this system has only been in operation for a little over a month - but the results, once I was done tuning it, have been rock-stable. For myself, I'm pretty excited about it: the countless hours that I've spent tuning and retuning SpamAssassin and looking through the spam bucket to see if it mis-identified something are now just a bad memory. I still check the "all incoming mail" list from my current system once in a while (more and more rarely as time goes on), just to confirm that I'm not tossing any valid emails - but given the mechanism that's in use, I feel pretty secure about it not discarding any email without me having explicitly asked it to do so. That's a very, very good feeling.
The initial part of setting up the system, whether C-R or otherwise, consists of creating all the relevant files - primarily, the whitelist and the blacklist. In the configuration section of the .procmailrc file that I'll make available at the end of this article, you can call them whatever you like; for myself, I used '~/.mail-accept-list' and '~/.mail-deny-list', respectively. I also created a list of symlinks to all the relevant files so I could look at them easily right from my home directory:
MAIL-ACCEPT-LIST -> ~/.mail-accept-list
MAIL-DENY-LIST -> ~/.mail-deny-list
MAIL_PROCMAILRC -> ~/.procmailrc
MAIL_PROCMAIL_LOG -> /var/log/procmail
MAIL_SAVE_ALL -> ~/.mail_save_all
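Creating the symlinks is a one-time step; a minimal sketch, run from $HOME:

ln -s ~/.mail-accept-list MAIL-ACCEPT-LIST
ln -s ~/.mail-deny-list MAIL-DENY-LIST
ln -s ~/.procmailrc MAIL_PROCMAILRC
ln -s /var/log/procmail MAIL_PROCMAIL_LOG
ln -s ~/.mail_save_all MAIL_SAVE_ALL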
The names are, I hope, obvious indicators of the function of each file. If you're not familiar with "procmail", it is a very powerful and commonly-used email processor written by Stephen R. van den Berg. It uses '~/.procmailrc' as its configuration file; this is composed of "recipes" that determine how to process mail. My system is constructed of those recipes, plus a few external files and system utilities.
Before we go on to that, though, we'll need to populate the whitelist and the blacklist. If, like me, you've been saving your email - and I've got more than 20 years of mail archives - that's not too hard; all we need to do for the initial whitelist is extract the addresses of anyone who has ever written to me as well as those to whom I've written. (Yes, it's possible that some of those will need to be blacklisted later - but that's so simple that it's not worth worrying about.) I used a combination of shell scripting, "formail", and Perl to do the extraction [1]. Since I've learned over the years that various mail clients do some really ugly things to mail headers, I use extreme caution and circumspection in processing them; in most cases, this means a "belt-and-suspenders" sort of an approach. In this case, I'm using "formail" to concatenate ('-c') continued fields in the header and split ('-s') the mboxes into individual emails, and Perl to extract either the 'From:' address (preferred) or, failing that, the 'Return-Path:' address.
#!/bin/bash
# Created by Ben Okopnik on Mon Jun 28 15:31:08 EDT 2010

# 'cd' to your mail directory
cd ~/Mail

for file in *
do
    # Ignore all directories and the "Sent_mail" file (we'll process that later)
    [ "$file" == "Sent_mail" -o -d "$file" ] && continue
    echo "Processing '$file'"
    formail -cs \
        perl -wlne'$f=$1 if /^(?:return-path|from):.*?([\w\.=\-]+@[\w\.=\-]+\w+)/i;print $f and last if /^$/' \
        < "$file" >> /tmp/whitelist
done

# Process the mail that I've sent; this time, we'll extract the 'To:' headers
echo "Processing the 'Sent_mail' file"
formail -cs \
    perl -wlne'print $1 and last if /^To:.*?([\w\.=\-]+@[\w\.=\-]+\w+)/i' \
    < Sent_mail >> /tmp/whitelist

sort -u /tmp/whitelist -o /tmp/whitelist
So there it is; a list of all my "validated" email addresses collected into a single file (/tmp/whitelist). Note the last line: this produces a list of sorted addresses with no repeats. Not all that complex, right?
The blacklist is even less complicated. Since we're going to stamp all our outgoing email with a special header that identifies it as really being from us, the first thing we'll put into the blacklist is... all our valid email addresses. No fooling. Seems a bit counterintuitive, but that's exactly what we need to do - because spammers very often send their stuff with it being marked as coming from the same address they're sending it to. This approach gets rid of that very large category, painlessly and safely. You'll see precisely how this works as we go through the .procmailrc file.
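As a sketch of that seeding step - the addresses are the examples from the configuration listing below, so substitute your own - and since the deny list is later used as a file of grep regexes, it's safest to escape the literal dots:

# Seed the blacklist with your own (example) addresses
printf '%s\n' 'ben@okopnik\.com' 'okopnik@gmail\.com' >> ~/.mail-deny-list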
Next, let's take a look at the .procmailrc file itself. Mine has a few things in it besides the anti-spam system, so I'll highlight just the bits that we're discussing. Let's take a look (ignore the line numbers; they're not part of the code, and are there just so I can refer to a given line):
001 PATH
002 SHELL=/bin/sh
003 MAILDIR=/var/spool/mail
004 DEFAULT=$LOGNAME
005 LOGFILE=/var/log/procmail
006 # VERBOSE=on
007
008 # This gives you the 'From:' address if it's available, or the 'Return-Path:' address otherwise.
009 :0 hw
010 FROM=|/usr/bin/perl -wlne'$f=$1 if /^(?:return-path|from):.*?([\w\.=\-]+@[\w\.=\-]+\w+)/i;print $f and last if /^$/'
The first six lines just set up the procmail variables. The only bits to note are that you may not necessarily want your procmail logfile to be in /var/log (in fact, you'd need root permissions to set that up); also, 'VERBOSE=on' is currently commented out but still there in case you want to enable it for troubleshooting. When enabled, it produces a lot of output in the logfile, and can be very useful. Line 10 is, of course, the sender address extractor that we used to such good effect earlier.
Now, let's jump right to the spam filter:
011 #************* GMAIL-BASED ANTI-SPAM SYSTEM **************
012 #
013 # Customize all these constants as necessary:
014 MY_EMAIL=ben@okopnik.com
015 MY_GMAIL=okopnik@gmail.com
016 # Spam-Kill stamp; use some unique string without spaces
017 SPAM_KILL=74d04eab1341a01117de96f2
018 # "Secret word" for email control messages
019 SECRET=Funky
020
021 FORMAIL=/usr/bin/formail
022 GREP=/bin/grep
023 SENDMAIL=/home/ben/bin/bssmtp
024
025 DB=$HOME/.mail-accept-list
026 DENY_DB=$HOME/.mail-deny-list
027 NOTIFY=$HOME/Mail/000-notify
028 NDNS=$HOME/Mail/000-ndns
029 TRASH=/dev/null
030 SAVE_ALL=$HOME/.mail_save_all
031
This is the configuration section - pretty straightforward stuff. You'll need to put in your email address and your GMail address; you'll also need to come up with a couple of unique strings (don't worry; these aren't the real ones that I use. :) You could, of course, use the same string - but $SECRET should be something that's easy to type out on, say, your Blackberry whenever you want to validate someone on the spot (we'll see how this works in a moment.)
$DB is your whitelist; $DENY_DB is the blacklist. $NOTIFY - assuming you want to set that up - is mail that you regularly receive (say, monthly notifications from your listbots) but don't want to read; archiving is good enough. $NDNS are Non-Delivery Notifications; for now, I'm collecting those, looking through them monthly, and then tossing them. In another month or so, I'll just trash them, but for now, I'm still in a testing phase. $SAVE_ALL is another testing phase sort of thing: it saves all received email, just so I can go over it and check that everything is getting filtered correctly. Sooner or later, it too is going to disappear.
033 # Immediately deliver anything containing my verification string (the
034 # header is added to all outgoing email via my .muttrc). You should now add
035 # all your email addresses to the blacklist, since anything "from you" that
036 # fails this test is spam.
037 :0:
038 * $ X-Spam-Kill: $SPAM_KILL
039 ${DEFAULT}
This is the gadget that delivers all the real email that comes from us; since I use "mutt" for my email client, I simply set it up to add a header with that stamp - i.e., 'X-Spam-Kill: ' followed by my $SPAM_KILL string. This bypasses pretty much all the tests and goes right into my inbox.
041 # This should be either empty, or a regex that matches any addresses from
042 # which you get lots of mail that you want to archive but not read:
043 BOTS=(mailman-owner@list1.com|mailman-owner@list2.com)
Right - this is what we'll be archiving without reading.
045 # This should be a regex that matches all domains from which you know you
046 # won't get spammed:
047 KNOWN_DOMAINS=(safedomain1.com|safedomain2.com|safedomain3.com)$
048
049 # This should be either empty, or a regex that matches the To: headers of
050 # any mailing lists you're on:
051 LISTS=(list1@lists.net|lists2@lists.net|list@yahoo.com|list@lists.mail.org)
Another rather obvious one. If you use Mutt, like I do, simply copy your 'lists' line here and modify it so that it becomes a valid regular expression, like the above.
All right, here comes the meat of the "program" itself:
053 ####################################################################
054 # Don't change anything below unless you know why you're doing it! #
055 ####################################################################
056
057 :0 c
058 $SAVE_ALL
This line saves everything into the file we defined earlier.
060 # You can email yourself to whitelist an address; note use of "secret word" in
061 # subject
062 :0
063 * ^Subject: ${SECRET}-approve \/.*
064 * ? echo $MATCH >> $DB
065 ${TRASH}
066
067 # You can email yourself to blacklist an address; note use of "secret word" in
068 # subject
069 :0
070 * ^Subject: ${SECRET}-deny \/.*
071 * ? sed -i '/^'"$MATCH"'$/d' $DB
072 * ? echo $MATCH >> $DENY_DB
073 ${TRASH}
These two recipes allow you to whitelist or blacklist an address by mail: just send yourself an email with the secret word that you defined above, followed by a dash and either the word 'approve' or 'deny' followed by a space and the email address that you want to define. Nice little feature - not that I use it much.
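For instance, with the example $SECRET of 'Funky' from the listing above, whitelisting a (hypothetical) address is just a matter of sending yourself a message like:

To: ben@okopnik.com
Subject: Funky-approve alice@example.com

The body can be empty; the recipe matches on the Subject: line alone, and the control message itself ends up in /dev/null.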
075 # If message is from a blacklisted sender, dump it
076 :0 h
077 # * ? $GREP -i ^$FROM $DENY_DB
078 * ? echo $FROM|$GREP -f $DENY_DB
079 ${TRASH}
Other than the "whitelist/blacklist by email" functionality, note this recipe that takes precedence over everything else: if someone is blacklisted, they're gone. Doesn't matter if they're on a whitelisted mailing list that you're subscribed to or anything else; once they earn a place in that file, you'll never see them again.
Incidentally, note the commented-out line (#77): originally, I used the email address as the "grep" search string and the file as the source, and if the string was found in the file, then that was the end of it. However, I discovered that there were times when I wanted to block an entire domain, or use a regular expression to define exactly what I wanted to block - but this was not possible with that recipe! After that, I changed my approach to the one on line #78: I pipe the address into "grep" and use the content of $DENY_DB as the list of regular expressions to check against that string. This allows me to put in, e.g., '@spammer.org' and block that whole domain, or 'joe_slick' and block all addresses containing that string. Do be careful, though: if you accidentally add something like a space to that file, you'll throw away all email!
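In other words, the deny list is just a file of grep regexes, one per line. A small illustrative sample (the entries are hypothetical; escape literal dots, and beware of blank lines, since an empty pattern matches every address):

@spammer\.org
joe_slick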
081 # If message is from a bot, archive it
082 :0
083 * BOTS ?? (.)
084 * $ FROM ?? $BOTS
085 ${NOTIFY}
086
087 # If message is a Non-Delivery Notification, archive it
088 :0
089 * MAILER-DAEMON
090 ${NDNS}
091
092 # If message is from a known domain, deliver it
093 :0
094 * KNOWN_DOMAINS ?? (.)
095 * $ FROM ?? $KNOWN_DOMAINS
096 ${DEFAULT}
097
098 # If message is to a list we're on, deliver it
099 :0
100 * LISTS ?? (.)
101 * $ ^TO_$LISTS
102 ${DEFAULT}
No surprises there, hopefully; we just distribute the mail to the boxes that we defined according to the rules that we set up for them.
104 # If the message has the "been-filtered-by-Google" stamp, deliver it.
105 # This clause implies that we trust Gmail, but not so much that we'll
106 # auto-whitelist anybody that it passes. If you want to do that as well,
107 # just uncomment the 'echo $FROM' line.
108 :0
109 * $ ^X-Gstamp: $SPAM_KILL
110 # * ? echo $FROM >> $DB
111 ${DEFAULT}
As the comment says, this is for all emails that have been validated by GMail. Anything with the 'X-Gstamp:' header (which we add in the next recipe) simply gets delivered.
113 # If sender isn't in the DB, add an X-Gstamp and forward it to GMail for filtering
114 :0 f
115 * $ ! ^X-Loop: $MY_EMAIL
116 * ! ? $GREP -i ^$FROM $DB
117 |$FORMAIL -A"X-Gstamp: $SPAM_KILL"
118
119 :0 A
120 ! $MY_GMAIL
121
122 #********** END OF GMAIL-BASED ANTI-SPAM SYSTEM **********
If an email has made it through all of the above recipes without being dumped or delivered, then we don't know what it is (ham or spam) - so we'll let GMail decide for us. In theory, this minimizes our privacy exposure, since we should have already whitelisted the people who are likely to send us that kind of important info. Best of all worlds!
Again, the average .procmailrc file will have other things in it - perhaps header fixups for friends with seriously broken email clients, or logic to decide which listmail should go into which mailboxes. If you know how to write procmail recipes, this is all still usable: filters (such as the header fixups) would go just below the procmail variable definitions (say, just below line 10), and list distribution recipes might replace the simple "list delivery" recipe (98-102). If you don't know how, it's relatively simple - and the documentation that comes with procmail is excellent and detailed (see 'man procmail', 'man procmailrc', and 'man procmailex' for lots of good examples and explanations.)
I use "fetchmail" for mail retrieval, so setting that up was pretty trivial: I just grab the mail from my mailhost and from GMail via POP (the latter requires changing the settings at GMail, which is pretty simple.) Since I use Mutt as my mail client, I've added a convenient shortcut to it which allows me to blacklist spam instantly; in fact, it replaced the "spam, not ham" shortcut that I had been using for SpamAssassin. Here are the necessary entries in ~/.muttrc, in case you happen to be using Mutt yourself:
macro index \cb |"/home/ben/bin/blacklist^M"
macro pager \cb |"/home/ben/bin/blacklist^M"
So, if I ever do run across a spam that managed to make it through GMail, all I have to do is hit 'Ctrl-B' - and that address is gone forever. The script that it invokes is a pretty simple one:
#!/bin/bash
# Created by Ben Okopnik on Tue May 11 23:32:58 EDT 2010

# Extract the sender's address from the message on stdin
FROM=$(perl -wlne'print $1 and last if /^From:\s*.*?([\w\.\-]+@[\w\.\-]+\w+)/')

if [ -n "$FROM" ]
then
    # Remove the address from the whitelist, then add it to the blacklist
    sed -i '/^'"$FROM"'$/d' ~/.mail-accept-list
    echo $FROM >> ~/.mail-deny-list
fi
Note that if that entry exists in the whitelist, it'll be removed from there. Oh, one more thing for .muttrc: there's also the 'X-Spam-Kill:' header that marks the email as actually coming from me.
# No, this is still not my real X-Spam-Kill string. :)
send-hook ~A 'my_hdr X-Spam-Kill: 74d04eab1341a01117de96f2'
Taken all together, this forms an easy-to-use, effective spam killer; I've recovered a number of hours that I used to waste in dealing with spam, and have reduced the wear-and-tear on my nerves caused by finding the occasional business email in my spambox. All in all, I'm really glad that I've spent the time developing and implementing this system.
Feel free to download my .procmailrc file and experiment. I've got to say that I'm pretty excited about this whole system: previously, while retrieving email in the morning, I used to watch my poor little netbook bogging down as SpamAssassin overloaded its tiny brain. In addition, processing even a hundred emails took at least five minutes. Now, when I try to watch my mail log via 'tail -f /var/log/mail.info', the emails fly through the processing so fast that I'd have to be a speed reader to catch them all. The major delay factor in retrieving them is simply the bandwidth/latency of whatever connection I happen to have.
In the near future, once I'm completely satisfied with all the testing, I'm going to try moving this setup off my local system and onto my mail server - given its nature, it's certainly flexible enough to work that way. This will mean using the whitelist/blacklist-by-mail feature and adapting the "blacklist" script to work over the network, or perhaps simply synchronizing the local and the remote lists via a cronjob - but it will also mean much less traffic between my local machine and that mailhost, since all the blacklisted mail will get dumped without me ever downloading it. The GMail-bound traffic will also be sent off from there, meaning that my system will never have to do that round-robin transaction either, so the only thing I'll see is whitelist-validated and GMail-filtered stuff - perhaps a 100-to-1 reduction in volume. I'm really looking forward to that.
Overall, this experiment has made large, positive, time-saving changes in my life; a huge improvement over my previous spam-handling method. Hurrah for Linux and the ability to tweak, play, and experiment!
[1] I could have done this with Perl alone, but I have an additional purpose here: the Perl one-liner that I used is also a nice tool that we can re-use in our .procmailrc - we definitely need to extract the address from each email, right? - so we might as well start using it here.
Talkback: Discuss this article with The Answer Gang
Ben is the Editor-in-Chief for Linux Gazette and a member of The Answer Gang.
Ben was born in Moscow, Russia in 1962. He became interested in electricity at the tender age of six, promptly demonstrated it by sticking a fork into a socket and starting a fire, and has been falling down technological mineshafts ever since. He has been working with computers since the Elder Days, when they had to be built by soldering parts onto printed circuit boards and programs had to fit into 4k of memory (the recurring nightmares have almost faded, actually.)
His subsequent experiences include creating software in more than two dozen languages, network and database maintenance during the approach of a hurricane, writing articles for publications ranging from sailing magazines to technological journals, and teaching on a variety of topics ranging from Soviet weaponry and IBM hardware repair to Solaris and Linux administration, engineering, and programming. He also has the distinction of setting up the first Linux-based public access network in St. Georges, Bermuda as well as one of the first large-scale Linux-based mail servers in St. Thomas, USVI.
After a seven-year Atlantic/Caribbean cruise under sail and passages up and down the East coast of the US, he is currently anchored in northern Florida. His consulting business presents him with a variety of challenges such as teaching professional advancement courses for Sun Microsystems and providing Open Source solutions for local companies.
His current set of hobbies includes flying, yoga, martial arts,
motorcycles, writing, Roman history, and mangling playing
with his Ubuntu-based home network, in which he is ably assisted by his wife, son and daughter; his Palm Pilot is
crammed full of alarms, many of which contain exclamation points.
He has been working with Linux since 1997, and credits it with his complete loss of interest in waging nuclear warfare on parts of the Pacific Northwest.
This month's article isn't much of an article; it's more like an interview. It's not very technical, but potentially satisfying. True or false? If you have been around Linux for a few years, you may have been called an open source advocate, a computer guy, or just a plain and simple nerd. Probably true, right? For years, that's how Linux users have been seen by many outside the open source community, and even by a few within it. This article, I mean interview, is a living example that this stereotype no longer holds true.
About 15 years ago, when I first arrived in the US as an exchange student, I met a family in a small town called Hampden, ME. Over the years, I became friends with this family, saw their kids graduate from High School and College, and even lived as neighbors to their parents for about three years of my life. The person I am writing about is my good friend Brenda McCleary, a mother of two college graduates who decided to give Linux a try.
I also want to emphasize that I have done very little editing on this interview, because I want to show the reader that one need not know all the right terminology, or Linux versions and distributions, to be able to start using it.
Anderson: Hi, Brenda, tell us a little bit about yourself.
Brenda: Hi, Anderson. I've been married 31 years to husband
John. I am the mother of two adult sons: Patrick (26) and DJ (24). I live in
Simpsonville, SC. I have worked cleaning homes, been a church secretary, an
administrative assistant in a denturist office, and currently in the food
service industry as dishwasher/baker/prep person.
A: When was the first time you heard about Linux?
B: I heard about Linux from you Anderson.
[Note: Yes, I've been using Linux since 1996, and virtually every time I hear someone complaining about their Windows machine being infected with whatever destructive technology is out there, my reply is usually: You should give Linux a try...]
A: What made you want to switch from Windows to Linux?
B: I switched to Linux because you recommended it and because
I was tired of Windows not running properly. I often dealt with viruses and
having to buy a new computer more often than I should because after a few
times of trying to restore Windows it would no longer work. You convinced me
that Linux was user-friendly and that I wouldn't have to worry about viruses
like I did with Windows. You also explained that there would be some learning
curve in using this new Operating System and that some applications or hardware
that I currently use may not be compatible.
A: What Linux distribution and version are you using?
B: I am currently using Ubuntu 10.4?? It is the latest upgrade
that Update manager offered. Is this right Anderson?
[Note: Ubuntu 10.04]
A: Tell me about your biggest frustration with Linux.
B: My biggest frustration with Linux is the simple things like
trying to upgrade my Garmin. I did research on-line to try and see how to fix
the problem but because of my lack of education concerning words like bzip,
tarballs, source codes and such. It may as well be Greek or Latin and I gave up
not knowing what to do. I finally had my son upgrade it on his computer with
Windows.
[Another problem that I had, but not related to Linux itself, but more of a vendor support issue was:] When I first received my computer from Dell with Linux OS I was so excited. I could not however get the DVD to work. I called Dell and after many conversations, being transferred to different sections of the company and was asked to reinstall the OS system twice, that didn't fix the problem, I hung up angry and frustrated. Dell does not give customer support for Linux. I finally sent my husband and my computer to your home and you had it fixed in less than a hour. If the fix was that easy why couldn't they help me?
A: Have you ever used the Linux terminal? If yes, what did you do
with it?
B: I have tried to use the Linux terminal using step by step
instructions based on on-line articles by other Linux users. To be honest I
don't remember what it was I was trying to fix, but I do remember when the
terminal accepted my codes, copied line for line from the article, I was
tickled pink!
A: Have you ever had to upgrade your computer by yourself? If yes,
what was the experience like?
B: I recently upgraded my computer through the Update manager
which automatically lets me know when there are updates and upgrades to be
installed. Just a few clicks of the mouse and the update or upgrade is
finished. So much easier than Windows! After the upgrade there were a couple
annoying glitches, but they were soon worked out with new updates.
A: Tell us a bit about what you do with Linux at home? Anyone else in
the family use it? What do they do?
B: The two major reasons I use my computer is to access e-mail
and Facebook. I also do on-line searches, upload pictures from my camera with
F-Spot photo manager, listen to music on Rhythmbox, type documents on Open
Office, and create CD's on the CD/DVD creator. Synaptic Package Manager has also
been a useful tool..as well as Users and Groups. I especially enjoy it that I
can log into my own account and not have to deal with family members bookmarks
or icons that they choose to use on their desktop screen. My husband also uses
Linux for e-mail and Facebook, paying bills on-line and doing on-line searches
to read news from our home state, Maine.
A: What's your favorite part about using Linux?
B: My favorite part about using Linux is not having to deal
with viruses, cookies and not having to worry about when the OS is going to die.
It is a stable, trustworthy system that I hope in time will be more and more
compatible with Windows so that many other simple people like myself can enjoy
the many benefits of using Linux without the frustration of certain things being
incompatible. Although I do have to say that the upgrades to newer versions of
Linux seem to fix some problems automatically which means Linux is aware of the
incompatibility issues and is taking action.
A: Well, Brenda, thank you so much for taking the time to share with
us about your Linux experience. One final question, if you had to recommend
Linux to another mother like yourself, how would you 'sell' it?
B: I would definitely recommend Linux to other mothers because
of the many benefits I have explained in this interview. I would however
recommend to them that they do their research concerning compatibility issues
of hardware and software they are currently using to make sure the change would
best fit their needs. I would also recommend that they also find a current Linux
user that could help them through the transition in case they run into the same
problem I did of no customer support from the seller. You are welcome.
My friend Brenda McCleary has been a Linux user for a little over two years now, and every once in a while I check in with her via Facebook to gauge her level of satisfaction with it. Even though I myself am a Fedora user, I try to make myself available for her when she has questions about how to get something working. But I have to confess that Brenda has been flying solo with her Linux install for several months now, without any major incidents.
The Linux operating system has definitely grown up, and so have its users. We are not all just college kids with nothing better to do than hack on our computers trying to solve a programming problem. A lot of us are ordinary folks with a simple wish: when I get in front of a computer, I want to connect with others, share media, and even work, without having to keep fighting viruses, malware, trojans, etc. That's not to say that Linux is a perfectly safe haven yet; but in comparison to Windows, it is safe enough.
Talkback: Discuss this article with The Answer Gang
Anderson Silva works as an IT Release Engineer at Red Hat, Inc. He holds a BS in Computer Science from Liberty University, a MS in Information Systems from the University of Maine. He is a Red Hat Certified Engineer working towards becoming a Red Hat Certified Architect and has authored several Linux based articles for publications like: Linux Gazette, Revista do Linux, and Red Hat Magazine. Anderson has been married to his High School sweetheart, Joanna (who helps him edit his articles before submission), for 11 years, and has 3 kids. When he is not working or writing, he enjoys photography, spending time with his family, road cycling, watching Formula 1 and Indycar races, and taking his boys karting,
These images are scaled down to minimize horizontal scrolling.
All HelpDex cartoons are at Shane's web site, www.shanecollinge.com.
Talkback: Discuss this article with The Answer Gang
Part computer programmer, part cartoonist, part Mars Bar. At night, he runs
around in his brightly-coloured underwear fighting criminals. During the
day... well, he just runs around in his brightly-coloured underwear. He
eats when he's hungry and sleeps when he's sleepy.
These images are scaled down to minimize horizontal scrolling.
All "Doomed to Obscurity" cartoons are at Pete Trbovich's site, https://penguinpetes.com/Doomed_to_Obscurity/.
Talkback: Discuss this article with The Answer Gang
Born September 22, 1969, in Gardena, California, "Penguin" Pete Trbovich today resides in Iowa with his wife and children. Having worked various jobs in engineering-related fields, he has since "retired" from corporate life to start his second career. Currently he works as a freelance writer, graphics artist, and coder over the Internet. He describes this work as, "I sit at home and type, and checks mysteriously arrive in the mail."
He discovered Linux in 1998 - his first distro was Red Hat 5.0 - and has had very little time for other operating systems since. Starting out with his freelance business, he toyed with other blogs and websites until finally getting his own domain penguinpetes.com started in March of 2006, with a blog whose first post stated his motto: "If it isn't fun for me to write, it won't be fun to read."
The webcomic Doomed to Obscurity was launched New Year's Day, 2009, as a "New Year's surprise". He has since rigorously stuck to a posting schedule of "every odd-numbered calendar day", which allows him to keep a steady pace without tiring. The tagline for the webcomic states that it "gives the geek culture just what it deserves." But is it skewering everybody but the geek culture, or lampooning geek culture itself, or doing both by turns?
Ben Okopnik [ben at okopnik.com]
On Sun, Jun 06, 2010 at 07:12:15PM +0530, Prof. Parthasarathy S wrote:
> Just curious.
>
> Is there a place where I can see the names of ALL the members of the
> Answer gang (that includes me, of course). Want to see who all are
> there, since I may be able to lcate some of my friends and colleagues
> with whom I have lost contact.
Well, you could take a look at our authors' list. Anyone who has submitted a bio and is a member of TAG is marked by an asterisk.
https://linuxgazette.net/authors/
--
OKOPNIK CONSULTING
Custom Computing Solutions For Your Business
Expert-led Training | Dynamic, vital websites | Custom programming
443-250-7895    https://okopnik.com
[ Thread continues here (9 messages/9.95kB) ]
Jimmy O'Regan [joregan at gmail.com]
--
<Leftmost> jimregan, that's because deep inside you, you are evil.
<Leftmost> Also not-so-deep inside you.