
...making Linux just a little more fun!

deleted file recovery

J.Bakshi [j.bakshi at icmail.net]


Thu, 16 Oct 2008 20:24:54 +0530

Dear list,

I generally work at the Linux terminal. Frequently I have to use "rm" to delete files/folders which are not required any more, and this raised a question in my mind: what do I do if at some point I need a file/folder which was deleted a few minutes ago? I have googled a lot, but have not found any open source tools which can recover the deleted ones. testdisk exists, but it is something entirely different. There are some commercial GUI-based tools available, but I am looking for a CLI tool or technique which can actually recover a deleted file/folder.

I hope someone can enlighten me. Thanks.




René Pfeiffer [lynx at luchs.at]


Thu, 16 Oct 2008 17:12:30 +0200

Hello!

On Oct 16, 2008 at 2024 +0530, J.Bakshi appeared and said:

> [...]
> I generally work at the Linux terminal. Frequently I have to use "rm" to
> delete files/folders which are not required any more, and this raised a
> question in my mind: what do I do if at some point I need a file/folder
> which was deleted a few minutes ago? I have googled a lot, but have not
> found any open source tools which can recover the deleted ones. testdisk
> exists, but it is something entirely different. There are some commercial
> GUI-based tools available, but I am looking for a CLI tool or technique
> which can actually recover a deleted file/folder.

The problem of data recovery depends heavily on which filesystem you are using and what you do immediately after the deletion. Unfortunately the common filesystems do not support undeletion. Ext2 does to some extent; Ext3, XFS and ReiserFS do not. I am not sure about JFS. I dug up some references, but both are far from being GUI solutions.

https://www.xs4all.nl/~carlo17/howto/undelete_ext3.html
https://www.cgsecurity.org/wiki/ReiserFS_File_Undelete_HOWTO

In almost all cases of accidental deletion you must act immediately, i.e. you must shut down the system sitting on the data storage involved and recover your data at once. Any delay may cause your deleted data to be overwritten by disk activity.
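
A cautious first response might look something like this (only a sketch; the mount point is hypothetical, and remounting read-only will fail if files on it are open for writing):

mount -o remount,ro /home   # stop further writes to the affected filesystem
# ...or unmount it outright and do the recovery from a rescue/live system:
umount /home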

The best GUI solution to this problem is implemented by most window managers - it's the trash can / recycle bin folder. This sounds strange, but considering the fact that most modern filesystems are designed without an undeletion feature, copying the data first and then deleting it some time after that is the safest way.

Best, René.




J.Bakshi [j.bakshi at icmail.net]


Thu, 16 Oct 2008 21:35:42 +0530

On Thursday 16 Oct 2008 8:42:30 pm René Pfeiffer wrote:

> Hello!
>
> On Oct 16, 2008 at 2024 +0530, J.Bakshi appeared and said:
> > [...]
> > I generally work at the Linux terminal. Frequently I have to use "rm"
> > to delete files/folders which are not required any more, and this
> > raised a question in my mind: what do I do if at some point I need a
> > file/folder which was deleted a few minutes ago? I have googled a lot,
> > but have not found any open source tools which can recover the deleted
> > ones. testdisk exists, but it is something entirely different. There
> > are some commercial GUI-based tools available, but I am looking for a
> > CLI tool or technique which can actually recover a deleted file/folder.
>
> The problem of data recovery depends heavily on which filesystem you are
> using and what you do immediately after the deletion. Unfortunately the
> common filesystems do not support undeletion. Ext2 does to some extent;
> Ext3, XFS and ReiserFS do not. I am not sure about JFS. I dug up some
> references, but both are far from being GUI solutions.
>
> https://www.xs4all.nl/~carlo17/howto/undelete_ext3.html
> https://www.cgsecurity.org/wiki/ReiserFS_File_Undelete_HOWTO
>
> In almost all cases of accidental deletion you must act immediately,
> i.e. you must shut down the system sitting on the data storage involved
> and recover your data at once. Any delay may cause your deleted data to
> be overwritten by disk activity.
>
> The best GUI solution to this problem is implemented by most window
> managers - it's the trash can / recycle bin folder. This sounds strange,
> but considering the fact that most modern filesystems are designed
> without an undeletion feature, copying the data first and then deleting
> it some time after that is the safest way.
>
> Best,
> René.

Hello Rene,

Thanks a lot for the discussion, and sorry for not mentioning the filesystem. I generally use reiserfs, but on rented hosting server space I usually see the ext3 filesystem. I'll definitely look into the links you have provided. I always keep a backup before deletion, as you have mentioned, but I wonder if there is any tool or command which can help to recover a deleted file.

Thanks




Kapil Hari Paranjape [kapil at imsc.res.in]


Thu, 16 Oct 2008 21:51:06 +0530

Hello,

On Thu, 16 Oct 2008, J.Bakshi wrote:

> And this raised a question in my mind: what do I do if at
> some point I need a file/folder which was deleted a few
> minutes ago?

The command-line trashcan utility "trash-cli" entered Debian lenny a few days ago.

Another utility is "libtrash", which implements a trashcan-like feature by preloading functions that save "unlink"ed files.
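
For example (a hedged sketch; the exact command names and the library path may differ between versions):

# trash-cli:
trash-put somefile.txt    # move the file to the trash instead of deleting it
trash-list                # show what is currently in the trash
trash-empty 30            # purge trashed items older than 30 days

# libtrash: preload it so that unlink() calls divert files to a trash directory
LD_PRELOAD=/usr/lib/libtrash.so rm somefile.txt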

Kapil.




Jimmy O'Regan [joregan at gmail.com]


Thu, 16 Oct 2008 19:18:02 +0100

2008/10/16 J.Bakshi <j.bakshi@icmail.net>:

> Dear list,
>
> I generally work at the Linux terminal. Frequently I have to use "rm" to
> delete files/folders which are not required any more, and this raised a
> question in my mind: what do I do if at some point I need a file/folder
> which was deleted a few minutes ago? I have googled a lot, but have not
> found any open source tools which can recover the deleted ones. testdisk
> exists, but it is something entirely different. There are some commercial
> GUI-based tools available, but I am looking for a CLI tool or technique
> which can actually recover a deleted file/folder.

You don't want testdisk, you want the author's other tool - PhotoRec (https://www.cgsecurity.org/wiki/PhotoRec). It scans for deleted files based on their signatures, and by default recovers all deleted files on a disk image.

I've used it a few times, and it works quite well.
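
A typical invocation (just a sketch; PhotoRec is menu-driven once started, and the paths here are made up) might be:

# /log writes a photorec.log; /d names the directory for recovered files
photorec /log /d /mnt/recovered/ disk.img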




Karl-Heinz Herrmann [kh1 at khherrmann.de]


Thu, 16 Oct 2008 22:17:31 +0200

On Thu, 16 Oct 2008 17:12:30 +0200 René Pfeiffer <lynx@luchs.at> wrote:

> On Oct 16, 2008 at 2024 +0530, J.Bakshi appeared and said:
> > I have googled a lot, but have not found any open
> > source tools which can recover the deleted ones. testdisk exists,
> > but it is something entirely different.
> Unfortunately the common filesystems do not support undeletion. Ext2
> does to some extent; Ext3, XFS and ReiserFS do not. I am not sure

Hm... isn't it possible to unmount the ext3 and after that treat it as ext2 until files are recovered? Once unmounted the transaction journal should not be changing anymore. After recovery the journal could be reactivated.
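
In other words, something like this (the device name is only an example; mounting read-only keeps the recovery work from modifying anything):

umount /dev/sdb1
mount -t ext2 -o ro /dev/sdb1 /mnt/recovery   # journal is left untouched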

> about JFS. I dug up some references, but both are far from being
> GUI solutions.

Another one for reiserfs:
https://antrix.net/journal/techtalk/reiserfs_data_recovery_howto.comments?parent=74-1&title=Re:%20Re:%20ReiserFS%20undelete/data%20recovery%20HOWTO

> The best GUI solution to this problem is implemented by most window
> managers - it's the trash can / recycle bin folder. This sounds

Hm... there is also the CLI version:

alias rm="rm -i"

or as also stated in: https://forums.macosxhints.com/archive/index.php/t-9123.html

del () { /bin/mv -i ${*} ~/.Trash; }

and use del (the reason why this can't be achieved with an alias in bash is also given at that URL).
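
One caveat: the unquoted ${*} breaks on filenames containing spaces, so a safer variant (untested sketch) would be:

del () { /bin/mv -i -- "$@" ~/.Trash; }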

> strange, but considering the fact that most modern filesystems are
> designed without an undeletion feature, copying the data first and

I've found that even some GUIs tend to create trash folders per device/filesystem so they do not have to copy all the time to ~/home/kde/trash (or wherever), but can simply move the files inside the filesystem boundaries. That way the mv is almost as quick as the rm.

K.-h.




René Pfeiffer [lynx at luchs.at]


Thu, 16 Oct 2008 22:46:20 +0200

On Oct 16, 2008 at 2217 +0200, Karl-Heinz Herrmann appeared and said:

> On Thu, 16 Oct 2008 17:12:30 +0200
> René Pfeiffer <lynx@luchs.at> wrote:
> [...]
> > Unfortunately the common filesystems do not support undeletion. Ext2
> > does to some extent; Ext3, XFS and ReiserFS do not. I am not sure
>
> Hm... isn't it possible to unmount the ext3 and after that treat it as
> ext2 until files are recovered? Once unmounted the transaction journal
> should not be changing anymore. After recovery the journal could be
> reactivated.

Yes, but this means that the filesystem will have to be modified. In an ideal world you don't modify the filesystem when doing recovery. I'm no expert at data recovery, so I can't tell what difference it makes and whether mounting ext3 as ext2 will improve or worsen your chances.

> [...]
> I've found that even some GUIs tend to create trash folders per
> device/filesystem so they do not have to copy all the time to
> ~/home/kde/trash (or wherever), but can simply move the files inside
> the filesystem boundaries. That way the mv is almost as quick as the
> rm.

This is a smart trick and is often overlooked.

Best, René.




Jimmy O'Regan [joregan at gmail.com]


Thu, 16 Oct 2008 22:15:32 +0100

2008/10/16 René Pfeiffer <lynx@luchs.at>:

> On Oct 16, 2008 at 2217 +0200, Karl-Heinz Herrmann appeared and said:
>> On Thu, 16 Oct 2008 17:12:30 +0200
>> René Pfeiffer <lynx@luchs.at> wrote:
>> [...]
>> > Unfortunately the common filesystems do not support undeletion. Ext2
>> > does to some extent; Ext3, XFS and ReiserFS do not. I am not sure
>>
>> Hm... isn't it possible to unmount the ext3 and after that treat it as
>> ext2 until files are recovered? Once unmounted the transaction journal
>> should not be changing anymore. After recovery the journal could be
>> reactivated.
>
> Yes, but this means that the filesystem will have to be modified. In an
> ideal world you don't modify the filesystem when doing recovery. I'm no
> expert at data recovery, so I can't tell what difference it makes and
> whether mounting ext3 as ext2 will improve or worsen your chances.
>

Generally - where possible - it's a better idea to use dd to make an image of the disk elsewhere and use that instead. If that's not possible, you could use qemu and COW images to ensure that any changes made are isolated.
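
For example (a sketch; device and paths are hypothetical):

# Image the whole disk, padding over read errors instead of aborting:
dd if=/dev/sdb of=/mnt/backup/sdb.img bs=4M conv=noerror,sync

# Or wrap the image in a copy-on-write layer so that recovery experiments
# never touch the original image (qemu-img syntax of the time; newer
# versions also want the backing format spelled out):
qemu-img create -f qcow2 -b /mnt/backup/sdb.img sdb-work.qcow2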




Paul Sephton [paul at inet.co.za]


Thu, 16 Oct 2008 23:28:54 +0200

On Thu, 2008-10-16 at 22:46 +0200, René Pfeiffer wrote:

> On Oct 16, 2008 at 2217 +0200, Karl-Heinz Herrmann appeared and said:
> > On Thu, 16 Oct 2008 17:12:30 +0200
> > René Pfeiffer <lynx@luchs.at> wrote:
> > [...]
> > > Unfortunately the common filesystems do not support undeletion. Ext2
> > > does to some extent; Ext3, XFS and ReiserFS do not. I am not sure
> > 
> > Hm... isn't it possible to unmount the ext3 and after that treat it as
> > ext2 until files are recovered? Once unmounted the transaction journal
> > should not be changing anymore. After recovery the journal could be
> > reactivated. 
> 
> Yes, but this means that the filesystem will have to be modified. In an
> ideal world you don't modify the filesystem when doing recovery. I'm no
> expert at data recovery, so I can't tell what difference it makes and
> whether mounting ext3 as ext2 will improve or worsen your chances.
> 
> > [...]
> > I've found that even some GUIs tend to create trash folders per
> > device/filesystem so they do not have to copy all the time to
> > ~/home/kde/trash (or wherever), but can simply move the files inside
> > the filesystem boundaries. That way the mv is almost as quick as the
> > rm.
> 
> This is a smart trick and is often overlooked.

Indeed it is. Some people go as far as to move the 'rm' command away and alias 'rm' to a 'mv' into a trash folder.

Something not to be discarded [no pun intended] is the fact that Unix filesystems allow one file to be hard-linked to another file. This simply creates two links to the same inode (or physical file descriptor) and increments a reference counter. The file will only be ultimately unlinked and returned to free space once that reference count drops to zero. Multiple links to the same file take only as much space as the file name in the directory file, so it's a very efficient way to protect files against accidental deletion.

Effectively, create a subdirectory and go "cd mydir; ln /pathto/protecteddir/* ."
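
You can watch the reference counting in action (a sketch; the protected directory must be on the same filesystem):

ln important.conf /safe/important.conf
stat -c 'link count: %h' important.conf   # count is now 2
rm important.conf                         # data stays reachable via /safe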

Take a look at the GNU options for ln as well; it allows one to do versioned backups of files instead of overwriting everything in the target directory. So

  cd targetdir; ln -f --backup=numbered /pathto/sourcedir/* .

in a cron script would maintain a set of versioned backups of your source directory. Of course, hard links cannot cross filesystem boundaries, but I believe some of the backup options apply to other GNU command line utilities as well as 'ln'.
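
As a concrete (hypothetical) crontab entry, assuming both directories live on the same filesystem:

# Refresh the numbered-backup links nightly at 03:00; -f forces the link,
# so --backup=numbered renames old targets to target.~1~, target.~2~, ...
0 3 * * * cd /home/user/archive && ln -f --backup=numbered /home/user/docs/* .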




Ben Okopnik [ben at linuxgazette.net]


Thu, 16 Oct 2008 18:52:27 -0400

On Thu, Oct 16, 2008 at 10:17:31PM +0200, Karl-Heinz Herrmann wrote:

> 
> Hm... there is also the CLI version:
> alias rm="rm -i"

That would be the scripted version of "if you're not sure you should delete it, then don't." Frankly, I have a certain stubborn mindset when it comes to this: if you don't practice good data hygiene (i.e., backing it up, really thinking about whether you're ever likely to need that file again - not that I'm perfect at it myself...), then you deserve to suffer an occasional sharp pain for not doing so.

  Be wary of systems that degrade gracefully, for unless they inflict some
  pain in an attempt to right their hurt, they will tend to always operate
  in a degraded state.
   -- Kent Borg
> or  as also stated in:
> https://forums.macosxhints.com/archive/index.php/t-9123.html
> 
> del () { /bin/mv -i ${*} ~/.Trash; }
> 
> and use del (reason why this can't be achieved with alias in bash is
> also given in the url).

Even if we ignore my first point, above, there are still significant problems with this approach. It does not (easily) handle identical filenames - in fact, it will give a highly confusing message in that case; it also allows ostensibly deleted files to keep taking up disk space forever. It would take a bit more than just a simple function to handle that.

Here's something that should address both issues (I have not done any significant testing on this, and would certainly welcome comments):

#!/bin/bash
# Created by Ben Okopnik on Thu Oct 16 18:17:50 EDT 2008
 
[ -z "$1" ] && { printf "Usage: ${0##*/} <file_to_safely_delete>\n"; exit; }
 
################ User-defined variables ###########################
savedir=${HOME}/.Trash	# Directory in which files are backed up  #
keeptime=30				# Delete backups after this many days     #
###################################################################
 
# Create $savedir if it doesn't exist
[ -d "$savedir" ] || mkdir "$savedir"
 
# Generate unique, time-stamped filename
savename="${1##*/}:`/bin/date '+%s%N'`"
 
# Delete file only if hardlink creation succeeded
/bin/ln "$1" "$savedir/$savename" && /bin/rm "$1"
 
# Delete any files in $savedir that are older than $keeptime
/usr/bin/find $savedir -mtime +$keeptime -delete
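
Saved as, say, 'del' (the name is arbitrary) and made executable, it would be used like this:

chmod +x ~/bin/del
del scratch-notes.txt    # hardlinked into ~/.Trash, then removed from here
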
-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *




Thomas Adam [thomas.adam22 at gmail.com]


Fri, 17 Oct 2008 00:03:31 +0100

2008/10/16 Ben Okopnik <ben@linuxgazette.net>:

> # Delete any files in $savedir that are older than $keeptime
> /usr/bin/find $savedir -mtime +$keeptime -delete

At least make this portable:

find $savedir -mtime +$keeptime -print0 | xargs -0 rm -fr

It moaned on my Solaris system with good reason.

-- Thomas Adam




Francis Daly [francis at daoine.org]


Fri, 17 Oct 2008 00:58:07 +0100

On Thu, Oct 16, 2008 at 06:52:27PM -0400, Ben Okopnik wrote:

> Here's something that should address both issues (I have not done any
> significant testing on this, and would certainly welcome comments):

Even minor nitpicky ones? :-)

> #!/bin/bash
> # Created by Ben Okopnik on Thu Oct 16 18:17:50 EDT 2008
> 
> [ -z "$1" ] && { printf "Usage: ${0##*/} <file_to_safely_delete>\n"; exit; }
> 
> ################ User-defined variables ###########################
> savedir=${HOME}/.Trash	# Directory in which files are backed up  #
> keeptime=30				# Delete backups after this many days     #
> ###################################################################
> 
> # Create $savedir if it doesn't exist
> [ -d "$savedir" ] || mkdir "$savedir"

I tend to "mkdir -p". Presumably they have similar failure modes if $savedir exists as a non-directory. Possibly I'm sacrificing pre-POSIX portability for less typing.

> # Generate unique, time-stamped filename
> savename="${1##*/}:`/bin/date '+%s%N'`"
> 
> # Delete file only if hardlink creation succeeded
> /bin/ln "$1" "$savedir/$savename" && /bin/rm "$1"

I'm not sure I see the difference between this and just "mv" -- apart from the "cross-filesystem" thing. Maybe that's exactly it. Can't use this script to delete from a non-$HOME filesystem (on typical implementations).

> # Delete any files in $savedir that are older than $keeptime
> /usr/bin/find $savedir -mtime +$keeptime -delete

ctime is more likely to be useful than mtime here, I think.

Even more nitpicky responses welcome, of course.

f

-- 
Francis Daly        francis@daoine.org




Ben Okopnik [ben at linuxgazette.net]


Fri, 17 Oct 2008 08:58:09 -0400

On Fri, Oct 17, 2008 at 12:03:31AM +0100, Thomas Adam wrote:

> 2008/10/16 Ben Okopnik <ben@linuxgazette.net>:
> > # Delete any files in $savedir that are older than $keeptime
> > /usr/bin/find $savedir -mtime +$keeptime -delete
> 
> At least make this portable:
> 
> find $savedir -mtime +$keeptime -print0 | xargs -0 rm -fr

Reasonable, and smart. Thanks, Thomas!

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *




Ben Okopnik [ben at linuxgazette.net]


Fri, 17 Oct 2008 09:06:32 -0400

On Fri, Oct 17, 2008 at 12:58:07AM +0100, Francis Daly wrote:

> On Thu, Oct 16, 2008 at 06:52:27PM -0400, Ben Okopnik wrote:
> 
> > Here's something that should address both issues (I have not done any
> > significant testing on this, and would certainly welcome comments):
> 
> Even minor nitpicky ones? :-)

Sure. As long as they're at least interesting. :)

> > #!/bin/bash
> > # Created by Ben Okopnik on Thu Oct 16 18:17:50 EDT 2008
> > 
> > [ -z "$1" ] && { printf "Usage: ${0##*/} <file_to_safely_delete>\n"; exit; }
> > 
> > ################ User-defined variables ###########################
> > savedir=${HOME}/.Trash	# Directory in which files are backed up  #
> > keeptime=30				# Delete backups after this many days     #
> > ###################################################################
> > 
> > # Create $savedir if it doesn't exist
> > [ -d "$savedir" ] || mkdir "$savedir"
> 
> I tend to "mkdir -p". Presumably they have similar failure modes if
> $savedir exists as a non-directory. Possibly I'm sacrificing pre-POSIX
> portability for less typing.

[blink] In what case, pray tell, would ${HOME} not exist? I suppose that '-p' would do no harm - frankly, using it in scripts is my own first inclination - but I couldn't see any situation in which it would be applicable. Anyone who actually edits the script and changes the savedir to something bizarre would also theoretically know enough to create it - or would suffer the consequences.

> > # Generate unique, time-stamped filename
> > savename="${1##*/}:`/bin/date '+%s%N'`"
> > 
> > # Delete file only if hardlink creation succeeded
> > /bin/ln "$1" "$savedir/$savename" && /bin/rm "$1"
> 
> I'm not sure I see the difference between this and just "mv" -- apart
> from the "cross-filesystem" thing. Maybe that's exactly it. 

Try it on, say, a 10GB file. You'll see the difference immediately. (Hint: it takes a while to "mv" something that size.)

> Can't use
> this script to delete from a non-$HOME filesystem (on typical
> implementations).

You have a point; copy-and-delete would cover more stuff, including other filesystems. That's a pretty easy change.

> > # Delete any files in $savedir that are older than $keeptime
> > /usr/bin/find $savedir -mtime +$keeptime -delete
> 
> ctime is more likely to be useful than mtime here, I think.

I'd considered it when I wrote it, but still can't see a reason that it would be more useful. Reasoning, please?

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *




Francis Daly [francis at daoine.org]


Fri, 17 Oct 2008 17:02:03 +0100

On Fri, Oct 17, 2008 at 09:06:32AM -0400, Ben Okopnik wrote:

> On Fri, Oct 17, 2008 at 12:58:07AM +0100, Francis Daly wrote:
> > On Thu, Oct 16, 2008 at 06:52:27PM -0400, Ben Okopnik wrote:
> > > ################ User-defined variables ###########################
> > > savedir=${HOME}/.Trash	# Directory in which files are backed up  #
> > > keeptime=30				# Delete backups after this many days     #
> > > ###################################################################
> > > 
> > > # Create $savedir if it doesn't exist
> > > [ -d "$savedir" ] || mkdir "$savedir"
> > 
> > I tend to "mkdir -p". Presumably they have similar failure modes if
> > $savedir exists as a non-directory. Possibly I'm sacrificing pre-POSIX
> > portability for less typing.
> 
> [blink] In what case, pray tell, would ${HOME} not exist? I suppose
> that '-p' would do no harm - frankly, using it in scripts is my own
> first inclination - but I couldn't see any situation in which it would
> be applicable. Anyone who actually edits the script and changes the
> savedir to something bizarre would also theoretically know enough to
> create it - or would suffer the consequences.

You're right.

$HOME will exist; but I was thinking more about the initial [ -d ] test being unnecessary.

However, most of the time this program runs, $savedir is a directory, so "-d" is the quickest way to learn that and carry on.

For the one time when $savedir doesn't exist, "-d || mkdir" and "mkdir -p" are irrelevantly close.

And for the really odd one time when $savedir is a file or link, the errors reported are close enough too.

So yes, your [blink] was well-founded.

> > > # Delete file only if hardlink creation succeeded
> > > /bin/ln "$1" "$savedir/$savename" && /bin/rm "$1"
> > 
> > I'm not sure I see the difference between this and just "mv" -- apart
> > from the "cross-filesystem" thing. Maybe that's exactly it. 
> 
> Try it on a, say, 10GB file. You'll see the difference immediately.
> (Hint: it takes a while to "mv" something that size.)

Nope; looks superfast to me for each of them.

"copying" between /var/tmp and /usr/local, both on the same ext3 filesystem on top of an lvm2 setup.

Unless the system is returning control to me before the file is actually written, of course.

But I thought that "mv" within a filesystem was "add entry to new dir, remove entry from old dir" without needing to copy any data at all, which seems to match what I see.

> > Can't use
> > this script to delete from a non-$HOME filesystem (on typical
> > implementations).
> 
> You have a point; copy-and-delete would cover more stuff, including
> other filesystems. That's a pretty easy change.

It would be -- but then it becomes slower than mv. (Same speed across filesystems, slower when on the same one.)

The earlier suggestion of "one .Trash per filesystem" would be good, if there were an easy way of identifying the right .Trash to use within the script.
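
A rough way to locate it from within a script might be (untested sketch; the .Trash-$USER naming is just one convention, and mount points containing spaces would break the awk):

# Find the mount point of the filesystem holding "$1", then trash within
# that filesystem so the mv stays a cheap rename:
mnt=$(df -P -- "$1" | awk 'NR==2 {print $NF}')
trash="$mnt/.Trash-$USER"
mkdir -p "$trash" && mv -- "$1" "$trash/"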

> > > # Delete any files in $savedir that are older than $keeptime
> > > /usr/bin/find $savedir -mtime +$keeptime -delete
> > 
> > ctime is more likely to be useful than mtime here, I think.
> 
> I'd considered it when I wrote it, but still can't see a reason that it
> would be more useful. Reasoning, please?

ln (and mv, on the same filesystem) don't change mtime, but do change ctime.

So if the aim is "delete this file, and really delete it 30 days from now", you need ctime. If the aim is "delete this file, and really delete it if it was (apparently) modified 30 days ago", you need mtime.
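
GNU stat shows this directly (a quick sketch):

stat -c 'mtime: %y   ctime: %z' somefile
ln somefile somefile.saved    # hard link: mtime stays put, ctime is updated
stat -c 'mtime: %y   ctime: %z' somefile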

It might be due to a different interpretation of the spec.

But overall, I agree with your original assessment. Shortly after you hit control-C following an unintentional "rm * .bak", you'll respect the command that little bit more, and will have a better chance of not messing up when you're on a new machine that doesn't come with a safety net by default.

Cheers,

f

-- 
Francis Daly        francis@daoine.org




J.Bakshi [bakshi12 at gmail.com]


Sun, 19 Oct 2008 12:05:58 +0530

[[[ I've taken the liberty of snipping the quoted text, which was more than 100 lines long. -- Kat ]]]

Hello,

I am really grateful to you all for the knowledge you have shared, the links you have given, and the scripts you have designed as possible solutions. Thank you all.

with best regards.




Ben Okopnik [ben at linuxgazette.net]


Sun, 19 Oct 2008 09:05:17 -0400

On Sun, Oct 19, 2008 at 12:05:58PM +0530, J.Bakshi wrote:

> 
> Hello,
> 
> I am really grateful to you all. The knowledge you have shared, the links you 
> have given and the scripts you have designed as the possible solutions. I am 
> really thankful to all of you.

[ J. Bakshi: I'd really appreciate it if you'd clip the email you're replying to and leave only the necessary content - just as I have, here. ]

This is also a good place to incorporate all the suggested script changes:

#!/bin/bash
# Created by Ben Okopnik on Thu Oct 16 18:17:50 EDT 2008
# Thanks to Thomas Adam and Francis Daly for their good input!
 
[ -z "$1" ] && { printf "Usage: ${0##*/} <file_to_safely_delete>\n"; exit; }
 
################ User-defined variables ###########################
savedir=${HOME}/.Trash  # Directory in which files are backed up  #
keeptime=30             # Delete backups after this many days     #
###################################################################
 
# Create $savedir if it doesn't exist
[ -d "$savedir" ] || mkdir "$savedir"
 
# Generate unique, time-stamped filename
savename="${1##*/}:`/bin/date '+%s%N'`"
 
# Move the file to $savedir
/bin/mv "$1" "$savedir/$savename"
 
# Delete any files in $savedir that are older than $keeptime
/usr/bin/find "$savedir" -ctime +$keeptime -print0|/usr/bin/xargs -0 rm

You could create a function to do the above, but I'm not much of a fan of functions when it comes to anything that you may want to run remotely (e.g., via 'ssh').

-- 
* Ben Okopnik * Editor-in-Chief, Linux Gazette * https://LinuxGazette.NET *




J.Bakshi [bakshi12 at gmail.com]


Tue, 21 Oct 2008 20:20:10 +0530

On Sunday 19 Oct 2008 6:35:17 pm Ben Okopnik wrote:

> [ J. Bakshi: I'd really appreciate it if you'd clip the email you're
> replying to and leave only the necessary content - just as I have, here. ]

I'll definitely follow it; thanks for the script.

