Is there a specific Unix/Linux command that would show when a file was renamed? I have a file on a Linux box and want to know when it got renamed. I tried ls -ltr, but that only shows the last modification time, not when the file was renamed.
You can use
stat filename
Check Access for when it was last read.
Check Modify for when its contents were last updated.
Check Change for when its metadata last changed; on many Linux file systems, a rename updates this.
Plus other file info like blocks, inodes, etc.
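For example, on a GNU/Linux box the output looks roughly like this (the file name and values shown here are illustrative, and the exact fields vary between coreutils versions):
$ stat filename
  File: 'filename'
  Size: 1024       Blocks: 8          IO Block: 4096   regular file
Device: 801h/2049d Inode: 1234567     Links: 1
Access: 2013-07-19 10:31:31.803842672 -0700
Modify: 2013-07-19 10:31:31.803842672 -0700
Change: 2013-07-22 09:11:49.074339525 -0700
Here the Change time is later than Modify, which is consistent with a rename (or another metadata change) after the last write.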
The behaviour is going to depend on the file system type. On classic Unix file systems (and, amongst others, the Mac OS X HFS file system in particular), the interesting thing about a file rename within a file system is that it does not affect the inode of the file at all; it only affects the directory (or directories) where the file was moved from/to. So, within a single file system, the only timestamps that change for a rename are the modify time of the directory holding the file. For example*:
$ fl -a -c -m -n xxx.pl
2013-07-17 16:17:17 2013-07-17 16:17:17 2013-07-17 16:17:17 xxx.pl
$ mv xxx.pl yyy.pl
$ fl -a -c -m -n yyy.pl
2013-07-17 16:17:17 2013-07-17 16:17:17 2013-07-17 16:17:17 yyy.pl
$
This means that, on such file systems, there is no way to tell after the fact when the file was renamed.
* fl is a home-brew command; the invocation is equivalent to the Mac OS X stat invocation stat -f '%Sa %Sm %Sc %N'. This is only marginally related to the GNU/Linux stat command.
On the other hand, empirical evidence shows that a Linux ext4 file system does record a rename, by updating the change time (ctime) of the inode itself:
$ stat -c '%x %y %z %n' test.dat
2013-07-19 10:31:31.803842672 -0700 2013-07-19 10:31:31.803842672 -0700 2013-07-19 10:31:31.803842672 -0700 test.dat
$ mv test.dat test.data
$ stat -c '%x %y %z %n' test.data
2013-07-19 10:31:31.803842672 -0700 2013-07-19 10:31:31.803842672 -0700 2013-07-22 09:11:49.074339525 -0700 test.data
$
So, what you can do depends on the O/S on which you are running and which file system type your file is on.
If you have an inotify watch in place, you can record the rename event as it happens; but if you did not record the event, there is no post-mortem way to know when the file was renamed.
The modify time on the directory is not a good indication of when a specific file was changed; it changes whenever any file in it is created, moved, or deleted. If there's only a single file in the directory, then the directory's modify time probably indicates when that file was created or renamed, but single-file directories are the exception, not the rule.
You can use the inotify library (man page) to be notified when something happens on a file.
It's likely that there are bindings to scripting languages.
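For instance, the inotify-tools package (an assumption; it provides the inotifywait command on most distributions, and the watched path here is a placeholder) can log rename events with timestamps:
$ inotifywait -m --timefmt '%F %T' --format '%T %w %e %f' \
    -e moved_from -e moved_to /path/to/dir
2013-07-22 09:11:49 /path/to/dir/ MOVED_FROM test.dat
2013-07-22 09:11:49 /path/to/dir/ MOVED_TO test.data
Remember this only helps if the watch was already running at the time of the rename.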
On AIX, you can use the equivalent of stat:
istat filename
It shows the file's last updated, modified, and accessed times down to the second, along with some more information.
I'm trying to ftp a folder using the command-line ftp client, but so far I've only been able to use 'get' to retrieve individual files.
You could rely on wget, which usually handles ftp gets properly (at least in my own experience). For example:
wget -r ftp://user:pass@server.com/
You can also use -m, which is suitable for mirroring; it is currently equivalent to -r -N -l inf --no-remove-listing.
If you have special characters in the credentials, you can pass the --user and --password arguments explicitly. Example with a login containing special characters:
wget -r --user="user@login" --password="Pa$$wo|^D" ftp://server.com/
As pointed out by @asmaier, watch out: even though -r enables recursion, it has a default maximum depth of 5:
-r
--recursive
Turn on recursive retrieving.
-l depth
--level=depth
Specify recursion maximum depth level depth. The default maximum depth is 5.
If you don't want to miss any subdirectories, better use the mirroring option, -m:
-m
--mirror
Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite
recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf
--no-remove-listing.
Just to complement the answer given by Thibaut Barrère.
I used
wget -r -nH --cut-dirs=5 -nc ftp://user:pass@server//absolute/path/to/directory
Note the double slash after the server name. Without the extra slash, the path is relative to the home directory of user.
-nH avoids creating a directory named after the server
-nc avoids creating a new file if one already exists at the destination (it is just skipped)
--cut-dirs=5 takes the content of /absolute/path/to/directory and puts it in the directory where you launch wget; the number 5 strips the 5 leading components of the path, and the double slash counts as an extra component.
ncftp -u <user> -p <pass> <server>
ncftp> mget directory
If lftp is installed on your machine, use mirror dir and you are done; mirror downloads the directory recursively.
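A minimal sketch, assuming lftp is installed and using placeholder credentials and paths:
lftp -e 'mirror /remote/dir /local/dir; quit' -u user,pass ftp.example.com
The mirror command compares the listings and downloads recursively; mirror -R pushes in the other direction.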
If you can use scp instead of ftp, the -r option will do this for you. I would check to see whether you can use a more modern file transfer mechanism than FTP.
Use wget instead. It supports the HTTP and FTP protocols.
wget -r ftp://mydomain.com/mystuff
Good Luck!
reference: http://linux.about.com/od/commands/l/blcmdl1_wget.htm
There is ncftp, which is available for installation on Linux. It works over the FTP protocol and can be used to download files and folders recursively; it has been used for recursive folder/file transfer and works fine. See http://www.ncftp.com/
If you can, I strongly suggest you tar and bzip (or gzip, whatever floats your boat) the directory on the remote machine—for a directory of any significant size, the bandwidth savings will probably be worth the time to zip/unzip.
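A rough sketch of that approach (host, user, and directory names are placeholders):
# on the remote machine
tar cjf mydir.tar.bz2 mydir
# fetch the single archive with plain ftp 'get' (or scp), then locally:
tar xjf mydir.tar.bz2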
If you want to stick to command line FTP, you should try NcFTP. Then you can use get -R to recursively get a folder. You will also get completion.
wget -r ftp://url
Works perfectly on Red Hat and Ubuntu.
You should not use ftp. Like telnet, it does not use a secure protocol, and passwords are transmitted in clear text. This makes it very easy for third parties to capture your username and password.
To copy remote directories remotely, these options are better:
rsync is the best-suited tool if you can login via ssh, because it copies only the differences, and can easily restart in the middle in case the connection breaks.
scp -r is the second-best option to recursively copy directory structures.
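For example (host and paths are placeholders; -a preserves timestamps and permissions, -z compresses in transit, and --partial lets an interrupted transfer resume):
rsync -avz --partial user@server.example.com:/remote/dir/ /local/dir/
# or, second best:
scp -r user@server.example.com:/remote/dir /local/dir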
To fetch files recursively, you can use a script like this:
https://gist.github.com/flibbertigibbet/8165881
See:
rsync man page
scp man page
Toggle interactive prompting off with the prompt command, then use mget:
ftp> cd /to/directory
ftp> prompt
ftp> mget *
Years ago I used to do:
$ od -c .
to get a dump of the current directory and show the inodes.
This no longer works... does anyone know why, or what I can do instead?
I was just demonstrating the 'beauty' of "everything is a file" to someone.
Craig
od internally calls open() and then read(); from man 2 read, the ERRORS section:
EISDIR fd refers to a directory.
That is, you cannot read bytes from a directory. Maybe some old version of Linux, or some other *nix system, allowed it, but not today's Linux.
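On a current GNU/Linux system, the failure typically looks like this (the exact wording depends on the coreutils version):
$ od -c .
od: .: read error: Is a directory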
In Ubuntu, I give these commands and obtain this output:
soujanya@LLN-Ubuntu:~/workspace/openEAR-0.1.0$ ls -l SMILExtract
-rwxr-xr-x 1 soujanya soujanya 3789876 Aug 20 2009 SMILExtract
soujanya@LLN-Ubuntu:~/workspace/openEAR-0.1.0$ whoami
soujanya
soujanya@LLN-Ubuntu:~/workspace/openEAR-0.1.0$ ./SMILExtract
bash: ./SMILExtract: No such file or directory
soujanya@LLN-Ubuntu:~/workspace/openEAR-0.1.0$
SMILExtract is an executable file (not a shell script) and I do not have access to its source code. Maybe it calls some system() or maybe not; I have no way to know.
I have heard that this error can occur when the file is 32-bit and is run on a 64-bit system, so that No such file or directory refers to the loader and not to the file itself. I think this is not the cause in my case, but anyway, my question is:
Is there a way to find out WHICH file is "No such file or directory"? Maybe a special variable in Bash, or something like that?
You can run programs with strace, a tool that shows which system calls a program makes. It will produce a lot of output, but you can see which files the program attempts to open. Run your program like this:
strace ./SMILExtract
To be sure about the 32/64-bit question, you could run file ./SMILExtract.
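For example (the output here is illustrative; exact details depend on the binary): strace shows the execve itself failing when the 32-bit loader is missing, and file reports the architecture:
$ strace ./SMILExtract
execve("./SMILExtract", ["./SMILExtract"], [/* ... */]) = -1 ENOENT (No such file or directory)
$ file ./SMILExtract
./SMILExtract: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked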
I have two questions regarding unzip.
First, I am trying to extract everything in the zip file after the first parent directory.
I am trying this and it's not working:
unzip -d /var/www/html/data/ file-gpl.zip */*
The other question: in my old file, someone has used the unzip -W flag, which I could not find in the man page. What does it do?
I tried this:
[20:59][user1#core:/var/www/html/build]$ unzip -W -d test file.zip
UnZip 6.00 of 20 April 2009, by Info-ZIP. Maintained by C. Spieler. Send
bug reports using http://www.info-zip.org/zip-bug.html; see README for details.
Usage: unzip [-Z] [-opts[modifiers]] file[.zip] [list] [-x xlist] [-d exdir]
Default action is to extract files in list, except those in xlist, to exdir;
file[.zip] may be a wildcard. -Z => ZipInfo mode ("unzip -Z" for usage).
-p extract files to pipe, no messages -l list files (short format)
-f freshen existing files, create none -t test compressed archive data
-u update files, create if necessary -z display archive comment only
-v list verbosely/show version info -T timestamp archive to latest
-x exclude files that follow (in xlist) -d extract files into exdir
modifiers:
-n never overwrite existing files -q quiet mode (-qq => quieter)
-o overwrite files WITHOUT prompting -a auto-convert any text files
-j junk paths (do not make directories) -aa treat ALL files as text
-U use escapes for all non-ASCII Unicode -UU ignore any Unicode fields
-C match filenames case-insensitively -L make (some) names lowercase
-X restore UID/GID info -V retain VMS version numbers
-K keep setuid/setgid/tacky permissions -M pipe through "more" pager
See "unzip -hh" or unzip.txt for more help. Examples:
unzip data1 -x joe => extract all files except joe from zipfile data1.zip
unzip -p foo | more => send contents of foo.zip via pipe into program more
unzip -fo foo ReadMe => quietly replace existing ReadMe if archive file newer
1. unzip -d ./new/destination example.zip
2. The description of the -W flag from my box is below:
-W [only when WILD_STOP_AT_DIR compile-time option enabled] modifies the pattern matching routine so that both '?' (single-char wildcard) and '*' (multi-char wildcard) do not match the directory separator character '/'. (The two-character sequence '**' acts as a multi-char wildcard that includes the directory separator in its matched characters.) Examples:
"*.c" matches "foo.c" but not "mydir/foo.c"
"**.c" matches both "foo.c" and "mydir/foo.c"
"*/*.c" matches "bar/foo.c" but not "baz/bar/foo.c"
"??*/*" matches "ab/foo" and "abc/foo" but not "a/foo" or "a/b/foo"
This modified behaviour is equivalent to the pattern matching style used by the shells of some of UnZip's supported target OSs (one example is Acorn RISC OS). This option may not be available on systems where the Zip archive's internal directory separator character '/' is allowed as a regular character in native operating system filenames. (Currently, UnZip uses the same pattern matching rules for both wildcard zipfile specifications and zip entry selection patterns in most ports. For systems allowing '/' as a regular filename character, the -W option would not work as expected on a wildcard zipfile specification.)
Consider two directories:
/home/user/music/flac
/media/MUSIC/flac
I would like the second directory (destination; a USB drive) to contain the same files and structure as the first directory (master). There are 3600+ files (59G in total). Every file is scanned using unison, which is painfully slow. I would rather it compare based on file name, size, and modification time.
I think rsync might be better but the examples from the man pages are rather cryptic, and Google searches did not reveal any simple, insightful examples. I would rather not accidentally erase files in the master. ;-)
The master list will change over time: directories reorganized, new files added, and existing files updated (e.g., re-tagging). Usually the changes are minor; taking hours to complete a synchronization strikes me as sub-optimal.
What is the exact command to sync the destination directory with the master?
The command should copy new files, reorganize moved files (or delete then copy), and copy changed files (based on date). The destination files should have their timestamp set to the master's timestamp.
You can use rsync this way:
rsync --delete -r -u /home/user/music/flac/* /media/MUSIC/flac
It will delete files in /media/MUSIC/flac that no longer exist on the master (it never touches the master itself), and update files based on their dates.
There are more options, but I think this way is sufficient for you. :-)
(I just did simple tests! Please test better!)
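If you also want the destination timestamps set to the master's (as the question asks), archive mode is the usual idiom; a sketch worth checking with --dry-run first, and if the USB drive is FAT-formatted, consider adding --modify-window=1 to allow for FAT's 2-second timestamp resolution:
rsync -av --delete --dry-run /home/user/music/flac/ /media/MUSIC/flac/
Note the trailing slashes: they make rsync sync the contents of the directories rather than creating an extra flac/flac level.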
You can use plain old cp to copy new & changed files (as long as your filesystems have working timestamps):
cp -dpRuv /home/user/music/flac /media/MUSIC/
To delete files from the destination that don't exist at the source, you'll need to use find. Create a script /home/user/bin/remover.sh like so:
#!/bin/bash
# Invoked by find -execdir with the file name as $1; $PWD is the file's directory.
CANONNAME="$PWD/$(basename "$1")"
# Strip the destination prefix to get the path relative to the music root.
RELPATH=$(echo "$CANONNAME" | sed -e "s#/media/MUSIC/flac/##")
SOURCENAME="/home/user/music/flac/$RELPATH"
# If the file no longer exists in the master, remove it from the destination.
if [ ! -f "$SOURCENAME" ]; then
    echo "Removing $CANONNAME"
    rm "$CANONNAME"
fi
Make it executable, then run it from find:
find /media/MUSIC/flac -type f -execdir /home/user/bin/remover.sh "{}" \;
The only thing this won't do is remove directories from the destination that have been removed from the source; if you want that too, you'll have to make a third pass with a similar find/script combination.