I'm in the process of attempting to convert all my WAV files to FLAC files in such a way that my music directory for FLACs is identical to my music directory for WAVs.
At the moment I have my music archive set up, such that a typical album is here:
/directory1/directory2/directory3/Music/WAV/Artist/Album
So I would like a one-to-one correspondence for my FLAC files that looks as follows:
/directory1/directory2/directory3/Music/FLAC/Artist/Album.
I know that I will have to use find to list all the directories/subdirectories as follows:
find . -type d -exec ./commands.sh {} \;
But how do I write the commands.sh file such that it will grab the Artist/Album part of the path in the WAV directory, mkdir the same /Artist/Album in the FLAC directory, and then output the flacs to the FLAC/Artist/Album directory?
I know the command for converting flacs to an output directory of your choice is:
flac -5 --output-prefix="/desired/output/path/" *.wav
So I guess I'm just having trouble with grabbing/recreating the file paths!
This would be a whole lot easier in a scripting language like Ruby, Perl, or Python. Something like this is a fairly straightforward starter project in any of those languages, and their libraries for directory traversal and path manipulation make it all pretty easy.
However, there are two POSIX utilities that can help with splitting apart pathnames: dirname and basename. I think those two plus sed should let you do what you want.
Note that find prints paths relative to the starting point you give it, and -exec runs the command from the directory where you invoked find (use -execdir if you want it to run from the matched directory instead); if this is not what you want, look in the man pages. You can also keep find from descending into subdirectories with options such as -maxdepth or -prune.
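Putting those pieces together, here is a minimal sketch of such a commands.sh. It assumes you run find from inside the Music/WAV directory, that the FLAC tree is the sibling directory shown in the question, and that your flac build supports --output-prefix; adjust as needed.
#!/bin/sh
# commands.sh -- invoked once per directory (make it executable first), e.g.:
#   cd /directory1/directory2/directory3/Music/WAV
#   find . -type d -exec ./commands.sh {} \;
wav_root=$(pwd)                                  # .../Music/WAV
flac_root=$(dirname "$wav_root")/FLAC            # .../Music/FLAC (assumed layout)
rel_dir=$(printf '%s\n' "$1" | sed 's|^\./||')   # e.g. Artist/Album
cd "$wav_root/$rel_dir" || exit 1
ls *.wav >/dev/null 2>&1 || exit 0               # skip directories with no WAV files
# Recreate Artist/Album under the FLAC root, then encode into it.
mkdir -p "$flac_root/$rel_dir"
flac -5 --output-prefix="$flac_root/$rel_dir/" *.wav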
I find myself in a situation similar to this question:
Linux: Overwrite all files in folder with specified data?
The answers there work nicely, however, they are for typed-out text. Allow me to provide context.
I have a Linux system with the following file structure (files & folders irrelevant to the question removed):
root/
empty.svg
svg/
257238.svg
297522.svg
a7yf872.svg
236y27fh.svg
38277.svg
... (~200 other .svg files with arbitrary names)
2903852.svg
The framework I am working with requires those .svg files to exist with those specific filenames, but obviously it does not care about the SVG images they contain. I do not plan on using these files, and they take up a hefty amount of space on disk, so I wish to turn them all into empty SVGs, i.e. copies of the empty.svg file in my root directory, which is a 12x12 transparent SVG file (124 bytes). This way the framework shouldn't error out like it did when I simply overwrote the raw data of those SVGs with plain text using the answer to the question linked above. I've tried to be creative with my basic Linux command-line knowledge, but with no success. How do I accomplish this?
TL;DR: How to recursively overwrite all files in a folder with the raw data of another file from Linux CLI?
Similar to the linked answer, you can use the tee command, but instead of echo use cat, the command that reads a file's contents.
cat empty.svg | tee svg/257238.svg svg/297522.svg <etc>
But if there are a lot of files in the svg directory, it is easier to use a loop to automate the previous command:
for f in svg/*; do
if [[ "$f" == *.svg ]]; then
cat empty.svg > "$f"
fi
done
The first example uses a pipe to connect the commands; the loop simply redirects cat's output into each file, replacing its contents with those of empty.svg while keeping the filename.
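If the .svg files are nested in subdirectories (the TL;DR asks for a recursive overwrite, while the glob above only covers the top level of svg/), a find-based sketch of the same idea is:
# Overwrite every .svg under svg/, however deeply nested, with the contents of empty.svg.
find svg -type f -name '*.svg' -exec cp -f empty.svg {} \;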
I used the following:
gzip -9 -c -r <some_directory> > directory.gz
How do I decompress this directory?
I have tried
gunzip directory.gz
but I am just left with a single file and not a directory structure.
As others have already mentioned, gzip is a file compression tool and not an archival tool. It cannot work with directories. When you run it with -r, it will find all files in a directory hierarchy and compress them, i.e. replacing path/to/file with path/to/file.gz. When you pass -c the gzip output is written to stdout instead of creating files. You have effectively created one big file which contains several gzip-compressed files.
Now, you could look for the gzip file header/magic number, which is 1f8b and then reconstruct your files manually.
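As a rough sketch of that, assuming GNU grep and a shell with $'...' quoting, this prints the byte offsets of candidate gzip headers (1f 8b plus the usual deflate method byte 08) inside the blob; treat the offsets as a starting point, not a guarantee:
# -a treats the binary file as text, -o prints only the match, -b prefixes its byte offset.
grep -aob $'\x1f\x8b\x08' directory.gz | cut -d: -f1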
The sensible thing to do now is to create backups (if you haven't already). Backups always help (especially with problems such as yours). Create a backup of your directory.gz file now. Then read on.
Fortunately, there's an easier way than manually reconstructing all files: using binwalk, a forensics utility which can be used to extract files from within other files. I tried it with a test file, which was created the same way as yours. Running binwalk -e file.gz will create a folder with all extracted files. It even manages to reconstruct the original file names. The hierarchy of the directories is probably lost. But at least you have your file contents and their names back. Good luck!
Remember: backups are essential.
(For completeness' sake: What you probably intended to run: tar czf directory.tar.gz directory and then tar xf directory.tar.gz)
gzip will compress one or more files, but it is not meant to function as an archive utility. The posted command line yields N compressed file images concatenated to stdout and redirected to the named output file; unfortunately, metadata such as the directory layout is not recorded. A pair of commands like this should work instead:
(create)
tar -czvf dir.tar.gz <some-dir>
(extract)
tar -xzvf dir.tar.gz
Now, I get the feeling that some people will say that a hard link has no original file, but I would strongly disagree because of the following experiment I did.
Let's create a file with the content pwd and make a hard link to a subfolder:
echo "pwd" > original
mkdir subfolder
cp -l original subfolder/hardlink
Now let's see what the files output when I run them with a shell:
sh original
sh subfolder/hardlink
The output is the same, even though the file hardlink is in a subfolder!
Sorry for the long intro, but I wanted to make sure that nobody says that my following question is irrelevant.
So my question now is: If the content of the original file was not conveniently pwd, how do I find out the path to the original file from a hard link file?
I know that Linux programs seem to know the path somehow, but not the filename, because some programs returned error messages saying that <path to original file>/hardlinkname was not found. But how do they do that?
Thanks in advance for an answer!
Edit: Btw, I fixed the error messages mentioned above by naming the hard links the same as the original file.
But how do they do that?
By looking for the same inode value. Here's one way you can list files with the same inode:
find /home -xdev -samefile original
Replace /home with any other directory for find to start searching from.
how do I find out the path to the original file from a hard link file?
For hard links there are no multiple files, just one file (inode) with multiple (file) names.
ADDENDUM:
is there no other way to find the hard links of an inode than searching through folders?
ln, ls, find, and stat are the common ways of discovering and querying the filesystem for inodes. Then, depending on what you want to accomplish next, many file, directory, archiving, and searching commands recognize inode values; some may require a special option such as -inum or --follow to specify inodes.
The find example I gave above is just one such usage. Another is to combine with xargs to operate on all the found files. Here's one way to delete them all:
find /home -xdev -samefile original | xargs rm
Look under --help for other standard OS commands. Most Linux distributions also come with documentation that explains inodes and which tools work with them.
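As a small sketch of querying by inode number directly (assuming GNU coreutils stat and find's -inum test):
# Look up the inode number of the file, then list every name that references it.
ino=$(stat -c %i original)
find /home -xdev -inum "$ino" -print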
pwd prints the present working directory, so of course the output is the same, since you didn't cd into your subfolder before running it.
Sorry to say, but there is no "original" file once you create other hard links. If you want to find the other hard links of a file, see How to find all hard links to a given file? for example.
Agreed with @Emacs User: your example with pwd is irrelevant and only confused you.
There is no concept of an original file for hard links. The file names are just references to the content on disk pointed to by the inode, which keeps a count of how many names link to it (see ls -li original subfolder/hardlink). So even if you delete the original file, the hard link still points to the same content.
It is impossible to find out as all hard links are treated the same way pointing to one inode.
I want to rsync a directory from a Mac machine to a Linux server while excluding compiled files like .o files and binary executables. How do I exclude binary files?
What I am using at the moment:
rsync -av --compress --exclude="*.o" dir server:dir
This is a sticky problem because a Unix system does not have a hard and fast definition of the distinction between "binary" and "text" files. You can do a pretty good job by using the file command and searching for text in the output (see How to tell binary from text files in linux), so I'd run find to generate a list of files which file considers to be text, and use that as the list of files to rsync:
find dir | xargs file | awk -F: '$2 ~ /text/ { print $1 }' | \
rsync --files-from=- -av --compress dir server:dir
This will require some tweaking to make sure the pathnames are correct relative to the source dir, and so on, but it should get close to what you want.
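One way to do that tweaking is sketched below; it keeps the same text-detection heuristic and assumes the dir and server:dir names from the question, plus filenames without colons or newlines:
# The listed names are relative to the source argument (dir), as --files-from expects.
( cd dir && find . -type f -exec file {} + ) \
  | awk -F: '$2 ~ /text/ { sub(/^\.\//, "", $1); print $1 }' \
  | rsync --files-from=- -av --compress dir server:dir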
In the long term, I'd want to rework my build process to put generated files in a dir/build directory, but this might help for now :-)
You can add a .cvsignore file in the directories and use the option -C to rsync.
But this is only vaguely what you specified. Maybe it suits you well, maybe it makes assumptions you don't want. So be careful and test it properly.
Also, you can run a find before the rsync, scanning the complete tree for files matching your idea of "binary" (maybe compiled executables?), and place all their names in an exclude file which you then use with the --exclude-from option.
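A sketch of that idea, using file to flag likely binaries; the /tmp/binary-excludes name and the ELF|executable match are assumptions you would adjust to your own notion of "binary":
# List files that file(1) reports as ELF objects or executables, relative to dir.
( cd dir && find . -type f -exec file {} + ) \
  | awk -F: '$2 ~ /ELF|executable/ { sub(/^\.\//, "", $1); print $1 }' \
  > /tmp/binary-excludes
# The (unanchored) listed paths are skipped in addition to the *.o exclusion.
rsync -av --compress --exclude="*.o" --exclude-from=/tmp/binary-excludes dir server:dir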
Presently I am using Linux (Fedora 15) and I am trying to search for a folder in the entire file system with the command below:
find / -name "apache-tomcat*"
The execution of the above command takes more time than a user can wait, and the results are something like below:
[root#user fedrik]# find / -name "apache-tomcat*"
find: `/proc/6236/task/6236/ns/net': No such file or directory
find: `/proc/6236/task/6236/ns/uts': No such file or directory
find: `/proc/6236/task/6236/ns/ipc': No such file or directory
find: `/proc/6236/ns/net': No such file or directory
find: `/proc/6236/ns/uts': No such file or directory
find: `/proc/6236/ns/ipc': No such file or directory
find: `/proc/6462/task/6462/ns/net': No such file or directory
.................
.................
But as I have mentioned, it takes a long time to process and sometimes it gets stuck. Can anyone please let me know how to search for a particular folder by name, across the entire file system (as above, where I used '/'), with a command from the Linux terminal that is very fast?
Edit
Actually my intention is to search for a folder named something like apache-tomcat-7.0.37 in the entire filesystem;
for example there may be many folders like apache-tomcat-6.0.45, apache-tomcat-5.1.7, apache-tomcat-5.0.37........... in different locations on the filesystem.
So as we can observe, only the last (numerical) part changes while the rest of the folder name stays the same. Is there a way to search for these kinds of folders irrespective of the last numerical part, for example by using a regular expression or something like that?
Finally, my intention is to find the folders of the format apache-tomcat-xxxxxxx on the entire file system, because if we search for just apache-tomcat we will get hundreds or sometimes even thousands of results, which are difficult to analyze and search through.
Try this:
locate apache-tomcat
It uses a database (updated by the hilariously-named updatedb, which you can run with sudo updatedb to refresh the search index).
locate apache-tomcat | grep -E '/apache-tomcat-[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+$'
or just use [0-9] instead of [[:digit:]]. That's probably more readable. Or
locate apache-tomcat | perl -ne 'print if m{/apache-tomcat-\d+\.\d+\.\d+$}'
Whatever you do, you definitely want to use locate instead of find, as it will be much faster.
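If the locate database is stale or updatedb has not been run yet, a slower find fallback along the same lines (a sketch; the 2>/dev/null just hides the /proc noise shown in the question) is:
# GNU find on Fedora supports -regextype; the pattern mirrors the grep above.
find / -regextype posix-extended -type d \
  -regex '.*/apache-tomcat-[0-9]+\.[0-9]+\.[0-9]+' 2>/dev/null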