Surprisingly, I could not find a straightforward answer to this question here yet. I am still learning Linux. Say I have downloaded a zip file to my Downloads folder. Now I want to move it into a protected folder, like /opt or /var. Is there a good command to both sudo-move AND unzip the file to where I need it to go?
If you wish to perform two separate operations (move and extract) then you have no option but to use two commands.
However, if your end goal is to extract the zip file to a specific directory, you can leave the zip file where it is and specify an extraction directory using the -d option:
sudo unzip thefile.zip -d /opt/target_dir
From the manpage:
[-d exdir]
An optional directory to which to extract files. By default, all files and subdirectories are recreated in the current directory; the -d option allows extraction in an arbitrary directory (always assuming one has permission to write to the directory). This option need not appear at the end of the command line; it is also accepted before the zipfile specification (with the normal options), immediately after the zipfile specification, or between the file(s) and the -x option. The option and directory may be concatenated without any white space between them, but note that this may cause normal shell behavior to be suppressed. In particular, ''-d ~'' (tilde) is expanded by Unix C shells into the name of the user's home directory, but ''-d~'' is treated as a literal subdirectory ''~'' of the current directory.
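In practice the placement and spacing matter, particularly with the tilde. A minimal illustration (the file and directory names are hypothetical):
unzip thefile.zip -d ~/extracted   # the shell expands ~ to your home directory
unzip thefile.zip -d~              # extracts into a literal directory named '~' in the current directory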
sudo mv <file_name> /opt && sudo unzip /opt/<file_name> -d /opt
Alternatively, you can pass the destination directly to unzip and do this in a single command. This differs from the command above in that the zip file is kept in its current location; only the extracted files go to the given destination:
sudo unzip -d [target_directory] [filename].zip
I want to rename files in a folder on UNIX using a script.
The format of the original file is:
abc.txt.temp
and I want to rename it to:
abc.txt
Many files use this format, and I want to remove the .temp suffix from each file name.
The answer Ciprian gave is certainly an option, but I feel it's limiting.
The solution below is much more flexible: you don't have to count anything, and you can remove text from any position in the name rather than just the end.
The following one-liner removes every occurrence of .temp from the names of all files in the current folder:
for filename in *; do mv "$filename" "${filename//.temp/}"; done
Note The "*" means all files in current folder. You can use *.temp to achieve exactly the same result as Ciprian's method. (that is, only removing .temp from files ending with .temp)
I don't know about UNIX, but since the question also has the Linux tag, it may just be a UNIX/Linux confusion.
Most GNU/Linux distributions have a rename command. Depending on the rename version, to replace foo with bar in file names the syntax may either be as simple as
rename foo bar files
or follow sed's regexp syntax:
rename 's/foo/bar/' files
In your case, you want to replace .temp with an empty string ('') in all files ending with .temp, so depending on your rename version one of these commands should work:
rename .temp '' *.temp
or
rename 's/\.temp$//' *.temp
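As a quick sanity check in a scratch directory (this sketch assumes the Perl flavour of rename; the file names are made up):
mkdir /tmp/rename-demo && cd /tmp/rename-demo
touch abc.txt.temp def.txt.temp
rename 's/\.temp$//' *.temp
ls    # abc.txt  def.txt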
Create the following script with a name like 'rename.sh':
#!/bin/bash
TARGET_DIR=$1

# Expand the glob directly, quoting the directory so paths with spaces work.
for fileName in "$TARGET_DIR"/*.temp
do
    newFileName=${fileName::-5}    # drop the last 5 characters ('.temp')
    mv -v "${fileName}" "${newFileName}"
done
note: the negative-length form ${var::-5} of the ${var:offset:length} expansion requires bash 4.2 or newer.
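If your bash is older (or you are in another POSIX shell), the suffix-stripping expansion does the same job without counting characters:
newFileName=${fileName%.temp}    # removes a trailing '.temp' if present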
Give it execution rights:
chmod a+x rename.sh
You need to pass the directory containing the .temp files as a parameter. Call it like this:
./rename.sh /path/to/the/temp-files
The script loops over all the *.temp files in the target folder, drops the last 5 characters from each file path ('.temp' is 5 characters), and moves the original file to the new name that no longer has the .temp extension.
EDIT: tested on CentOS 7.
I had a problem with my Ubuntu install. I was able to boot from a live CD and connect an external hard drive. I want to back up my files now.
I tried cp -r /home destination, but I got problems with spaces in filenames and symlinks, plus errors like "Cannot create fifo: Operation not permitted", "Permission denied", "Invalid argument", and plenty more. What is the best way to do this? Will cp -a fix these issues, or should I do something more clever?
I found out that rsync doesn't have problems with filenames, but it doesn't copy .so and .a files. It also runs extremely slowly compared to cp.
EDIT:
I followed the advice of John Bollinger and created an archive, because my external drive wasn't ext4-formatted and so could not preserve all file attributes.
From a live CD, home refers to the live CD's own home, so one has to use:
tar -c -z -f /my/backup/disk/home.tar.gz -C / media/ubuntu/longDeviceName/home
Despite sudo, I still received some "Cannot open: Permission denied" and "socket ignored" errors while creating the tar, for several .png files in .cache/software-center/icons/blabla. I wonder whether that is normal.
If you do not want to reformat your backup disk with a filesystem that has enough capabilities to represent all of the attributes of your files (e.g. ext4) then preserving them across the backup requires putting them into some sort of container. The traditional container for this sort of thing is a [compressed] tarball. You might therefore try
tar -c -z -f /my/backup/disk/home.tar.gz -C / home
You would recover the contents of that tarball via
tar -x -z -f /my/backup/disk/home.tar.gz -C /
Either or both might need to be run with privilege, obtained by being root or by using sudo.
That will handle symlinks, executable files, and any filename just fine, but it may still have trouble if the data you are trying to back up include any special files, such as device nodes or FIFOs. In that event, you may simply need to remove such files first, and recreate them after restoring the other files. You can identify such files via find:
find /home -not -type f -not -type d -not -type l
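If that find turns up FIFOs, they can simply be recreated after the restore instead of being backed up. A minimal sketch, with a hypothetical path and owner:
mkfifo /home/user/.someapp/control.fifo           # recreate the named pipe
chown user:user /home/user/.someapp/control.fifo  # restore its ownership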
The accepted answer does not pass -p, so file permissions may not be preserved when you restore.
You should use the -p flag both while backing up and while recovering.
Also, you might want to recover into a specific folder and then move things around manually, so as not to overwrite files you want to keep.
The / at the end of the command means the entire filesystem is backed up:
sudo tar -cvpzf /backupfolder/backup.tar.gz --exclude=/mnt /
sudo mkdir /recover_v1.1
sudo tar -xvpzf backup.tar.gz -C /recover_v1.1
Then manually replace the files you need to recover, and keep those you want to keep.
-x  extract
-p  preserve permissions
-v  verbose; shows file names while working
-z  gzip compression
-f  the archive file name
You might want to set up cron jobs to run the backup automatically.
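For example, a nightly entry in root's crontab might look like the following (the schedule and paths are assumptions, and % must be escaped as \% inside crontab):
30 2 * * * tar -cpzf /backupfolder/backup-$(date +\%F).tar.gz --exclude=/mnt --exclude=/backupfolder /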
The explanation is:
"-R, --recursive
operate on files and directories recursively"
What does "recursive" mean here?
"Recursive" implies that the operation will be performed for all files and directories (and all files and directories within any directory). So
chown -R foo /some/path
would change the file owner to foo for /some/path itself and for every file and directory beneath it.
p.s. You might have even seen the dictionary entry for recursive:
recursive, n: See recursive
In some Linux commands, if you run the command on a folder with -R, the command will operate on all files and folders in that folder's tree. If you run the command on a file, -R has no effect.
The command operates on the given folder and recursively on the files and folders within it; the traversal is based on recursion.
For example, you can remove a folder and its contents with
rm -R folder-name
Or you can find all occurrences of a specific string in all files within the current folder tree with
grep -R -n the-string .
In this example -n is for displaying line numbers.
It means: apply the operation to sub-directories and their contents as well; that is, when a directory is encountered, the command descends into it and applies chown() to everything inside.
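A tiny demonstration of what -R touches (the names are hypothetical):
mkdir -p demo/subdir
touch demo/file1 demo/subdir/file2
sudo chown -R foo demo   # changes the owner of demo, demo/file1, demo/subdir and demo/subdir/file2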
I want to remove all files that exist in the folder new-files from another folder, on Linux, using bash commands.
I need this for two things:
I got some setup scripts which copy some pre-configured config files over. I would like to have the option to remove those files again.
Sometimes archives get unpacked into the root of your downloads directory instead of into a subdirectory, because the person packing the file put everything in the archive's root.
What's the best way to do that?
Edit, to clarify:
I got a folder with files called new-files.
Now I execute cp -r new-files/* other-directory/.
Let's say other-directory is not the directory I wanted to copy them to, but it already contains other files, so I can't just do rm other-directory/*.
I need to delete all folders which I accidentally copied. How do I do that?
You could use the following command:
cd new-files && find . -exec rm -rf /path/to/other-directory/{} \;
It will list all the files that were copied from the new-files directory (the new-files directory itself is not taken into consideration) and, for each one, remove the copied version in other-directory. Note that after the cd, the path to other-directory must be absolute (or relative to new-files).
But you have to be careful: if a file from new-files overwrote a file in other-directory, you won't be able to restore the old file using this method. You should consider using a versioning system (Git, for example).
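A cautious first step is a dry run that only prints what would be removed (same path assumptions as above):
cd new-files && find . -exec echo rm -rf /path/to/other-directory/{} \;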
From your:
Edit, to clarify:
I got a folder with files called new-files.
Now I execute cp -r new-files/* other-directory/.
Let's say other-directory is not the directory I wanted to copy them to, but it already contains other files, so I can't just do rm other-directory/*.
I need to delete all folders which I accidentally copied. How do I do that?
You can loop through the original dir new-files/ and delete the files with the same names in other-directory/:
for file in /new-files/*
do
    rm -r /other-directory/"${file##*/}"   # ${file##*/} strips the directory part, keeping just the name
done
A wee script to do what you want:
pushd /path/to/new-files
x=$(find . -type f)
popd
# run this from the directory the files were copied into;
# note that it breaks on file names containing whitespace
echo "$x" | xargs rm
My development machine is a Linux host.
I have a complicated directory structure (like most of you, I assume), and I would like to move easily from one directory to another from within the shell. Specifically, welcome features would be:
autocompletion (something like ido-mode in emacs)
regular expression directory / file matching
suggestion of recently visited directories (a stack).
Possibility to push/pop to the stack, get a listing of recently visited directories, ...
good integration of those features
console based
Do you know any tool which can satisfy those requirements?
In bash you can set CDPATH to a colon-separated list of directories that bash will search when the argument to cd is not found in the current directory.
$ man bash|grep -A3 '^\s\+CDPATH '
CDPATH The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".
Once set, autocompletion will just work the way you'd expect:
$ export CDPATH=dir1:dir2
$ cd somedir<tab>
Besides the current directory, bash will look in the directories in $CDPATH for the possible values.
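For example, assuming ~/projects contains a directory called webapp (the names here are made up):
export CDPATH=.:~/projects
cd webapp    # works from anywhere; bash prints the resolved path when CDPATH is used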
Umm, any interactive shell (say, bash) already has nearly all of these features:
Press Tab once to auto-complete, and twice to show a list of possible completions.
find | grep reg.exp can be used for file matching, or find -exec grep reg.exp -H '{}' ';' to match contents
You can switch to the previous directory with cd -
pushd and popd can be used to push and pop directories
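A minimal pushd/popd session:
pushd /etc        # save the current directory on the stack and cd to /etc
pushd /var/log    # the stack is now: /var/log /etc <original dir>
dirs -v           # list the stack with indices
popd              # pop /var/log and return to /etc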