In a bash script invoked from some directory ($PWD), I need a line that calls an executable located in $PWD/bin so that it reads an input file located in $PWD/inputfiles, and the resulting output files are stored in $PWD/output.
Can this be achieved?
PS: Currently, if I am in /home/user and I run
./run config.inp output.dat
with config.inp located in /home/user, config.inp reads the files data.txt and lines.txt, which are in the same directory.
Now I want to read from /home/user/input and write the output files to /home/user/output, so when I try
./run input/config.inp
it fails with
error, data.txt not found
As the problem is described, this will do it:
bin/executable < inputfiles/input > output/output
If the problem is really that bin/executable creates files in the current directory without allowing the user to specify the input and output files, then it will be a little more complicated. What you would probably want to do instead is:
cd output
ln -s ../inputfiles/input
../bin/executable
rm input
This will create a symbolic link to inputfiles/input from within the output directory, and then delete it afterwards. If you want to eliminate the chance of collisions with files in the output directory, you need to create a temporary directory with something like TMPDIR=$(mktemp -d) (note: no spaces around the = in a shell assignment), do everything there, and then copy the results back to $OLDPWD/output.
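A minimal sketch of that temporary-directory variant (the start_dir and tmpdir variable names are just illustrative; the bin, inputfiles, and output paths are the ones from the question):
#!/bin/bash
start_dir=$PWD                         # directory the script was started from
tmpdir=$(mktemp -d) || exit 1          # private scratch directory
cd "$tmpdir" || exit 1
ln -s "$start_dir/inputfiles/input"    # make the input file visible here
"$start_dir/bin/executable"            # run the executable inside the scratch dir
rm input                               # drop the link before copying results back
mkdir -p "$start_dir/output"
cp -r ./. "$start_dir/output/"         # copy everything produced back to output/
cd "$start_dir" && rm -rf "$tmpdir"    # clean up the scratch directory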
I'm new to linux and shell script in general. I'm using a distribution of Debian on the WSL (Windows Subsystem for Linux). I'm trying to write a very simple bash script that will do the following:
create a file in a directory (child-directory-a)
move to the directory it is in
move the file to another directory (child-directory-b)
move to that directory
move the file to the parent directory
This is what I have so far (trying to keep things extremely simple for now)
touch child-directory-a/test.txt
cd child-directory-a
mv child-directory-a/test.txt home/username/child-directory-b
The first two lines work, but I keep getting a 'no such directory exists' error with the last one. The directory exists and that is the correct path (checked with pwd). I have also tried using different paths (i.e. child-directory-b, username/child-directory-b etc.) but to no avail. I can't understand why it's not working.
I've looked around forums/documentation and it seems that these commands should work as they do in the command line, but I can't seem to do the same in the script.
If anyone could explain what I'm missing/not understanding that would be brilliant.
Thank you.
You could create the script like this:
#!/bin/bash
# Store both child directories in variables whose values can be
# overridden via environment variables.
CHILD_A=${CHILD_A:-/home/username/child-directory-a}
CHILD_B=${CHILD_B:-/home/username/child-directory-b}
# Create both child folders. If they already exist nothing will
# be done, and no error will be emitted.
mkdir -p "$CHILD_A"
mkdir -p "$CHILD_B"
# Create a file inside CHILD_A
touch "$CHILD_A/test.txt"
# Change directory into CHILD_A
cd "$CHILD_A"
# Move the file to CHILD_B
mv "$CHILD_A/test.txt" "$CHILD_B/test.txt"
# Move to CHILD_B
cd "$CHILD_B"
# Move the file to the parent folder
mv "$CHILD_B/test.txt" ../test.txt
Take into account the following:
We make sure that all the folders exist before using them.
We use variables to avoid typos, and their values can be overridden through environment variables.
We use absolute paths to simplify moving between folders.
We use relative paths to move files relative to where we are.
Another command that might be of use is pwd. It will tell you the directory you are currently in.
With your second line, you change the current directory to child-directory-a,
so your third line fails because there is no subdirectory child-directory-a inside child-directory-a itself.
Your third line should instead be:
mv test.txt ../child-directory-b
Point #4 of your script should be:
cd ../child-directory-b
(before that command the current directory is /home/username/child-directory-a and after it the current directory becomes /home/username/child-directory-b)
Then point #5, the final step of your script, should be:
mv test.txt ..
NB: you can display the current directory at any point in your script by using the command pwd (print working directory), if that helps.
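Putting those corrections together, a minimal sketch of the whole script (assuming it is started from the parent directory, /home/username in your example):
#!/bin/bash
touch child-directory-a/test.txt   # create the file in child-directory-a
cd child-directory-a               # move to the directory it is in
mv test.txt ../child-directory-b   # move the file to child-directory-b
cd ../child-directory-b            # move to that directory
mv test.txt ..                     # move the file to the parent directory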
#!/bin/sh
# Variables
WORKING_DIR="/home/username/example scripts"
FILE_NAME="test file.txt"
DIR_A="${WORKING_DIR}/child-directory-a"
DIR_B="${WORKING_DIR}/child-directory-b"
# create a file in a directory (child-directory-a)
touch "${DIR_A}/${FILE_NAME}"
# move to the directory it is in
cd "${DIR_A}"
# move the file to another directory (child-directory-b)
mv "${FILE_NAME}" "${DIR_B}/"
# move to that directory
cd "${DIR_B}"
# move the file to the parent directory
mv "${FILE_NAME}" ../
Let's say the command is my_command,
and this command requires specific files (file1, file2, and file3) to be present in the current working directory.
Because I often use my_command in many different directories, I'd like to keep these files in one fixed directory and execute my_command without having those three files in the working directory.
I mean I don't want to copy those three files to every working directory.
For example:
Directory containing the three files: /home/chest
Working directory: /home/wd
If I execute my_command there, it should automatically pick up the three files in /home/chest/.
I imagine the solution is similar to adding a directory to $PATH, except for ordinary data files rather than executables.
It seems like the files need to be in the current working directory for the vasp_std command to work as expected. You could simply put all the files in an include folder in your home directory and then create a symbolic link to this folder from your script. At the end of the script the symbolic link is then deleted:
#!/bin/bash
# create a symbolic link to our resource folder
ln -s ~/include src
# execute other commands here
# finally remove the symbolic link from the current directory
unlink src
If the vasp_std command requires that the files are placed directly under the current working directory, you could instead create a symbolic link for each file:
#!/bin/bash
# create links to all resource files
for file in ~/include/*
do
ln -s "$file" "$(basename "$file")"
done
# execute other commands here
# remove any previously created links
for file in ~/include/*
do
unlink "$(basename "$file")"
done
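One caveat with that second script: if a command in the middle fails and the script exits early, the links are left behind. A sketch of a more defensive variant that cleans up on exit via a trap (my_command stands in for whatever you actually run):
#!/bin/bash
# link all resource files into the current directory
for file in ~/include/*
do
ln -s "$file" "$(basename "$file")"
done
# remove the links again when the script exits, even after an error
trap 'for file in ~/include/*; do rm -f "$(basename "$file")"; done' EXIT
# execute other commands here
my_command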
I would like to run a command on each file in a directory and store the output in a new directory, such that each output file has the same filename as its input. The command I run is a .pl script with the format:
test.pl inputfile outputfile
For example, I have a directory named input with the files:
testa.txt
testb.txt
I run a for loop that conducts a command on those two files:
for file in /Users/test/Desktop/input
do
test.pl $file /Users/test/Desktop/output/$file
done
However, providing the output path this way does not work. I keep getting the error no such file or directory.
file gets the single value /Users/test/Desktop/input, so test.pl receives /Users/test/Desktop/output//Users/test/Desktop/input as the last argument. You'll want to use a glob like /Users/test/Desktop/input/* and then strip the directory part using basename:
for file in /Users/test/Desktop/input/*
do
test.pl "$file" "/Users/test/Desktop/output/$(basename "$file")"
done
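If /Users/test/Desktop/output does not exist yet, that alone can also trigger a "no such file or directory" error, so it may help to create it once before the loop:
mkdir -p /Users/test/Desktop/output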
xargs is a tool that reads a list of filenames from stdin and executes a command on each filename it gets.
You need to pay attention to filenames containing blanks.
I'd suggest you provide more details for a more precise answer. The man page has plenty of information.
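A rough sketch of how that could look here, using find to emit null-terminated names so that blanks in filenames are handled safely (the paths and the test.pl invocation are the ones from the question):
find /Users/test/Desktop/input -type f -print0 |
xargs -0 -I{} sh -c 'test.pl "$1" "/Users/test/Desktop/output/$(basename "$1")"' sh {}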
Let's say I make a file .history.txt:
touch .history.txt
and I try to write to it:
cat > .history.txt
after having done that all I get is:
bash: .history.txt: is a directory
What I need is to be able to write some text to it like I would to any normal file. Any ideas what I am doing wrong?
A file doesn't need to already exist in order to redirect output to it (the shell will create the file if necessary). But Bash is telling you that .history.txt already exists and is a directory, so you can't write to it.
You either need to remove the existing directory (rm -rf .history.txt) or use a different file name. Then cat > .history.txt (or whatever name you chose) should work on its own.
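For example, assuming the directory that is in the way can safely be deleted:
rm -rf .history.txt                 # remove the directory that blocks the name
echo "some text" > .history.txt     # the redirection creates the file itself
cat .history.txt                    # prints: some text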
I have a directory containing a set of subdirectories and files. I need to recursively copy all the content of this directory to all the subdirectories of another directory, also recursively.
How do I achieve this, preferably without using a script and only with the cp command?
You can write this in a script but you don't have to. Just write it line by line in the terminal:
# $TARGET is the directory containing subdirectories where you want to STORE the copies
# $SOURCE is the directory containing the subdirectories you want to COPY
for dir in "$TARGET"/*/; do
cp -r "$SOURCE"/* "$dir"
done
Only uses cp and runs on both bash and zsh.
You can't. cp can copy multiple sources but will only copy to a single destination. You need to arrange to invoke cp multiple times, once per destination, using, as you say, a loop or some other tool.
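If you prefer to avoid an explicit loop, find can invoke cp once per subdirectory for you; a sketch, reusing the $SOURCE and $TARGET names from the answer above:
# copy the contents of $SOURCE (dotfiles included) into every
# immediate subdirectory of $TARGET
find "$TARGET" -mindepth 1 -maxdepth 1 -type d -exec cp -r "$SOURCE"/. {} \;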
tar cf - * | ( cd /target; tar xfp -)
The first part of the command, before the pipe, instructs tar to create an archive of everything in the current directory and write it to standard output (the - in place of a file name frequently indicates stdout).
The commands within parentheses cause the shell to change directory to the target directory and untar the data from standard input. Since the cd and tar commands are contained within parentheses, their actions are performed together.
The -p option in the tar extraction command directs tar to preserve permission and ownership information, if possible given the user executing the command. If you are running the command as superuser, this option is turned on by default and can be omitted.
You can also use the following command, but it seems to be quite a bit slower than tar:
cp -a * /target