bash: cd: No such file or directory - linux

I'm writing a bash function to jump into my last edited folder.
In my example, the last edited folder is titled 'daniel'.
The bash function looks fine.
>>:~$ echo $(ls -d -1dt -- */ | head -n 1)
daniel/
And I can manually cd into the directory.
>>:~$ cd daniel
>>:~/daniel$
But I can't use the bash function to cd into the directory.
>>:~$ cd $(ls -d -1dt -- */ | head -n 1)
bash: cd: daniel/: No such file or directory

It turns out someone had added alias ls='ls --color' to the bashrc of this server. My function worked once the alias was removed. – Daniel Tan
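(The --color option without =auto makes ls emit ANSI escape sequences even when its output is piped, so the captured name contains invisible codes and no longer matches any real directory.) If editing the server's bashrc is not an option, a minimal workaround is to suppress alias expansion at the call site and quote the result, e.g.:
cd "$(\ls -1dt -- */ | head -n 1)"
The leading backslash bypasses the ls alias, and the quotes also protect directory names containing spaces.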

This error is usually thrown when you enter a path that does not exist; see "-bash: cd: Desktop: No such file or directory".
But the output of $(ls -d -1dt -- */ | head -n 1) is not wrong, so the cause must be a difference between sh and bash at that moment.
In my case, I had a docker container with this error when I accessed a folder with bash. The container was broken because I had force-closed it after a docker-compose up that did not work. After that, I could only use sh on the existing containers, not bash. I found this out because of OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: "bash": executable file not found in $PATH": unknown. I guess that bash is loaded later than sh, and that after an early error at container start, only sh gets loaded.
That would fit here, since the >> prompt suggests you are in sh. Using sh, everything works as expected, but the expression gets resolved by bash, which is probably not loaded for whatever reason.
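If you want to verify which shell you are actually in, something like this works on Linux (the second command shows the real binary behind the current shell process):
echo "$0"
readlink /proc/$$/exe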
In Docker, using docker-compose, I also had a similar error saying sh: 1: cd: can't cd to /root/MYPROJECT. That could be solved by mounting the needed volumes for the service:
services:
  host:
    volumes:
      - ~/MYPROJECT:/MYPROJECT # ~/path/on/host:/path/on/container
See "Mount a volume in docker-compose. How is it done?" and "How to mount a host directory with docker-compose?", or the official docs.

Related

Write output of the local command into the file in the running docker container

I have a template file which has various variables. I would like to substitute these variables with values before I copy the file to the
running container. This is what I do right now:
export $(grep -v '^#' /home/user/.env | xargs) && \
envsubst < template.yaml > new_connection.yaml && \
docker cp new_connection.yaml docker_container:/app && \
rm new_connection.yaml
This works, but I'm sure there is a way I can skip the file creation/copy/remove steps and just do something like echo SOME_TEXT > new_connection.yaml straight into the container. Could you help?
This seems like a good application for an entrypoint wrapper script. If your image has both an ENTRYPOINT and a CMD, then Docker passes the CMD as additional arguments to the ENTRYPOINT. That makes it possible to write a simple script that rewrites the configuration file, then runs the CMD:
#!/bin/sh
# Render the template with the current environment variables
envsubst < template.yaml > /app/new_connection.yaml
# Hand off to the CMD, which Docker passes to the entrypoint as arguments
exec "$@"
In the Dockerfile, COPY the script in and make it the ENTRYPOINT. (If your host system doesn't correctly preserve executable file permissions or Unix line endings, you may need to do some additional fixups in the Dockerfile as well; a sketch of such a fixup follows the snippet.)
COPY entrypoint.sh ./
# must be JSON-array (exec) form
ENTRYPOINT ["/app/entrypoint.sh"]
CMD same command as the original image
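For the permission and line-ending fixups mentioned above, a hypothetical pair of extra Dockerfile lines (assuming the script ends up at /app/entrypoint.sh) could be:
# only needed if the host loses the executable bit or uses CRLF line endings
RUN sed -i 's/\r$//' /app/entrypoint.sh \
 && chmod +x /app/entrypoint.sh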
When you run the container, you need to pass in the environment file; there's a built-in option for this:
docker run --env-file /home/user/.env ... my-image
If you want to see this working, any command you provide after the image name replaces the Dockerfile CMD, but it will still be passed to the ENTRYPOINT as arguments. So you can, for example, see the rewritten config file in a new temporary container:
docker run --rm --env-file /home/user/.env my-image \
cat /app/new_connection.yaml
Generally, I agree with David Maze and the comment section. You should probably build your image in such a way that it picks up env vars on startup and uses them accordingly.
However, to answer your question, you can pipe the output of envsubst to the running container.
$ echo 'myVar: ${FOO}' > env.yaml
$ FOO=bar envsubst < env.yaml | docker run -i busybox cat
myVar: bar
If you want to write that to a file inside the container using redirection, you need to wrap the command in sh -c, because otherwise the redirection is interpreted by your host shell and the container's output is redirected to a path on the host.
FOO=bar envsubst < env.yaml | docker run -i busybox sh -c 'cat > my-file.yaml'
I did it here with docker run but you can do the same with exec.
FOO=bar envsubst < env.yaml | docker exec -i <container> cat
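Combining the two, the rendered file can also be written to a path inside an already running container (the target path here just mirrors the question):
FOO=bar envsubst < env.yaml | docker exec -i <container> sh -c 'cat > /app/new_connection.yaml'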

docker exec with standard output logged in a file inside the docker container

I am currently running a cronjob from a host machine (Red Hat Linux) that executes a script in a docker container. The problem I have is that when I redirect the standard output to a file whose path is inside the docker container, the cronjob throws an error basically saying that the path of the log file cannot be found. But if I change the output log file path to a path on the host machine, it works fine.
Below does not work
0 9 * * 1-5 sudo docker exec -i /path/in/docker/container/script.sh > /path/in/docker/container/script.shout
But this one works
0 9 * * 1-5 sudo docker exec -i /path/in/docker/container/script.sh > /path/in/host/script.shout
How do I get the first cronjob working so I can have the output file in the docker container using the path in the docker container?
I don't want to run the cronjob as root and that's why I need sudo before docker exec. Please note, only root has access to the docker volume path in the host machine, which is why I can't use the docker volume path either.
Cron runs your command with a shell, so the output redirect is handled by the shell running on your host, not inside your container. To get shell commands like this to run inside the container, you need to run a shell as your docker command, and escape or quote the shell operators so they are not interpreted until you are inside the container. E.g.
0 9 * * 1-5 sudo docker exec -i container_name /bin/sh -c "/path/in/docker/container/script.sh > /path/in/docker/container/script.shout"
I would rather pass the redirection path as a parameter to the script (so remove the '>'), and make the script itself redirect its output to that parameter file.
Since the script is executed in the docker container, it will see that path (as opposed to the cron job, which by default sees host paths).
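A rough sketch of that idea, assuming the container is called container_name as in the previous answer and that script.sh takes the log path as its first argument:
#!/bin/sh
# script.sh inside the container: send all further output to the file named by $1
exec > "$1" 2>&1
# ... rest of the script ...
The cron entry then becomes:
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh /path/in/docker/container/script.shout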
We can use bash -c and put the redirect command between double quotes, as in this command:
docker exec ${CONTAINER_ID} bash -c "./path/in/container/script.sh > /path/in/container/out"
And we have to be sure /path/in/container/script.sh is an executable file, either by using this command from inside the container:
chmod +x /path/in/container/script.sh
or by using this command from the host machine:
docker exec ${CONTAINER_ID} chmod +x /path/in/container/script.sh
You can use tee: a program that reads stdin and writes the same to both stdout AND the file specified as an arg.
echo 'foo' | tee file.txt
will write the text 'foo' to file.txt
Your desired command becomes (wrapped in sh -c so that tee runs, and writes its file, inside the container):
0 9 * * 1-5 sudo docker exec -i container_name sh -c '/path/in/docker/container/script.sh | tee /path/in/docker/container/script.shout'
The drawback is that you also dump to stdout.
You may check this SO question for further possibilities and workarounds.

No such file or directory sed command in a bash file run inside a docker container

I'm trying to run a bash file inside an Ubuntu docker container, where I'm trying to modify two PostgreSQL files, postgresql.conf and pg_hba.conf. Here is the bash file that I'm trying to run:
sed -i -e"s/^#listen_addresses.*$/listen_addresses = '*'/"
/var/lib/postgresql/data/postgresql.conf
echo "host all all 0.0.0.0/0 md5" >>
/var/lib/postgresql/data/pg_hba.conf
/etc/init.d/postgresql restart
Here is how I'm trying to run it:
root@fe4fcebedad6: ./db.sh
db.sh is the name of the bash file. The file postgresql.conf is in the path that I'm using, /var/lib/postgresql/data/, but I'm getting this message:
: No such file or directorystgresql/data/postgresql.conf
The weird part is that if I run each one of the commands outside of the bash file, it works fine, like this:
root@fe4fcebedad6:/# sed -i -e"s/^#listen_addresses.*$/listen_addresses = '*'/" /var/lib/postgresql/data/postgresql.conf
This is the part of the docker-compose file where I'm mounting my local bash file into the postgres container:
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./shell_scripts:/shell_scripts
volumes:
  pgdata:
This is an issue because your commands are broken across multiple lines. It should be like below:
sed -i -e"s/^#listen_addresses.*$/listen_addresses = '*'/" /var/lib/postgresql/data/postgresql.conf
echo "host all all 0.0.0.0/0 md5" >> /var/lib/postgresql/data/pg_hba.conf
/etc/init.d/postgresql restart
After several tries, I couldn't make the bash file work, so I removed it and executed each of the commands manually in my docker console.
Thank you anyway.

Bash: sourcing file as user from script

I am creating a script meant to be run as superuser that reads a file and runs a number of scripts on behalf of all users. The important bit is this:
sudo -u $user -H source /home/$user/list_of_commands
However, whether I enclose the command in quotes or not, this fails with:
sudo: source /home/user/list_of_commands: command not found
I have even tried with the . bash builtin:
sudo: . /home/user/list_of_commands: command not found
Of course running source outside a sudo environment works. I thought there might be a PATH problem, and I tried to bypass it by providing the full path to source. However, I cannot find the executable: which source returns which: no source in (/usr/local/sbin:usr/local/bin:usr/bin). So I'm stuck.
How do I make a script source a file as a user?
source is a shell builtin, not an external command; use it with bash -c:
sudo -u $user -H bash -c "source /home/$user/list_of_commands"
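Since the outer script runs this for a number of users, a minimal sketch of the surrounding loop might look like this (users.txt, one username per line, is a hypothetical input file):
while read -r user; do
    sudo -u "$user" -H bash -c "source /home/$user/list_of_commands"
done < users.txt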

Rsync to Amazon Linux EC2 instance - failed: No such file or directory

I want to upload the content of one directory to my Amazon EC2 with rsync:
rsync -r -t -v --progress -z -s -e "ssh -i /home/mostafa/keyamazon.pem" /home/mostafa/splitfiles ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:~/splitfiles
but I receive the following error message:
sending incremental file list
rsync: link_stat "/home/mostafa/splitfiles" failed: No such file or directory (2)
rsync: change_dir#3 "/home/ubuntu//~" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(712) [Receiver=3.1.0]
and if I do a dry run with grsync, it works correctly
In rsync the trailing / is very important. Also, rsync defaults to ssh when one of the destinations contains a host, so you can get rid of the -e and -s options (keep -t if you want to preserve modification times).
Your command could then be written as rsync -rtvz --progress /home/mostafa/splitfiles/ ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:splitfiles/ (notice the trailing /'s), provided that you have ssh configured to read the private key from your home directory.
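For the "ssh configured to read the private key" part, a hypothetical ~/.ssh/config entry could look like this (host, user and key path taken from the question):
Host ec2-64-274-161-87.compute-1.amazonaws.com
    User ubuntu
    IdentityFile /home/mostafa/keyamazon.pem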
Alternatively, on Ubuntu you can add the key to the ssh agent by running
ssh-add [key-file]
This will save you having to specify the keyfile every time you ssh into the AWS machine.
The errors seem to say that the source directory doesn't exist on the local machine and that the destination path doesn't exist on the remote.
I completed this task with FileZilla instead, which is easier to use.
You are in your home directory (~); if you cd ../ up toward the root, you will be able to run the command.
