How To Run Docker Container Bash Commands From NodeJS Code

I need to get the size of an InfluxDB database on disk. I can get the size from the Windows command line (I am on Windows 11) using du -sh /var/lib/influxdb2/.
The problem is that I now want to access this data from Node.js. Using child_process, I can run Docker commands and capture the output, e.g. docker ps.
But when I run the bash command docker exec -it influxdb du -sh /var/lib/influx2/engine, I get "the input device is not a TTY. If you are using mintty, try prefixing the command with 'winpty'".
Any help or guidance would be highly appreciated.

docker exec -it nameofcontainer /bin/bash logs you into the container; from that shell you can run du -sh /var/lib/influx2/engine to check the disk usage.
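That works in an interactive terminal, but not from Node.js: -t asks Docker to allocate a pseudo-TTY, and a process spawned via child_process has no terminal attached, which is exactly what the "the input device is not a TTY" error is telling you. For a one-off, non-interactive command you can drop -i and -t entirely. A minimal sketch, reusing the container name and path from the question:
# No -i/-t: nothing here is interactive, so no TTY is required
docker exec influxdb du -sh /var/lib/influx2/engine
From Node.js, run that same string through child_process.exec (or pass the arguments to child_process.execFile) and parse the size out of the callback's stdout.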

Related

shell script to run the docker image in bash, take db dump and copy file to the host

I am completely new to shell scripting. I want to run the SQL image (the image is only there to take a DB dump), take a dump of the database, and copy the file to the host, all from a shell script.
How I do it manually:
1) docker run -it <image_name> bash (this drops me into the image's bash)
2) mysqldump -h <ip> -u <user> -p db > filename.sql
3) docker cp <containerId>:/file/path/within/container /host/path/target (run on the host machine)
Doing this, I get the dump from the container to the host manually.
But while writing the shell script, I am stuck on step 1), docker run -it <image_name> bash, since it drops me into the container's bash and I have to type the next command manually.
How can I do this in a shell script?
Any help will be greatly appreciated!
If I understand this correctly, you don't want to type those commands manually; instead, the shell script should execute them once your container is up and running. If you can modify the SQL image's Dockerfile and re-create the image, use ENTRYPOINT [and, if needed, CMD] to execute a shell script at startup. Check this link for details on ENTRYPOINT shell scripts.
Otherwise, if you cannot recreate the image, check this post on how to run a bash script from the run command.
NOTE: in both cases you will have to mount a directory/volume, and your mysqldump command should write the dump into that mounted volume/directory, as in the sketch below.
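A minimal sketch of such a startup script, assuming the image is rebuilt with COPY entrypoint.sh / and ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile, and assuming the host directory is mounted at /dump (all names here are placeholders):
#!/bin/sh
# entrypoint.sh - runs automatically when the container starts;
# the password is passed inline because there is no terminal to answer a prompt
mysqldump -h <ip> -u <user> -p<password> db > /dump/filename.sql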
You can pass the command to Bash as a parameter (note that docker run options such as --name must come before the image name):
docker run -it --name sqldump <image_name> bash -c "mysqldump -h <ip> -u <user> -p db > /tmp/filename.sql"
docker cp sqldump:/tmp/filename.sql /path/on/host/filename.sql
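Wrapped into a complete shell script, this could look as follows (a sketch using the same placeholders; -it is dropped because a script has no terminal attached, and the password is passed inline so mysqldump does not prompt for it):
#!/bin/sh
# dump.sh - dump the database inside a throwaway container, then copy the file out
docker run --name sqldump <image_name> bash -c "mysqldump -h <ip> -u <user> -p<password> db > /tmp/filename.sql"
docker cp sqldump:/tmp/filename.sql /path/on/host/filename.sql
docker rm sqldump   # remove the stopped container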
Ignore the Docker steps, and just run mysqldump on your host. The -h option takes the IP address or DNS name of the host running the database (it can be 127.0.0.1 if the container runs on the same host, but not localhost, because MySQL interprets localhost as a Unix-socket connection); if you mapped the database's external port to a non-default port, you also need a -P (capital P) option to specify that port.
For example, if you started the container with
docker run -p 5433:3306 ... mysql:8
then you can take the dump from the host with
mysqldump -h 127.0.0.1 -P 5433 -p db > dump.sql
and not worry about the Docker details at all.

How to set a size limit on a folder mounted in a Docker container

I am running a script in a Docker container which creates some files and writes log information into them.
The command is
docker run -t --name a6f97966d3a2552283df -v "/temp/a6f97966d3a2552283df":/usercode ubuntu_16_04:firsttry /usercode/script.sh
I want to limit the size of the folder I have mounted with this command, because the logs may grow very large.
One possible solution is to mount a virtual filesystem in the container using the following commands:
mkdir -p /quota
mkdir -p /var/virtual_disks
touch /var/virtual_disks/directory_with_size_limit.ext3
dd if=/dev/zero of=/var/virtual_disks/directory_with_size_limit.ext3 count=51200
mkfs.ext3 /var/virtual_disks/directory_with_size_limit.ext3
mount -o loop,rw,usrquota,grpquota /var/virtual_disks/directory_with_size_limit.ext3 /quota
This works fine on my local system, but not in the container.
Is there any other way of achieving this?
It is now working fine. The host path passed to -v must be the same path where the virtual filesystem is mounted (here /quota).
So the modified command is
docker run -t --name a6f97966d3a2552283df -v "/quota":/usercode ubuntu_16_04:firsttry /usercode/script.sh
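Putting the two halves together, the host-side sequence is (names and sizes kept from the commands above; with dd's default 512-byte block size, count=51200 caps the folder at about 25 MB):
# Build a size-capped ext3 image, loop-mount it at /quota, then bind-mount /quota
mkdir -p /quota /var/virtual_disks
dd if=/dev/zero of=/var/virtual_disks/directory_with_size_limit.ext3 count=51200
mkfs.ext3 /var/virtual_disks/directory_with_size_limit.ext3
mount -o loop,rw,usrquota,grpquota /var/virtual_disks/directory_with_size_limit.ext3 /quota
docker run -t --name a6f97966d3a2552283df -v "/quota":/usercode ubuntu_16_04:firsttry /usercode/script.sh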

Create new container with interactive shell

I wanted to create a new container with Node.js and start a bash shell in it where I could interactively verify something.
So I ran docker run node /bin/bash, but it exited instantly.
What did I do wrong?
You missed the -it: docker run -it <image-name> /bin/bash
--interactive, -i: keep STDIN open even if not attached
--tty, -t: allocate a pseudo-TTY
Without -i, bash reads EOF on stdin and exits immediately, which is why the container stopped at once.
See the docker run reference.

Docker: Unable to run shell script stored in a mounted volume

I am running Docker (1.10.2) on Windows. I created a script to echo 'Hello World' on my machine and stored it in C:/Users/username/MountTest. I created a new container and mounted this directory (MountTest) as a data volume. The command I ran to do so is shown below:
docker run -t -i --name mounttest -v /c/Users/sarin/MountTest:/home ubuntu /bin/bash
Next, I run the command to execute the script within the container mounttest.
docker exec -it mounttest sh /home/helloworld.sh
The result is as follows:
: not foundworld.sh: 2: /home/helloworld.sh:
Hello World
I get the desired output (echo Hello World) but I want to understand the reason behind the not found errors.
Note: this question might look similar to Run shell script on docker from shared volume, but that one addresses permission-related issues.
For reference, the helloworld.sh file:
#!/bin/sh
echo 'Hello World'
If the image's default ENTRYPOINT is sh -c, the final command executed on docker exec is:
sh -c 'sh /home/helloworld.sh'
It looks a bit strange and might be the cause of the error message.
Try simply:
docker exec -it mounttest /home/helloworld.sh
# or
docker exec -it mounttest sh -c '/home/helloworld.sh'
Of course, the docker exec should be done in a boot2docker ssh session, similar to the shell session in which you did the docker run.
Since the docker run opens a bash, you should start a new boot2docker session (docker-machine ssh), and in that new shell session try the docker exec.
Trying docker exec from within the bash opened by docker run would amount to Docker-in-Docker, which is not relevant for your test.

how to "docker run" a shell session on a minimal linux install and immediately tear down the container?

I just started using Docker, and I like it very much, but I have a clunky
workflow that I'd like to streamline. When I'm iterating on my Dockerfile script
I will often test things out after a build by launching a
bash session, running some commands, finding out that such
and such package didn't get installed correctly, then
going back and tweaking my Dockerfile.
Let's say I have built my image and tagged it as buildfoo; I'd run it like this:
$> docker run -t -i buildfoo
... enter some bash commands.. then ^D to exit
Then I'm left with a leftover container that I have to clean up. Usually I just nuke everything like this:
docker rm --force `docker ps -qa`
This works OK for me. However, I'd rather not have to manually remove the container.
Any tips gratefully accepted !
Some Additional Minor Details:
I'm running a minimal CentOS 7 image and using bash as my shell.
Use the --rm flag of docker run: --rm=true or just --rm.
It automatically removes the container when it exits (incompatible with -d). Example:
docker run -i -t --rm=true centos /bin/bash
or
docker run -i -t --rm centos /bin/bash
Even though the above still works, the command below uses Docker's newer management-command syntax:
docker container run -it --rm centos bash
I use the alias dr
alias dr='docker run -it --rm'
That gives you:
dr myimage
ls
...
exit
No more container running.
