How to use the mongodump command from a Node.js app container?

I have a Node.js app running in a Docker container that depends on a mongo container, which uses the mongo image from Docker Hub. I want to add a REST endpoint, GET /download/db, and when a GET request hits it I want it to download a dumped backup copy of the database.
For that, I used this shell command:
mongodump --host localhost --uri mongodb://localhost:27017/db_name --gzip --out /tmp/backup-007434030202704722
But it shows the error /bin/sh: mongodump: not found. What might be the problem here?

I guess you are running this shell command in your node container, but mongodump will not be a part of the node image.
You might have to modify your shell command to something like this:
docker run -d --rm --network <your-app-network> -v mongo-backup:/mongo-backup mongo mongodump --host <mongo-container-name> --db db_name --gzip --out /mongo-backup/backup-007434030202704722
If you also add the mongo-backup volume to your Node.js application container, you will be able to see this backup file in your Node.js container.
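As a concrete illustration, here is a minimal sketch of the whole round trip, assuming the containers share a Docker network called app-net, the mongo container is reachable under the hostname mongo, and the Node.js container is named node-app (all three names are assumptions):
# Dump into the shared named volume from a throw-away mongo container;
# the network name, host name and container name are assumptions.
docker run --rm --network app-net -v mongo-backup:/mongo-backup mongo \
  mongodump --uri "mongodb://mongo:27017/db_name" --gzip --out /mongo-backup/backup-007434030202704722
# The Node.js container that mounts the same volume can now serve the files.
docker exec node-app ls /mongo-backup/backup-007434030202704722
Your GET /download/db handler can then stream the files it finds under /mongo-backup.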

The documentation mentions that the user's role needs to allow the 'find' action on each database being backed up, or that the built-in 'backup' role can be used:
https://docs.mongodb.com/manual/reference/program/mongodump/#required-access
Log in to your server as root, via PuTTY or SSH.
The 'backup' role would need to be granted to any non-admin users, but it does not seem to be needed for the main admin user I set up, and I was able to use the following:
mongodump -u "admin" --authenticationDatabase "admin"
This will prompt for the password, and once you enter it the backup dump is created on the server.
A new directory named "dump" is created in the working directory, and it contains dumps of all databases; in my case it is at /root/dump/. More examples of using the command can be seen at https://docs.mongodb.com/manual/reference/program/mongodump/#mongodump-with-access-control
If you want to take backups individually, use the following process:
The --db flag can be added to specify a single database, and the --out flag can be added to specify an output directory. So if you instead wanted to create the dumps of all databases in a specific directory (/backup/mongodumps for example), you would do something like the following:
mongodump -u "admin" --authenticationDatabase "admin" --out /backup/mongodumps/
or if you just wanted one database dumped to a specific directory:
mongodump -u "admin" --authenticationDatabase "admin" --db [DB name] --out /backup/mongodumps/
There are also other examples at https://docs.mongodb.com/manual/reference/program/mongodump/#mongodump-with-access-control which include compressing the dumps, or dumping them to a single archive.
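For instance, the compressed single-archive variant could look roughly like this (the archive path is an assumption):
# Dump all databases into a single gzip-compressed archive file;
# the output path is an assumption.
mongodump -u "admin" --authenticationDatabase "admin" --gzip --archive=/backup/mongodumps/all-dbs.archive.gz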
Additional info:
If you want to create dumps as a user other than admin, then the 'backup' role will need to be granted to those users.

Related

Why is docker login storing my password in an unencrypted folder, and should I do something about it?

I have an application running inside a Docker container, and its image is continuously being pushed to an Azure Container Registry. As part of the pipeline I am using the step:
docker login <Docker Server> -u <Username> -p <Password>
When my pipeline is running this step, I get the following warnings:
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Should I do something about this, and do you have any proposed solutions?
After you log in to your private image registry with the Docker login command, a warning is displayed that indicates that your password is stored unencrypted.
Causes
By default, Docker stores the login password unencrypted within the /root/.docker/config.json Docker configuration file. This is the default Docker behavior.
Resolving the problem
You can store your user credentials in an external credential store instead of within the Docker configuration file. Storing your credentials in a credential store is more secure than storing them in the Docker configuration file. For more information, see the references below.
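As a rough sketch, switching to an external credential helper is a one-line change in ~/.docker/config.json; the "pass" helper used here is an assumption, and its docker-credential-pass binary has to be installed and initialised separately:
# Point Docker at an external credential helper instead of storing passwords
# in config.json. The "pass" helper is an assumption; docker-credential-pass
# must be installed separately. Note: this overwrites any existing config.json.
cat > ~/.docker/config.json <<'EOF'
{
  "credsStore": "pass"
}
EOF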
According to the Docker documentation:
To run the docker login command non-interactively, you can set the --password-stdin flag to provide a password through STDIN. Using STDIN prevents the password from ending up in the shell’s history, or log-files.
The following examples read a password from a file and pass it to the docker login command using STDIN:
$ cat ~/my_password.txt | docker login --username foo --password-stdin
OR
$ docker login --username foo --password-stdin < ~/my_password.txt
The following example reads a password from a variable, and passes it to the docker login command using STDIN:
$ echo "$MY_PASSWORD" | docker login --username foo --password-stdin
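Applied to an Azure Container Registry pipeline step like the one in the question, that could look roughly like this (the registry name and the secret variable names are assumptions):
# Log in non-interactively without putting the password on the command line;
# ACR_USERNAME and ACR_PASSWORD are assumed to be pipeline secret variables.
echo "$ACR_PASSWORD" | docker login myregistry.azurecr.io --username "$ACR_USERNAME" --password-stdin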
Reference: Docker: Using --password via the CLI is insecure. Use --password-stdin
https://www.ibm.com/docs/en/cloud-private/3.2.0?topic=login-docker-results-in-unencrypted-password-warning

Dockerfile USER cmd vs Linux su command

I am trying to deploy a DB2 Express image to Docker using a non-root user.
The code below starts the DB2 engine using the root user, and it works fine.
FROM ibmoms/db2express-c:10.5.0.5-3.10.0
ENV LICENSE=accept \
DB2INST1_PASSWORD=password
RUN su - db2inst1 -c "db2start"
CMD ["db2start"]
The code below starts the DB2 engine from the db2inst1 profile, but it gives the exception below during the image build. Please help me resolve this (I am trying to avoid the su - command).
FROM ibmoms/db2express-c:10.5.0.5-3.10.0
ENV LICENSE=accept \
DB2INST1_PASSWORD=password
USER db2inst1
RUN /bin/bash -c ~db2inst1/sqllib/adm/db2start
CMD ["db2start"]
SQL1641N The db2start command failed because one or more DB2 database manager program files was prevented from executing with root privileges by file system mount settings.
It's worth noting that a Dockerfile is used to build an image. You can execute commands while building, but once an image is published, running processes are not maintained in the image definition.
This is the reason that the CMD directive exists, so that you can tell the container which process to start and encapsulate.
If you're using the pre-existing db2 image from IBM on DockerHub (docker pull ibmcom/db2), then you will not need to start the process yourself.
Their quickstart guide demonstrates this with the following example command:
docker run -itd --name mydb2 --privileged=true -p 50000:50000 -e LICENSE=accept -e DB2INST1_PASSWORD=<choose an instance password> -e DBNAME=testdb -v <db storage dir>:/database ibmcom/db2
As you can see, you only specify the image, and leave the default ENTRYPOINT and CMD, resulting in the DB starting.
Their recommendation for building your own image on top of theirs (via FROM) is to place all custom scripts into /var/custom, and they will be executed automatically after the main process has started.
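A minimal sketch of such a derived image, assuming the ibmcom/db2 base image and the /var/custom convention described above (the script name is an assumption):
# Derived image: drop a custom script into /var/custom so the base image's
# entrypoint runs it after the instance is up; the script name is an assumption.
FROM ibmcom/db2
COPY create_schema.sh /var/custom/create_schema.sh
RUN chmod +x /var/custom/create_schema.sh
# No ENTRYPOINT/CMD override: the base image starts the instance itself.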

Sqlplus not found in Oracle Docker image

I have a working Oracle image, which I can run and then use docker exec to get into the running container and execute the sqlplus command with no issue.
Now I am trying to create a new image with some initial data, based on this image. Here is my Dockerfile.
FROM oracle:12.2
USER root
COPY /testingData /testingData
RUN chown -R oracle:oinstall /testingData
RUN chmod -R 755 /testingData
USER oracle
RUN /testingData/runInitSQLScript.sh
And here is my sh file
#!/bin/bash
sqlplus -s /nolog << EOF
CONNECT sys/testpass AS SYSDBA;
whenever sqlerror exit sql.sqlcode;
set echo off
set heading off
@/sql/mytestingData.sql
exit;
EOF
It kept telling me sqlplus: command not found.
When I try to use the full path to sqlplus, like $ORACLE_HOME/bin/sqlplus, it still says the same. Then I tried to check the path, and I realized I can only get one level below the root directory. For example, if my ORACLE_HOME is /u01/app/oracle/product/12.2.0/dbhome_1/, I can only cd into /u01; when I do cd /u01/app, it says the directory is not found. Please help. Thanks.
If your image is similar to the official images, it installs the Oracle software and creates the database only after the container starts. So at the moment when you build the image, the ORACLE_HOME directory doesn't exist yet.
In the case of the official images, I'd suggest you put your scripts into one of these two special folders:
-v /opt/oracle/scripts/startup | /docker-entrypoint-initdb.d/startup
Optional: A volume with custom scripts to be run after database startup.
For further details see the "Running scripts after setup and on startup" section below.
-v /opt/oracle/scripts/setup | /docker-entrypoint-initdb.d/setup
Optional: A volume with custom scripts to be run after database setup.
For further details see the "Running scripts after setup and on startup" section below.
More about this: https://github.com/oracle/docker-images/tree/master/OracleDatabase/SingleInstance
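A rough sketch of mounting scripts that way, assuming an image built from the oracle/docker-images repository linked above (the image tag, container name and host path are assumptions):
# Run the database and mount a host directory of SQL/shell scripts into the
# startup hook; the image tag, container name and host path are assumptions.
docker run -d --name oracledb \
  -v /path/to/scripts:/opt/oracle/scripts/startup \
  oracle/database:12.2.0.1-ee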
Update:
As per @Sayan's comment, sqlplus exists at the $ORACLE_HOME/bin/sqlplus path.
OR
The other option is to use the Docker image below to connect to the Oracle database container:
docker run --interactive guywithnose/sqlplus sqlplus {CONNECTION_STRING}
or use legacy linking (although it is better to use a Docker network):
docker run -it --link db guywithnose/sqlplus sqlplus {CONNECTION_STRING}
Now you can use db as the host name for the database connection.
https://github.com/sflyr/docker-sqlplus
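For example, with an EZConnect-style connection string (the credentials and service name are assumptions):
# Connect through the linked "db" container; the user, password and service
# name are assumptions.
docker run -it --link db guywithnose/sqlplus sqlplus system/oracle@db:1521/ORCLPDB1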

Restrict access of other linux user to docker container

I have two Linux users, named ubuntu and my_user.
Now I build a simple Docker image and run the Docker container.
In my docker-compose.yml, I volume-mount some files from the local machine into the container; these files were created by the 'ubuntu' user.
Now if I log in as 'my_user' and access the Docker container created by the 'ubuntu' user using the docker exec command, I am able to access any files present in the container.
My requirement is to restrict 'my_user' from accessing the content of the Docker container that was created by the 'ubuntu' user.
This is not possible to achieve currently. If your user can execute Docker commands, it means effectively that the user has root privileges, therefore it's impossible to prevent this user from accessing any files.
You can add "ro", meaning read-only, after the data volume, like this:
HOST:CONTAINER:ro
Or you can add the read_only property in your docker-compose.yml.
Here is an example of how to specify read-only mounts and containers in docker-compose:
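A minimal sketch of what that could look like (the service name, image and paths are assumptions):
# docker-compose.yml sketch: a read-only container with a read-only bind mount;
# the service name, image and paths are assumptions.
services:
  app:
    image: ubuntu:22.04
    read_only: true        # the container filesystem becomes read-only
    volumes:
      - ./data:/data:ro    # the host ./data directory is mounted read-only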
@surabhi, the only option to restrict file access is to add fields in the docker-compose file:
read_only: flag to set the volume as read-only
nocopy: flag to disable copying of data from a container when a volume is created
You can find more information in the docker-compose documentation on volumes.
You could install and run an sshd in that container, map port 22 to an available host port, and manage user access via SSH keys.
This would not allow the user to manage things via Docker commands, but it would give that user access to that container.

File permissions for mounted volumes in image processing pipelines?

We're using docker to containerize some image processing pipelines to make sharing them with collaborators easier.
The current method we're using is to mount an "inputs" directory (which contains an image, i.e. a single JPG) and an "outputs" directory (which contains the processed data, e.g. a segmentation of the input image). The problem we're having is that we run docker with sudo, and after the processing is complete, the files in the outputs directory are owned by root.
Is there a standard or preferred way to set the files in mounted volumes to have the permissions of the calling user?
Perhaps you can use the --user flag in docker run
e.g.
docker run --user $UID [other flags...] image [cmd]
Alternatively, the following might work (untested).
In the Dockerfile:
ENTRYPOINT "su $USERID -c"
Followed by:
docker run -e USERID=$UID [other flags...] image [cmd]
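Putting the --user suggestion together with the bind mounts from the question, a sketch might look like this (the image name and command are assumptions):
# Run the pipeline as the calling user (uid:gid) so that files written to the
# mounted outputs directory are owned by that user; the image name and the
# command arguments are assumptions.
sudo docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD/inputs:/inputs:ro" \
  -v "$PWD/outputs:/outputs" \
  pipeline-image process /inputs/image.jpg /outputs/
Because the shell expands $(id -u) before sudo runs, the container still gets the calling user's IDs.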
Try setting the user in your Dockerfile so that when the container is started it runs as a specific user, 'tomcat' for example.
# example
USER tomcat
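For instance, a Dockerfile sketch that creates and switches to a dedicated non-root user (the base image, user name and UID are assumptions):
# Create a non-root user and make it the default user of the container;
# the base image, user name and UID are assumptions.
FROM ubuntu:22.04
RUN useradd --uid 1000 --create-home pipeline
USER pipeline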
