with bash, is it possible to enter commands inside a docker container - linux

I'm trying to see if I can run commands after "entering" a container:
#!/bin/sh
# 1 - create a new user in the db for mautic
podman exec -it postal-mariadb mysql -h 127.0.0.1 -u root -p$1 <<EOF
CREATE DATABASE mauticdb;
CREATE USER 'mautic' IDENTIFIED BY /'$1/';
GRANT ALL PRIVILEGES ON mauticdb.* TO 'mautic';
FLUSH PRIVILEGES;
exit
EOF
This gives me an error: Error: container create failed (no logs from conmon): EOF
But I'm thinking maybe this is not a good use of here docs.
Something like this doesn't work either:
echo $1 | podman exec -it postal-mariadb mysql -h 127.0.0.1 -u root -p postal-server-1 -e 'select * from deliveries limit 10;'

That's a fine (and common) use of here docs, although you probably want to drop the -t from your podman command line. If I have a mariadb container running:
podman run -d --name mariadb -e MARIADB_ROOT_PASSWORD=secret docker.io/mariadb:10
Then if I put your shell script into a file named createdb.sh, modified to look like this for my environment:
podman exec -i mariadb mysql -u root -p$1 <<EOF
CREATE DATABASE mauticdb;
CREATE USER 'mautic' IDENTIFIED BY '$1';
GRANT ALL PRIVILEGES ON mauticdb.* TO 'mautic';
FLUSH PRIVILEGES;
EOF
I've made three changes:
I've removed the -t from the podman exec command line, since we're passing input on stdin rather than starting an interactive terminal;
I removed the unnecessary exit command (the interactive mysql shell will exit when it reaches end-of-file);
I removed the weird forward slashes around your quotes (/'$1/' -> '$1').
I can run it like this:
sh createdb.sh secret
And it runs without errors. The database exists:
$ podman exec mariadb mysql -u root -psecret -e 'show databases'
Database
information_schema
mauticdb <--- THERE IT IS
mysql
performance_schema
sys
And the user exists:
$ podman exec mariadb mysql -u root -psecret mysql -e 'select user from user where user="mautic"'
User
mautic

Related

shell script to run the docker image in bash, take db dump and copy file to the host

I'm completely new to shell scripting. I want to run the SQL image (the image is just there to take a db dump), take a dump of the db, and copy the file to the host using a shell script.
How I do it manually is:
1) docker run -it <image_name> bash (this takes me into the image's bash)
2) mysqldump -h <ip> -u <user> -p db > filename.sql
3) docker cp <containerId>:/file/path/within/container /host/path/target (running this on the host machine)
Doing this I get the dump from the container to the host manually.
But while making the shell script, I am having a problem with step 1) docker run -it <image_name> bash, since this drops me into bash and I have to manually type the commands.
How can I do it in the shell script?
Any help will be greatly appreciated!
If I understand this correctly, you don't want to type those commands manually; instead, the shell script should execute them once your container is up and running. If you can modify the SQL image's Dockerfile and re-create the image, then use ENTRYPOINT (and, if needed, CMD) to execute a shell script at startup. Check this link for details on ENTRYPOINT shell scripts.
Otherwise, if you cannot recreate the image, check this post on how to run a bash script from the run command.
NOTE: in both cases you will have to mount a host directory/volume into the container, and your mysqldump command should write the dump to that mounted volume/directory.
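For example, a rough sketch of that bind-mount idea (the image name, credentials, and the /host/backups path are placeholders):
# run the dump inside the container, writing to a directory mounted from the host
docker run --rm -v /host/backups:/backups <image_name> \
    sh -c 'mysqldump -h <ip> -u <user> -p<password> db > /backups/filename.sql'
# the dump now sits on the host at /host/backups/filename.sql -- no docker cp needed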
You can pass the command to Bash as a parameter:
docker run -it --name sqldump <image_name> bash -c "mysqldump -h <ip> -u <user> -p db > /tmp/filename.sql"
docker cp sqldump:/tmp/filename.sql /path/on/host/filename.sql
Ignore the Docker steps, and just run mysqldump on your host. The -h option is the IP address or DNS name of the host running the database (can be 127.0.0.1 if the container is running on the same host, but not localhost because MySQL misinterprets that); if you mapped the database external port to a non-default port, you also need a -P (capital P) option to specify that port.
For example, if you started the container with
docker run -p 3307:3306 ... mysql:8
then you can take the dump from the host with
mysqldump -h 127.0.0.1 -P 3307 -u <user> -p db > dump.sql
and not worry about the Docker details at all.

how to write ssh connection code in .sh file to connect to remote machine

I have written a script (test.sh) to reset the MySQL and Postgres databases in Docker on System A.
When I run test.sh on System A it works fine.
Now I need to run the same file from another machine, System B.
For this I first have to connect to System A by entering these commands in the console:
navigate to the folder
enter the System A login: test@192.111.1.111
enter the password
then run the test.sh file
How can I add the above three steps to test.sh so that I don't have to enter them in the console on System B, and can just run test.sh on System B to do all the work of connecting to System A and resetting the databases?
echo "Resetting postgres Database";
docker cp /home/test/Desktop/db_dump.sql db_1:/home
docker exec -it workflow bash -c "npm run schema:drop"
docker exec -it workflow bash -c "npm run cli schema:sync"
docker exec -it db_1 bash -c "PGPASSWORD=test psql -h db -U postgres -d test_db < /home/db_dump.sql"
echo "ProcessEngine Database Resetting";
docker cp /home/test/test/test/test.sql test:/home
docker exec -it test bash -c "mysql -uroot -ptest -e 'drop database test;'"
docker exec -it test bash -c "mysql -uroot -ptest -e 'create database test;'"
docker exec -it test bash -c "mysql -uroot -ptest -e 'use test; source /home/test.sql;'"
I want to add the SSH connection code to this script so that I can run it from the other system:
navigate to the folder
ssh test@192.111.1.111
enter the password
How do I put these three steps in my code?
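One possible sketch (assuming SSH key authentication to System A has been set up, e.g. with ssh-copy-id, so no password prompt is needed; /path/to/folder stands in for the folder from step 1):
#!/bin/bash
# run from System B: log in to System A and execute the existing test.sh there
# -t allocates a terminal so the docker exec -it calls inside test.sh still work
ssh -t test@192.111.1.111 'cd /path/to/folder && bash ./test.sh'
If the password has to be supplied non-interactively, sshpass -p '<password>' ssh ... is an alternative, at the cost of storing the password in plain text.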

Running as a host user within a Docker container

In my team we use Docker containers to locally run our website applications while we do development on them.
Assuming I'm working on a Flask app at app.py with dependencies in requirements.txt, a working flow would look roughly like this:
# I am "robin" and I am in the docker group
$ whoami
robin
$ groups
robin docker
# Install dependencies into a docker volume
$ docker run -ti -v `pwd`:`pwd` -w `pwd` -v pydeps:/usr/local python:3-slim pip install -r requirements.txt
Collecting Flask==0.12.2 (from -r requirements.txt (line 1))
# ... etc.
# Run the app using the same docker volume
$ docker run -ti -v `pwd`:`pwd` -w `pwd` -v pydeps:/usr/local -e FLASK_APP=app.py -e FLASK_DEBUG=true -p 5000:5000 python:3-slim flask run -h 0.0.0.0
* Serving Flask app "app"
* Forcing debug mode on
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 251-131-649
Now we have a local server running our application, and we can make changes to the local files and the server will refresh as needed.
In the above example, the application ends up running as the root user. This isn't a problem unless the application writes files back into the working directory. If it does, then we could end up with files (e.g. something like cache.sqlite or debug.log) in our working directory owned by root. This has caused a number of problems for users in our team.
For our other applications we've solved this by running the application with the host user's UID and GID - e.g. for a Django app:
$ docker run -ti -u `id -u`:`id -g` -v `pwd`:`pwd` -w `pwd` -v pydeps:/usr/local -p 8000:8000 python:3-slim ./manage.py runserver
In this case, the application will be running as a non-existent user with ID 1000 inside the container, but any files written to the host directory end up correctly owned by the robin user. This works fine in Django.
However, Flask refuses to run as a non-existent user (in debug mode):
$ docker run -ti -u `id -u`:`id -g` -v `pwd`:`pwd` -w `pwd` -v pydeps:/usr/local -e FLASK_APP=app.py -e FLASK_DEBUG=true -p 5000:5000 python:3-slim flask run -h 0.0.0.0
* Serving Flask app "app"
* Forcing debug mode on
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
Traceback (most recent call last):
...
File "/usr/local/lib/python3.6/getpass.py", line 169, in getuser
return pwd.getpwuid(os.getuid())[0]
KeyError: 'getpwuid(): uid not found: 1000'
Does anyone know if there's any way that I could either:
Make Flask not worry about the unassigned user-id, or
Somehow dynamically assign the user ID to a username at runtime, or
Otherwise allow the docker application to create files on the host as the host user?
The only solution I can think of right now (super hacky) is to change the permissions of /etc/passwd in the docker image to be globally writeable, and then add a new line to that file at runtime to assign the new UID/GID pair to a username.
You can share the host's passwd file:
docker run -ti -v /etc/passwd:/etc/passwd -u `id -u`:`id -g` -v `pwd`:`pwd` -w `pwd` -v pydeps:/usr/local -p 8000:8000 python:3-slim ./manage.py runserver
Or, add the user to the image with useradd, using /etc as volume, in the same way you use /usr/local:
docker run -v etcvol:/etc python..... useradd -u `id -u` $USER
(Both id -u and $USER are resolved in the host shell, before docker receives the command)
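Spelled out a little more, under the assumption that the etcvol named volume gets populated from the image's /etc the first time it is used (the other flags mirror the Django example above):
# one-off: add the host user (host-side `id -u` and $USER expansion) to the /etc volume
docker run --rm -v etcvol:/etc python:3-slim useradd -u `id -u` $USER
# then run the app with the same volume, so the UID now resolves to a username
docker run -ti -u `id -u`:`id -g` -v etcvol:/etc -v `pwd`:`pwd` -w `pwd` \
    -v pydeps:/usr/local -p 8000:8000 python:3-slim ./manage.py runserver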
Just hit this problem and found a different workaround.
From getpass.py:
def getuser():
    """Get the username from the environment or password database.

    First try various environment variables, then the password
    database. This works on Windows as long as USERNAME is set.
    """
    for name in ('LOGNAME', 'USER', 'LNAME', 'USERNAME'):
        user = os.environ.get(name)
        if user:
            return user

    # If this fails, the exception will "explain" why
    import pwd
    return pwd.getpwuid(os.getuid())[0]
The call to getpwuid() is only made if none of the following environment variables are set: LOGNAME, USER, LNAME, USERNAME
Setting any of them should allow the container to start.
$ docker run -ti -e USER=someuser ...
In my case, the call to getuser() seems to come from the Werkzeug library trying to generate a debugger pin code.
If it's okay for you to use another Python package to start your container, you may want to try my Python package https://github.com/boon-code/docker-inside, which overrides the entrypoint and creates your user in the container on the fly...
docker-inside -v `pwd`:`pwd` -w `pwd` -v pydeps:/usr/local -e FLASK_APP=app.py -e FLASK_DEBUG=true -p 5000:5000 python:3-slim -- flask run -h 0.0.0.0
Overriding entrypoint on the command line and passing a script that creates your user might also be okay for you, if you want to stick with Docker CLI.
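For instance, a rough sketch of that CLI-only route (the entrypoint.sh file, the HOST_UID variable, and the flaskuser name are illustrative choices, not anything mandated by Docker or Flask):
#!/bin/sh
# entrypoint.sh -- runs as root inside the container: create a user matching
# the host UID passed in via HOST_UID, then drop to it and start Flask.
useradd --no-create-home --uid "$HOST_UID" flaskuser
exec su flaskuser -c "flask run -h 0.0.0.0"

# launched from the host, roughly like (entrypoint.sh must be executable):
# docker run -ti -v `pwd`:`pwd` -w `pwd` -v pydeps:/usr/local \
#     -v `pwd`/entrypoint.sh:/entrypoint.sh -e HOST_UID=`id -u` \
#     -e FLASK_APP=app.py -e FLASK_DEBUG=true -p 5000:5000 \
#     --entrypoint /entrypoint.sh python:3-slim
Whether FLASK_APP and FLASK_DEBUG survive the su depends on the image's su defaults; su -m (or a tool like gosu) preserves the environment explicitly.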

Create postgres user during boot

I'm running a bash file from monit during boot; that bash file starts my postgres server.
If my database directory is not present, I do:
1- initdb (postgresql/data/)
su - edge -c '/usr/bin/initdb -D ${DBDIR}'
2- copy modified pg_hba.conf and postgresql.conf files to (postgresql/data/)
3- start my server
su - edge -c " /usr/bin/pg_ctl -w -D ${DBDIR} -l logfile start"
4- postgres createuser
- su - $User -c '${DBDIR} -e -s postgres'
After the execution of the bash file:
postgresql/data/ is created
the files are copied
the server is started
but the user is not created, so I cannot access my database.
error : /usr/bin/psql -U postgres
psql: FATAL: role "postgres" does not exist
I can't decipher your step #4, but the reason the postgres role does not exist is that step #1 is run by the user edge and doesn't ask for a postgres role to be created via -U, so it creates an edge role as superuser instead.
Per initdb documentation:
-U username
--username=username
Selects the user name of the database superuser. This defaults to the name of the effective user running initdb. It is really not important what the superuser's name is, but one might choose to keep the customary name postgres, even if the operating system user's name is different.
Either do initdb -U postgres, or if you prefer a superuser named edge, keep it like this but start psql with psql -U edge, or set the PGUSER environment variable to edge to avoid typing that each time.
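A minimal sketch of the first option in the script's own terms (quoting changed to double quotes so ${DBDIR} expands in the boot script; everything else comes from the question):
# step 1, asking initdb for a superuser named postgres instead of edge
su - edge -c "/usr/bin/initdb -U postgres -D ${DBDIR}"
# after the server has been started (step 3), this now succeeds:
/usr/bin/psql -U postgres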

Scripting automated postgres setup

I'm scripting a system setup and already have postgres installed. Here is a test script (run as root) to try and report the working directory in postgres. Calling pwd as postgres gives /var/lib/postgresql. But the test..
#!/bin/bash
su - postgres
pwd > /home/me/postgres_report
exit
.. fails (obviously) and reports the original working directory. Afterwards the bash shell is stuck as the postgres user, suggesting the commands are not being run in the right order. I understand the bash environment issues here. I just don't have a clue how to do what I need, which is to automate a postgres process that I can easily do interactively (i.e. step into postgres, execute a command, and exit). Any pointers?
Use sudo.
Use one of:
Passing a one line command to psql
sudo -u postgres psql -c "SELECT ..."
A here document:
sudo -u postgres psql <<"__END__"
SELECT ...;
SELECT ...;
__END__
(If you want to be able to substitute shell variables into the SQL, leave out the quotes around the delimiter, i.e. use <<__END__ instead of <<"__END__", and backslash-escape any $ signs you don't want treated as variables.)
Pass a file to psql
sudo -u postgres psql -f /path/to/file
The sudo -u postgres is of course only required if you need to become the postgres system user to run tasks as the postgres database user via peer authentication. Otherwise you can use psql -U username, a .pgpass file, etc.
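For example, a sketch of that non-sudo route (mydb, myuser, and the password are placeholders):
# ~/.pgpass holds one entry per server: hostname:port:database:username:password
echo 'localhost:5432:mydb:myuser:secret' >> ~/.pgpass
chmod 0600 ~/.pgpass
# psql now authenticates as myuser without prompting
psql -h localhost -U myuser -d mydb -c "SELECT current_user;"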
#!/bin/bash
# run as root
[ "$USER" = "root" ] || exec sudo "$0" "$#"
echo "=== $BASH_SOURCE on $(hostname -f) at $(date)" >&2
sudo passwd postgres
echo start the postgres
sudo /etc/init.d/postgresql start
sudo su - postgres -c \
"psql <<__END__
SELECT 'create the same user' ;
CREATE USER $USER ;
ALTER USER $USER CREATEDB;
SELECT 'grant him the privileges' ;
grant all privileges on database postgres to $USER ;
alter user postgres password 'secret';
SELECT 'AND VERIFY' ;
select * from information_schema.role_table_grants
where grantee='""$USER""' ;
SELECT 'INSTALL EXTENSIONS' ;
CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";
CREATE EXTENSION IF NOT EXISTS \"pgcrypto\";
CREATE EXTENSION IF NOT EXISTS \"dblink\";
__END__
"
sudo /etc/init.d/postgresql status
sudo netstat -tulntp | grep -i postgres
