I'm new to Docker and I want to use an Odoo Docker image.
I want to update a module. Normally I would stop the server and start it again with
odoo -u module_name
But I cannot figure out how to do this with a Docker container.
In other words, I would like to start the application inside the container with specific parameters.
I would like to do something like this:
docker start odoo -u module_name
How can I do that?
You can use docker-compose and, inside the Odoo service, add the lines below:
tty: true
command: -u target_modules_to_update
If you don't have a docker-compose file, let me know and I will help.
#tjwnuk you can add the following code in a file named docker-compose.yml:
version: '2'
services:
  web:
    image: odoo:version_number_here
    depends_on:
      - db
    ports:
      - "target_port_here:8069"
    tty: true
    command: -u module_name -d database_name  # plus any other config parameter you need to add
    volumes:
      - odoo:/var/lib/odoo
      - ./config/odoo.config:/etc/odoo/odoo.conf
      - ./addons-enterprise:/mnt/enterprise-addons
      - ./custom_addons:/mnt/extra-addons
    restart: always
  db:
    image: postgres:version_number_here
    ports:
      - "1432:5432"
    environment:
      - POSTGRES_DB=db_name
      - POSTGRES_USER=db_user
      - POSTGRES_PASSWORD=db_password
    volumes:
      - "odoo-dbs:/var/lib/postgresql/data/pgdata"
    restart: always
volumes:
  odoo:
  odoo-dbs:
And you should have these directories:
1- config <== inside this directory you should have a file named odoo.config; you can put any configuration options there as well
2- addons-enterprise <== this is for enterprise addons, if you have any
3- custom_addons <== this is for your custom addons
Finally, if you don't have docker-compose installed, you can run the command below:
apt install docker-compose
I hope this answer helps you and anyone who has the same case.
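A minimal usage sketch for the file above (assuming the service name web and the database_name placeholder used in the command): bring the stack up once, then re-run the web service with an override command whenever a module needs updating.
# start the stack in the background
docker-compose up -d
# update a module on demand by overriding the service command;
# --rm removes the temporary container once the update finishes
docker-compose run --rm web -u module_name -d database_name --stop-after-init
docker-compose restart web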
Either switch into the container and start another Odoo instance:
docker exec -ti <odoo_container> bash
odoo -c <path_to_config> -u my_module -d my_database --max-cron-threads=0 --stop-after-init --no-xmlrpc
or switch into the database container and set the module's state to 'to upgrade':
docker exec -ti <database_container> psql -U <odoo_db_user> <db_name>
UPDATE ir_module_module set state = 'to upgrade' where name = 'my_module';
Using the database approach requires a restart of Odoo.
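A short sketch of that restart, using the same container-name placeholder as above:
# leave psql with \q, then restart the Odoo container so it picks up the state change
docker restart <odoo_container>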
Related
I need the client CLI Python app to interact with the user (to insert his name, etc...).
In main.py there is:
user = input("Please insert user:")
docker-compose.yml:
client:
  container_name: client_container
  image: client:latest
  build: client/
  depends_on:
    - server
  ports:
    - "5555:5555"
  tty: true          # docker run -t
  stdin_open: true   # docker run -i
  network_mode: host
  command: ["python", "./main.py", "-it"]
When I just run main.py directly I get the prompt to insert the user and everything works as expected, but when I add Docker it does not.
I only managed to solve this by running the client CLI manually on its own, not from the docker-compose file (a docker-compose equivalent is sketched after the commands below):
docker build -t client_image .
docker run -it client_image
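A hedged docker-compose equivalent, assuming the service name client from the file above; note that the -it flags belong to the docker CLI, not to main.py, so they should not be passed as script arguments:
# run the client service interactively; docker-compose run allocates a TTY
# and keeps stdin open by default, so input() can prompt the user
docker-compose run --rm client python ./main.py
# or, if the container was started with `docker-compose up -d` and has
# tty/stdin_open enabled, attach to the running container instead
docker attach client_container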
I have a problem to solve: I have a Dockerfile with a Tomcat image, and I would also like to add a PostgreSQL database, but I don't know how to put it all together because I have no idea how to create a database in the Dockerfile. Here is my Dockerfile:
FROM tomcat:9.0.65-jdk11
RUN apt update
RUN apt install vim -y
RUN apt-get install postgres12
COPY ROOT.war
WORKDIR /usr/local/tomcat
CMD ["catalina.sh", "run"]
There is more, but it is the usual mkdir and COPY of files. Do you have any idea? Maybe write a bash script that runs inside the container during the build and creates my database? I know some people will tell me I should take an Ubuntu image and install Tomcat and Postgres there, but I want to simplify assigning permissions and shorten my work.
Use docker-compose and create a database separately:
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: example
      POSTGRES_USER: user
    #volumes: # if you want to create your db
    #  - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    #ports:   # if you want to access it from your host
    #  - 5432:5432
  tomcatapp:
    build: .
    image: tomcatapp
    restart: always
    ports:
      - 8080:8080
    depends_on:
      - db
Note: the Dockerfile, init.sql (if you want to init your db), and docker-compose.yml should be in the same folder.
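A short usage sketch under that layout (service names db and tomcatapp as above); inside the compose network the Tomcat app reaches the database at host db, port 5432:
# build the tomcat image and start both containers
docker-compose up -d --build
# check that the database is up and any init.sql was applied
docker-compose exec db psql -U user -c '\dt'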
In my docker-compose file there are more than 3 services. I am passing two variables to the docker-compose command from a Makefile. But I'm facing a problem: after the first command executes, the similar second command does not execute.
See this example for a better understanding.
The docker-compose file is:
version: '3.7'
services:
  ping:
    container_name: ping_svc
    image: "${PING_IMAGE_NAME}${PING_IMAGE_TAG}"
    ports:
      - 8080:8080
    command: serve
    environment:
      - CONSUL_URL=consul_dev:8500
      - CONSUL_PATH=ping
    tty: true
  id:
    container_name: id_svc
    image: "${ID_IMAGE_NAME}${ID_IMAGE_TAG}"
    ports:
      - 8081:8081
    command: serve
    environment:
      - CONSUL_URL=consul_dev:8500
      - CONSUL_PATH=id
    tty: true
And my Makefile commands are:
# setting ping_image
#PING_IMAGE_NAME="ping-svc:" PING_IMAGE_TAG="1.0" docker-compose up -d
# setting id_image
#ID_IMAGE_NAME="id-svc:" ID_IMAGE_TAG="1.0" docker-compose up -d
The PING_IMAGE_NAME and PING_IMAGE_TAG variables are set successfully, but the next line does not execute. Why?
Is there any better way to do this?
I solved this by putting all the variables on one command line.
Like this:
#ID_IMAGE_NAME="id-svc:" ID_IMAGE_TAG="1.0" \
PING_IMAGE_NAME="ping-svc:" PING_IMAGE_TAG="1.0" \
docker-compose up -d ping id
Here ping and id are my service names.
Maybe the issue was that I was running docker-compose up twice, once for each set of variables.
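A sketch of another option, assuming the standard .env support in docker-compose (this is not what the original answer used): put the variables in a .env file next to docker-compose.yml and they are picked up automatically for substitution.
PING_IMAGE_NAME=ping-svc:
PING_IMAGE_TAG=1.0
ID_IMAGE_NAME=id-svc:
ID_IMAGE_TAG=1.0
With that file in place, a plain docker-compose up -d ping id works without passing anything from the Makefile.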
I am using a default image, and my requirement is to run a few Linux commands when I run the docker-compose file. The OS is Red Hat. This is my docker-compose file:
version: '3.4'
services:
  rstudio-package-manager:
    image: 'rstudio/rstudio-package-manager:latest'
    restart: always
    volumes:
      - '$PWD/rstudio-pm.gcfg:/etc/rstudio-pm/rstudio-pm.gcfg'
    command: bash -c mkdir "/tmp/hello"
    ports:
      - '4242:4242'
This is the error:
rstudio-package-manager_1 | mkdir: missing operand
rstudio-package-manager_1 | Try 'mkdir --help' for more information.
Any help would be appreciated
EDIT
I have to run a few commands after the container starts. They can be added as a bash script too. For that, I tried this:
version: '3.4'
services:
  rstudio-package-manager:
    privileged: true
    image: 'rstudio/rstudio-package-manager:latest'
    restart: always
    environment:
      - RSPM_LICENSE=1212323123123123
    volumes:
      - './rstudio-pm.gcfg:/etc/rstudio-pm/rstudio-pm.gcfg'
      - './init.sh:/usr/local/bin/init.sh'
    command:
      - init.sh
    ports:
      - '4242:4242'
Inside init.sh is this:
alias rspm='/opt/rstudio-pm/bin/rspm'
rspm create repo --name=prod-cran --description='Access CRAN packages'
rspm subscribe --repo=prod-cran --source=cran
And that also didn't work. Can anyone help me out?
You are passing 3 arguments to bash:
-c
mkdir
/tmp/hello
but you need to pass only two:
-c
mkdir /tmp/hello
In other words: -c expects a single "word" (the command string) to follow it. Anything after that is assigned to $0 and the positional parameters rather than being run as part of the command.
Therefore:
bash -c 'mkdir /tmp/hello'
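A quick shell illustration of the difference:
# three words follow bash: -c, mkdir and /tmp/hello; the command string is
# just "mkdir" and /tmp/hello only becomes $0, hence "missing operand"
bash -c mkdir "/tmp/hello"
# two words follow bash: -c and the whole quoted command, so mkdir gets its operand
bash -c 'mkdir /tmp/hello'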
Based on your edit, it doesn't sound like you want to change the command when running a container, but you want to create a derived image based on an existing one.
You want a Dockerfile which extends an existing image, adds your files, and applies your modifications.
docker-compose.yml:
version: '3.4'
services:
  rstudio-package-manager:
    privileged: true
    build:
      context: folder_with_dockerfile
    image: your_new_image_name
    restart: always
    environment:
      - RSPM_LICENSE=1212323123123123
    volumes:
      - './rstudio-pm.gcfg:/etc/rstudio-pm/rstudio-pm.gcfg'
      - './init.sh:/usr/local/bin/init.sh'
    ports:
      - '4242:4242'
folder_with_dockerfile/Dockerfile:
FROM rstudio/rstudio-package-manager:latest
# use the full binary path: shell aliases are not expanded in non-interactive RUN shells
RUN /opt/rstudio-pm/bin/rspm create repo --name=prod-cran --description='Access CRAN packages' && \
    /opt/rstudio-pm/bin/rspm subscribe --repo=prod-cran --source=cran
# RUN <commands for the next layer>
# RUN <commands for a third layer>
Then build the new image with docker-compose build and start normally. You could also run docker-compose up --build to build & run.
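A short usage sketch with the layout above (folder_with_dockerfile next to docker-compose.yml):
# build the derived image and start the service
docker-compose up -d --build
# follow the logs to confirm the package manager starts with the baked-in repo
docker-compose logs -f rstudio-package-manager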
I have a simple but curious question. I have based my image on the Node.js image and installed Redis in it; now I want both Redis and the Node.js app running in the container when I do docker-compose up. However, I can only get one of them working, and Node always gives me an error. Does anyone have any idea:
How to start the Node.js application on docker-compose up?
How to start Redis running in the background in the same image/container?
My Dockerfile is below.
# Set the base image to node
FROM node:0.12.13
# Update the repository and install Redis Server
RUN apt-get update && apt-get install -y redis-server libssl-dev wget curl gcc
# Expose Redis port 6379
EXPOSE 6379
# Bundle app source
COPY ./redis.conf /etc/redis.conf
EXPOSE 8400
WORKDIR /root/chat/
CMD ["node","/root/www/helloworld.js"]
ENTRYPOINT ["/usr/bin/redis-server"]
The error I get from the console logs is:
chat_1 | [1] 18 Apr 02:27:48.003 # Fatal error, can't open config file 'node'
My docker-compose.yml is like below:
chat:
  build: ./.config/etc/chat/
  volumes:
    - ./chat:/root/chat
  expose:
    - 8400
  ports:
    - 6379:6379
    - 8400:8400
  environment:
    CODE_ENV: debug
    MYSQL_DATABASE: xyz
    MYSQL_USER: xyz
    MYSQL_PASSWORD: xyz
  links:
    - mysql
  #command: "true"
A Dockerfile only defines one main process for the container: when both ENTRYPOINT and CMD are present, CMD is simply passed as arguments to the ENTRYPOINT, which is why redis-server is trying to open 'node' as a config file. But you can run multiple processes in a single Docker image using a process manager like supervisord (which the image below uses). There are countless recipes for doing this all over the internet. You might use this Docker image as a base:
https://github.com/million12/docker-centos-supervisor
However, I don't see why you wouldn't use docker compose to spin up a separate redis container, just like you seem to want to do with mysql. BTW where is the mysql definition in the docker-compose file you posted?
Here's an example of a compose file I use to build a node image in the current directory and spin up redis as well.
web:
  build: .
  ports:
    - "3000:3000"
    - "8001:8001"
  environment:
    NODE_ENV: production
    REDIS_HOST: redis://db:6379
  links:
    - "db"
db:
  image: docker.io/redis:2.8
It should work with a Dockerfile looking like the one you have, minus trying to start up Redis (drop the ENTRYPOINT line and keep the node CMD).
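A quick check under the compose file above (service names web and db as given):
# build the node image and start both services
docker-compose up -d --build
# redis answers from its own container (expects PONG)
docker-compose exec db redis-cli ping
# and the web service's logs show whether it reached redis://db:6379
docker-compose logs -f web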