I have a DB2 LUW instance running in a Docker container. How can I increase its transaction log size?
I tried to run "db2 connect to UEQ1D" in the Docker container CLI, but it responds with "command not found". Do I need to install db2cmd or something similar on the container, or how else can I run db2 commands in Docker? Or is there a simpler way?
Appreciate your help
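In case it helps: a minimal sketch of how db2 commands are usually run inside the container, assuming the standard ibmcom/db2 image where the instance user is db2inst1 (the container name and the log sizes below are illustrative):

# open a shell inside the container as the DB2 instance owner
$ docker exec -ti <container_name> bash -c "su - db2inst1"

# the db2 CLI is now on the PATH; adjust the transaction log configuration
$ db2 connect to UEQ1D
$ db2 update db cfg for UEQ1D using LOGFILSIZ 8192 LOGPRIMARY 20 LOGSECOND 40
$ db2 terminate

Note that a database deactivate/activate (or an instance restart) may be needed before the new log file size takes effect.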
I have successfully enabled the Docker API, which I can connect to from Jenkins. Now I'm trying to create Docker slave agents dynamically: I want 100 active Docker slave agents that can immediately pick up jobs from the queue and execute them.
I'm trying to create a Node-based Docker image that can act as the slave agent, and the image file looks like below:
Dockerfile:
FROM node:15.12.0-alpine3.10
RUN mkdir -p /home/achu/nodeSlave
CMD ["node", "npm --version"]
Output:
[ArrchanaMohan@devops-monitoring-achu ~]$ sudo docker build -t docker-slave-nodes:1.0 .
Sending build context to Docker daemon 7.368GB
The GB count keeps increasing and I never see a "build success" message. I'm very new to the Docker world, and I'm not sure whether I'm doing the right thing.
Can someone please help me resolve this issue?
Thanks in advance.
Update:
Problem image: (screenshot not reproduced here)
You need to use a Jenkins slave base image and install Node.js on it.
Here is one of them:
FROM openshift/jenkins-slave-base-centos7:v3.11
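A rough sketch of a full Dockerfile built on that base image (the Node.js version and the NodeSource install method are assumptions; adjust to your needs):

FROM openshift/jenkins-slave-base-centos7:v3.11

USER root
# install Node.js from the NodeSource repository (version is illustrative)
RUN curl -sL https://rpm.nodesource.com/setup_14.x | bash - \
    && yum install -y nodejs \
    && yum clean all

# switch back to the unprivileged user the base image runs as
USER 1001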
I'm using the docker-elk image to display my data in a Kibana dashboard (version 6.6.0), and it works pretty well. I started the service using the command below.
Docker Image git repo:
https://github.com/caas/docker-elk
Command:
sudo docker-compose up --detach
I expected it to run in the background, and it did. The server was up and running for two days, but on the third day Kibana alone stopped. I used the command below to bring it back up:
sudo docker run -d <Docker_image_name>
It shows as up and running when I use the docker ps command, but when I try to reach the Kibana server in the Chrome browser it says it is not reachable.
So I used the command below to restart the service:
sudo docker-compose down
After that I can see the Kibana server in the Chrome browser, up and running, but all my data is lost.
I used the below URL in Jenkins to collect the data:
http://hostname:9200/ecdpipe_builds/external
Any idea how I can resolve this issue?
I did not see any persistent storage configuration for the image you mentioned in its GitHub docker-compose file.
Losing data is common with Docker containers when no persistent storage is configured, so docker-compose down may cause you to lose your data if there is no persistent storage configuration in the docker-compose file.
Persisting log data
In order to keep log data across container restarts, this image mounts
/var/lib/elasticsearch — which is the directory that Elasticsearch
stores its data in — as a volume.
You may however want to use a dedicated data volume to persist this
log data, for instance to facilitate back-up and restore operations.
One way to do this is to mount a Docker named volume using docker's -v
option, as in:
$ sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 \
-v elk-data:/var/lib/elasticsearch --name elk sebp/elk
This command mounts the named volume elk-data to
/var/lib/elasticsearch (and automatically creates the volume if it
doesn't exist; you could also pre-create it manually using docker
volume create elk-data).
So you can set these paths in your docker-compose file accordingly, for example as sketched below. Here is the link you can check: elk-docker-persisting-log-data
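For illustration, a compose file with that named volume could look roughly like this (the service and volume names are assumptions; match them to the docker-compose.yml of the repo you are using):

version: '3'
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"
    volumes:
      - elk-data:/var/lib/elasticsearch
volumes:
  elk-data: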
Use a Docker volume or a host file location as persistent space.
I have installed the official MongoDB Docker image in a VM on AWS EC2, and the database already has data in it. If I stop the VM (to save expenses overnight), will I lose all the data contained in the database? How can I make it persistent in those scenarios?
There are multiple options to achieve this, but the two most common ways are:
1) Create a directory on your host to mount the data
2) Create a Docker volume to mount the data
1) Create a data directory on a suitable volume on your host system, e.g. /my/own/datadir. Start your mongo container like this:
$ docker run --name some-mongo -v /my/own/datadir:/data/db -d mongo:tag
The -v /my/own/datadir:/data/db part of the command mounts the /my/own/datadir directory from the underlying host system as /data/db inside the container, where MongoDB by default will write its data files.
Note that users on host systems with SELinux enabled may see issues with this. The current workaround is to assign the relevant SELinux policy type to the new data directory so that the container will be allowed to access it:
$ chcon -Rt svirt_sandbox_file_t /my/own/datadir
The source of this is the official documentation of the image.
2) Another possibility is to use a docker volume.
$ docker volume create my-volume
This will create a docker volume in the folder /var/lib/docker/volumes/my-volume. Now you can start your container with:
docker run --name some-mongo -v my-volume:/data/db -d mongo:tag
All the data will be stored in my-volume, i.e. in the folder /var/lib/docker/volumes/my-volume. So even when you delete your container and create a new mongo container linked with this volume, your data will be loaded into the new container.
You can also use the --restart=always option when you perform your initial docker run command. This means that your container will automatically restart after a reboot of your VM. When you have persisted your data as well, there will be no difference between your DB before and after the reboot.
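For example, a minimal sketch combining both points (image tag and names are placeholders):

# start Mongo with a named volume and an automatic restart policy
$ docker volume create my-volume
$ docker run --name some-mongo --restart=always -v my-volume:/data/db -d mongo:tag

# even if the container is removed and recreated, the data in my-volume survives
$ docker rm -f some-mongo
$ docker run --name some-mongo --restart=always -v my-volume:/data/db -d mongo:tag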
I am a newbie to Docker. I am using a Mac, hence I have installed Docker in a HortonWorks Sandbox on VirtualBox.
I am trying to create 2 containers out of an Ubuntu base image: one container runs Node.js and the other runs MySQL.
I am able to create a container and it is listed under docker ps, but when I try to specify a port for that container, it doesn't show me any error, yet the port is not getting set.
Command used to add a port to a running container:
docker run -p 8080:8080 nodejsapp
where nodejsapp is the image name of one container.
Any help would be really appreciated. Thanks.
It's hard to say without seeing your Dockerfile and without knowing what error messages you're seeing, but my guess is that you're not telling NodeJS what port to run on. The convention in NodeJS is to do this with the NODE_PORT environment variable:
docker run -e NODE_PORT=8080 nodejsapp
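For example, combining the port mapping from the question with the environment variable might look like this (whether your app actually reads NODE_PORT is an assumption; the image name and port are taken from the question):

$ docker run -d --name nodeapp -p 8080:8080 -e NODE_PORT=8080 nodejsapp
# then check from the host
$ curl http://localhost:8080/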
We are trying to move to Docker for deployment purposes. Our architecture requires a Redis container, a MongoDB container, and several Node.js- and Java-based Docker containers.
So my question is: if the Redis/MongoDB Docker container crashes, do we lose all the data that it had?
We want isolation, but at the same time we don't want to lose data due to malfunctions/crashes. Is it even possible to achieve this with Docker, or is it not relevant here?
Any help or comments will be greatly appreciated.
Thanks
The answer is: YES. If a container crashes so that it cannot be restored/restarted, the data is gone. But normally containers can be restarted and continued; in that case the data is not lost.
For example, the following sequence from the Docker docs illustrates how container startup works. Note that the data is not lost here until the container is removed.
# Start a new container
$ JOB=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
# Stop the container
$ sudo docker stop $JOB
# Start the container
$ sudo docker start $JOB
# Restart the container
$ sudo docker restart $JOB
# SIGKILL a container
$ sudo docker kill $JOB
# Remove a container
$ sudo docker stop $JOB # Container must be stopped to remove it
$ sudo docker rm $JOB
Whenever you execute a docker run command you start a new container with fresh data. The data is based on the image you provide and that data is consistent (unless you rebuild the image of course).
So, how should you set up Docker to keep your data intact? I think a good approach is to keep the important data mounted in a volume. Volumes are simply external folders (i.e. folders from the host system) that hold the data, and this data will not be lost even if you reinstall the entire Docker daemon.
Example:
docker run -v /some/local/dir:/some/dir/in/redis-container my/redis
This mounts the host folder /some/local/dir as the folder /some/dir/in/redis-container in the running container. If, for example, Redis stores its data in that folder, you are all set: reboots and crashes can be survived.
For more info about Docker volumes, check out the docs. Another great article, also from the Docker website, is Managing Data in Containers.
EDIT: After comments I clarified the answer - the data is lost if the container can't be restarted (total crash).
If a container crashes, you won't lose any data - at least not more than with a regular application crash.
The container itself is unlikely to crash (after all, it's only an envelope for your application(s)). Your application(s) running in a container can crash, and if they do, their data will still be on the container filesystem. All you have to do in such a situation is to restart the failed container.
One case where you could lose something is if you explicitly tell Docker to remove the container when it's not running anymore (--rm option).
That being said, for IO-intensive applications such as databases, it is highly recommended to host data on Docker volumes, for performance reasons (a docker volume is a traditional filesystem, while the container default filesystem is a stack of layers and will be slower).
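For instance, for the Redis container mentioned in the question, a minimal example using a named volume could look like this (this assumes the official redis image, which stores its data under /data; --appendonly yes turns on persistence to disk):

$ docker volume create redis-data
$ docker run -d --name redis -v redis-data:/data redis redis-server --appendonly yes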