I am using Sematext to monitor a small composition of Docker containers, plus the Logsene feature to gather the web traffic logs from one container running a Node Express web application.
It all works fine until I update and restart the web server container to pull in a new code build. At that point, Sematext Logsene seems to get detached from the container, so I lose the HTTP log trail in the monitoring. I still see the Docker events, so it seems only the logs part is broken.
I am running Sematext "manually" (i.e. it's not in my Docker Compose) like this:
sudo docker run -d --name sematext-agent --restart=always -e SPM_TOKEN=$SPM_TOKEN \
-e LOGSENE_TOKEN=$LOGSENE_TOKEN -v /:/rootfs:ro -v /var/run/docker.sock:/var/run/docker.sock \
sematext/sematext-agent-docker
And I update my application simply like this:
docker-compose pull web && docker-compose up -d
where web is the web application service name (among other services such as the database and memcached),
which recreates the web container and restarts it.
At this point Sematext stops forwarding HTTP logs.
To fix it I can restart Sematext agent like this:
docker restart sematext-agent
And the HTTP logs start arriving in their dashboard again.
So, I know I could just append the agent restart command to my release script, but I am wondering if there's a way to prevent it from becoming detached in the first place? I guess it's something to do with it monitoring the run files.
I have searched their documentation and FAQs, but not found anything specific about this effect.
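For now, the workaround can be folded into the release script itself. This is just a sketch combining the commands already shown above, assuming the same service and container names:

```shell
#!/bin/sh
# release.sh - pull the new web image, recreate the container,
# then bounce the Sematext agent so it re-attaches to the new container
set -e

docker-compose pull web
docker-compose up -d

# workaround: restart the agent so the HTTP logs keep flowing
docker restart sematext-agent
```

This at least keeps the log trail intact until the underlying detachment issue is understood.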
I seem to have fixed it, but not in the way I'd expected.
While looking through the documentation I found the sematext-agent-docker package with the Logsene integration built-in has been deprecated and replaced by two separate packages.
"This image is deprecated.
Please use sematext/agent for monitoring and sematext/logagent for log collection."
https://hub.docker.com/r/sematext/sematext-agent-docker/
You now have to use both Logagent https://sematext.com/docs/logagent/installation-docker/ and a new Sematext Agent https://sematext.com/docs/agents/sematext-agent/containers/installation/
With these both installed, I did a quick test by pulling a new container image, and it seems that the logs still arrive in their web dashboard.
So perhaps the problem was specific to the previous package, and this new agent can "follow" the container rebuilds better somehow.
So my new commands are (just copied from the documentation, but I'm using env-vars for the keys)
docker run -d --name st-logagent --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-e LOGS_TOKEN=$SEMATEXT_LOGS_TOKEN \
-e REGION=US \
sematext/logagent:latest
docker run -d --restart always --privileged -P --name st-agent \
-v /:/hostfs:ro \
-v /sys/:/hostfs/sys:ro \
-v /var/run/:/var/run/ \
-v /sys/kernel/debug:/sys/kernel/debug \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-e INFRA_TOKEN=$SEMATEXT_INFRA_TOKEN \
-e CONTAINER_TOKEN=$SEMATEXT_CONTAINER_TOKEN \
-e REGION=US \
sematext/agent:latest
Where:
CONTAINER_TOKEN == old SPM_TOKEN
LOGS_TOKEN == old LOGSENE_TOKEN
INFRA_TOKEN == new to me
I will see if this works in the long run (not just the quick test).
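To repeat the quick test described above, I just run the normal release step again and then check that the new log agent is still shipping. A sketch, using the container name from the commands above (the exact log lines will differ on your system):

```shell
# recreate the web container exactly as in a normal release
docker-compose pull web && docker-compose up -d

# confirm the log agent is still attached: its own logs should show
# that it picked up the recreated container and keeps shipping
docker logs --tail 20 st-logagent
```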
Related
I am trying to make use of Azure Container Instances and I need some explanation about the service itself.
I want to use ACI to launch a Docker container that runs a command, print the command's output, and then stop the container.
Is ACI the right service for that kind of thing?
The Dockerfile looks like this:
FROM alpine
RUN apk add ffmpeg
CMD ffprobe -show_streams -show_format -loglevel warning -v quiet -print_format json /input.video
The docker run command to make it work looks like this:
docker run --name ffprobe-docker -i -v /path/test.ts:/input.video --rm 72e84b2825af
The issue?
I am not able to launch my script on Azure like I can on my machine.
What have I done?
I created a private registry where I uploaded my image.
I ran the az container create command, which created the resource.
Now I don't know what to do next in order to make it work as expected,
because the container is terminated, and az container exec --exec-command shows nothing on the terminal once the command has ended.
With ACI, you can create a container from your own Docker image in ACR or another registry, and you can run commands in it. But pay attention: you cannot run Docker commands inside it, because containers cannot be nested. An ACI instance cannot be a Docker server; it can only be a container.
If you use the CLI command az container exec --exec-command, the command you pass as the --exec-command parameter should be a shell command that can run in your Docker image.
I think the biggest advantage of ACI is that it is the fastest and simplest option, without having to manage any virtual machines and without having to adopt a higher-level service.
Hope this helps. If you have any more questions, please leave me a message.
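For a run-once command like the ffprobe example above, one approach (a sketch only; the resource group, container name, and registry address here are placeholders, not from the question) is to create the container with a Never restart policy and then read its stdout with az container logs after it terminates:

```shell
# create the container; --restart-policy Never lets it run once and stop
# (myResourceGroup, ffprobe-aci and the registry address are examples)
az container create \
  --resource-group myResourceGroup \
  --name ffprobe-aci \
  --image myregistry.azurecr.io/ffprobe-docker:latest \
  --restart-policy Never

# after the command finishes, fetch whatever the container wrote to stdout
az container logs --resource-group myResourceGroup --name ffprobe-aci
```

This avoids az container exec entirely: the JSON that ffprobe printed is retrieved from the terminated container's logs.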
Consider I have a directory, for example /demoenv. I would like to start a binary in it as a docker container.
Essentially, it would work like a chroot, but with the numerous extra features (and the numerous disadvantages) that a Docker container provides.
In this case, it is absolutely not a problem if it can't take part in the very useful docker image/container committing mechanism (I have an alternate solution for that).
Can I somehow do it?
My first trouble is that in this case, I don't really have an image to start.
My second problem is that the -v volume mount option (parameter of docker run) forbids mounting the root partition with the following message:
docker: Error response from daemon: Invalid bind mount spec "/demoenv:/": Invalid specification: destination can't be '/' in '/demoenv:/'.
Finally I found a workaround.
Unfortunately, simply mounting "/" (either with the VOLUME command in the Dockerfile, or by giving -v to docker run) doesn't work - it can't mount the root directory as a volume.
Furthermore, VOLUME in the Dockerfile doesn't seem to work at all.
The best (or, least bad) solution is to mount the sub-directories as separate volumes, roughly like this:
docker run -d -h demobox --name demobox \
-v /demobox/bin:/bin \
-v /demobox/boot:/boot \
-v /demobox/etc:/etc \
-v /demobox/home:/home \
-v /demobox/lib:/lib \
-v /demobox/opt:/opt \
-v /demobox/root:/root \
-v /demobox/sbin:/sbin \
-v /demobox/srv:/srv \
-v /demobox/usr:/usr \
-v /demobox/var:/var \
demobox
Unfortunately, it also needs to have a fake image to run (it is "fake" because all of its relevant /* directories will be over-mounted by the docker daemon). You can use anything for that (I used the default image of the distribution).
Additional info: as the entrypoint, we can also give /sbin/init to the container. In my tests, systemd wasn't able to run in it, but upstart could (apt-get install upstart). By giving /sbin/init as the ENTRYPOINT in the Dockerfile and then calling telinit 3 after starting the container, we can essentially run a Docker container as a virtual server.
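A minimal sketch of the "fake" image plus the init entrypoint described above (the base image and names are just examples; any distribution image will do, since its relevant directories get over-mounted at run time):

```shell
# build a minimal "fake" image whose /sbin/init becomes the entrypoint;
# all of its relevant /* directories will be over-mounted by the -v flags
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y upstart
ENTRYPOINT ["/sbin/init"]
EOF

docker build -t demobox .
```

After building, start it with the long docker run command shown above, then run telinit 3 inside the container.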
I need to delete all information from the Tor browser.
Is it possible to create a Docker image with a normal browser behind a Tor proxy and use it through ssh -X?
Running it with --rm=true automatically deletes the container data and always uses this same configuration.
Is it possible to use this container in the cloud? For example on AWS, Azure etc.?
Is it possible to mount the download directory on my host machine?
If you're on Linux or Mac you can do this.
See item 9 in Jess' blog post: Docker Containers on the Desktop:
# -v mounts the X11 socket, -e passes the display, --device passes sound
docker run -it \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
--device /dev/snd \
--name tor-browser \
jess/tor-browser
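One extra step that is often needed on the host before starting the container (an assumption; whether it's required depends on your X server's access control) is allowing local clients to talk to the X socket:

```shell
# allow local connections to the X server so the container can draw windows;
# run `xhost -local:` afterwards to revoke the permission again
xhost +local:
```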
this is a rather conceptual question. I am running three node.js webservers as Docker containers behind an HAProxy instance, which is also in a Docker container. The containers get started by docker-compose, so everything is pretty standard.
My problem: HAProxy does health checks to see if one of my node.js containers dies, so it can redirect traffic; so far so good. But I cannot find a good solution for how to restart dead containers automatically.
Are there any good practices for this?
You could use the --restart=always policy when you run the container, so that upon exit it will be automatically restarted by the Docker daemon.
Take a look at the documentation for more details on restart policies.
You can start the special container willfarrell/autoheal, which monitors and restarts unhealthy containers labeled with the autoheal label on the host.
docker run -d \
--name autoheal \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
willfarrell/autoheal
See https://github.com/willfarrell/docker-autoheal for details.
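For autoheal to act, the target containers need a Docker HEALTHCHECK and the autoheal label. A sketch for one of the node.js containers (the image name my-node-app and the /health endpoint are assumptions; use whatever health URL your app actually exposes):

```shell
# label the container for autoheal and give it a health check;
# autoheal restarts it once Docker marks it unhealthy
docker run -d \
  --label autoheal=true \
  --health-cmd 'curl -f http://localhost:3000/health || exit 1' \
  --health-interval 10s \
  --health-retries 3 \
  my-node-app
```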
Maybe you can try to set up Sensu to configure the same health checks and restart unhealthy containers.
When running your containers, just set the
restart: always
option in your docker-compose.yaml file.
I want to dockerize my entire node.js app and run everything inside a docker container, including tests.
It sounds easy if you're using PhantomJS and I actually tried that and it worked.
One thing I like about running tests in Chrome, though, is easy debugging. You can start the Karma server, open devtools, set a breakpoint in a test file (using a debugger statement) and run Karma - it will connect to the server, run the tests, and stop at the breakpoint, allowing you from there to do all sorts of things.
Now how do I do that in a docker container?
Should I start the Karma server (with Chrome) on the host machine and somehow tell the Karma runner inside the container to connect to it to run the tests? (How would I do that anyway?)
Is it possible to run Chrome in a Docker container? (It sounds like a silly question, but when I tried docker search desktop a bunch of things came up, so I assume it is possible.)
Maybe it's possible to debug tests in PhantomJS (although I doubt it would be as convenient as with Chrome devtools)
Would you please share your experience of running and debugging Karma tests in a docker container?
upd: I just realized it's possible to run Karma server in the container and still debug tests just by navigating to Karma page (e.g. localhost:9876) from the host computer.
However, I still have a problem - I am planning to set up and start using Protractor as well. Those tests definitely need to run in a real browser (PhantomJS has way too many quirks). Can anyone tell me how to run Protractor tests from inside a Docker container?
I'm not familiar with Protractor and its workflow, but if you need a browser inside a container, did you see this article? I'll take the liberty of quoting it:
# --net host: may as well YOLO; --cpuset controls the cpu; --memory caps memory
# -v /tmp/.X11-unix mounts the X11 socket; -e DISPLAY passes the display
# the Downloads mount is optional but nice; the /data mount saves Chrome state
# /dev/snd plus --privileged gives us sound
$ docker run -it \
--net host \
--cpuset 0 \
--memory 512mb \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
-v $HOME/Downloads:/root/Downloads \
-v $HOME/.config/google-chrome/:/data \
-v /dev/snd:/dev/snd --privileged \
--name chrome \
jess/chrome
To dockerize your Protractor test cases, use either of these images from Docker Hub: caltha/protractor or webnicer/protractor-headless.
Then run the command "docker run -it {imageid} protractor.conf.js". See the instructions in those repositories.
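For example, with webnicer/protractor-headless the usual pattern (a sketch; the /protractor mount point follows that image's README, so check it against your setup) is to mount the project directory and point the container at your config file:

```shell
# run the Protractor suite from the current directory inside the container;
# --rm cleans the container up after the test run finishes
docker run -it --rm \
  -v "$(pwd)":/protractor \
  webnicer/protractor-headless protractor.conf.js
```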