I have a Jenkins job which executes a Node.js application. The job is configured to run on Docker only during execution.
Is it possible to download a file from the Node.js application every time the job is executed?
I tried using Node.js plugins to save and download the file. The file gets saved locally, but I am not able to download it.
If your Docker container runs a job and creates a file as the output of that job, and you want the file available outside the container after the job is done, my suggestion is to write the file to a location that is mapped to a host folder via the volume option. Run your Docker container as follows:
sudo docker run -d -v /my/host/folder:/my/location/inside/container mynodeapp:latest
Ensure that your Node.js application writes the output file to /my/location/inside/container. When the job completes, the output file can be accessed on the host machine at /my/host/folder.
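In a Jenkins job you could, for example, use a folder inside the workspace as the host side of the volume, so the file ends up where Jenkins can pick it up or archive it. A minimal sketch, assuming the job itself invokes docker run and reusing the placeholder image name and container path from above:
mkdir -p "$WORKSPACE/output"
# run in the foreground so the job waits until the file has been produced
sudo docker run --rm -v "$WORKSPACE/output:/my/location/inside/container" mynodeapp:latest
# after the container exits, the file written by the Node.js app is in the Jenkins workspace
ls -l "$WORKSPACE/output"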
I'm running a Docker container (with a Spring Boot application inside) and mounting a location for the logs.
docker run -d --name myContainer -p 8080:80 -v /server/appLogs:/var/log myContainer:latest
Here I'm mounting /server/appLogs to the location /var/log. My Spring Boot application logs are written inside /var/log, and I need to get the logs out to the host machine.
But over time the logs accumulate in /server/appLogs and fill up all the space on my server.
I know that in logback-spring.xml we can set the max size and max history, but those settings don't seem to apply to the mounted location.
I have a plan to write a shell script and add a cron job for auto-deleting the logs in that location, something like the sketch below.
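For example, a crontab entry along these lines (the schedule, the *.log name pattern and the seven-day retention are just assumptions):
# every night at 01:00, delete application logs older than 7 days from the mounted host folder
0 1 * * * find /server/appLogs -type f -name '*.log' -mtime +7 -delete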
Is there any other good method to clear the logs in this /server/appLogs location?
Why don't the configurations in logback-spring.xml get applied here?
You can use ELK (https://www.elastic.co/) for application logs.
Here is a complete guide for your reference: https://logz.io/learn/complete-guide-elk-stack/#elasticsearch
I have a Java application that uses log4j2. When I run the application it creates the log files in the logs folder and writes debug statements into the log file.
However, when I build a Docker image and run it, I can see the logs folder being created inside the container and the logs being written to the file, but when I run the docker logs command I can't see any logs.
I have several modules, each with its corresponding log4j2 configuration, but when I run the docker logs command the logs are not all printed for the container, and I want all of the logs to show up there.
I'm using a docker/elk image to display my data in a Kibana dashboard (version 6.6.0), and it works pretty well. I started the service using the command below.
Docker Image git repo:
https://github.com/caas/docker-elk
Command:
sudo docker-compose up --detach
I expected it to run in the background, and it did. The server was up and running for two days, but on the third day Kibana alone stopped, and I used the command below to bring it back up.
sudo docker run -d <Docker_image_name>
It shows as up and running when I use the docker ps command, but when I try to reach the Kibana server in the Chrome browser, it says the site is not reachable.
So I used the command below to restart the service.
sudo docker-compose down
After that I can see the Kibana server up and running in the Chrome browser, but all my data is lost.
I used the below URL in Jenkins to collect the data.
http://hostname:9200/ecdpipe_builds/external
Any idea how can I resolve this issue?
I did not see any persistent storage configuration in the docker-compose file of the image you mentioned on GitHub.
Losing data is common with Docker containers when no persistent storage is configured, so docker-compose down may cause you to lose your data if there is no persistence configured in the docker-compose file.
Persisting log data
In order to keep log data across container restarts, this image mounts
/var/lib/elasticsearch — which is the directory that Elasticsearch
stores its data in — as a volume.
You may however want to use a dedicated data volume to persist this
log data, for instance to facilitate back-up and restore operations.
One way to do this is to mount a Docker named volume using docker's -v
option, as in:
$ sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 \
-v elk-data:/var/lib/elasticsearch --name elk sebp/elk
This command mounts the named volume elk-data to
/var/lib/elasticsearch (and automatically creates the volume if it
doesn't exist; you could also pre-create it manually using docker
volume create elk-data).
So you can set these paths in your docker-compose file accordingly. Here is the link you can check: elk-docker-persisting-log-data
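A minimal sketch of what that could look like in the compose file, assuming a single elk service based on the sebp/elk image (the service and volume names are assumptions, not taken from the repo you linked):
services:
  elk:
    image: sebp/elk
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"
    volumes:
      # named volume so Elasticsearch data survives docker-compose down
      - elk-data:/var/lib/elasticsearch
volumes:
  elk-data: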
Use a Docker volume or a host file location as persistent space.
I am doing profiling on my Node.js app. I am using Google App Engine Flexible and, for the profiling, the npm package 0x. The problem is that this package writes the flame graph inside my Node.js root directory, so how can I retrieve (get access to) that folder? I have SSHed into my App Engine Flexible instance, but there were only two folders, vm-runtime-app and vmagent, and my Node.js source code root directory is not there.
That's because the GAE instance launches the app inside a Docker container. After you SSH to your instance, you need to spawn a shell inside the container that runs your app.
Here are the steps after you SSH to your instance:
sudo docker ps
docker exec -it [CONTAINER-NAME] /bin/bash
The first command lists the running Docker containers, among them your Node runtime container (likely named gaeapp); the second spawns a bash shell inside that container, where you can ls, cd, and pwd around.
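For example, to locate the 0x output (the /app working directory and the *.0x directory name are assumptions; 0x usually writes its output into a <pid>.0x folder):
cd /app
ls -d *.0x
# fallback if it is not there: search the container's filesystem
find / -name 'flamegraph*' 2>/dev/null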
Once you know which directory or file you want to download, you can exit the container shell and copy your file(s) from the container to the GAE instance:
exit
docker cp [CONTAINER-NAME]:/app/package.json ./
From there you can use gcloud (or Cloud Shell) to download the file to your local machine. You could also simply serve it from an exposed HTTP endpoint in your API (e.g. /debug/flamegraph.svg) after it is generated in Node, just sayin'.
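A possible sketch of that last step, run from your local machine (the service, version and instance names are placeholders):
gcloud app instances scp --service=default --version=YOUR_VERSION \
  YOUR_INSTANCE:~/package.json .
# alternatively, gcloud compute scp YOUR_INSTANCE:~/package.json . also works if the VM is visible to Compute Engine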
In my application the user uploads a JMeter test plan (*.jmx file) and I need to execute it on my server. I want to verify that the .jmx file does not contain any code that could harm my server. Are there any plugins or tools that can help me with this?
JMeter is very flexible and there is no way to stop the user from doing harm, as for example:
It is possible to delete any file or folder using Beanshell or JavaScript
It is possible to read any file and send it over to anyone via email
It is possible to fork too many processes or kick off too many threads and bring your server to its knees by overloading it
So there is no guaranteed way to verify a JMeter test; the best thing you can do is run it in an isolated environment, for example:
Create a user with a very limited permission set before executing the test and execute the test as this user
Use a container mechanism (see the sketch after this list) like:
Windows Containers
Linux Containers
FreeBSD Jails
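With Linux containers, for instance, you could also cap the resources the uploaded test may consume. A rough sketch (image name, paths and limits are placeholders, and whether you can restrict networking depends on what the test needs to reach):
sudo docker run --rm \
  --memory=512m --cpus=1 --pids-limit=100 \
  -v /srv/uploads/test.jmx:/tests/test.jmx:ro \
  my_image_with_jmeter /usr/local/jmeter/jmeter.sh -n -t /tests/test.jmx -l /tmp/results.jtl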
After looking through solutions like chroot, FreeBSD jails and Docker, we chose Docker. The advantages we found were:
very easy setup and cool documentation
a container starts in less than a second and there are lots of actions you can perform on a container: copy a file into it, mount a directory, run a process inside it, etc.
I've created one container with JMeter in it. Every time I want to run a JMeter file, I start the container, copy the .jmx file into the container and run JMeter inside the container. Note that I invoke jmeter.sh from outside the container and get the JMeter output on the console outside the container as well. When the JMeter process is over, I stop the container.
Some commands I have used:
docker create --name container_name -it my_image_with_jmeter   # create a container from an image; my_image_with_jmeter is the name of the image I've created
docker start container_name
docker cp /path/to/main/server/file container_name:/path/to/container/   # copy the file from the main server into the container
docker exec -it container_name /usr/local/jmeter/jmeter.sh   # run JMeter inside the container
docker stop container_name
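Putting it together, a single non-GUI run could look like this (the test plan name, the -n/-t/-l flags and the results path are my additions on top of the commands above):
docker start container_name
docker cp /path/to/main/server/test.jmx container_name:/path/to/container/
docker exec -it container_name /usr/local/jmeter/jmeter.sh -n -t /path/to/container/test.jmx -l /path/to/container/results.jtl
docker cp container_name:/path/to/container/results.jtl /path/to/main/server/   # pull the results back out
docker stop container_name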