Download version files from app engine - node.js

Is there any way to download a file from a Google-managed VM's Docker container?
We lost a file that is in the production version and I want to download it to my computer, but I can't find the app path.

It should be possible.
First, determine the GCE instance that runs your version. The name of the version should be part of the instance name. If your version has multiple instances, you may have to try all of them (or if your file was part of the application, any of them may work).
From the Cloud Console, you can switch the instance from "Google managed" to self-managed, which lets you SSH into it.
Next, use gcloud compute ssh <instance name> to ssh to the instance.
Next, run docker ps to find the container running your application code. You should see a few side-car containers like nginx, but if you look through the names of the containers you should see one for your application.
Finally, run docker exec -it <container id> bash to get a shell inside the container. Or, instead of bash, run a cat command or whatever else you need to do to recover your file.
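As a rough sketch, assuming the placeholder names below (the path /app/path/to/lost-file is hypothetical), the recovery could look like this; docker cp avoids even needing a shell:
gcloud compute ssh <instance name>                              # from your machine
docker ps                                                       # on the instance: find the application container
docker cp <container id>:/app/path/to/lost-file ./lost-file    # copy the file out of the container
exit
gcloud compute scp <instance name>:~/lost-file ./lost-file     # pull it down to your machine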

Related

Where to place app config/logs in container

I've got a python package running in a container.
Is it best practice to install it in /opt/myapp within the container?
Should the logs go in /var/opt/myapp?
Should the config files go in /etc/opt/myapp?
Is anyone recommending writing logs and config files to /opt/myapp/var/log and /opt/myapp/config?
I notice Google Chrome was installed in /opt/google/chrome on my (host) system, but it didn't place any configs in /etc/opt/...
Is it best practice to install it in /opt/myapp within the container?
I place my apps in my container images in /app, so in the Dockerfile I put
WORKDIR /app near the beginning.
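A minimal Dockerfile sketch of that layout (the base image, package and entry module are hypothetical):
FROM python:3.12-slim
WORKDIR /app
# the app lives under /app; logs go to stdout, config comes from the environment
COPY . /app
RUN pip install --no-cache-dir .
CMD ["python", "-m", "myapp"]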
Should the logs go in /var/opt/myapp?
In the container world, the best practice is that your application logs go to stdout and stderr, not into files inside the container. Containers are ephemeral by design and should be treated that way: when a container is stopped and deleted, all of the data on its filesystem is gone.
In a local Docker development environment you can see the logs with docker logs. For example, you can:
start a container named gettingstarted from the image docker/getting-started:
docker run --name gettingstarted -d -p 80:80 docker/getting-started
redirect docker logs output to a local file on the docker client (your machine from where you run the docker commands):
docker logs -f gettingstarted &> gettingstarted.log &
open http://localhost to generate some logs
read the log file in real time with tail, or with any text viewer program:
tail -f gettingstarted.log
Should the config files go in /etc/opt/myapp?
Again, you can put the config files anywhere you want; I like to keep them together with my app, so in the /app directory. But you should not modify the config files once the container is running. Instead, pass configuration to the container as environment variables at startup with the -e flag. For example, to create a MYVAR variable with the value MYVALUE inside the container, start it this way:
docker run --name gettingstarted -d -p 80:80 -e MYVAR='MYVALUE' docker/getting-started
exec into the container to see the variable:
docker exec -it gettingstarted sh
/ # echo $MYVAR
MYVALUE
From here it is the responsibility of your containerized app to understand these variables and translate them into actual application configuration. Most programming languages can read environment variables at runtime, but if that is not an option you can use an entrypoint.sh script that updates the config files with the values supplied through the env vars. A good example of this is the postgresql entrypoint: https://github.com/docker-library/postgres/blob/master/docker-entrypoint.sh
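A minimal sketch of such an entrypoint.sh, assuming a hypothetical config file /app/config.ini that contains a __DB_PASSWORD__ placeholder:
#!/bin/sh
set -e
# replace the placeholder in the config file with the value supplied via -e DB_PASSWORD=...
sed -i "s|__DB_PASSWORD__|${DB_PASSWORD}|g" /app/config.ini
# then hand control over to the container's main command (the image's CMD)
exec "$@"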
Is anyone recommending writing logs and config files to /opt/myapp/var/log and /opt/myapp/config?
As you can see, it is not recommended to write logs into the container's filesystem; rather, have a solution that saves them outside of the container if you need them persisted.
If you understand and follow this mindset, especially that containers are ephemeral, it will be much easier for you to transition from local Docker development to production-ready Kubernetes infrastructure.
Docker is Linux, so almost all of your concerns relate to the best operating system in the world: Linux.
Installation folder
This will help you:
Where to install programs on Linux?
Where should I put software I compile myself?
and this: Linux File Hierarchy Structure
As a summary, in Linux you could use any folder for your apps, bearing in mind:
Don't use system folders: /bin /usr/bin /boot /proc /lib
Don't use filesystem folders: /media /mnt
Don't use the /tmp folder, because its content is deleted on each restart
As you researched, you could imitate Chrome and use /opt
You could create your own folder like /acme if several developers work on the machine, so you can tell them: "No matter the machine or the application, all the custom content of our company will be in /acme". This also helps if you are security-paranoid, because an attacker won't easily guess where your application is. Anyway, if an attacker already has access to your machine, finding everything is just a matter of time.
You could use fine-grained permissions to keep the chosen folder safe.
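For example, a hedged sketch where the acme user and group are hypothetical:
sudo mkdir -p /acme
sudo chown -R acme:acme /acme    # owned by the service account that runs your apps
sudo chmod 750 /acme             # owner can write, group can read, everyone else is locked out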
Log Folder
Similar to the previous paragraph:
You could store your logs in the standard /var/log/acme.log
Or create your own company standard
/acme/log/api.log
/acme/webs/web1/app.log
Config Folder
This is the key for devops.
In traditional, ancient, manual deployments, some folders were used to store application configuration, like:
/etc
$HOME/.acme/settings.json
But in the modern era, and if you are using Docker, you should not manually store your settings inside the container or on the host. The best way to have just one build and deploy it n times (dev, test, staging, uat, prod, etc.) is to use environment variables.
One build, n deploys and the use of env variables are fundamental for devops and cloud applications. Check the famous https://12factor.net/
III. Config: Store config in the environment
V. Build, release, run: Strictly separate build and run stages
This is also good practice in any language. Check Heroku: Configuration and Config Vars
So your Python app should not read or expect a file on the filesystem to load its configuration. Maybe for dev, but not for test and prod.
Your Python app should read its configuration from env variables:
import os
print(os.environ['DATABASE_PASSWORD'])
And then inject these values at runtime:
docker run -it -p 8080:80 -e DATABASE_PASSWORD=changeme my_python_app
And on your developer localhost, export the variable in the same shell before running your application:
export DATABASE_PASSWORD=changeme
python myapp.py
Config for a lot of apps
The previous approach works for a couple of apps. But if you move toward microservices and microfrontends, you will have dozens of apps in several languages. In that case, to centralize the configuration you could use:
spring cloud
zookeeper
https://www.vaultproject.io/
https://www.doppler.com/
Or the Configurator (I'm the author)

After installing docker I am unable to run commands that I used to be able to run

Two examples include snap and certbot. I used to type sudo certbot and would be able to add SSL certs to my nginx servers. Now I get this every time I enter certbot. The same thing goes for snap. I'm new to Docker and don't understand what is going on. Can somebody explain?
Usage: docker compose [OPTIONS] COMMAND
Docker Compose
Options:
--ansi string Control when to print ANSI control characters ("never"|"always"|"auto") (default "auto")
--compatibility Run compose in backward compatibility mode
--env-file string Specify an alternate environment file.
-f, --file stringArray Compose configuration files
--profile stringArray Specify a profile to enable
--project-directory string Specify an alternate working directory
(default: the path of the, first specified, Compose file)
-p, --project-name string Project name
Commands:
build Build or rebuild services
convert Converts the compose file to platform's canonical format
cp Copy files/folders between a service container and the local filesystem
create Creates containers for a service.
down Stop and remove containers, networks
events Receive real time events from containers.
exec Execute a command in a running container.
images List images used by the created containers
kill Force stop service containers.
logs View output from containers
ls List running compose projects
pause Pause services
port Print the public port for a port binding.
ps List containers
pull Pull service images
push Push service images
restart Restart service containers
rm Removes stopped service containers
run Run a one-off command on a service.
start Start services
stop Stop services
top Display the running processes
unpause Unpause services
up Create and start containers
version Show the Docker Compose version information
Run 'docker compose COMMAND --help' for more information on a command.
NEVER INSTALL DOCKER WITH SNAP
I solved the problem. I'm not sure where everything went wrong, but I completely removed snapd from my system following https://askubuntu.com/questions/1280707/how-to-uninstall-snap. Then I installed snap again and everything works.
INSTALL DOCKER WITH THE OFFICIAL GUIDE (APT)
Go here to install Docker the correct way: https://docs.docker.com/engine/install/ubuntu/
If you are new to Docker, follow this advice and NEVER TYPE snap install docker into your terminal. Follow these words of wisdom, or use the first half if you already messed up.
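A condensed, hedged sketch of that recovery path; the exact steps may change over time, so defer to the official guide linked above:
sudo snap remove docker    # get rid of the snap-installed Docker first
# add Docker's official apt repository, then install the engine
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin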

Execute script located inside linked volume on host environment [duplicate]

Can I run a docker command on the host? I installed the aws CLI inside my Docker container; can I somehow use the aws command on the host (which under the hood would use the container's aws)?
My situation is this: I have database backups on the production host. I have a Jenkins cron job that takes an SQL file from the db container and puts it into a server folder. Now I also want Jenkins to upload this backup file to AWS storage, but the host has no aws installed, and I don't want to install anything except Docker on my host, so I think aws should be installed inside a container.
You can't directly do this. Docker containers and images have isolated filesystems, and the host and containers can't directly access each others' filesystems and binaries.
In theory you could write a shell script that wraps docker run, name it aws, and put it in your $PATH:
#!/bin/sh
exec docker run --rm -it awscli aws "$@"
but this doesn't scale well, requires you to have root-level permissions on the host, and you won't be able to access files on the host (like ~/.aws/config) or environment variables (like $AWS_ACCESS_KEY_ID) without additional setup.
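A rough sketch of what that additional setup could look like, still using the hypothetical awscli image from the snippet above:
#!/bin/sh
# forward credentials, config and the current directory into the container
exec docker run --rm -it \
  -v "$HOME/.aws:/root/.aws" \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_DEFAULT_REGION \
  -v "$PWD:/work" -w /work \
  awscli aws "$@"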
You can just install software on your host instead, and it will work normally. There's no requirement to use Docker for absolutely everything.

Azure Docker Container - how to pass startup commands to a docker run?

Faced with this screen, I have managed to easily deploy a Rails app to Azure, on the Docker container App Service, but logging it is a pain since the only way to access logs is through FTP.
Has anyone figured out a good way of running the docker run command inside Azure so it essentially accepts any params?
In this case it's trying to simply log to a remote service; if anyone also has other suggestions for retrieving logs besides FTP, I would massively appreciate it.
No, at the time of writing this is not possible; you can only pass in what you would normally put after the container name, i.e. docker run container:tag %YOUR_STARTUP_COMMAND_WILL_GO_HERE_AS_IS%.
TL;DR: you cannot pass any startup parameters to a Linux Web App except for the command that needs to be run in the container. Let's say you want to run your container called MYPYTHON using the PROD tag and run some Python code; you would do something like this:
Startup Command = /usr/bin/python3 /home/code/my_python_entry_point.py
and that would get appended (AT THE VERY END ONLY) to the actual docker command:
docker run -t username/MYPYTHON:PROD /usr/bin/python3 /home/code/my_python_entry_point.py
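For what it's worth, the startup command can also be set (and logs streamed) from the Azure CLI; a hedged sketch where the resource group and app name are placeholders:
az webapp config set --resource-group MyResourceGroup --name my-app --startup-file "/usr/bin/python3 /home/code/my_python_entry_point.py"
az webapp log tail --resource-group MyResourceGroup --name my-app    # stream container logs instead of pulling them over FTP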

Jmeter jmx file verification

In my application, a user uploads their JMeter test plan (*.jmx file) and I need to execute it on my server. I want to verify that the jmx file does not contain any code that could harm my server. Are there any plugins or tools that can help me?
JMeter is very flexible and there is no way to stop the user from doing harm; for example:
It is possible to delete any file or folder using Beanshell or JavaScript
It is possible to read any file and send it over to anyone via email
It is possible to fork too many processes or kick off too many threads and bring your server to its knees by overloading it
So there is no guaranteed way to verify a JMeter test; the best thing you can do is run it in an isolated environment, like:
Create a user with a very limited permission set before executing the test, and execute the test as this user
Use a container mechanism like:
Windows Containers
Linux Containers
FreeBSD Jails
After looking through solutions like chroot, FreeBSD Jails and Docker, we chose Docker. The advantages we found were:
very easy setup and good documentation
a container starts in less than a second, and there are lots of actions you can do with a container: copy a file into the container, mount a directory, run a process inside the container, etc.
I've created one container with JMeter in it. Every time I want to run some JMeter file, I start the container, copy the jmx file into the container and run JMeter inside the container. Note that I invoke jmeter.sh from outside the container (via docker exec) and get the JMeter output in the console, again outside the container. When the JMeter process is over, I stop the container.
Some commands I have used:
docker create --name container_name -it my_image_with_jmeter   # create a container from an image; my_image_with_jmeter is the name of the image I've created
docker start container_name
docker cp /path/to/main/server/file container_name:/path/to/container/   # copy file from main server to container
docker exec -it container_name /usr/local/jmeter/jmeter.sh   # run jmeter inside container
docker stop container_name
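Tying those steps together, a small hedged wrapper sketch; the container name, image and JMeter path are the placeholders used above, and the jmx/results paths are hypothetical:
#!/bin/sh
# run an uploaded jmx file inside the prepared JMeter container, keeping the output on the host
set -e
docker start container_name
docker cp "$1" container_name:/tmp/test.jmx
docker exec container_name /usr/local/jmeter/jmeter.sh -n -t /tmp/test.jmx -l /tmp/results.jtl
docker cp container_name:/tmp/results.jtl ./results.jtl
docker stop container_name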
