Cloud Run, how to access root folder to upload - linux

I succeeded in uploading a Node.js container image to Cloud Run through Docker and it works fine.
But now I have to put an executable file (in binary form) into the root directory. (It would probably be nice to set basic file permissions as well.) But I can't find a way to access it. I know it's running on Debian 64-bit, right? How can I access the root folder?

Although it is technically possible to download/copy a file to a running Cloud Run instance, that action would need to take place on every cold start. Depending on how large the files are, you could run out of memory as file system changes are in-memory. Containers should be considered read-only file systems for most use cases except for temporary file storage during computation.
Cloud Run does not provide an interface to log in to an instance or remotely access files. The Docker exec type commands are not supported. That level of functionality would need to be provided by your application.
Instead, rebuild your container with updates/changes and redeploy.
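If it helps, the rebuild/redeploy loop for that case could look roughly like this (a sketch; project, service, and file names are placeholders, and the Dockerfile lines are shown as comments):
# In the Dockerfile, bake the executable into the image and set its permissions, e.g.:
#   COPY mytool /usr/local/bin/mytool
#   RUN chmod 755 /usr/local/bin/mytool
docker build -t gcr.io/MY_PROJECT/my-service .
docker push gcr.io/MY_PROJECT/my-service
gcloud run deploy my-service --image gcr.io/MY_PROJECT/my-service --region us-central1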

Related

Dynamically generating files on heroku

I want to create sketch-files dynamically and make them downloadable. I want to use sketch-constructor (here is an example that is working on my computer).
The code runs on Heroku and there is even the console.log() of the fulfilled promise, but I can see neither the directory nor the sketch file itself.
Thanks for your help!
The Heroku filesystem is ephemeral - that means that any changes to the filesystem whilst the dyno is running only last until that dyno is shut down or restarted. Each dyno boots with a clean copy of the filesystem from the most recent deploy.
In addition, under normal operations dynos will restart every day in a process known as "Cycling".
These two facts mean that the filesystem on Heroku is not suitable for persistent storage of data. In cases where you need to store files, you can use a dedicated file storage service such as AWS S3.
However, the file is created before it gets deleted. To confirm/check whether the file is on the filesystem, run the following commands:
heroku login
heroku run bash -a APPNAME
cd /app    # the one-off dyno usually starts here already
You can then navigate the folder structure of the app.

Export Memory Dump Azure Kubernetes

I need to export a memory dump from an AKS cluster and save it in some location.
How can I do it? Is it easy to export it to a storage account? Is there another solution? Can someone give me a step-by-step guide?
EDIT: the previous answer was wrong; I didn't pay attention to the fact that you needed a dump. You'll actually need to get it from Boot Diagnostics or some command line:
https://learn.microsoft.com/en-us/azure/virtual-machines/troubleshooting/boot-diagnostics#enable-boot-diagnostics-on-existing-virtual-machine
This question is quite old, but let me nevertheless share how I realized it:
Linux has an internal setting called RLIMIT_CORE which limits the size of the core dump you'll receive when your application crashes - this is what you find quite quickly.
Next, you have to define the location of where core files are saved, which is done in the file /proc/sys/kernel/core_pattern. The given path can either be a relative file name (saved next to the binary which crashed), an absolute path (absolute to the mounted namespace) or - here is where it gets interesting - a pipe followed by an absolute path to an executable (application or script). This script will (according to the docs - see headline Piping core dumps to a program) be started as user and group root - but furthermore, it will (according to this post in the Linux mailing list) also be executed in the global namespace - in other words, outside of the container.
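For reference, on a plain Linux host those two knobs can be inspected and set roughly like this (values are just examples; writing core_pattern requires root):
ulimit -c unlimited                          # RLIMIT_CORE for the current shell: no size cap on core files
cat /proc/sys/kernel/core_pattern            # show where the kernel currently writes core files
echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern   # plain path variant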
If you are like me, and you do not have access to the image used for new nodes on your AKS cluster, you will want to set these values using a DaemonSet, which runs a pod on every node.
Armed with all this knowledge, you can do the following:
Create a DaemonSet - a pod running on every machine performing the initial setup.
This DaemonSet will run as a privileged container to allow it to switch to the root namespace.
After having switched namespaces successfully, it can change the value of /proc/sys/kernel/core_pattern.
The value should be something like |/bin/dd of=/core/%h.%e.%p.%t (dd will take its stdin, which is the core file, and save it to the location given by its of= parameter). Core files will now be saved at /core/. The name of the file can be explained by the variables found in the docs for core files.
After knowing that the files will be saved to /core/ of the root namespace, we can mount our storage there - in my case Azure File Storage. Here's a tutorial on how to mount Azure File Storage.
Pods of a DaemonSet have their RestartPolicy set to Always. Since the job of your pod is done and you don't want it to restart automatically, keep it running with sleep infinity (a sketch of such a setup command follows these steps).
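Put together, the command the privileged DaemonSet container runs could look roughly like this (a sketch under the assumptions above; the Azure File Storage mount at /core on the node is set up separately, as in the tutorial mentioned in the previous step):
# enter the node's namespaces via the host's PID 1, then set the pipe-based core_pattern
nsenter --target 1 --mount --uts --ipc --net --pid -- \
  sh -c 'echo "|/bin/dd of=/core/%h.%e.%p.%t" > /proc/sys/kernel/core_pattern'
# RestartPolicy is Always, so keep the container alive instead of exiting
sleep infinity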
This writeup is almost a copy of what I discovered while working with Microsoft support. Here's the thread in their forum, which contains an almost finished configuration for a DaemonSet.
I'll leave some links here which I used during my research:
how to generate core file in docker container?
How to access docker host filesystem from privileged container
https://medium.com/@patnaikshekhar/initialize-your-aks-nodes-with-daemonsets-679fa81fd20e
Sidenote:
I could also have just mounted the Azure File Storage into every container and set /proc/sys/kernel/core_pattern to a plain /core/%h.%e.%p.%t, but that would require declaring the mount on every container. Going this way, I could free the pods' configuration of this administrative task and put it where it (in my opinion) belongs: in the initial machine setup.

Why is a vendor/node_modules mapping in a volume considered a bad practice?

Could someone explain to me what is happening when you map (in a volume) your vendor or node_modules files?
I had some speed problems with my Docker environment and read that I don't need to map vendor files there, so I excluded them in the docker-compose.yml file and the speed was instantly much faster.
So I wonder what is happening under the hood when you have vendor files mapped in your volume, and what's happening when you don't?
Could someone explain that? I think this information would be useful to more than just me.
Docker does some complicated filesystem setup when you start a container. You have your image, which contains your application code; a container filesystem, which gets lost when the container exits; and volumes, which have persistent long-term storage outside the container. Volumes break down into two main flavors, bind mounts of specific host directories and named volumes managed by the Docker daemon.
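For concreteness, the two flavors look roughly like this (image and path names are made up):
docker run -v "$PWD/config":/app/config myimage      # bind mount of a specific host directory
docker volume create app_data                        # named volume managed by the Docker daemon
docker run -v app_data:/var/lib/app myimage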
The standard design pattern is that an image is totally self-contained. Once I have an image I should be able to push it to a registry and run it on another machine unmodified.
git clone git@github.com:me/myapp
cd myapp
docker build -t me/myapp . # requires source code
docker push me/myapp
ssh me@othersystem
docker run me/myapp # source code is in the image
# I don't need GitHub credentials to get it
There are three big problems with using volumes to store your application or your node_modules directory (a typical docker run that sets this up is sketched after the list):
It breaks the "code goes in the image" pattern. In an actual production environment, you wouldn't want to push your image and also separately push the code; that defeats one of the big advantages of Docker. If you're hiding every last byte of code in the image during the development cycle, you're never actually running what you're shipping out.
Docker considers volumes to contain vital user data that it can't safely modify. That means that, if your node_modules tree is in a volume, and you add a package to your package.json file, Docker will keep using the old node_modules directory, because it can't modify the vital user data you've told it is there.
On MacOS in particular, bind mounts are extremely slow, and if you mount a large application into a container it will just crawl.
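The setup being described usually looks something like this (hypothetical image and paths): the host source tree is bind-mounted over the image's code, and an anonymous volume is layered over node_modules:
docker run -v "$PWD":/app -v /app/node_modules myimage
The second -v creates an anonymous volume that hides the node_modules directory baked into the image, which is exactly where the second problem above bites: rebuilding the image after changing package.json does not refresh that volume.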
I've generally found three good uses for volumes: storing actual user data across container executions; injecting configuration files at startup time; and reading out log files. Code and libraries are not good things to keep in volumes.
For front-end applications in particular there doesn't seem to be much benefit to trying to run them in Docker. Since the actual application code runs in the browser, it can't directly access any Docker-hosted resources, and there's no difference if your dev server runs in Docker or not. The typical build chains involving tools like Typescript and Webpack don't have additional host dependencies, so your Docker setup really just turns into a roundabout way to run Node against the source code that's only on your host. The production path of building your application into static files and then using a Web server like nginx to serve them is still right in Docker. I'd just run Node on the host to develop this sort of thing, and not have to think about questions like this one.

File read/write on cloud(heroku) using node.js

First of all I am a beginner with node.js.
In Node.js, when I use functions such as fs.writeFile(), the file is created and is visible in my repository. But when this same process is done on a cloud such as Heroku, no file is visible in the repository (cloned via git). I know the file is being made because I am able to read it, but I cannot view it. Why is this? Also, how can I view the file?
I had the same issue, and found out that Heroku and other cloud services generally prefer that you don't write to their file system; everything you write/save is stored in an "ephemeral filesystem", which is really like a ghost file system.
Usually you would want to use Amazon S3 or Redis for JSON files etc., and for other bigger files like MP3s.
If you rent a remote server, like ECS, with a Linux system and a mounted storage space, then this might work.
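As a rough illustration of the S3 suggestion (bucket and file names are placeholders; from Node you would typically use the AWS SDK rather than the CLI):
aws s3 cp /tmp/report.json s3://my-bucket/reports/report.json   # upload the generated file
aws s3 cp s3://my-bucket/reports/report.json /tmp/report.json   # read it back later, from any dyno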

Sandboxing a node.js script written by a user running on the server

I'm developing a platform where users can create their own "widgets"; widgets are basically JS snippets (in the future there will be HTML and CSS too).
The problem is they must run even when the user is not on the website, so basically my service will have to schedule those user scripts to run every now and then.
I'm trying to figure out the best way to "sandbox" that script. One of the first ideas I had was to run it in its own process inside a Docker container, so that even if the user somehow managed to get into a shell, it would be (more or less) a virtual machine and hopefully they would be locked inside.
I'm not a Docker specialist, so I'm not even sure if that makes sense; anyway, that would yield another problem, which is spinning up hundreds of containers to run one simple JavaScript snippet.
Is there any "secure" way of doing this? Perhaps running the script in an empty scope and somehow removing access to the "require" method?
Another requirement would be to kill the script if it times out.
EDIT:
- Found this relevant stackexchange link
This can be done with Docker: you would create a Docker image with their script in it and then run the image, which creates a container for the script to run in.
You could even make it super easy and create a common image, based on the official Node.js Docker image, and pass in the user's custom files at run time, run them, save the output, and then you are done. This approach is good because there is only one image to maintain, and it keeps the setup simple.
The best way to pass in the data would be to create a volume mount on the container, and mount the user's directory into the container at the same spot every time.
For example, let's say you had a host with a directory structure like this.
/users/
aaron/
bob/
chris/
Then when you run the containers you just need to change the volume mount.
docker run -v /users/aaron:/user/ myimagename/myimage
docker run -v /users/bob:/user/ myimagename/myimage
I'm not sure what the output would be, but you could write it to /user/output inside the container and the output would be stored in the user's output directory.
As far as timeouts go, you could write a simple script that looks at docker ps and, if a container has been running for longer than the limit, runs docker stop on it.
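A minimal sketch of such a watchdog, assuming the containers are started from the image above and GNU date is available:
#!/bin/sh
LIMIT=60   # seconds a user script is allowed to run
for id in $(docker ps -q --filter ancestor=myimagename/myimage); do
  started=$(docker inspect -f '{{.State.StartedAt}}' "$id")   # RFC 3339 start time
  age=$(( $(date +%s) - $(date -d "$started" +%s) ))
  if [ "$age" -gt "$LIMIT" ]; then
    docker stop "$id"
  fi
done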
Because everything is run in a container, you can run many at a time and they are isolated from each other and the host.
