I have an OS (Amazon Linux) that doesn't support a library (libcgj). If I host the application in a Docker container, can I use this library?
As long as your application's base image is one of the OSs that supports your library, I think you should be fine. However, if you could give some more information, like which application, your Dockerfile, and your specific problem, somebody might be able to answer your question better.
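For example, a minimal Dockerfile sketch, assuming the library can be installed as a package on a distro that supports it (the ubuntu:22.04 base image and the package name here are placeholders, not confirmed values):

FROM ubuntu:22.04
# install the library the host OS can't provide (package name is a placeholder)
RUN apt-get update && apt-get install -y libcgj
COPY . /app
CMD ["/app/start.sh"]

The container then carries its own userland, so the Amazon Linux host only needs to run Docker.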
I have a docker image which I am running on Google's Cloud Run.
When I want to run the image locally, I have to give my container additional capabilities like the following:
docker run -p 8080:8080 --cap-add=SYS_ADMIN gcr.io/my-project/my-docker-image
Is there a way of configuring Docker's capabilities in Cloud Run?
I stumbled upon this piece of API documentation from Google, but I don't know how to configure my container. I am not even sure that it is relevant to my situation.
Any help would be really appreciated.
Expanding the POSIX capabilities is not an option on Cloud Run or Cloud Run on GKE, as doing so would expand the attack surface of the underlying host.
Adding capabilities is often the easiest way to make something with special system demands work. A more complex but frequently workable alternative is to modify the container environment or the package configuration so that the extra capability is no longer needed.
If what you're trying to do absolutely requires cap-add, this might be addressed in a feature request to the software package... or it may be a novel use case that Cloud Run cannot support but may in the future with your feedback.
Hi guys,
For various projects, I'm creating single Docker environments. Each Docker container consists of Debian, Nginx, Node.js, etc. and is going to be used by developers as well as in production via Google Cloud's Kubernetes. Since the Node.js and module versions should be the same everywhere, I would like to restrict access to certain npm commands (somehow). Developers often work with different Node.js versions and project modules, and that has caused a lot of trouble in the past. With the Docker containers, I can provide environments with everything you need for a project. To finish this step, I would like to restrict npm command execution and only allow arguments like install, test, etc.
Please drop me a comment if you know how to resolve this :)
Cheers
It is almost impossible to limit your developers to certain commands in the container if they have access to the Dockerfiles and can somehow change the build flow.
But because containers provide isolation, and you can build a custom container for each application based on your base image, it is not a big problem if the version of some package changes for one application, for example in a build step, because it will not affect the other apps. They just have different containers.
So you will not have the compatibility problems you get when many applications share one server and one environment.
The only thing you need to do is make sure that nobody changes the container you use as a base image.
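As a rough illustration, one way to do that is to pin the base image by digest in each project's Dockerfile (the tag and digest below are placeholders): every developer and the production cluster then build from exactly the same base, so the Node.js/npm versions cannot drift between environments.

FROM node:14.17.0@sha256:0000000000000000000000000000000000000000000000000000000000000000
WORKDIR /app
# npm ci installs exactly what package-lock.json specifies, nothing newer
COPY package.json package-lock.json ./
RUN npm ci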
There are many websites providing cloud coding, such as Cloud9 and repl.it. They must use server virtualisation technologies. For example, Cloud9's workspaces are powered by Docker Ubuntu containers. Every workspace is a fully self-contained VM (see details).
I would like to know if there are other technologies for making sandboxed environments. For example, RunKit seems to have a light solution:
It runs a completely standard copy of Node.js on a virtual server
created just for you. Every one of npm's 300,000+ packages are
pre-installed, so try it out
Does anyone know how RunKit achieves this?
You can see more in "Tonic is now RunKit - A Part of Stripe!" (see discussion)
we attacked the problem of time traveling debugging not at the application level, but directly on the OS by using the bleeding edge virtualization tools of CRIU on top of Docker.
The details are in "Time Traveling in Node.js Notebooks"
we were able to take a different approach thanks to an ambitious open source project called CRIU (which stands for checkpoint and restore in user space).
The name says it all. CRIU aims to give you the same checkpointing capability for a process tree that virtual machines give you for an entire computer.
This is no small task: CRIU incorporates a lot of lessons learned from earlier attempts at similar functionality, and years of discussion and work with the Linux kernel team. The most common use case of CRIU is to allow migrating containers from one computer to another
The next step was to get CRIU working well with Docker
Part of that setup is being open-sourced, as mentioned in this HackerNews feed.
It uses linux containers, currently powered by Docker.
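If you want to experiment with the same building blocks yourself, Docker exposes CRIU-based checkpoint/restore as an experimental feature. A minimal sketch, assuming CRIU is installed and the Docker daemon runs with experimental features enabled (the container and checkpoint names are made up):

docker run -d --name looper busybox sh -c 'i=0; while true; do echo $i; i=$((i+1)); sleep 1; done'
docker checkpoint create looper cp1      # freezes the process tree to disk and stops the container
docker start --checkpoint cp1 looper     # resumes from exactly where it was frozen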
I'm running Ubuntu, and found a library that I'd like to run. The problem is that this library is only compatible with RedHat and Suse.
I'm looking for a way to run a Python application that uses this library in some kind of "box" with a RedHat/Suse library structure, but one that would run faster than VirtualBox because it only needs a CLI and, ideally, the host's kernel. It would start automatically, run the application, and close after that.
I think I have seen an application like this before, but I can't remember the name.
It is called a container; notable examples are LXC and Docker (the latter is built atop the former and is more user friendly).
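For example, with Docker you could run the application inside a RedHat-style userland while still using the host's kernel. A rough sketch, where centos:7 is just one possible base image and example-lib.rpm / my_app.py are placeholder names:

docker run --rm -it -v "$PWD":/app -w /app centos:7 bash
# then, inside the container:
yum install -y python ./example-lib.rpm   # the RedHat-only library (placeholder name)
python my_app.py                          # container libraries, host kernel

The container starts in a second or two and is discarded (--rm) when the command exits, which matches the "start, run, close" workflow you describe.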
We develop a Linux-based networking application which will run on multiple servers. We need to develop some solution for remote application updates.
All I can think of right now is using rpm/deb packages, but we would prefer not to lock this to a distro-specific solution. Besides copying files via SSH with some Bash script, what would you recommend?
Thanks.
Distros vary so much in setup and dependencies that I would actually recommend creating distro-specific packages and integrating with each distro's update tool; in the end it normally saves you a ton of trouble.
With the ease of virtualization, it's rather easy to spin up a VMware/VirtualBox image for the various distros to create and test packaging for each of them.
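If you'd rather not maintain native spec files by hand, a tool like fpm (not mentioned above, just one option) can build both rpm and deb packages from the same directory. A sketch with placeholder names and paths:

fpm -s dir -t rpm -n myapp -v 1.0.0 --prefix /opt/myapp ./build
fpm -s dir -t deb -n myapp -v 1.0.0 --prefix /opt/myapp ./build
# serve the resulting packages from a yum/apt repository so each server can
# pull updates through its distro's normal update tooling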
How about Puppet?
Check out Blueprint and Blueprint I/O. Blueprint is a tool that detects all of the packages, file modifications and source installs on a server. It packages them up in a reusable format called a blueprint that can be applied to another server. Blueprint I/O is a tool for pushing to and pulling from another server. Both are open-source. Hope this helps.
https://github.com/devstructure/blueprint (Blueprint on GitHub)
https://github.com/devstructure/blueprint-io (Blueprint I/O on GitHub)
I'm eight years late, but check Ansible.
Ansible is a radically simple IT automation platform that makes your
applications and systems easier to deploy. Avoid writing scripts or
custom code to deploy and update your applications— automate in a
language that approaches plain English, using SSH, with no agents to
install on remote systems.
Also, you can check this guide.
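For a feel of what that looks like, here is a rough ad-hoc sketch over SSH (the group name, inventory file, and paths are placeholders; a real setup would normally use a playbook instead):

ansible appservers -i inventory.ini -m copy -a "src=./myapp/ dest=/opt/myapp/"
ansible appservers -i inventory.ini -b -m service -a "name=myapp state=restarted"
# -b escalates privileges on the remote hosts; no agent needs to be installed there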