I'm running Ubuntu, and found a library that I'd like to run. The problem is that this library is only compatible with RedHat and Suse.
I'm looking for a way to run a Python application that uses this library in some kind of "box" with a RedHat/Suse library layout, one that would run faster than VirtualBox because it is CLI-only and shares the host's kernel. It would start automatically, run the application, and shut down afterwards.
I think I have seen an application like this before, but I can't remember the name.
It is called a container; notable examples are LXC and Docker (the latter is built atop the former and is more user-friendly).
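For example, here is a minimal sketch with Docker, assuming your application is a single Python script and that the image (here centos:7, which ships a Python 2 runtime) has your library installed; the paths are placeholders:

    # Run the app once against a CentOS userland; --rm removes the
    # container as soon as the process exits
    docker run --rm \
        -v "$PWD/app:/app" \
        centos:7 \
        python /app/main.py

Because the container reuses the host's kernel, startup is nearly instant compared to booting a VM.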
Little intro:
I have two OSes on my PC: Linux and Windows. I need Linux for work, but it freezes on my machine, while Windows does not. I've heard that is a common problem with ASRock motherboards.
That's why I want to switch to Windows for work.
So my idea was to create a Docker image with everything I need for work, such as yarn, make, and a lot of other tools, and run it on Windows to get Linux functionality. You get the idea.
I know that Docker is designed to do only one thing per image, but I gave this a shot.
But there are problems constantly. For example, right now I'm trying to install nvm in my image, but after building the image the nvm command is not found in bash. It is a known problem: running source ~/.profile makes the command available in the current console, but running it while building the image doesn't affect the shell you get when you run the image. So you have to do that manually every time you use the image.
People suggest putting this in .bashrc, which gives me a segmentation fault.
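For reference, after following these suggestions my Dockerfile looks roughly like this (a sketch; the nvm version and the install directory are just what I happened to pick):

    FROM ubuntu:22.04

    RUN apt-get update && apt-get install -y curl ca-certificates

    # Install nvm into a fixed directory instead of ~/.nvm
    ENV NVM_DIR=/opt/nvm
    RUN mkdir -p "$NVM_DIR" \
        && curl -fsSL https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

    # Make every later RUN (and the final shell) a login shell, so the
    # profile scripts that define the nvm function are sourced
    SHELL ["/bin/bash", "-lc"]
    RUN source "$NVM_DIR/nvm.sh" && nvm install --lts

    CMD ["bash", "-l"]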
And that's just my problem for today; I've encountered many more, as I've been trying to create this image for a couple of days already.
So my question is basically this: is it possible to create a fully operational OS in one Docker image, or could one connect multiple images to build an OS, or do I just need to stop and use a virtual machine like a sensible person?
I would recommend using a virtual machine for your use case. Since you will be using this for work, modifying settings, and installing new software, those operations are better suited to a virtual machine, where changing state and configuration is expected.
In contrast, Docker containers are generally meant to be immutable: the running instance of the image should not be altered or configured, so that others can pull down the image and it works "out of the box." Additionally, most Docker containers available on Docker Hub are made to be lean, with only one or two use cases in mind and nothing extra (for security and image-size reasons), so I expect you would frequently run into problems trying to set up a Docker image to work inside of. Lastly, since this is not done frequently, there is less help available online, and Docker-level virtualization does not really suit your situation.
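As a quick illustration of that immutability (a sketch using the stock ubuntu image), anything you change inside a container disappears with the container unless you explicitly mount it from the host:

    # Changes live only as long as the container does
    docker run --rm ubuntu bash -c 'touch /root/notes.txt'
    docker run --rm ubuntu ls /root        # notes.txt is gone

    # Persisting work means mounting a host directory explicitly
    docker run --rm -v "$PWD/work:/root/work" ubuntu \
        bash -c 'touch /root/work/notes.txt'   # survives in ./work on the host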
There are many websites providing cloud coding, such as Cloud9 and repl.it. They must use server virtualization technologies. For example, Cloud9's workspaces are powered by Docker Ubuntu containers; every workspace is a fully self-contained VM (see details).
I would like to know whether there are other technologies for building sandboxed environments. For example, RunKit seems to have a lightweight solution:
It runs a completely standard copy of Node.js on a virtual server created just for you. Every one of npm's 300,000+ packages are pre-installed, so try it out
Does anyone know how RunKit achieves this?
You can see more in "Tonic is now RunKit - A Part of Stripe!" (see discussion):
we attacked the problem of time traveling debugging not at the application level, but directly on the OS by using the bleeding edge virtualization tools of CRIU on top of Docker.
The details are in "Time Traveling in Node.js Notebooks":
we were able to take a different approach thanks to an ambitious open source project called CRIU (which stands for checkpoint and restore in user space).
The name says it all. CRIU aims to give you the same checkpointing capability for a process tree that virtual machines give you for an entire computer.
This is no small task: CRIU incorporates a lot of lessons learned from earlier attempts at similar functionality, and years of discussion and work with the Linux kernel team. The most common use case of CRIU is to allow migrating containers from one computer to another.
The next step was to get CRIU working well with Docker.
Part of that setup is being open-sourced, as mentioned in this Hacker News thread:
It uses Linux containers, currently powered by Docker.
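To make the checkpoint/restore idea concrete: Docker has an experimental integration with CRIU (it requires CRIU to be installed and the daemon's experimental mode enabled; the container and checkpoint names below are made up). A rough sketch:

    # Start a long-running process whose state we want to freeze
    docker run -d --name counter busybox \
        sh -c 'i=0; while true; do echo "$i"; i=$((i+1)); sleep 1; done'

    # Snapshot the whole process tree to disk via CRIU
    # (this stops the container by default)
    docker checkpoint create counter cp1

    # Later, resume the container exactly where it left off
    docker start --checkpoint cp1 counter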
I have an OS (Amazon Linux) that doesn't support a library (libcgj). If I host the application in a Docker container, can I use this library?
As long as your application's base image is one of the OSes that support your library, I think you should be fine. However, if you could give some more information (what application, your Dockerfile, etc.) and describe your specific problem, somebody might answer your question better.
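For instance, something along these lines; this is only a sketch, and the base image, package name, and application path are all assumptions to be replaced with whatever actually applies to libcgj:

    # Pick a base image whose distribution actually ships the library
    FROM ubuntu:22.04

    # Hypothetical package name; substitute the real one for libcgj
    RUN apt-get update && apt-get install -y libcgj

    # Copy in and run your application (the path is a placeholder)
    COPY myapp /usr/local/bin/myapp
    CMD ["/usr/local/bin/myapp"]

The container brings its own userland, so the host being Amazon Linux stops mattering; only the kernel is shared.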
I've been assigned a project to write some kind of a script that will perform a sanity check on a Linux server implementation to determine if it has a number of dependencies installed before source code is deployed to it. I need to check for the presence of applications such as PHP, Nginx, PostgreSQL, etc and likely confirm version numbers for these as well. These dependencies are required for the given source code to be able to run properly on the server.
The problem is, I'm not sure how to approach this, given my inexperience with Linux. I've done some research and thought the solution might be a combination of combing through the list of running services with a command such as "chkconfig --list" and invoking individual applications with commands such as "php -v", then asserting that the results match what I'm looking for.
Pardon me if that makes no sense whatsoever; I really am new to this. I was thinking I could place these "tests" inside a shell script that could be run whenever the server needs to be checked. I would aggregate the true/false results of my assertions and report whether the sanity check passed based on that. Any guidance would be greatly appreciated.
Thank you.
Revision: In lieu of a shell script, I was also thinking I could write this in Python. Does anybody know of any good Python libraries that allow querying of system services?
If your target systems are managed by reasonable people, the software will be managed by the packaging system. On Redhat, Fedora, CentOS or SUSE systems that will be RPM. On any system derived from Debian it will be APT.
So your script can check for one of those two packaging systems. Be warned, though, that you can install RPM on a Debian system, so the mere presence of RPM doesn't tell you the system type. Packages can also be named differently; for example, SUSE names some things a bit differently from Redhat.
So, use uname and/or /etc/issue to determine system type. Then you can look for a particular package version with rpm -q apache or dpkg-query -s postgresql.
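Putting that together, a rough sketch of such a check is shown below; the package names (nginx, postgresql, php) are placeholders for whatever your source code actually depends on:

    #!/bin/bash
    # Detect the packaging system, then query each required package.

    if command -v dpkg-query >/dev/null 2>&1; then
        query() { dpkg-query -W -f '${Version}\n' "$1" 2>/dev/null; }
    elif command -v rpm >/dev/null 2>&1; then
        query() { rpm -q --qf '%{VERSION}\n' "$1" 2>/dev/null; }
    else
        echo "no known package manager found" >&2
        exit 1
    fi

    fail=0
    for pkg in nginx postgresql php; do
        if version=$(query "$pkg"); then
            echo "OK   $pkg $version"
        else
            echo "MISS $pkg"
            fail=1
        fi
    done
    exit $fail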
If the systems are managed by lunatics, the software will be hand-built and installed in /opt or /usr/local or /home/nginx and versions will be unknown. In that case good luck.
I heard that it would be better to use a separate user for installing NGiNX. Is that true? I am thinking of using NGiNX to host virtual hosts that my clients could use for their websites, and I don't want them to have too much control over NGiNX...
I am using the Ubuntu Linux distro.
Thanks in advance for any help and/or tips.
How are you planning to install these applications? Since you say you're using Ubuntu, I would assume that you'll be installing apps either via the graphical package manager or via apt-get or aptitude.
If you're using the graphical program manager, then it should prompt you for your password; this performs a sudo under the hood.
If you're using either apt-get or aptitude or something similar, those programs need to be run as root to install.
In both instances above, the installation scripts for the packages will (should) handle any user-related issues that are necessary for the program you're installing to function properly. For example, when I did an apt-get install jenkins, the installation scripts automatically created a jenkins user for me, and my Jenkins CI server runs as the jenkins user automatically.
Of course, if you're compiling all of these programs by hand, all bets are off and you'll need to figure out the best way to handle this yourself. In that case, though, I'd have to question why you're using Ubuntu in the first place; one of the best parts of using a Linux distribution with sane package management is actually USING said package management! (By this I mean anything Debian-based for sure; I understand that Red Hat's yum provides very similar capabilities, but I haven't used anything Red Hat since around 2003.)
You don't want a process to have any more access than it needs. So yes, you should use a user besides root -- one that has the minimal privileges required to read the files it needs. Typically this involves creating a new nginx (or www or similar) user specifically for the task.
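For example, a sketch on Ubuntu (the name nginx is conventional but arbitrary; note that Ubuntu's packaged nginx already runs its workers as www-data):

    # Create an unprivileged system user with no login shell
    sudo adduser --system --no-create-home --group \
         --shell /usr/sbin/nologin nginx

    # Then have nginx drop privileges to that user for its worker
    # processes, via the top of /etc/nginx/nginx.conf:
    #
    #   user nginx;

The master process still starts as root so it can bind ports 80/443; only the workers that handle client traffic run as the unprivileged user.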