I can see that web and database servers can run as containers, and it occurred to me: why not implement a file server the same way, backed by centralized storage (e.g. a SAN)?
Has anyone tried this before, or does anyone have any recommendations?
My basic idea is to use 2-3 Docker images to create the file servers (mostly Windows servers), all mounting the same storage. For the front end, I may use DFS namespaces to normalize the UNC path.
Windows-based images have the Server service disabled out of the box, and it cannot be started because the required drivers are removed as well. This will not be possible with Windows containers.
I currently run a Red Hat Linux server with Plesk to host a hundred or so domains. For multiple reasons I'd like to transition away from Plesk and to Docker containers with each virtual host as one or more containers. I'm unclear from what I've read so far what would be the best approach to this.
A typical site includes the doc root file area and one or two MySQL databases. We run PHP on all the sites. Some sites may have constraints on the version of PHP they can run. Some of the sites use SSL. I don't believe there are any constraints on the MySQL versions, but it's of course possible that future MySQL versions could deprecate some feature that is needed. I don't believe there's any dependency on the Apache version, but I do rely on some specific Apache modules being installed. There may be a site or two that have dependencies outside of their doc root and not part of the basic virtual host setup, but I don't believe any require a specific version of Linux.
I would like the containers to have maximum portability so that I can have flexibility in moving sites to whatever server or cloud service I choose. Part of my goal is to retire the current server and move sites to servers which best fit them.
I would also like to try upgrading the PHP version after the containers are created.
So would a single container include the entire doc root file system, including the data directories where users can upload/ftp files? Would it include the MySQL database, or would that be separate? I assume I would include the current version of PHP so that I could upgrade each one when I was ready. Would it include Apache when specific Apache modules are required? Is there a reason to include Apache and/or MySQL in all containers?
One last piece. I'm looking into using CoreOS which utilizes Docker as an integral part.
Any and all inputs are appreciated.
The whole idea of Docker is to run processes/components in isolation, keeping them easily upgradable. I have tinkered with this in the past and have come up with the following.
Create four containers per instance (customer):
Apache or nginx
php-fpm
MySQL
Busybox (as a data container)
Link them all together and mount volumes for all data that should persist in the data container: for example, the MySQL data, /var/www, and the site config files.
This way you can always swap out one of the components while keeping the others. It's questionable, though, whether Docker is a replacement for a full virtual server, as Docker containers do not have a full init system and you'd have to bend things quite a bit to resemble a full virtual machine. Think of them more as "application containers", hence the idea of separating concerns.
Update:
Newer Docker versions come with the docker-compose tool which greatly eases this task.
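As a sketch of that four-container layout in Compose syntax (the image tags, port, and password are illustrative placeholders; named volumes stand in for the busybox data container in newer Docker versions):

```yaml
# docker-compose.yml for one customer instance (illustrative values)
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"            # one external port per customer
    volumes:
      - sitedata:/var/www
    depends_on:
      - php
  php:
    image: php:8.2-fpm       # swap the tag to upgrade PHP independently
    volumes:
      - sitedata:/var/www
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder, use a real secret
    volumes:
      - dbdata:/var/lib/mysql

volumes:
  sitedata:   # replaces the busybox data container
  dbdata:
```

Each component can then be replaced or upgraded independently without touching the others.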
I am trying to solve the same issues with cPanel instead of Plesk.
We could try to accomplish this using plugins for cPanel or Plesk; however, we have to worry about a few things.
We would also have to create premade template images for the containers our clients use. It cannot be just any container from Docker Hub, a user's Dockerfile, etc., because cPanel/Plesk will look for specific log files in specific locations for bandwidth calculations, disk quotas, and so on.
The biggest advantage of this solution is that we can provide CloudLinux-style isolation and easy resource allocation/fair sharing. However, it is not as easy as it sounds.
To answer your question:
Every container will be nearly a complete system, so you will fit fewer clients per host: each container might be around 1 GB and by default has to run its own web server/PHP, hence a larger RAM footprint.
It's painful to run MySQL inside each container; it is better to run MySQL on the host, or in one dedicated container, and share it. That way Plesk's tools will still help.
You may also have to use the standard Apache and reverse-proxy to each container after SSL termination, so that Plesk's standard tools are still used. But then each container will have to run its own web server, or we may have to do some trickery with php-fpm to let the host's Apache talk to each container's php-fpm processes. This is more painful than letting each container just run its own nginx, but it is possible.
This doesn't prevent users from installing their own MySQL server inside their container if they need to.
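The reverse-proxy setup described above could look roughly like this on the host (a sketch, assuming mod_proxy, mod_proxy_http, and mod_ssl are enabled; the hostname, port, and certificate paths are illustrative):

```apache
# Host Apache vhost: SSL termination plus reverse proxy to one
# customer's container listening on 127.0.0.1:8001.
<VirtualHost *:443>
    ServerName customer1.example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/customer1.crt
    SSLCertificateKeyFile /etc/ssl/private/customer1.key

    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8001/
    ProxyPassReverse / http://127.0.0.1:8001/
</VirtualHost>
```

Each additional customer gets another vhost pointing at that customer's container port.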
This kind of thing is easy for someone from cPanel or Plesk to do, but for anyone else it will need a lot of dedicated development time plus testing to make sure it all works.
I was going to invest some time in creating this kind of plugin for cPanel, but I'm still undecided. I may try it if I can rope in some investors.
You can see the amount of interest cPanel shows in this issue: http://features.cpanel.net/responses/dockerio-support
I will leave you to decide.
Also, as an alternative solution:
Instead of playing to cPanel's tune, I created this: https://github.com/paimpozhil/WhatPanel
Here every site runs in its own container (and its own VM if needed).
Migration is as simple as exporting/importing a container with tools like paimpozhil/docker-volume-backup and acaranta/docker-backuper on GitHub.
I haven't completed the migrator, PHP upgrade tools, etc. here, but I will when I have free time.
I am working on an embedded platform where I have an important application that handles sensitive data. I want to protect this application from other applications. For that, I came up with containers.
I have set up a container on my Linux PC using LXC and run an application in it. From the container, I can't access or see any application running on the host, but the reverse is possible (I can access the application in the container from the host). Is there any way to isolate the container from the host machine? Are there any alternatives?
Is there any way to isolate the container from the host machine?
No, sorry. If you want to prevent other applications from accessing the data in the contained application, those other applications must be the ones contained. The hypervisor will always have full access to all contained applications, as it needs that to do its job.
If someone has access to the host machine, it will be possible to access the containers running on it.
What you could do is run a minimal host installation with no services other than Docker, put all your other services in containers, and keep your app's container isolated from the rest.
There are two things you could do. The better way is to run your app as a different user and not give your main account any access to that user's folders and files. The second is to copy your entire system into a sub-folder and use chroot, but that is difficult to set up and probably overkill.
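As a minimal sketch of the first approach (the user name and path are illustrative; the user-creation steps are shown commented because they require root):

```shell
# Create a dedicated system user for the app and hand it the data
# directory (requires root, shown commented as an illustration):
#   useradd --system --home /opt/secureapp appuser
#   chown -R appuser:appuser /opt/secureapp

# The key step is owner-only permissions, demonstrated on a temp dir:
dir=$(mktemp -d)
chmod 700 "$dir"        # rwx for the owner, nothing for anyone else
stat -c '%a' "$dir"     # prints: 700
rmdir "$dir"
```

With mode 700 on the app's directories, no other non-root account can read, list, or enter them.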
I'm building a web app that will scale into a linux cluster with tomcat and nginx. There will be one nginx web server load balancing multiple tomcat app servers. And a database server in behind. All running on CentOS 6.
The app involves users uploading photos. I plan to keep all the images on the file system of the front nginx box and have pointers to them stored in the database. This way nginx can serve them full speed without involving the app servers.
The app resizes the image in the browser before uploading. So file size will not be too extreme.
What is the most efficient/reliable way of writing the images from the app servers to the front-end nginx server? I can think of several ways I could do it, but I suspect some kind of network file system would be best.
What are current best practices?
Assuming you do not use a CMS (Content Management System), you have the following options:
If you have only one front-end web server, the suggestion would be to store the files locally on that server's filesystem.
If you have multiple web servers, you could store the files on a shared SAN or NAS device. This way you would not need to synchronize the files across the servers. Make sure the shared resource is redundant; otherwise, if it goes down, your site goes down with it.
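For example, if the nginx box exports the image directory over NFS, each app server could mount it via /etc/fstab (the hostname and paths here are illustrative):

```
# /etc/fstab entry on each app server: mount the nginx box's NFS
# export so uploads written here appear on the front end immediately.
nfs-server.example.com:/srv/images  /var/www/images  nfs  defaults,_netdev  0 0
```

The app servers then write uploaded files to /var/www/images, and nginx serves them as local static files.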
I am planning to use the Cloud Foundry PaaS service (from VMware) for hosting my Node.js application. I have seen that it has support for Mongo and Redis in the service layer and for the Node.js framework. So far so good.
Now I need to store my media files (images uploaded by users) on a filesystem. I have the metadata stored in Mongo.
I have been searching the internet but have not found good information yet.
You cannot do that for the following reasons:
There are multiple host machines running your application. They each have their own filesystems. Each running process in your application would see a different set of files.
The host machines on which your particular application is running can change moment-to-moment. Indeed, they will change every time you re-deploy your application. Every time a process is started on a new host machine, it will see an empty set of files. Every time a process is stopped on an old host machine, all the files would be permanently deleted.
You absolutely must solve this problem in another way.
Store the media files in MongoDB GridFS.
Store the media files in an object store such as Amazon S3 or Rackspace Cloud Files.
The filesystem in most cloud solutions is "ephemeral", so you cannot rely on it. You will have to use solutions like S3 or a database for this purpose.
I work mostly on desktop applications on the Windows platform. Now I am focusing on Linux to host web applications.
When hosting an application on Linux, I don't follow any procedure. I simply check out the files from SVN and run the application from my home directory. I don't know where to store the application data (e.g. MySQL/Postgres, MongoDB, Redis, Tokyo Tyrant) or where to keep the log files. Also, what do you do when performing backend maintenance on the server while displaying a "maintenance in progress" message to users?
How do you host your application on VPS/dedicated/cloud service running Linux application?
Do you have any checklist? Do you have any tips & tricks?
This is a very broad question.
Where do you store application data? Most people would install MySQL, which stores its data in /var/lib/mysql by default, and Apache, for which /var/www is typically used. These applications are usually configured in /etc/apache2 and /etc/mysql.
Where do you keep log files? These almost always go into /var/log. For configuration, check /etc/syslog.conf.
How do you configure a server maintenance message? Create an HTML file with your message and serve it by configuring Apache in /etc/apache2/httpd.conf.
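One common pattern for the maintenance message (a sketch, assuming mod_rewrite is enabled; the file paths are illustrative) is to return 503 for all requests whenever a flag file exists:

```apache
# Serve maintenance.html for every request while the flag file exists
# (assumes mod_rewrite is enabled; paths are illustrative).
RewriteEngine On
RewriteCond /var/www/maintenance.flag -f
RewriteCond %{REQUEST_URI} !=/maintenance.html
RewriteRule ^ - [R=503,L]
ErrorDocument 503 /maintenance.html
```

Maintenance mode is then toggled by creating or removing /var/www/maintenance.flag, with no Apache restart needed.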
How do you run virtual Linux servers? The easiest way is to launch an instance on Amazon EC2, or you could use Oracle's VirtualBox (similar to VMware, but free). You could also try Xen/KVM, but these are far from trivial, so unless you have a Linux maven around, I would steer clear of them.