I work mostly on desktop applications on the Windows platform. Now I am focusing on the Linux platform to host web applications.
While hosting the application on Linux, I don't follow any procedure. I simply check out the files from SVN and run the application from my home directory. I don't know where to store the application data (for example: MySQL/Postgres, MongoDB, Redis, or Tokyo Tyrant), or where to keep the log files. And what tips do you have for doing backend maintenance work on the server while showing users a 'maintenance in progress' message?
How do you host your application on a VPS, dedicated server, or cloud service running Linux?
Do you have a checklist? Any tips & tricks?
Very broad question, but here are some pointers.
Where do you store application data? Most people would install MySQL, which stores its data in /var/lib/mysql, and Apache, which typically serves from /var/www. These applications are usually configured in /etc/mysql and /etc/apache2.
Where to keep log files? These almost always go into /var/log. For configuration, check /etc/syslog.conf.
How do you configure a server maintenance message? Create an HTML file with your message and serve it by configuring Apache in /etc/apache2/httpd.conf, as sketched below.
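For example, a minimal mod_rewrite approach (the file name maintenance.html is an assumption; adapt it to your layout):

    # Serve a 503 with the maintenance page for every request
    RewriteEngine On
    RewriteCond %{REQUEST_URI} !=/maintenance.html
    RewriteRule ^ - [R=503,L]
    ErrorDocument 503 /maintenance.html

Returning 503 rather than 200 also tells clients and search engines that the outage is temporary.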
How to do virtual Linux servers? The easiest way is to launch an instance on Amazon EC2, or you could use Oracle's VirtualBox (similar to VMware, but free). You could also try Xen/KVM, but these are far from trivial, so unless you have a Linux maven around I would steer clear of them.
I can see that some web and DB servers are able to run as containers, and it occurred to me: why not implement a file server as a container too, backed by centralized storage (e.g. a SAN)?
Has anyone tried this before? Do you have any recommendations for me?
My basic idea is to use 2-3 Docker images to create the file servers (mostly Windows servers), all mounting the same storage. For the front end, I may go for DFS Namespaces to normalize the UNC path.
Windows-based images have the Server service disabled out of the box. It's also impossible to start it, since the required drivers are removed as well. This will not be possible with Windows containers.
I currently run a Red Hat Linux server with Plesk to host a hundred or so domains. For multiple reasons I'd like to transition away from Plesk and to Docker containers, with each virtual host as one or more containers. From what I've read so far, I'm unclear what the best approach to this would be.
A typical site includes the doc root file area and one or two MySQL databases. We run PHP on all the sites. Some sites may have constraints on the version of PHP they can run. Some of the sites use SSL. I don't believe there are any constraints on the MySQL versions, but it's of course possible that future MySQL versions could deprecate some feature that is needed. I don't believe there's any dependency on the Apache version, but I do rely on some specific Apache modules being installed. There may be a site or two that have dependencies outside of their doc root and not part of the basic virtual host setup, but I don't believe any require a specific version of Linux.
I would like the containers to have maximum portability so that I can have flexibility in moving sites to whatever server or cloud service I choose. Part of my goal is to retire the current server and move sites to servers which best fit them.
I would also like to try upgrading the PHP version after the containers are created.
So would a single container include the entire doc root file system, including the data directories where users can upload/ftp files? Would it include the MySQL database, or would that be separate? I assume I would include the current version of PHP so that I could upgrade each one when I was ready. Would it include Apache when specific Apache modules are required? Is there a reason to include Apache and/or MySQL in all containers?
One last piece: I'm looking into using CoreOS, which utilizes Docker as an integral part.
Any and all inputs are appreciated.
The whole idea of Docker is to run processes/components in isolation, keeping them easily upgradable. I have tinkered with this in the past and came up with the following.
Create four containers per instance (customer):
Apache or nginx
php-fpm
MySQL
Busybox (as a data container)
Link all of them together, and set volumes for all data that should persist in the data container: the MySQL data and /var/www plus site config files, for example.
This way you can always swap out one of the components while keeping the others. It's questionable whether Docker is a solution for a full virtual server, though, as Docker containers do not have a full init system and you'd have to bend things quite a bit to resemble a full virtual machine. Think of them more as "application containers", hence the idea of separating concerns.
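A rough sketch of that wiring with plain docker run, where the container names, image tags, and password are illustrative only:

    # Data container holding everything that must persist
    docker run --name cust1_data -v /var/www -v /var/lib/mysql busybox true
    # Service containers mount those volumes and link to each other
    docker run -d --name cust1_db --volumes-from cust1_data \
        -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
    docker run -d --name cust1_php --volumes-from cust1_data \
        --link cust1_db:mysql php:5.6-fpm
    docker run -d --name cust1_web --volumes-from cust1_data \
        --link cust1_php:php -p 80:80 nginx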
Update:
Newer Docker versions come with the docker-compose tool, which greatly eases this task.
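A minimal docker-compose.yml sketch of the same four-container layout (image tags and the password are placeholders, not recommendations):

    version: "2"
    services:
      web:
        image: nginx
        ports: ["80:80"]
        links: [php]
        volumes_from: [data]
      php:
        image: php:5.6-fpm
        links: [db]
        volumes_from: [data]
      db:
        image: mysql:5.6
        environment:
          MYSQL_ROOT_PASSWORD: secret
        volumes_from: [data]
      data:
        image: busybox
        volumes:
          - /var/www
          - /var/lib/mysql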
I am trying to solve the same issues with cPanel instead of Plesk.
We can try to accomplish this using plugins for cPanel or Plesk; however, we have to worry about a few things.
We also have to create some premade template images for containers that our clients can use. It cannot be just any container from Docker Hub, user Dockerfiles, etc., because cPanel/Plesk will look for specific log files in specific locations for bandwidth calculations, disk quotas, and so on.
The biggest advantage of this solution is that we can provide CloudLinux-style isolation and easy resource allocation/fair sharing. However, it is not as easy as it sounds.
To answer your question:
Every container will be nearly a complete system, so you will be able to host fewer clients per host: each container might be around 1 GB, and each by default has to run its own web server/PHP, hence a bigger RAM footprint.
It's painful to run MySQL inside each container; it is better to run MySQL on the host or in one dedicated container and share it. That way Plesk's tools will help.
You may also have to keep the standard Apache and reverse-proxy it to each container after SSL termination, so that Plesk's standard tools are still used. But then each container will have to run its own web server, or we may have to do some trickery with php-fpm to allow the host's Apache to talk to each container's php-fpm processes. This is more painful than letting each container just run its own nginx, but it is possible.
It doesn't prevent users from installing their own MySQL server within their container if they need to.
This kind of thing is easy for someone from cPanel or Plesk to do, but for others it will need a lot of dedicated development time plus testing to make sure it all works.
I was going to invest some time in creating this kind of plugin for cPanel, but I am still undecided. I may try it if I can rope in some investors.
You can see the amount of interest cPanel shows in this issue here: http://features.cpanel.net/responses/dockerio-support
I will leave you to decide.
Also, as an alternative solution: instead of dancing to cPanel's tune, I created this: https://github.com/paimpozhil/WhatPanel
Here every site runs in its own container (and its own VM if needed).
Migration is as simple as exporting/importing a container with tools like github.com/paimpozhil/docker-volume-backup and github.com/acaranta/docker-backuper; for example:
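The underlying idea can be sketched with plain Docker commands (the container name site1 is illustrative). Note that docker export does not include volume data, which is why the volumes are archived separately:

    # Snapshot the container's filesystem
    docker export site1 | gzip > site1.tar.gz
    # Archive its volumes by mounting them into a throwaway busybox
    docker run --rm --volumes-from site1 -v "$PWD":/backup busybox \
        tar czf /backup/site1-volumes.tar.gz /var/www /var/lib/mysql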
I didn't complete the migrator/PHP upgrade tools etc. here, but I will when I have free time.
I'm building a web app that will scale out onto a Linux cluster with Tomcat and nginx. There will be one nginx web server load-balancing multiple Tomcat app servers, and a database server behind them, all running on CentOS 6.
The app involves users uploading photos. I plan to keep all the images on the filesystem of the front nginx box and store pointers to them in the database. This way nginx can serve them at full speed without involving the app servers.
The app resizes images in the browser before uploading, so file sizes will not be too extreme.
What is the most efficient/reliable way of writing the images from the app servers to the nginx front-end server? I can think of several ways to do it, but I suspect some kind of network filesystem would be best.
What are current best practices?
Assuming you do not use a CMS (Content Management System), you have the following options:
If you have only one front-end web server, the suggestion would be to store the files locally on that web server, in a local Unix filesystem.
If you have multiple web servers, you could store the files on a SAN or NAS shared network device; this way you would not need to synchronize the files across the servers. Make sure that the shared resource is redundant, or your site will be down whenever it is.
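For example, a bare-bones NFS arrangement might look like this, where the addresses and paths are assumptions to adapt:

    # On the front-end/storage box, in /etc/exports:
    /var/www/images  10.0.0.0/24(rw,sync)
    # ...then reload the export table:
    exportfs -ra
    # On each app server (CentOS 6), mount the share:
    mount -t nfs 10.0.0.10:/var/www/images /mnt/images

Keep in mind NFS adds a single point of failure unless the storage itself is redundant, as noted above.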
Are there any Linux scripts for uploading a Node.js app to my own Linux server?
Something like AppFog or Heroku. I have a dedicated Linux server, and I am working on Linux too.
I want to upload my Node.js application to the server and restart Node.js with one shell command.
I could write the script myself, but maybe I don't need to reinvent the wheel?
Popular choices using SSH:
rsync
fabric
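For example, a one-command rsync deploy might look like the following; the host alias myserver, the path /srv/myapp, and the use of forever as the process manager are all assumptions:

    #!/bin/sh
    # Push the app (minus node_modules) and restart it on the server
    rsync -az --delete --exclude node_modules ./ myserver:/srv/myapp/
    ssh myserver 'cd /srv/myapp && npm install --production \
        && forever restart app.js'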
For serious stuff you really should look at configuration management and server provisioning applications like (in no particular order):
Chef
Puppet
Ansible (+1 for the name; "Ender's Game" is one of my favorite books)
Most revision control systems allow for before/after-commit hooks; I sometimes use these to run tests before a commit and automatically deploy to the acceptance environment after it.
See also Jenkins CI (Continuous Integration is a hot topic).
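As a concrete sketch of that hook idea, assuming a bare Git repository on the server (the work tree path and restart command are illustrative):

    #!/bin/sh
    # hooks/post-receive in the bare repo on the server
    GIT_WORK_TREE=/srv/myapp git checkout -f master
    cd /srv/myapp && npm install --production
    forever restart app.js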
I use fleet from substack to manage deployment. Fleet is a git-based tool that allows you to deploy code and manage your node processes running on remote servers.
Adding in seaport and either bouncy or node-http-proxy is a great way to build an application that is made up of lots of small components that work together.
I'm wondering how the Common Unix Printing System (CUPS) handles user actions and modifies the configuration files. From my humble background, a web page can only access/edit files when there is a web server and a server-side script, so how does it work without installing a web server?
Does it work through some shell script? If yes, how does that happen?
It is not the web frontend that alters the configuration files. At least not if you compare it to the "typical" setup: HTTP server, scripting engine, script.
CUPS itself contains a daemon, which also acts as a minimal web server. That daemon has control over the configuration files, and it is free to accept commands from any web client it serves. So no magic here.
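You can observe this directly: the CUPS scheduler (cupsd) answers HTTP on port 631 itself, and the same daemon owns the configuration it edits:

    # The web interface is served by cupsd, not a separate web server
    curl -s http://localhost:631/ | head
    # The configuration files it manages
    ls -l /etc/cups/cupsd.conf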
Turning that around, you could also set up a system running a "normal" HTTP server with rights that let it alter all system configuration files. It's all a question of how that server/daemon is set up and started; it comes down to simple rights management. You certainly do not want to do that, though ;-)