We are developing an application that uses 3 Docker containers. Development is done in a Windows environment, where we are hitting issues that prevent the containers from running.
Those issues seem to be related to running the containers in a Windows environment.
As a workaround, we are thinking about configuring a remote server in IntelliJ that accesses a remote Linux virtual machine.
Please note that when we access the VM using PuTTY and run the application there, the containers run successfully.
So my question is: is it possible to continue development using that workaround? All we need is to have all the containers running and to launch and view the application interfaces on Windows.
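One way to set that workaround up, sketched below under assumed names (`user@vm-host` is a placeholder for the VM's SSH login), is to leave the Docker daemon on the Linux VM and point the Windows-side Docker CLI at it over SSH; IntelliJ can target the same remote daemon, and published ports are then reachable from a Windows browser at the VM's address.

```
# Minimal sketch, run on the Windows side (user@vm-host is a placeholder):
# create a Docker context that talks to the daemon on the Linux VM over SSH.
docker context create linux-vm --docker "host=ssh://user@vm-host"
docker context use linux-vm

# Containers now build and run on the VM, not on Windows.
docker compose up -d

# The application UI is then reachable from Windows at http://vm-host:<published-port>
```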
I have a Docker environment that contains Linux containers.
On one of those pods, I am running a program that tries to access a Windows server under a secure domain (my own).
How can I tell the container to run as a specific user, or, vice versa, how can I grant my Docker environment access to my server?
I prefer the first option if possible.
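For the "run as a specific user" part, the sketch below shows the standard Docker mechanisms (the user name `svcuser` and the program path are assumptions). Note that this only sets the Linux user inside the container; it does not by itself authenticate the process against a Windows domain, which usually needs credentials or Kerberos configuration on top.

```
# Option 1 (assumed names): bake the user into the image.
# Dockerfile
FROM ubuntu:22.04
# create a hypothetical service account
RUN useradd --create-home svcuser
# everything from here on, including the runtime process, runs as svcuser
USER svcuser
CMD ["/usr/local/bin/my-program"]

# Option 2: override the user at run time instead:
#   docker run --user 1000:1000 my-image
```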
I have not used Docker before at all, but I have a Flask app running on an Azure server right now which I would like to mostly replicate to another server. The current setup is:
Ubuntu 16.10
Anaconda for my Python environments
A few systemd files to configure nginx and uwsgi
My goal is to start fresh on my current server without doing a fresh install of the OS (which I am not able to do). I have a few issues with environments and multiple Python versions which I would like to escape from.
I would then like to take this setup and move it over to another server which is completely fresh (a brand new Azure instance that hasn't been touched yet). Is this possible with Docker?
To make things clear, Docker is not a technology for migrating applications from one server to another. Docker is a "virtualization" technology which allows you to isolate applications while they are running. Once you have this isolation, the Docker containers can be migrated to any server that just has Docker installed. Thus you relieve yourself of issues like "it works on this machine, but it doesn't work on that one".
In order to do that, you first need to Dockerize your application. Your requirements are very common, and there are many samples online of how to containerize such applications.
However, you first need to learn about Docker to get started (which takes a couple of hours or days). You can start learning about Docker here. Once you have your application Dockerized and working on one machine, moving it to another server is a piece of cake.
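As a rough illustration of what Dockerizing this particular setup might look like, here is a minimal sketch; the base image, file names, the `app:app` module path and the port are assumptions rather than the asker's actual layout, and nginx would typically sit in a separate container (or on the host) proxying to it.

```
# Dockerfile (minimal sketch, assumed layout)
# full python image so uWSGI can compile during pip install
FROM python:3.10
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt uwsgi
COPY . .
# serve the Flask object "app" defined in app.py on port 8000
CMD ["uwsgi", "--http", "0.0.0.0:8000", "--module", "app:app", "--processes", "2"]
```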
I have developed an application and am using Docker to build it. I would like to ship it as a VMware OVF package. What are my options? How do I ship it so the customer can deploy it in their VMware environment?
Also, I am using a base Ubuntu image and have installed Node.js, MongoDB and other dependencies on it. But I would like to configure my Node.js-based application and the MongoDB database as services within the package I intend to ship. I know how to configure these as services using init.d on a normal VM. How do I go about this in Docker? Should I keep my init.d files in my application folder and copy them over to the Docker container during the build? Or are there better ways?
I would appreciate any advice.
Update:
The reason I ask this question is: my target users need not know Docker. The application should be easy to deploy for someone who does not have Docker experience. Having all services in a single VM makes it easy to troubleshoot issues: all log files are saved under the /var/log directory for the different services, and we can see the status of all the services at once, rather than the user having to look into each Docker service, and possibly troubleshoot issues with Docker itself.
But at the same time, I find it convenient to build the application the Docker way.
VMware vApps are usually made up of multiple VMs running together to provide a service; they may have start-up dependencies and so on.
With Docker, you can run those VMs as containers on a single Docker host VM, so a single VM removes the need for a vApp.
On the other hand, the containerization philosophy pushes us towards microservices; in short, for your case, putting each service in a separate container. Then write a docker-compose file to bring the containers up and run it at start-up (a sketch follows below). After that you can make an OVF of your Docker host VM and ship it.
A better way, in my opinion, is to create Docker images, push them to your registry and let the customers pull them, then provide the docker-compose file for them.
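A minimal docker-compose sketch for that kind of package might look like the following; the service names, image tags, port and environment variable are assumptions, and `restart: unless-stopped` is what brings the services back up with the Docker host VM.

```
# docker-compose.yml (minimal sketch, assumed names and ports)
services:
  mongo:
    image: mongo:6
    restart: unless-stopped
    volumes:
      - mongo-data:/data/db     # keep database files across container restarts
  app:
    build: .
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      - MONGO_URL=mongodb://mongo:27017/appdb
    depends_on:
      - mongo
volumes:
  mongo-data:
```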
I am going to work with Node.js and PostgreSQL on Linux. I have read for many hours about how Docker actually works, but I am still not sure whether a Docker environment is needed before starting my project, or whether I can adopt Docker after the project is completed.
Let's first understand what Docker is and how you can use it in your project.
Docker has three core concepts:
1) Docker engine: a lightweight runtime and robust tooling that builds and runs your Docker containers.
2) Docker image: a carbon copy of your project environment, including all environment dependencies such as the base operating system, host entries, environment variables, databases, web/application servers. In your case: the Linux distribution of your choice, Node.js and the required modules, PostgreSQL and its configuration.
3) Docker container: can be visualized as a virtual Linux server running your project. Each time you use docker run, a new container is launched from the Docker image.
You can visualize a Docker environment as a lightweight virtual machine where you can run your project without any external interference (host entries / environment variables / RAM / CPU) from other projects.
So as a developer, you can develop your project on your dev machine, and once it's ready to be pushed to QA/Staging you can build a Docker image of your project, which can then be deployed on any environment (QA/Staging/Production).
You can launch multiple containers from your image, on a single physical server or on several.
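As a small illustration of that build-once / run-anywhere flow (the image name, tag and port mappings below are assumptions):

```
# build the project into an image once
docker build -t my-node-app:1.0 .

# launch as many containers from that image as you need, on one or several hosts
docker run -d --name staging -p 8080:3000 my-node-app:1.0
docker run -d --name prod -p 80:3000 my-node-app:1.0
```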
You can introduce Docker whenever you want. If you are using multiple servers, you can put one of them in a Docker container and have the other (non-Dockerised) part of the solution make requests to it.
Or you could Dockerise them both.
Basically, introduce Docker when you feel the time is right.
I like to divide a large project into multiple sections - e.g. front-end web server, backend authentication server, backend API server 1, backend API server 2, etc.
As each part of the project gets completed, I Dockerise it. The other parts then use the Dockerised solution.
I have a Windows Azure VM (Linux server 14.04) running and am able to access the VM from the command line on my Mac/Windows machines. I'm running a Node.js server and a MongoDB instance on this Azure VM.
The problem is that this Node.js server on the VM gets disconnected after some time (a timeout of sorts). Is it possible to have the server on the VM run indefinitely and keep serving requests?
PS: The VM itself keeps running properly, but the Node.js server on it times out after some time. Please help!
Thanks.
It is probably just crashing!
A bare-bones Node application does not get monitored by itself.
This might sound a little crazy if you come from other web frameworks / platforms like ASP.NET or PHP, where you had IIS or Apache monitoring your application for you, which was kind of nice, to be honest. In Node.js you choose your own process manager / monitoring system. From my experience, the most popular and well-supported PMs are the ones listed in the Express.js documentation: http://expressjs.com/advanced/pm.html
Azure VMs will not sleep or shut themselves down, and they will not stop any servers running on them.
And per your description:
the Node.js server on the VM itself times out after some time
The issue seems to be the same as what @svenskunganka said.
You can check what caused the error at that "some time" by leveraging PM2, as @Daniel and @svenskunganka suggested.
When you deploy your Node.js project with PM2, it will monitor the application and log errors automatically. You can also monitor your VM metrics (such as CPU usage and network in/out) from the Azure Portal Monitor panel.
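For reference, a minimal PM2 sketch of that setup (the entry file `server.js` and the process name are assumptions) might look like this:

```
# install PM2 and run the server under it, so it is restarted when it crashes
npm install -g pm2
pm2 start server.js --name api

# inspect the error/crash logs to find out what happened at that "some time"
pm2 logs api

# keep the server running across VM reboots
pm2 startup    # prints the command that registers PM2 as a boot service
pm2 save       # persists the current process list
```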