How to auto start node server after creating VS Code Development Container? - node.js

I am using VS Code's feature to create development containers for my services. Using the following layout, I've defined a single service (for now). I'd like my node project to run automatically after the container is configured, so that it can listen for HTTP requests, but I haven't found the best way to do so.
My project directory:
project-name
    .devcontainer.json
    package.json (etc.)
    docker-compose.yaml
Now in my docker-compose.yaml, I've defined the following structure:
version: '3'
services:
  project-name:
    image: node:14
    command: /bin/sh -c "while sleep 1000; do :; done"
    ports:
      - 4001:4001
    restart: always
    volumes:
      - ./:/workspace:cached
Note that I need /bin/sh -c "while sleep 1000; do :; done" as the service command; according to the VS Code docs this is required so that the service doesn't exit.
Within my .devcontainer.json:
{
  "name": "My Project",
  "dockerComposeFile": [
    "../docker-compose.yaml"
  ],
  "service": "project-name",
  "shutdownAction": "none",
  "postCreateCommand": "npm install",
  "postStartCommand": "npm run dev", // this causes the project to hang while configuring?
  "workspaceFolder": "/workspace/project-name"
}
I've added a postCreateCommand to install dependencies, but I also need to run npm run dev so that my server listens for requests. If I add that command as the postStartCommand, the project builds and runs, but VS Code hangs on "Configuring Dev Container" (with a spinner at the bottom of the window), because the command starts my server and never exits. Is there a better way to trigger the server to run after the container is set up?

See https://code.visualstudio.com/remote/advancedcontainers/start-processes
In other cases, you may want to start up a process and leave it running. This can be accomplished by using nohup and putting the process into the background using &. For example:
"postStartCommand": "nohup bash -c 'your-command-here &'"
I just tried it, and it works for me: it removes the spinning "Configuring Dev Container" indicator that I also saw. However, it means the process runs in the background, so its logs are not surfaced in the dev container terminal. I got used to watching my ng serve logs in that terminal to know when compilation was done, but now I can't see them. I'm undecided whether I'll switch back to the old way; having "Configuring Dev Container" spinning constantly was annoying but otherwise didn't obstruct anything that I could see.
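If you miss those logs, one workaround (a sketch, assuming the workspace layout from the question; the dev.log path is an assumption) is to redirect the backgrounded command's output to a file and tail it from the integrated terminal:

"postStartCommand": "nohup bash -c 'npm run dev > /workspace/project-name/dev.log 2>&1 &'"

Then, from a terminal inside the container:

tail -f /workspace/project-name/dev.log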

Related

VSCode: "Format uri ('http://localhost:%s') must contain exactly two substitution placeholders" error with docker

Trying to set up VSCode to debug a nodejs app in a Docker container.
It builds and runs the containers, but then I eventually get this error dialog:
Format uri ('http://localhost:%s') must contain exactly two substitution placeholders
and I cannot do any debugging.
I can't find this error anywhere in any of the VSCode output/terminal windows. And I don't have that format string anywhere in my code or config files.
For what it's worth, the application runs fine in the Docker container (e.g. it responds to HTTP requests).
And here's my run command that VSCode shows in the terminal window:
docker run -dt --name "xxnodejsservicetemplate-dev" -e "DEBUG=*" -e "NODE_ENV=development" --label "com.microsoft.created-by=visual-studio-code" -p "3000:3000" -p "9229:9229" "xxnodejsservicetemplate:latest" yarn dev --debug 0.0.0.0:9229
Any thoughts?
Docker wants to substitute both the scheme and the port, but your format string is only adding a substitution for the port. So, instead of this format:
"dockerServerReadyAction": {
"uriFormat": "http://localhost:%s"
}
Use this:
"dockerServerReadyAction": {
"uriFormat": "%s://localhost:%s"
}
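For context, dockerServerReadyAction lives inside a launch configuration in launch.json. Here is a minimal sketch of where it sits; the surrounding fields (name, preLaunchTask, etc.) are assumptions based on a typical Node.js docker-launch setup, not taken from the question:

{
  "type": "docker",
  "request": "launch",
  "name": "Docker Node.js Launch",
  "preLaunchTask": "docker-run: debug",
  "platform": "node",
  "dockerServerReadyAction": {
    "uriFormat": "%s://localhost:%s",
    "action": "openExternally"
  }
}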

Kubernetes fails to deploy valid container image

I have a docker image containing a NodeJS app. The Dockerfile is:
FROM node:8
WORKDIR /app
ADD . /app
RUN npm install
EXPOSE 80
ENTRYPOINT [ "/bin/sh", "./start.sh" ]
The start.sh script is:
#!/bin/bash
...
echo "Starting application"
npm start
I'm able to launch and test the image manually:
$ gcloud docker -- run -it --rm my-container
...
Starting application
...
> node index.js
...
The same container is used by a kubernetes deployment:
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: my-container
        ...
The container starts, the start.sh script is correctly executed but it terminates and the container goes into a CrashLoopBackOff loop.
After inspecting the pod manually:
kubectl exec -ti my-pod -- bash
I have no name!@my-pod:/app# cat /etc/passwd
... empty response
-> It appears that somehow there are no system users on the container, which makes most commands (like npm) fail silently and terminate the container
I have also tried, without success:
- deleting the pod
- deleting and re-creating the deployment
- running the node image with the node user -> unable to find user node: no matching entries in passwd file
Last note: I actually have many deployments (using the same template with just a different name) which are running fine with an image that was built a few days ago with the same source code.
For some deployments, it actually worked after manually deleting the pod and letting kubernetes recreate it.
Any ideas?
Edit 18/01/2018: I have tried rebuilding an image with the same source code that the old working images use, without success. I have also tried a simpler Dockerfile:
FROM node:8
USER node
But I still get an error related to the fact that no users seem to be there:
Error response from daemon: {"message":"linux spec user: unable to find user node: no matching entries in passwd file"}
I have checked with the docker-node guys; the image hasn't changed recently. Could it be related to kubernetes changes? Keep in mind that my images do run when I run them manually with the docker command.
I tried to reproduce your issue, but didn't get it to fail in anything like the same fashion. I made a dummy express app that matches your example above and stuck it on github, then deployed it into a local minikube instance I had. The base image size is reasonably large, but it started up just fine.
I had to interpret what was happening within npm start for your example since you didn't specify, but you can see my package.json, which I suspect is pretty close to what you're doing based on the description.
When I fire this up:
git clone https://github.com/heckj/dummyexpress
cd dummyexpress
kubectl apply -f deploy/
Then I got a running instance right off the bat:
NAME                            READY     STATUS    RESTARTS   AGE
dummynodeapp-7788b95497-tkw2s   1/1       Running   0          1d
And the logs show pretty much what you'd expect:
$ kubectl log dummynodeapp-7788b95497-tkw2s
W0117 19:41:00.986498 20648 cmd.go:353] log is DEPRECATED and will be removed in a future version. Use logs instead.
Starting application
> blah#1.0.0 start /app
> node index.js
Example app listening on port 3000!
My guess is that you've got something going awry within your npm start execution, so I'd recommend fiddling with that aspect of your deployment and see if you can't resolve it that way.
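For reference, the log lines above (blah#1.0.0 start /app, > node index.js) imply a package.json along these lines; this is a reconstruction, and the express dependency and its version are assumptions:

{
  "name": "blah",
  "version": "1.0.0",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  }
}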
Well, as @heckj pointed out, it was a Docker issue on my kubernetes cluster. I updated the cluster from 1.6.13-gke.1 to v1.7.12-gke.0 and the pods worked fine again. I'm not sure what Docker version was used, since there's another kubernetes bug that is preventing me from seeing it.

How to run supervisor with Ansible?

I've got a server on which Supervisord is managing my processes. I normally start supervisord with the following command:
sudo /var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf
I'm now trying to set things up with Ansible, but I'm unsure of how I should start supervisord from it. I can of course do it using something like:
- name: run supervisord
  command: "/var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf"
This works, but only the first time you run it. The second time you run the same script, supervisord is of course already running, which causes the following error:
TASK [run supervisord] *******************************************************
fatal: [ansible-test1]: FAILED! => {"changed": true, "cmd": ["/var/www/imd/venv/bin/supervisord", "-c", "/var/www/imd/deploy/supervisord.conf"], "delta": "0:00:00.111700", "end": "2016-06-03 11:57:38.605804", "failed": true, "rc": 2, "start": "2016-06-03 11:57:38.494104", "stderr": "Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.\nFor help, use /var/www/imd/venv/bin/supervisord -h", "stdout": "", "stdout_lines": [], "warnings": []}
Does anybody know how I can correctly run supervisord with Ansible? All tips are welcome!
[EDIT]
Because the solution in the answer by mbarthelemy doesn't work for socket files, I managed to get it working with the following:
- name: run supervisord
  shell: if [ ! -S /var/run/supervisor.sock ]; then sudo /var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf; fi
This of course is not very "ansibleish". If anybody has a real Ansible-based solution that would still be really welcome.
You can use the supervisorctl module:
- supervisorctl:
    name: my_app
    state: restarted
    config: /var/opt/my_project/supervisord.conf
or
- name: Restart my_app
  supervisorctl:
    name: my_app
    state: restarted
    config: /var/opt/my_project/supervisord.conf
Full documentation is at https://docs.ansible.com/ansible/2.7/modules/supervisorctl_module.html#examples
Your situation is specific, since you don't seem to use a regular Supervisor installed as a normal system package; if you did, you would start/stop/restart it like any other system service, using Ansible's service module.
By default, upon starting, Supervisor creates a socket to listen for administration commands from supervisorctl. When it stops, it is supposed to remove it.
Try to find where this socket is created in your specific setup (default would be /var/run/supervisor.sock).
Then, use the creates option of Ansible's command module (documentation) to tell it that the socket exists whenever the supervisord process is already running. This way it won't try to run the command if supervisord is already up:
- name: run supervisord
  command: "./venv/bin/supervisord -c ./deploy/supervisord.conf"
  args:
    chdir: /var/www/imd
    creates: /var/run/supervisor.sock
Edit: while this would be the right answer if /var/run/supervisor.sock were a regular file, it won't work here because it's a socket, and Ansible's creates parameter doesn't detect it.
The most Ansible-ish solution I can think of is to use an external Ansible module like one of these, to check whether your process already exists (test_process) or is already listening (test_tcp).
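Alternatively, here is a sketch that uses only built-in modules (assuming the socket path from the question): stat the socket first, then run the command only when it is absent:

- name: check for the supervisord socket
  stat:
    path: /var/run/supervisor.sock
  register: supervisor_sock

- name: run supervisord
  command: "/var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf"
  when: not supervisor_sock.stat.exists

Unlike creates, the stat module reports existence for sockets as well as regular files.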

Nodejs/Strongloop: working upstart config example

After updating strongloop to v2.10, slc stopped writing logs.
Also, I couldn't make the app start in production mode.
/etc/init/app.conf
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
env NODE_ENV=production
script
  exec slc run /home/ubuntu/app/ \
    -l /home/ubuntu/app/app.log \
    -p /var/run/app.pid
end script
Can anybody check my upstart config or provide another working copy?
Were you writing the pid to a file so that you could use it to send SIGUSR2 to the process to trigger log re-opening from logrotate?
Assuming you are using Upstart 1.4+ (Ubuntu 12.04 or newer), then you would be better off letting slc run log to its stdout and let Upstart take care of writing it to a file so that log rotation is done for you:
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
# assuming this is /etc/init/app.conf,
# stdout+stderr logged to: /var/log/upstart/app.log
console log
env NODE_ENV=production
exec /usr/local/bin/slc run --cluster=CPUs /home/ubuntu/app
The log rotation for "free" is nice, but the biggest benefit to this approach is Upstart can log errors that slc run reports even if they are a crash while trying to set up its internal logging, which makes debugging a lot easier.
Aside from what it means to your actual application, the only effect NODE_ENV has on slc run is to set the default number of cluster workers to the number of detected CPU cores, which literally translates to --cluster=CPUs.
Another problem I find is the node/npm path prefix not being in the $PATH as used by Upstart, so I normally put the full paths for executables in my Upstart jobs.
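If you'd rather not hard-code full paths, one alternative sketch is to set PATH explicitly in the Upstart job (the prefix below is an assumption; adjust it to wherever node and npm are actually installed):

env PATH=/usr/local/bin:/usr/bin:/bin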
Service Installer
You could also try using strong-service-install, which is a module used by slc pm-install to install strong-pm as an OS service:
$ npm install -g strong-service-install
$ sudo sl-svc-install --name app --user ubuntu --cwd /home/ubuntu/app -- slc run --cluster=CPUs .
Note the spaces around the -- before slc run

Cannot run nodejs app and mongo within a docker container

I'm setting up a container with the following Dockerfile
# Start with project/baseline (an image with mongo / nodejs / sailsjs)
FROM project/baseline
# Create the folder that will contain all the sources
RUN mkdir -p /var/project
# Load the configuration file and the deployment script
ADD init.sh /var/project/init.sh
# src contains a list of folders, each one being a sails app
ADD src/ /var/project/
# Compile the sources / run the services / run mongodb
CMD /var/project/init.sh
The init.sh script is called when the container runs. It should start a couple of web apps and mongodb.
#!/bin/bash
PROJECT_PATH=/var/project

# Start mongodb
function start_mongo {
  mongod --fork --logpath /var/log/mongodb.log # attempt to have mongo running as a daemon
}

# Start services
function start {
  for service in $(ls); do
    cd $PROJECT_PATH/$service
    npm start # runs sails lift on each service
  done
}
# start mongodb
start_mongo
# start web applications defined in /var/project
start
Basically, there are a couple of nodejs (sailsjs) applications in /var/project.
When I run the container, I got the following message:
$ sudo docker run -t -i projects/test
about to fork child process, waiting until server is ready for connections.
forked process: 10
and then it remains stuck.
How can mongo and the sails processes be started so that the container remains in a running state?
UPDATE
I now use this supervisord.conf file
[supervisord]
nodaemon=false
[program:mongodb]
command=/usr/bin/mongod
[program:process1]
command=/bin/bash "cd /var/project/service1 && node app.js"
[program:process2]
command=/bin/bash "cd /var/project/service2 && node app.js"
it is called in the Dockerfile like:
# run the applications (mongodb + project related services)
CMD ["/usr/bin/supervisord"]
As my services depend on mongo starting correctly, and supervisord does not wait that long, the services are not started. Any idea how to solve that?
By the way, is it even a best practice to run mongo in the same container?
UPDATE 2
I went back to a service.sh script that is called when the container runs. I know this is not clean (but I'll say it's temporary so I can fix the problem I have with supervisor), but I'm doing the following:
- run nohup mongod &
- wait 60 sec
- run my node (forever) processes
The thing is, the container exits right after the forever processes are run... how can it be kept active?
If you want to cleanly start multiple services inside a container, one option is to use a process supervisor of some sort. One such approach is documented here, in the official Docker documentation.
I've done something similar using runit. You can see my base runit image here, and a multi-service application image using that here.
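For the supervisord route specifically, here is a minimal sketch of a corrected supervisord.conf, assuming the paths from the question. nodaemon=true keeps supervisord in the foreground so the container stays alive, and directory= replaces the /bin/bash "cd ..." invocations (bash needs -c to run a command string):

[supervisord]
; run in the foreground so the container keeps running
nodaemon=true

[program:mongodb]
command=/usr/bin/mongod
; lower priority starts first, so mongod launches before the node services
priority=1

[program:process1]
directory=/var/project/service1
command=node app.js

[program:process2]
directory=/var/project/service2
command=node app.js

Note that priority only orders startup; supervisord does not wait for mongo to accept connections, so the node apps may still need their own retry logic.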
