There is a lot of info, so here is only the part of the docker-compose file that is relevant:
outline:
  image: outlinewiki/outline:latest
  command: sh -c "yarn sequelize:migrate --env production-ssl-disabled && yarn start"
My container keeps restarting. I looked at the logs:
Loaded configuration file "server/config/database.json".
Using environment "production-ssl-disabled".
No migrations were executed, database schema was already up to date.
Done in 1.01s.
yarn run v1.22.18
$ node ./build/server/index.js
{"label":"lifecycle","level":"info","message":"Note: Restricting process count to 1 due to use of collaborative service"}
{"label":"lifecycle","level":"info","message":"\nIs your team enjoying Outline? Consider supporting future development by sponsoring the project:\n\nhttps://github.com/sponsors/outline\n"}
/opt/outline/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:190
reject(new sequelizeErrors.ConnectionError(err));
^
ConnectionError [SequelizeConnectionError]: The server does not support SSL connections
How to fix this?
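A common fix for this error (treat the exact variable names as an assumption on my part, since they are not in the original post and depend on the Outline version) is to tell Outline's Postgres client not to attempt SSL at runtime as well, either via PGSSLMODE or via the connection string:

outline:
  image: outlinewiki/outline:latest
  command: sh -c "yarn sequelize:migrate --env production-ssl-disabled && yarn start"
  environment:
    # assumed: disable SSL for the runtime Postgres connection too
    PGSSLMODE: disable
    # or, alternatively, on the connection string itself:
    # DATABASE_URL: postgres://user:pass@postgres:5432/outline?sslmode=disable

This would mirror for the running server what --env production-ssl-disabled already does for the migration step.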
I want to containerize my Nuxt.js application. I could write my own Dockerfile (as mentioned in the Nuxt.js Google Cloud Run docs, for example), but as Cloud Native Buildpacks are here to free us from the need to write those over and over again, I wanted to simply use Paketo.io to build a container from my Nuxt.js app.
I ran
pack build microservice-ui-nuxt-js --path . --builder paketobuildpacks/builder:base
and a Container was created successfully. Here's the full log:
$ pack build microservice-ui-nuxt-js --path . --builder paketobuildpacks/builder:base
base: Pulling from paketobuildpacks/builder
Digest: sha256:3e2ee17348bd901e7e0748e0e1ddccdf8a602b624e418927145b5f84ca26f264
Status: Image is up to date for paketobuildpacks/builder:base
base-cnb: Pulling from paketobuildpacks/run
Digest: sha256:b6b1612ab2dfa294514fff2750e8d724287f81e89d5e91209dbdd562ed7f7daf
Status: Image is up to date for paketobuildpacks/run:base-cnb
===> DETECTING
4 of 7 buildpacks participating
paketo-buildpacks/ca-certificates 2.2.0
paketo-buildpacks/node-engine 0.4.0
paketo-buildpacks/npm-install 0.3.0
paketo-buildpacks/npm-start 0.2.0
===> ANALYZING
Previous image with name "microservice-ui-nuxt-js" not found
===> RESTORING
===> BUILDING
Paketo CA Certificates Buildpack 2.2.0
https://github.com/paketo-buildpacks/ca-certificates
Launch Helper: Contributing to layer
Creating /layers/paketo-buildpacks_ca-certificates/helper/exec.d/ca-certificates-helper
Paketo Node Engine Buildpack 0.4.0
Resolving Node Engine version
Candidate version sources (in priority order):
-> ""
<unknown> -> ""
Selected Node Engine version (using ): 14.17.0
Executing build process
Installing Node Engine 14.17.0
Completed in 5.795s
Configuring build environment
NODE_ENV -> "production"
NODE_HOME -> "/layers/paketo-buildpacks_node-engine/node"
NODE_VERBOSE -> "false"
Configuring launch environment
NODE_ENV -> "production"
NODE_HOME -> "/layers/paketo-buildpacks_node-engine/node"
NODE_VERBOSE -> "false"
Writing profile.d/0_memory_available.sh
Calculates available memory based on container limits at launch time.
Made available in the MEMORY_AVAILABLE environment variable.
Paketo NPM Install Buildpack 0.3.0
Resolving installation process
Process inputs:
node_modules -> "Not found"
npm-cache -> "Not found"
package-lock.json -> "Found"
Selected NPM build process: 'npm ci'
Executing build process
Running 'npm ci --unsafe-perm --cache /layers/paketo-buildpacks_npm-install/npm-cache'
Completed in 14.988s
Configuring launch environment
NPM_CONFIG_LOGLEVEL -> "error"
Configuring environment shared by build and launch
PATH -> "$PATH:/layers/paketo-buildpacks_npm-install/modules/node_modules/.bin"
Paketo NPM Start Buildpack 0.2.0
Assigning launch processes
web: nuxt start
===> EXPORTING
Adding layer 'paketo-buildpacks/ca-certificates:helper'
Adding layer 'paketo-buildpacks/node-engine:node'
Adding layer 'paketo-buildpacks/npm-install:modules'
Adding layer 'paketo-buildpacks/npm-install:npm-cache'
Adding 1/1 app layer(s)
Adding layer 'launcher'
Adding layer 'config'
Adding layer 'process-types'
Adding label 'io.buildpacks.lifecycle.metadata'
Adding label 'io.buildpacks.build.metadata'
Adding label 'io.buildpacks.project.metadata'
Setting default process type 'web'
Saving microservice-ui-nuxt-js...
*** Images (5eb36ba20094):
microservice-ui-nuxt-js
Adding cache layer 'paketo-buildpacks/node-engine:node'
Adding cache layer 'paketo-buildpacks/npm-install:modules'
Adding cache layer 'paketo-buildpacks/npm-install:npm-cache'
Successfully built image microservice-ui-nuxt-js
Now running
docker run --rm -i --tty -p 3000:3000 microservice-ui-nuxt-js
I hoped to see my app in the browser at http://localhost:3000. But no luck! My app doesn't seem to be fully running, although my console output looks good.
What am I missing?
I read about the HOST variable in this post, which is what the whole problem is about! And then I also found this answer, since I now knew what to look for. The Nuxt.js configuration docs also state it:
By default, the Nuxt.js development server host is localhost which is
only accessible from within the host machine. In order to view your
app on another device you need to modify the host.
And the crucial config is mentioned as well:
Host '0.0.0.0' is designated to tell Nuxt.js to resolve a host
address, which is accessible to connections outside of the host
machine (e.g. LAN)
So all we have to do is define a Docker environment variable with --env "HOST=0.0.0.0" and run the Paketo-built container like this:
docker run --rm -i --tty --env "HOST=0.0.0.0" -p 3000:3000 microservice-ui-nuxt-js
Now the browser should also show our app at http://localhost:3000.
You can try it yourself using the image of the example project published to the GitHub Container Registry:
docker run --rm -i --tty --env "HOST=0.0.0.0" -p 3000:3000 ghcr.io/jonashackt/microservice-ui-nuxt-js:latest
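As an alternative to passing the variable on every docker run, Nuxt 2 also lets you set the host directly in nuxt.config.js. A minimal sketch (this file is not part of the build log above):

// nuxt.config.js (sketch)
export default {
  server: {
    // listen on all interfaces so the app is reachable from outside the container
    host: '0.0.0.0',
    port: 3000
  }
}

With that baked into the app, the plain docker run -p 3000:3000 microservice-ui-nuxt-js command should work without the extra --env flag.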
I am using VS Code's feature to create development containers for my services. Using the following layout, I've defined a single service (for now). I'd like to automatically run my node project after the container is configured to listen for http requests but haven't found the best way to do so.
My Project Directory
project-name
  .devcontainer.json
  package.json (etc)
docker-compose.yaml
Now in my docker-compose.yaml, I've defined the following structure:
version: '3'
services:
  project-name:
    image: node:14
    command: /bin/sh -c "while sleep 1000; do :; done"
    ports:
      - 4001:4001
    restart: always
    volumes:
      - ./:/workspace:cached
Note that I need to have /bin/sh -c "while sleep 1000; do :; done" as the service command, which is required according to the VS Code docs so that the service doesn't exit.
Within my .devcontainer.json:
{
  "name": "My Project",
  "dockerComposeFile": [
    "../docker-compose.yaml"
  ],
  "service": "project-name",
  "shutdownAction": "none",
  "postCreateCommand": "npm install",
  "postStartCommand": "npm run dev", // this causes the project to hang while configuring?
  "workspaceFolder": "/workspace/project-name"
}
I've added a postCreateCommand to install dependencies, but I also need to run npm run dev so that my server listens for requests. If I add this command as the postStartCommand, the project does build and run, but it hangs on Configuring Dev Server (with a spinner at the bottom of VS Code), since this starts my server and the script never exits. Is there a better way to trigger the server to run after the container is set up?
See https://code.visualstudio.com/remote/advancedcontainers/start-processes
In other cases, you may want to start up a process and leave it running. This can be accomplished by using nohup and putting the process into the background using &. For example:
"postStartCommand": "nohup bash -c 'your-command-here &'"
I just tried it, and it works for me - it removes the spinning "Configuring Dev Container" that I also saw. However, it does mean the process is running in the background so your logs will not be surfaced to the devcontainer terminal. I got used to watching my ng serve logs in that terminal to know when compilation was done, but now I can't see them. Undecided if I'll switch back to the old way. Having the Configuring Dev Container spinning constantly was annoying but otherwise did not obstruct anything that I could see.
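If you still want the dev-server output, one workaround (my own assumption, not from the VS Code docs) is to redirect the background process to a log file inside the workspace and tail it from a terminal:

"postStartCommand": "nohup bash -c 'npm run dev > /workspace/project-name/dev.log 2>&1 &'"

and then, in a devcontainer terminal:

tail -f /workspace/project-name/dev.log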
I have an elastic beanstalk environment, which is running a docker container that has a node js API. On the AWS Console, if I select my environment, then go to Configuration/Software I have the following:
Log groups: /aws/elasticbeanstalk/my-environment
Log streaming: Enabled
Retention: 3 days
Lifecycle: Keep after termination.
However, if I click on that log group in the CloudWatch console, I see a Last Event Time of some weeks ago (which I believe corresponds to when the environment was created) and no content in the logs.
Since this is a dockerized application, logs for the server itself should be at /aws/elasticbeanstalk/my-environment/var/log/eb-docker/containers/eb-current-app/stdouterr.log.
If I instead get the logs directly from the instances, by going once again to my EB environment, clicking "Logs" and then "Request last 100 Lines", the logging is happening correctly. I just can't see a thing when using CloudWatch.
Any help is gladly appreciated
I was able to get around this problem.
So CloudWatch makes a hash based on the first line of your log file and the log stream key, and the problem was that the first line of my stdouterr.log file was actually an empty line!
After a couple of days of playing around and getting help from the AWS support team, I connected via SSH to the EC2 instance associated with the EB environment. You need to add the following line to the /etc/awslogs/config/beanstalklogs.conf file, right after the "file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log" line:
file_fingerprint_lines=1-20
With this, you tell the awslogs agent to calculate the hash using lines 1 through 20 of the log file. You can change 20 to a larger or smaller number depending on your logging content; however, I don't know if there is an upper limit for the value.
After doing so, you need to restart the AWS Logs Service on the instance.
For this you would execute:
sudo service awslogs stop
sudo service awslogs start
or simpler:
sudo service awslogs restart
After these steps I started using my environment and the logging was now being properly streamed to the CloudWatch console!
However, this will not survive a new deployment, the EC2 instance being replaced, or the Auto Scaling group spawning another instance.
To fix this, you can add the log config via the .ebextensions directory at the root of your application before deploying.
I added a file called logs.config to the newly created .ebextensions directory and placed the following content in it:
files:
  "/etc/awslogs/config/beanstalklogs.conf":
    mode: "000644"
    user: root
    group: root
    content: |
      [/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
      log_group_name=/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*stdouterr.log
      file_fingerprint_lines=1-20
commands:
  01_remove_eb_stream_config:
    command: 'rm /etc/awslogs/config/beanstalklogs.conf.bak'
  02_restart_log_agent:
    command: 'service awslogs restart'
Changing, of course, EB-ENV-NAME to my environment name on EB.
Hope it can help someone else!
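To confirm the agent actually picked up the new fingerprint setting after a deployment, a quick check on the instance (assuming the standard Amazon Linux 1 awslogs layout used above) could be:

# the agent's own log shows which files it tails and any push errors
sudo tail -n 50 /var/log/awslogs.log
# verify the stream now exists in CloudWatch (replace the group name with yours)
aws logs describe-log-streams --log-group-name "/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log"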
For 64-bit Amazon Linux 2 the setup is slightly different.
For log delivery, the AWS CloudWatch Agent is installed in /opt/aws/amazon-cloudwatch-agent, and the Elastic Beanstalk configuration is in /opt/aws/amazon-cloudwatch-agent/etc/beanstalk.json. It is set to log the output of the container, assuming there's a file called stdouterr.log. Here's a snippet of the config:
{
  "file_path": "/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_group_name": "/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_stream_name": "{instance_id}"
}
However, when I look for that file_path it doesn't exist; instead there is a file whose path encodes the current Docker container ID: /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log.
This logfile is created by the script /opt/elasticbeanstalk/config/private/eb-docker-log-start, which is started by the eb-docker-log service. The default contents of this file are:
EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
mkdir -p /var/log/eb-docker/containers/eb-current-app/
docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
To temporarily fix the logging, you can manually run the following (replacing the Docker ID), and logs will start to appear in CloudWatch:
ln -sf /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
To make this permanent, I added an .ebextension that fixes the eb-docker-log service so that it recreates this link. Create a file in your source code in .ebextensions called fix-cloudwatch-logging.config and set its contents to:
files:
  "/opt/elasticbeanstalk/config/private/eb-docker-log-start":
    mode: "000755"
    owner: root
    group: root
    content: |
      EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
      mkdir -p /var/log/eb-docker/containers/eb-current-app/
      ln -sf /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
      docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
commands:
  fix_logging:
    command: systemctl restart eb-docker-log.service
    cwd: /home/ec2-user
    test: "[ ! -L /var/log/eb-docker/containers/eb-current-app/stdouterr.log ] && systemctl is-active --quiet eb-docker-log"
I have a docker image containing a Node.js app. The Dockerfile is:
FROM node:8
WORKDIR /app
ADD . /app
RUN npm install
EXPOSE 80
ENTRYPOINT [ "/bin/sh", "./start.sh" ]
The start.sh script is:
#!/bin/bash
...
echo "Starting application"
npm start
I'm able to launch and test the image manually:
$ gcloud docker -- run -it --rm my-container
...
Starting application
...
> node index.js
...
The same container is used by a kubernetes deployment:
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      containers:
      - image: my-container
        ...
The container starts, the start.sh script is correctly executed but it terminates and the container goes into a CrashLoopBackOff loop.
After inspecting the pod manually:
kubectl exec -ti my-pod -- bash
I have no name!#my-pod:/app# cat /etc/passwd
... empty response
-> It appears that somehow there are no system users on the container, which makes most commands (like npm) fail silently and terminate the container
I have also tried, without success:
- deleting the pod
- deleting and re-creating the deployment
- running the node image with the node user -> unable to find user node: no matching entries in passwd file
Last note: I actually have many deployments (using the same template with just a different name) which are running fine with an image that was built a few days ago with the same source code.
For some deployments, it actually worked after manually deleting the pod and letting kubernetes recreate it.
Any ideas?
Edit 18/01/2018: I have tried rebuilding an image with the same source code that the old working images use, without success. I have also tried a simpler Dockerfile:
FROM node:8
USER node
But I still get an error related to the fact that no users seem to be there:
Error response from daemon: {"message":"linux spec user: unable to find user node: no matching entries in passwd file"}
I have checked with the docker-node guys, and the image hasn't changed recently. Could it be related to Kubernetes changes? Keep in mind that my images do run when I run them manually with the docker command.
I tried to reproduce your issue, but didn't get it to fail in anything like the same fashion. I made a dummy express app and stuck it on GitHub that matches your example above, and then deployed it into a local minikube instance I had. The base image size is reasonably large, but it started up just fine.
I had to interpret what was happening within npm start for your example since you didn't specify, but you can see my package.json, which I suspect is pretty close to what you're doing based on the description.
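For reference, a minimal package.json along those lines might look like this (a sketch of my guess, not the asker's actual file):

{
  "name": "blah",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.16.2"
  }
}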
When I fire this up:
git clone https://github.com/heckj/dummyexpress
cd dummyexpress
kubectl apply -f deploy/
Then I got a running instance right off the bat:
NAME READY STATUS RESTARTS AGE
dummynodeapp-7788b95497-tkw2s 1/1 Running 0 1d
And the logs show pretty much what you'd expect:
kubectl log dummynodeapp-7788b95497-tkw2s
W0117 19:41:00.986498 20648 cmd.go:353] log is DEPRECATED and will be removed in a future version. Use logs instead.
Starting application
> blah#1.0.0 start /app
> node index.js
Example app listening on port 3000!
My guess is that you've got something going awry within your npm start execution, so I'd recommend fiddling with that aspect of your deployment and see if you can't resolve it that way.
Well, as @heckj pointed out, it was a Docker issue on my Kubernetes cluster. I updated the cluster from 1.6.13-gke.1 to v1.7.12-gke.0 and the pods worked fine again. I'm not sure which Docker version was used, since there's another Kubernetes bug that is preventing me from seeing it.
I'm setting up a container with the following Dockerfile
# Start with project/baseline
FROM project/baseline # => image with mongo / nodejs / sailsjs
# Create folder that will contain all the sources
RUN mkdir -p /var/project
# Load the configuration file and the deployment script
ADD init.sh /var/project/init.sh
ADD src/ /var/project/ # src contains a list of folders, each one being a sails app
# Compile the sources / run the services / run mongodb
CMD /var/project/init.sh
The init.sh script is called when the container runs.
It should start a couple of web apps and mongodb.
#!/bin/bash
PROJECT_PATH=/var/project
# Start mongodb
function start_mongo {
  mongod --fork --logpath /var/log/mongodb.log # attempt to have mongo running in daemon
}
# Start services
function start {
  for service in $(ls); do
    cd $PROJECT_PATH/$service
    npm start # Runs sails lift on each service
  done
}
# start mongodb
start_mongo
# start web applications defined in /var/project
start
Basically, there are a couple of Node.js (Sails.js) applications in /var/project.
When I run the container, I get the following message:
$ sudo docker run -t -i projects/test
about to fork child process, waiting until server is ready for connections.
forked process: 10
and then it remains stuck.
How can mongo and the Sails processes be started while keeping the container in a running state?
UPDATE
I now use this supervisord.conf file
[supervisord]
nodaemon=false
[program:mongodb]
command=/usr/bin/mongod
[program:process1]
command=/bin/bash "cd /var/project/service1 && node app.js"
[program:process2]
command=/bin/bash "cd /var/project/service2 && node app.js"
It is called in the Dockerfile like this:
# run the applications (mongodb + project related services)
CMD ["/usr/bin/supervisord"]
As my services depend on mongo starting correctly, and supervisord does not wait that long, the services are not started. Any idea how to solve that?
By the way, is it really a best practice to run mongo in the same container?
UPDATE 2
I went back to a service.sh script that is called when the container is running. I know this is not clean (but I'll say it's temporary so I can fix the problem I have with supervisord), but I'm doing the following:
- run nohup mongod &
- wait 60 sec
- run my node (forever) processes
The thing is, the container exits right after the forever processes are run... how can it be kept active?
If you want to cleanly start multiple services inside a container, one option is to use a process supervisor of some sort. One such approach is documented in the official Docker documentation.
I've done something similar using runit. You can see my base runit image here, and a multi-service application image using that here.
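For the supervisord route from the question's first update, a sketch that might get closer is below. Assumptions on my part: supervisord stays in the foreground as the container's main process, the command lines use bash -c (without it, bash treats the quoted string as a script file path), and "waiting for mongo" is approximated with a small retry loop, since supervisord has no built-in dependency ordering, only a start priority.

[supervisord]
; keep supervisord in the foreground so the container stays up
nodaemon=true

[program:mongodb]
command=/usr/bin/mongod
; lower priority values start first (ordering only, not readiness)
priority=1

[program:service1]
; wait until mongod answers a ping, then start the app
command=/bin/bash -c "until mongo --eval 'db.runCommand({ping:1})' >/dev/null 2>&1; do sleep 1; done; cd /var/project/service1 && node app.js"
priority=10
autorestart=true

[program:service2]
command=/bin/bash -c "until mongo --eval 'db.runCommand({ping:1})' >/dev/null 2>&1; do sleep 1; done; cd /var/project/service2 && node app.js"
priority=10
autorestart=true

As for running mongo in the same container: it works, but the more common practice is to run it as a separate container (or docker-compose service) and point the apps at it.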