How to change the logging options for JSON log - google-container-os

When we run a container on a Compute Engine instance using COS (Container-Optimized OS), it writes its logs to JSON files. We are seeing this error:
"level=error msg="Failed to log msg \"\" for logger json-file: write /var/lib/docker/containers/[image]-json.log: no space left on device".
I was looking to change the logging settings for Docker and found this article on changing the logging driver settings:
https://docs.docker.com/config/containers/logging/json-file/
My puzzle is that I don't know how to set these parameters through the console or gcloud in order to set log-opts.

It seems that /var/lib/docker is on the / filesystem, and if that filesystem is running out of inodes, you will receive this message when you try to run a container and it tries to write its logs to JSON files. You can check this by running:
df -i /var/lib/docker
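It is also worth checking regular disk usage, since the same "no space left on device" error is reported when the filesystem runs out of blocks rather than inodes:
# block (space) usage on the filesystem backing /var/lib/docker
df -h /var/lib/docker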
You can change the logging driver defaults in /etc/docker/daemon.json.
This is an example daemon.json configuration:
cat /etc/docker/daemon.json
{
  "live-restore": true,
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "production_status",
    "env": "os,customer"
  }
}
Don’t forget to restart the Docker daemon after changing the file:
systemctl restart docker.service
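To confirm the change took effect, something along these lines should work; note that only containers created after the restart pick up the new defaults:
# logging driver currently used by the daemon
docker info --format '{{.LoggingDriver}}'
# log driver and options of a specific container (replace <container-name> with yours)
docker inspect --format '{{json .HostConfig.LogConfig}}' <container-name>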
You can check the Docker documentation for further information about how to configure logging drivers.
Please let me know the results.

Related

No logs appear on Cloudwatch log group for elastic beanstalk environment

I have an Elastic Beanstalk environment which is running a Docker container that has a Node.js API. On the AWS console, if I select my environment, then go to Configuration/Software, I have the following:
Log groups: /aws/elasticbeanstalk/my-environment
Log streaming: Enabled
Retention: 3 days
Lifecycle: Keep after termination.
However, if I click on that log group in the CloudWatch console, I see a Last Event Time from some weeks ago (which I believe corresponds to when the environment was created) and no content in the logs.
Since this is a dockerized application, logs for the server itself should be at /aws/elasticbeanstalk/my-environment/var/log/eb-docker/containers/eb-current-app/stdouterr.log.
If I instead get the logs directly from the instances by going once again to my EB environment, clicking "Logs" and then "Request last 100 Lines", the logging is happening correctly. I just can't see a thing when using CloudWatch.
Any help is gladly appreciated
I was able to get around this problem.
So CloudWatch makes a hash based on the first line of your log file and the log stream key, and the problem is that my first line on the stdouterr.log file was actually an empty line!
After a couple of days of playing around and getting help from the good AWS support team, I first connected via SSH to the EC2 instance associated with the EB environment. You need to add the following line to the /etc/awslogs/config/beanstalklogs.conf file, right after the "file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log" line:
file_fingerprint_lines=1-20
With this, you tell the AWS service that it should calculate the hash using lines 1 through 20 of the log file. You could change 20 to a larger or smaller number depending on your logging content; however, I don't know if there is an upper limit for the value.
After doing so, you need to restart the AWS Logs Service on the instance.
For this you would execute:
sudo service awslogs stop
sudo service awslogs start
or simpler:
sudo service awslogs restart
After these steps I started using my environment and the logging was now being properly streamed to the CloudWatch console!
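If the stream still doesn't show up for you, the agent's own log (on Amazon Linux it is normally /var/log/awslogs.log) is usually the quickest place to see why a file is being skipped:
# recent activity of the CloudWatch Logs agent
sudo tail -n 50 /var/log/awslogs.log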
However, this will not work if a new deployment is made, if the EC2 instance gets replaced, or if the Auto Scaling group spawns another instance.
To fix this for good, it is possible to add the log config via the .ebextensions directory at the root of your application before deploying.
I added a file called logs.config to the newly created .ebextensions directory with the following content:
files:
  "/etc/awslogs/config/beanstalklogs.conf":
    mode: "000644"
    user: root
    group: root
    content: |
      [/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
      log_group_name=/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*stdouterr.log
      file_fingerprint_lines=1-20
commands:
  01_remove_eb_stream_config:
    command: 'rm /etc/awslogs/config/beanstalklogs.conf.bak'
  02_restart_log_agent:
    command: 'service awslogs restart'
Replacing, of course, EB-ENV-NAME with my environment name on EB.
Hope it can help someone else!
For 64-bit Amazon Linux 2 the setup is slightly different.
For log delivery, the AWS CloudWatch agent is installed in /opt/aws/amazon-cloudwatch-agent and the Elastic Beanstalk configuration is in /opt/aws/amazon-cloudwatch-agent/etc/beanstalk.json. It is set to log the output of the container, assuming there's a file called stdouterr.log; here's a snippet of the config:
{
  "file_path": "/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_group_name": "/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_stream_name": "{instance_id}"
}
However, when I look for the file_path it doesn't exist; instead I have a file path that encodes the current Docker container ID: /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log.
This log file is created by a script, /opt/elasticbeanstalk/config/private/eb-docker-log-start, which is started by the eb-docker-log service. The default contents of this file are:
EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
mkdir -p /var/log/eb-docker/containers/eb-current-app/
docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
To temporarily fix the logging you can manually run the following (replacing the Docker ID), after which logs will start to appear in CloudWatch:
ln -sf /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
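A quick way to verify the link before waiting for CloudWatch to catch up:
# the symlink should resolve to the container-ID-specific log file
ls -l /var/log/eb-docker/containers/eb-current-app/stdouterr.log
readlink -f /var/log/eb-docker/containers/eb-current-app/stdouterr.log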
To make this permanent I added an .ebextension that fixes the eb-docker-log service so it re-creates this link. Create a file in your source code in .ebextensions called fix-cloudwatch-logging.config and set its contents to:
files:
  "/opt/elasticbeanstalk/config/private/eb-docker-log-start":
    mode: "000755"
    owner: root
    group: root
    content: |
      EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
      mkdir -p /var/log/eb-docker/containers/eb-current-app/
      ln -sf /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
      docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
commands:
  fix_logging:
    command: systemctl restart eb-docker-log.service
    cwd: /home/ec2-user
    test: "[ ! -L /var/log/eb-docker/containers/eb-current-app/stdouterr.log ] && systemctl is-active --quiet eb-docker-log"

Pm2 changing log file location

I have a couple of questions regarding pm2:
How can I change the location of the server-error-0.log and server-out-0.log files from c:\users\user\.pm2\logs to another drive, due to restricted access to the server's C drive?
Can I log the errors and info to a database instead of a log file? Do I need to write a separate module for that, or is there a way to achieve this?
How can I change the location of the log files?
To change pm2's log file location, there are two solutions: define the log path as a parameter when the pm2 command is executed (-l, -o, -e), or start pm2 from a configuration file.
For the parameter solution, here is an example:
pm2 start app.js -o ./out.log -e ./err.log
If you don't want to define the log path every time pm2 is executed, you can generate a configuration file, define error_file and out_file in it, and start pm2 from that:
Generate a configuration file: pm2 ecosystem simple. This generates a file ecosystem.config.js with the following content:
module.exports = {
  apps : [{
    name   : "app1",
    script : "./app.js"
  }]
}
Define error_file (for the error log) and out_file (for the info log) in the file, such as:
module.exports = {
  apps : [{
    name       : "app1",
    script     : "./app.js",
    error_file : "./err.log",
    out_file   : "./out.log"
  }]
}
Delete existing processes in pm2:
pm2 delete <pid>
You can get pid by doing:
pm2 status
Start the process from the configuration file:
pm2 start ecosystem.config.js
In this way, the logs are saved to ./err.log and ./out.log.
Please refer to the documentation for more detailed information.
Can I log the errors and info to a database instead of a log file?
I didn't find any resources on this in the official documentation. It seems you need to write code to save logs to a database yourself.
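As a rough idea of what that code could look like, here is a minimal shell sketch (not a pm2 feature) that follows the out log configured above and inserts each line into a local SQLite database; the database file and table name are made up for this example:
# create a table for the log lines (logs.db and app_logs are illustrative names)
sqlite3 logs.db 'CREATE TABLE IF NOT EXISTS app_logs (ts TEXT, message TEXT);'
# follow pm2's out log and insert each new line
tail -F ./out.log | while IFS= read -r line; do
  # double any single quotes so the line is safe inside the SQL string literal
  escaped=$(printf '%s' "$line" | sed "s/'/''/g")
  sqlite3 logs.db "INSERT INTO app_logs (ts, message) VALUES (datetime('now'), '$escaped');"
done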
Just wanted to add to @shaochuancs' answer that, before starting the app from the configuration file, make sure you delete the old process. If you don't delete the old process, the changes that you made to your process file will not take effect after you start your app.
You will need to issue this command before starting from the configuration file:
pm2 delete <pid>
In case you want pm2 on startup with the changed log path:
pm2 delete all
pm2 start ecosystem.js
pm2 save
pm2 startup
If you want to write both the error log and the console log to the same file (a possible use case: I wanted the logs in one file to push to ELK), you can use -l.
-l --log [path] specify filepath to output both out and error logs
Here is the example
pm2 start server.js -l /app/logs/server.log
After making changes, do not forget to run this command, as mentioned in the answer above:
pm2 delete <pid>

Change docker log messages location

I ran into a problem with Docker logging and, after reading a lot of sources, didn't find a solution: is there a way to discard messages of the Docker daemon in /var/log/messages and select another location?
Ok, I know that this question is quite old but I don't think it has been answered well and no correct answer has been stated.
First of all, the reason why it saves messages to that particular place starts in the rsyslog configuration (/etc/rsyslog.conf) with the line:
$ModLoad imjournal # provides access to the systemd journal
So, because Docker saves information to the systemd journal, it ends up in /var/log/messages.
To be able to save it to other places, you have to create a rule like the following in /etc/rsyslog.d/docker.conf:
$FileCreateMode 0644

template(name="DockerLogFileName" type="list") {
  constant(value="/var/log/docker/")
  property(name="syslogtag" securepath="replace" \
    regex.expression="docker/\\(.*\\)\\[" regex.submatch="1")
  constant(value="/docker.log")
}

if $programname == 'dockerd' then \
  /var/log/docker/combined.log

if $programname == 'dockerd' then \
  if $syslogtag contains 'docker/' then \
    ?DockerLogFileName
  else
    /var/log/docker/no_tag/docker.log

$FileCreateMode 0600
I found the information for this configuration here:
https://www.simulmedia.com/blog/2016/02/19/centralized-docker-logging-with-rsyslog/
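rsyslog only picks up the new rule after a restart; on a systemd host something like this should be enough, after which the files appear once the Docker daemon logs something:
# reload the rsyslog rules
sudo systemctl restart rsyslog
# the per-container and combined files should show up here
ls -l /var/log/docker/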
Configure rsyslog to isolate the Docker logs into their own file. To do this create /etc/rsyslog.d/10-docker.conf and copy the following content into the file.
# Docker logging
daemon.* {
  /var/mylog
  stop
}
In summary, this will write all logs for the daemon category to /var/mylog, then stop processing that log entry so it isn't written to the system's default syslog file.
According to the Docker documentation, you can specify a different driver either as a command-line argument for the docker daemon or (preferably) in the daemon.json config file. Several drivers are available, e.g. for Syslog, HTTP-based logging, ...
Update
Here's an example configuration section for Syslog (from the documentation):
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://1.2.3.4:1111"
  }
}
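If you only want this for a single container rather than daemon-wide, the same driver and option can also be passed to docker run (address reused from the example above):
# per-container override of the logging driver
docker run --log-driver syslog --log-opt syslog-address=udp://1.2.3.4:1111 alpine echo hello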

How do I use EBS volume with ECS container

I created an EBS volume, attached it, and mounted it on my container instance. In the task definition volumes I set the volume source path to the mounted directory.
The container data is not being created in the mounted directory; all other directories outside the mounted EBS volume work properly.
The purpose is to keep the data outside the container and back it up using another volume.
Is there a way to use this attached volume with my container, or is there a better way to work with volumes and backups?
EDIT: I tested this with a random Docker image, running it and specifying the volume, and I faced the same problem. I managed to make it work by restarting the Docker service, but I'm still looking for a solution that does not require restarting Docker.
Inspecting a container with a volume directory that is the mounted EBS
"HostConfig": {
"Binds": [
"/mnt/data:/data"
],
...
"Mounts": [
{
"Source": "/mnt/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
the directory displays:
$ ls /mnt/data/
lost+found
Inspecting a container with a volume directory that is not the mounted EBS
"HostConfig": {
"Binds": [
"/home/ec2-user/data:/data"
],
...
"Mounts": [
{
"Source": "/home/ec2-user/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
]
the directory displays:
$ ls /home/ec2-user/data
databases dbms
It sounds like what you want to do is make use of AWS EC2 launch configurations. Using launch configurations, you can specify EBS volumes to be created and attached to your instance at launch. This happens before the Docker agent and subsequent tasks are started.
As part of your launch configuration, you'll want to also update the User data under Configure details with something along the lines of:
mkdir /data;
mkfs -t ext4 /dev/xvdb;
mount /dev/xvdb /data;
echo '/dev/xvdb /data ext4 defaults,nofail 0 2' >> /etc/fstab;
Then, so long as your container is set up to access /data on the host, everything will just work on the first go.
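Once the instance is up, a quick way to confirm the bind mount behaves as expected (this assumes the alpine image can be pulled) is to write through it from a throwaway container:
# write a file through the bind mount and read it back
docker run --rm -v /data:/data alpine sh -c 'echo ok > /data/test && cat /data/test'
# the file should also be visible on the host
ls -l /data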
Bonus: If you're using ECS clusters, I presume you're already making use of Launch Configurations to get your instances joined to the cluster. If not, you can add new instances automatically as well, using something like:
#!/bin/bash
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent --detach=true --restart=on-failure:10 \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --volume=/var/log/ecs/:/log \
  --volume=/var/lib/ecs/data:/data \
  --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
  --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro \
  --publish=127.0.0.1:51678:51678 \
  --env=ECS_LOGFILE=/log/ecs-agent.log \
  --env=ECS_AVAILABLE_LOGGING_DRIVERS=[\"json-file\",\"syslog\",\"gelf\"] \
  --env=ECS_LOGLEVEL=info \
  --env=ECS_DATADIR=/data \
  --env=ECS_CLUSTER=your-cluster-here \
  amazon/amazon-ecs-agent:latest
Specifically in that bit, you'll want to edit this part: --env=ECS_CLUSTER=your-cluster-here
Hope this helps.
The current documentation on Using Data Volumes in Tasks seems to address this problem:
Prior to the release of the Amazon ECS-optimized AMI version 2017.03.a, only file systems that were available when the Docker daemon was started are available to Docker containers. You can use the latest Amazon ECS-optimized AMI to avoid this limitation, or you can upgrade the docker package to the latest version and restart Docker.

Filebeat service hangs on restart

I have a weird problem with Filebeat.
I am using CloudFormation to run my stack, and as part of that I am installing and running Filebeat for log aggregation.
I inject the /etc/filebeat/filebeat.yml into the machine and then I need to restart Filebeat.
The problem is that Filebeat hangs, and the entire provisioning is stuck (note that if I SSH into the machine and issue the "sudo service filebeat restart" myself, the entire provisioning becomes unstuck and continues). I tried restarting it both via the services section and the commands section of the CloudFormation::Init, and they both hang.
I haven't tried it via the user data, but that's the worst possible solution for it.
Any ideas why?
Snippets from the template; both of these hang, as mentioned.
"commands" : {
"01" : {
"command" : "sudo service filebeat restart",
"cwd" : "~",
"ignoreErrors" : "false"
}
}
"services" : {
"sysvinit" : {
"filebeat" : {
"enabled" : "true",
"ensureRunning" : "true",
"files" : ["/etc/filebeat/filebeat.yml"]
}
}
}
Well, this does sound like some sort of lock. According to the docs, you should declare a dependency on the file in the filebeat service, under the services section, and that will cause the filebeat service restart you need.
Apparently, the services section supports a files attribute:
A list of files. If cfn-init changes one directly via the files block, this service will be restarted.
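If the restart still hangs, the cfn-init logs on the instance are usually the quickest way to see which command or service action is blocking:
# cfn-init's own log and the captured command output
sudo tail -n 100 /var/log/cfn-init.log
sudo tail -n 100 /var/log/cfn-init-cmd.log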
