I have a weird problem with Filebeat.
I am using CloudFormation to run my stack, and as part of that I am installing and running Filebeat for log aggregation.
I inject /etc/filebeat/filebeat.yml into the machine, and then I need to restart Filebeat.
The problem is that Filebeat hangs and the entire provisioning gets stuck (note that if I SSH into the machine and issue "sudo service filebeat restart" myself, the provisioning becomes unstuck and continues). I tried restarting it both via the services section and via the commands section of AWS::CloudFormation::Init, and both hang.
I haven't tried it via UserData, but that's the worst possible solution for it.
Any ideas why?
Snippets from the template; both of these hang, as mentioned:
"commands" : {
"01" : {
"command" : "sudo service filebeat restart",
"cwd" : "~",
"ignoreErrors" : "false"
}
}
"services" : {
"sysvinit" : {
"filebeat" : {
"enabled" : "true",
"ensureRunning" : "true",
"files" : ["/etc/filebeat/filebeat.yml"]
}
}
}
Well, this does sound like some sort of lock. According to the docs, you should declare the file as a dependency of the filebeat service, under the services section, and that will trigger the Filebeat service restart you need.
Apparently, the services section supports a files attribute:
A list of files. If cfn-init changes one directly via the files block, this service will be restarted.
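For the restart-on-change behavior to kick in, the file has to be written by a files block in the same cfn-init config that the service then references. A minimal sketch of how the two sections fit together (the S3 source URL is a placeholder):

"files" : {
  "/etc/filebeat/filebeat.yml" : {
    "source" : "https://my-bucket.s3.amazonaws.com/filebeat.yml",
    "mode" : "000644",
    "owner" : "root",
    "group" : "root"
  }
},
"services" : {
  "sysvinit" : {
    "filebeat" : {
      "enabled" : "true",
      "ensureRunning" : "true",
      "files" : ["/etc/filebeat/filebeat.yml"]
    }
  }
}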
I am using CloudFormation to create an EC2 Linux machine which I can RDP into. I am running the required install commands like so:
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"commands" : {
"step_a" : {
"command" : "install some stuff..."
},
"step_g" : {
"command" : "sudo yum groupinstall \"Gnome Desktop\" -y"
},
"step_h" : {
"command" : "sudo passwd centos"
},
"step_i" : {
"command" : "configure firewall to allow rdp..."
}
}
}
}
}
As you can see, in step_h I want to set a password for the default centos user. How can I automatically supply a password value when user input is required? Maybe there is a better way of going about this problem?
Any help would be greatly appreciated!
There are several ways of setting a user password without a prompt; examples can be found here or here.
However, the issue to consider is how you are going to provide this password in the template. Hardcode it in plain text? That is a security issue. Pass it as a template parameter with NoEcho? That is better, but not reproducible and prone to mistakes. Use SSM Parameter Store or AWS Secrets Manager? That would probably be the best option, but it requires extra setup.
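For the non-interactive part itself, a minimal sketch using chpasswd in place of the interactive passwd, assuming the password arrives as a NoEcho template parameter named CentosPassword (a hypothetical name):

"Parameters" : {
  "CentosPassword" : { "Type" : "String", "NoEcho" : "true" }
}

Then step_h becomes:

"step_h" : {
  "command" : { "Fn::Join" : ["", ["echo 'centos:", { "Ref" : "CentosPassword" }, "' | chpasswd"]] }
}

Intrinsic functions are resolved in the Metadata section, so cfn-init receives the final string.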
When we run a container on a Compute Engine instance using COS, it writes its logs to JSON files. We are seeing this error:
"level=error msg="Failed to log msg \"\" for logger json-file: write /var/lib/docker/containers/[image]-json.log: no space left on device".
I was looking to change the logging settings for Docker and found this article on changing the logging driver settings:
https://docs.docker.com/config/containers/logging/json-file/
My puzzle is that I don't know how to set these parameters through the console or gcloud in order to set log-opts.
It seems that /var/lib/docker is on the / filesystem, and if that filesystem is running out of inodes, you will receive this message when you try to run a container and it tries to write its logs to JSON files. You can check this by running:
df -i /var/lib/docker
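Since "no space left on device" can also mean the filesystem is simply out of blocks rather than inodes, it is worth checking ordinary disk usage as well:

df -h /var/lib/docker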
You can configure your logging drivers to change the default values in /etc/docker/daemon.json.
This is a configuration example of the daemon.json file:
cat /etc/docker/daemon.json
{
  "live-restore": true,
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "labels": "production_status",
    "env": "os,customer"
  }
}
Don't forget to restart the Docker daemon after changing the file:
systemctl restart docker.service
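To confirm the daemon picked up the new settings, you can query the active logging driver (docker info accepts a Go-template --format flag):

docker info --format '{{.LoggingDriver}}'

This should print json-file.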
You can check the following documentation for further information about how to configure logging drivers.
Please let me know the results.
I am trying to deploy Elasticsearch on Azure Cloud. I installed the Elastic template from the Azure Marketplace and am able to access Kibana by hitting http://ipaddress:5601 with the user id and password given at creation time.
I am also able to access Elasticsearch at http://ipaddress:9200/ and get the following configuration:
{
  "name" : "myesclient-1",
  "cluster_name" : "myes",
  "cluster_uuid" : "........",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
Now I am facing problems with the following:
On which VM does Logstash run?
How do I start Logstash?
Where do I store the config files and the JDBC config file, and how do I run a BAT file periodically? The BAT file syntax for a normal VM is like:
Run
cd C:\logstash\logstash-6.2.2\bin
logstash -f C:\Users\basudeb\Desktop\config\jdbc.config
pause
The Elastic Azure ARM template does not currently deploy Logstash, only Elasticsearch and Kibana. There's an open issue to track this. If you feel it would be useful, please +1 the issue :)
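In the meantime, if you need Logstash you could install it manually on one of the cluster VMs or a separate machine. A rough sketch, assuming a Debian/Ubuntu VM and matching the 6.x series shown above:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-6.x.list
sudo apt-get update && sudo apt-get install logstash

With the deb package, pipeline configs (such as a JDBC config) go under /etc/logstash/conf.d/ and the service is started with sudo systemctl start logstash.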
I've been looking all over the web for a configuration example of the Logstash http input plugin, and tried to follow the ones I've found. I'm still running into problems with the following configuration:
input {
  http {
    host => "127.0.0.1"
    post => "31311"
    tags => "wpedit"
  }
}
output {
  elasticsearch { hosts => "localhost:9400" }
}
When running service logstash restart, it responds with Configuration error. Not restarting. Re-run with configtest parameter for details.
So I ran a configuration test (/opt/logstash/bin/logstash --configtest) and it says everything is fine.
So, my question is: how can I find what's wrong with the configuration? Can you see anything obviously incorrect? I'm fairly new to the world of Elasticsearch, if you could not tell...
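For what it's worth, one thing that stands out in the snippet above: the http input has no post setting; the option is port, and it takes a number. A corrected sketch of the same config (tags and hosts written as arrays, the idiomatic form):

input {
  http {
    host => "127.0.0.1"
    port => 31311
    tags => ["wpedit"]
  }
}
output {
  elasticsearch { hosts => ["localhost:9400"] }
}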
I'm developing a large-scale system (MEAN stack + Elasticsearch + RabbitMQ).
There are many different Node.js projects and queues working together.
I have a few questions.
When I want to run and test the whole system, I have to open a lot of terminal windows to run each project. How do I run them all at once, with easy monitoring?
When I want to run the same project on multiple machines, how can I easily configure all of them? It sometimes takes too much time to move around and configure them one by one.
How do I configure, run, monitor and manage the whole system easily? For example, I want to know how many machines are running a project. Or sometimes I want to change a message queue name or IP address everywhere at once; I don't want to go to every machine in both projects to change them one by one.
Sorry for my bad grammar; feel free to edit.
Thanks in advance.
Have a look at PM2.
I'm using it for development and in production.
With this tool you can define a simple JSON file that defines your environment.
pm2_services.json
[{
  "name" : "WORKER",
  "script" : "worker.js",
  "instances" : "3",
  "port" : 3002,
  "node-args" : "A_CONFIG_KEY"
}, {
  "name" : "BACKEND",
  "script" : "backend.js",
  "instances" : "3",
  "port" : 3000,
  "node-args" : "A_CONFIG_KEY"
}, {
  "name" : "FRONTEND",
  "script" : "frontend.js",
  "instances" : "3",
  "port" : 3001,
  "node-args" : "A_CONFIG_KEY"
}]
Then run pm2 start pm2_services.json
Relevant commands:
pm2 logs shows the logs of all services
pm2 ls shows the running processes
pm2 monit shows the current CPU and memory state
pm2 start FRONTEND starts a service
pm2 stop FRONTEND stops a service
NOTE:
Be careful with the watch feature of PM2.
In my case, my CPU jumped to a permanent 100%.
To watch many files for changes, I use node-dev.
And here's the solution to use it with PM2:
[{
  "name" : "WORKER",
  "script" : "worker.js",
  "instances" : 1,
  "watch" : false,
  "exec_interpreter" : "node-dev",
  "exec_mode" : "fork_mode"
}]
You could write a Node project which launches all the others with appropriate arguments using child_process; a rough sketch is below.
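A minimal sketch of that approach, assuming the worker.js, backend.js and frontend.js entry points from the PM2 example above (the names are illustrative):

// launch-all.js: start every project and prefix its output for easy monitoring
const { spawn } = require('child_process');

const projects = [
  { name: 'WORKER', script: 'worker.js' },
  { name: 'BACKEND', script: 'backend.js' },
  { name: 'FRONTEND', script: 'frontend.js' }
];

for (const { name, script } of projects) {
  const child = spawn('node', [script]);
  child.stdout.on('data', d => process.stdout.write(`[${name}] ${d}`));
  child.stderr.on('data', d => process.stderr.write(`[${name}] ${d}`));
  child.on('exit', code => console.log(`[${name}] exited with code ${code}`));
}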
You could consider a tool like Puppet or Chef.
Same as #2.