Creating a Linux user password automatically with CloudFormation

I am using CloudFormation to create an EC2 Linux machine which I can RDP into. I am running the required install commands like so:
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"commands" : {
"step_a" : {
"command" : "install some stuff..."
},
"step_g" : {
"command" : "sudo yum groupinstall \"Gnome Desktop\" -y"
},
"step_h" : {
"command" : "sudo passwd centos"
},
"step_i" : {
"command" : "configure firewall to allow rdp..."
}
}
}
}
}
As you can see though, in step_h, I want to set a password for the default centos user. How can I automatically return a default password value when user input is required? Maybe there is a better way of going about this problem?
Any help would be greatly appreciated!

There are several ways of setting a user password without a prompt. Some examples can be found here or here.
However, the real issue to consider is how you are going to provide this password in the template. Hardcode it in plain text? That is a security issue. Pass it as a template parameter with NoEcho? Better, but not reproducible and prone to mistakes. Use SSM Parameter Store or AWS Secrets Manager? That would probably be the best option, but it requires extra setup.
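For example, assuming a NoEcho template parameter named CentosPassword (the name is just illustrative), the interactive prompt can be avoided by piping the value to chpasswd instead of calling passwd:
"Parameters" : {
  "CentosPassword" : { "Type" : "String", "NoEcho" : "true" }
},
...
"step_h" : {
  "command" : { "Fn::Sub" : "echo 'centos:${CentosPassword}' | chpasswd" }
}
Keep in mind that NoEcho does not mask values that end up in the Metadata section, so the resolved command can still expose the password to anyone who can read the stack's resource metadata.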

Related

Efficient way to disable a user to sudo via puppet

I have Puppet 6 installed in my environment and would like to ensure that the user centos cannot sudo on any of my agents. I can create something like this:
modules/sudoers/manifests/init.pp
# Manage the sudoers file
class sudoers {
  file { '/etc/sudoers':
    source => 'puppet:///modules/sudoers/sudoers',
    mode   => '0440',
    owner  => 'root',
    group  => 'root',
  }
}
And then create a modules/sudoers/files/sudoers file, put the content I like in there, and make sure the centos line is commented out:
#centos ALL=(ALL) NOPASSWD: ALL
But this is very lengthy, and in Puppet 3 I only had to set sudo::disable_centos: true in Hiera. Is there a better way to let Puppet prevent the centos user from using sudo? Thank you
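For what it's worth, a Hiera-driven sketch in the spirit of the old sudo::disable_centos flag could look roughly like this (the class layout and the drop-in path are assumptions, not a tested module):
# modules/sudoers/manifests/init.pp
class sudoers (
  Boolean $disable_centos = false,  # set via Hiera: sudoers::disable_centos: true
) {
  if $disable_centos {
    # assumes the centos sudo grant lives in its own drop-in file
    file { '/etc/sudoers.d/90-centos':
      ensure => absent,
    }
  }
}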

Elasticsearch Azure deployment Logstash details

I am trying to deploy Elasticsearch on Azure Cloud. I installed the Elastic template from the Azure Marketplace and am able to access Kibana by hitting the URL http://ipaddress:5601 with the user id and password given at creation time.
I am also able to access Elasticsearch at http://ipaddress:9200/ and get the configuration below:
{
  "name" : "myesclient-1",
  "cluster_name" : "myes",
  "cluster_uuid" : "........",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
Now I am facing problems with:
Which VM does Logstash run on?
How do I start Logstash?
Where do I store the config files and the JDBC config file, and how do I run a BAT file periodically? The BAT file syntax for a normal VM is like:
Run
cd C:\logstash\logstash-6.2.2\bin
logstash -f C:\Users\basudeb\Desktop\config\jdbc.config
pause
The Elastic Azure ARM template does not currently deploy Logstash, only Elasticsearch and Kibana. There's an open issue to track this. If you feel it would be useful, please +1 the issue :)

Filebeat service hangs on restart

I have a weird problem with Filebeat.
I am using CloudFormation to run my stack, and as part of that I am installing and running Filebeat for log aggregation.
I inject /etc/filebeat/filebeat.yml into the machine and then I need to restart Filebeat.
The problem is that Filebeat hangs and the entire provisioning gets stuck (note that if I SSH into the machine and issue "sudo service filebeat restart" myself, the provisioning becomes unstuck and continues). I tried restarting it both via the services section and the commands section of CloudFormation::Init, and they both hang.
I haven't tried it via the UserData, but that's the worst possible solution for it.
Any ideas why?
Snippets from the template; both of these hang as mentioned:
"commands" : {
"01" : {
"command" : "sudo service filebeat restart",
"cwd" : "~",
"ignoreErrors" : "false"
}
}
"services" : {
"sysvinit" : {
"filebeat" : {
"enabled" : "true",
"ensureRunning" : "true",
"files" : ["/etc/filebeat/filebeat.yml"]
}
}
}
Well, this does sound like some sort of lock. According to the docs, you should declare a dependency on the file in the filebeat service under the services section, and that will cause the Filebeat restart you need.
Apparently, the services section supports a files attribute:
A list of files. If cfn-init changes one directly via the files block, this service will be restarted.
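In other words, if filebeat.yml is delivered through a files block in the same config, cfn-init itself takes care of restarting the service after writing it, so the explicit restart command is not needed. A rough sketch (the source URL is a placeholder):
"AWS::CloudFormation::Init" : {
  "config" : {
    "files" : {
      "/etc/filebeat/filebeat.yml" : {
        "source" : "https://example.com/filebeat.yml",
        "mode"   : "000644",
        "owner"  : "root",
        "group"  : "root"
      }
    },
    "services" : {
      "sysvinit" : {
        "filebeat" : {
          "enabled"       : "true",
          "ensureRunning" : "true",
          "files"         : ["/etc/filebeat/filebeat.yml"]
        }
      }
    }
  }
}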

Puppet reboot in stages

I need to do a two-step installation of a CentOS 6 host with Puppet (currently using puppet apply) and got stuck. I am not even sure it's currently possible.
Step 1: set up the base system, e.g. hosts, ntp, mail and some driver stuff.
(reboot required)
Step 2: set up a custom service.
Can this be done in a smooth way? I'm not very familiar with the Puppet environment yet.
First off, I very much doubt that any setup steps on a CentOS machine strictly require a reboot. It is usually sufficient to restart the right set of services to make all settings take effect.
Anyway, a basic approach to this type of problem could be to:
Define a custom fact that determines whether a machine is ready to receive the final configuration steps (Step 2 in your question)
Protect the pertinent parts of your manifest with an if condition that uses that fact value.
You may want to create a file first, then delete it when you are done installing the base system (ntp in the example below).
For example:
exec { '/tmp/reboot':
  path    => '/usr/bin:/bin:/sbin',
  command => 'touch /tmp/reboot',
  onlyif  => 'test ! -f /tmp/rebooted',
}

service { 'ntp':
  require => Exec['/tmp/reboot'],
  ...
}

exec { 'reboot':
  command => 'mv /tmp/reboot /tmp/rebooted; reboot',
  path    => '/usr/bin:/bin:/sbin',
  onlyif  => 'test -f /tmp/reboot',
  require => Service['ntp'],
  creates => '/tmp/rebooted',
}
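And a rough sketch of the custom-fact variant from the first two points, assuming an external fact named base_setup_done and a class called custom_service (both names are made up for illustration; the facts.d path may differ between Facter versions):
# Step 1: once the base system is configured, drop an external fact
file { '/etc/facter/facts.d/base_setup_done.txt':
  ensure  => file,
  content => "base_setup_done=true\n",
}

# Step 2: only applied on a later run, once the fact is present
if $::base_setup_done == 'true' {
  include custom_service
}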

How to configure, run, monitor and manage multiple different Node services?

I'm developing a large-scale system (MEAN stack + Elasticsearch + RabbitMQ).
There are many different Node.js projects and queues working together.
I have a few questions.
1. When I want to run and test the whole system, I have to open a lot of terminal windows to run each project. How do I run them all at once and monitor them easily?
2. When I want to run the same project on multiple machines, how can I easily configure all of them? Sometimes it takes too much time to move around and configure them one by one.
3. How do I configure, run, monitor and manage the whole system easily? For example, I want to know how many machines are running a project. Or sometimes I want to change a message queue name or IP address everywhere at once; I don't want to go to every machine on both projects to change them one by one.
Sorry for my bad grammar, feel free to edit.
Thanks in advance.
Have a look at PM2.
I'm using it in development and in production.
With this tool you can define a simple JSON file that defines your environment.
pm2_services.json
[{
  "name" : "WORKER",
  "script" : "worker.js",
  "instances" : "3",
  "port" : 3002,
  "node-args" : "A_CONFIG_KEY"
}, {
  "name" : "BACKEND",
  "script" : "backend.js",
  "instances" : "3",
  "port" : 3000,
  "node-args" : "A_CONFIG_KEY"
}, {
  "name" : "FRONTEND",
  "script" : "frontend.js",
  "instances" : "3",
  "port" : 3001,
  "node-args" : "A_CONFIG_KEY"
}]
Then run pm2 start pm2_services.json
Relevant commands:
pm2 logs shows the logs of all services
pm2 ls lists the running processes
pm2 monit shows the current CPU and memory usage
pm2 start FRONTEND starts a service
pm2 stop FRONTEND stops a service
NOTE:
Be careful with the watch feature of PM2.
In my case the CPU jumped to a permanent 100%.
To watch many files for changes I use node-dev.
And here's how to use it with PM2:
[{
  "name" : "WORKER",
  "script" : "worker.js",
  "instances" : 1,
  "watch" : false,
  "exec_interpreter" : "node-dev",
  "exec_mode" : "fork_mode"
}]
1. You could write a Node project which launches all the other ones with appropriate arguments using child_process (see the sketch below).
2. You could consider a tool like Puppet or Chef.
3. Same as #2.
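For the first point, a minimal launcher sketch using child_process might look like this (the script names and config argument are borrowed from the PM2 example above, adjust to your projects):
// launch_all.js
const { spawn } = require('child_process');

const services = [
  { name: 'WORKER',   script: 'worker.js',   args: ['A_CONFIG_KEY'] },
  { name: 'BACKEND',  script: 'backend.js',  args: ['A_CONFIG_KEY'] },
  { name: 'FRONTEND', script: 'frontend.js', args: ['A_CONFIG_KEY'] },
];

// start each service and share this terminal's stdout/stderr so all logs land in one place
services.forEach(({ name, script, args }) => {
  const child = spawn('node', [script, ...args], { stdio: 'inherit' });
  child.on('exit', code => console.log(`${name} exited with code ${code}`));
});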
