I have the following exec block in my Puppet code, which logs me in to an AWS ECR repository.
exec { 'aws ecr get-login':
  command => "aws ecr get-login --no-include-email --region eu-west-1 > /tmp/docker-login.sh;
              chmod a+x /tmp/docker-login.sh;
              /tmp/docker-login.sh > /tmp/docker.login",
  path    => ['/bin', '/usr/bin', '/usr/sbin'],
}
As it is now, it gets executed on every Puppet run, which is overkill. I would like to execute it only when the following block has changes:
docker::run { 'test':
  ensure        => present,
  image         => "image:${docker_tag}",
  pull_on_start => true,
}
I know I can work with notify; the thing is, when the docker block changes, I would like to run the exec before the docker::run statement is processed.
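One possible workaround (a sketch only, not something from the original setup): Puppet can only refresh a resource after the resource that notifies it has been applied, so instead of refreshing the exec from docker::run, you could make the exec cheap to re-run and simply order it before the container. The 11-hour freshness check below is an assumption, based on ECR login tokens being valid for 12 hours.
exec { 'aws ecr get-login':
  # Sketch: re-login only when the cached token file is missing or stale,
  # and always apply this exec before the container resource.
  command  => 'aws ecr get-login --no-include-email --region eu-west-1 > /tmp/docker-login.sh && chmod a+x /tmp/docker-login.sh && /tmp/docker-login.sh > /tmp/docker.login',
  path     => ['/bin', '/usr/bin', '/usr/sbin'],
  provider => shell,
  unless   => 'test -n "$(find /tmp/docker.login -mmin -660 2>/dev/null)"',
  before   => Docker::Run['test'],
}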
I am trying to download a file from a remote location into a directory with Puppet.
.pp file:
class test::test {
  exec { 'wget http://10.0.0.1/test/test.txt':
    cwd  => '/opt/check',
    path => ['/usr/bin', '/usr/sbin'],
  }
}
I also tried the wget module in Puppet:
class test::test {
  include ::wget
  wget::fetch { 'http://10.0.0.1/test/test.txt':
    destination => '/tmp/',
    timeout     => 0,
    verbose     => false,
  }
}
The file is not getting downloaded. Is there something I am doing wrong, or is there a better way?
Please let me know.
Run which wget on your node to ensure you are including the right path; on my CentOS machine it's /bin/wget.
I often (and it's probably a bad habit) include the full path in the command, so I'd put /bin/wget http://10.0.0.1/test/test.txt.
Have you tried running that command on the machine manually?
Check out this link: https://puppet.com/docs/puppet/5.5/type.html#exec
exec { 'get-test.txt':
  cwd     => '/opt/check',
  command => '/bin/wget http://10.0.0.1/test/test.txt',
  path    => ['/usr/bin', '/usr/sbin'],
  creates => '/opt/check/test.txt',
}
The creates attribute stops the exec from running if the file already exists; you want that, or every time Puppet runs (every 30 minutes by default) it will run the command again.
I just ran a test on my machine. I created a file test.pp and put this in it:
exec { 'get-google':
  cwd     => '/tmp',
  command => '/bin/wget http://www.google.com',
  path    => ['/usr/bin', '/usr/sbin'],
  creates => '/tmp/index.html',
}
Then I ran puppet apply test.pp and it worked. If you want to test small blocks of code, that's a handy way of doing it.
Also, does the /opt/check directory exist?
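If it doesn't, a minimal sketch of managing the directory alongside the exec (reusing the names from above) could look like this:
# Sketch: make sure the target directory exists and is applied before the exec.
file { '/opt/check':
  ensure => directory,
}

exec { 'get-test.txt':
  cwd     => '/opt/check',
  command => '/bin/wget http://10.0.0.1/test/test.txt',
  path    => ['/usr/bin', '/usr/sbin'],
  creates => '/opt/check/test.txt',
  require => File['/opt/check'],
}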
I am trying to use Hammer in Foreman 1.20.1 on CentOS 7.6 to refresh proxy features (or run just about any command other than --version) in a Puppet exec. The command works fine at the shell. It fails in the Puppet exec with:
Error: undefined local variable or method `dotfile' for
Notice: /Stage[main]/Profiles::Test/Exec[test]/returns: Did you mean?
@@dotfile Notice: /Stage[main]/Profiles::Test/Exec[test]/returns:
Error: No such sub-command 'proxy'.
The code I am using is:
class profiles::test {
  exec { 'test':
    command => '/usr/bin/hammer proxy refresh-features --name $(hostname)',
  }
}
include profiles::test
I'm not concerned about idempotency, as it will have refreshonly set; I just want to get the command to work.
I have tried adding other options such as path, user, environment, etc., to no avail. Any help is appreciated.
From clues I found at https://github.com/awesome-print/awesome_print/issues/316 and https://grokbase.com/t/gg/puppet-users/141mrjg2bw/problems-with-onlyif-in-exec, it turns out that the HOME environment variable has to be set. So the working code is:
exec { 'test':
  command     => '/usr/bin/hammer proxy refresh-features --name $(hostname)',
  environment => ["HOME=/root"],
  refreshonly => true,
}
f'ing ruby!
I'm trying to automate a deployment with Jenkins to an EC2 instance for the first time.
I have installed tomcat8 in the EC2 instance and changed the permissions of the tomcat8/webapps folder to 777 ($ chmod 777 webapps).
The EC2 host's key is in my known_hosts file.
I'm able to connect and copy the .war file into the server folder using scp from my console but it fails during the automation.
$ scp -i /Users/Shared/Jenkins/aws.pem /Users/Shared/Jenkins/Home/jobs/fully-automated/builds/28/archive/webapp/target/webapp.war ec2-user@35.158.118.56:/var/lib/tomcat8/webapps
== copies the *.war file to tomcat8/webapps ==
In Jenkins, I am getting:
[Deploy to Staging] + scp -i /Users/Shared/Jenkins/aws.pem /Users/Shared/Jenkins/Home/jobs/fully-automated/builds/28/archive/webapp/target/webapp.war ec2-user@35.158.118.56:/var/lib/tomcat8/webapps
[Deploy to Staging] Host key verification failed.
[Deploy to Staging] lost connection
The command from the console and the one in the Jenkinsfile are exactly the same. Why would it work from my machine and not from Jenkins?
Jenkinsfile:
pipeline {
    agent any
    tools {
        maven 'localMaven'
    }
    parameters {
        string(name: 'production', defaultValue: '54.93.78.130', description: 'Production server')
        string(name: 'staging', defaultValue: '35.158.118.56', description: 'Staging server')
    }
    triggers {
        pollSCM('* * * * *')
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn clean package'
            }
            post {
                success {
                    echo 'Now Archiving...'
                    archiveArtifacts artifacts: '**/target/*.war'
                }
            }
        }
        stage('Deployments') {
            parallel {
                stage('Deploy to Staging') {
                    steps {
                        sh "scp -i /Users/Shared/Jenkins/aws.pem /Users/Shared/Jenkins/Home/jobs/fully-automated/builds/28/archive/webapp/target/*.war ec2-user@${params.staging}:/var/lib/tomcat8/webapps"
                    }
                }
                stage('Deploy to Production') {
                    steps {
                        sh "scp -i /Users/Shared/Jenkins/aws.pem /Users/Shared/Jenkins/Home/jobs/fully-automated/builds/28/archive/webapp/target/*.war ec2-user@${params.production}:/var/lib/tomcat8/webapps"
                    }
                }
            }
        }
    }
}
Thanks for your help!
This is a common mistake many people make. You have given your own username permission to access EC2, not the "jenkins" user. Do the same thing, but this time for the jenkins user.
Jenkins has its own user called "jenkins", which you can see in the users folder. Create the SSH key there and pass it to EC2, and everything should work fine :)
To confirm: SSH into the server with your own username, then try it with the jenkins username; it will not work until you make the changes above.
Hope this helps :)
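For reference, a rough sketch of making the EC2 host known to the jenkins user on the Jenkins master (the jenkins home path is an assumption; adjust it to your install):
# Run as root on the Jenkins master. Paths are assumptions; adjust to your install.
# Adding the EC2 host key to the jenkins user's known_hosts stops scp failing
# with "Host key verification failed".
install -d -o jenkins -g jenkins /var/lib/jenkins/.ssh
ssh-keyscan -H 35.158.118.56 >> /var/lib/jenkins/.ssh/known_hosts
chown jenkins: /var/lib/jenkins/.ssh/known_hosts

# Then verify the connection as the jenkins user with the same key the pipeline uses:
sudo -u jenkins ssh -i /Users/Shared/Jenkins/aws.pem ec2-user@35.158.118.56 'echo ok'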
Suppose I want to make sure that my VM has devstack on it.
exec { "openstack":
  command => "git clone https://git.openstack.org/openstack-dev/devstack",
}
This is the Puppet code I wrote for it, and it works fine the first time. Now I want to add a check: I want to clone the repository only if that has not been done already. How do I do that?
You say
exec { 'openstack':
  command => 'git clone https://git.openstack.org/openstack-dev/devstack',
  creates => '/path/to/somewhere/devstack',
  cwd     => '/path/to/somewhere',
  path    => '/usr/bin',
}
Now, if the directory /path/to/somewhere/devstack exists, the clone command won't run.
exec { 'openstack':
  command => 'git clone https://git.openstack.org/openstack-dev/devstack /path/to/devstack',
  unless  => 'test -d /path/to/devstack',
  path    => '/usr/bin:/bin',
}
It's a really hacky way to handle this, though. You should look into the vcsrepo Puppet module: https://github.com/puppetlabs/puppetlabs-vcsrepo
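For example, a minimal sketch with vcsrepo (the target path is an assumption, borrowed from the example above):
# Sketch using the puppetlabs-vcsrepo module; clones on the first run and
# keeps the checkout present afterwards.
vcsrepo { '/path/to/devstack':
  ensure   => present,
  provider => git,
  source   => 'https://git.openstack.org/openstack-dev/devstack',
}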
I have a Docker container that runs great on my local development machine. I would like to move this to AWS Elastic Beanstalk, but I am running into a small bit of trouble.
I am trying to mount an S3 bucket to my container by using s3fs. I have the Dockerfile:
FROM tomcat:7.0
MAINTAINER me@example.com
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y build-essential libfuse-dev libcurl4-openssl-dev libxml++2.6-dev libssl-dev mime-support automake libtool wget tar
# Add the java source
ADD . /path/to/tomcat/webapps/
ADD run_docker.sh /root/run_docker.sh
WORKDIR $CATALINA_HOME
EXPOSE 8080
CMD ["/root/run_docker.sh"]
And I install s3fs, mount an S3 bucket, and run the Tomcat server after the image has been created, by running run_docker.sh:
#!/bin/bash
#run_docker.sh
wget https://github.com/s3fs-fuse/s3fs-fuse/archive/master.zip -O /usr/src/master.zip;
cd /usr/src/;
unzip /usr/src/master.zip;
cd /usr/src/s3fs-fuse-master;
autoreconf --install;
CPPFLAGS=-I/usr/include/libxml2/ /usr/src/s3fs-fuse-master/configure;
make;
make install;
cd $CATALINA_HOME;
mkdir /opt/s3-files;
s3fs my-bucket /opt/s3-files;
catalina.sh run
When I build and run this Docker container using the command:
docker run --cap-add mknod --cap-add sys_admin --device=/dev/fuse -p 80:8080 -d username/mycontainer:latest
it works well. Yet, when I remove the --cap-add mknod --cap-add sys_admin --device=/dev/fuse, then s3fs fails to mount my S3 bucket.
Now, I would like to run this on AWS Elastic Beanstalk, and when I deploy the container (and run run_docker.sh), all the steps execute fine, except the step s3fs my-bucket /opt/s3-files in run_docker.sh fails to mount the bucket.
Presumably, this is because whatever Elastic Beanstalk does to run a Docker container, it doesn't add any additional flags like --cap-add mknod --cap-add sys_admin --device=/dev/fuse.
My Dockerrun.aws.json file looks like:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "tomcat:7.0"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
Is it possible to add additional docker run flags to an AWS EB Docker deployment?
An alternative option is to find another way to mount an S3 bucket, but I suspect I'd run into similar permission errors regardless. Has anyone seen a way to accomplish this?
UPDATE:
For people trying to use @Egor's answer below, it works when the EB configuration is set to use v1.4.0 running Docker 1.6.0. Anything past the v1.4.0 version fails. So to make it work, build your environment as normal (which should give you a failed build), then rebuild it with a v1.4.0 running Docker 1.6.0 configuration. That should do it!
If you are using the latest version of the AWS Docker stack (Docker 1.7.1, for example), you'll need to slightly modify the above answer. Try this:
commands:
  00001_add_privileged:
    cwd: /tmp
    command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
Note the change of location and name of the run script.
Add the file .ebextensions/01-commands.config:
container_commands:
  00001-docker-privileged:
    command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/pre/04run.sh'
I am also using s3fs
Thanks elijahchancey for the answer, it was very helpful. I would just like to add a small comment:
Elastic Beanstalk now uses ECS tasks to deploy and manage the application cluster. There is a very important paragraph in the Multicontainer Docker Configuration docs (which I originally missed):
The following examples show a subset of parameters that are commonly used. More optional parameters are available. For more information on the task definition format and a full list of task definition parameters, see Amazon ECS Task Definitions in the Amazon ECS Developer Guide.
So the document is not a complete reference; it just shows typical entries, and you are supposed to find the rest elsewhere. This has quite a major impact, because now (2018) you are able to specify more options and you don't need to hack ebextensions any more. The only thing you need to do is use the task definition parameters in the containerDefinitions of your multicontainer Dockerrun.aws.json.
This is not mentioned for single-container Docker environments, but one can try and verify...
Example of a multicontainer Dockerrun.aws.json with an extra capability:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "service1",
      "image": "myapp/service1:latest",
      "essential": true,
      "memoryReservation": 128,
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 8080
        }
      ],
      "linuxParameters": {
        "capabilities": {
          "add": [
            "SYS_PTRACE"
          ]
        }
      }
    }
  ]
}
You can now add capabilities using the task definition. Here are the docs:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
This is specifically what you would add to your task definition:
"linuxParameters": {
  "capabilities": {
    "add": [
      "SYS_PTRACE"
    ]
  }
},