Vagrant/Puppet: using wget to fetch and extract a .zip file on Linux

Hi, I am creating a Vagrant setup and I need to fetch a .zip file that will be put in the /vagrant/www directory.
The way I am trying to do this is:
exec { 'setup octobercms':
  command   => "/bin/sh -c 'wget -P /vagrant/www https://github.com/octobercms/install/archive/master.zip'",
  timeout   => 900,
  logoutput => true,
}
When vagrant up is triggered I can see that the file is downloading, but it never appears in the /vagrant/www directory. The file has nothing to do with Vagrant itself; it will be used to install October CMS.
When using Puppet, what is the best way to fetch a zipped file, extract its contents into a directory, and then remove the zipped archive?
Any help would be much appreciated.

The exec command is run in a generic spawned process and does not inherit the working directory or environment of the user account invoking it. Try:
exec { 'setup octobercms':
  command   => "/bin/sh -c 'cd /vagrant/www && wget https://github.com/octobercms/install/archive/master.zip'",
  timeout   => 900,
  logoutput => true,
}
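To cover the full question (fetch, extract, clean up), a minimal sketch is a single guarded exec that chains wget, unzip, and rm. This assumes unzip is installed in the box and that GitHub's master.zip unpacks to an install-master directory (both assumptions; adjust to what the archive actually contains). The creates parameter keeps the exec idempotent, so it is skipped once the extracted directory exists:

exec { 'fetch and extract octobercms':
  command   => "/bin/sh -c 'wget -q -P /vagrant/www https://github.com/octobercms/install/archive/master.zip && unzip -o /vagrant/www/master.zip -d /vagrant/www && rm /vagrant/www/master.zip'",
  creates   => '/vagrant/www/install-master',
  timeout   => 900,
  logoutput => true,
}

Alternatively, the puppet-archive module wraps this download-and-extract pattern in a single resource.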

Related

Ansible - Run playbooks using systemd (.service files)

I would like to start an Ansible playbook through a service.
The problem is that an exception occurs if I try to start a specific playbook via a .service file.
Normal execution via the command line doesn't throw any exceptions.
The exact error is as follows:
ESTABLISH LOCAL CONNECTION FOR USER: build
EXEC /bin/sh -c '( umask 77 && mkdir -p "`echo /tmp`"&& mkdir "`echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270`" && echo ansible-tmp-1628159196.970389-90181-42979741793270="`echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270`" ) && sleep 0'
fatal: [Ares]: UNREACHABLE! => changed=false
msg: 'Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p "`echo /tmp`"&& mkdir "`echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270`" && echo ansible-tmp-1628159196.970389-90181-42979741793270="`echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270`" ), exited with result 127'
unreachable: true
I've tried the following:
- "Authentication or permission failure, did not have permissions on the remote directory"
- https://github.com/ansible/ansible/issues/43830
- Generally searching for an answer, but all I could find was advice to change remote_tmp to /tmp
- Changing the permissions of the /tmp folder to 777
- Changing the user for the service to both build and root
- Changing the group for the service to both build and root
Current configuration:
The remote_tmp is set to /tmp
The rights for /tmp are: drwxrwxrwt. 38 root root
This is the service I am starting:
[Unit]
Description=Running vmware in a service
After=network.target
[Service]
User=build
Group=build
WorkingDirectory=/home/build/dev/sol_project_overview/ansible-interface
Environment="PATH=/home/build/dev/sol_project_overview/ansible-interface/venv/bin"
ExecStart=/home/build/dev/sol_project_overview/ansible-interface/venv/bin/ansible-playbook ./playbooks/get_vm_data_playbook.yml --vault-password-file password.txt -vvv
[Install]
WantedBy=multi-user.target
The exact Ansible task that throws this exception:
- name: Write Disks Data to file
  template:
    src: template.j2
    dest: /home/build/dev/sol_project_overview/tmp/vm_data
  delegate_to: localhost
  run_once: yes
Also, normally I would run a Python script via this service file, which would call Ansible when certain conditions are met. But the same error occurs with a Python script started by the service.
All of this leads me to think that the problem is with the .service file... I just don't know what.
Any help is appreciated.
EDIT: SELinux is disabled
So I found the problem:
When debugging with -vvvv (four times verbose) you get an even more precise error message:
"msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p "echo /tmp/.ansible/tmp"&& mkdir "echo /tmp/.ansible/tmp/ansible-tmp-1629205650.8804808-397364-56399467035196" && echo ansible-tmp-1629205650.8804808-397364-56399467035196="echo /tmp/.ansible/tmp/ansible-tmp-1629205650.8804808-397364-56399467035196" ), exited with result 127**, stderr output: /bin/sh: mkdir: command not found\n",**
and in the last part there is this information:, stderr output: /bin/sh: mkdir: command not found\n",
So after googling I realized the problem was the PATH variable I am setting in my .service file.
This was the problem:
Environment="PATH=/home/build/dev/sol_project_overview/ansible-interface/venv/bin"
The shell couldn't find mkdir because the bin directory where mkdir lives wasn't included in the PATH variable.
What was left to do was to set the service's PATH variable correctly. To do so, I took the PATH variable from the corresponding virtual environment while it was active.
Lesson: if you are working with virtual environments and want services to use them, set the service's PATH variable to the PATH of the activated virtual environment.
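For example, a corrected unit entry might keep the venv's bin directory first and append the standard system directories (the exact list is an assumption; copy the real value from echo $PATH with the venv activated):

[Service]
Environment="PATH=/home/build/dev/sol_project_overview/ansible-interface/venv/bin:/usr/local/bin:/usr/bin:/bin"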

Is it possible to check if a folder exists inside a shell command running in exec?

I'm trying to create a one-liner in Node using exec. The idea is to create a folder called admin and untar a file into it, so:
mkdir admin
tar xvfz release.tar.gz -C admin/
The problem is that sometimes admin already exists (that's OK, I want to overwrite its contents), and then the mkdir in the exec call triggers an error:
exec('mkdir admin && tar xvfz release.tar.gz -C admin/', (err, stdout, stderr) => {
  if (err) { /* mkdir fails when the folder exists */ }
});
Is there a way to elegantly continue if mkdir fails? Ideally, I want to clean the contents of admin (something like rm -rf admin/) so the new untar starts fresh, but then again, that command could fail too.
PS: I know I can check for the folder with fs before launching exec, but I'm interested in an all-in-one exec solution, if possible.
EDIT: The question "How to mkdir only if a dir does not already exist?" is similar, but it is about the specific use of mkdir alone; this one is about concatenation and error propagation.
You don't need to have mkdir fail on an existing target; you can use the --parents flag:
-p, --parents
       no error if existing, make parent directories as needed
turning your one-liner into:
exec('mkdir -p admin && tar xvfz release.tar.gz -C admin/', (err, stdout, stderr) => {
  // continue
});
Alternatively, you could use ; instead of && to chain the calls, which always continues regardless of the exit code:
exec('mkdir admin; tar xvfz release.tar.gz -C admin/', (err, stdout, stderr) => {
  // continue
});
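If you also want the fresh start mentioned in the question, one option (a sketch; rm -rf is destructive, so double-check the path) is to remove and recreate the directory in the same chain:

const { exec } = require('child_process');

exec('rm -rf admin && mkdir admin && tar xvfz release.tar.gz -C admin/', (err, stdout, stderr) => {
  // admin is recreated empty before extracting, so files from a previous
  // release cannot linger; rm -rf also succeeds when admin does not exist,
  // so the && chain works on both first and repeat runs
});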

docker exec with standard output logged in a file inside the docker container

I am currently running a cron job on a host machine (Red Hat Linux) that executes a script in a Docker container. The problem is that when I redirect the standard output to a file whose path is inside the Docker container, the cron job fails, saying that the path of the log file cannot be found. But if I change the output log file path to a path on the host machine, it works fine.
Below does not work
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh > /path/in/docker/container/script.shout
But this one works
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh > /path/in/host/script.shout
How do I get the first cronjob working so I can have the output file in the docker container using the path in the docker container?
I don't want to run the cron job as root, which is why I need sudo before docker exec. Please note that only root has access to the Docker volume path on the host machine, which is why I can't use the volume path either.
Cron runs your command with a shell, so the output redirect is handled by the shell running on your host, not inside your container. To get shell constructs like this to run inside the container, you need to run a shell as your docker command, and escape or quote any of those shell operators to avoid having them interpreted until you are inside the container. E.g.:
0 9 * * 1-5 sudo docker exec -i container_name /bin/sh -c "/path/in/docker/container/script.sh > /path/in/docker/container/script.shout"
I would rather pass the redirection path as a parameter to the script (dropping the >) and make the script itself redirect its output to that parameter file.
Since the script is executed inside the Docker container, it sees container paths, as opposed to the cron job, which by default sees host paths.
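A sketch of that approach (container_name and the argument handling are assumptions):

0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh /path/in/docker/container/script.shout

with script.sh redirecting everything to the file given as its first argument:

#!/bin/sh
# Send all remaining stdout and stderr of this script to the log file passed as $1
exec > "$1" 2>&1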
We can use bash -c and put the redirect command between double quotes, as in this command:
docker exec ${CONTAINER_ID} bash -c "./path/in/container/script.sh > /path/in/container/out"
And we have to be sure that /path/in/container/script.sh is an executable file, either by using this command from inside the container:
chmod +x /path/in/container/script.sh
or by using this command from the host machine:
docker exec ${CONTAINER_ID} chmod +x /path/in/container/script.sh
You can use tee, a program that reads stdin and writes the same data to both stdout and the file specified as an argument:
echo 'foo' | tee file.txt
writes the text 'foo' to file.txt.
Your desired command becomes:
0 9 * * 1-5 sudo docker exec -i container_name /path/in/docker/container/script.sh | tee /path/in/docker/container/script.shout
The drawback is that you also dump to stdout.
You may check this SO question for further possibilities and workarounds.

Using sudo command in a child process using node

I am trying to copy a bundle directory into a root-owned directory on a remote server. I am doing this with Node, and so far I have managed to pipe the tar content to the server and untar it. However, when I try to move the directory to the root-owned folder it requires sudo access, and I just couldn't find a way to do that. I tried ssh's -t option for a pseudo-terminal, but I guess that only works when running from a shell. Here is what I have done so far; any help is highly appreciated:
const path = require("path");
const exec = require('child_process').exec;

const absolutePath = path.resolve(__dirname, "../");
const allCommands = [];

/*
 * 1) cd to the root folder of the app
 * 2) tar the dist folder and pipe the result to the ssh connection
 * 3) connect to the server with ssh
 * 4) try to create the dist and old_dists folders; if they already exist the
 *    mkdir calls error out and the rest of the script continues running
 * 5) cp the contents of dist to old_dists/dist_$(dateofmoment) so that if
 *    something goes wrong you have a backup of the existing config
 * 6) untar the piped tar content into dist, keeping only files under the
 *    first parent directory (--strip-components=1; a value of 2 would strip
 *    two levels from the root folder)
 */
allCommands.push("cd " + absolutePath);
allCommands.push("tar -czvP dist | ssh hostnameofmyserver 'mkdir dist ; mkdir old_dists; cp -R dist/ old_dists/dist_$(date +%Y%m%d_%H%M%S) && tar -xzvP -C dist --strip-components=1'");
// I would like to untar the incoming file into /etc/myapp, for example, rather
// than my home directory; that requires sudo and I don't know how to handle it
exec(allCommands.join(" && "), (error, stdout, stderr) => {
  console.log(`stdout: ${stdout}`);
  console.log(`stderr: ${stderr}`);
  if (error !== null) {
    console.log(`exec error: ${error}`);
  }
});
Also, what is the best place to store a web application folder on an Ubuntu server where multiple users can deploy apps? Is it good practice to make the root user the owner of the directory, or does it just not matter?
As noted in the man page for ssh, you can specify multiple -t arguments to force pty allocation even if the OpenSSH client's stdin is not a tty (which it won't be by default when you spawn a child process in node).
From there you should be able to simply write the password to the child process's .stdin stream when you see the sudo prompt on the .stdout stream.
On a semi-related note, if you want more (programmatic) control over the ssh connection or you don't want to spin up a child process, there is the ssh2 module. You could even do the tarring within node too if you wanted, as there are also tar modules on npm.
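A minimal sketch of that flow, assuming the hostnameofmyserver alias from the question and that sudo's prompt contains the word "password" (both assumptions; in practice read the password from a secure source rather than hard-coding it):

const { spawn } = require('child_process');

// -tt forces pty allocation even though stdin is not a tty,
// so sudo's password prompt arrives on stdout
const ssh = spawn('ssh', ['-tt', 'hostnameofmyserver',
  'sudo mkdir -p /etc/myapp && sudo cp -R ~/dist/. /etc/myapp/']);

ssh.stdout.on('data', (chunk) => {
  // Watch for the sudo prompt and answer it on stdin
  if (chunk.toString().toLowerCase().includes('password')) {
    ssh.stdin.write('my-sudo-password\n'); // hypothetical placeholder
  }
  process.stdout.write(chunk);
});

ssh.stderr.pipe(process.stderr);
ssh.on('close', (code) => console.log(`ssh exited with code ${code}`));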

How to call the sudo command in Linux using Puppet

I am using the script below to call the sudo command on Red Hat Linux with Puppet 3.7.
Exec {
  cwd       => "/home/dev02",
  command   => "sudo -su dev01",
  path      => "/usr/bin/",
  logoutput => "on_failure",
}
I am not getting any errors, but after executing this script, when I check my user with "whoami" I still see dev02 instead of dev01.
Can someone help me with this?
Thanks in advance.
This command will not do what you expect, because every exec resource command runs in its own spawned process: the shell started by sudo -su dev01 exits as soon as the exec finishes. If you want to execute a command as another user, the exec resource has a user parameter.
For example:
exec { 'Touch some file':
  cwd       => '/home/dev02',
  command   => 'touch some_file',
  path      => '/usr/bin/',
  logoutput => 'on_failure',
  user      => 'dev01',
}
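Note that switching users via the user parameter only works when the Puppet agent itself runs with enough privilege (typically root) to change to dev01; run as an unprivileged user, the exec will fail rather than escalate.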
