I would like to start an Ansible playbook through a service.
The problem is that an exception occurs when I try to start a specific playbook via a .service file.
Normal execution from the command line doesn't throw any exceptions.
The exact error is as follows:
ESTABLISH LOCAL CONNECTION FOR USER: build
EXEC /bin/sh -c '( umask 77 && mkdir -p "`echo /tmp`"&& mkdir "`echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270`" && echo ansible-tmp-1628159196.970389-90181-42979741793270="`echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270`" ) && sleep 0'
fatal: [Ares]: UNREACHABLE! => changed=false
msg: 'Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p "`echo /tmp`"&& mkdir "`echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270`" && echo ansible-tmp-1628159196.970389-90181-42979741793270="`echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270`" ), exited with result 127'
unreachable: true
I've tried the following:
Authentication or permission failure, did not have permissions on the remote directory
https://github.com/ansible/ansible/issues/43830
Generally searching for an answer, but all I could find was advice to change remote_tmp to /tmp
Changing the permissions to 777 for the /tmp folder
Changing the user for the service to both build and root
Changing the group for the service to both build and root
Current configuration:
The remote_tmp is set to /tmp
The rights for /tmp are: drwxrwxrwt. 38 root root
This is the service I am starting:
[Unit]
Description=Running vmware in a service
After=network.target
[Service]
User=build
Group=build
WorkingDirectory=/home/build/dev/sol_project_overview/ansible-interface
Environment="PATH=/home/build/dev/sol_project_overview/ansible-interface/venv/bin"
ExecStart=/home/build/dev/sol_project_overview/ansible-interface/venv/bin/ansible-playbook ./playbooks/get_vm_data_playbook.yml --vault-password-file password.txt -vvv
[Install]
WantedBy=multi-user.target
The exact ansible task that throws this exception:
- name: Write Disks Data to file
  template:
    src: template.j2
    dest: /home/build/dev/sol_project_overview/tmp/vm_data
  delegate_to: localhost
  run_once: yes
Also, normally I would run a Python script via this service file, which would call Ansible when certain conditions are met. But the same error occurs with a Python script started by the service.
All of this makes me think that the problem is with the .service file... I just don't know what it is.
Any help is appreciated.
EDIT: SELinux is disabled
So I found the problem:
When debugging with -vvvv (4x verbose) you get an even more precise error message:
"msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p "echo /tmp/.ansible/tmp"&& mkdir "echo /tmp/.ansible/tmp/ansible-tmp-1629205650.8804808-397364-56399467035196" && echo ansible-tmp-1629205650.8804808-397364-56399467035196="echo /tmp/.ansible/tmp/ansible-tmp-1629205650.8804808-397364-56399467035196" ), exited with result 127**, stderr output: /bin/sh: mkdir: command not found\n",**
And in the last part there is the decisive information: stderr output: /bin/sh: mkdir: command not found
So after googling I realized the problem was the PATH variable I was setting in my .service file.
This was the problem:
Environment="PATH=/home/build/dev/sol_project_overview/ansible-interface/venv/bin"
It couldn't find mkdir because the system bin directory where mkdir lives (/bin or /usr/bin) wasn't included in the PATH variable.
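You can reproduce this outside of systemd by restricting PATH to just the venv's bin directory (a sketch; any PATH that lacks the system bin directories behaves the same way):
env PATH=/home/build/dev/sol_project_overview/ansible-interface/venv/bin /bin/sh -c 'mkdir -p /tmp/demo'
# fails with "mkdir: command not found" and exit status 127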
What was left to do was to set the PATH variable of the service correctly. To do so, I took the PATH variable from the corresponding virtual environment while it was active.
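For example, a minimal sketch of the corrected line (the venv path is taken from the unit above; the exact system directories may differ per distribution):
Environment="PATH=/home/build/dev/sol_project_overview/ansible-interface/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"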
Lesson: if you are working with virtual environments and want services to use them, change the service's PATH variable to that of the virtual environment (as it appears while the environment is active).
Related
I am trying to deploy to an Azure VM Scale Set using the Run Custom Script VM extension on VM scale set task in an Azure DevOps release pipeline. I have a shell script which executes post-deployment tasks.
In the release pipeline I am using a storage account to archive artifacts and have also unchecked Skip Archiving custom scripts. In the VMSS deployment task I am getting the following error:
2020-03-06T22:59:44.7864691Z ##[error]Failed to install VM custom script extension on VMSS.
Error: VM has reported a failure when processing extension 'AzureVmssDeploymentTask'.
Error message: "Enable failed: failed to execute command: command terminated with exit status=126
[stdout]
extracting archive cs.tar.gz
Invoking command: ./"main.sh"
[stderr]
./customScriptInvoker.sh: line 12: ./main.sh: Permission denied
I found customScriptInvoker.sh under the /var/lib/waagent/custom-script/download/1 directory on a scale set VM:
#!/bin/bash
if [ -n "$1" ]; then
mkdir a
echo "extracting archive $1"
tar -xzC ./a -f $1
cd ./a
fi
command=$2" "$3
echo $command
echo "Invoking command: "$command
eval $command
What is the way around this issue?
It seems like the shell execute permission is missing.
I am assuming you are running the bash script from the same folder
I would try
chmod +rx main.sh
You can verify the permissions with
ls -l main.sh
Can you also post the ownership of the main.sh and customScriptInvoker.sh using
ls -la main.sh
ls -la customScriptInvoker.sh
Check if they are owned by different accounts that may not be in the same group. If that is the case, you would also get a permission denied error when trying to execute the main.sh script from inside the other script. You can use the chgrp command to change main.sh to be owned by the same group as the other file, or use chown to give main.sh the same ownership as the other file. It's hard to tell without seeing the permissions and ownership of the files.
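A sketch of aligning the ownership (assumes GNU stat is available; run from the directory that holds both files):
# copy the owning user and group of customScriptInvoker.sh onto main.sh
sudo chown "$(stat -c '%U:%G' customScriptInvoker.sh)" main.sh
# the execute bit can also be set before creating the archive,
# since tar preserves file modes
chmod +x main.sh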
We have a Linux script in our environment which SSHes to remote machines as a common user and copies a script from the base machine to the remote machines through scp.
Script Test_RunFromBaseVM.sh
#!/bin/bash
machines=$1
for machine in $machines
do
ssh -tt -o StrictHostKeyChecking=no ${machine} "mkdir -p -m 700 ~/test"
scp -r bin conf.d ${machine}:~/test
ssh -tt ${machine} "cd ~/test; sudo bash bin/RunFromRemotevm.sh"
done
Script RunFromRemotevm.sh
#!/bin/bash
echo "$(date +"%Y/%m/%d %H:%M:%S")"
Before running the Test_RunFromBaseVM.sh script on the base VM, we run the two commands below.
eval $(ssh-agent)
ssh-add
Executing ./Test_RunFromBaseVM.sh "<list_of_machine_hosts>" gives a permission denied error:
[remote-vm-1] bin/RunFromRemotevm.sh:line 2: /bin/date: Permission denied
Any clue or insight on this error will be of great help.
Thanks.
I believe the problem is the presence of the NOEXEC: tag in the sudoers file for the user (or group) that's executing the "cd ~/test; sudo bash bin/RunFromRemotevm.sh" command. This causes any further execv(), execve() and fexecve() calls to be refused; in this case the refused call is to /bin/date.
The solution is obviously to remove the NOEXEC: tag from the main /etc/sudoers file or from the file under /etc/sudoers.d, wherever it is defined.
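A hypothetical sudoers entry showing the difference (the user name is made up; always edit sudoers with visudo):
# with the NOEXEC: tag, commands run via sudo cannot exec further programs -> /bin/date fails
someuser ALL=(ALL) NOEXEC: ALL
# without the tag, child exec calls work normally
someuser ALL=(ALL) ALL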
I'm writing a bash function to jump into my last edited folder.
In my example, the last edited folder is titled 'daniel'.
The core command of the bash function looks fine.
>>:~$ echo $(ls -d -1dt -- */ | head -n 1)
daniel/
And I can manually cd into the directory.
>>:~$ cd daniel
>>:~/daniel$
But I can't use the bash function to cd into the directory.
>>:~$ cd $(ls -d -1dt -- */ | head -n 1)
bash: cd: daniel/: No such file or directory
Turns out someone added alias ls='ls --color' to the bashrc of this server. My function works once the alias was removed. – Daniel Tan
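Given that resolution, here is a sketch of an alias-proof version of the function (the function name is illustrative; command bypasses aliases and shell functions):
cd_last() {
    # 'command ls' ignores the ls alias, so no ANSI color codes
    # end up embedded in the captured directory name
    cd -- "$(command ls -1dt -- */ | head -n 1)"
}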
This error is usually thrown when you enter a path that does not exist. See -bash: cd: Desktop: No such file or directory.
But the output of $(ls -d -1dt -- */ | head -n 1) is not wrong. Thus the reason must be a difference between sh and bash at that moment.
In my case, I had a docker container that gave this error when I accessed a folder with bash. The container was broken since I had force-closed it after a docker-compose up that did not work. After that, on the existing containers, I could only use sh, not bash. I found this out because of OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: "bash": executable file not found in $PATH": unknown. I guess that bash is loaded later than sh, and that after an early error at the start of the container, only sh gets loaded.
That would fit here, since you are in sh, as can be seen from the >> prompt. Using sh, everything will work as expected. But the expression gets evaluated by bash, which is probably not loaded for whatever reason.
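If the prompt is ambiguous, a generic way to check which shell you are actually in:
ps -p $$ -o comm=   # prints the current shell's process name, e.g. bash, sh, or dash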
In docker, using docker-compose, I also had a similar error saying sh: 1: cd: can't cd to /root/MYPROJECT. That could be solved by mounting the needed volumes in the services using:
services:
  host:
    volumes:
      - ~/MYPROJECT:/MYPROJECT # ~/path/on/host:/path/on/container
See Mount a volume in docker-compose. How is it done? and How to mount a host directory with docker-compose? or the official docs.
I'm facing an issue with creating an init.d service. Following is my run.sh file, which executes completely fine (as root user):
mvn install -DskipTests
mvn exec:java
But when I execute the same file as a service in init.d (service run start), I get
mvn command not found
Following is my start method:
start() {
if [ -f /var/run/$PIDNAME ] && kill -0 $(cat /var/run/$PIDNAME); then
echo 'Service already running' >&2
return 1
fi
echo 'Starting service…' >&2
CMD="$SCRIPT &> \"$LOGFILE\" & echo \$!"
su -c "$CMD" $RUNAS > "$PIDFILE"
echo 'Service started' >&2
}
Link to the complete script which I'm using:
https://gist.githubusercontent.com/naholyr/4275302/raw/9df4ef3f4f2c294c9585f11d1c8593b62bdd52d3/service.sh
The RUNAS value is set to root.
When you run a command using sudo you are effectively running it as the superuser or root.
The reason the root user is not finding your command is likely that the PATH environment variable for root does not include the directory where Maven is located (quite evident from the comments). Hence the command not found error.
Add PATH to your script and make sure it includes /opt/integration/maven/apache-maven-3.3.9/bin. Since the init script won't share the PATH environment variable with the rest of the system (it is run well before $PATH gets updated by .bash_profile), you need to set it directly in your script and make sure Maven is in there; for example, add the line below at the beginning of the init script:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/opt/integration/maven/apache-maven-3.3.9/bin
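With that in place, you can verify that the script will resolve mvn (the Maven path is the one from the comments above; adjust it to your install):
command -v mvn   # should print /opt/integration/maven/apache-maven-3.3.9/bin/mvn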
I have a shell script:
#!/bin/bash
echo "hello world"
ansible zookservers -i /home/ec2-user/kafka_scripts/ansible_rep/inventory -a "/home/ec2-user/kafka_2.11-0.9.0.0/bin/kafka-server-start.sh kafka_2.11-0.9.0.0/config/server.properties" --sudo
My error was:
| FAILED => SSH Error: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
while connecting to 172.30.0.243:22
When I run the ansible command directly, it executes,
but when I put it inside a shell script I get the above error.
My shell script has chmod 777 permissions, so the problem is not with execute permission on the script.
I found something interesting:
when I run the script from any other place I get the error, but when I run it in the ansible directory it executes.
Later, when I run it from any other directory, it does not throw any error.
So the problem is with the initial public key authentication.
I found something interesting -- when I run the script from any other place I'm getting an error, but when I run it in the ansible directory it is executed.
The problem most likely is the location of your ansible.cfg. Ansible will use the config from one of these locations (from the docs):
ANSIBLE_CONFIG (an environment variable)
ansible.cfg (in the current directory)
.ansible.cfg (in the home directory)
/etc/ansible/ansible.cfg
First match wins.
So it uses the config from the "ansible directory" if called from there. If called from any other location, there is no ansible.cfg in that directory. Since that config is where you have stored your username and private key location, the authentication fails.
The best solution seems to be to utilize the environment variable ANSIBLE_CONFIG. Just store the path to your ansible.cfg in there. I think it is /home/ec2-user/kafka_scripts/ansible_rep/ansible.cfg, right?
You can set that variable in your script:
ANSIBLE_CONFIG=/home/ec2-user/kafka_scripts/ansible_rep/ansible.cfg ansible zookservers -i /home/ec2-user/kafka_scripts/ansible_rep/inventory -a "/home/ec2-user/kafka_2.11-0.9.0.0/bin/kafka-server-start.sh kafka_2.11-0.9.0.0/config/server.properties" --sudo
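To confirm which config file ansible actually picks up, ansible --version prints a config file = ... line; running it from both directories should show the difference:
cd /home/ec2-user/kafka_scripts/ansible_rep && ansible --version
# expect: config file = /home/ec2-user/kafka_scripts/ansible_rep/ansible.cfg
cd /tmp && ansible --version
# expect: config file = /etc/ansible/ansible.cfg (or None if no config exists)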