I am trying to deploy to an Azure VM Scale Set using "Run Custom Script VM extension on VM scale set" in an Azure DevOps release pipeline. I have a shell script which executes post-deployment tasks.
In the release pipeline I am using a storage account to archive artifacts, and I have also unchecked "Skip Archiving custom scripts". In the VMSS deployment task I am getting the following error:
2020-03-06T22:59:44.7864691Z ##[error]Failed to install VM custom script extension on VMSS.
Error: VM has reported a failure when processing extension 'AzureVmssDeploymentTask'.
Error message: "Enable failed: failed to execute command: command terminated with exit status=126
[stdout]
extracting archive cs.tar.gz
Invoking command: ./"main.sh"
[stderr]
./customScriptInvoker.sh: line 12: ./main.sh: Permission denied
I found customScriptInvoker.sh under the /var/lib/waagent/custom-script/download/1 directory on a scale set VM:
#!/bin/bash
if [ -n "$1" ]; then
  mkdir a
  echo "extracting archive $1"
  tar -xzC ./a -f $1
  cd ./a
fi
command=$2" "$3
echo $command
echo "Invoking command: "$command
eval $command
What is the way around this issue?
It seems like the shell execute permission is missing on main.sh.
I am assuming you are running the bash script from the same folder. I would try:
chmod +rx main.sh
You can verify the permissions with:
ls -l main.sh
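Note that the archive is re-extracted on every deployment, so a chmod run by hand on the VM will not stick. A minimal workaround sketch (assuming the command you configure in the task is what ends up as $2/$3 in customScriptInvoker.sh shown above) is to make the configured command itself bypass or restore the execute bit:
# hypothetical command to configure in the task instead of ./main.sh
bash ./main.sh
# or restore the execute bit before invoking it
chmod +x ./main.sh && ./main.sh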
Can you also post the ownership of main.sh and customScriptInvoker.sh using:
ls -la main.sh
ls -la customScriptInvoker.sh
Check if they are owned by different accounts that are not in the same group. If that is the case, you would also get a permission denied error when trying to execute main.sh from inside the other script. You can then use the chgrp command to give main.sh the same group as the other file, or use chown to give main.sh the same ownership as the other file. It's hard to tell without seeing the permissions and ownership of the files.
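For example, a minimal sketch (the root:root owner and group below are placeholders, not taken from your output):
# align the group of main.sh with the invoking script
chgrp root main.sh
# or align owner and group in one step
chown root:root main.sh
# verify the result
ls -la main.sh customScriptInvoker.sh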
I would like to start an Ansible playbook through a service.
The problem is that an exception occurs if I try to start a specific playbook with the help of a .service file.
Normal execution via the command line doesn't throw any exceptions.
The exact error is as follows:
ESTABLISH LOCAL CONNECTION FOR USER: build
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /tmp"&& mkdir "echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270" && echo ansible-tmp-1628159196.970389-90181-42979741793270="echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270" ) && sleep 0'
fatal: [Ares]: UNREACHABLE! => changed=false
msg: 'Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p "echo /tmp"&& mkdir "echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270" && echo ansible-tmp-1628159196.970389-90181-42979741793270="echo /tmp/ansible-tmp-1628159196.970389-90181-42979741793270" ), exited with result 127'
unreachable: true
I've tried the following:
Authentication or permission failure, did not have permissions on the remote directory
https://github.com/ansible/ansible/issues/43830
Generally searching for an answer, but all I could find was to change remote_tmp to /tmp
Changing the permissions to 777 for the /tmp folder
Changing the user for the service to both build and root
Changing the group for the service to both build and root
Current configuration:
The remote_tmp is set to /tmp
The permissions for /tmp are: drwxrwxrwt. 38 root root
This is the service i am starting:
[Unit]
Description=Running vmware in a service
After=network.target
[Service]
User=build
Group=build
WorkingDirectory=/home/build/dev/sol_project_overview/ansible-interface
Environment="PATH=/home/build/dev/sol_project_overview/ansible-interface/venv/bin"
ExecStart=/home/build/dev/sol_project_overview/ansible-interface/venv/bin/ansible-playbook ./playbooks/get_vm_data_playbook.yml --vault-password-file password.txt -vvv
[Install]
WantedBy=multi-user.target
The exact ansible task that throws this exception:
- name: Write Disks Data to file
  template:
    src: template.j2
    dest: /home/build/dev/sol_project_overview/tmp/vm_data
  delegate_to: localhost
  run_once: yes
Normally I would run a Python script via this service file, which would call Ansible when special conditions are met, but the same error occurs with a Python script started by the service.
All of this makes me think that the problem is with the .service file... I just don't know what.
Any help is appreciated.
EDIT: SELinux is disabled
So I found the problem:
When debugging with -vvvv (4x verbose) you get an even more precise error message:
"msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p "echo /tmp/.ansible/tmp"&& mkdir "echo /tmp/.ansible/tmp/ansible-tmp-1629205650.8804808-397364-56399467035196" && echo ansible-tmp-1629205650.8804808-397364-56399467035196="echo /tmp/.ansible/tmp/ansible-tmp-1629205650.8804808-397364-56399467035196" ), exited with result 127**, stderr output: /bin/sh: mkdir: command not found\n",**
The last part contains the key information: stderr output: /bin/sh: mkdir: command not found\n
So after googling I realized the problem was with the "PATH" variable I am setting in my .service file.
This was the problem:
Environment="PATH=/home/build/dev/sol_project_overview/ansible-interface/venv/bin"
It couldn't find mkdir because the "bin" directory where mkdir is located wasn't included in the "PATH" variable.
What was left to do was to set the PATH variable of the service correctly. To do so, I took the PATH variable from the corresponding virtual environment while it was active.
Lesson: if you are working with virtual environments and want to run services using them, set the service's PATH variable to that of the activated virtual environment.
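A minimal sketch of the corrected unit file (the exact PATH value is an assumption; take it from echo $PATH inside the activated venv):
[Service]
User=build
Group=build
WorkingDirectory=/home/build/dev/sol_project_overview/ansible-interface
# keep the venv bin first, but also include the system bin directories so
# /bin/sh can find mkdir and the other coreutils
Environment="PATH=/home/build/dev/sol_project_overview/ansible-interface/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
ExecStart=/home/build/dev/sol_project_overview/ansible-interface/venv/bin/ansible-playbook ./playbooks/get_vm_data_playbook.yml --vault-password-file password.txt -vvv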
We have a Linux script in our environment which SSHes to remote machines with a common user and copies a script from the base machine to the remote machine through scp.
Script Test_RunFromBaseVM.sh
#!/bin/bash
machines=$1
for machine in $machines
do
  ssh -tt -o StrictHostKeyChecking=no ${machine} "mkdir -p -m 700 ~/test"
  scp -r bin conf.d ${machine}:~/test
  ssh -tt ${machine} "cd ~/test; sudo bash bin/RunFromRemotevm.sh"
done
Script RunFromRemotevm.sh
#!/bin/bash
echo "$(date +"%Y/%m/%d %H:%M:%S")"
Before running the Test_RunFromBaseVM.sh script on the base VM, we run the two commands below:
eval $(ssh-agent)
ssh-add
Executing ./Test_RunFromBaseVM.sh "<list_of_machine_hosts>" gives a permission denied error:
[remote-vm-1] bin/RunFromRemotevm.sh:line 2: /bin/date: Permission denied
Any clues or insights on this error will be of great help.
Thanks.
I believe the problem is the presence of the NOEXEC: tag in the sudoers file, corresponding to the user (or group) that's executing the "cd ~/test; sudo bash bin/RunFromRemotevm.sh" command. This causes any further execv(), execve() and fexecve() calls to be refused, in this case it's /bin/date.
The solution is to remove the NOEXEC: tag from the main /etc/sudoers file or from whichever file under /etc/sudoers.d defines it.
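For illustration, a hypothetical entry edited via visudo (the user name and command list are placeholders):
# before: the NOEXEC: tag blocks any exec*() calls made by the launched command
someuser ALL=(ALL) NOEXEC: ALL
# after: drop the tag (or use EXEC:) so bin/RunFromRemotevm.sh can run /bin/date
someuser ALL=(ALL) ALL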
I'm facing an issue with creating an init.d service. Following is my run.sh file, which executes completely fine (as the root user):
mvn install -DskipTests
mvn exec:java
But when I execute the same file as a service in init.d (service run start), I get:
mvn command not found
Following is my start method:
start() {
  if [ -f /var/run/$PIDNAME ] && kill -0 $(cat /var/run/$PIDNAME); then
    echo 'Service already running' >&2
    return 1
  fi
  echo 'Starting service…' >&2
  CMD="$SCRIPT &> \"$LOGFILE\" & echo \$!"
  su -c "$CMD" $RUNAS > "$PIDFILE"
  echo 'Service started' >&2
}
Link to the complete script which I'm using:
https://gist.githubusercontent.com/naholyr/4275302/raw/9df4ef3f4f2c294c9585f11d1c8593b62bdd52d3/service.sh
The RUNAS value is set to root.
When you run a command using sudo you are effectively running it as the superuser or root.
The reason that the root user is not finding your command is likely that the PATH environment variable for root does not include the directory where Maven is located (as is quite evident from the comments). Hence the command not found error.
Add PATH to your script and make sure it includes /opt/integration/maven/apache-maven-3.3.9/bin. Since the init script won't share the PATH environment variable with the rest of the system (it runs well before $PATH is updated in .bash_profile), you need to set it directly in your script and make sure Maven is in there. For example, add the line below at the beginning of the init script:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/opt/integration/maven/apache-maven-3.3.9/bin
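A minimal sketch of how the top of the init script could look (the Maven path is the one assumed above; adjust it to your installation):
#!/bin/sh
# set an explicit PATH so commands started via su -c can find mvn
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/opt/integration/maven/apache-maven-3.3.9/bin
export PATH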
My shell script looks like this:
#!/bin/bash
USER=$1
sudo rm -rf /home/$USER/system/logs/*
exit 0
It's checked into CVS in a shell folder. Jenkins is configured to execute it on a Linux machine via a job with an 'Execute Shell' build step:
bash -ex shell/clear-logs.sh myuser
But Jenkins is wrapping the whole sudo line in single quotes, which results in my log files not being deleted (although the Jenkins job passes successfully):
[workspace] $ /bin/sh -xe /tmp/hudson7785398405733321556.sh
+ bash -ex shell/clear-logs.sh myuser
+ USER=myuser
+ sudo rm -rf '/home/myuser/system/logs/*'
+ exit 0
Any ideas why Jenkins is doing this? If I call the script from the Jenkins workspace location as the root user, then it works fine.
EDIT:
I have the same shell script, in different CVS modules, being executed by Jenkins on the same Linux server. I have created a new job, either as freestyle or by copying an existing job where this works, but it makes no difference.
Okay, I seem to have resolved this by adding the 'jenkins' user to the 'myuser' group and restarting the Jenkins service. If the logs directory is empty, then the Jenkins console output does report the path in single quotes, since no files match the glob. But run the job a second time when there are files, and there are no single quotes and the files are correctly deleted.
Jenkins is not doing anything with your quotation marks, such as changing double to single - you are seeing the output of set -x. Try this in your shell:
set -x
ls "some string with spaces"
Output will be something like:
+ ls --color=auto 'some string with spaces'
bash is just showing you debug output of its interpretation and tokenization of your command.
Adapt the permissions of /home/$USER/... At first I got the following in the console output:
+ USER=geri
+ rm -rf '/home/geri/so-30802898/*'
rm: cannot remove ‘/home/geri/so-30802898/*’: Permission denied
Build step 'Execute shell' marked build as failure
After adapting the permissions the build/deletion succeeded.
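One possible way to adapt them, in line with the edit above (a sketch; the myuser group and logs path are taken from the question, adjust to your setup):
# let the jenkins user write to the logs directory via group membership
sudo usermod -aG myuser jenkins
sudo chmod -R g+rwX /home/myuser/system/logs
# restart Jenkins so the new group membership is picked up
sudo service jenkins restart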
I want to upload the content of one directory to my Amazon EC2 instance with rsync:
rsync -r -t -v --progress -z -s -e "ssh -i /home/mostafa/keyamazon.pem" /home/mostafa/splitfiles ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:~/splitfiles
but I receive the following error message:
sending incremental file list
rsync: link_stat "/home/mostafa/splitfiles" failed: No such file or directory (2)
rsync: change_dir#3 "/home/ubuntu//~" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(712) [Receiver=3.1.0]
and if I do a dry run with grsync, it works correctly
In rsync the trailing / is very important. Also, rsync usually defaults to ssh when one of the destinations contains a host.
So if you want to preserve modification times you can still get rid of the -e and -s options.
Your command could then be written as:
rsync -rtvz --progress /home/mostafa/splitfiles/ ubuntu@ec2-64-274-161-87.compute-1.amazonaws.com:splitfiles/
Notice the trailing /'s. This works provided that you have ssh configured to read the private key from your home directory.
On Ubuntu you can add the key to the keychain by running:
ssh-add [key-file]
And this will save you having to specify the keyfile every time you ssh into the AWS machine.
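For example, with the key path from the question (assuming an agent is running):
eval "$(ssh-agent)"
ssh-add /home/mostafa/keyamazon.pem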
The errors seem to say that on the local machine you don't have a source directory and the destination doesn't exist.
I completed this task with FileZilla instead; it's easier to use.
You are in your home directory (~); if you cd ../ up towards root, you will be able to run the command.