cygwin ssh batch script for windows 2008 - cygwin

I configured Cygwin on Windows Server 2008, and now we need to implement automation.
I am writing a batch script to add a user to the cygwin\etc\passwd file using the following command:
mkpasswd -l -u %username% -p /home >> /etc/passwd
Please help me figure out how to execute the following commands in a batch file:
echo off
C:
chdir C:\cygwin\bin
bash --login -i
mkpasswd -l -u %username% -p /home >> /etc/passwd
It's not working

You're mixing Windows and Unix in your Windows batch file. The batch file runs as a Windows command, as does the mkpasswd command in it. Windows has no concept of /etc/passwd and will throw an error, probably something like:
D:\cygwin\bin>mkpasswd -l -u testusr -p /home >> /etc/passwd
The system cannot find the path specified.
Given what you want to do with mkpasswd, I'd suggest you find a way to run your automation from within Cygwin, perhaps by setting up a cron job.
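If the automation has to stay in a batch file, one option (a minimal sketch, assuming Cygwin is installed in C:\cygwin as in the question) is to hand the whole command to Cygwin's bash with -c, so that /etc/passwd and the >> redirection are interpreted by bash rather than by cmd.exe:
@echo off
REM Sketch: run the Unix side of the command inside Cygwin's bash.
REM Assumes Cygwin lives in C:\cygwin; adjust the path otherwise.
C:\cygwin\bin\bash --login -c "mkpasswd -l -u '%username%' -p /home >> /etc/passwd"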

Related

Running linux commands inside bash script throws permission denied error

We have a Linux script in our environment which does ssh to a remote machine with a common user and copies a script from the base machine to the remote machine through scp.
Script Test_RunFromBaseVM.sh
#!/bin/bash
machines=$1
for machine in $machines
do
ssh -tt -o StrictHostKeyChecking=no ${machine} "mkdir -p -m 700 ~/test"
scp -r bin conf.d ${machine}:~/test
ssh -tt ${machine} "cd ~/test; sudo bash bin/RunFromRemotevm.sh"
done
Script RunFromRemotevm.sh
#!/bin/bash
echo "$(date +"%Y/%m/%d %H:%M:%S")"
Before running the Test_RunFromBaseVM.sh script on the base VM, we run the two commands below:
eval $(ssh-agent)
ssh-add
Executing ./Test_RunFromBaseVM.sh "<list_of_machine_hosts>" gives a permission denied error:
[remote-vm-1] bin/RunFromRemotevm.sh:line 2: /bin/date: Permission denied
Any clue or insight on this error will be of great help.
Thanks.
I believe the problem is the presence of the NOEXEC: tag in the sudoers file, on the entry for the user (or group) that's executing the "cd ~/test; sudo bash bin/RunFromRemotevm.sh" command. This causes any further execv(), execve() and fexecve() calls to be refused; in this case it's the call to /bin/date.
The solution is to remove the NOEXEC: tag from the main /etc/sudoers file or from the file under /etc/sudoers.d, wherever it is defined.
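For illustration, a hypothetical sudoers entry with and without the tag could look like this (the user name and command are placeholders, not taken from the question):
# entry that would trigger the error: NOEXEC: stops commands run via sudo from exec'ing further programs
someuser ALL=(ALL) NOEXEC: /bin/bash
# corrected entry: drop the NOEXEC: tag (or write EXEC: explicitly)
someuser ALL=(ALL) /bin/bash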

How to sudo run a local script over ssh

I am trying to run a local script with sudo over ssh,
ssh $HOST < script.sh
and I tried
ssh -t $HOST "sudo -s && bash" < script.sh
Actually, I searched a lot on Google and found some similar questions; however, I haven't found a solution that can run a local script with sudo.
Reading the error message of
$ ssh -t $HOST "sudo -s && bash" < script.sh
Pseudo-terminal will not be allocated because stdin is not a terminal.
makes it pretty clear what's going wrong here.
You can't use the ssh parameter -t (which sudo needs in order to ask for a password) while redirecting your script to the stdin of bash in your remote session.
If it is acceptable for you, you could transfer the local script via scp to your remote machine and then execute the script without the need for I/O redirection:
scp script.sh $HOST:/tmp/ && ssh -t $HOST "sudo -s bash /tmp/script.sh"
Another way to fix your issue is to use sudo in non-interactive mode (-n), but for this you need to set NOPASSWD in the remote machine's sudoers file for the executing user. Then you can use:
ssh $HOST "sudo -n -s bash" < script.sh
To make Edward Itrich's answer more scalable and geared towards frequent use, you can set up a system where you only run a one line script that can be quickly ported to any host, file or command in the following manner:
Create a script in your Scripts directory, if you have one, named whatever you want the script to be called (I use this pattern frequently: change one word for the script name, then create the file, set its permissions and open it for editing):
newscript="runlocalscriptonremotehost.sh"
touch $newscript && chmod +x $newscript && nano $newscript
In nano, fill out the script as follows, placing the directory and name of the script you want to run remotely in the variable lines of runlocalscriptonremotehost.sh (you only need to edit lines 1-3):
HOSTtoCONTROL="sudoadmin#192.168.0.254"
PATHtoSCRIPT="/home/username/Scripts/"
SCRIPTname="scripttorunremotely.sh"
scp $PATHtoSCRIPT$SCRIPTname $HOSTtoCONTROL:/tmp/ && ssh -t $HOSTtoCONTROL "sudo -s bash /tmp/$SCRIPTname"
Then just run:
sh ./runlocalscriptonremotehost.sh
Keep runlocalscriptonremotehost.sh open in a tabbed text editor for quick updating, create a bash alias for the script, and you have yourself an app-ified version of this frequently used operation.
First of all, divide your objective into two parts: 1) ssh to the host, and 2) run the command you want as sudo. Once you are certain that you can 1) access the host and 2) have sudo privileges, you can combine the two commands with &&. What x_cmd && y_cmd does is run y_cmd only after x_cmd has exited successfully.
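As a rough sketch of that approach (user@remotehost and the script path are placeholders):
# 1) confirm you can reach the host at all
ssh user@remotehost true
# 2) confirm you have sudo rights there (-v only validates/refreshes them)
ssh -t user@remotehost "sudo -v"
# once both work, chain the real steps with && so each runs only if the previous one succeeded
scp script.sh user@remotehost:/tmp/ && ssh -t user@remotehost "sudo bash /tmp/script.sh"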

cd to directory and su to particular user on remote server in script

I have some tasks to do on a remote Ubuntu CLI-only server in our offices every 2 weeks. I usually type the commands one by one, but I am trying to find a way (write a script maybe?) to decrease the time I spend repeating those first steps.
Here is what I do:
ssh my_username@my_local_server
# asks for my_username password
cd /path/to/particular/folder
su particular_user_on_local_server
# asks for particular_user_on_local_server password
And then I can do my tasks (run some Ruby script on Rails applications, copy/remove files, restart services, etc.)
I am trying to find a way to do this in a one-step script/command:
"ssh connect then cd to directory then su to this user"
I tried to use the following:
ssh username@server 'cd /some/path/to/folder ; su other_user'
# => does not keep my connection open to the server, just executes my `cd`
# and then tells me `su: must be run from terminal`
ssh username@server 'cd /some/path/to/folder ; bash ; su other_user'
# => keeps my connection open to the server but doesn't switch to user
# and I don't see the usual `username:~/current/folder` prefix in the CLI
Is there a way to open a terminal (keep the connection) on a remote server via ssh and change directory + switch to a particular user in an automated way? (To make things harder, I'm using Yakuake.)
You can force allocation of a pseudo-terminal with -t, change to the desired directory and then replace the shell with one where you are the desired user:
ssh -t username#server 'cd /some/path/to/folder && exec bash -c "su other_user"'
sudo -H keeps the current working directory, so you could do:
ssh -t login_user@host.com 'cd /path/to/dir/; sudo -H -u other_user bash'
The -t parameter of ssh is needed, otherwise sudo won't be able to ask you for your password.

Installation of Cron in cygwin

When I run the following command in cygwin,
$ cygrunsrv -I cron -p C:\cygwin64\bin --args -n
I get the following error
cygrunsrv: Given path doesn't point to a valid executable
Why am I getting this error?
You only gave a folder, not a path to the executable. Besides this, I wouldn't recommend using Windows paths in Cygwin, as they can cause errors. You should write /cygdrive/c/cygwin64/bin/something instead of C:\cygwin64\bin\something.exe
Perhaps you are looking for an installation guide, and you would like to do something like this:
Install cron as a Windows service, using cygrunsrv:
cygrunsrv -I cron -p /usr/sbin/cron -a -D
net start cron
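Once the cron service is running, jobs are added the usual way with crontab -e; a minimal sketch of an entry (the script path is a placeholder):
# run a maintenance script every day at 02:00
0 2 * * * /home/username/bin/myscript.sh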

bash & s3cmd not working properly

Hi, I have a shell script containing an s3cmd command on Ubuntu 12.04 LTS.
I configured cron for this shell script; it works fine for the local part but doesn't push the file to S3. But when I run the shell script manually, it pushes the file to S3 without any error. I checked the log and found nothing for this. Here is my shell script.
#!/bin/bash
User="abc"
datab="abc_xyz"
pass="abc#123"
Host="abc1db.instance.com"
FILE="abc_rds`date +%d_%b_%Y`.tar.gz"
S3_BKP_PATH="s3://abc/db/"
cd /abc/xyz/scripts/
mysqldump -u $User $datab -h $Host -p$pass | gzip -c > $FILE | tee -a /abc/xyz/logs/app-bkp.log
s3cmd --recursive put /abc/xyz/scripts/$FILE $S3_BKP_PATH | tee -a /abc/xyz/logs/app-bkp.log
mv /abc/xyz/scripts/$FILE /abc/xyz/backup2015/Database/
#END
This is really weird. Any suggestion would be a great help.
Check whether the user configured in crontab has the correct permissions and keys in its environment.
I am guessing the keys are configured in an env file, as they are not here in the script.
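Since cron runs with a very minimal environment, one way to rule this out (a sketch; the paths and the script name are assumptions) is to point s3cmd at its config file explicitly and set HOME in the crontab:
# in the script: pass the s3cmd config file explicitly
s3cmd -c /home/ubuntu/.s3cfg --recursive put /abc/xyz/scripts/$FILE $S3_BKP_PATH | tee -a /abc/xyz/logs/app-bkp.log
# in the crontab: make sure HOME points at the user that owns ~/.s3cfg
HOME=/home/ubuntu
30 1 * * * /abc/xyz/scripts/backup.sh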
