SSH command variable not being seen - linux

I am trying to pass a variable through a remote SSH command. I want to rename the data file using the station variable. The SSH command is run from a Windows PC to an Ubuntu PC. When the script is run from Python on the Windows PC it makes the connection but won't rename the file. Can someone suggest what I am doing wrong?
import paramiko

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname="192.168.1.xx", username="xx", password="xxxx")
station = "NAA"
stdin, stdout, stderr = ssh_client.exec_command(
    "mv /home/pi/vlfrx-tools/data/station.data \
    /home/pi/vlfrx-tools/data/$station.dat")

station is a Python variable, not a shell variable - the substitution has to happen in Python, not via the shell's $. Use
string formatting ("{}".format(station)):
cmd = "mv /home/pi/vlfrx-tools/data/station.data \
/home/pi/vlfrx-tools/data/{}.dat".format(station)
print("CMD:", cmd)
stdin, stdout, stderr = ssh_client.exec_command(cmd)
or an f-string (f"{station}"):
cmd = f"mv /home/pi/vlfrx-tools/data/station.data \
/home/pi/vlfrx-tools/data/{station}.dat"
print("CMD:", cmd)
stdin, stdout, stderr = ssh_client.exec_command(cmd)
or the old method with % and %s ("%s" % station):
cmd = "mv /home/pi/vlfrx-tools/data/station.data \
/home/pi/vlfrx-tools/data/%s.dat" % station
print("CMD:", cmd)
stdin, stdout, stderr = ssh_client.exec_command(cmd)
See more at PyFormat.info.
EDIT:
It is not tested, but you could probably use $station if you set a shell variable with export - though you still need Python to format the string:
cmd = "export station={} ; mv ... .../$station.dat".format(station)
I'm not sure, but this shell variable may be temporary, so you may need to set it in every exec_command() that needs it.
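Whichever formatting style you choose, it is safer to quote the interpolated value before splicing it into the remote shell command. A minimal sketch of the same mv, using the stdlib's shlex.quote (the paths are the ones from the question):

```python
import shlex

station = "NAA"

# shlex.quote() protects the interpolated value in case it ever
# contains spaces or shell metacharacters.
cmd = ("mv /home/pi/vlfrx-tools/data/station.data "
       "/home/pi/vlfrx-tools/data/{}.dat").format(shlex.quote(station))
print("CMD:", cmd)
# stdin, stdout, stderr = ssh_client.exec_command(cmd)  # as in the question
```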


How to execute multiple commands in the command line with Python 3.x

Thanks, everyone. I am writing a script to execute multiple commands in the command line. It is one part of my whole script.
I have checked many answers, but none of them solved my problem. Some of them are too old to use.
My commands are like this:
cd C:/Users/Bruce/Desktop/test
set I_MPI_ROOT=C:\Program Files\firemodels\FDS6\bin\mpi
set PATH=%I_MPI_ROOT%;%PATH%
fds_local -o 1 -p 1 test.fds
python test.py
I tried to use subprocess.run or os.system, etc., but they do not work and I don't know what happened. Here is an example I have used:
import subprocess

file_path = "C:/Users/Bruce/Desktop/test"
file_name = "test.fds"
cmd1 = 'cd ' + file_path
cmd2 = "set I_MPI_ROOT=C:/Program Files/firemodels/FDS6/bin/mpi"
cmd3 = "set PATH=%I_MPI_ROOT%;%PATH%"
nMPI = '-p {}'.format(1)
nOpenMP = '-o {}'.format(1)
cmd4 = "fds_local {} {} ".format(nMPI, nOpenMP) + file_name
cmd = '{} && {} && {} && {}'.format(cmd1, cmd2, cmd3, cmd4)
subprocess.Popen(cmd, shell=True)
I am not very familiar with subprocess, but I have worked on this problem for a week and it is driving me crazy. Any suggestions?
When shell=False, cmd needs to be a list of strings - the command as you would type it in the shell, split on blanks. E.g.
"ls -l /var/www" should be cmd = ['ls', '-l', '/var/www']
That said, cd is better done with os.chdir, and set is better done by passing an environ dictionary into the subprocess call. Multi-line sequences are better put into a shell or batch script (which can take parameters) so you do not have to assemble them in Python.
Here is an example. If a command is not on the OS's PATH, you can fully qualify its path:
from subprocess import Popen
cmd = ['cd', r'C:\Program Files (x86)\Notepad++', '&&', 'notepad', 'LICENSE',
       '&&', r'D:\Program\Tools\Putty.exe', '-v']
d = Popen(cmd, shell=True)
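Following that advice, the cd and set lines can be replaced by the cwd= and env= arguments of subprocess.run. A minimal sketch - the I_MPI_ROOT value is taken from the question, while a small Python child process stands in for fds_local so the snippet runs anywhere:

```python
import os
import subprocess
import sys

# env= replaces the two 'set' lines: build the environment in Python.
env = os.environ.copy()
env["I_MPI_ROOT"] = r"C:\Program Files\firemodels\FDS6\bin\mpi"  # value from the question
env["PATH"] = env["I_MPI_ROOT"] + os.pathsep + env["PATH"]

# cwd= replaces 'cd'. A portable stand-in for fds_local: a child
# that simply reports the variable it inherited.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['I_MPI_ROOT'])"],
    cwd=os.getcwd(),   # would be C:/Users/Bruce/Desktop/test in the question
    env=env,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())
```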

Why does python's Popen fail to pass environment variables on Mac OS X?

I am writing a program that needs to spawn a new terminal window and launch a server in this new terminal window (with environment variables passed to the child process).
I have been able to achieve this on Windows 10 and Linux without much trouble, but on Mac OS X (Big Sur) the environment variables are not being passed to the child process. Here is an example code snippet capturing the behaviour I want to achieve:
#!/usr/bin/python3
import subprocess
import os
command = "bash -c 'export'"
env = os.environ.copy()
env["MYVAR"] = "VAL"
process = subprocess.Popen(['osascript', '-e', f"tell application \"Terminal\" to do script \"{command}\""], env=env)
Unfortunately, MYVAR is not present in the exported environment variables.
Any ideas if I am doing something wrong here?
Is this a bug in python's standard library ('subprocess' module)?
edit - thank you Ben Paterson (previously my example code had a bug) - I have updated the code example but I still have the same issue.
edit - I have narrowed this down further. subprocess.Popen does what it is supposed to do with environment variables when I do:
command = "bash -c 'export > c.txt'"
process = subprocess.Popen(shlex.split(command, posix=1), env=env)
But when I wrap the command with osascript -e ... (to spawn it in a new terminal window), the environment variable MYVAR does not appear in the c.txt file.
command = "bash -c 'export > c.txt'"
process = subprocess.Popen(['osascript', '-e', f"tell application \"Terminal\" to do script \"{command}\""], env=env)
dict.update returns None, so the OP's original code was equivalent to passing env=None to subprocess.Popen. Write instead:
env = os.environ.copy()
env["MYVAR"] = "VAL"
subprocess.Popen(..., env=env)
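The pitfall is easy to reproduce: dict.update mutates the dict in place and returns None, so it must not be chained into the env= argument. A minimal sketch:

```python
import os
import subprocess
import sys

# WRONG: update() returns None, so chaining it passes env=None and
# the modified copy of the environment is silently discarded.
assert os.environ.copy().update({"MYVAR": "VAL"}) is None

# RIGHT: mutate the copy first, then pass it.
env = os.environ.copy()
env["MYVAR"] = "VAL"
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('MYVAR'))"],
    env=env, capture_output=True, text=True,
).stdout
print(out.strip())  # the child sees VAL
```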

Why does my subprocess.call(command) code write nothing to a txt output file?

I am trying to use the subprocess module of Python 3 to call a command (i.e. netstat -ano > output.txt), but when I run the script the output file gets created, yet nothing is written to it - in other words, it's just blank.
I've tried looking into the subprocess module API to see how the subprocess.call() method works, and searching Google for a solution. I tried the subprocess.check_output() method, but it printed an unformatted string rather than the column format that entering netstat -ano into the Windows command prompt usually gives.
This is my current code:
import subprocess as sp
t = open('output.txt', 'w')
command = 'netstat -ano > output.txt'
cmd = command.split(' ')
sp.call(cmd) # sp.call(['netstat', '-ano', '>', 'output.txt'])
t.close()
I thought it was maybe because I didn't use the write() method. But when I changed my code to be
t.write(sp.call(cmd))
I would get the error that the write() method expects a string input, but received an int.
I expected the output to give me what I would normally see if I were to open command prompt (in Windows 10) and type netstat -ano > output.txt, which would normally generate a file called "output.txt" and have the output of my netstat command.
However when I run that command in my current script, it creates the 'output.txt' file, but there's nothing written in it.
The > redirection is shell syntax; without shell=True it is passed to netstat as a literal argument, so nothing ever reaches the file. Since you already open output.txt yourself, pass the file object as stdout and drop the redirection from the command:
import subprocess as sp

t = open('output.txt', 'w')
sp.call(['netstat', '-ano'], stdout=t)  # stdout= does the redirecting
t.close()
As for t.write(sp.call(cmd)): sp.call() returns the process's exit status (an int), not its output, which is why write() complained.
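Alternatively the whole string, including the >, can be handed to a shell with shell=True, since the redirection operator is interpreted by the shell, not by netstat. A sketch with echo standing in for netstat -ano so it stays portable:

```python
import subprocess

# The '>' is shell syntax, so the command must run through a shell.
# 'echo' stands in for 'netstat -ano' in this portable sketch.
subprocess.call("echo demo-output > output.txt", shell=True)

with open("output.txt") as f:
    print(f.read().strip())
```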

remote PowerShell scripts on Windows not running through Python script on Linux

I have a Python script written using paramiko and pysphere. This script is on a Linux box. I have some PowerShell scripts on a Windows machine which I have to run one after the other (each after the previous one ends, obviously), but through my Python script they are not running on the Windows machine. Kindly help.
PS: I have to run the Python script from Linux and the PowerShell scripts on Windows.
Here is a snippet of code for running powershell scripts:
target_vm1 = connect_Esxi_Server(return_list[0])
print "Again connected to vm:" + return_list[0]
target_vm1.login_in_guest(vmUser, vmPass)
list_scripts = target_vm1.list_files(VM_SCRIPT_LOCATION)
for f in list_scripts:
    size = f['size']
    if size != 0:
        paths = f['path']
        print "script running is :", paths
        path_l = os.path.join(VM_SCRIPT_LOCATION, paths)
        print path_l
        run_script = subprocess.Popen(
            [r'c:\Windows\system32\WindowsPowerShell\v1.0\powershell.exe', path_l],
            shell=True)
        result = run_script.wait()
        print "result is:", result
I doubt whether subprocess will work here.
Please note that the prints shown above give the correct script to run. There are many PowerShell scripts inside the folder, so I am looping through them and running each one.
Any help would be appreciated - this thing is driving me crazy.
Cheers,
NJ
I run powershell commands directly using paramiko:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('10.10.0.2', username='vipul', password='password')
cmd = "powershell -InputFormat none -OutputFormat text echo Hello"
stdin, stdout, stderr = ssh.exec_command(cmd)
print stdout.readlines()
Here 10.10.0.2 is my Windows machine, using a Cygwin sshd server for SSH.
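For whole .ps1 files rather than one-liners, the same approach works with PowerShell's -File switch. A sketch of building and looping over the commands - the script paths here are hypothetical, and the commented-out exec_command/recv_exit_status calls are what would run each script and wait for it to finish before starting the next:

```python
def build_powershell_cmd(script_path):
    """Build a remote command line that runs one PowerShell script."""
    return ('powershell -InputFormat none -ExecutionPolicy Bypass '
            '-File "{}"'.format(script_path))

# Hypothetical script folder, mirroring VM_SCRIPT_LOCATION in the question.
scripts = [r"C:\scripts\setup.ps1", r"C:\scripts\deploy.ps1"]
for path in scripts:
    cmd = build_powershell_cmd(path)
    print("running:", cmd)
    # stdin, stdout, stderr = ssh.exec_command(cmd)
    # stdout.channel.recv_exit_status()  # block until this script finishes
```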

/usr/bin/env questions regarding shebang line peculiarities

Questions:
What does the kernel do if you stick a shell-script into the shebang line?
How does the Kernel know which interpreter to launch?
Explanation:
I recently wanted to write a wrapper around /usr/bin/env because my CGI Environment does not allow me to set the PATH variable, except globally (which of course sucks!).
So I thought, "OK. Let's set PREPENDPATH and set PATH in a wrapper around env.". The resulting script (here called env.1) looked like this:
#!/bin/bash
/usr/bin/env PATH=$PREPENDPATH:$PATH $*
which looks like it should work. I checked how they both react after setting PREPENDPATH:
$ which /usr/bin/env python
/usr/bin/env
/usr/bin/python
$ which /usr/bin/env.1 python
/usr/bin/env
/home/pi/prepend/bin/python
Looks absolutely perfect! So far, so good. But look what happens to "Hello World!":
# Shebang is #!/usr/bin/env python
$ test-env.py
Hello World!
# Shebang is #!/usr/bin/env.1 python
$ test-env.1.py
Warning: unknown mime-type for "Hello World!" -- using "application/*"
Error: no such file "Hello World!"
I guess I am missing something pretty fundamental about UNIX.
I'm pretty lost, even after looking at the source code of the original env. It sets the environment and launches the program (or so it seems to me...).
First of all, you should very seldom use $* and you should almost always use "$@" instead. There are a number of questions here on SO which explain the ins and outs of why.
Second - the env command has two main uses. One is to print the current environment; the other is to completely control the environment of a command when it is run. The third use, which you are demonstrating, is to modify the environment, but frankly there's no need for that - the shells are quite capable of handling that for you.
Mode 1:
env
Mode 2:
env -i HOME=$HOME PATH=$PREPENDPATH:$PATH ... command args
This version cancels all inherited environment variables and runs command with precisely the environment set by the ENVVAR=value options.
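For comparison, the same "precise environment" effect as mode 2 can be had from Python, since env= on a subprocess call replaces the inherited environment entirely. A sketch:

```python
import subprocess
import sys

# Like `env -i HOME=... command`: the child sees only the variables
# we pass, nothing inherited from the parent process.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(sorted(os.environ))"],
    env={"HOME": "/tmp"},
    capture_output=True, text=True,
).stdout
print(out)
```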
The third mode - amending the environment - is less important because you can do that fine with regular (civilized) shells. (That means "not C shell" - again, there are other questions on SO with answers that explain that.) For example, you could perfectly well do:
#!/bin/bash
export PATH=${PREPENDPATH:?}:$PATH
exec python "$@"
This insists that $PREPENDPATH is set to a non-empty string in the environment, and then prepends it to $PATH, and exports the new PATH setting. Then, using that new PATH, it executes the python program with the relevant arguments. The exec replaces the shell script with python. Note that this is quite different from:
#!/bin/bash
PATH=${PREPENDPATH:?}:$PATH exec python "$@"
Superficially, this is the same. However, this will execute the python found on the pre-existing PATH, albeit with the new value of PATH in the process's environment. So, in the example, you'd end up executing Python from /usr/bin and not the one from /home/pi/prepend/bin.
In your situation, I would probably not use env and would just use an appropriate variant of the script with the explicit export.
The env command is unusual because it does not recognize the double-dash to separate options from the rest of the command. This is in part because it does not take many options, and in part because it is not clear whether the ENVVAR=value options should come before or after the double dash.
I actually have a series of scripts for running (different versions of) a database server. These scripts really use env (and a bunch of home-grown programs) to control the environment of the server:
#!/bin/ksh
#
# @(#)$Id: boot.black_19.sh,v 1.3 2008/06/25 15:44:44 jleffler Exp $
#
# Boot server black_19 - IDS 11.50.FC1

IXD=/usr/informix/11.50.FC1
IXS=black_19
cd $IXD || exit 1

IXF=$IXD/do.not.start.$IXS
if [ -f $IXF ]
then
    echo "$0: will not start server $IXS because file $IXF exists" 1>&2
    exit 1
fi

ONINIT=$IXD/bin/oninit.$IXS
if [ ! -f $ONINIT ]
then ONINIT=$IXD/bin/oninit
fi

tmpdir=$IXD/tmp
DAEMONIZE=/work1/jleffler/bin/daemonize
stdout=$tmpdir/$IXS.stdout
stderr=$tmpdir/$IXS.stderr

if [ ! -d $tmpdir ]
then asroot -u informix -g informix -C -- mkdir -p $tmpdir
fi

# Specialized programs carried to extremes:
# * asroot sets UID and GID values and then executes
# * env, which sets the environment precisely and then executes
# * daemonize, which makes the process into a daemon and then executes
# * oninit, which is what we really wanted to run in the first place!
# NB: daemonize defaults stdin to /dev/null and could set umask but
#     oninit dinks with it all the time so there is no real point.
# NB: daemonize should not be necessary, but oninit doesn't close its
#     controlling terminal and therefore causes cron-jobs that restart
#     it to hang, and interactive shells that started it to hang, and
#     tracing programs.
# ??? Anyone want to integrate truss into this sequence?

asroot -u informix -g informix -C -a dbaao -a dbsso -- \
env -i HOME=$IXD \
    INFORMIXDIR=$IXD \
    INFORMIXSERVER=$IXS \
    INFORMIXCONCSMCFG=$IXD/etc/concsm.$IXS \
    IFX_LISTEN_TIMEOUT=3 \
    ONCONFIG=onconfig.$IXS \
    PATH=/usr/bin:$IXD/bin \
    SHELL=/usr/bin/ksh \
    TZ=UTC0 \
    $DAEMONIZE -act -d $IXD -o $stdout -e $stderr -- \
    $ONINIT "$@"

case "$*" in
(*v*) track-oninit-v $stdout;;
esac
You should carefully read the wikipedia article about shebang.
When the kernel sees the magic number corresponding to the shebang, it does an execve on the interpreter path given after the shebang, passing the script itself as an argument.
Your script fails because the file you give (/usr/bin/env.1) is not a binary executable - it begins with a shebang itself...
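The execve mechanics are easy to observe with a toy interpreter: point a script's shebang at /bin/echo, and echo simply prints the arguments the kernel hands it. A sketch, assuming a Linux system:

```python
import os
import stat
import subprocess
import tempfile

# Write a script whose "interpreter" is /bin/echo. On Linux the kernel
# execs: /bin/echo <text-after-interpreter> <path-of-script>, where the
# text after the interpreter path arrives as a single argument.
with tempfile.NamedTemporaryFile("w", suffix=".sh", dir=".", delete=False) as f:
    f.write("#!/bin/echo hello\n")
    script = f.name
os.chmod(script, os.stat(script).st_mode | stat.S_IXUSR)

out = subprocess.run([script], capture_output=True, text=True).stdout
print(out)  # hello <path-of-script>
os.unlink(script)
```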
Ideally, you could resolve it using... env on your script with this line as a shebang:
#!/usr/bin/env /usr/bin/env.1 python
It won't work on Linux though, since Linux passes "/usr/bin/env.1 python" to env as a single argument (it doesn't split the shebang arguments), so env looks for a program with that whole string as its name.
So the only way I see is to write your env.1 in C.
EDIT: seems like no one believes me ^^, so I've written a simple and dirty env.1.c:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static const char prependpath[] = "/your/prepend/path/here:";

int main(int argc, char** argv) {
    char* args[argc + 1];
    static char env[] = "/usr/bin/env";
    int i;

    /* arguments: the same, but run through /usr/bin/env */
    args[0] = env;
    for (i = 1; i < argc; i++)
        args[i] = argv[i];
    args[argc] = NULL;

    /* environment: prepend to PATH (+1 for the terminating NUL) */
    char* p = getenv("PATH");
    char* newpath = malloc(strlen(p) + strlen(prependpath) + 1);
    sprintf(newpath, "%s%s", prependpath, p);
    setenv("PATH", newpath, 1);

    execv(env, args);
    return 1; /* only reached if execv failed */
}
