How to execute CURL API command from python - python-3.x

Here is the curl command I have, which I want to execute through Python, but I'm getting the error " FileNotFoundError: [Errno 2] No such file or directory: 'curl' " even though I'm using the os library in Python.
import os
os.system('curl -H "public-api-token: 2dd81854fca7bcbd657bd06a99a46l9s" -X PUT -d "dataMapsGoogle=google.com" https://api.googlemaps/v1/data/url')
How can I execute the curl command through Python? I tried os.system(), but if there is a better approach it would be helpful.

Use the subprocess module:
import subprocess
cmd = ['curl', '-H', "public-api-token:....", ...]
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
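If the goal is just to make the HTTP request rather than to shell out to curl, the standard library can do it directly, which also avoids the "No such file or directory: 'curl'" problem entirely. A minimal sketch (the URL, token, and form field are copied verbatim from the question's curl command) that builds the equivalent PUT request:

```python
import urllib.parse
import urllib.request

# values copied from the question's curl command
url = "https://api.googlemaps/v1/data/url"
headers = {"public-api-token": "2dd81854fca7bcbd657bd06a99a46l9s"}
data = urllib.parse.urlencode({"dataMapsGoogle": "google.com"}).encode()

# -X PUT maps to method="PUT", -d to data, -H to headers
req = urllib.request.Request(url, data=data, headers=headers, method="PUT")
print(req.get_method())  # PUT
print(req.data)          # b'dataMapsGoogle=google.com'
# to actually send it: resp = urllib.request.urlopen(req)
```

No external process is spawned, and the response object (when sent) gives you the status code and body directly.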

Related

SSH using Mobaxterm in python

I am trying to automate SSH access to a remote server with MobaXterm in Python.
I can launch MobaXterm and send the SSH command to it using the command below:
import subprocess
import time
moba_path = r"C:\Program Files (x86)\Mobatek\MobaXterm\MobaXterm.exe"
output = subprocess.Popen([moba_path, "-newtab", "ssh -t -l ubuntu server1 hostname"], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
And I get the expected output below, with the "hostname".
How do I get the response back, so that I can check whether the SSH access was successful or not?
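A GUI program like MobaXterm generally does not echo its terminal contents on the stdout pipe, so capturing its output this way is unreliable. One way around that (assuming an ssh client is available on PATH, e.g. the OpenSSH client bundled with Windows 10+) is to run ssh directly and capture its output — a sketch:

```python
import subprocess

def run_and_capture(cmd, timeout=30):
    """Run a command, returning (ok, output): ok is True when the exit
    status is 0; output is stdout on success, stderr on failure."""
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    ok = proc.returncode == 0
    return ok, (proc.stdout if ok else proc.stderr).strip()

# e.g. check the SSH access from the question (host and user are
# placeholders taken from it):
# ok, out = run_and_capture(["ssh", "-l", "ubuntu", "server1", "hostname"])
# if ok: print("SSH OK, hostname is", out)
# else:  print("SSH failed:", out)
```

Checking `ok` (the exit status) is the reliable success test; the captured text then tells you the hostname or the failure reason.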

Getting stdout of nested shell command/script execution in Python

I am running an Ansible playbook through a Python program:
cmd = ['ansible-playbook', '-i', 'inventory', 'primary.yml']
process = subprocess.Popen(cmd, stdout=subprocess.PIPE, encoding='utf-8')
while True:
    output = process.stdout.readline()
    if output == '' and process.poll() is not None:
        break
    if output:
        print(output.strip())
rc = process.poll()
return rc
This playbook internally calls a shell script like below
- name: call the install script
  shell: sudo ./myscript.sh
Currently, I am getting the streaming stdout of the ansible-playbook command, but I cannot see anything while myscript.sh is executing.
I have to wait blindly until the script finishes its execution without any stdout. How can I see the output of all the child processes triggered from the main shell command through Python?
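Part of the delay is Ansible's own behavior: it collects a task's output and prints it only when the task finishes, so a long-running shell task stays silent no matter how Python reads the pipe. What the Python side can ensure is that whatever the playbook does emit arrives line by line, with stderr merged in — a sketch (setting PYTHONUNBUFFERED is an assumption that helps with Python-based tools like Ansible):

```python
import os
import subprocess

def stream(cmd):
    """Run cmd, yielding each output line as soon as it is written,
    with stderr merged into stdout."""
    env = dict(os.environ, PYTHONUNBUFFERED="1")  # discourage child-side buffering
    proc = subprocess.Popen(
        cmd,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # interleave errors with normal output
        text=True,
        bufsize=1,                 # line-buffered on our side of the pipe
        env=env,
    )
    for line in proc.stdout:
        yield line.rstrip("\n")
    proc.wait()

# usage:
# for line in stream(["ansible-playbook", "-i", "inventory", "primary.yml"]):
#     print(line)
```

To see the nested script's lines while it runs, the script itself would have to write somewhere Python can watch (e.g. a log file tailed separately), since Ansible withholds task output until the task ends.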

Python: subprocess.call('nvm ls', shell=True) giving this error /bin/sh: nvm: command not found

def runLinter2():
    subprocess.call('nvm ls', shell=True)
    return
When I run this Python script, it gives the error "/bin/sh: nvm: command not found",
but when I simply run nvm ls in the terminal, it works.
To run the Python script, I am using the command python3 test.py.
Could I get help on this?
The solution is here: Python subprocess.call a bash alias
You should be able to call
subprocess.call(['/bin/bash', '-i', '-c', command])
so that you can access the aliases and shell functions defined in ~/.bashrc

ps/pgrep cannot find running script missing #!/bin/bash

I found that ps and pgrep cannot find a running script that lacks the "#!/bin/bash" line.
Here is a sample.sh:
while true
do
    echo $(date)
done
Start the script (Ubuntu 18.04, Linux 4.15.0-101-generic):
$ echo $BASH
/bin/bash
$ ./sample.sh
Open another terminal; ps only finds the grep command itself:
$ ps -aux | grep sample.sh
16887 0.0 0.0 16184 1008 pts/4 S+ 07:12 0:00 grep --color=auto sample
pgrep finds nothing:
$pgrep sample
$
But if I add "#!/bin/bash" to sample.sh, everything works now:
#!/bin/bash <-----add this line
while true
do
echo $(date)
done
I am wondering why.
Let's start with the second of your cases, namely where you do have #!/bin/bash, because it is actually the easier one to deal with first.
With the #!/bin/bash
When you execute a script which starts with #!/path/to/interpreter, the Linux kernel will understand this syntax and will invoke the specified interpreter for you in the same way as if you had explicitly added /path/to/interpreter to the start of the command line. So in the case of your script starting with #!/bin/bash, if you look using ps ux, you will see the command line /bin/bash ./sample.sh.
Without the #!/bin/bash
Now turning to the other case, where the #!/bin/bash is missing. This one is more subtle.
A file which is neither a compiled executable nor a file starting with a #! line cannot be executed by the Linux kernel at all. Here is an example of trying to run sample.sh without the #!/bin/bash line from a Python script:
>>> import subprocess
>>> p = subprocess.Popen("./sample.sh")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/subprocess.py", line 394, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1047, in _execute_child
raise child_exception
OSError: [Errno 8] Exec format error
And to show that this is not just a python issue, here is a demonstration of exactly the same thing, but from a C program. Here is the C code:
#include <stdio.h>
#include <unistd.h>

int main() {
    execl("./sample.sh", "sample.sh", NULL);
    /* only reached if exec failed */
    perror("exec failed");
    return 1;
}
and here is the output:
exec failed: Exec format error
So what is happening in this case, when you run the script, is that because you are invoking it from a bash shell, bash provides some fault tolerance by running the commands directly after the attempt to "exec" the script has failed.
What is happening in more detail is that:
bash forks a subshell,
inside the subshell it straight away does a call to the Linux kernel to "exec" your executable, and if successful, that would end that (subshell) process and replace it with a process running the executable
however, the exec is not successful, and this means that the subshell is still running
at that point the subshell just reads the commands inside your script and starts executing them directly.
The overall effect is very similar to the #!/bin/bash case, but because the subshell was just started by forking your original bash process, it has the same command line, i.e. just bash, without any command line argument. If you look for this subshell in the output of ps uxf (a tree-like view of your processes), you will see it just as
bash
\_ bash
whereas in the #!/bin/bash case you get:
bash
\_ /bin/bash ./sample.sh
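The shell's fallback described above can be reproduced from Python: try to run the file directly, and when the kernel answers with Exec format error (ENOEXEC), rerun it through bash explicitly — a sketch:

```python
import errno
import subprocess

def run_script(path):
    """Execute `path` directly; if the kernel refuses with ENOEXEC
    (no #! line and not a compiled binary), fall back to running it
    via bash, just as an invoking shell would. Returns the exit status."""
    try:
        return subprocess.call([path])
    except OSError as e:
        if e.errno != errno.ENOEXEC:
            raise  # a different failure, e.g. file not found
        return subprocess.call(["/bin/bash", path])
```

Unlike bash's fork-and-interpret trick, this fallback shows up in ps as an ordinary `/bin/bash ./sample.sh` process, so pgrep can find it.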

Execute shell script on cygwin from Python

I want to execute a shell script on Cygwin from Python. The shell script creates a file as its output.
I tried
import os
import subprocess
os.chdir(r"C:\cygwin64\bin")
cmd = ["bash", "-c", 'cd /<path for the script>; ./test.sh']
subprocess.call(cmd)
This works:
import os, subprocess
os.chdir(r"C:\cygwin64\bin")
cmd = ["bash", "-c", "cd $HOME; pwd; exit"]
ret = subprocess.call(cmd)
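An alternative that avoids the os.chdir entirely is to invoke Cygwin's bash by its full path as a login shell (-l), which sets up Cygwin's PATH before running the script. A sketch that only builds the argument list (the install path is an assumption; adjust to your setup):

```python
import subprocess

def cygwin_bash_cmd(script_dir, script, bash=r"C:\cygwin64\bin\bash.exe"):
    """Build an argument list that runs `script` inside `script_dir`
    under Cygwin's bash; -l makes it a login shell so Cygwin's own
    PATH and environment are initialized first."""
    return [bash, "-lc", f"cd {script_dir} && ./{script}"]

# e.g. subprocess.call(cygwin_bash_cmd("/home/me/scripts", "test.sh"))
```

Using `&&` instead of `;` also means the script is only run if the cd succeeded, so a wrong path fails loudly instead of running test.sh in the wrong directory.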
