I'm using the subprocess module and it works fine; the only thing is that stdout is returned as bytes, so it prints with a b'' prefix (or, in some cases, longer text like "user config - ignore ..."). Is it possible to remove this first part of the stdout without using str.substring() or similar methods?
output = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
In the above example the stdout can be decoded with .decode(), and the result is saved as a str:
decoded_output = output.stdout.decode()
And if a command supports JSON output (for example pvesh in Proxmox), you can load the decoded string as JSON:
import json

json_output = json.loads(decoded_output)
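Since Python 3.7 you can also pass text=True so that run() returns str instead of bytes, which removes the b'' prefix without any manual decoding; a minimal sketch, reusing the ls -l example from above:
import subprocess

# text=True makes run() decode stdout/stderr for you, so output.stdout is already a str.
output = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT, text=True)
print(output.stdout)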
I am trying to execute the expression abs.__doc__ inside the exec() function, but for some reason it does not work.
function = input("Please enter the name of a function: ")
proper_string = str(function) + "." + "__doc__"
exec(proper_string)
Essentially, I am going through a series of exercises, and one of them asks for a short description of the entered function using the __doc__ attribute. I am trying with abs.__doc__, but my command line comes up empty. When I run python in the command line and type abs.__doc__ by itself it works, but for some reason when I pass it as a string to exec() I get no output. Any help would be greatly appreciated.
As a note, I do not think I have imported any libraries that could interfere, but these are the libraries that I have imported so far:
import sys
import datetime
from math import pi
My Python version is Python 3.10.4. My operating system is Windows 10.
abs.__doc__ is a string. You should use eval instead of exec to get the string: exec() executes statements and always returns None, while eval() evaluates an expression and returns its value.
Example:
function = input("Please enter the name of a function: ")
proper_string = str(function) + "." + "__doc__"
doc = eval(proper_string)
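Note that, unlike the interactive prompt, a script does not echo the value of an expression automatically, so you still have to print the result yourself:
print(doc)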
You can access it using globals():
def func():
    """Func"""
    pass
mine = input("Please enter the name of a function: ")
print(globals()[mine].__doc__)
globals() returns a dictionary that keeps track of all the module-level definitions. globals()[mine] simply looks up the name stored in mine, which is a function object if mine is "func".
As for abs and int -- since these are builtins -- you can look them up directly using getattr(abs, "__doc__") or, more explicitly, getattr(__builtins__, "abs").__doc__.
There are different ways to look up the Python object corresponding to a given string; it's better not to use exec or eval unless really needed.
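A minimal sketch that ties both lookups together (the fallback to the builtins module is my own addition for illustration; it covers names like abs and int that are not module-level definitions):
import builtins

func_name = input("Please enter the name of a function: ")

# Look in the module's own namespace first, then fall back to the builtins.
obj = globals().get(func_name) or getattr(builtins, func_name, None)

if obj is None:
    print(f"No function named {func_name!r} found")
else:
    print(obj.__doc__)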
I am trying to download an audio dataset. I have all the audio links stored in a CSV file; I read the CSV to get the links, and now I have to download the audio files one by one. Here's the code.
if not os.path.exists(audio_path_orig):
    line = f"wget {episode_url}"
    print('command:', line)
    process = subprocess.Popen([line], shell=True)
    process.wait()
For example, the line variable contains:
wget https://stutterrockstar.files.wordpress.com/2012/08/male-episode-14-with-grant.mp3
Note that the URL works (you can check it for yourself), but when I try to download it using Python it gives me the error below.
error: The filename, directory name, or volume label syntax is incorrect
Look at the documentation for Popen:
args should be a sequence of program arguments or else a single string or path-like object. By default, the program to execute is the first item in args if args is a sequence.
And:
Unless otherwise stated, it is recommended to pass args as a sequence.
Also:
The recommended approach to invoking subprocesses is to use the run() function for all use cases it can handle.
So use:
if not os.path.exists(audio_path_orig):
args = ["wget", f"{episode_url}"]
print('command:', " ".join(args))
result = subprocess.run(
args, capture_output=True, text=True
)
if result.returncode != 0:
print(f"wget returned error {result.returncode}")
print("Standard output:")
print(result.stdout)
print("Error output:")
print(result.stderr)
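If wget itself turns out to be the problem (the error message above comes from Windows, where wget is usually not installed by default), a pure-Python download avoids the external command entirely; a minimal sketch, assuming episode_url and audio_path_orig as in the question:
import os
import urllib.request

if not os.path.exists(audio_path_orig):
    # Download the file directly with the standard library instead of shelling out to wget.
    urllib.request.urlretrieve(episode_url, audio_path_orig)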
I am trying to build a script which accepts a tracking number as input, builds the URL, and then gets the HTML response. I am trying to display this response in the terminal using the html2text program, emulating the command "html2text filename" typed in the terminal, but the raw HTML is displayed instead of the usual html2text output. Where am I going wrong here?
#!/usr/bin/python3
#trial using bash calls no html2text library
import requests
import subprocess # to execute bash commands
try:
    check_for_package = subprocess.Popen(("dpkg", "-s", "html2text"), stdout=subprocess.PIPE)
    output = subprocess.check_output(("grep", "Status"), stdin=check_for_package.stdout)
    check_for_package.wait()
    opstr = str(output, 'utf-8')
    print(opstr)
    if opstr == "Status: install ok installed\n":
        print("Package installed")
except:
    print("installing html2text..............................")
    install_pkg = subprocess.check_call("sudo apt install html2text", shell=True)
r = requests.get("http://ipsweb.ptcmysore.gov.in/ipswebtracking/IPSWeb_item_events.asp?itemid=RT404715658HK&Submit=Submit")
print(r.status_code)
raw_html=r.text
#print(raw_html)
#raw_html = str(raw_html , 'utf-8')
view_html = subprocess.Popen(["html2text", raw_html])
output = view_html.communicate()
view_html.wait()
#view_html = subprocess.Popen("html2text template", shell=True)
print(output)
Update: I have got around the issue for now by storing the output of r.text in a file and then calling html2text on that file.
The version of html2text you're using expects the argument to be a filename, not the HTML. To provide the HTML to it, you need to run the command with no argument, and provide the HTML on its standard input.
view_html = subprocess.Popen(["html2text"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
view_html.stdin.write(raw_html)
view_html.stdin.close() # Close the pipe so html2text will get EOF
output = view_html.stdout.read()
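A shorter equivalent, assuming html2text reads HTML from standard input as above, is to let subprocess.run() do the piping for you:
result = subprocess.run(["html2text"], input=raw_html,
                        capture_output=True, text=True)
print(result.stdout)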
I'd like to pass a JSON object as a command line argument to node. Something like this:
node file.js --data { "name": "Dave" }
What's the best way to do this or is there another more advisable way to do accomplish the same thing?
If it's a small amount of data, I'd use https://www.npmjs.com/package/minimist, which is a command-line argument parser for Node.js. It's not JSON, but you can simply pass options like
--name=Foo
or
-n Foo
I think this is better suited for a command-line tool than JSON.
If you have a large amount of data, you're better off creating a JSON file and passing only the file name as a command-line argument, so that your program can load and parse it.
Big objects as command-line arguments are most likely not a good idea.
this works for me:
$ node foo.js --json-array='["zoom"]'
then in my code I have:
import * as _ from 'lodash';
const parsed = JSON.parse(cliOpts.json_array || '[]');
_.flattenDeep([parsed]).forEach(item => console.log(item));
I use dashdash, which I think is the best choice when it comes to command line parsing.
To do the same thing with an object, just use:
$ node foo.js --json-object='{"bar": true}'
This might be a bit overkill and not appropriate for what you're doing because it renders the JSON unreadable, but I found a robust way (as in "works on any OS") to do this was to use base64 encoding.
I wanted to pass around lots of options via JSON between parts of my program (a master node routine calling a bunch of small slave node routines). My JSON was quite big, with annoying characters like quotes and backslashes so it sounded painful to sanitize that (particularly in a multi-OS context).
In the end, my code (TypeScript) looks like this:
in the calling program:
const buffer: Buffer = new Buffer(JSON.stringify(myJson));
const command: string = 'node slave.js --json "' + buffer.toString('base64') + '" --b64';
const slicing: child_process.ChildProcess = child_process.exec(command, ...)
in the receiving program:
let inputJson: string;
if (commander.json) {
    inputJson = commander.json;
    if (commander.b64) {
        inputJson = new Buffer(inputJson, 'base64').toString('ascii');
    }
}
(The --b64 flag lets me still choose between entering a normal JSON string manually or using the base64 version; I'm also using commander just for convenience.)
I've seen various other posts about this, but unfortunately I still haven't been able to figure this out:
If I do something like this:
temp = subprocess.Popen("whoami", shell=True, stdout=subprocess.PIPE)
out = temp.communicate()
print(out)
then I get something of the form
(b'username\n', None)
With other attempts (such as adding a .wait()) I've been getting the username on one line, and a 0 as a return code on the next, however only the 0 was being stored in my variable.
Is there an easy way I can format that to store only the username in a variable? I tried something like out[3:11] but that didn't work.
Thanks
The easiest way is to use subprocess.check_output():
username = subprocess.check_output(['whoami'], text=True).strip()
Without text=True, check_output() returns bytes (hence the b'...' you are seeing); with it, the result is a str that .strip() cleans up.
Or better:
import getpass

username = getpass.getuser()
Adding the universal_newlines=True argument tells subprocess calls to return strings. I've been using this instead of explicitly decoding bytestreams.
temp = subprocess.Popen("whoami",
shell=True,
stdout=subprocess.PIPE,
universal_newlines=True)
out = temp.communicate()
print(out)
# prints: ('username\n', None)
Subprocess docs:
If universal_newlines is True, the file objects stdin, stdout and stderr will be opened as text streams in universal newlines mode using the encoding returned by locale.getpreferredencoding(False).
After communicate, you can read the return code from temp.returncode.
From http://docs.python.org/dev/library/subprocess.html#subprocess.Popen.returncode:
Popen.returncode
The child return code, set by poll() and wait() (and indirectly by communicate()). A None value indicates that the process hasn’t terminated yet.
If all you care about is that the call succeeds, use subprocess.check_output; a non-zero return code will raise CalledProcessError.
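A minimal sketch of that pattern, reusing the whoami example from above:
import subprocess

try:
    username = subprocess.check_output(["whoami"], text=True).strip()
    print(username)
except subprocess.CalledProcessError as err:
    print(f"whoami failed with return code {err.returncode}")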