I have a (free) Lattice Diamond 3.7 installation on Windows 7 and I would like to run synthesis jobs from the command line. I generated a *.prj file containing all relevant command line options, like the part, the top-level entity, and all source files.
Then I started pnmainc.exe from PowerShell and executed: synthesis -f arith_prng.prj
The arith_prng.prj file contains:
-a "ECP5UM"
-top arith_prng
-logfile D:\git\PoC\temp\lattice\arith_prng.lse.log
-lib poc
-vhd D:/git/PoC/tb/common/my_project.vhdl
-vhd D:/git/PoC/tb/common/my_config_KC705.vhdl
-vhd D:/git/PoC/src/common/utils.vhdl
-vhd D:/git/PoC/src/common/config.vhdl
-vhd D:/git/PoC/src/common/math.vhdl
-vhd D:/git/PoC/src/common/strings.vhdl
-vhd D:/git/PoC/src/common/vectors.vhdl
-vhd D:/git/PoC/src/common/physical.vhdl
-vhd D:/git/PoC/src/common/components.vhdl
-vhd D:/git/PoC/src/arith/arith.pkg.vhdl
-vhd D:/git/PoC/src/arith/arith_prng.vhdl
The synthesis process started and finished. Next, I tried to achieve the same behavior with a Python wrapper script controlling STDIN and STDOUT of a subprocess.
I can execute some commands, but synthesis is reported as an unknown command and is not listed in help. I assume that's because synthesis.exe is an external program.
For example, if I send help, then all help topics are displayed.
What can I do to run Tcl commands for Diamond from Python?
This is my Python code for experimenting with a Tcl shell wrapper:
from subprocess import Popen as Subprocess_Popen
from subprocess import PIPE as Subprocess_Pipe
from subprocess import STDOUT as Subprocess_StdOut

class Executable:
    _POC_BOUNDARY = "====== POC BOUNDARY ======"

    def __init__(self, executablePath):
        self._process = None
        self._executablePath = executablePath

    @property
    def Path(self):
        return self._executablePath

    def StartProcess(self, parameterList):
        parameterList.insert(0, str(self._executablePath))
        self._process = Subprocess_Popen(parameterList, stdin=Subprocess_Pipe, stdout=Subprocess_Pipe, stderr=Subprocess_StdOut, universal_newlines=True, bufsize=16, shell=True)

    def Send(self, line):
        print("  sending command: {0}".format(line))
        self._process.stdin.write(line + "\n")
        self._process.stdin.flush()

    def SendBoundary(self):
        print("  sending boundary")
        # Send() already appends the newline
        self.Send("puts \"{0}\"".format(self._POC_BOUNDARY))

    def GetReader(self):
        # yield the subprocess's output line by line, without the trailing newline
        for line in iter(self._process.stdout.readline, ""):
            yield line[:-1]
tclShell = Executable(r"D:\Lattice\diamond\3.7_x64\bin\nt64\pnmainc.exe")
print("starting process: {0!s}".format(tclShell.Path))
tclShell.StartProcess([])
reader = tclShell.GetReader()
iterator = iter(reader)
# send boundary and wait until pnmainc.exe is ready
tclShell.SendBoundary()
for line in iterator:
    print(line)
    if (line == tclShell._POC_BOUNDARY):
        break
print("pnmainc.exe is ready...")
tclShell.Send("help")
tclShell.SendBoundary()
for line in iterator:
    print(line)
    if (line == tclShell._POC_BOUNDARY):
        break
print("pnmainc.exe is ready...")
tclShell.Send("synthesis -f arith_prng.prj")
tclShell.SendBoundary()
for line in iterator:
    print(line)
    if (line == tclShell._POC_BOUNDARY):
        break
print("pnmainc.exe is ready...")
print("exit program")
tclShell.Send("exit")
print("reading output")
for line in iterator:
    print(line)
print("done")
To run a synthesis, there is no need for Tcl scripting with pnmainc. You can run the synthesizer binary directly, as follows.
Windows
synthesis.exe must be run from a command shell with all necessary Lattice environment variables set. The environment is set up by an executable called pnwrap.exe in the bin/nt64 directory. For your example, it must be called from a shell as follows:
D:\Lattice\diamond\3.7_x64\bin\nt64\pnwrap.exe -exec D:\Lattice\diamond\3.7_x64\ispfpga\bin\nt64\synthesis.exe -f arith_prng.prj
The -exec parameter specifies the executable to run in the Lattice environment. All following arguments are passed to synthesis.exe.
From within Python, you can run the synthesizer with (using your Executable class):
exe = Executable(r"D:\Lattice\diamond\3.7_x64\bin\nt64\pnwrap.exe")
parameterList = ['-exec', r"D:\Lattice\diamond\3.7_x64\ispfpga\bin\nt64\synthesis.exe", '-f', 'arith_prng.prj']
exe.StartProcess(parameterList)
Unfortunately, pnwrap.exe opens a new command shell window to set up the environment, so the output cannot be redirected via a pipe. However, the log can still be read from the log file specified in the .prj file.
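If you only need the results, here is a minimal sketch that waits for the wrapper to exit and then reads the log file (paths taken from the question; it assumes pnwrap.exe does not return before synthesis has finished):
import subprocess

exe = r"D:\Lattice\diamond\3.7_x64\bin\nt64\pnwrap.exe"
args = [exe, "-exec",
        r"D:\Lattice\diamond\3.7_x64\ispfpga\bin\nt64\synthesis.exe",
        "-f", "arith_prng.prj"]
subprocess.call(args)  # blocks until pnwrap.exe returns

# read the log named by the -logfile option in the .prj file
with open(r"D:\git\PoC\temp\lattice\arith_prng.lse.log") as log:
    print(log.read())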
Linux
First, you have to load the Lattice Diamond environment to set up all necessary environment variables. In Bash syntax, this is:
bindir=/opt/lattice/diamond/3.7_x64/bin/lin64
source $bindir/diamond_env
Then, you can directly execute synthesis from the ispfpga/bin/lin64 directory with a command line as in your example:
synthesis -f arith_prng.prj
The synthesis executable will be found in $PATH.
From within Python, you can run the synthesizer with (using your Executable class):
exe = Executable('/opt/lattice/diamond/3.7_x64/ispfpga/bin/lin64/synthesis')
parameterList = ['-f', 'arith_prng.prj']
exe.StartProcess(parameterList)
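Since there is no wrapper window on Linux, the output can also be captured directly. A sketch assuming Python 3.5+ (for subprocess.run) and that Python was started from a shell in which diamond_env was already sourced:
import subprocess

result = subprocess.run(
    ['/opt/lattice/diamond/3.7_x64/ispfpga/bin/lin64/synthesis',
     '-f', 'arith_prng.prj'],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
    universal_newlines=True)
print(result.stdout)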
Related
I am trying to build a system where a list of the available wifi networks is stored for a specific purpose. The problem is that running a system command with os.system() and assigning the result to a variable res only stores the command's return code, which is useless to me at this point.
I know of no approach that provides the desired result.
import os
res = os.system('nmcli dev wifi')
The variable res should store the command's output rather than the return code; anything that captures the output will do the job.
You can do this using the Popen class from the subprocess module:
from subprocess import Popen, PIPE

# First argument is the program name, followed by its arguments.
arguments = ['ls', '-l', '-a']

# Run the program ls as a subprocess.
process = Popen(arguments, stdout=PIPE, stderr=PIPE)

# Get the output and any errors. Be aware, they are going to be in bytes!
stdout, stderr = process.communicate()

# Print the output of the ls command.
print(stdout.decode())
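Applied to the command from the question, a short sketch using check_output, which returns the output directly:
from subprocess import check_output

# res holds the printed wifi list instead of os.system()'s return code
res = check_output(['nmcli', 'dev', 'wifi']).decode()
print(res)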
I have a Python program which calls a shell script through the subprocess module. I am looking for a way to pass a simple file as an input to the shell script. Does this happen through subprocess and Popen?
I am trying this for an AWS Lambda function.
It would be helpful if you could share an excerpt of your code in your question, but assuming bits of it, here is a way to achieve this:
import shlex
import logging
from subprocess import PIPE, Popen

logger = logging.getLogger(__name__)

def run_script(script_path, script_args):
    """
    This function will run a shell script.
    :param script_path: String: the path of the script that needs to be called
    :param script_args: String: the arguments needed by the shell script
    :return:
    """
    logger.info("Running bash script {script} with parameters: {params}".format(script=script_path, params=script_args))
    # Adding a whitespace in shlex.split because the path gets distorted if args are added without it
    session = Popen(shlex.split(script_path + " " + script_args), stderr=PIPE, stdout=PIPE, shell=False)
    stdout, stderr = session.communicate()
    # Beware that stdout and stderr will be bytes, so decode them to get proper Python strings.
    logger.debug(stdout.decode('utf-8'))
    if stderr:
        logger.error(stderr)
        raise Exception("Error " + stderr.decode('utf-8'))
    return True
Now, a couple of things to note here:
Your bash script should handle its arguments properly, whether they are positional like $1 or named parameters like --file or -f.
Pass all the parameters you want in the string handed to shlex.split().
Also note the comments in the code above.
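For example, a hypothetical call (both paths are placeholders) that hands a file to the script as a named parameter:
run_script("/tmp/process.sh", "--file /tmp/input.txt")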
I run the Python script using the terminal command:
python3 myScript.py
It simply runs my program, but I want to open a Python console after the script completes so that I can access my script's variables.
So, what should I do? How can I access my script's variables after running the code from the terminal?
Open a Python terminal (type python in cmd), then paste this (replace 'myScript.py' with your script's filename):
def run():
    t = ""
    with open('myScript.py') as f:
        t = f.read()
    return t
Type exec(run()). Now you will have access to the variables defined in myScript.py.
I needed to do this so I could explore the result of a request from the requests library, without having to paste the code to make the requests every time.
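A hypothetical session, assuming myScript.py defines a variable x = 42 and run() was pasted as above:
>>> exec(run())
>>> x
42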
Make the first program run the second program with the variables as arguments. For example:
# program1.py
import os

var1 = 7
var2 = "hi"
filename = "program2.py"  # name of the second script (assumed)
os.system("python %s %d %s" % (filename, var1, var2))
# program2.py
import sys

# do something such as:
print(sys.argv[1])  # for var1
print(sys.argv[2])  # for var2
Basically, you are running program2 with arguments that can be referenced later.
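A sketch of the resulting run (using the assumed file name program2.py from above):
$ python program1.py
7
hi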
Hope this helps :)
Using subprocess.Popen produces incomplete results, whereas subprocess.call gives correct output.
This is related to a regression script which has 6 jobs; each job performs the same task but on different input files, and I'm running everything in parallel using subprocess.Popen.
The task is performed by a shell script which calls a bunch of C-compiled executables that generate some text reports and then convert the text report info into jpg images.
Sample of the shell script (runit is the file name) calling the C-compiled executables:
#!/bin/csh -f
#file name : runit
#C - Executable 1
clean_spgs
#C - Executable 2
scrub_spgs_all file1
scrub_spgs_all file2
#C - Executable 3
scrub_pick file1 1000
scrub_pick file2 1000
While using subprocess.Popen, both scrub_spgs_all and scrub_pick try to run in parallel, causing the script to generate incomplete results, i.e. the output text files don't contain complete information and some of the output text reports are missing.
The subprocess.Popen call is:
resrun_proc = subprocess.Popen("./"+runrescompare, shell=True, cwd=rescompare_dir, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
where runrescompare is a shell script containing:
#!/bin/csh
#some other text
./runit
Whereas using subprocess.call generates all the output text files and jpg images correctly, but then I can't run all 6 jobs in parallel.
resrun_proc = subprocess.call("./"+runrescompare, shell=True, cwd=rescompare_dir, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
What is the correct way to call a C executable from a shell script using Python subprocess calls so that all 6 jobs can run in parallel (using Python 3.5.1)?
Thanks.
You tried to simulate multiprocessing with subprocess.Popen(), which does not work like you want: the output blocks after a while unless you consume it, for instance with communicate() (but that is blocking) or by reading the output; with 6 concurrent handles in a loop, you are bound to get deadlocks.
The best way is to run the subprocess.call lines in separate threads.
There are several ways to do it. A small, simple example with locking:
import threading, time

lock = threading.Lock()

def func1(a, b, c):
    lock.acquire()
    print(a, b, c)
    lock.release()
    time.sleep(10)

tl = []

t = threading.Thread(target=func1, args=[1, 2, 3])
t.start()
tl.append(t)

t = threading.Thread(target=func1, args=[4, 5, 6])
t.start()
tl.append(t)

# wait for all threads to complete (if you want to wait, else
# you can skip this loop)
for t in tl:
    t.join()
I took the time to create an example more suited to your needs:
Two threads executing a command and getting the output, then printing it within a lock to avoid mixups. I have used the check_output method for this. I'm on Windows, and I list the C and D drives in parallel.
import threading, time, subprocess

lock = threading.Lock()

def func1(runrescompare, rescompare_dir):
    resrun_proc = subprocess.check_output(runrescompare, shell=True, cwd=rescompare_dir, stderr=subprocess.PIPE, universal_newlines=True)
    lock.acquire()
    print(resrun_proc)
    lock.release()

tl = []

t = threading.Thread(target=func1, args=["ls", "C:/"])
t.start()
tl.append(t)

t = threading.Thread(target=func1, args=["ls", "D:/"])
t.start()
tl.append(t)

# wait for all threads to complete (if you want to wait, else
# you can skip this loop)
for t in tl:
    t.join()
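Applied to your regression, a hedged sketch (the six job directory names are placeholders) that starts all jobs in parallel; subprocess.call blocks inside each thread until that job's csh script has finished, so the executables inside runit still run strictly in order:
import subprocess, threading

def run_job(rescompare_dir):
    # call() waits for the script, so clean_spgs, scrub_spgs_all
    # and scrub_pick run one after another within this job
    subprocess.call("./runrescompare", shell=True, cwd=rescompare_dir)

dirs = ["job1", "job2", "job3", "job4", "job5", "job6"]  # placeholder directories
threads = [threading.Thread(target=run_job, args=(d,)) for d in dirs]
for t in threads:
    t.start()
for t in threads:
    t.join()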
Currently, I have a command that looks something like the following:
my_command = Popen([activate_this_python_virtualenv_file,
                    "-m", "my_command", "-l",
                    directory_where_ini_file_for_my_command_is + "/" + my_ini_file_name],
                   stderr=subprocess.STDOUT, stdout=subprocess.PIPE, shell=False,
                   universal_newlines=False, cwd=directory_where_my_module_is)
I have figured out how to access and process the output, deal with subprocess.PIPE, and make subprocess do a few other neat tricks.
However, it seems odd to me that the standard Python documentation for subprocess doesn't mention a way to just get the actual command line as subprocess.Popen puts it together from arguments to the Popen constructor.
For example, perhaps my_command.get_args() or something like that?
Is it just that reconstructing the command line run by Popen is considered easy enough?
I can just put the arguments together on my own, without accessing the command subprocess runs with Popen, but if there's a better way, I'd like to know it.
It was added in Python 3.3. According to the docs:
The following attributes are also available:
Popen.args The args argument as it was passed to Popen – a sequence of
program arguments or else a single string.
New in version 3.3.
So sample code would be:
my_args_list = []  # your list
p = subprocess.Popen(my_args_list)
assert p.args == my_args_list
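If you want the joined command-line string rather than the list, subprocess.list2cmdline can help. It is an internal helper implementing Windows quoting rules, so treat this as a sketch rather than a guaranteed API:
import subprocess

p = subprocess.Popen(['python', '--version'])  # assumes python is on PATH
p.wait()
print(p.args)                           # ['python', '--version']
print(subprocess.list2cmdline(p.args))  # python --version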