python3 - exec(open('....py').read()) - transfer variables - python-3.x

I'm stuck on a problem right now:
I would like to have one Python script (Slave.py, which is quite complex and already finished) started/run several times by another script (Master.py) in the Python IDLE, with Slave.py being completely restarted each time. This means that all of its variables, imports, etc. are reinitialized with each start. Each time Slave.py is restarted, different variable values should be passed from Master.py to Slave.py (the background is that Slave.py needs a running time of 1 to 12 hours and the AI does not remain stable over several runs with changed parameters).
I work on Windows 10 with the Python IDLE and have Python 3.8.3.
I found something that at least executes Slave.py in the IDLE:
(What is an alternative to execfile in Python 3?)
with open ("Slave_01.py") as f:
code = compile (f.read (), "Slave_01.py", 'exec')
exec (code)
The remaining problem for me is how to transfer variables from Master.py to Slave.py. I think you can write in Master.py, for example:
global_vars = {"Wert1": "Ford", "Wert2": "Mustang", "Wert3": 1964}
local_vars = {"Wert1": "VW", "Wert2": "Käfer", "Wert3": 1966}
with open("Slave.py") as f:
    code = compile(f.read(), "Slave.py", 'exec')
    exec(code, global_vars, local_vars)
My question now is: how do you pick up those variable values inside Slave.py?
Thanks for suggestions on how one could write that concretely.
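One way this might work, as a minimal sketch (the names below are just the ones from the example above, not tested against your setup): if Master.py passes a single dict as the globals argument of exec, every key of that dict is visible as a global name inside Slave.py, and anything Slave.py assigns at module level ends up back in that dict. Passing separate globals and locals dicts, as in the snippet above, makes module-level assignments land in the locals dict, which is usually not what you want.

# Master.py (sketch)
slave_vars = {"Wert1": "Ford", "Wert2": "Mustang", "Wert3": 1964}
with open("Slave.py") as f:
    code = compile(f.read(), "Slave.py", 'exec')
exec(code, slave_vars)  # one dict: its keys become globals inside Slave.py

# Slave.py (sketch) can then simply use the injected names:
# print(Wert1, Wert2, Wert3)

Note that exec runs Slave.py inside the same Python process, so imported modules stay cached between runs; if a truly fresh interpreter is needed each time, launching Slave.py as a subprocess would be the alternative.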

Related

Attempting to append all content into file, last iteration is the only one filling text document

I'm trying to create a file and append all the content being calculated into that file, but when I run the script only the very last iteration is written to the file and nothing else.
My code is on Pastebin; it's too long, and I feel like you would have to see exactly how the iteration is happening.
To summarize it: go through an array of model numbers; if a model number matches, call the function that calculates that MAC_ADDRESS; when done calculating, store all the content in the file.
I have tried two possible routes and both have failed, giving the same result. There is no error in the code (it runs), but it just doesn't store the content in the file properly: there should be 97 different APs and it's storing only 1.
The difference between the first and second attempt:
Attempt 1) I open/create the file at the beginning of the script and close it at the very end.
Attempt 2) I open/create the file and close it per iteration.
First Attempt:
https://pastebin.com/jCpLGMCK
#Beginning of code
File = open("All_Possibilities.txt", "a+")
#End of code
File.close()
Second Attempt:
https://pastebin.com/cVrXQaAT
#Per function
File = open("All_Possibilities.txt", "a+")
#per function
File.close()
If I'm not supposed to reference other websites, please let me know and I'll just paste the code in this post.
Rather than close(), please use with:
with open('All_Possibilities.txt', 'a') as file_out:
    file_out.write('some text\n')
The documentation explains that you don't need + to append writes to a file.
You may want to add some debugging console print() statements, or use a debugger like pdb, to verify that the write() statement actually ran, and that the variable you were writing actually contained the text you thought it did.
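As a sketch of how the write could sit inside the loop (the model list and the calc_mac helper below are made-up placeholders, not your actual functions):

models = ['AP-1', 'AP-2', 'AP-3']  # stand-in for your 97 model numbers

def calc_mac(model):
    return 'MAC for {}'.format(model)  # stand-in for your real calculation

with open('All_Possibilities.txt', 'a') as file_out:
    for model in models:
        # writing inside the loop appends one line per iteration,
        # instead of only the last value surviving until the single write
        file_out.write(calc_mac(model) + '\n')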
You have several loops that could be a one-liner using readlines().
Please do this:
$ pip install flake8
$ flake8 *.py
That is, please run the flake8 lint utility against your source code,
and follow the advice that it offers you.
In particular, it would be much better to name your identifier file than to name it File.
The initial capital letter means something to humans reading your code -- it is
used when naming classes, rather than local variables. Good luck!

Celery Task Not Registered Exception in Python 3.7, Ubuntu 16.04

This is what the celery configuration looks like:
from celery import Celery, group

celery = Celery('grouptest', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')
celery.conf.CELERY_TASK_SERIALIZER = 'pickle'
celery.conf.CELERY_RESULT_SERIALIZER = 'pickle'
celery.conf.CELERY_ACCEPT_CONTENT = {'json', 'pickle'}

@celery.task
def add(self, x, y):
    print('Executing with arguments', x, y)
    return x + y
I am starting the celery worker daemon with the command (in the same working directory)
$ celery worker -A grouptest -l info -c 5
in the terminal.
Next, from another process, I am calling the service like this:
from random import randint

bigList = [(randint(60, 90), randint(60, 90)) for _ in range(10)]
jobresult = group([add.s(celery, i[0], i[1]) for i in bigList]).apply_async()
# Basically adding ten pairs of random numbers
The funny thing is, only some of the tasks are executed, even after a long wait. For example, jobresult[0].result gives the sum of two numbers just fine, but jobresult[1].result says Task of kind 'grouptest.add' never registered, please make sure it's imported.
I am even checking whether jobresult.ready() is set to True in the Python REPL. Sometimes it throws an error, sometimes it does not, in the same REPL session. (After some observation, I guessed it passes from False to error to True.)
I am new to celery and following some templates, but how do I make sure the task is properly registered (whatever that means) and that all of the tasks are executed consistently? If my code were wrong, at least the errors would be consistent, wouldn't they?
Celery does not support Python 3.7 at the moment this comment is being written.
You might be able to use a workaround:
https://github.com/celery/celery/issues/4500
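For comparison, and only as a hedged sketch of what a conventionally registered task looks like (not a confirmed fix for the Python 3.7 issue above): the task function normally does not receive the app instance as an argument, and callers build signatures from the task itself.

# grouptest.py (sketch)
from celery import Celery

celery = Celery('grouptest',
                broker='redis://localhost:6379/0',
                backend='redis://localhost:6379/0')

@celery.task
def add(x, y):
    return x + y

# caller (sketch):
#   from celery import group
#   from grouptest import add
#   jobresult = group(add.s(a, b) for (a, b) in bigList).apply_async()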

How to run a string from an input file as python code?

I am creating something along the lines of a text adventure game. I have a .yaml file that is my input. This file looks something like this:
node_type:
    action
title:
    Do some stuff
info:
    This does some stuff and things
script:
    'print("hello world")
    print(ret_val)
    foo.bar(True)
    ret_val = (foo.bar() == True)
    if (thing):
        print(thing)
    print(ret_val)
    '
My end goal is to have my python program run the script portion of the yaml file exactly as if it had been copy pasted into the main code. (I know there are about ten bazillion security reasons I should not be running user input like this, but I am the only one writing these nodes, and the only one using this program so I'm mostly just ignoring this fact...)
Currently my attempt goes like this: I load my yaml file as a dict using pyyaml
node = yaml.safe_load(file.yaml)
Then I'm trying to use exec to run my code and hitting a lot of problems: I can't run if statements (I simply get a syntax error), and I can't get any sort of return value from my code. I've tried this as a workaround:
def main():
    ret_val = "test"
    thing = exec(node['script'], globals(), locals())
    print(ret_val)
which when run with the above .yaml file prints
>> hello world
>> test
>> True
>> test
for some reason not actually modifying any of my main variables, even though I fed them to exec.
Is there any way for me to work around these issues, or is there an altogether better way to be doing this?
One way of doing this would be to parse the code out and save it to a .py file, from which it can be imported dynamically, for example by importlib.
You might want to encapsulate the parsed code in a function, which you can then easily call to invoke your action. Also, it would make sense to specify some default imports there.
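A rough sketch of that suggestion, assuming the script text has already been read from the YAML node; the file name, the run_action wrapper, and its parameters are all made up for illustration:

import importlib.util
import textwrap

def write_action_module(script_text, path='generated_action.py'):
    # wrap the node's script in a function so it can be invoked on demand
    with open(path, 'w') as f:
        f.write('def run_action(foo, thing, ret_val=None):\n')
        f.write(textwrap.indent(script_text.strip(), '    ') + '\n')
        f.write('    return ret_val\n')
    return path

def load_action(path):
    # import the generated file dynamically
    spec = importlib.util.spec_from_file_location('generated_action', path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module.run_action

# usage (sketch):
#   run_action = load_action(write_action_module(node['script']))
#   result = run_action(foo, thing)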

Using nohup to help run a loop of python code while disconnecting from ssh

I'm looking for help running a Python script that takes some time to run.
It is a long-running process that takes about 2 hours per test observation. For example, these observations could be the 50 states of the USA.
I don't want to babysit this process all day; I'd like to kick it off and then drive home from work, or have it run while I'm sleeping.
Since this is a loop, I would need to call one Python script that loops through my code, going over each of the 50 states, and a second that runs my actual code that does things.
I've heard of nohup, but I have very limited knowledge. I saw nohup ipython mypython.py, but when I google I get a lot of other people chiming in with other methods, so I don't know what the ideal approach is for someone like me. Additionally, I am essentially looking to run this as a loop, so I don't know how that complicates things.
Please give me something simple and easy to understand. I don't know Linux all that well, or I wouldn't be asking, as this seems like a common sort of command/activity...
Basic example of my code:
Two files: code_file.py and loop_file.py
code_file.py does all the work. The loop file just passes in the list of things to run the stuff for.
code_file.py
output = each_state + ' needs some help!'
print(output)
loop_file.py
states = ['AL', 'CO', 'CA', 'NY', 'VA', 'TX']
for each_state in states:
    code_file.py
Regarding the loop: I have also heard that I can't pass in parameters or something via nohup? I can fix this part within my Python code... for example, reading from a CSV in my code, deleting the current record from that CSV file, and then re-writing it out; that way I can always select the top record in the CSV file for my loop (the list of states).
Maybe you could modify your loop_file.py like this:
import os

states = ['AL', 'CO', 'CA', 'NY', 'VA', 'TX']
for each_state in states:
    os.system("python /dir_of_your_code/code_file.py")
Then in a shell, you could run the loop_file.py with:
nohup python loop_file.py &  # the & is not strictly necessary; it just runs the command in the background. nohup is what keeps it running after you disconnect and redirects all output to a file named nohup.out instead of printing it on screen.
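On the concern about passing parameters: nohup itself does not stop you from passing arguments to a script. One hedged sketch, using sys.argv and subprocess.call instead of os.system so no shell quoting is needed (the path below is made up), could look like this:

# code_file.py (sketch): read the state from the command line
import sys

each_state = sys.argv[1]
output = each_state + ' needs some help!'
print(output)

# loop_file.py (sketch): pass each state as an argument
import subprocess

states = ['AL', 'CO', 'CA', 'NY', 'VA', 'TX']
for each_state in states:
    subprocess.call(['python', '/dir_of_your_code/code_file.py', each_state])

# the whole loop can then still be detached from the terminal with:
#   nohup python loop_file.py &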

How to generate a random value each time I run the Lua script

I have written a Lua script named "lua_rand_gen" which contains the following code:
function random_number_func()
    math.randomseed(os.time())
    return (math.random(100000000, 999999999))
end

print(random_number_func())
When I run the lua_rand_gen script in the terminal in a loop, the above function does not generate different random values, as shown:
for ((i=0;i<=5;i++)); do lua lua_rand_gen; done
717952767
717952767
717952767
717952767
717952767
717952767
I know this is because os.time() doesn't change within one second. So how can I get a random number in Lua if the time between runs of the script is less than 1 second?
Move math.randomseed(os.time()) outside the function.
It seems to be a common misconception that you need to call math.randomseed before each time you call math.random. This is wrong and will defeat the randomness of math.random, especially if you use os.time() as seed, since the seeds will be the same for a whole second.
