I have an input file where some variables are defined. For each iteration in a loop, I would like to read the file, update the values of some of the variables, then run calculations.
I have an input file called input.jl with
myval=1
Then I have a file myscript.jl with the following commands
for i = 1:2
    include("input.jl")
    println(myval)
    myval = 2
end
If I run the file (julia myscript.jl), I get an error that myval is not defined. If I comment out the third or fourth lines, then it runs with no problem. If I remove the for loop, the three lines run with no problem. How can I read myval from input.jl, use it, then update its value during each iteration of the loop?
Unfortunately, the include function executes the file's contents at global scope and then continues from where it left off. So if you're trying to dynamically include new variables into a local scope, this is not the way to do it.
You can either declare the variable at global scope first so that the loop body has access to it, in which case the assignment will work (but be aware that the variable will be updated at global scope),
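A minimal sketch of this first option (Julia 1.x syntax, assuming the loop runs at the top level of a script):

```julia
# main file: the name already exists at global scope before the loop runs
myval = 0
for i = 1:2
    include("input.jl")   # runs at global scope and sets the global myval to 1
    println(myval)        # reads the global
    global myval = 2      # `global` is required to assign to it from inside the loop
end
```

Note that in Julia 1.x, assigning to a global variable from inside a top-level `for` loop requires the explicit `global` keyword; without it, the assignment would create a new loop-local variable instead.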
or
you can wrap your input file in a module. You still need to refer to the variable by its qualified name, and you will get warnings about replacing the module on each iteration, but this way you can update your local variable dynamically without needing it at global scope:
# in input.jl
module Input
myval = 1
end

# in your main file
for i = 1:2
    include("input.jl")
    myval = Input.myval
    println(myval)
    myval = 2
end
or
you could add a separate process, offload the calculation to that process's global scope, and retrieve the result locally in your current process, e.g.
# in file input.jl
myval = 1

# in main file
using Distributed   # needed for addprocs / remotecall_fetch in Julia 1.0+
addprocs(1)
for i = 1:2
    myval = remotecall_fetch(() -> (global myval; include("input.jl"); myval), 2)
    println(myval)
    myval = 2
end
Related
I'm trying to use multi-threading (or better, multi-processing) in Julia. Using Base.Threads just made my application slower, so I wanted to try Distributed.
module Parallel
# ... includes ...
using Distributed
Distributed.@everywhere include("...jl")
# ... includes needed in the worker processes
export loop_inner
Distributed.@everywhere function loop_inner(parentValue, value, i, depth)
    ...
end
function langfordSequence(parentValue, depth)
    ...
    if depth < 4 && depth > 1
        futures = [@spawnat :any loop_inner(parentValue, value, i, depth) for i = 0:possibilites]
        return sum(fetch.(futures))
        # 2) PROBLEMATIC LINE ^^
    else
        return sum([loop_inner(parentValue, value, i, depth) for i = 0:possibilites])
        # 1) PROBLEMATIC LINE ^^
    end
end
end
But I run into
$ julia -L ...jl -L Parallel.jl main.jl -p 8
ERROR: LoadError: UndefVarError: loop_inner not defined (at the line marked "1) PROBLEMATIC LINE" in the code above)
I hope someone can tell me what I'm doing wrong.
If I change if depth < 4 && depth > 1 to if depth < 4, I get UndefVarError: Parallel not defined (at the line marked "2) PROBLEMATIC LINE" in the code above).
Thanks in advance.
It does not work because each worker process needs to separately load your module.
Your module should look like this:
module MyParallel
include("somefile.jl")
export loop_inner
function loop_inner(parentValue, value, i, depth)
end
end
Now you use it in the following way (this assumes that the module is inside your private package):
using Distributed
using Pkg
Pkg.activate(".") # or wherever is the package
using MyParallel # enforces module compilation, which should not occur in parallel
addprocs(4) # should always happen before any `@everywhere` macro; or use the `-p` command line option instead
@everywhere using Pkg, Distributed
@everywhere Pkg.activate(".")
@everywhere using MyParallel
Now you are ready to work with the MyParallel module.
EDIT
To make it more clear: the important goal of a module is to provide a common namespace for a set of functions, types, and global (module-level) variables. If you put distributed code inside a module, you break this design, because each Julia worker is a totally separate system process with its own memory and namespace. Hence the good design, in my opinion, is to keep all the do-the-work code inside the module and the distributed computation management outside the module. Perhaps in some scenarios one might want the distributed orchestration code in a second module, but normally it is more convenient to keep such code outside the module.
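As a sketch of that separation (file name and function arguments here are hypothetical placeholders; it assumes MyParallel.jl defines and exports loop_inner as above):

```julia
# run_parallel.jl - distributed orchestration kept outside the module
using Distributed
addprocs(4)                           # must come before any @everywhere

@everywhere include("MyParallel.jl")  # load the do-the-work module on every process
@everywhere using .MyParallel

# distribute the work; each worker calls the module's function locally
results = pmap(i -> loop_inner(0, 0, i, 2), 0:7)
println(sum(results))
```

All the computation lives in the module; the script only decides how to spread it across workers.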
I have a question. I noticed that when I use a variable to count the score of clicks on different objects, the score variable (whether global or local) keeps the value it has reached and continues counting from that point, even after I close and re-open the app and reset the variable to 0 with code (put 0 into _gScorePlayer). For example, when a user reaches a score of 15 and closes the app, the next time the score continues from 15, and so on.
I am a beginner in LiveCode.
Thanks for your continued help and support, guys :)
You can use the put command to clear the variable and the delete command to remove it from memory.
Check the LiveCode dictionary entry for delete variable.
Example:
delete local tVar
By default, declaring variables is optional in LiveCode.* Persistence of variable values is determined by whether the variable is declared outside of a handler or not. When a variable is only declared or used inside a handler, the variable is always temporary and its value is only valid while the handler is running.
The value of variables declared as local or global outside of a handler will persist between runs of the handler. However, the value of such variables will not persist between launches of LiveCode. That is, if you quit LiveCode and launch it again, the values of the declared variables will be lost. However, if you only close the stack without quitting LiveCode, the stack remains in memory (by default) and the values of declared variables remain intact.
If you want to ensure that the variable is reset when the stack is reopened, do this for declared globals in the stack script:
global gScorePlayer

on openStack
    put empty into gScorePlayer
    # OR
    put 0 into gScorePlayer
end openStack
To initialize local variables you do something similar in the script where the variable is used. For example, if you are using a local variable in a card script, you can do this in the card script:
local sMyLocalVar

on openCard
    put empty into sMyLocalVar # or: put 0 into sMyLocalVar
end openCard
*See the explicitVariables property in the Dictionary for more information about declaring variables.
The way I clear the content of a variable is:
delete variable VariableName
I am basically trying to save to data/${EPOCH_TIME}:
begin_unix_time: "J"$first system "date +%s"
\t:1 save `data/"string"$"begin_unix_time"
I am expecting it to save to data/1578377178
You do not need to cast the result of first system "date +%s" to a long in this case, since you want to attach one string to another. Instead you can use
begin_unix_time:first system "date +%s"
to store the string of numbers:
q)begin_unix_time
"1578377547"
q)`$"data/",begin_unix_time
`data/1578377547
Here you use the comma , to join one string to another; the cast `$ then converts the string to a symbol.
The keyword save saves global data to a file. Given your filepath, it looks like you're trying to save down a global variable named 1578377547, and kdb cannot handle variable names that are purely numeric.
You might want to try saving a variable named a1578377547 instead, for example. This would change the above line to
q)`$"data/a",begin_unix_time
`data/a1578377547
and your save would work correctly, given that the global variable a1578377547 exists. However, because you are sourcing the date (to the second) from Linux directly in the same line that does the save, this will likely not work, since the time is constantly changing!
Also note that the timer system command \t:n repeats the execution n times, meaning that the same variable will be saved down multiple times if the second does not change. For large n the time will likely change mid-run, and you won't have anything assigned to the new global variable you are trying to save once the second changes.
I'm trying to create a file and append all the content being calculated into that file, but when I run the script only the very last iteration is written to the file and nothing else.
My code is on Pastebin; it's too long, and I feel like you would have to see exactly how the iteration is happening.
To summarize: go through an array of model numbers; if the model number matches, call the function that calculates that MAC address; when done calculating, store all the content inside the file.
I have tried two possible routes and both have failed, giving the same result. There is no error in the code (it runs), but it just doesn't store the content into the file properly; there should be 97 different APs and it's storing only 1.
The difference between the first and second attempt:
1st attempt) I open/create the file at the beginning of the script and close it at the very end.
2nd attempt) I open/create the file and close it per iteration.
First Attempt:
https://pastebin.com/jCpLGMCK
#Beginning of code
File = open("All_Possibilities.txt", "a+")
#End of code
File.close()
Second Attempt:
https://pastebin.com/cVrXQaAT
#Per function
File = open("All_Possibilities.txt", "a+")
#per function
File.close()
If I'm not supposed to reference other websites, please let me know and I'll just paste the code in this post.
Rather than close(), please use with:
with open('All_Possibilities.txt', 'a') as file_out:
    file_out.write('some text\n')
The documentation explains that you don't need + to append writes to a file.
You may want to add some debugging console print() statements, or use a debugger like pdb, to verify that the write() statement actually ran, and that the variable you were writing actually contained the text you thought it did.
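For instance, here is a minimal sketch of the intended pattern; the model list and MAC function are hypothetical stand-ins for your real code:

```python
def mac_for_model(model):
    # hypothetical placeholder for the real MAC-address calculation
    return model + ":aa:bb:cc"

models = ["AP-100", "AP-200", "AP-300"]

# open the file once, but call write() inside the loop,
# so every iteration is recorded rather than only the last one
with open("All_Possibilities.txt", "a") as file_out:
    for model in models:
        file_out.write(mac_for_model(model) + "\n")
```

If only the last iteration appears in your file, the write call was probably outside the loop, so it ran once with whatever value the variable held after the final iteration.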
You have several loops that could be a one-liner using readlines().
Please do this:
$ pip install flake8
$ flake8 *.py
That is, please run the flake8 lint utility against your source code,
and follow the advice that it offers you.
In particular, it would be much better to name your identifier file than to name it File.
The initial capital letter means something to humans reading your code -- it is
used when naming classes, rather than local variables. Good luck!
I am using apache-airflow==1.10.0
I get errors that looks like this:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "variable" does not exist
LINE 2: FROM variable
When I declare tasks like:
from airflow.models import Variable
dag = DAG('dag')
PythonOperator('task_id', ratio=Variable.get('ratio'), dag=dag)
because I don't have a Variable table yet. I get errors that don't affect anything, but how can I prevent this from happening?
Run airflow upgradedb. It will create all the missing tables.
A workaround (in case airflow upgradedb doesn't work) is the following:
1. Remove all the calls to Variables in your DAG code; you can do this by putting all of them into a function set_variables() and commenting it out.
2. Start the Airflow webserver and create a dummy variable from the Variables UI; this will create the variable table.
3. Now you can uncomment set_variables(); your programmatic calls to Variables will work since the variable table exists.