I'm currently using the Atom editor to work with Julia 0.5 and somehow fail to make functions available to my worker processes. Here is my test file test.jl:
module testie
export t1
function t1()
    a = rand()
    println("a is $a, thread $(myid())")
    return a
end
end
if nprocs() < 2
    addprocs(1)
end
@everywhere println("Hi")
using testie
t1()
println(remotecall_fetch(t1,2))
Executing this file, I get a "Hi" from both the master and the worker, and the master also prints the "a is ..." line. The worker does not, and the remotecall_fetch line throws the following error (shortened):
LoadError: On worker 2:
UndefVarError: testie not defined
http://docs.julialang.org/en/release-0.5/manual/parallel-computing/ states: "using DummyModule causes the module to be loaded on all processes; however, the module is brought into scope only on the one executing the statement." Beyond that, I could not see how to solve this situation. I tried adding an @everywhere before the using line, and also tried adding an @everywhere include("test.jl") right before it. Neither helped. This should be really simple, but I can't figure it out.
On SO I only found "Julia parallel programming - Making existing function available to all workers", but it doesn't really answer this for me.
If you are importing a module yourself with include, then you need to tell Julia that you want to use t1 from that module by prefixing it with testie.t1.
Try this
if nprocs() < 2
    addprocs(1)
end
@everywhere include("testie.jl")
println(remotecall_fetch(testie.t1, 2)) # NB: prefix here
where testie.jl is:
module testie
export t1
function t1()
    a = rand()
    println("a is $a, thread $(myid())")
    return a
end
end
Related
I'm trying to use multi-threading (or better, multi-processing) in Julia. Just using Base.Threads made my application slower, so I wanted to try Distributed.
module Parallel
# ... includes ...
using Distributed
@Distributed.everywhere include("...jl")
# ... includes needed in the worker processes
export loop_inner
@Distributed.everywhere function loop_inner(parentValue, value, i, depth)
    ...
end
function langfordSequence(parentValue, depth)
    ...
    if depth < 4 && depth > 1
        futures = [@spawnat :any loop_inner(parentValue, value, i, depth) for i = 0:possibilites]
        return sum(fetch.(futures))
        # 2) PROBLEMATIC LINE ^^
    else
        return sum([loop_inner(parentValue, value, i, depth) for i = 0:possibilites])
        # 1) PROBLEMATIC LINE ^^
    end
end
end
But I run into:
$ julia -L ...jl -L Parallel.jl main.jl -p 8
ERROR: LoadError: UndefVarError: loop_inner not defined (at the line marked "1) PROBLEMATIC LINE ^^" in the code above)
I hope someone can tell me what I'm doing wrong.
If I change if depth < 4 && depth > 1 to if depth < 4, I get UndefVarError: Parallel not defined (at the line marked "2) PROBLEMATIC LINE ^^" in the code above).
Thanks in advance.
It does not work because each worker process needs to load your module separately.
Your module should look like this:
module MyParallel

include("somefile.jl")
export loop_inner

function loop_inner(parentValue, value, i, depth)
end

end
Now you use it in the following way (this assumes that the module lives inside your own package):
using Distributed
using Pkg
Pkg.activate(".") # or wherever the package is
using MyParallel # forces module compilation, which should not occur in parallel
addprocs(4) # should always happen before any `@everywhere` macro; or use the `-p` command line option instead
@everywhere using Pkg, Distributed
@everywhere Pkg.activate(".")
@everywhere using MyParallel
Now you are ready to work with the MyParallel module.
EDIT
To make it clearer: the important goal of a module is to provide a common namespace for a set of functions, types and global (module-level) variables. If you put any distributed code inside a module, you break this design, because each Julia worker is a totally separate system process with its own memory and namespace. Hence the good design, in my opinion, is to keep all the do-the-work code inside the module and the distributed computation management outside the module. In some scenarios one might want the distributed orchestration code to be in a second module, but normally it is just more convenient to keep such code outside of the module.
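As a rough analogy outside Julia, the same rule shows up in Python's multiprocessing (sketched below with invented names, not taken from the question): each worker process must be able to import the do-the-work function by name, while the orchestration stays at the top level of the program.

```python
from multiprocessing import Pool

# "do-the-work" code: in a real project this would live in its own module,
# so that every worker process can import loop_inner by name.
def loop_inner(i):
    return i * i

# distributed-orchestration code: kept outside the worker code, at the top
# level of the program (guarded so child processes do not re-run it).
if __name__ == "__main__":
    with Pool(2) as pool:
        print(sum(pool.map(loop_inner, range(5))))  # 0+1+4+9+16 = 30
```

The `__main__` guard plays the role of "orchestration outside the module": worker processes re-import the definitions but never re-run the pool setup.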
As you know, $fdisplay can print information to a file. But if you instantiate a module (like a BFM, a bus functional model) several times in the test bench and each instance calls $fdisplay, a problem can occur: simultaneous access to one file.
In my experience that issue produces a messy, interleaved output file.
So how can I achieve my goal?
Python-Equivalent of my question is here.
P.S. The simulator's console limits how much output it can hold, and my logs are fairly long, so I need to print them to a file. Also, merging all the Verilog code into one file is not possible at all (think of how BFM models are structured).
If you want all the output from your BFMs to go into a single file, your problem is not with $fdisplay but with $fopen. You need to create a top-level function that calls $fopen only if it has not been called before:
integer file = 0;

function integer bfm_fopen;
  begin
    if (file)
      bfm_fopen = file;
    else begin
      file = $fopen("logfile");
      bfm_fopen = file;
    end
  end
endfunction
Then call top_level.bfm_fopen from your BFMs.
I am creating something like a text adventure game. I have a .yaml file as my input. This file looks something like this:
node_type:
  action
title:
  Do some stuff
info:
  This does some stuff and things
script:
  'print("hello world")
  print(ret_val)
  foo.bar(True)
  ret_val = (foo.bar() == True)
  if (thing):
    print(thing)
  print(ret_val)
  '
My end goal is to have my Python program run the script portion of the YAML file exactly as if it had been copy-pasted into the main code. (I know there are about ten bazillion security reasons I should not be running user input like this, but I am the only one writing these nodes and the only one using this program, so I'm mostly just ignoring this fact...)
Currently my attempt goes like this: I load my YAML file as a dict using PyYAML:
node = yaml.safe_load(open("file.yaml"))
Then I'm trying to use exec to run my code and hitting a lot of problems: I can't run if statements without getting a syntax error, and I can't get any sort of return value from my code. I've tried this as a workaround:
def main():
    ret_val = "test"
    thing = exec(node['script'], globals(), locals())
    print(ret_val)
which when run with the above .yaml file prints
>> hello world
>> test
>> True
>> test
which for some reason does not actually modify any of my main variables, even though I fed them to exec.
Is there any way for me to work around these issues, or is there an altogether better way to be doing this?
One way of doing this would be to parse the code out and save it to a .py file, from which it can be imported dynamically, for example with importlib.
You might want to encapsulate the parsed code in a function, which you can then easily call to invoke your action. It would also make sense to specify some default imports there.
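A minimal sketch of that idea (the module name "action_node", the function name "run", and the parameters "foo"/"thing" are all invented here, not part of the question): the node's script is wrapped in a function, written to a .py file, and loaded with importlib; the function returns ret_val instead of trying to mutate the caller's locals.

```python
import importlib.util
import os
import tempfile
import textwrap

def load_action(script_text, path=None):
    # Wrap the node's script in a function so it can be called on demand and
    # can hand back a value instead of mutating the caller's local variables.
    path = path or os.path.join(tempfile.gettempdir(), "action_node.py")
    src = "def run(foo=None, thing=None):\n"
    src += textwrap.indent(textwrap.dedent(script_text), "    ")
    src += "\n    return locals().get('ret_val')\n"
    with open(path, "w") as f:
        f.write(src)
    # Standard importlib recipe for importing a module from a file path.
    spec = importlib.util.spec_from_file_location("action_node", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod.run

run = load_action("print('hello world')\nret_val = 40 + 2")
print(run())  # prints "hello world", then 42
```

Because the script now runs inside a real function, if statements compile normally and the "return value" problem disappears: you read it from the function's result rather than hoping exec writes back into your locals.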
I have an input file where some variables are defined. For each iteration in a loop, I would like to read the file, update the values of some of the variables, then run calculations.
I have an input file called input.jl with
myval=1
Then I have a file myscript.jl with the following commands
for i = 1:2
    include("input.jl")
    println(myval)
    myval = 2
end
If I run the file (julia myscript.jl), I get an error that myval is not defined. If I comment out either the third or the fourth line, it runs with no problem. If I remove the for loop, the three lines run with no problem. How can I read myval from input.jl, use it, and then update its value during each iteration of the loop?
Unfortunately, it seems that the include function evaluates the file at global scope and then continues from where it left off. So if you're trying to dynamically pull new variables into local scope, this is not the way to do it.
You can either introduce the variable at global scope first, so that the loop has access to it and the assignment works (but be aware that the variable will then be updated at global scope),
or
you can cheat by wrapping your input file in a module first. You still need to refer to the variable by its qualified name, and you will get warnings about replacing the module, but this way you can at least update your local variable dynamically, without needing that variable at global scope:
# in input.jl
module Input
myval = 1
end

# in your main file
for i = 1:2
    include("input.jl")
    myval = Input.myval
    println(myval)
    myval = 2
end
or
you could add a separate process, offload the evaluation to its global scope, and fetch the result back to your current process locally, e.g.
# in file input.jl
myval = 1

# in main file
addprocs(1)
for i = 1:2
    myval = remotecall_fetch(() -> (global myval; include("input.jl"); myval), 2)
    println(myval)
    myval = 2
end
I'm looking for something similar to .NET's String.Format in a Chef recipe, i.e.
string phone = String.format("phone: {0}",_phone);
I have a Chef recipe where I need to build up a command string with 30 of these params, so I'm hoping for a tidy way to build the string. In principle I'm doing this:
a = node['some_var'].to_s

ruby_block "run command" do
  block do
    cmd = shell_out!("node mycommand.js #{a}; exit 2;")
  end
end
When I try this I get the error:
Arguments to path.join must be strings
Any tips appreciated.
Chef runs in two phases: compile and execute (see https://www.chef.io/blog/2013/09/04/demystifying-common-idioms-in-chef-recipes/ for more details).
Your variable assignment to a happens at compile time, i.e. when Chef loads all recipes. The ruby block will be executed in execute mode at converge time and cannot access the variable a.
So the easiest solution might be to put the attribute into the ruby block:
ruby_block "run command with argument #{node['some_var']}" do
  block do
    shell_out!("node mycommand.js #{node['some_var']}")
  end
end
However:
If you don't need to execute Ruby code, consider using the execute or bash resource instead.
Keep in mind that you must have a unique resource name if you're building some kind of loop around it. An easy way is to put something unique into the name: ruby_block "something unique per loop iteration" do ... end
What I really don't understand is your exit code 2. This is an error code, and it will make Chef throw an exception every time (shell_out! throws an exception if the exit code != 0; see https://github.com/chef/chef/blob/master/lib/chef/mixin/shell_out.rb#L24-L28).
The resource will be executed on every Chef run, which is probably not what you want. Consider adding a guard (a test) to prevent unnecessary execution; see https://docs.chef.io/resource_common.html#guards
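As for the original String.Format question: Ruby's format (alias sprintf) and plain string interpolation cover it, and building the 30 arguments from a hash keeps the command line tidy. A sketch (the parameter names and values here are invented; in a recipe they would come from node attributes such as node['some_var']):

```ruby
# Ruby's equivalent of C#'s String.Format("phone: {0}", _phone):
phone = '555-0100'
line  = format('phone: %s', phone)  # or simply "phone: #{phone}"

# With ~30 parameters, mapping over a hash avoids one huge string literal.
params = {
  'host' => 'db1.example.com',
  'port' => 5432,
  'user' => 'app'
}
args = params.map { |k, v| format('--%s=%s', k, v) }.join(' ')
cmd  = "node mycommand.js #{args}"
puts cmd  # node mycommand.js --host=db1.example.com --port=5432 --user=app
```

Inside a recipe, the same cmd string can then be handed to shell_out! or an execute resource.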