I have code that is structurally similar to the following in Matlab:
bestConfiguration = 0;
bestConfAwesomeness = 0;
for i=1:X
    % note that providing bestConfAwesomeness to the function helps it stop
    % if it sees that the current configuration is getting hopeless anyway
    [configuration, awesomeness] = expensive_function(i, bestConfAwesomeness);
    if awesomeness > bestConfAwesomeness
        bestConfAwesomeness = awesomeness;
        bestConfiguration = configuration;
    end
end
There is a bit more to it but the basic structure is the above. X can get very large. I am trying to make this code run in parallel, since expensive_function() takes a long time to run.
The problem is that Matlab won't let me just change for to parfor because it doesn't like that I'm updating the best configuration in the loop.
So far what I've done is:
[allConfigurations, allAwesomeness] = deal(cell(1, X));
parfor i=1:X
    % note that this is not ideal because I am forced to use 0 as the
    % best awesomeness in all cases
    [allConfigurations{i}, allAwesomeness{i}] = expensive_function(i, 0);
end
bestConfiguration = 0;
bestConfAwesomeness = 0;
for i=1:X
    configuration = allConfigurations{i};
    awesomeness = allAwesomeness{i};
    if awesomeness > bestConfAwesomeness
        bestConfAwesomeness = awesomeness;
        bestConfiguration = configuration;
    end
end
This is better in terms of running time; however, for large inputs it takes huge amounts of memory because all the configurations are saved. Another problem is that using parfor forces me to always pass 0 as the best awesomeness, even though better values may already be known.
Does Matlab provide a better way of doing this?
Basically, if I didn't have to use Matlab and could manage the threads myself, I'd have one central thread that hands jobs to workers (i.e. makes them run expensive_function(i)); once a worker returned, the central thread would compare the data it produced to the best found so far and update accordingly. There would be no need to save all the configurations, which seems to be the only way to make parfor work.
Is there a way to do the above in Matlab?
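To make that concrete, here is the kind of dispatch loop I am imagining, sketched with parfeval and fetchNext (assuming a release that has them, R2013b or later). Note that even this sketch still has to pass 0 as the best awesomeness, since jobs already submitted cannot see later improvements:
% Sketch of the dispatcher pattern: results are handled as each worker
% finishes, so only the best configuration is kept in memory.
pool = gcp;
futures(1:X) = parallel.FevalFuture;   % preallocate the futures array
for i = 1:X
    futures(i) = parfeval(pool, @expensive_function, 2, i, 0);
end
bestConfiguration = 0;
bestConfAwesomeness = 0;
for j = 1:X
    % fetchNext blocks until the next unread future completes
    [~, configuration, awesomeness] = fetchNext(futures);
    if awesomeness > bestConfAwesomeness
        bestConfAwesomeness = awesomeness;
        bestConfiguration = configuration;
    end
end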
Using bestConfAwesomeness each time round the loop means that the iterations of your loop are not order-independent, which is why PARFOR is unhappy. One approach you could take is to use SPMD and have each worker perform expensiveFunction in parallel, and then communicate to update bestConfAwesomeness. Something like this:
bestConfiguration = 0;
bestConfAwesomeness = 0;
spmd
    for idx = 1:ceil(X/numlabs)
        myIdx = labindex + ((idx-1) * numlabs);
        if myIdx <= X
            [thisConf, thisAwesome] = expensiveFunction(myIdx, bestConfAwesomeness);
        else
            % past the end of 1:X on the final pass; contribute a no-op
            % entry so every worker still takes part in the reduction
            thisConf = bestConfiguration;
            thisAwesome = -Inf;
        end
        % Now, we must communicate to see who is best
        [bestConfiguration, bestConfAwesomeness] = reduceAwesomeness(...
            bestConfiguration, bestConfAwesomeness, thisConf, thisAwesome);
    end
end
function [bestConf, bestConfAwesome] = reduceAwesomeness(...
        bestConf, bestConfAwesome, thisConf, thisAwesome)
% slightly lazy way of doing this, could be optimized,
% but probably not worth it if conf & awesome are both scalars.
allConfs = gcat(thisConf);     % gather this round's results from all workers
allAwesome = gcat(thisAwesome);
[maxThisTime, maxLoc] = max(allAwesome);
if maxThisTime > bestConfAwesome
    bestConfAwesome = maxThisTime;
    bestConf = allConfs(maxLoc);
end
end
I'm not sure that that kind of control over your threads is possible in Matlab. However, since X is very large, it may be worth doing the following, which costs you one more call to expensiveFunction:
% calculate awesomeness
awesomeness = zeros(1, X); % preallocate
parfor i=1:X
    [~, awesomeness(i)] = expensiveFunction(i, 0);
end
% find the most awesome i
[mostAwesome, mostAwesomeIdx] = max(awesomeness);
% get the corresponding configuration
bestConfiguration = expensiveFunction(mostAwesomeIdx, 0);
It's my second day learning and experimenting with Julia. Although I read the documentation on metaprogramming carefully (but maybe not carefully enough), as well as several similar threads, I still can't figure out how to use it inside a function.
I tried to make the following function, which simulates some data, more flexible:
using Distributions

function gendata(N,NLATENT,NITEMS)
    latent = repeat(rand(Normal(6,2),N,NLATENT), inner=(1,NITEMS))
    errors = rand(Normal(0,1),N,NLATENT*NITEMS)
    x = latent+errors
end
By doing this:
using Distributions

function gendata(N,NLATENT,NITEMS,LATENT_DIST="Normal(0,1)",ERRORS_DIST="Normal(0,1)")
    to_eval_latent = parse("latent = repeat(rand($LATENT_DIST,N,NLATENT), inner=(1,NITEMS))")
    eval(to_eval_latent)
    to_eval_errors = parse("errors = rand($ERRORS_DIST,N,NLATENT*NITEMS)")
    eval(to_eval_errors)
    x = latent+errors
end
But since eval doesn't work on the local scope, it doesn't work. What can I do to work around this?
Also, the original function doesn't seem to be that fast; did I make any major mistakes concerning performance?
I really appreciate any recommendation.
Thanks in advance.
There is no need to use eval there; you can retain the same flexibility by passing the distribution types as keyword args (or optional positional args with default values). Parsing and eval'ing "stringly-typed" arguments will often defeat optimizations and should be avoided.
function gendata(N,NLATENT,NITEMS; LATENT_DIST=Normal(0,1),ERRORS_DIST=Normal(0,1))
    latent = repeat(rand(LATENT_DIST,N,NLATENT), inner=(1,NITEMS))
    errors = rand(ERRORS_DIST,N,NLATENT*NITEMS)
    x = latent+errors
end
julia> gendata(10,2,3, LATENT_DIST=Pareto(.3))
...
julia> gendata(10,2,3, ERRORS_DIST=Gamma(.6))
...
etc.
You're not really supposed to use eval here (slower, won't produce type information, will interfere with compilation, etc) but in case you're trying to understand what went wrong, here's how you would do it:
Either separate it from the rest of the code:
function gendata(N,NLATENT,NITEMS,LDIST_EX="Normal(0,1)",EDIST_EX="Normal(0,1)")
    # Eval your expressions separately
    LATENT_DIST = eval(parse(LDIST_EX))
    ERRORS_DIST = eval(parse(EDIST_EX))
    # Do your thing
    latent = repeat(rand(LATENT_DIST,N,NLATENT), inner=(1,NITEMS))
    errors = rand(ERRORS_DIST,N,NLATENT*NITEMS)
    x = latent+errors
end
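Hypothetical usage, with the distributions now passed as strings:
julia> gendata(10,2,3, "Pareto(.3)", "Gamma(.6)")
...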
Or use interpolation with quoted expressions:
function gendata(N,NLATENT,NITEMS,LDIST_EX="Normal(0,1)",EDIST_EX="Normal(0,1)")
    # Obtain expression objects
    LATENT_DIST = parse(LDIST_EX)
    ERRORS_DIST = parse(EDIST_EX)
    # Eval, but interpolate in everything that's local to the function.
    # And since you can't introduce local variables with eval, keep the
    # assignments out of it.
    latent = eval( :(repeat(rand($LATENT_DIST,$N,$NLATENT), inner=(1,$NITEMS))) )
    errors = eval( :(rand($ERRORS_DIST, $N, $NLATENT*$NITEMS)) )
    x = latent+errors
end
You can also use a single eval with a let block to introduce a self-contained scope:
function gendata(N,NLATENT,NITEMS,LDIST_EX="Normal(0,1)",EDIST_EX="Normal(0,1)")
    LATENT_DIST = parse(LDIST_EX)
    ERRORS_DIST = parse(EDIST_EX)
    x = @eval let
        latent = repeat(rand($LATENT_DIST,$N,$NLATENT), inner=(1,$NITEMS))
        errors = rand($ERRORS_DIST, $N, $NLATENT*$NITEMS)
        latent+errors
    end
end
# (@eval x) is the same as eval(:(x))
Well, hope you understand the eval thing a little better. And I mean, it's day two, you should be experimenting ;)
I have a function (a convolution) which can get very slow when it operates on matrices with many columns (function code below). I therefore want to parallelize the code.
Example MATLAB code:
x = zeros(1,100);
x(rand(1,100)>0.8) = 1;
x = x(:);
c = convContinuous(1:100,x,@(t,p)p(1)*exp(-(t-p(2)).*(t-p(2))./(2*p(3).*p(3))),[1,0,3],false)
plot(1:100,x,1:100,c)
If x is a matrix with many columns, the code gets very slow... My first attempt was to change the for statement to parfor, but it went wrong (see Concluding remarks below).
My second attempt was to follow this example, which shows how to schedule tasks in a job and then submit the job to a local server. That example is implemented in my function below by setting the last argument, isParallel, to true.
The example MATLAB code would be:
x = zeros(1,100);
x(rand(1,100)>0.8) = 1;
x = x(:);
c = convContinuous(1:100,x,@(t,p)p(1)*exp(-(t-p(2)).*(t-p(2))./(2*p(3).*p(3))),[1,0,3],true)
Now, MATLAB tells me:
Starting parallel pool (parpool) using the 'local' profile ... connected to 4 workers.
Warning: This job will remain queued until the Parallel Pool is closed.
And the MATLAB terminal hangs, waiting for something to finish. I then open the Job Monitor via Home -> Parallel -> Monitor Jobs and see there are two jobs, one of which has the state running. But neither of them ever finishes.
Questions
Why does it take so long to run, given that it is a really simple task?
What would be the best way to parallelize my function below? (The "heavy" part is in the separate function convolveSeries.)
File convContinuous.m
function res = convContinuous(tData, sData, smoothFun, par, isParallel)
% performs the convolution of a series of deltas with a smooth function of parameters par
% tData     = temporal space
% sData     = matrix of delta series (each column is a different series
%             that will be convolved with smoothFun)
% smoothFun = function used to convolve with each column of sData;
%             must be of the form smoothFun(t, par)
% par       = parameters of the smoothing function
if nargin < 5 || isempty(isParallel)
    isParallel = false;
end
if isvector(sData)
    [mm,nn] = size(sData);
    sData = sData(:);
end
res = zeros(size(sData));
[ ~, n ] = size(sData);
if ~isParallel
    %parfor i = 1:n % uncomment this and comment the line below for the strange error
    for i = 1:n
        res(:,i) = convolveSeries(tData, sData(:,i), smoothFun, par);
    end
else
    myPool = gcp; % creates a parallel pool if needed
    sched = parcluster; % creates the scheduler
    job = createJob(sched);
    task = cell(1,n);
    for i = 1:n
        task{i} = createTask(job, @convolveSeries, 1, {tData, sData(:,i), smoothFun, par});
    end
    submit(job);
    wait(job);
    jobRes = fetchOutputs(job);
    for i = 1:n
        res(:,i) = jobRes{i,1}(:);
    end
    delete(job);
end
if isvector(sData)
    res = reshape(res, mm, nn);
end
end
function r = convolveSeries(tData, s, smoothFun, par)
r = zeros(size(s));
tSpk = s == 1; % logical index of the deltas (spikes)
j = 1;
for t = tData
    for tt = tData(tSpk)
        if (tt > t)
            break;
        end
        r(j) = r(j) + smoothFun(t - tt, par);
    end
    j = j + 1;
end
end
Concluding remarks
As a side note, I was not able to do it using parfor because MATLAB R2015a gave me a strange error:
Error using matlabpool (line 27)
matlabpool has been removed.
To query the size of an already started parallel pool, query the 'NumWorkers' property of the pool.
To check if a pool is already started use 'isempty(gcp('nocreate'))'.
Error in parallel_function (line 317)
Nworkers = matlabpool('size');
Error in convContinuous (line 18)
parfor i = 1:n
My version command outputs
Parallel Computing Toolbox Version 6.6 (R2015a)
which matches my MATLAB version. Almost all the other tests I have done pass, so I am led to think that this is a MATLAB bug.
I tried changing matlabpool to gcp and then retrieving the number of workers by parPoolObj.NumWorkers, and after altering this detail in two different built-in functions, I received another error:
Error in convContinuous>makeF%1/F% (line 1)
function res = convContinuous(tData, sData, smoothFun, par)
Output argument "res" (and maybe others) not assigned during call to "convContinuous>makeF%1/F%".
Error in parallel_function>iParFun (line 383)
output.data = processInfo.fun(input.base, input.limit, input.data);
Error in parProcess (line 167)
data = processFunc(processInfo, data);
Error in parallel_function (line 358)
stateInfo = parProcess(#iParFun, #iConsume, #iSupply, ...
Error in convContinuous (line 14)
parfor i = 1:numel(sData(1,:))
I suspect that this last error is generated because the function call inside the parfor loop requires many arguments, but I am not sure.
Solving the errors
Thanks to the wary comments of people here (saying they could not reproduce my errors), I went looking for the source of the error. I realized it was a local problem caused by having pforfun, which I downloaded long ago from the File Exchange, in my pathdef.m.
Once I removed pforfun from my pathdef.m, parfor (line 18 in the convContinuous function) started working well.
Thank you in advance!
The parallel pool you created is blocking your job from running. When you are using the jobs and tasks API you do not need (and must not have) a pool open. When you looked in the Job Monitor, the running job you saw was the job that backs the parallel pool, which only finishes when the pool is deleted.
If you delete the line in convContinuous that says myPool = gcp, then it should work. As an optimization, you can use the vectorized form of createTask, which is much more efficient than creating tasks in a loop, i.e.:
inputCell = cell(1, n);
for i = 1:n
    inputCell{i} = {tData, sData(:,i), smoothFun, par};
end
task = createTask(job, @convolveSeries, 1, inputCell);
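If a pool was already started by an earlier run, close it before submitting the job; gcp('nocreate') returns the current pool without starting a new one:
% Close any existing pool so the queued job can actually run;
% this is a no-op when no pool is open.
delete(gcp('nocreate'));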
However, having said all that, you should be able to make this code work using parfor. The first error you encountered was due to matlabpool being removed: it has been replaced by parpool.
The second error appears to be caused by your function not returning the correct outputs, but the error message does not appear to correspond to the code you posted, so I'm not sure. Specifically I don't know what convContinuous>makeF%1/F% (line 1) refers to.
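For reference, the parfor form is just the commented-out loop from the question; it works because res(:,i) is a sliced output variable, which parfor handles without complaint:
res = zeros(size(sData));
parfor i = 1:n
    % each iteration writes only its own column, so parfor accepts it
    res(:,i) = convolveSeries(tData, sData(:,i), smoothFun, par);
end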
I am trying to write a program to analyze data from a simulation. Since the simulation software I am using is what is running the Lua program, I am not sure if this is the right place to ask this question, but I am probably making a programming error.
I am struggling with the difference between using the simple and complete I/O models. I have a block of code, which works, and looks like this:
io.output([[filename_and_location]])

function segment.other_actions()
    if ion_splat ~= 0 then io.write(ion_px_mm, "\n") end
    io.close()
end
Note: ion_splat and ion_px_mm are pre-determined variables that take on number values. This code is run over and over again throughout the simulation.
Then I decided to try achieving the same thing using the complete I/O model like this:
f = io.open([[file_name_and_location]],"w")

function segment.other_actions()
    if ion_splat ~= 0 then f:write(ion_py_mm, "\n") end
    f:close()
end
This runs, but takes a lot longer than the other way. Why is that?
Example 1:
for i = 1, 1000 do
    io.output("test.txt")
    io.write("some data to be written\n")
    io.close()
end
Example 2:
for i = 1, 1000 do
    local f = io.open("test.txt", "w")
    f:write("some data to be written\n")
    f:close()
end
There is no measurable difference in the execution time.
The latter approach is usually preferable because the file being used is identified explicitly.
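If the file needs to stay open across many calls (as with segment.other_actions in the question), a sketch of the open-once pattern; segment.terminate here is a hypothetical end-of-run hook standing in for whatever the simulator provides:
local f = io.open("test.txt", "w")   -- open once and keep the handle

function segment.other_actions()
    -- write on every call without reopening or closing the file
    if ion_splat ~= 0 then f:write(ion_px_mm, "\n") end
end

function segment.terminate()         -- hypothetical end-of-run hook
    f:close()                        -- close exactly once, at the end
end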
Let's assume that I want to create 10 variables which would look like this:
x1 = 1;
x2 = 2;
x3 = 3;
x4 = 4;
.
.
xi = i;
This is a simplified version of what I intend to do. Basically I just want to save lines of code by creating these variables in an automated way. Is it possible to construct a variable name in Matlab? The pattern in my example would be ["x", num2str(i)], but I can't find a way to create a variable with that name.
You can do it with eval but you really should not
eval(['x', num2str(i), ' = ', num2str(i)]); %//Not recommended
Rather use a cell array:
x{i} = i
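For example, the ten variables from the question collapse into one loop:
x = cell(1, 10);   % preallocate the cell array
for i = 1:10
    x{i} = i;      % x{3} retrieves what would have been x3
end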
I also strongly advise using a cell array or a struct for such cases. I think it will even give you some performance boost.
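A minimal sketch of the struct alternative, using dynamic field names (the s.(name) syntax):
s = struct();
for i = 1:10
    s.(sprintf('x%d', i)) = i;   % dynamic field name: s.x3 == 3
end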
If you really need to do it, Dan showed how. But I would also like to point to the genvarname function, which makes sure your string is a valid variable name.
EDIT: genvarname is part of core MATLAB, not of the Statistics Toolbox.
for k=1:10
    % create x1, ..., x10 directly in the base workspace
    assignin('base', ['x' num2str(k)], k)
end
Although it is long overdue, I just wanted to add another answer.
The function genvarname is exactly for these cases, and if you use it with a temporary structure array you do not need the eval command.
Example 4 from this link shows how to do it: http://www.mathworks.co.uk/help/matlab/ref/genvarname.html
for k = 1:5
    t = clock;
    pause(uint8(rand * 10));
    v = genvarname('time_elapsed', who);
    eval([v ' = etime(clock,t)'])
end
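(Note: in newer MATLAB releases genvarname is no longer recommended; matlab.lang.makeValidName serves the same purpose.)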
All the best,
Eyal
If anyone else is interested, the correct syntax from Dan's answer would be:
eval(['x', num2str(i), ' = ', num2str(i)]);
My question already contained the wrong syntax, so it's my fault.
I needed something like this since you cannot reference structs (or cell arrays, I presume) from the workspace in Simulink blocks if you want to be able to change them during the simulation. Anyway, for me this worked best:
assignin('base',['string' 'parts'],values);
Here's my program.
local t = {}
local match = string.gmatch
local insert = table.insert
val = io.read("*a")
for num in match(val, "%d+") do
    insert(t, num)
end
I'm wondering if there is a faster way to load a large (16MB+) array of integers than this. Considering the data is composed of line after line of a single number can this be made faster? Should I be looking at io.read("*n") instead?
Given that your file size is 16MB, your loading routine's performance will be dominated by file IO. How long it takes you to process the loaded data will generally be irrelevant next to that.
Just try it; profile how long it takes to just load the file (stopping the script after io.read), then profile how long the whole script takes. The latter will be longer, but it's only going to be by some relatively small percentage, not vast amounts.
Loading the whole file at once the way you're doing will almost certainly be faster than doing it piecemeal. Filesystems like reading entire blocks of data all at once, rather than bits at a time. Beyond that, how to process the text is relatively irrelevant.
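For example, a sketch that keeps the single whole-file read but stores numbers rather than strings:
local t = {}
local data = io.read("*a")               -- one big read of the whole file
for num in string.gmatch(data, "%d+") do
    t[#t + 1] = tonumber(num)            -- convert each match to a number
end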
I'm not sure if it's faster, but read("*n") is much simpler...
local t = { }
while true do
    local n = io.stdin:read("*n")
    if n == nil then break end
    table.insert(t, n)
end
Probably, this would be faster:
local t = {}
local match = string.match
for line in io.lines() do
    t[#t+1] = match(line, '%d+')
end
Don't forget to convert strings to numbers.
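For example, the assignment inside the loop becomes:
t[#t+1] = tonumber(match(line, '%d+'))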