Is there any method to make a function non-blocking in MATLAB?
For example a program
for t = 0 : 1 : 1000
    if mod(t, 100) == 0
        foo();
    end
end
This calls foo() every 100 cycles, and foo() takes about 50 cycles to run.
I want to call foo() in a background task and call a certain callback function when it completes. Is there any method to implement this in MATLAB?
To run code in the background in MATLAB you can use the "batch" command (you must have the Parallel Computing Toolbox).
Here is an example:
Suppose I want to run a script in MATLAB that takes a long time, for example:
for i = 1:1e8
    A(i) = sin(i*2*pi/1e8);
end
I saved this script as "da". Then, to run it in batch mode, I wrote this code in MATLAB:
job = batch('da')
The job then runs in the background, and you can keep using MATLAB at the same time.
To retrieve the results after the job finishes, you can simply write:
load(job, 'A')
and the resulting array A will be in your workspace.
You can open the Job Monitor GUI from Home > Environment > Parallel > Monitor Jobs.
Finally, you can delete the job with this simple line:
delete(job)
To submit a function (rather than a script) for batch processing, you can simply use this statement:
j = batch(fcn, N, {x1, x2, ..., xn})
where fcn is your function name, N is the number of output arguments, and x1, ..., xn are the function's input arguments.
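For comparison outside MATLAB, the background-task-plus-callback pattern the question asks for can be sketched in Python with concurrent.futures. The names foo and on_complete below are illustrative stand-ins, not part of the original question:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def foo():
    # stand-in for the long-running work (the question's foo())
    time.sleep(0.1)
    return "done"

results = []
def on_complete(future):
    # callback invoked when the background task finishes
    results.append(future.result())

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(foo)          # runs foo() in a background thread
future.add_done_callback(on_complete)  # register the completion callback
executor.shutdown(wait=True)           # the main loop could keep working here instead
print(results)  # ['done']
```

The main thread is free to continue its own loop between submit and shutdown; the callback fires as soon as foo() returns.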
I made a dictionary switcher as follows:
switcher = {
    0: linked_list,
    1: queue,
    2: stack
}
and I used switcher[key]() to just call a function.
The function runs as normal, but the issue is that it prints None before taking input in the while loop of the called function, in this case linked_list():
while(c != 2):
    c = int(input(print("Enter operation\n1.Insert beg\n2.Exit")))
    if c == 1:
        some code
I have tried using a return statement and a lambda, but it still prints None. Also, I am not printing the given function.
That happens because what you are writing to standard output is not your menu as a string but the object returned by the print function, which is None. The print call is unnecessary: the argument passed to input is written to standard output as the prompt by default.
Therefore:
while(c != 2):
    c = int(input("Enter operation\n1.Insert beg\n2.Exit\n"))
    if c == 1:
        some code
is enough (with an extra newline after the Exit option for more readability).
See the official documentation for the input function.
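The behavior is easy to demonstrate in isolation: print writes its arguments to standard output and returns None, so input(print(...)) is really input(None), and input then writes that None as its prompt. A minimal check:

```python
# print() always returns None; it only writes to stdout
result = print("Enter operation")
print(result)  # None

# input(print("menu")) is therefore input(None): the menu is written
# by print, and input then writes "None" as its prompt string --
# which is exactly where the stray None came from
```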
I'm processing some textual data and transforming it into commands that will be used as the argument of a WHERE statement, but what I get back is a string and I don't know how to use it.
For example from the string :
'c_programme_nom == "2-Broke-Girls"'
I get :
"F.col('name').like('%2-Broke-Girls%')"
But I get a string and I would like to use it as a parameter in a WHERE statement.
The expected result would be :
df.where(F.col('name').like('%2-Broke-Girls%'))
I don't know if there is a way to do it.
Seems like you're looking to execute strings containing code.
You can use exec in Python:
The exec() function is used for dynamic execution of a Python program, which can be either a string or object code. If it is a string, it is parsed as a suite of Python statements and then executed (unless a syntax error occurs); if it is object code, it is simply executed.
exec('print("The sum of 5 and 10 is", (5+10))')
# The sum of 5 and 10 is 15
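For an expression like the one in the question, eval with an explicit namespace is usually a better fit than exec, because it returns the resulting object, which can then be passed to .where(). Below is a sketch using a stub in place of pyspark's F; the FakeColumn class only simulates the Column.like behavior so the example is self-contained, and is not real pyspark:

```python
class FakeColumn:
    """Stub standing in for a pyspark Column (illustrative only)."""
    def __init__(self, name):
        self.name = name
    def like(self, pattern):
        return "%s LIKE %s" % (self.name, pattern)

class F:  # stub for pyspark.sql.functions
    @staticmethod
    def col(name):
        return FakeColumn(name)

condition_src = "F.col('name').like('%2-Broke-Girls%')"
# eval turns the string into the object the expression produces;
# passing an explicit namespace limits what the string can touch
condition = eval(condition_src, {"F": F, "__builtins__": {}})
print(condition)  # name LIKE %2-Broke-Girls%
```

With real pyspark, the same eval call would yield a Column object you could hand directly to df.where(...).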
Background:
My question should be relatively easy, however I am not able to figure it out.
I have written a function regarding queueing theory and it will be used for ambulance service planning. For example, how many calls for service can I expect in a given time frame.
The function takes two parameters; a starting value of the number of ambulances in my system starting at 0 and ending at 100 ambulances. This will show the probability of zero calls for service, one call for service, three calls for service….up to 100 calls for service. Second parameter is an arrival rate number which is the past historical arrival rate in my system.
The function runs and prints out the result to my screen. I have checked the math and it appears to be correct.
This is Python 3.7 with the Anaconda distribution.
My question is this:
I would like to process this data even further but I don’t know how to capture it and do more math. For example, I would like to take this list and accumulate the probability values. With an arrival rate of five, there is a cumulative probability of 61.56% of at least five calls for service, etc.
A second example of how I would like to process this data is to format it as percentages and write them out to a text file.
A third example would be to process the cumulative probabilities and exclude any values higher than the 99% cumulative value (because these vanish into extremely small numbers).
A fourth example would be to create a bar chart showing the probability of n calls for service.
These are some of the things I want to do with the queueing theory calculations. And there are a lot more. I am planning on writing a larger application. But I am stuck at this point. The function writes an output into my Python 3.7 console. How do I “capture” that output as an object or something and perform other processing on the data?
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import math
import csv
def probability_x(start_value = 0, arrival_rate = 0):
    probability_arrivals = []
    while start_value <= 100:
        probability_arrivals = [start_value, math.pow(arrival_rate, start_value) * math.pow(math.e, -arrival_rate) / math.factorial(start_value)]
        print(probability_arrivals)
        start_value = start_value + 1
    return probability_arrivals
#probability_x(arrival_rate = 5, x = 5)
#The code written above prints to the console, but my goal is to take the returned values and make other calculations.
#How do I 'capture' this data for further processing is where I need help (for example, bar plots, cumulative frequency, etc )
#failure. TypeError: writerows() argument must be iterable.
with open('ExpectedProbability.csv', 'w') as writeFile:
    writer = csv.writer(writeFile)
    for value in probability_x(arrival_rate = 5):
        writer.writerows(value)
writeFile.close()
#Failure. Why does it return 2. Yes there are two columns but I was expecting 101 as the length because that is the end of my loop.
print(len(probability_x(arrival_rate = 5)))
The problem is, when you write
probability_arrivals = [start_value, math.pow(arrival_rate, start_value) * math.pow(math.e, -arrival_rate) / math.factorial(start_value)]
You're overwriting the previous contents of probability_arrivals. Everything that it held previously is lost.
Instead of using = to reassign probability_arrivals, you want to append another entry to the list:
probability_arrivals.append([start_value, math.pow(arrival_rate, start_value) * math.pow(math.e, -arrival_rate) / math.factorial(start_value)])
I'll also note, your while loop can be improved. You're basically just looping over start_value until it reaches a certain value. A for loop would be more appropriate here:
for s in range(start_value, 101):  # range's end value is exclusive, so it's 101 not 100
    probability_arrivals.append([s, math.pow(arrival_rate, s) * math.pow(math.e, -arrival_rate) / math.factorial(s)])
    print(probability_arrivals[-1])
Now you don't need to manually worry about incrementing the counter.
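Putting the pieces together, here is a version of the function that returns the data instead of only printing it, plus the cumulative sum the asker wanted. The function and parameter names follow the question; the cumulative step and the max_value parameter are added for illustration:

```python
import math

def probability_x(start_value=0, arrival_rate=0, max_value=100):
    """Return [n, P(n)] pairs for a Poisson distribution with the given rate."""
    probability_arrivals = []
    for s in range(start_value, max_value + 1):
        p = math.pow(arrival_rate, s) * math.exp(-arrival_rate) / math.factorial(s)
        probability_arrivals.append([s, p])
    return probability_arrivals

rows = probability_x(arrival_rate=5)
print(len(rows))  # 101, one entry per call count

# cumulative probability of at most n calls for service
cumulative = []
running = 0.0
for n, p in rows:
    running += p
    cumulative.append([n, running])
print(round(cumulative[4][1], 4))  # 0.4405, P(at most 4 calls) with rate 5
```

Because the data comes back as a list, the same rows can be fed to csv.writer.writerows, filtered against a cumulative cutoff, or plotted as a bar chart.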
I need to write a script in Python 2.7 which parses 4 files.
It needs to be as fast as possible.
For the moment I have a loop, and I parse the 4 files one after another.
I need to understand one thing: if I created 4 parsing scripts (one for each file) and launched the 4 scripts in 4 different terminals, would that reduce the execution time (or not)?
Thanks,
If your machine has more than one core, then yes, launching the four scripts in four separate terminals will reduce the total execution time, because each process can run on its own core.
Note that within a single CPython process, threads only help when the work is I/O-bound; for CPU-bound parsing, the GIL prevents threads from running Python code in parallel.
Here is a small threading example (Python 2) you can use to time the difference:
import threading
import time

def main():
    starttime = time.time()
    for x in range(1, 10000):  # print 10000 times, to try whether it reduces the time or not
        print x
    endtime = time.time()
    print "Time elapsed: " + str(round(endtime - starttime, 2))  # show the elapsed time

threads = []
t = threading.Thread(target=main)
threads.append(t)
t.start()
and you can try the difference with and without threading.
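If the parsing time is dominated by reading the files (I/O-bound), a thread pool inside one script gets you the same overlap as four terminals. A Python 3 sketch (the same idea works in 2.7 with threading directly); parse_file here is a hypothetical line-counting parser standing in for the real one, and the sample files are generated so the example is self-contained:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def parse_file(path):
    """Hypothetical parser: counts lines, standing in for the real parsing work."""
    with open(path) as f:
        return sum(1 for _ in f)

# create four small sample files so the example runs on its own
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(4):
    p = os.path.join(tmpdir, "file%d.txt" % i)
    with open(p, "w") as f:
        f.write("line\n" * (i + 1))
    paths.append(p)

# parse all four files concurrently; the threads overlap their I/O waits
with ThreadPoolExecutor(max_workers=4) as pool:
    counts = list(pool.map(parse_file, paths))
print(counts)  # [1, 2, 3, 4]
```

For CPU-heavy parsing, swapping ThreadPoolExecutor for separate processes (as in the four-terminals approach) is what actually buys parallelism in CPython.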
I have about 5 very large csv files that I need to parse, munge and insert into a database. The code looks approximately like this:
i = 0
processFile = (linecount, file, onDone) ->
  # process the csv as a stream
  # NOTE: **this is where the large array gets declared**
  # insert every relevant line into an array
  # process the array and insert it into the db (about 5k records at a time)
  # call onDone when db insert is done

getLinesAndProcess = (i, onDone) ->
  inputFile = inputFiles[i]
  if inputFile?
    getFileSizeAndProcess = -> # this helps the GC
      puts = (error, stdout, stderr) ->
        totalLines = stdout.split(" ")[0]
        processFile(totalLines, inputFile, ->
          getLinesAndProcess(++i)
        )
      console.log "processing: #{inputFile}"
      exec "wc -l '#{inputFile}'", puts
    setTimeout(getFileSizeAndProcess, 5000)

getLinesAndProcess(i, ->
  # close db connection, exit process and so on
)
The first million lines go fine, takes about 3 mins. Then it chunks along on the next record -- until node hits its memory limit (1.4GB) then it just crawls. The most likely thing is that v8's GC is not cleaning up the recordsToInsert array, even though it's in a closure that is done.
My solution in the short term is to just run one file at a time. That's fine, it works and so on, but I'm stuck on how to fix the multi-file problem. I've tried the --max-old-space-size=8192 fix from caustik's blog, but it hasn't helped -- node is still getting stuck at 1.4GB. I added the setTimeout based on a suggestion in another SO post, but it doesn't appear to help.
In the end, I just had to set the array back to an empty array before calling the callback. That works fine but it feels like v8's GC is failing me here.
Can I do anything to get v8 to be smarter about GC when dealing with large arrays?
Closed-over variables only seem like they should be collected automatically because syntactically they look like local variables that live on the stack.
However, they are very, very different.
If you had an object like this:
function Closure() {
    this.totalLines = [];
    this.otherVariable = 3;
}

var object = new Closure();
The majority of people would understand that totalLines will never be garbage collected as long as object lives on. And if they were done with totalLines long before they were done with the object, they would assign null to it, or at least understand that otherwise the garbage collector cannot collect it.
However, when it comes to actual JavaScript closures, it works exactly the same way, yet people find it odd that they have to explicitly set the closed-over variable to null, because the syntax deceives them.
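The fix described above -- explicitly clearing the closed-over variable once you are done with it -- can be sketched in Python, where the same closure semantics apply and CPython's reference counting makes the collection observable. Records is an illustrative stand-in for the large array:

```python
import weakref

class Records(list):
    """list subclass, because plain lists cannot be weakly referenced."""

def make_processor():
    records = Records(range(100000))  # the "large array" captured by the closure
    def process():
        nonlocal records
        n = len(records)
        records = None  # explicitly drop the reference, like `records = null` in JS
        return n
    return process, weakref.ref(records)

process, ref = make_processor()
assert ref() is not None  # the closure's cell keeps the list alive
process()
assert ref() is None      # once the closure nulls it out, it can be collected
```

As long as the closure holds the reference, the list lives on even after the enclosing function has returned; clearing the variable inside the closure is what releases it.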