Reading a single cell from an Excel sheet

I am simulating Blackjack games where decisions are made according to Basic Strategy. Each decision is made by reading a value from one of the cells of an .xlsx file. Simulating, let's say, 100,000 games takes a long time. I use these two lines to read the decision:
import pandas

bs = pandas.read_excel('BasicStrategy.xlsx', sheet_name='Soft')
decision = bs.iloc[player_result - 1, dealer_hand[0] - 2]
Since the decisions are just numbers in a table, what would decrease my program's execution time? As I understand it, the whole sheet is read every time a decision has to be made, but I need only one value; how can I read just that one value? I have not used NumPy before, but would it work in this case, and if so, would it be faster? Any advice will be much appreciated.

Read the values into a variable once, and from then on refer to that variable. If you're accessing the file over a network, save the data as a local/temp file; then, when your program starts, it can read the local file back into a variable.
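For example, a minimal sketch of that idea, reusing the file and sheet names from the question (decide is a hypothetical helper wrapping the same indexing as the snippet above):

import pandas

# Read the whole sheet once, at program start, not inside the game loop.
bs = pandas.read_excel('BasicStrategy.xlsx', sheet_name='Soft')
# A plain NumPy array is cheaper to index repeatedly than a DataFrame.
bs_values = bs.to_numpy()

def decide(player_result, dealer_card):
    # Same lookup as bs.iloc[...], but against the in-memory array.
    return bs_values[player_result - 1, dealer_card - 2]

Each simulated game then calls decide() without touching the file system, so the per-decision cost drops to a single array lookup.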

Related

How to speed up a CLOCK_MONOTONIC list of timestamps

I have a list of SECONDS.MICROSECONDS CLOCK_MONOTONIC timestamps like those below:
5795.944152
5795.952708
5795.952708
5795.960820
5795.960820
5795.969092
5795.969092
5795.977502
5795.977502
5795.986061
5795.986061
5795.994075
5795.994075
5796.002382
5796.002382
5796.010860
5796.010860
5796.019241
5796.019241
5796.027452
5796.027452
5796.035709
5796.035709
5796.044158
5796.044158
5796.052453
5796.052453
5796.060785
5796.060785
5796.069053
They each represent a particular action to be performed.
What I need to do, preferably in Python (but the programming language doesn't really matter), is speed up the actions: apply a 2X, 3X, etc. speed-up to this list, so the values decrease to match the chosen ?X factor.
I thought of dividing each timestamp by the desired speed factor, but it doesn't work that way.
As described and suggested by @RobertDodier, I managed to find a quick and simple solution to my issue:
speed = 2
speedtimestamps = [t0 + (t - t0)/speed for t in timestamps]
Just make sure to remove the first line containing the first t0 timestamp.
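Put together, a minimal self-contained sketch (the example values are the first few timestamps from the list above):

speed = 2
timestamps = [5795.944152, 5795.952708, 5795.960820]
t0 = timestamps[0]
# Keep the first timestamp fixed and compress every interval after it.
speedtimestamps = [t0 + (t - t0) / speed for t in timestamps]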

keep the last value of a variable for the next run

x = 0
for i in range(0, 9):
    x = x + 1
When I run the script a second time, I want x to start with a value of 9. I know the code above is not logical; x will get the value of zero. I wrote it to be as clear as I can. I found a solution by saving the value of x to a text file, as shown below. But if the text file is removed, I lose the last value of x, so it is not safe. Is there any other way to keep the last value of x for the second run?
from pathlib import Path

myf = Path("C:\\Users\\Ozgur\\Anaconda3\\yaz.txt")
x = 0
if myf.is_file():
    # Restore the value saved by the previous run.
    with open(myf, "r") as f:
        x = int(f.read())
else:
    with open(myf, "w") as f:
        f.write("0")
for i in range(0, 9):
    x = x + 1
# Save the final value for the next run.
with open(myf, "w") as f:
    f.write(str(x))
print(x)
No.
You can't 100% protect against users deleting data. There are some steps you can take (such as duplicating the data to other places, hiding the file, and setting permissions), but if someone wants to, they can find a way to delete the file, reset the contents to the original value, or manipulate the data in any number of ways, even if it means unplugging the hard drive and placing it in a different computer.
This is why error-checking is important: developers cannot safely assume that everything is in place and in the correct state (especially since drives wear down over long periods of time, causing odd effects).
You can use a database; data there is less likely to be lost than in a file. You can read about using MySQL from Python.
However, this is not the only way to save the value of x. You can use an environment variable in your operating system. For example:
import os
os.environ["XVARIABLE"] = "9"
To access this variable later, simply use
print(os.environ["XVARIABLE"])
Note: On some platforms, modifying os.environ will not actually modify the system environment either for the current process or child processes.
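If a database appeals, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for MySQL; the file, table, and column names are made up for illustration:

import sqlite3

conn = sqlite3.connect("state.db")  # hypothetical database file
conn.execute("CREATE TABLE IF NOT EXISTS state (name TEXT PRIMARY KEY, value INTEGER)")
row = conn.execute("SELECT value FROM state WHERE name = 'x'").fetchone()
x = row[0] if row else 0  # fall back to 0 on the first run

for i in range(0, 9):
    x = x + 1

# Writes go through a transaction, so a crash cannot leave a half-written value.
conn.execute("INSERT OR REPLACE INTO state (name, value) VALUES ('x', ?)", (x,))
conn.commit()
conn.close()
print(x)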

Improve speed of reading data from Excel into MATLAB

I currently have five columns in Excel that I need to read and store in MATLAB variables. I use the following code:
TE=xlsread('../input/input.xlsx','A:A');
AF=xlsread('../input/input.xlsx','B:B');
TAHE=xlsread('../input/input.xlsx','C:C');
HD=xlsread('../input/input.xlsx','D:D');
TCW=xlsread('../input/input.xlsx','E:E');
This takes 11 seconds when input.xlsx contains 14 rows. With 8760 rows (which will be the number of rows in my final input.xlsx), the time consumed is about the same.
The bottleneck seems to be opening the Excel file. Am I right? How can I minimize the time consumed?
To me, it seems like MATLAB opens the Excel file five times when only one open is necessary. How can I improve my code?
EDIT:
By using the following code, the time consumption was reduced by about 2 seconds (still rather slow):
temp=xlsread('../input/input.xlsx','A:E');
TE=temp(:,1);
AF=temp(:,2);
TAHE=temp(:,3);
HD=temp(:,4);
TCW=temp(:,5);
From the xlsread documentation:
num = xlsread(filename,sheet,xlRange,'basic') reads data from the spreadsheet in basic mode, the default on systems without Excel for Windows. If you do not specify all the arguments, use empty strings as placeholders, for example, num = xlsread(filename,'','','basic').
My understanding of this is that on Windows machines with Excel installed, MATLAB actually calls Excel and lets it read the data and pass them to MATLAB, whereas otherwise (without Excel, without Windows or with explicit 'basic' mode) the file is read by a native MATLAB implementation, which may be faster because the Excel startup alone may take some time.
You shouldn't split up the xlsread calls. Try reading all your data at once, for example, into a cell array and split it into variables once it's loaded.
EDIT: I just saw your edit. I guess it won't get any faster...

How to work with the COUNTER in Nagios or RRD?

I have the following problem:
I want to keep statistics for data that must constantly increase, for example, the number of visits to a link. After some time these visits are reset and start again from the beginning. To get a continuous increase, I want to record the statistics somewhere, and I use a site that does this. It can be configured with COUNTER, GAUGE, AVERAGE, and so on; I want to use COUNTER. The system is built on Nagios.
My question is how to use this COUNTER. I assume it is the same as the one in RRD, but I ran into some strange behaviour when creating such a COUNTER.
I submit the value '1', then '2', and expect the chart to come up to 3, but when I do that, it doesn't work. And after a restart, for example, submitting 1 again should make it 4.
Could anyone who has dealt with these things tell me briefly how this COUNTER works?
I have seen COUNTER used for traffic on routers, etc., but I want to apply it to an ordinary graph that just increases.
The RRD data type COUNTER converts the input data into a rate by taking the difference between the current sample and the last sample and dividing by the time interval (note that data normalisation also takes place, and this depends on the Interval setting of the RRD).
Thus, updating with a constantly increasing count will result in a rate-of-change value being graphed.
If you want to see your graph actually constantly increasing, i.e. showing the actual count of packets transferred (for example) rather than the rate of transfer, you would need to use type GAUGE, which assumes any rate conversion has already been done.
If you want to submit the rate values (e.g. 2 in the last minute) but display the overall constantly increasing total (in other words, the inverse of how the COUNTER data type works), then you would need to store the values as GAUGE and use a CDEF in your RRDgraph command of the form CDEF:x=y,PREV,+ to obtain the running total. Of course, you would only have this total relative to the start of the graph's time window; a separate call might let you determine what base value to use.
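To make the distinction concrete, here is a small Python sketch (illustrative numbers only, not rrdtool code) of both behaviours: the rate conversion COUNTER performs, and the running total that the GAUGE-plus-CDEF approach produces:

# COUNTER semantics: successive raw counts become a rate.
samples = [(0, 100), (60, 160), (120, 400)]  # (seconds, cumulative count)
rates = [(c2 - c1) / (t2 - t1)
         for (t1, c1), (t2, c2) in zip(samples, samples[1:])]
# rates == [1.0, 4.0], counts per second over each interval

# GAUGE plus CDEF:x=y,PREV,+ semantics: per-interval values become a running total.
from itertools import accumulate
per_interval = [2, 5, 1]  # e.g. visits during each minute
totals = list(accumulate(per_interval))
# totals == [2, 7, 8]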
As you use Nagios, you may like to investigate Nagios add-ons such as pnp4nagios which will handle much of the graphing for you.

Visual C++ String Parsing

I wrote my own terminal program that reads data from a microcontroller over the serial port. The data is presented as follows:
0C82949>0D23949>0A75249> etc...
These are ASCII characters. Note that every element starts with a header of the form >0xx, where xx is a couple of characters, such as >0C8 or >0D2; this tells me what the rest of the data is. For example, if >0C8 is the speed of the car, then 2949 holds the actual speed. The microcontroller writes the data really fast, so at any one time I can see 40 elements. I want to quickly search this for a ">0C8" entry and print only ">0C82949" out of the bunch.
An example, if I only want 0D2:
Read from Serial Port: >0C82949>0D23949>0A75249>
Output: 0D23949
Would anyone know how to do this? I am aware that since the data arrives so fast I would have to create threads, which I can do; I am just not sure how to approach the parsing. Any ideas would be greatly appreciated.
I am using Visual C++
You can parse the data by dividing it on each > character and creating separate strings. Then, for each string, just search for the desired substring. You may use strstr, CString::Find, or std::string::find.
There is no need to create a separate thread - the search operation is quite trivial and won't take much CPU.
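As an illustration of that logic, sketched in Python for brevity (in Visual C++ the same approach maps onto strstr or std::string::find), using the sample line from the question:

buf = ">0C82949>0D23949>0A75249>"
wanted = "0D2"  # the header we care about

# Split on '>' and keep any element whose header matches.
for element in buf.split(">"):
    if element.startswith(wanted):
        print(element)  # prints: 0D23949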
