Resampling Time Series Data (Pandas Python 3) - python-3.x
Trying to convert data at daily frequency to weekly frequency.
In:
weeklyaapl = pd.DataFrame()
weeklyaapl['Open'] = aapl.Open.resample('W').iloc[0]
#here I am trying to take the first value of aapl.Open
#that falls within the week
Out:
ValueError: .resample() is now a deferred operation
use .resample(...).mean() instead of .resample(...)
I want the true open: the open of the first day in that week (the first open that prints for the week).
The error instead suggests taking the mean of the daily open values for a given week using .mean(), which is not the information I need.
Can't seem to interpret the error, documentation isn't helping either.
I think you need:
aapl.resample('W').first()
Output:
Open High Low Close Volume
Date
2010-01-10 30.49 30.64 30.34 30.57 123432050
2010-01-17 30.40 30.43 29.78 30.02 115557365
2010-01-24 29.76 30.74 29.61 30.72 182501620
2010-01-31 28.93 29.24 28.60 29.01 266424802
2010-02-07 27.48 28.00 27.33 27.82 187468421
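For reference, here is a minimal runnable sketch of the accepted approach. The dates and prices below are made up (not real AAPL data); the point is only that .resample('W').first() keeps the open of the first day in each week rather than an average:

```python
# Made-up daily prices: ten consecutive days starting on a Monday.
import pandas as pd

idx = pd.date_range('2010-01-04', periods=10, freq='D')
daily = pd.DataFrame({'Open': range(30, 40)}, index=idx)

weekly = pd.DataFrame()
# First open of each calendar week, i.e. the true weekly open.
weekly['Open'] = daily['Open'].resample('W').first()
```

If you want a full weekly OHLCV table in one go, something like `daily.resample('W').agg({'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last', 'Volume': 'sum'})` should also work.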
Related
Parsing heterogenous data from a text file in Python
I am trying to parse raw data results from a text file into an organised tuple but having trouble getting it right. My raw data from the textfile looks something like this: Episode Cumulative Results EpisodeXD0281119 Date collected21/10/2019 Time collected10:00 Real time PCR for M. tuberculosis (Xpert MTB/Rif Ultra): PCR result Mycobacterium tuberculosis complex NOT detected Bacterial Culture: Bottle: Type FAN Aerobic Plus Result No growth after 5 days EpisodeST32423457 Date collected23/02/2019 Time collected09:00 Gram Stain: Neutrophils Occasional Gram positive bacilli Moderate (2+) Gram negative bacilli Numerous (3+) Gram negative cocci Moderate (2+) EpisodeST23423457 Date collected23/02/2019 Time collected09:00 Bacterial Culture: A heavy growth of 1) Klebsiella pneumoniae subsp pneumoniae (KLEPP) ensure that this organism does not spread in the ward/unit. A heavy growth of 2) Enterococcus species (ENCSP) Antibiotic/Culture KLEPP ENCSP Trimethoprim-sulfam R Ampicillin / Amoxic R S Amoxicillin-clavula R Ciprofloxacin R Cefuroxime (Parente R Cefuroxime (Oral) R Cefotaxime / Ceftri R Ceftazidime R Cefepime R Gentamicin S Piperacillin/tazoba R Ertapenem R Imipenem S Meropenem R S - Sensitive ; I - Intermediate ; R - Resistant ; SDD - Sensitive Dose Dependant Comment for organism KLEPP: ** Please note: this is a carbapenem-RESISTANT organism. Although some carbapenems may appear susceptible in vitro, these agents should NOT be used as MONOTHERAPY in the treatment of this patient. ** Please isolate this patient and practice strict contact precautions. Please inform Infection Prevention and Control as contact screening might be indicated. For further advice on the treatment of this isolate, please contact. The currently available laboratory methods for performing colistin susceptibility results are unreliable and may not predict clinical outcome. 
Based on published data and clinical experience, colistin is a suitable therapeutic alternative for carbapenem resistant Acinetobacter spp, as well as carbapenem resistant Enterobacteriaceae. If colistin is clinically indicated, please carefully assess clinical response. EpisodeST234234057 Date collected23/02/2019 Time collected09:00 Authorised by xxxx on 27/02/2019 at 10:35 MIC by E-test: Organism Klebsiella pneumoniae (KLEPN) Antibiotic Meropenem MIC corrected 4 ug/mL MIC interpretation Resistant Antibiotic Imipenem MIC corrected 1 ug/mL MIC interpretation Sensitive Antibiotic Ertapenem MIC corrected 2 ug/mL MIC interpretation Resistant EpisodeST23423493 Date collected18/02/2019 Time collected03:15 Potassium 4.4 mmol/L 3.5 - 5.1 EpisodeST45445293 Date collected18/02/2019 Time collected03:15 Creatinine 32 L umol/L 49 - 90 eGFR (MDRD formula) >60 mL/min/1.73 m2 Creatinine 28 L umol/L 49 - 90 eGFR (MDRD formula) >60 mL/min/1.73 m2 Essentially the pattern is that ALL information starts with a unique EPISODE NUMBER and follows with a DATE and TIME and then the result of whatever test. This is the pattern throughout. What I am trying to parse into my tuple is the date, time, name of the test and the result - whatever it might be. 
I have the following code:

with open(filename) as f:
    data = f.read()
data = data.splitlines()

DS = namedtuple('DS', 'date time name value')
parsed = list()

idx_date = [i for i, r in enumerate(data) if r.strip().startswith('Date')]
for start, stop in zip(idx_date[:-1], idx_date[1:]):
    chunk = data[start:stop]
    date = time = name = value = None
    for row in chunk:
        if not row:
            continue
        row = row.strip()
        if row.startswith('Episode'):
            continue
        if row.startswith('Date'):
            _, date = row.split()
            date = date.replace('collected', '')
        elif row.startswith('Time'):
            _, time = row.split()
            time = time.replace('collected', '')
        else:
            name, value, *_ = row.split()
            print(name)
    parsed.append(DS(date, time, name, value))
print(parsed)

My error is that I am unable to find a way to parse the heterogeneity of the test RESULT in a way that I can use later, for example for the tuple DS ('DS', 'date time name value'):

DATE = 21/10/2019
TIME = 10:00
NAME = Real time PCR for M tuberculosis or Potassium
RESULT = Negative or 4.7

Any advice appreciated. I have hit a brick wall.
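One way to cope with the heterogeneous RESULT field is to stop splitting it word by word and instead keep everything after the Time line as a single free-form result string. The sketch below is not a full parser for the data above: only the Episode marker and the "Date collected"/"Time collected" prefixes come from the sample; the namedtuple fields and the single-episode test string are illustrative:

```python
# Sketch: split the raw text into episode chunks, pull out date and time,
# and keep the rest of the chunk verbatim as the result payload.
import re
from collections import namedtuple

DS = namedtuple('DS', 'date time result')

text = ("EpisodeST23423493 Date collected18/02/2019 Time collected03:15 "
        "Potassium 4.4 mmol/L 3.5 - 5.1")

parsed = []
for chunk in re.split(r'(?=Episode)', text):
    if not chunk.strip():
        continue
    date = re.search(r'Date collected(\S+)', chunk)
    time = re.search(r'Time collected(\S+)', chunk)
    # Everything after the time stamp is the heterogeneous result payload.
    tail = chunk.split(time.group(0), 1)[1].strip() if time else ''
    parsed.append(DS(date.group(1) if date else None,
                     time.group(1) if time else None,
                     tail))
```

Interpreting the payload (test name vs. numeric value vs. free text) would then be a second pass over `result`, which is much easier once the chunk boundaries are stable.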
python3: Split time series by diurnal periods
I have the following dataset: 01/05/2020,00,26.3,27.5,26.3,80,81,73,22.5,22.7,22.0,993.7,993.7,993.0,0.0,178,1.2,-3.53,0.0 01/05/2020,01,26.1,26.8,26.1,79,80,75,22.2,22.4,21.9,994.4,994.4,993.7,1.1,22,2.0,-3.54,0.0 01/05/2020,02,25.4,26.1,25.4,80,81,79,21.6,22.3,21.6,994.7,994.7,994.4,0.1,335,2.3,-3.54,0.0 01/05/2020,03,23.3,25.4,23.3,90,90,80,21.6,21.8,21.5,994.7,994.8,994.6,0.9,263,1.5,-3.54,0.0 01/05/2020,04,22.9,24.2,22.9,89,90,86,21.0,22.1,21.0,994.2,994.7,994.2,0.3,268,2.0,-3.54,0.0 01/05/2020,05,22.8,23.1,22.8,90,91,89,21.0,21.4,20.9,993.6,994.2,993.6,0.7,264,1.5,-3.54,0.0 01/05/2020,06,22.2,22.8,22.2,92,92,90,20.9,21.2,20.8,993.6,993.6,993.4,0.8,272,1.6,-3.54,0.0 01/05/2020,07,22.6,22.6,22.0,91,93,91,21.0,21.2,20.7,993.4,993.6,993.4,0.4,284,2.3,-3.49,0.0 01/05/2020,08,21.6,22.6,21.5,92,92,90,20.2,20.9,20.1,993.8,993.8,993.4,0.4,197,2.1,-3.54,0.0 01/05/2020,09,22.0,22.1,21.5,92,93,92,20.7,20.8,20.2,994.3,994.3,993.7,0.0,125,2.1,-3.53,0.0 01/05/2020,10,22.7,22.7,21.9,91,92,91,21.2,21.2,20.5,995.0,995.0,994.3,0.0,354,0.0,70.99,0.0 01/05/2020,11,25.0,25.0,22.7,83,91,82,21.8,22.1,21.1,995.5,995.5,995.0,0.8,262,1.5,744.8,0.0 01/05/2020,12,27.9,28.1,24.9,72,83,70,22.3,22.8,21.6,996.1,996.1,995.5,0.7,228,1.9,1392.,0.0 01/05/2020,13,30.4,30.4,27.7,58,72,55,21.1,22.6,20.4,995.9,996.2,995.9,1.6,134,3.7,1910.,0.0 01/05/2020,14,31.7,32.3,30.1,50,58,48,20.2,21.3,19.7,995.8,996.1,995.8,3.0,114,5.4,2577.,0.0 01/05/2020,15,32.9,33.2,31.8,44,50,43,19.1,20.5,18.6,994.9,995.8,994.9,0.0,128,5.6,2853.,0.0 01/05/2020,16,33.2,34.4,32.0,46,48,41,20.0,20.0,18.2,994.0,994.9,994.0,0.0,125,4.3,2700.,0.0 01/05/2020,17,33.1,34.5,32.7,44,46,39,19.2,19.9,18.5,993.4,994.1,993.4,0.0,170,1.6,2806.,0.0 01/05/2020,18,33.6,34.2,32.6,41,47,40,18.5,20.0,18.3,992.6,993.4,992.6,0.0,149,0.0,2319.,0.0 01/05/2020,19,33.5,34.7,32.1,43,49,39,19.2,20.4,18.3,992.3,992.6,992.3,0.3,168,4.1,1907.,0.0 01/05/2020,20,32.1,33.9,32.1,49,51,41,20.2,20.7,18.5,992.4,992.4,992.3,0.1,192,3.7,1203.,0.0 
01/05/2020,21,29.9,32.2,29.9,62,62,49,21.8,21.9,20.2,992.3,992.4,992.2,0.0,188,2.9,408.0,0.0 01/05/2020,22,28.5,29.9,28.4,67,67,62,21.8,22.0,21.7,992.5,992.5,992.3,0.4,181,2.3,6.817,0.0 01/05/2020,23,27.8,28.5,27.8,71,71,66,22.1,22.1,21.5,993.1,993.1,992.5,0.0,225,1.6,-3.39,0.0 02/05/2020,00,27.4,28.2,27.3,75,75,68,22.5,22.5,21.7,993.7,993.7,993.1,0.5,139,1.5,-3.54,0.0 02/05/2020,01,27.3,27.7,27.3,72,75,72,21.9,22.6,21.9,994.3,994.3,993.7,0.0,126,1.1,-3.54,0.0 02/05/2020,02,25.4,27.3,25.2,85,85,72,22.6,22.8,21.9,994.4,994.5,994.3,0.1,256,2.6,-3.54,0.0 02/05/2020,03,25.5,25.6,25.3,84,85,82,22.5,22.7,22.1,994.3,994.4,994.2,0.0,329,0.7,-3.54,0.0 02/05/2020,04,24.5,25.5,24.5,86,86,82,22.0,22.5,21.9,993.9,994.3,993.9,0.0,290,1.2,-3.54,0.0 02/05/2020,05,24.0,24.5,23.5,87,88,86,21.6,22.1,21.3,993.6,993.9,993.6,0.7,285,1.3,-3.54,0.0 02/05/2020,06,23.7,24.1,23.7,87,87,85,21.3,21.6,21.3,993.1,993.6,993.1,0.1,305,1.1,-3.51,0.0 02/05/2020,07,22.7,24.1,22.5,91,91,86,21.0,21.7,20.7,993.1,993.3,993.1,0.6,220,1.1,-3.54,0.0 02/05/2020,08,22.9,22.9,22.6,92,92,91,21.5,21.5,21.0,993.2,993.2,987.6,0.0,239,1.5,-3.53,0.0 02/05/2020,09,22.9,23.0,22.8,93,93,92,21.7,21.7,21.4,993.6,993.6,993.2,0.0,289,0.4,-3.53,0.0 02/05/2020,10,23.5,23.5,22.8,92,93,92,22.1,22.1,21.6,994.3,994.3,993.6,0.0,256,0.0,91.75,0.0 02/05/2020,11,26.1,26.2,23.5,80,92,80,22.4,23.1,22.2,995.0,995.0,994.3,1.1,141,1.9,789.0,0.0 02/05/2020,12,28.7,28.7,26.1,69,80,68,22.4,22.7,22.1,995.5,995.5,995.0,0.0,116,2.2,1468.,0.0 02/05/2020,13,31.4,31.4,28.6,56,69,56,21.6,22.9,21.0,995.5,995.7,995.4,0.0,65,0.0,1762.,0.0 02/05/2020,14,32.1,32.4,30.6,48,58,47,19.8,22.0,19.3,995.0,995.6,990.6,0.0,105,0.0,2657.,0.0 02/05/2020,15,34.0,34.2,31.7,43,48,42,19.6,20.1,18.6,993.9,995.0,993.9,3.0,71,6.0,2846.,0.0 02/05/2020,16,34.7,34.7,32.3,38,48,38,18.4,20.3,18.3,992.7,993.9,992.7,1.4,63,6.3,2959.,0.0 02/05/2020,17,34.0,34.7,32.7,42,46,38,19.2,20.0,18.4,991.7,992.7,991.7,2.2,103,4.8,2493.,0.0 
02/05/2020,18,34.3,34.7,33.6,41,42,38,19.1,19.4,18.0,991.2,991.7,991.2,2.0,141,4.8,2593.,0.0 02/05/2020,19,33.5,34.5,32.5,42,47,39,18.7,20.0,18.4,990.7,991.4,989.9,1.8,132,4.2,1317.,0.0 02/05/2020,20,32.5,34.2,32.5,47,48,40,19.7,20.3,18.7,990.5,990.7,989.8,1.3,191,4.2,1250.,0.0 02/05/2020,21,30.5,32.5,30.5,59,59,47,21.5,21.6,20.0,979.8,990.5,979.5,0.1,157,2.9,345.5,0.0 02/05/2020,22,28.6,30.5,28.6,67,67,59,21.9,21.9,21.5,978.9,980.1,978.7,0.6,166,2.2,1.122,0.0 02/05/2020,23,27.2,28.7,27.2,74,74,66,22.1,22.2,21.6,978.9,979.3,978.6,0.0,246,1.7,-3.54,0.0 03/05/2020,00,26.5,27.2,26.0,77,80,74,22.2,22.5,22.0,979.0,979.1,978.7,0.0,179,1.4,-3.54,0.0 03/05/2020,01,26.0,26.6,26.0,80,80,77,22.4,22.5,22.1,979.1,992.4,978.7,0.0,276,0.6,-3.54,0.0 03/05/2020,02,26.0,26.5,26.0,79,81,75,22.1,22.5,21.7,978.8,979.1,978.5,0.0,290,0.6,-3.53,0.0 03/05/2020,03,25.3,26.0,25.3,83,83,79,22.2,22.4,21.8,978.6,989.4,978.5,0.5,303,1.0,-3.54,0.0 03/05/2020,04,25.3,25.6,24.6,81,85,81,21.9,22.5,21.7,978.1,992.7,977.9,0.7,288,1.5,-3.00,0.0 03/05/2020,05,23.7,25.3,23.7,88,88,81,21.5,21.9,21.5,977.6,991.8,977.3,1.2,256,1.8,-3.54,0.0 03/05/2020,06,23.3,23.7,23.3,91,91,88,21.7,21.7,21.5,976.9,977.6,976.7,0.4,245,1.8,-3.54,0.0 03/05/2020,07,23.0,23.6,23.0,91,91,89,21.4,21.9,21.3,976.7,977.0,976.4,0.9,257,1.9,-3.54,0.0 03/05/2020,08,23.4,23.4,22.9,90,92,90,21.7,21.7,21.3,976.8,976.9,976.5,0.4,294,1.6,-3.52,0.0 03/05/2020,09,23.0,23.5,23.0,88,90,87,21.0,21.6,20.9,992.1,992.1,976.7,0.8,263,1.6,-3.54,0.0 03/05/2020,10,23.2,23.2,22.5,91,92,88,21.6,21.6,20.8,993.0,993.0,992.2,0.1,226,1.5,29.03,0.0 03/05/2020,11,26.0,26.1,23.2,77,91,76,21.6,22.1,21.5,993.8,993.8,982.1,0.0,120,0.9,458.1,0.0 03/05/2020,12,26.6,27.0,25.5,76,80,76,22.1,22.5,21.4,982.7,994.3,982.6,0.3,121,2.3,765.3,0.0 03/05/2020,13,28.5,28.7,26.6,66,77,65,21.5,23.1,21.2,982.5,994.2,982.4,1.4,130,3.2,1219.,0.0 03/05/2020,14,31.1,31.1,28.5,55,66,53,21.0,21.8,19.9,982.3,982.7,982.1,1.2,129,3.7,1743.,0.0 
03/05/2020,15,31.6,31.8,30.7,50,55,49,19.8,20.8,19.2,992.9,993.5,982.2,1.1,119,5.1,1958.,0.0 03/05/2020,16,32.7,32.8,31.1,46,52,46,19.6,20.7,19.2,991.9,992.9,991.9,0.8,122,4.4,1953.,0.0 03/05/2020,17,32.3,33.3,32.0,44,49,42,18.6,20.2,18.2,990.7,991.9,979.0,2.6,133,5.9,2463.,0.0 03/05/2020,18,33.1,33.3,31.9,44,50,44,19.3,20.8,18.9,989.9,990.7,989.9,1.1,170,5.4,2033.,0.0 03/05/2020,19,32.4,33.2,32.2,47,47,44,19.7,20.0,18.7,989.5,989.9,989.5,2.4,152,5.2,1581.,0.0 03/05/2020,20,31.2,32.5,31.2,53,53,46,20.6,20.7,19.4,989.5,989.7,989.5,1.7,159,4.6,968.6,0.0 03/05/2020,21,29.7,32.0,29.7,62,62,51,21.8,21.8,20.5,989.7,989.7,989.4,0.8,154,4.0,414.2,0.0 03/05/2020,22,28.3,29.7,28.3,69,69,62,22.1,22.1,21.7,989.9,989.9,989.7,0.3,174,2.0,6.459,0.0 03/05/2020,23,26.9,28.5,26.9,75,75,67,22.1,22.5,21.7,990.5,990.5,989.8,0.2,183,1.0,-3.54,0.0 The second column is time (hour). I want to separate the dataset by morning (06-11), afternoon (12-17), evening (18-23) and night (00-05). How can I do it?
You can use pd.cut. Note the label order: with these bins the first interval (-1, 5] covers hours 0-5, so the first label must be 'night':

bins = [-1, 5, 11, 17, 24]
labels = ['night', 'morning', 'afternoon', 'evening']
df['day_part'] = pd.cut(df['hour'], bins=bins, labels=labels)
I added column names, including Hour for the second column. Then I used read_csv, which reads the source text and "drops" the leading zeroes, so the Hour column is just an int. To split the rows (add a column marking the diurnal period), use:

df['period'] = pd.cut(df.Hour, bins=[0, 6, 12, 18, 24], right=False,
                      labels=['night', 'morning', 'afternoon', 'evening'])

Then you can e.g. use groupby to process your groups. Because I used the right=False parameter, the bins are closed on the left side, so the bin limits are more natural (no need for -1 as an hour), and each bin limit (except the last) is just the starting hour of its period - quite natural notation.
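As a quick, self-contained check of the right=False binning described above (the Hour values here are made up, one per boundary case):

```python
# Boundary hours for each diurnal period: 0/5 night, 6/11 morning,
# 12/17 afternoon, 18/23 evening.
import pandas as pd

df = pd.DataFrame({'Hour': [0, 5, 6, 11, 12, 17, 18, 23]})
df['period'] = pd.cut(df.Hour, bins=[0, 6, 12, 18, 24], right=False,
                      labels=['night', 'morning', 'afternoon', 'evening'])
```

With right=False, each bin is [start, end), so hour 6 lands in 'morning' and hour 18 in 'evening', exactly matching the ranges in the question.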
PACF function in statsmodels.tsa.stattools gives numbers greater than 1 when using ywunbiased?
I have a dataframe which is of length 177 and I want to calculate and plot the partial autocorrelation function (PACF). I have the data imported etc. and I do:

from statsmodels.tsa.stattools import pacf

lags = 176
ys = pacf(data[key][array].diff(1).dropna(), alpha=0.05, nlags=lags, method="ywunbiased")
xs = range(lags + 1)

plt.figure()
plt.scatter(xs, ys[0])
plt.grid()
plt.vlines(xs, 0, ys[0])
plt.plot(ys[1])

The method used results in numbers greater than 1 for very long lags (90ish), which is incorrect, and I get:

RuntimeWarning: invalid value encountered in sqrt
  return rho, np.sqrt(sigmasq)

but since I can't see their source code I don't know what this means. To be honest, when I search for PACF, all the examples only carry out PACF up to 40 or 60 lags, and they never have any significant PACF after lag=2, so I couldn't compare to other examples either. But when I use:

method="ols"  # or method="ywmle"

the numbers are corrected, so it must be the algorithm they use to solve it. I tried importing inspect and the getsource method, but it's useless: it just shows that it uses another package and I can't find that. If you also know where the problem arises from, I would really appreciate the help.
For your reference, the values for data[key][array] are: [1131.130005, 1144.939941, 1126.209961, 1107.300049, 1120.680054, 1140.839966, 1101.719971, 1104.23999, 1114.579956, 1130.199951, 1173.819946, 1211.920044, 1181.27002, 1203.599976, 1180.589966, 1156.849976, 1191.5, 1191.329956, 1234.180054, 1220.329956, 1228.810059, 1207.01001, 1249.47998, 1248.290039, 1280.079956, 1280.660034, 1294.869995, 1310.609985, 1270.089966, 1270.199951, 1276.660034, 1303.819946, 1335.849976, 1377.939941, 1400.630005, 1418.300049, 1438.23999, 1406.819946, 1420.859985, 1482.369995, 1530.619995, 1503.349976, 1455.27002, 1473.98999, 1526.75, 1549.380005, 1481.140015, 1468.359985, 1378.550049, 1330.630005, 1322.699951, 1385.589966, 1400.380005, 1280.0, 1267.380005, 1282.829956, 1166.359985, 968.75, 896.23999, 903.25, 825.880005, 735.090027, 797.869995, 872.8099980000001, 919.1400150000001, 919.320007, 987.4799800000001, 1020.6199949999999, 1057.079956, 1036.189941, 1095.630005, 1115.099976, 1073.869995, 1104.48999, 1169.430054, 1186.689941, 1089.410034, 1030.709961, 1101.599976, 1049.329956, 1141.199951, 1183.26001, 1180.550049, 1257.640015, 1286.119995, 1327.219971, 1325.829956, 1363.609985, 1345.199951, 1320.640015, 1292.280029, 1218.890015, 1131.420044, 1253.300049, 1246.959961, 1257.599976, 1312.410034, 1365.680054, 1408.469971, 1397.910034, 1310.329956, 1362.160034, 1379.319946, 1406.579956, 1440.670044, 1412.160034, 1416.180054, 1426.189941, 1498.109985, 1514.680054, 1569.189941, 1597.569946, 1630.73999, 1606.280029, 1685.72998, 1632.969971, 1681.550049, 1756.540039, 1805.810059, 1848.359985, 1782.589966, 1859.449951, 1872.339966, 1883.949951, 1923.569946, 1960.22998, 1930.6700440000002, 2003.369995, 1972.290039, 2018.050049, 2067.560059, 2058.899902, 1994.9899899999998, 2104.5, 2067.889893, 2085.51001, 2107.389893, 2063.110107, 2103.840088, 1972.180054, 1920.030029, 2079.360107, 2080.409912, 2043.939941, 1940.2399899999998, 1932.22998, 2059.73999, 2065.300049, 2096.949951, 
2098.860107, 2173.600098, 2170.949951, 2168.27002, 2126.149902, 2198.810059, 2238.830078, 2278.8701170000004, 2363.639893, 2362.719971, 2384.199951, 2411.800049, 2423.409912, 2470.300049, 2471.649902, 2519.360107, 2575.26001, 2584.840088, 2673.610107, 2823.810059, 2713.830078, 2640.8701170000004, 2648.050049, 2705.27002, 2718.3701170000004, 2816.290039, 2901.52002, 2913.97998]
Your time series is pretty clearly not stationary, so the Yule-Walker assumptions are violated. More generally, the PACF is usually appropriate only for stationary time series. You might difference your data first, before considering the partial autocorrelations.
Pull random results from a database?
I have been coding in Python for 2 months or so, but I mostly ask for help from a more experienced friend when I run into these kinds of issues. I should also, before I begin, specify that I use Python solely for a personal project; any questions I ask will relate to each other through that. With those two things out of the way, I have a database of weaponry items that I created using the following script, made in Python 3.x:

#Start by making a list of every material, weapontype, and upgrade.
Materials = ("Unobtanium", "IvorySilk", "BoneLeather", "CottonWood", "Tin", "Copper", "Bronze", "Gold", "Cobalt", "Tungsten")
WeaponTypes = ("Knife", "Sword", "Greatsword", "Polearm", "Battlestaff", "Claw", "Cane", "Wand", "Talis", "Slicer", "Rod", "Bow", "Crossbow", "Handbow", "Pistol", "Mechgun", "Rifle", "Shotgun")
Upgrades = ("0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10")

ForgeWInputs = []
#Go through every material...
for m in Materials:
    #And in each material, go through every weapontype...
    for w in WeaponTypes:
        #And in every weapontype, go through each upgrade...
        for u in Upgrades:
            ForgeWInputs.append((m, w, u))

#We now have a list "ForgeWInputs", which contains the 3-element tuple needed to
#Forge any weapon. For example...
MAT={} MAT["UnobtaniumPD"]=0 MAT["UnobtaniumMD"]=0 MAT["UnobtaniumAC"]=0 MAT["UnobtaniumPR"]=0 MAT["UnobtaniumMR"]=0 MAT["UnobtaniumWT"]=0 MAT["UnobtaniumBuy"]=0 MAT["UnobtaniumSell"]=0 MAT["IvorySilkPD"]=0 MAT["IvorySilkMD"]=12 MAT["IvorySilkAC"]=3 MAT["IvorySilkPR"]=0 MAT["IvorySilkMR"]=3 MAT["IvorySilkWT"]=6 MAT["IvorySilkBuy"]=10 MAT["IvorySilkSell"]=5 MAT["CottonWoodPD"]=8 MAT["CottonWoodMD"]=8 MAT["CottonWoodAC"]=5 MAT["CottonWoodPR"]=0 MAT["CottonWoodMR"]=3 MAT["CottonWoodWT"]=6 MAT["CottonWoodBuy"]=14 MAT["CottonWoodSell"]=7 MAT["BoneLeatherPD"]=12 MAT["BoneLeatherMD"]=0 MAT["BoneLeatherAC"]=3 MAT["BoneLeatherPR"]=3 MAT["BoneLeatherMR"]=0 MAT["BoneLeatherWT"]=6 MAT["BoneLeatherBuy"]=10 MAT["BoneLeatherSell"]=5 MAT["TinPD"]=18 MAT["TinMD"]=6 MAT["TinAC"]=3 MAT["TinPR"]=5 MAT["TinMR"]=2 MAT["TinWT"]=12 MAT["TinBuy"]=20 MAT["TinSell"]=10 MAT["CopperPD"]=6 MAT["CopperMD"]=18 MAT["CopperAC"]=3 MAT["CopperPR"]=2 MAT["CopperMR"]=5 MAT["CopperWT"]=12 MAT["CopperBuy"]=20 MAT["CopperSell"]=10 MAT["BronzePD"]=10 MAT["BronzeMD"]=10 MAT["BronzeAC"]=5 MAT["BronzePR"]=3 MAT["BronzeMR"]=3 MAT["BronzeWT"]=15 MAT["BronzeBuy"]=30 MAT["BronzeSell"]=15 MAT["GoldPD"]=10 MAT["GoldMD"]=30 MAT["GoldAC"]=0 MAT["GoldPR"]=5 MAT["GoldMR"]=10 MAT["GoldWT"]=25 MAT["GoldBuy"]=50 MAT["GoldSell"]=25 MAT["CobaltPD"]=30 MAT["CobaltMD"]=10 MAT["CobaltAC"]=0 MAT["CobaltPR"]=10 MAT["CobaltMR"]=0 MAT["CobaltWT"]=25 MAT["CobaltBuy"]=50 MAT["CobaltSell"]=25 MAT["TungstenPD"]=20 MAT["TungstenMD"]=20 MAT["TungstenAC"]=0 MAT["TungstenPR"]=7 MAT["TungstenMR"]=7 MAT["TungstenWT"]=20 MAT["TungstenBuy"]=70 MAT["TungstenSell"]=35 WEP={} WEP["KnifePD"]=0.5 WEP["KnifeMD"]=0.5 WEP["KnifeAC"]=1.25 WEP["SwordPD"]=1.0 WEP["SwordMD"]=1.0 WEP["SwordAC"]=1.0 WEP["GreatswordPD"]=1.67 WEP["GreatswordMD"]=0.67 WEP["GreatswordAC"]=0.5 WEP["PolearmPD"]=1.15 WEP["PolearmMD"]=1.15 WEP["PolearmAC"]=1.15 WEP["CanePD"]=1.15 WEP["CaneMD"]=1.15 WEP["CaneAC"]=0.7 WEP["ClawPD"]=1.1 WEP["ClawMD"]=1.1 WEP["ClawAC"]=0.8 
WEP["BattlestaffPD"]=1.15 WEP["BattlestaffMD"]=1 WEP["BattlestaffAC"]=1.25 WEP["TalisPD"]=1.15 WEP["TalisMD"]=0.7 WEP["TalisAC"]=1.15 WEP["WandPD"]=0.0 WEP["WandMD"]=1 WEP["WandAC"]=1.33 WEP["RodPD"]=0.0 WEP["RodMD"]=1.67 WEP["RodAC"]=0.67 WEP["SlicerPD"]=0.67 WEP["SlicerMD"]=0.67 WEP["SlicerAC"]=0.67 WEP["BowPD"]=1.15 WEP["BowMD"]=1.15 WEP["BowAC"]=0.85 WEP["CrossbowPD"]=1.4 WEP["CrossbowMD"]=1.4 WEP["CrossbowAC"]=1 WEP["PistolPD"]=0.65 WEP["PistolMD"]=0.65 WEP["PistolAC"]=1.15 WEP["MechgunPD"]=0.2 WEP["MechgunMD"]=0.2 WEP["MechgunAC"]=1.5 WEP["ShotgunPD"]=1.3 WEP["ShotgunMD"]=1.3 WEP["ShotgunAC"]=0.4 WEP["RiflePD"]=0.75 WEP["RifleMD"]=0.75 WEP["RifleAC"]=1.75 WEP["HandbowPD"]=0.8 WEP["HandbowMD"]=0.8 WEP["HandbowAC"]=1.2 UP={} UP["0PD"]=1.0 UP["1PD"]=1.1 UP["2PD"]=1.2 UP["3PD"]=1.3 UP["4PD"]=1.4 UP["5PD"]=1.5 UP["6PD"]=1.6 UP["7PD"]=1.7 UP["8PD"]=1.8 UP["9PD"]=1.9 UP["10PD"]=2.0 UP["0MD"]=1.0 UP["1MD"]=1.1 UP["2MD"]=1.2 UP["3MD"]=1.3 UP["4MD"]=1.4 UP["5MD"]=1.5 UP["6MD"]=1.6 UP["7MD"]=1.7 UP["8MD"]=1.8 UP["9MD"]=1.9 UP["10MD"]=2.0 UP["0AC"]=1.0 UP["1AC"]=1.1 UP["2AC"]=1.2 UP["3AC"]=1.3 UP["4AC"]=1.4 UP["5AC"]=1.5 UP["6AC"]=1.6 UP["7AC"]=1.7 UP["8AC"]=1.8 UP["9AC"]=1.9 UP["10AC"]=2.0 UP["0PR"]=1.0 UP["1PR"]=1.1 UP["2PR"]=1.2 UP["3PR"]=1.3 UP["4PR"]=1.4 UP["5PR"]=1.5 UP["6PR"]=1.6 UP["7PR"]=1.7 UP["8PR"]=1.8 UP["9PR"]=1.9 UP["10PR"]=2.0 UP["0MR"]=1.0 UP["1MR"]=1.1 UP["2MR"]=1.2 UP["3MR"]=1.3 UP["4MR"]=1.4 UP["5MR"]=1.5 UP["6MR"]=1.6 UP["7MR"]=1.7 UP["8MR"]=1.8 UP["9MR"]=1.9 UP["10MR"]=2.0 UP["0WT"]=1.0 UP["1WT"]=0.95 UP["2WT"]=0.9 UP["3WT"]=0.85 UP["4WT"]=0.8 UP["5WT"]=0.75 UP["6WT"]=0.7 UP["7WT"]=0.65 UP["8WT"]=0.6 UP["9WT"]=0.55 UP["10WT"]=0.5 def ForgeW(Material,WeaponType,UpgradeLevel): """The ForgeW function Forges a Weapon from its base components into a lethal tool.""" #Get the appropriate material stats... OrePD=MAT[Material+"PD"] OreMD=MAT[Material+"MD"] OreAC=MAT[Material+"AC"] #And weapon type stats... 
    SmithPD = WEP[WeaponType + "PD"]
    SmithMD = WEP[WeaponType + "MD"]
    SmithAC = WEP[WeaponType + "AC"]
    #And apply the upgrade...
    UpgradePD = UP[UpgradeLevel + "PD"]
    UpgradeMD = UP[UpgradeLevel + "MD"]
    UpgradeAC = UP[UpgradeLevel + "AC"]
    #Then, multiply them all together.
    ProductPD = (OrePD * SmithPD) * UpgradePD
    ProductMD = (OreMD * SmithMD) * UpgradeMD
    ProductAC = (OreAC * SmithAC) * UpgradeAC
    return (ProductPD, ProductMD, ProductAC)

#Recall that ForgeW simply needs its three inputs, which we have a list of.
#So, let's make our database of weapon information.
OmniWeapData = {}
#Go through every set of inputs we have...
for Inputs in ForgeWInputs:
    #And create a key in the dictionary by combining their three names. Then, set that
    #key equal to whatever ForgeW returns when those three inputs are put in.
    OmniWeapData[Inputs[0] + Inputs[1] + Inputs[2]] = ForgeW(Inputs[0], Inputs[1], Inputs[2])

I would like to refer to the database created by this code and pull out weapons at random, and frankly I have no idea how. As an example of what I would like to do... well, the code in question should spit out a certain number of results based on the complete products of the ForgeW function: if I specify, either within the code or through an input, that I would like 3 outputs, it might output a GoldKnife0, a TinPolearm5, and a CobaltGreatsword10. If I were to run the code again, it should dispense new equipment, not the same three every time. I apologize if this is too much or too little data; it's my first time asking a question here.
"Take this... it may help you on your quest." There is a library called random with a method called choice(). e.g. import random random.choice([1,2,3]) >>> 2 It sounds like you need one item from Materials, one item from WeaponTypes, and one from Upgrades. Also, rarely is there ever a need for a triple nested FOR statement. This should get you started.
Use matlab to search excel data file for time range and copy data into variable
In my Excel file I have a time column in 12-hour clock time and a bunch of data columns. I have pasted a snippet of it in this post since I can't attach a file. I am trying to build a GUI that will take an input from the user like so:

Start time: 7:29:32 AM
End time: 7:29:51 AM

Then do the following:
- calculate the time that has passed in seconds (should be just a row count; data is gathered once a second)
- copy the data in the time range from the "Data 3" column into a variable
- perform other calculations on the copied data as needed

I am having some trouble figuring out how to search the time data and find its location, since it imports as text with xlsread. Any ideas? The data looks like this:

Time Data 1 Data 2 Data 3 Data 4 Data 5
7:29:25 AM 0.878556385 0.388400561 0.076890401 0.93335277 0.884750618
7:29:26 AM 0.695838393 0.712762566 0.014814069 0.81264949 0.450303694
7:29:27 AM 0.250846937 0.508617941 0.24802015 0.722457624 0.47119616
7:29:28 AM 0.206189924 0.82970364 0.819163787 0.060932817 0.73455323
7:29:29 AM 0.161844331 0.768214077 0.154097877 0.988201094 0.951520263
7:29:30 AM 0.704242494 0.371877481 0.944482485 0.79207359 0.57390951
7:29:31 AM 0.072028024 0.120263127 0.577396985 0.694153791 0.341824004
7:29:32 AM 0.241817775 0.32573323 0.484644494 0.377938298 0.090122672
7:29:33 AM 0.500962945 0.540808907 0.582958676 0.043377373 0.041274613
7:29:34 AM 0.087742217 0.596508236 0.020250297 0.926901109 0.45960323
7:29:35 AM 0.268222071 0.291034947 0.598887588 0.575571111 0.136424853
7:29:36 AM 0.42880255 0.349597405 0.936733938 0.232128788 0.555528823
7:29:37 AM 0.380425154 0.162002488 0.208550466 0.776866494 0.79340504
7:29:38 AM 0.727940393 0.622546124 0.716007768 0.660480612 0.02463804
7:29:39 AM 0.582772435 0.713406643 0.306544291 0.225257421 0.043552277
7:29:40 AM 0.371156954 0.163821476 0.780515577 0.032460418 0.356949005
7:29:42 AM 0.484167263 0.377878242 0.044189636 0.718147456 0.603177625
7:29:43 AM 0.294017186 0.463360581 0.962296024 0.504029061 0.183131098
7:29:44 AM 0.95635086 0.367849494 0.362230918 0.984421096 0.41587606
7:29:45 AM 0.198645523 0.754955312 0.280338922 0.79706146 0.730373691
7:29:46 AM 0.058483961 0.46774544 0.86783339 0.147418954 0.941713252
7:29:47 AM 0.411193343 0.340857813 0.162066261 0.943124515 0.722124394
7:29:48 AM 0.389312994 0.129281042 0.732723258 0.803458815 0.045824426
7:29:49 AM 0.549633038 0.73956852 0.542532728 0.618321989 0.358525184
7:29:50 AM 0.269925317 0.501399748 0.938234302 0.997577871 0.318813506
7:29:51 AM 0.798825842 0.24038537 0.958224157 0.660124357 0.07469288
7:29:52 AM 0.963581196 0.390150081 0.077448543 0.294604314 0.903519943
7:29:53 AM 0.890540963 0.50284339 0.229976565 0.664538451 0.926438543
7:29:54 AM 0.46951573 0.192568637 0.506730373 0.060557482 0.922857391
7:29:55 AM 0.56552394 0.952136998 0.739438663 0.107518765 0.911045415
7:29:56 AM 0.433149875 0.957190309 0.475811126 0.855705733 0.942255155

and this is the code I am using:

[Data,Text] = xlsread('C:\Users\data.xlsx',2);
IndexStart = strmatch('7:29:29 AM',Text,'exact'); % start time
IndexEnd = strmatch('2:30:29 PM',Text,'exact'); % end time
seconds = IndexEnd - IndexStart;
TestData = Data([IndexStart:IndexEnd],:);
You probably need to:
- Use strfind to find the relevant string in the imported data.
- Use datenum to convert the dates to serial date numbers, to be able to calculate the elapsed time between the two points.
It would help if you posted your code so far though.
EDIT based on comments: Here's what I would do for cycling through the list of start and end times:

[Data,Text] = xlsread('C:\Users\data.xlsx',2);
start_times = {'7:29:29 AM','7:29:35 AM','7:29:44 AM','7:29:49 AM'}; % etc...
end_times = {'2:30:29 PM','2:30:59 PM','2:31:22 PM','2:32:49 PM'}; % etc...
elapsed_time = zeros(length(start_times),1);
TestData = cell(length(start_times),1); % need a cell array because data can/will be of unequal lengths
for k = 1:length(start_times)
    IndexStart = strmatch(start_times{k},Text,'exact'); % start time
    IndexEnd = strmatch(end_times{k},Text,'exact'); % end time
    elapsed_time(k) = IndexEnd - IndexStart;
    TestData{k} = Data([IndexStart:IndexEnd],:);
end
Use the "Import Data" from the Variable Tag in the Home menu. There you can set how you want the data to be imported like. With or without heading and the format.