Pine Script - security() function not showing the correct value on different timeframes
I'm a newbie trying to get Ichimoku data on the 4-hour timeframe, but it is not showing the correct value when I shift.
//@version=4
study(title="test1", overlay=true)
conversionPeriods = input(9, minval=1, title="Conversion Line Length")
basePeriods = input(26, minval=1, title="Base Line Length")
laggingSpan2Periods = input(52, minval=1, title="Leading Span B Length")
displacement = input(26, minval=1, title="Displacement")
donchian_M240(len) => avg(security(syminfo.tickerid, 'D' , lowest(len)), security(syminfo.tickerid, 'D', highest(len)))
tenkanSen_M240 = donchian_M240(conversionPeriods)
kijunSen_M240 = donchian_M240(basePeriods)
senkoSpanA_M240 = avg(tenkanSen_M240, kijunSen_M240)
plot(senkoSpanA_M240[25], title="senkoSpanA_M240[25]")
The value of senkoSpanA_M240[25] keeps changing when I'm on M5, M15, M30, H1, H4 or D1.
Can you help please?
The reason it keeps changing when you change timeframes is that you are using a historical bar reference [25] on your senkoSpanA_M240.
This means it will look for the senkoSpanA_M240 value that occurred 25 bars ago.
Whichever timeframe you select, it looks back 25 bars of that timeframe before performing the calculation, so the same offset covers a different span of time on each chart.
What exactly are you trying to achieve by using the [25]?
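To make the point above concrete, it helps to compare how much wall-clock time the same 25-bar offset spans on each chart timeframe. A small illustrative Python sketch (the timeframe-to-minutes table is just the standard bar lengths):

```python
# A fixed [25] historical reference counts chart bars, not 4h/daily bars,
# which is why the plotted value shifts when the chart timeframe changes.
BAR_MINUTES = {"M5": 5, "M15": 15, "M30": 30, "H1": 60, "H4": 240, "D1": 1440}

def lookback_hours(timeframe, bars_ago=25):
    """Hours of history covered by `bars_ago` chart bars on `timeframe`."""
    return BAR_MINUTES[timeframe] * bars_ago / 60

spans = {tf: lookback_hours(tf) for tf in BAR_MINUTES}
# 25 bars is only ~2 hours of history on M5, but 25 full days on D1
```

So a fixed displacement like [25] only means "25 daily bars" when the chart itself is on D1; on any other timeframe it refers to a different stretch of history.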
Related
What is the simplest way to complete a function on every row of a large table?
so I want to do a Fisher exact test (one-sided) on every row of a 3000+ row table with a format matching the example below:

gene    sample_alt  sample_ref  population_alt  population_ref
One     4           556         770             37000
Two     5           555         771             36999
Three   6           554         772             36998

I would ideally like to make another column of the table equivalent to

[(4+556)!(4+770)!(770+37000)!(556+37000)!] / [4!(556!)770!(37000!)(4+556+770+37000)!]

for the first row of data, and so on for each row of the table. I know how to do a Fisher test in R for simple 2x2 tables, but I don't know how to apply the fisher.test() function to each row of a large table. I also can't use an Excel formula, because the factorials get so big that they exceed Excel's digit limit and produce a #NUM error. What's the best way to do this? Thanks in advance!
Beginning with a tab-delimited text file on the desktop (table.txt) with the same format as shown in the question:

if(!require(psych)){install.packages("psych")}

multiFisher = function(file="Desktop/table.txt", saveit=TRUE,
                       outfile="Desktop/table.csv",
                       progress=T, verbose=FALSE, digits=3, ...)
{
  require(psych)

  Data = read.table(file, skip=1, header=F,
                    col.names=c("Gene", "MD", "WTD", "MC", "WTC"), ...)

  if(verbose){print(str(Data))}

  Data$Fisher.p = NA
  Data$phi      = NA
  Data$OR1      = format(0.123, nsmall=3)
  Data$OR2      = NA

  if(progress){cat("\n")}

  for(i in 1:length(Data$Gene)){
    Matrix = matrix(c(Data$WTC[i], Data$MC[i], Data$WTD[i], Data$MD[i]), nrow=2)
    Fisher = fisher.test(Matrix, alternative = 'greater')
    Data$Fisher.p[i] = signif(Fisher$p.value, digits=digits)
    Data$phi[i] = phi(Matrix, digits=digits)
    OR1 = (Data$WTC[i]*Data$MD[i])/(Data$MC[i]*Data$WTD[i])
    OR2 = 1 / OR1
    Data$OR1[i] = format(signif(OR1, digits=digits), nsmall=3)
    Data$OR2[i] = signif(OR2, digits=digits)
    if(progress){cat(".")}
  }

  if(progress){cat("\n"); cat("\n")}
  if(saveit){write.csv(Data, outfile)}
  return(Data)
}

multiFisher()
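If R is not an option, the same row-wise computation can be sketched in Python with exact integer arithmetic, so the huge factorials that overflow Excel are not a problem. This is a plain hypergeometric implementation rather than scipy.stats.fisher_exact, and the (alt, ref, population_alt, population_ref) orientation of the 2x2 table is an assumption about the data:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided (greater) Fisher exact p-value for the 2x2 table
    [[a, b], [c, d]]: the sum of hypergeometric point probabilities
    P(k) = C(r1, k) * C(n - r1, c1 - k) / C(n, c1) for all tables at
    least as extreme (k >= a) given the fixed margins."""
    r1, c1, n = a + b, a + c, a + b + c + d
    denom = comb(n, c1)
    return sum(comb(r1, k) * comb(n - r1, c1 - k)
               for k in range(a, min(r1, c1) + 1)) / denom

# The example rows from the question, one p-value per gene.
rows = [("One", 4, 556, 770, 37000),
        ("Two", 5, 555, 771, 36999),
        ("Three", 6, 554, 772, 36998)]

pvals = {gene: fisher_one_sided(alt, ref, pop_alt, pop_ref)
         for gene, alt, ref, pop_alt, pop_ref in rows}
```

For a full 3000-row table the loop would simply read the rows from the file; math.comb keeps everything in exact integers, so there is no precision or overflow limit.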
Convert user input to a time that changes a boolean value for the duration entered?
I'm working on this side-project game to grasp Python better. I'm trying to have the user enter the amount of time the character has to spend busy, then not allow the user to do the same thing until the original time entered has elapsed. I have tried a few methods with varying error results from my noob ways (timestamps, converting input to int and time in different spots, timedelta).

def Gold_mining():
    while P.notMining:
        print('Welcome to the Crystal mines kid.\nYou will be paid in gold for your labour,\nif lucky you may get some skill points or bonus finds...\nGoodluck in there.')
        print('How long do you wish to enter for?')
        time_mining = int(input("10 Gold Per hour. Max 8 hours --> "))
        if time_mining > 0 and time_mining <= 8:
            time_started = current_time
            print(f'You will spend {time_mining} hours digging in the mines.')
            P.gold += time_mining * 10
            print(P.gold)
            P.notMining = False
            End_Time = (current_time + timedelta(hours = 2))
            print(f'{End_Time} time you exit the mines...')
        elif time_mining > 8:
            print("You can't possibly mine for that long kid, go back and think about it.")
        else:
            print('Invalid')

After the set amount of time I would like it to change the bool value back to True so that you can mine again. "Crystal Mining" is mapped to a different key for testing, so my output says "Inventory" but would say "Crystal Mining" when it works properly, and currently looks like this:

*** Page One ***
Intro Page 02:15:05
1 Character Stats
2 Rename Character
3 Inventory
4 Change Element
5 Menu
6 Exit
Num: 3
Welcome to the Crystal mines kid.
You will be paid in gold for your labour,
if lucky you may get some skill points or bonus finds...
Goodluck in there.
How long do you wish to enter for?
10 Gold Per hour. Max 8 hours --> 1
You will spend 1 hours digging in the mines.
60
Traceback (most recent call last):
  File "H:\Python ideas\input_as_always.py", line 176, in <module>
    intro.pageInput()
  File "H:\Python ideas\input_as_always.py", line 45, in pageInput
    self.pageOptions[pInput]['entry']()
  File "H:\Python ideas\input_as_always.py", line 134, in Gold_mining
    End_Time = (current_time + timedelta(hours = 2))
TypeError: can only concatenate str (not "datetime.timedelta") to str
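The traceback pins down the bug: current_time is a string (it was formatted for display, like the "02:15:05" in the menu header), so adding a timedelta to it fails. A minimal sketch of the usual fix, keeping a real datetime for arithmetic and formatting only when printing (the function and variable names are illustrative, not from the game code):

```python
from datetime import datetime, timedelta

def mining_end_time(hours, now=None):
    """Return (start, end) for a mining session of `hours` hours.
    Arithmetic must use datetime objects; adding a timedelta to a
    pre-formatted string raises exactly the TypeError in the traceback."""
    start = now or datetime.now()          # a datetime, not a str
    end = start + timedelta(hours=hours)   # datetime + timedelta works
    return start, end

start, end = mining_end_time(2)
print(f"{end:%H:%M:%S} time you exit the mines...")  # format only for display
```

To re-enable mining, compare datetime.now() against the stored end datetime each time the option is chosen, and flip the boolean back once now() >= end.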
Failing to use sumproduct on date ranges with multiple conditions [Python]
From the replacement data table (below in the image), I am trying to incorporate the solbox product replacements into the time-series data format (above in the image). I need to extract the number of consumers per day from this information. What I need to find out:

On a specific date, how many solbox products were active
On a specific date, how many solbox products (belonging to consumers) were active

I have used this formula in Excel but cannot implement it properly in Python:

=SUMPRODUCT((Record_Solbox_Replacement!$O$2:$O$1367 = "consumer") * (A475>=Record_Solbox_Replacement!$L$2:$L$1367)*(A475<Record_Solbox_Replacement!$M$2:$M$1367))

I tried in Python:

timebase_df['date'] = pd.date_range(start = replace_table_df['solbox_started'].min(), end = replace_table_df['solbox_started'].max(), freq = frequency)
timebase_df['date_unix'] = timebase_df['date'].astype(np.int64) // 10**9
timebase_df['no_of_solboxes'] = ((timebase_df['date_unix']>=replace_table_df['started'].to_numpy()) & (timebase_df['date_unix'] < replace_table_df['ended'].to_numpy() & replace_table_df['customer_type'] == 'customer']))

ERROR:

~\Anaconda3\Anaconda4\lib\site-packages\pandas\core\ops\array_ops.py in comparison_op(left, right, op)
    232     # The ambiguous case is object-dtype.  See GH#27803
    233     if len(lvalues) != len(rvalues):
--> 234         raise ValueError("Lengths must match to compare")
    235
    236     if should_extension_dispatch(lvalues, rvalues):

ValueError: Lengths must match to compare

Can someone help me please? I can explain in the comment section if I have missed something.
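The ValueError comes from comparing two columns of different lengths element-wise: each date must be compared against every replacement record, which is an outer (broadcast) comparison, not the element-wise one in the attempted code. A dependency-free sketch of the SUMPRODUCT logic; the tuple layout and all values here are illustrative, not the real table:

```python
# Count, per date, how many records are active (started <= date < ended)
# AND of type "consumer" -- the Python equivalent of the Excel SUMPRODUCT.
records = [
    # (started, ended, customer_type) -- unix seconds, made-up values
    (100, 400, "consumer"),
    (150, 300, "consumer"),
    (120, 500, "office"),
]

def active_consumers(date_unix, records):
    """Number of 'consumer' records whose interval covers date_unix."""
    return sum(1 for started, ended, kind in records
               if kind == "consumer" and started <= date_unix < ended)

counts = {d: active_consumers(d, records) for d in (110, 200, 350, 600)}
```

With numpy the same outer comparison can be vectorized via broadcasting, e.g. `(dates[:, None] >= started) & (dates[:, None] < ended) & (kinds == "consumer")` summed along axis 1, so every date is compared against every record at once.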
Parsing heterogeneous data from a text file in Python
I am trying to parse raw data results from a text file into an organised tuple but am having trouble getting it right. My raw data from the text file looks something like this:

Episode Cumulative Results

EpisodeXD0281119
Date collected21/10/2019
Time collected10:00
Real time PCR for M. tuberculosis (Xpert MTB/Rif Ultra):
PCR result Mycobacterium tuberculosis complex NOT detected
Bacterial Culture:
Bottle: Type FAN Aerobic Plus
Result No growth after 5 days

EpisodeST32423457
Date collected23/02/2019
Time collected09:00
Gram Stain:
Neutrophils Occasional
Gram positive bacilli Moderate (2+)
Gram negative bacilli Numerous (3+)
Gram negative cocci Moderate (2+)

EpisodeST23423457
Date collected23/02/2019
Time collected09:00
Bacterial Culture:
A heavy growth of 1) Klebsiella pneumoniae subsp pneumoniae (KLEPP) ensure that this organism does not spread in the ward/unit.
A heavy growth of 2) Enterococcus species (ENCSP)

Antibiotic/Culture    KLEPP ENCSP
Trimethoprim-sulfam   R
Ampicillin / Amoxic   R     S
Amoxicillin-clavula   R
Ciprofloxacin         R
Cefuroxime (Parente   R
Cefuroxime (Oral)     R
Cefotaxime / Ceftri   R
Ceftazidime           R
Cefepime              R
Gentamicin            S
Piperacillin/tazoba   R
Ertapenem             R
Imipenem              S
Meropenem             R

S - Sensitive ; I - Intermediate ; R - Resistant ; SDD - Sensitive Dose Dependant

Comment for organism KLEPP:
** Please note: this is a carbapenem-RESISTANT organism. Although some carbapenems may appear susceptible in vitro, these agents should NOT be used as MONOTHERAPY in the treatment of this patient. **
Please isolate this patient and practice strict contact precautions. Please inform Infection Prevention and Control as contact screening might be indicated.
For further advice on the treatment of this isolate, please contact.
The currently available laboratory methods for performing colistin susceptibility results are unreliable and may not predict clinical outcome.
Based on published data and clinical experience, colistin is a suitable therapeutic alternative for carbapenem resistant Acinetobacter spp, as well as carbapenem resistant Enterobacteriaceae. If colistin is clinically indicated, please carefully assess clinical response.

EpisodeST234234057
Date collected23/02/2019
Time collected09:00
Authorised by xxxx on 27/02/2019 at 10:35
MIC by E-test:
Organism Klebsiella pneumoniae (KLEPN)
Antibiotic Meropenem
MIC corrected 4 ug/mL
MIC interpretation Resistant
Antibiotic Imipenem
MIC corrected 1 ug/mL
MIC interpretation Sensitive
Antibiotic Ertapenem
MIC corrected 2 ug/mL
MIC interpretation Resistant

EpisodeST23423493
Date collected18/02/2019
Time collected03:15
Potassium 4.4 mmol/L 3.5 - 5.1

EpisodeST45445293
Date collected18/02/2019
Time collected03:15
Creatinine 32 L umol/L 49 - 90
eGFR (MDRD formula) >60 mL/min/1.73 m2
Creatinine 28 L umol/L 49 - 90
eGFR (MDRD formula) >60 mL/min/1.73 m2

Essentially the pattern is that ALL information starts with a unique EPISODE NUMBER, followed by a DATE and TIME and then the result of whatever test. This is the pattern throughout. What I am trying to parse into my tuple is the date, time, name of the test and the result - whatever it might be.
I have the following code:

with open(filename) as f:
    data = f.read()

data = data.splitlines()
DS = namedtuple('DS', 'date time name value')
parsed = list()

idx_date = [i for i, r in enumerate(data) if r.strip().startswith('Date')]

for start, stop in zip(idx_date[:-1], idx_date[1:]):
    chunk = data[start:stop]
    date = time = name = value = None
    for row in chunk:
        if not row:
            continue
        row = row.strip()
        if row.startswith('Episode'):
            continue
        if row.startswith('Date'):
            _, date = row.split()
            date = date.replace('collected', '')
        elif row.startswith('Time'):
            _, time = row.split()
            time = time.replace('collected', '')
        else:
            name, value, *_ = row.split()
            print(name)
    parsed.append(DS(date, time, name, value))

print(parsed)

My problem is that I cannot find a way to parse the heterogeneous test RESULT into a form I can use later, for example for the tuple DS('DS', 'date time name value'):

DATE = 21/10/2019
TIME = 10:00
NAME = Real time PCR for M. tuberculosis or Potassium
RESULT = Negative or 4.7

Any advice appreciated. I have hit a brick wall.
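One way to tame the heterogeneity is to treat the value as "everything from the first numeric token onward, if any" and the name as everything before it; lines with no number (qualitative results, section headers) keep the whole line as the name. A minimal sketch on two of the sample lines; the heuristic is an assumption and the culture blocks will need further special-casing:

```python
import re

def split_result(row):
    """Split a result line into (name, value).

    If the line contains a number, the name is everything before the
    first numeric token and the value is the rest (number plus units and
    reference range).  Date/Time/Episode lines and purely qualitative
    lines are returned whole, with value None.
    """
    m = re.search(r'\d+(?:\.\d+)?', row)
    if m and not row.startswith(('Date', 'Time', 'Episode')):
        return row[:m.start()].strip(), row[m.start():].strip()
    return row.strip(), None

name, value = split_result("Potassium 4.4 mmol/L 3.5 - 5.1")
```

This slots into the existing loop's else branch in place of `name, value, *_ = row.split()`, which currently breaks multi-word test names apart.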
python3: Split time series by diurnal periods
I have the following dataset:

01/05/2020,00,26.3,27.5,26.3,80,81,73,22.5,22.7,22.0,993.7,993.7,993.0,0.0,178,1.2,-3.53,0.0
01/05/2020,01,26.1,26.8,26.1,79,80,75,22.2,22.4,21.9,994.4,994.4,993.7,1.1,22,2.0,-3.54,0.0
01/05/2020,02,25.4,26.1,25.4,80,81,79,21.6,22.3,21.6,994.7,994.7,994.4,0.1,335,2.3,-3.54,0.0
01/05/2020,03,23.3,25.4,23.3,90,90,80,21.6,21.8,21.5,994.7,994.8,994.6,0.9,263,1.5,-3.54,0.0
01/05/2020,04,22.9,24.2,22.9,89,90,86,21.0,22.1,21.0,994.2,994.7,994.2,0.3,268,2.0,-3.54,0.0
01/05/2020,05,22.8,23.1,22.8,90,91,89,21.0,21.4,20.9,993.6,994.2,993.6,0.7,264,1.5,-3.54,0.0
01/05/2020,06,22.2,22.8,22.2,92,92,90,20.9,21.2,20.8,993.6,993.6,993.4,0.8,272,1.6,-3.54,0.0
01/05/2020,07,22.6,22.6,22.0,91,93,91,21.0,21.2,20.7,993.4,993.6,993.4,0.4,284,2.3,-3.49,0.0
01/05/2020,08,21.6,22.6,21.5,92,92,90,20.2,20.9,20.1,993.8,993.8,993.4,0.4,197,2.1,-3.54,0.0
01/05/2020,09,22.0,22.1,21.5,92,93,92,20.7,20.8,20.2,994.3,994.3,993.7,0.0,125,2.1,-3.53,0.0
01/05/2020,10,22.7,22.7,21.9,91,92,91,21.2,21.2,20.5,995.0,995.0,994.3,0.0,354,0.0,70.99,0.0
01/05/2020,11,25.0,25.0,22.7,83,91,82,21.8,22.1,21.1,995.5,995.5,995.0,0.8,262,1.5,744.8,0.0
01/05/2020,12,27.9,28.1,24.9,72,83,70,22.3,22.8,21.6,996.1,996.1,995.5,0.7,228,1.9,1392.,0.0
01/05/2020,13,30.4,30.4,27.7,58,72,55,21.1,22.6,20.4,995.9,996.2,995.9,1.6,134,3.7,1910.,0.0
01/05/2020,14,31.7,32.3,30.1,50,58,48,20.2,21.3,19.7,995.8,996.1,995.8,3.0,114,5.4,2577.,0.0
01/05/2020,15,32.9,33.2,31.8,44,50,43,19.1,20.5,18.6,994.9,995.8,994.9,0.0,128,5.6,2853.,0.0
01/05/2020,16,33.2,34.4,32.0,46,48,41,20.0,20.0,18.2,994.0,994.9,994.0,0.0,125,4.3,2700.,0.0
01/05/2020,17,33.1,34.5,32.7,44,46,39,19.2,19.9,18.5,993.4,994.1,993.4,0.0,170,1.6,2806.,0.0
01/05/2020,18,33.6,34.2,32.6,41,47,40,18.5,20.0,18.3,992.6,993.4,992.6,0.0,149,0.0,2319.,0.0
01/05/2020,19,33.5,34.7,32.1,43,49,39,19.2,20.4,18.3,992.3,992.6,992.3,0.3,168,4.1,1907.,0.0
01/05/2020,20,32.1,33.9,32.1,49,51,41,20.2,20.7,18.5,992.4,992.4,992.3,0.1,192,3.7,1203.,0.0
01/05/2020,21,29.9,32.2,29.9,62,62,49,21.8,21.9,20.2,992.3,992.4,992.2,0.0,188,2.9,408.0,0.0
01/05/2020,22,28.5,29.9,28.4,67,67,62,21.8,22.0,21.7,992.5,992.5,992.3,0.4,181,2.3,6.817,0.0
01/05/2020,23,27.8,28.5,27.8,71,71,66,22.1,22.1,21.5,993.1,993.1,992.5,0.0,225,1.6,-3.39,0.0
02/05/2020,00,27.4,28.2,27.3,75,75,68,22.5,22.5,21.7,993.7,993.7,993.1,0.5,139,1.5,-3.54,0.0
02/05/2020,01,27.3,27.7,27.3,72,75,72,21.9,22.6,21.9,994.3,994.3,993.7,0.0,126,1.1,-3.54,0.0
02/05/2020,02,25.4,27.3,25.2,85,85,72,22.6,22.8,21.9,994.4,994.5,994.3,0.1,256,2.6,-3.54,0.0
02/05/2020,03,25.5,25.6,25.3,84,85,82,22.5,22.7,22.1,994.3,994.4,994.2,0.0,329,0.7,-3.54,0.0
02/05/2020,04,24.5,25.5,24.5,86,86,82,22.0,22.5,21.9,993.9,994.3,993.9,0.0,290,1.2,-3.54,0.0
02/05/2020,05,24.0,24.5,23.5,87,88,86,21.6,22.1,21.3,993.6,993.9,993.6,0.7,285,1.3,-3.54,0.0
02/05/2020,06,23.7,24.1,23.7,87,87,85,21.3,21.6,21.3,993.1,993.6,993.1,0.1,305,1.1,-3.51,0.0
02/05/2020,07,22.7,24.1,22.5,91,91,86,21.0,21.7,20.7,993.1,993.3,993.1,0.6,220,1.1,-3.54,0.0
02/05/2020,08,22.9,22.9,22.6,92,92,91,21.5,21.5,21.0,993.2,993.2,987.6,0.0,239,1.5,-3.53,0.0
02/05/2020,09,22.9,23.0,22.8,93,93,92,21.7,21.7,21.4,993.6,993.6,993.2,0.0,289,0.4,-3.53,0.0
02/05/2020,10,23.5,23.5,22.8,92,93,92,22.1,22.1,21.6,994.3,994.3,993.6,0.0,256,0.0,91.75,0.0
02/05/2020,11,26.1,26.2,23.5,80,92,80,22.4,23.1,22.2,995.0,995.0,994.3,1.1,141,1.9,789.0,0.0
02/05/2020,12,28.7,28.7,26.1,69,80,68,22.4,22.7,22.1,995.5,995.5,995.0,0.0,116,2.2,1468.,0.0
02/05/2020,13,31.4,31.4,28.6,56,69,56,21.6,22.9,21.0,995.5,995.7,995.4,0.0,65,0.0,1762.,0.0
02/05/2020,14,32.1,32.4,30.6,48,58,47,19.8,22.0,19.3,995.0,995.6,990.6,0.0,105,0.0,2657.,0.0
02/05/2020,15,34.0,34.2,31.7,43,48,42,19.6,20.1,18.6,993.9,995.0,993.9,3.0,71,6.0,2846.,0.0
02/05/2020,16,34.7,34.7,32.3,38,48,38,18.4,20.3,18.3,992.7,993.9,992.7,1.4,63,6.3,2959.,0.0
02/05/2020,17,34.0,34.7,32.7,42,46,38,19.2,20.0,18.4,991.7,992.7,991.7,2.2,103,4.8,2493.,0.0
02/05/2020,18,34.3,34.7,33.6,41,42,38,19.1,19.4,18.0,991.2,991.7,991.2,2.0,141,4.8,2593.,0.0
02/05/2020,19,33.5,34.5,32.5,42,47,39,18.7,20.0,18.4,990.7,991.4,989.9,1.8,132,4.2,1317.,0.0
02/05/2020,20,32.5,34.2,32.5,47,48,40,19.7,20.3,18.7,990.5,990.7,989.8,1.3,191,4.2,1250.,0.0
02/05/2020,21,30.5,32.5,30.5,59,59,47,21.5,21.6,20.0,979.8,990.5,979.5,0.1,157,2.9,345.5,0.0
02/05/2020,22,28.6,30.5,28.6,67,67,59,21.9,21.9,21.5,978.9,980.1,978.7,0.6,166,2.2,1.122,0.0
02/05/2020,23,27.2,28.7,27.2,74,74,66,22.1,22.2,21.6,978.9,979.3,978.6,0.0,246,1.7,-3.54,0.0
03/05/2020,00,26.5,27.2,26.0,77,80,74,22.2,22.5,22.0,979.0,979.1,978.7,0.0,179,1.4,-3.54,0.0
03/05/2020,01,26.0,26.6,26.0,80,80,77,22.4,22.5,22.1,979.1,992.4,978.7,0.0,276,0.6,-3.54,0.0
03/05/2020,02,26.0,26.5,26.0,79,81,75,22.1,22.5,21.7,978.8,979.1,978.5,0.0,290,0.6,-3.53,0.0
03/05/2020,03,25.3,26.0,25.3,83,83,79,22.2,22.4,21.8,978.6,989.4,978.5,0.5,303,1.0,-3.54,0.0
03/05/2020,04,25.3,25.6,24.6,81,85,81,21.9,22.5,21.7,978.1,992.7,977.9,0.7,288,1.5,-3.00,0.0
03/05/2020,05,23.7,25.3,23.7,88,88,81,21.5,21.9,21.5,977.6,991.8,977.3,1.2,256,1.8,-3.54,0.0
03/05/2020,06,23.3,23.7,23.3,91,91,88,21.7,21.7,21.5,976.9,977.6,976.7,0.4,245,1.8,-3.54,0.0
03/05/2020,07,23.0,23.6,23.0,91,91,89,21.4,21.9,21.3,976.7,977.0,976.4,0.9,257,1.9,-3.54,0.0
03/05/2020,08,23.4,23.4,22.9,90,92,90,21.7,21.7,21.3,976.8,976.9,976.5,0.4,294,1.6,-3.52,0.0
03/05/2020,09,23.0,23.5,23.0,88,90,87,21.0,21.6,20.9,992.1,992.1,976.7,0.8,263,1.6,-3.54,0.0
03/05/2020,10,23.2,23.2,22.5,91,92,88,21.6,21.6,20.8,993.0,993.0,992.2,0.1,226,1.5,29.03,0.0
03/05/2020,11,26.0,26.1,23.2,77,91,76,21.6,22.1,21.5,993.8,993.8,982.1,0.0,120,0.9,458.1,0.0
03/05/2020,12,26.6,27.0,25.5,76,80,76,22.1,22.5,21.4,982.7,994.3,982.6,0.3,121,2.3,765.3,0.0
03/05/2020,13,28.5,28.7,26.6,66,77,65,21.5,23.1,21.2,982.5,994.2,982.4,1.4,130,3.2,1219.,0.0
03/05/2020,14,31.1,31.1,28.5,55,66,53,21.0,21.8,19.9,982.3,982.7,982.1,1.2,129,3.7,1743.,0.0
03/05/2020,15,31.6,31.8,30.7,50,55,49,19.8,20.8,19.2,992.9,993.5,982.2,1.1,119,5.1,1958.,0.0
03/05/2020,16,32.7,32.8,31.1,46,52,46,19.6,20.7,19.2,991.9,992.9,991.9,0.8,122,4.4,1953.,0.0
03/05/2020,17,32.3,33.3,32.0,44,49,42,18.6,20.2,18.2,990.7,991.9,979.0,2.6,133,5.9,2463.,0.0
03/05/2020,18,33.1,33.3,31.9,44,50,44,19.3,20.8,18.9,989.9,990.7,989.9,1.1,170,5.4,2033.,0.0
03/05/2020,19,32.4,33.2,32.2,47,47,44,19.7,20.0,18.7,989.5,989.9,989.5,2.4,152,5.2,1581.,0.0
03/05/2020,20,31.2,32.5,31.2,53,53,46,20.6,20.7,19.4,989.5,989.7,989.5,1.7,159,4.6,968.6,0.0
03/05/2020,21,29.7,32.0,29.7,62,62,51,21.8,21.8,20.5,989.7,989.7,989.4,0.8,154,4.0,414.2,0.0
03/05/2020,22,28.3,29.7,28.3,69,69,62,22.1,22.1,21.7,989.9,989.9,989.7,0.3,174,2.0,6.459,0.0
03/05/2020,23,26.9,28.5,26.9,75,75,67,22.1,22.5,21.7,990.5,990.5,989.8,0.2,183,1.0,-3.54,0.0

The second column is the time (hour). I want to separate the dataset into morning (06-11), afternoon (12-17), evening (18-23) and night (00-05). How can I do it?
You can use pd.cut. Note that with these right-closed bins, the first bin (-1, 5] covers hours 00-05, which is night, so the labels must start with 'night':

bins = [-1, 5, 11, 17, 24]
labels = ['night', 'morning', 'afternoon', 'evening']
df['day_part'] = pd.cut(df['hour'], bins=bins, labels=labels)
I added column names, including Hour for the second column. Then I used read_csv, which parses the source text and drops the leading zeroes, so the Hour column is a plain int. To split the rows (i.e. add a column marking the diurnal period), use:

df['period'] = pd.cut(df.Hour, bins=[0, 6, 12, 18, 24], right=False, labels=['night', 'morning', 'afternoon', 'evening'])

Then you can, for example, use groupby to process your groups. Because I passed right=False, the bins are closed on the left side, so the bin limits are more natural (no need for -1 as an hour), and each limit (except the last) is just the starting hour of its period - quite natural notation.
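The cut points can be sanity-checked without pandas. This is a pure-Python sketch of the mapping that pd.cut produces with right=False, i.e. left-closed bins [0,6), [6,12), [12,18) and [18,24):

```python
def day_part(hour):
    """Map an hour 0-23 to its diurnal period:
    [0,6) night, [6,12) morning, [12,18) afternoon, [18,24) evening."""
    return ['night', 'morning', 'afternoon', 'evening'][hour // 6]

periods = {h: day_part(h) for h in range(24)}
```

Because the four periods are equal six-hour blocks starting at midnight, integer division by 6 reproduces the binning exactly, which makes it easy to verify the pd.cut result on the full 0-23 range.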