Converting fuel consumption - python-3.x

A car's fuel consumption may be expressed in many different ways. For example, in Europe, it is shown as the amount of fuel consumed per 100 kilometers.
In the USA, it is shown as the number of miles traveled by a car using one gallon of fuel.
Your task is to write a pair of functions converting l/100km into mpg, and vice versa.
The functions:
are named l100kmtompg and mpgtol100km respectively;
take one argument (the value corresponding to their names).
Complete the code in the editor.
Run your code and check whether your output is the same as ours.
Here is some information to help you:
1 American mile = 1609.344 metres;
1 American gallon = 3.785411784 litres.
def l100kmtompg(liters):
def mpgtol100km(miles):

I know the question is confusing, because I spent hours doing this. Honestly, this is the real question.
3.9 l/100km is the test value given, so for the first function:
100 * 0.625 / (3.9 * 0.265)
For the second function, with 60.3 mpg:
(3.78 / (60.3 * 1.6)) * 100
(Here 0.625 ≈ 1/1.6 miles per km and 0.265 ≈ 1/3.785 gallons per litre, i.e. rounded versions of the constants given above.)
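For reference, here is a minimal sketch of the two completed functions using the exact constants from the prompt (my own completion, not the course's official solution):

def l100kmtompg(liters):
    # miles driven on one US gallon = (100 km in miles) / (litres burned in gallons)
    return (100 * 1000 / 1609.344) / (liters / 3.785411784)

def mpgtol100km(miles):
    # litres per 100 km = (one gallon in litres) / (miles driven in km), scaled to 100 km
    return 100 * 3.785411784 / (miles * 1.609344)

print(l100kmtompg(3.9))   # ≈ 60.31
print(mpgtol100km(60.3))  # ≈ 3.90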

Related

Weighted Average of Two Complier Average Treatment Effects

So I'm taking the weighted average of two complier average treatment effects (CATEs) for an assignment, but I'm not sure how to apportion the appropriate weights. Let me explain why I'm taking this average.
I am given data from a fictional randomized experiment testing the effects of get-out-the-vote efforts on turnout in urban and non-urban areas. Approximately half of the sample lives in urban areas and half in non-urban areas, but urban status was not evenly split between the treatment and control groups. That is, the treatment group is about 80% non-urban (the rest urban) and the control group is 80% urban (the rest non-urban). This creates a confounder because, everything else being equal, urbanites were less likely to vote than non-urbanites (at least in the fictional data).
I am being asked to estimate an overall complier average treatment effect (CATE) for get-out-the-vote interventions while accounting for this confounder. To do this, I estimated a separate CATE for the urban and non-urban parts of the sample, and I need to find an overall estimate from the two CATEs by taking a weighted average of them.
However, I'm not sure how to assign the appropriate weights. My professor has told us to assign more weight to the group that has more variation in the treatment. Since 80% of the treatment group is non-urban, should I assign a weight of .8 to the non-urban CATE and .2 to the urban one? (i.e., overall CATE = (.8)non-urban CATE + (.2)urban CATE)
For background, the data can be found here: https://press.princeton.edu/student-resources/thinking-clearly-with-data. It's the "GOTV_Experiment.csv" data. Thanks in advance for your help!
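For what it's worth, here is a minimal Python sketch of the weighted average itself, with hypothetical CATE values standing in for the ones estimated from GOTV_Experiment.csv (the .8/.2 weights are the split proposed in the question, not a recommendation):

# Hypothetical stratum-level estimates; replace with your own.
cate_nonurban = 0.12  # estimated CATE for the non-urban subsample
cate_urban = 0.05     # estimated CATE for the urban subsample

w_nonurban, w_urban = 0.8, 0.2  # weighting proposed in the question
overall_cate = w_nonurban * cate_nonurban + w_urban * cate_urban
print(overall_cate)  # 0.106 with the made-up numbers above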

How do I calculate age mean & standard deviation using aggregate age?

My data set has an age range variable, but I would like to calculate the mean and standard deviation of age.
Since your data is categorical, there isn't a way to calculate the "true" sample mean and standard deviation of respondent age. There are a few different ways you could estimate, depending on how sophisticated you'd like to get.
The simplest way would be to assign an age to each band (say, the mid-point) and summarize on that. The downside is that you will be underestimating the standard deviation (clumping data together tends to do that). To the extent your categories are not uniformly distributed (and from your image they don't appear to be), your estimate of the mean will also be off.
* set point estimates for each age band .
RECODE age (1=22) (2=30) (3=40) (4=50) (5=60) (6=70) (7=80) .
EXE .
* calculate mean and std dev .
MEANS age /CELLS MEAN STDDEV .
More sophisticated estimation techniques might try to account for skews in the data (e.g. your sample seems to skew younger) by converting each age band into its own distribution.
For example, instead of assuming 203 respondents are all exactly age 22 (as the code above does), you might assume 25 respondents each are aged 18, 19, 20, ..., 25. More realistically still, you might assume that the within-band distribution also skews younger (e.g. 50 18-year-olds, 40 19-year-olds, and so on).
Automated approaches to that would be interesting as its own question. :)
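As a rough illustration of that band-spreading idea (in Python rather than SPSS; the band boundaries and all counts except the 203 are invented for the example):

import statistics

# Hypothetical bands: code -> (low age, high age, respondent count).
# Only the 203 in band 1 comes from the answer above; the rest are made up.
bands = {1: (18, 25, 203), 2: (26, 34, 150), 3: (35, 44, 110)}

ages = []
for low, high, n in bands.values():
    span = range(low, high + 1)
    base, extra = divmod(n, len(span))
    # spread each band's respondents uniformly across its ages
    for i, age in enumerate(span):
        ages.extend([age] * (base + (1 if i < extra else 0)))

print(statistics.mean(ages), statistics.stdev(ages))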

Tuning Parameters to Optimize Score without CNN

I am trying to create an Agent in Rust that uses a scoring function to determine the best move on a 2D uniform cost grid. The specifics of the game aren't very relevant, other than knowing that each turn you can choose to make one of 4 moves (up, down, left or right) and you are competing against other AIs who are playing on the same board. Currently the AI makes "branches" of possible paths it could make into the future using several different simple algorithms such as using A* to find enemies or food. Several characteristics are saved as the future simulations run including the number of enemies we killed on that branch, amount of food we ate and how long the future branch lasted before we died.
Once we are ready to make our move, we give each future-predicting branch a score and go in the direction with the highest average score. The score is essentially a sum of each characteristic mentioned previously multiplied by a constant; for example, the score may be 30 * (food eaten) + 100 * (enemies killed). However, the numbers 30 and 100 were chosen almost at random through experimentation: if the snake died from not eating food, I increased the score multiplier for eating food, for example. There are 10 different characteristics, each with its own weight, and figuring out the relationships between them all manually is both time-consuming and doesn't easily converge on the optimal strategy.
Herein lies my issue. I would like to find a way to "train" the values for the AI through a process somewhat like Q-learning. There is a very clear terminal condition when you win or lose, which helps. My current idea is to create a table with 100 possible values of each parameter, then play 100 games with each combination and record the win rate. However, with 100 values for each of the 10 parameters, that is 100^10 = 10^20 combinations, or about 10^22 games at 100 games per combination. It seems like there should be a smarter way to eliminate bad combinations using some form of loss minimization. If anybody has suggestions on tuning these parameters without a neural network, it would be greatly appreciated.
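One cheap alternative to grid enumeration is local search. Here is a minimal hill-climbing sketch in Python (the actual agent is in Rust; win_rate is a hypothetical stand-in for playing a batch of games with a given weight vector):

import random

def win_rate(weights):
    # Hypothetical stub: play a batch of games with these score weights
    # and return the fraction won.
    raise NotImplementedError

def hill_climb(n_params=10, iters=200, step=0.2):
    best = [random.uniform(0, 100) for _ in range(n_params)]
    best_score = win_rate(best)
    for _ in range(iters):
        # perturb the current best weights instead of enumerating a grid
        cand = [w * (1 + random.uniform(-step, step)) for w in best]
        score = win_rate(cand)
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

The same loop generalizes to the cross-entropy method or CMA-ES, which are standard derivative-free ways to tune continuous weights from win/loss signals without a neural network.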

Descriptive statistics, percentiles

I am stuck in a statistics assignment, and would really appreciate some qualified help.
We have been given a data set and asked to find the 10% with the lowest rate of profit, in order to determine the maximum rate of profit that still qualifies for a program.
the data has:
Mean = 3.61
St. dev. = 8.38
I am thinking that I need to find the 10th percentile, and if I run the PERCENTILE function in Excel it returns -4.71.
However, I tried to run the numbers by hand using the z-score, where z = -1.28:
z = (x - μ)/σ
Solving for x:
x = μ + zσ = 3.61 + (-1.28 × 8.38) ≈ -7.12
My question is: which of the two methods is the right one, if either?
I am thoroughly confused at this point and hope someone has the time to help.
Thank you
This is the assignment btw:
"The Danish government introduces a program for economic growth and will
help the 10 percent of the rms with the lowest rate of prot. What rate
of prot is the maximum in order to be considered for the program given
the mean and standard deviation found above and assuming that the data
is normally distributed?"
The Excel formula is giving the actual, empirical 10th percentile value of your sample.
If the data you have includes all possible instances of whatever you’re trying to measure, then go ahead and use that.
If you’re sampling from a population and your sample size is small, use a t distribution or increase your sample size. If your sample size is healthy and your data are normally distributed, use z scores.
Short story: the different outcomes suggest the data you've supplied are not normally distributed.
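To see the gap concretely, here is a small Python check of the normal-theory percentile implied by the summary statistics from the question (the empirical percentile would come straight from the raw data, which isn't reproduced here):

from statistics import NormalDist

# 10th percentile implied by the normal assumption and the summary stats
mean, sd = 3.61, 8.38
print(NormalDist(mu=mean, sigma=sd).inv_cdf(0.10))  # ≈ -7.13

# The empirical 10th percentile (Excel's PERCENTILE on the raw data)
# came out at -4.71; the mismatch is the sign the sample isn't normal.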

Randomly select increasing subset of data to see where mean levels off

Could anyone please advise the best way to do the following?
I have three variables (X, Y & Z) and four groups (1, 2, 3 & 4). I have been using discriminant function analysis in SPSS to predict group membership of known grouped data for use with future ungrouped data.
Ideally I would like to able to randomly sample an increasing number of a subset of the data to see how many observations are required to hit a desired correct classification percentage.
However, I understand this might be difficult. Therefore, I'm looking to do this for the means.
For example, let's say variable X has a mean of 141 for group 1, calculated from 2000 observations. It might be the case, however, that the mean had already stabilized by, say, 700 observations. I would like to be able to calculate at what number of observations/cases the mean levels off in my data: for example, starting at 10 observations and repeating the random draw say 50 or 100 times, then increasing to 20 observations, and so on.
I understand this is a form of Monte Carlo testing. I have access to SPSS 15, 17 and 18, and Excel. I also have access to Minitab 15 & 16 and Amos 17, and I have downloaded R, but I'm not familiar with these; my experience is with SPSS and Excel. I have tried some syntax in SPSS modified from http://pages.infinit.net/rlevesqu/Syntax/RandomSampling/Select2CasesFromEachGroup.txt but this would still be quite time-consuming on my part to enter the subset numbers etc.
Hope someone can help.
Thanks for reading.
Andy
The text you linked to is a good start (you can also use the SAMPLE command in SPSS, but IMO the Raynald script you linked to is more flexible when you think about constructing the sample that way).
In pseudo-code, the process might look like:
for each sample size n (from a to b):
    loop 100 times:
        draw a sample of size n
        compute (and save) the statistics
Here is where SPSS's macro language comes into play (I think this document is a good introduction, plus you can examine other references on the SPSS tag wiki). Basically once you figure out how to draw the sample and compute the stats you want, you just need to figure out how to write a macro so you can loop through the process (and pass it the sample size parameter). I include the loop 100 times because you want to be able to make some type of estimate about the error associated with each sample size.
If you give an example of how you compute the statistics I may be able to give examples of how to make that into a macro function and loop through the desired number of times.
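For anyone doing this outside SPSS, the pseudo-code loop above is only a few lines in Python. A hedged pandas sketch, assuming a DataFrame df with the variable of interest in a column named 'x':

import pandas as pd

# Assumes df is a DataFrame holding the variable of interest in column 'x'.
def mean_by_sample_size(df, sizes=(10, 20, 50, 100), reps=100):
    rows = []
    for n in sizes:                          # outer loop: sample size
        for rep in range(reps):              # inner loop: replicates
            sample = df['x'].sample(n)       # random draw without replacement
            rows.append({'ss': n, 'rep': rep, 'mean': sample.mean()})
    return pd.DataFrame(rows)

# e.g. mean_by_sample_size(df).groupby('ss')['mean'].agg(['mean', 'std'])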
@Andy W, @Oliver: thanks for your suggestions, guys. I've managed to find a workaround using the following macro from http://www.spsstools.net/Syntax/Bootstrap/GetRandomSampleOfVariousSizeCalcStats.txt. For this I need to copy and paste the variable data for a given group into a new data window; that's not too much of a problem. To take this further, would anyone know how:
1. I could get other statistics recorded, e.g. standard error, standard deviation, etc.;
2. I could use other analyses, ideally discriminant function analysis, and record the percentage of correct classifications in a new data window rather than having lots of output tables;
3. I could avoid copying and pasting the variables for each group, so I can just run the macro specifying n samples for variable X on groups 1, 2, 3 & 4?
Thanks again.
DEFINE !sample(myvar !TOKENS(1)
/nbsampl !TOKENS(1)
/size !CMDEND).
* myvar = the variable of interest (here we want the mean of salary) .
* nbsampl = number of samples.
* size = the size of each samples.
!LET !first='1'
!DO !ss !IN (!size)
!DO !count = 1 !TO !nbsampl.
GET FILE='c:\Program Files\SPSS\employee data.sav'.
COMPUTE draw=uniform(1).
SORT CASES BY draw.
N OF CASES !ss.
COMPUTE samplenb=!count.
COMPUTE ss=!ss.
AGGREGATE
/OUTFILE=*
/BREAK=samplenb
/!myvar = MEAN(!myvar) /ss=FIRST(ss).
!IF (!first !NE '1') !THEN
ADD FILES /FILE=* /FILE='c:\temp\sample.sav'.
!IFEND
SAVE OUTFILE='c:\temp\sample.sav'.
!LET !first='0'
!DOEND.
!DOEND.
VARIABLE LABEL ss 'Sample size'.
EXAMINE
VARIABLES=!myvar BY ss /PLOT=BOXPLOT /STATISTICS=NONE /NOTOTAL
/MISSING=REPORT.
!ENDDEFINE.
* ----------------END OF MACRO ----------------------------------------------.
* Call macro (parameters are the number of samples, here 20, and the sample sizes, here 5, 10, 15, 30, 50).
* Thus 20 samples of size 5.
* Thus 20 samples of size 10, etc.
!sample myvar=salary nbsampl=20 size= 5 10 15 30 50.
