Dividing the values by their mean for many variables - transform

I wish to conduct a data transformation by dividing each case in a variable by that variable's mean. I have 91 variables in my dataset. I create the means using the AGGREGATE function:
AGGREGATE
/OUTFILE=* MODE = ADDVARIABLES
/BREAK=
/mean_1 to mean_91= MEAN(Var1 TO Var91).
This code gives me each variable's mean in the same dataset. To divide each case by its mean, I have created a new dataset with a command that can repeat itself. The problem is changing from mean_1 to mean_2 ... up to mean_91.
COMPUTE CMD = CONCAT("COMPUTE",RTRIM(Name),".Norm =",RTRIM(Name),"/mean",1,".").
How can I make sure that in the next line, the number 1 will become 2, then 3, and so on?

There is a much simpler way to accomplish your task. After calculating the means as you did, you can loop through all the variables like this:
do repeat vr=var1 to var91 /mn=mean_1 to mean_91 /nrm=norm1 to norm91.
compute nrm=vr/mn.
end repeat.
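If the same normalization ever needs to be reproduced outside SPSS, a minimal pandas sketch of the same idea (the column names Var1 through Var91 and the toy values are assumptions, not part of the original dataset) would be:
import pandas as pd

# Toy frame standing in for the real dataset, with columns Var1 ... Var91.
df = pd.DataFrame({f"Var{i}": [1.0, 2.0, 3.0, 4.0, 5.0] for i in range(1, 92)})

# Mirror the DO REPEAT loop: one normalized column per original variable.
for i in range(1, 92):
    df[f"norm{i}"] = df[f"Var{i}"] / df[f"Var{i}"].mean()

print(df[["Var1", "norm1"]].head())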

Related

Computing a Fibonacci sequence in Terraform

The title is really all there is to the question: how would you compute the values of a Fibonacci sequence (first N values, where N is an input variable) and store them in a Terraform local variable?
This could, of course, be done with an external data source, but I'm looking for a way to do it in pure Terraform.
There's no real need to actually do this, but the Fibonacci sequence is a representation of a problem I need to solve in Terraform (where values in a list depend on previous values of that same list).
I think the easiest way would be to create your own external data source for that, since in TF you can't access existing list elements while you are still building the list itself.
As for the Fibonacci sequence specifically, I would just pre-compute its values and then read however many of them I need from a list in TF or from a file. Usually you know the maximum number of elements your app will require, so there is no reason to recalculate them every single time.
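As a hedged sketch of that precompute idea (the file name and the count of 100 are arbitrary), the values could be generated once outside Terraform and read back with jsondecode(file("fib.json")):
import json

def fibonacci(n):
    """Return the first n Fibonacci numbers as a list."""
    values = []
    a, b = 0, 1
    for _ in range(n):
        values.append(a)
        a, b = b, a + b
    return values

# Write more values than the app will ever need; Terraform can slice a prefix.
with open("fib.json", "w") as handle:
    json.dump(fibonacci(100), handle)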

Reduce Time complexity

Question at hand : Complete the function minimumSwaps in the editor below. It must return an integer representing the minimum number of swaps to sort the array.
My Approach:
def minimumSwaps(arr):
    count = 0
    temp = [None]*len(arr)
    res1 = sorted(arr)
    while(res1 != arr):
        for i in range(int(len(arr))):
            if(res1[i] != arr[i]):
                y = res1.index(arr[i])
                arr[y], arr[i] = arr[i], arr[y]
                count = count + 1
    return count
The code gives the required output for the majority of cases, but fails a few with a time limit exceeded error. Could someone suggest a few changes to reduce the time complexity and make the code more efficient? If possible, please try not to change the code in its entirety; I want to learn to make code more efficient rather than trying a whole new approach altogether.
Link to one of the huge test case
To me, this is a graph problem. Maybe it's possible with a simpler solution, but I don't think so.
You can observe that to get the minimum number of swaps, you just have to move every element into its sorted position. You can figure out where each element is supposed to go by sorting the array and keeping an array (or dictionary, for that matter) that maps each element to its sorted index.
Now, build a graph by making each item its own node, and connecting with a directed edge to the place it needs to be. We can observe that for a cycle of length k, we will need k-1 swaps to solve it. This is because we just need to swap each item forward, but the last swap actually solves two items rather than one. Thus, the answer is the sum of k-1 for each cycle, which can be reduced to n-c where c is the number of cycles.
To see why this works, consider the case of [2,3,1]. The sorted version of this array is [1,2,3]. Now, build the graph, where index 0 points to index 1 (since 2 needs to be in index 1), index 1 points to index 2, and index 2 points to index 0. We can run a search algorithm through the graph and find the number of cycles or components, and find that there is 1 cycle of length 3. So, the answer we produce is 3-1 = 2. As we can observe, this is indeed correct.
The problem gets a little more complicated if the array can contain duplicates, but it's not so bad; you'd just have to think a little harder. Maybe this isn't the intended solution, but it'll certainly be fast enough: the cycle counting itself is O(n), with the initial sort adding O(n log n). Best of luck!
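A minimal sketch of that cycle-counting approach (assuming the array elements are distinct, as in the linked test cases; the variable names are mine):
def minimumSwaps(arr):
    # Map each value to the index it must occupy in the sorted array.
    target = {value: index for index, value in enumerate(sorted(arr))}
    visited = [False] * len(arr)
    swaps = 0
    for start in range(len(arr)):
        if visited[start]:
            continue
        # Walk the cycle containing this index and count its length.
        length = 0
        node = start
        while not visited[node]:
            visited[node] = True
            node = target[arr[node]]
            length += 1
        # A cycle of length k needs k - 1 swaps.
        swaps += length - 1
    return swaps
On [2, 3, 1] the single cycle of length 3 gives 3 - 1 = 2 swaps, matching the walkthrough above.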

Is there a way to calculate every possible order of operations for one expression in Python?

Let's say that I have a = '1+2*5/3'. There's a specific order in which my machine will evaluate this statement (with eval(a)).
I would like to know if there's a line of code (or a function, or just an elegant way to get the job done) that would calculate:
(1+2)*5/3
1+(2*5)/3
1+2*(5/3)
(1+2*5)/3
1+(2*5/3)
(1+2)*(5/3)
1+2*5/3
In this example, I used an expression with 4 factors, so I could just code one function for each possibility, but I need to do the same thing with 6 factors, and that would take way too much time and effort since the number of possible operation orders grows exponentially.
It would also be great if it returned everything in a dictionary of the form {operation: result}, with the parentheses included; if not, I'll find my way around it.
Edit: as requested, the main goal is to make a program that finds the solution to the game "le compte est bon" by brute force; the rules can be found here: https://en.wikipedia.org/wiki/Des_chiffres_et_des_lettres#Le_compte_est_bon_.28.22the_total_is_right.22.29
This is going to be very hard. I recommend you follow these steps:
1. Create a list to track which formulas have already been calculated.
2. Randomize the order of the operators (+ - * /) and randomly place the numbers.
3. Check that the string from step 2 is a valid formula; if not, try step 2 again.
4. Randomize the placement of opening and closing parentheses (and ^).
5. Check that the string from step 4 is still a valid formula; if not, try step 4 again.
6. Look the formula up in the list to see whether it has already been calculated. If it has, don't use it and go back to step 2, but...
7. If it is not in the list, then you can use it.
Those are the basic steps for the commonly known math symbols, but what about square root?
Another way to do this is by making Python move the symbols around the way you did with the parentheses, but for EVERYTHING (numbers and symbols: + - / *).
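For the narrower job of enumerating every evaluation order of a single expression like '1+2*5/3', a recursive split over the token list is one hedged sketch (the helper names and the use of Fraction for exact division are my own choices, not part of the question):
import re
from fractions import Fraction

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def parenthesizations(tokens):
    """Yield (expression_string, value) for every way of bracketing the tokens."""
    if len(tokens) == 1:
        yield tokens[0], Fraction(tokens[0])
        return
    # Split at every operator position and combine the left and right results.
    for i in range(1, len(tokens), 2):
        op = tokens[i]
        for left_text, left_val in parenthesizations(tokens[:i]):
            for right_text, right_val in parenthesizations(tokens[i + 1:]):
                try:
                    value = OPS[op](left_val, right_val)
                except ZeroDivisionError:
                    continue
                yield f"({left_text}{op}{right_text})", value

a = "1+2*5/3"
tokens = re.findall(r"\d+|[+\-*/]", a)
results = {text: value for text, value in parenthesizations(tokens)}
for text, value in results.items():
    print(text, "=", value)
For '1+2*5/3' this yields the five fully parenthesized forms together with their values, which can be collected into the {operation: result} dictionary the question asks for.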
EDIT:
This was before the original question was changed.

python 3.3 : scipy.optimize.curve_fit doesn't update the value of point

I am trying to fit a custom function to some data points using curve_fit. I have tried one and two free parameters, and I have used it successfully before. Now I am struggling to make a fit: the algorithm always returns the initial input values, with infinite sigma, no matter what those initial values are. I have also tried printing the internal parameters with which my custom function is called, and I don't understand what I see: my custom function is called just 4 times, the first three with exactly the same parameters and the last with a relative change of 10^-8 in a parameter. This doesn't look right.
It is normal for the objective function to be called initially with very small (roughly 1e-8) changes in parameter values in order to calculate the partial derivatives to decide which way to go in parameter space. If the result of the objective function does not change at all (not even at 1e-8 level) the fit will give up: changing the parameter values did not change the result.
I would first look into whether the result of your objective function is really sensitive to the parameters. If your result really is not sensitive to a 1e-8 change in a parameter, but would be sensitive to a larger change, you may want to increase the value of epsfcn passed to scipy.optimize.leastsq.
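A hedged sketch of passing a larger step through curve_fit (the model and data here are made up; with the default Levenberg-Marquardt path, extra keyword arguments such as epsfcn are forwarded to leastsq):
import numpy as np
from scipy.optimize import curve_fit

def model(x, amplitude, rate):
    # Placeholder model; substitute the real custom function.
    return amplitude * np.exp(-rate * x)

x = np.linspace(0.0, 10.0, 50)
y = model(x, 2.5, 0.4) + 0.05 * np.random.default_rng(0).normal(size=x.size)

# epsfcn sets the finite-difference step used for the numerical derivatives;
# raising it helps when the objective barely responds to ~1e-8 parameter changes.
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0], epsfcn=1e-4)
print(popt)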

Bitwise operations Python

This is a first run-in with not only bitwise ops in python, but also strange (to me) syntax.
for i in range(2**len(set_)//2):
    parts = [set(), set()]
    for item in set_:
        parts[i&1].add(item)
        i >>= 1
For context, set_ is just a list of 4 letters.
There's a bit to unpack here. First, I've never seen [set(), set()]. I must be using the wrong keywords, as I couldn't find it in the docs. It looks like it creates a matrix in pythontutor, but I cannot say for certain. Second, while parts[i&1] is a slicing operation, I'm not entirely sure why a bitwise operation is required. For example, 0&1 should be 1 and 1&1 should be 0 (carry the one), so binary 10 (or 2 in decimal)? Finally, the last bitwise operation is completely bewildering. I believe a right shift is the same as dividing by two (I hope), but why i>>=1? I don't know how to interpret that. Any guidance would be sincerely appreciated.
[set(), set()] creates a list consisting of two empty sets.
0&1 is 0, 1&1 is 1. There is no carry in bitwise operations. parts[i&1] therefore refers to the first set when i is even, the second when i is odd.
i >>= 1 shifts right by one bit (which is indeed the same as dividing by two), then assigns the result back to i. It's the same basic concept as using i += 1 to increment a variable.
The effect of the inner loop is to partition the elements of set_ into two subsets, based on the bits of i. If the limit in the outer loop had been simply 2 ** len(set_), the code would generate every possible such partitioning. But since that limit was divided by two, only half of the possible partitions get generated - I couldn't guess what the point of that might be, without more context.
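As a hedged sketch of that reading (set_ here is just a stand-in four-letter list), the same loop with the partitions printed out looks like this:
set_ = ["a", "b", "c", "d"]  # hypothetical stand-in for the real input

for i in range(2 ** len(set_) // 2):
    parts = [set(), set()]
    bits = i  # work on a copy so the outer loop variable stays readable
    for item in set_:
        # The lowest bit decides which of the two subsets receives this item.
        parts[bits & 1].add(item)
        bits >>= 1
    # Halving the range keeps the last element in parts[0], so each unordered
    # split of set_ appears exactly once rather than twice with the sides swapped.
    print(parts)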
I've never seen [set(), set()]
This isn't anything interesting, just a list with two new sets in it. So you have seen it, because it's not new syntax. Just a list and constructors.
parts[i&1]
This tests the least significant bit of i and selects either parts[0] (if the lsb was 0) or parts[1] (if the lsb was 1). Nothing fancy like slicing, just plain old indexing into a list. The thing you get out is a set, .add(item) does the obvious thing: adds something to whichever set was selected.
but why i>>=1? I don't know how to interpret that
Take the bits in i and move them one position to the right, dropping the old lsb and keeping the sign, much like a fixed-width hardware shift.
Except of course that in Python you have arbitrary-precision integers, so the value is however long it needs to be rather than a fixed 8 bits.
For positive numbers, the part about copying the sign is irrelevant.
You can think of a right shift by 1 as a flooring division by 2 (this is different from truncation: negative numbers are rounded towards negative infinity, e.g. -1 >> 1 == -1), but that interpretation is usually more complicated to reason about.
Anyway, the way it is used here is just a way to loop through the bits of i, testing them one by one from low to high; but instead of changing which bit it tests, it moves the bit it wants to test into the same position every time.
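For instance, a tiny illustration of that pattern (13 is an arbitrary example value):
i = 13  # binary 1101
while i:
    print(i & 1, end=" ")  # test the current lowest bit
    i >>= 1                # move the next bit into the lowest position
# prints: 1 0 1 1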
