How to force 4 digits in Excel?

I have a really long truth table and the binary column is showing up like this:
A + B = S BINARY
0 + 0 = 0 0
0 + 1 = 1 1
0 + 2 = 2 10
0 + 3 = 3 11
0 + 4 = 4 100
0 + 5 = 5 101
0 + 6 = 6 110
0 + 7 = 7 111
0 + 8 = 8 1000
0 + 9 = 9 1001
0 + 10 = 10 1010
0 + 11 = 11 1011
0 + 12 = 12 1100
0 + 13 = 13 1101
0 + 14 = 14 1110
0 + 15 = 15 1111
Where the BINARY column is =DEC2BIN(S).
I need to force the Binary values to have 4 digits all the time. So 0 = 0000, 3 = 0011, etc. How do I do this in Excel?
Solution:
=DEC2BIN(S,4)

Rather than filling column S with:
=A1 + B1
to see 0 through 15, use:
=DEC2BIN(A1 + B1, 4)
to keep the leading zeros in 4-digit binary format.

Another option is a custom number format:
Highlight the binary cells.
Right-click and select "Format Cells...".
On the Number tab, select "Custom".
Enter 0000 into the Type box.
Click OK.
Note that this only pads cells that contain numbers; since DEC2BIN returns text, the format will not change its output.

Another way, not the best, but it should work:
=RIGHT("0000"&DEC2BIN(S),4)

The above solution is correct:
=DEC2BIN(A1,4)
This will show cell A1 in 4-digit binary format for any value from 0 to 15.
In the Spanish Excel version you separate the arguments with a semicolon, like so:
=DEC.A.BIN(J6;7)
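As an aside, and not part of the original Excel answers, the same zero-padding idea is easy to sketch in Python if you ever need it outside a spreadsheet:
# zero-padded 4-digit binary strings, for comparison with DEC2BIN(s, 4)
for s in range(16):
    print(format(s, '04b'))  # 0 -> '0000', 3 -> '0011', 15 -> '1111'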

Related

Pandas dataframe: Count no of rows which meet a set of conditions across multiple columns [duplicate]

I have a dataframe(edata) as given below
Domestic Catsize Type Count
1 0 1 1
1 1 1 8
1 0 2 11
0 1 3 14
1 1 4 21
0 1 4 31
From this dataframe I want to calculate the sum of all counts where the logical AND of the two variables (Domestic and Catsize) is zero (0), i.e. for these combinations:
Domestic Catsize AND
1        0       0
0        1       0
0        0       0
The code I use to perform the process is
g = edata.groupby('Type')
q3 = g.apply(lambda x: x[((x['Domestic'] == 0) & (x['Catsize'] == 0)) |
                         ((x['Domestic'] == 0) & (x['Catsize'] == 1)) |
                         ((x['Domestic'] == 1) & (x['Catsize'] == 0))]['Count'].sum())
q3
Type
1 1
2 11
3 14
4 31
This code works fine; however, if the number of variables in the dataframe increases, the number of conditions grows rapidly. So, is there a smart way to write a condition that says: if ANDing the two (or more) variables results in zero, then perform the sum()?
You can filter first using pd.DataFrame.all negated:
cols = ['Domestic', 'Catsize']
res = df[~df[cols].all(1)].groupby('Type')['Count'].sum()
print(res)
# Type
# 1 1
# 2 11
# 3 14
# 4 31
# Name: Count, dtype: int64
Use np.logical_and.reduce to generalise.
import numpy as np

columns = ['Domestic', 'Catsize']
df[~np.logical_and.reduce(df[columns], axis=1)].groupby('Type')['Count'].sum()
Type
1 1
2 11
3 14
4 31
Name: Count, dtype: int64
Before adding it back, use map to broadcast:
u = df[~np.logical_and.reduce(df[columns], axis=1)].groupby('Type')['Count'].sum()
df['NewCol'] = df.Type.map(u)
df
Domestic Catsize Type Count NewCol
0 1 0 1 1 1
1 1 1 1 8 1
2 1 0 2 11 11
3 0 1 3 14 14
4 1 1 4 21 31
5 0 1 4 31 31
How about:
columns = ['Domestic', 'Catsize']
df.loc[~df[columns].prod(axis=1).astype(bool), 'Count']
and then do with it whatever you want.
For logical AND, the product does the trick nicely.
For logical OR, you can use sum(axis=1) with proper negation in advance.
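A small self-contained sketch of that idea on the question's data; the OR variant via sum(axis=1) is my reading of the suggestion above:
import pandas as pd

edata = pd.DataFrame({'Domestic': [1, 1, 1, 0, 1, 0],
                      'Catsize': [0, 1, 0, 1, 1, 1],
                      'Type': [1, 1, 2, 3, 4, 4],
                      'Count': [1, 8, 11, 14, 21, 31]})
cols = ['Domestic', 'Catsize']

# AND of the columns is 0 exactly when their product is 0
and_is_zero = ~edata[cols].prod(axis=1).astype(bool)
print(edata.loc[and_is_zero].groupby('Type')['Count'].sum())
# Type
# 1     1
# 2    11
# 3    14
# 4    31
# Name: Count, dtype: int64

# OR of the columns is 0 exactly when their sum is 0 (every column is 0)
or_is_zero = ~edata[cols].sum(axis=1).astype(bool)
print(edata.loc[or_is_zero, 'Count'].sum())  # 0 here, since no row has both columns equal to 0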

recursive function does not work as expected

Could someone explain this code? I just cannot understand why it gives output like this:
1
3
6
10
15
21
I expected the code to give something like this:
1
3
5
7
9
11
What am I missing here?
def tri_recursion(k):
    if k > 0:
        result = k + tri_recursion(k - 1)
        print(result)
    else:
        result = 0
    return result

tri_recursion(6)
For your recursive function, the termination condition is k=0.
It's clear that if k=0, tri_recursion(0) = 0.
If k=1, tri_recursion(1) = 1 + tri_recursion(0), which from above, is 1 + 0 or 1.
If k=2, tri_recursion(2) = 2 + tri_recursion(1), which from above, is 2 + 1 or 3.
If k=3, tri_recursion(3) = 3 + tri_recursion(2), which from above, is 3 + 3 or 6.
If k=4, tri_recursion(4) = 4 + tri_recursion(3), which from above, is 4 + 6 or 10.
If k=5, tri_recursion(5) = 5 + tri_recursion(4), which from above, is 5 + 10 or 15.
If k=6, tri_recursion(6) = 6 + tri_recursion(5), which from above, is 6 + 15 or 21.
See the pattern?
Your code is calculating the sum of numbers up to n where n is 6 in the above case. The print statement prints the intermediate results. Hence the output 1 3 6 10 15 21.
1 - The sum of numbers from 0 to 1
3 - The sum of numbers from 0 to 2
6 - The sum of numbers from 0 to 3
10 - The sum of numbers from 0 to 4
15 - The sum of numbers from 0 to 5
21 - The sum of numbers from 0 to 6
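A quick sketch to confirm this: the printed values are the running sums (triangular numbers), while the output the asker expected would come from the odd numbers 2k - 1 instead (assuming that is what the 1 3 5 7 9 11 sequence was meant to be):
print([sum(range(1, k + 1)) for k in range(1, 7)])  # [1, 3, 6, 10, 15, 21] -- what tri_recursion prints
print([2 * k - 1 for k in range(1, 7)])             # [1, 3, 5, 7, 9, 11]  -- the output the asker expected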

How to take mean of 3 values before flag changes from 0 to 1 (Python)

I have a dataframe with columns A, B and flag. I want to calculate the mean of the 2 values before the flag changes from 0 to 1, record the value when the flag changes from 0 to 1, and record the value when the flag changes from 1 to 0.
# Input dataframe
df = pd.DataFrame({'A': [1, 3, 4, 7, 8, 11, 1, 15, 20, 15, 16, 87],
                   'B': [1, 3, 4, 6, 8, 11, 1, 19, 20, 15, 16, 87],
                   'flag': [0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0]})
# Expected output
df_out = pd.DataFrame({'A_mean_before_flag_change': [5.5],
                       'B_mean_before_flag_change': [5],
                       'A_value_before_change_flag': [7],
                       'B_value_before_change_flag': [6]})
I tried to create a more general solution:
df = pd.DataFrame({'A': [1, 3, 4, 7, 8, 11, 1, 15, 20, 15, 16, 87],
                   'B': [1, 3, 4, 6, 8, 11, 1, 19, 20, 15, 16, 87],
                   'flag': [0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1]})
print (df)
A B flag
0 1 1 0
1 3 3 0
2 4 4 0
3 7 6 0
4 8 8 1
5 11 11 1
6 1 1 1
7 15 19 0
8 20 20 0
9 15 15 1
10 16 16 0
11 87 87 1
First create groups from a mask of the 0 values whose next flag value is 1:
m1 = df['flag'].eq(0) & df['flag'].shift(-1).eq(1)
df['g'] = m1.iloc[::-1].cumsum()
print (df)
A B flag g
0 1 1 0 3
1 3 3 0 3
2 4 4 0 3
3 7 6 0 3
4 8 8 1 2
5 11 11 1 2
6 1 1 1 2
7 15 19 0 2
8 20 20 0 2
9 15 15 1 1
10 16 16 0 1
11 87 87 1 0
Then filter out groups whose size is less than N:
N = 4
df1 = df[df['g'].map(df['g'].value_counts()).ge(N)].copy()
print (df1)
A B flag g
0 1 1 0 3
1 3 3 0 3
2 4 4 0 3
3 7 6 0 3
4 8 8 1 2
5 11 11 1 2
6 1 1 1 2
7 15 19 0 2
8 20 20 0 2
Take the last N rows of each group:
df2 = df1.groupby('g').tail(N)
And aggregate with last and mean:
d = {'mean':'_mean_before_flag_change', 'last': '_value_before_change_flag'}
df3 = df2.groupby('g')[['A', 'B']].agg(['mean', 'last']).sort_index(axis=1, level=1).rename(columns=d)
df3.columns = df3.columns.map(''.join)
print (df3)
A_value_before_change_flag B_value_before_change_flag \
g
2 20 20
3 7 6
A_mean_before_flag_change B_mean_before_flag_change
g
2 11.75 12.75
3 3.75 3.50
I'm assuming that this needs to work for cases with more than one rising edge and that the consecutive values and averages get appended to the output lists:
# the first step is to extract the rising and falling edges using diff(), identify sections and length
df['flag_diff'] = df.flag.diff().fillna(0)
df['flag_sections'] = (df.flag_diff != 0).cumsum()
df['flag_sum'] = df.flag.groupby(df.flag_sections).transform('sum')
# then you can get the relevant indices by checking for the rising edges
rising_edges = df.index[df.flag_diff==1.0]
val_indices = [i-1 for i in rising_edges]
avg_indices = [(i-2,i-1) for i in rising_edges]
# and finally iterate over the relevant sections
df_out = pd.DataFrame()
df_out['A_mean_before_flag_change'] = [df.A.loc[tpl[0]:tpl[1]].mean() for tpl in avg_indices]
df_out['B_mean_before_flag_change'] = [df.B.loc[tpl[0]:tpl[1]].mean() for tpl in avg_indices]
df_out['A_value_before_change_flag'] = [df.A.loc[idx] for idx in val_indices]
df_out['B_value_before_change_flag'] = [df.B.loc[idx] for idx in val_indices]
df_out['length'] = [df.flag_sum.loc[idx] for idx in rising_edges]
df_out.index = rising_edges
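For the single rising edge in the original question, a minimal condensed sketch (my own reading, not taken from either answer) that reproduces the expected df_out:
import pandas as pd

df = pd.DataFrame({'A': [1, 3, 4, 7, 8, 11, 1, 15, 20, 15, 16, 87],
                   'B': [1, 3, 4, 6, 8, 11, 1, 19, 20, 15, 16, 87],
                   'flag': [0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0]})

# position of the first 0 -> 1 change
rise = df.index[(df['flag'] == 1) & (df['flag'].shift() == 0)][0]

df_out = pd.DataFrame({'A_mean_before_flag_change': [df['A'].iloc[rise - 2:rise].mean()],
                       'B_mean_before_flag_change': [df['B'].iloc[rise - 2:rise].mean()],
                       'A_value_before_change_flag': [df['A'].iloc[rise - 1]],
                       'B_value_before_change_flag': [df['B'].iloc[rise - 1]]})
print(df_out)  # 5.5, 5.0, 7, 6 -- matching the expected output above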

Pandas - Fill N rows for a specific column with an integer value and increment the integer thereafter

I have a dataframe to which I added, say, a column named col_1. I want to add integer values to that column, starting from the first row, that increment after every 4th row. So the new resulting column should have values as such:
col_1
1
1
1
1
2
2
2
2
The current approach I have is a very brute force one:
for x in range(len(df)):
    if x <= 3:
        df['col_1'][x] = 1
    if x > 3 and x <= 7:
        df['col_1'][x] = 2
This might work for something small but when moving to something larger it will chew up a lot of time.
If there is a default RangeIndex, you can use integer division and add 1:
df['col_1'] = df.index // 4 + 1
Or, for a general solution, use a helper array based on the length of the DataFrame:
df['col_1'] = np.arange(len(df)) // 4 + 1
To repeat a 1-and-2 pattern, also take the result modulo 2, like:
df = pd.DataFrame({'a':range(20, 40)})
df['col_1'] = (np.arange(len(df)) // 4) % 2 + 1
print (df)
a col_1
0 20 1
1 21 1
2 22 1
3 23 1
4 24 2
5 25 2
6 26 2
7 27 2
8 28 1
9 29 1
10 30 1
11 31 1
12 32 2
13 33 2
14 34 2
15 35 2
16 36 1
17 37 1
18 38 1
19 39 1
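Another common way to get the plain incrementing pattern from the original question (not in the answer above) is np.repeat; a short sketch:
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': range(20, 40)})
# repeat each label 4 times, then trim to the length of the frame
df['col_1'] = np.repeat(np.arange(1, len(df) // 4 + 2), 4)[:len(df)]
# gives 1,1,1,1,2,2,2,2,... increasing after every 4th row (no alternating 1/2 here)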

Python recursive index changing

I am trying to arrange a matrix in a way that it will dynamically change the indexes.
I have tried to do it by means of a for loop; however, it only moves each value once.
def arrangeMatrix(progMatrix):
    # indexExists is a helper defined elsewhere in the asker's code
    for l in range(len(progMatrix)):
        for item in range(len(progMatrix[l])):
            if indexExists(progMatrix, l + 1, item) and progMatrix[l + 1][item] == " ":
                progMatrix[l + 1][item] = progMatrix[l][item]
                progMatrix[l][item] = " "
The original list is:
1 0 7 6 8
0 5 5 5
2 1 6
4 1 3 7
1 1 1 7 5
And my code should fill all gapped indexes from up to bottom, however my result is:
1 0 6 8
0 5 5
2 1 7
4 1 3 7 6
1 1 1 7 5 5
The actual result should be:
1 0
0 5 8
2 1 7 5
4 1 3 7 6 6
1 1 1 7 5 5
Any help or hint is appreciated. Thanks in advance.
It is probably easier if you first iterate the columns, since the change that happens in one column is independent of what happens in other columns. Then, per column, you could iterate the cells from the bottom to the top and keep track of the y-coordinate where the next non-space should "drop down" to.
No recursion is needed.
Here is how that could be coded:
def arrangeMatrix(progMatrix):
    for x in range(len(progMatrix[0])):
        targetY = len(progMatrix) - 1
        for y in range(len(progMatrix) - 1, -1, -1):
            row = progMatrix[y]
            if row[x] != " ":          # something to drop down
                if y < targetY:        # is it really going to drop any lower?
                    progMatrix[targetY][x] = row[x]  # copy it down
                    row[x] = " "       # ...and clear the cell it dropped from
                targetY -= 1           # the target cell is now filled, so the next drop lands one row higher
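A short usage sketch on a small, made-up 3x3 grid (the original matrices are not reused here because their blank cells are not visible in the question's formatting):
# " " marks an empty cell in this hypothetical grid
grid = [["1", " ", "3"],
        [" ", "2", " "],
        [" ", " ", "4"]]
arrangeMatrix(grid)
print(grid)
# [[' ', ' ', ' '], [' ', ' ', '3'], ['1', '2', '4']]
# every non-space value has dropped as far down its column as it can go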
