Armbian_5.24 + Orange Pi Zero + Node.js + GPIO access

I successfully installed Armbian and WiringOP, and I can access the GPIO pins.
How can I access GPIO from Node.js on the Orange Pi Zero?
Here is the gpio readall output:
hygy@orangepizero:~/WiringOP/gpio$ sudo ./gpio readall
[sudo] password for hygy:
+-----+-----+----------+------+---+-Orange Pi+---+------+----------+-----+-----+
| BCM | wPi |   Name   | Mode | V | Physical | V | Mode |   Name   | wPi | BCM |
+-----+-----+----------+------+---+----++----+---+------+----------+-----+-----+
|     |     |     3.3v |      |   |  1 ||  2 |   |      | 5v       |     |     |
|  12 |   8 |    SDA.0 | ALT5 | 0 |  3 ||  4 |   |      | 5V       |     |     |
|  11 |   9 |    SCL.0 | ALT5 | 0 |  5 ||  6 |   |      | 0v       |     |     |
|   6 |   7 |   GPIO.7 | ALT3 | 0 |  7 ||  8 | 0 | ALT3 | TxD3     |  15 |  13 |
|     |     |       0v |      |   |  9 || 10 | 0 | ALT3 | RxD3     |  16 |  14 |
|   1 |   0 |     RxD2 | ALT5 | 0 | 11 || 12 | 0 | ALT3 | GPIO.1   |   1 | 110 |
|   0 |   2 |     TxD2 | ALT5 | 0 | 13 || 14 |   |      | 0v       |     |     |
|   3 |   3 |     CTS2 | ALT3 | 0 | 15 || 16 | 0 | ALT3 | GPIO.4   |   4 |  68 |
|     |     |     3.3v |      |   | 17 || 18 | 0 | ALT3 | GPIO.5   |   5 |  71 |
|  64 |  12 |     MOSI | ALT4 | 0 | 19 || 20 |   |      | 0v       |     |     |
|  65 |  13 |     MISO | ALT4 | 0 | 21 || 22 | 0 | ALT3 | RTS2     |   6 |   2 |
|  66 |  14 |     SCLK | ALT4 | 0 | 23 || 24 | 0 | ALT4 | CE0      |  10 |  67 |
|     |     |       0v |      |   | 25 || 26 | 0 | ALT3 | GPIO.11  |  11 |  21 |
|  19 |  30 |    SDA.1 | ALT4 | 0 | 27 || 28 | 0 | ALT4 | SCL.1    |  31 |  18 |
|   7 |  21 |  GPIO.21 | ALT3 | 0 | 29 || 30 |   |      | 0v       |     |     |
|   8 |  22 |  GPIO.22 | ALT3 | 0 | 31 || 32 | 0 | ALT3 | RTS1     |  26 | 200 |
|   9 |  23 |  GPIO.23 | ALT3 | 0 | 33 || 34 |   |      | 0v       |     |     |
|  10 |  24 |  GPIO.24 |  OUT | 1 | 35 || 36 | 0 | ALT3 | CTS1     |  27 | 201 |
|  20 |  25 |  GPIO.25 |  OUT | 1 | 37 || 38 | 0 | ALT5 | TxD1     |  28 | 198 |
|     |     |       0v |      |   | 39 || 40 | 0 | ALT5 | RxD1     |  29 | 199 |
+-----+-----+----------+------+---+----++----+---+------+----------+-----+-----+
| BCM | wPi |   Name   | Mode | V | Physical | V | Mode |   Name   | wPi | BCM |
+-----+-----+----------+------+---+-Orange Pi+---+------+----------+-----+-----+

You can use the standard streams.
If, for example, you are accessing the GPIO with Python, you can redirect the standard output of that Python process to your Node.js app.
I found this nice tutorial that explains how to do it: http://www.sohamkamani.com/blog/2015/08/21/python-nodejs-comm/
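As a minimal sketch of that approach (assuming the legacy sysfs GPIO interface that this era of Armbian exposes under /sys/class/gpio; the pin number and the script name are placeholders, so substitute the kernel GPIO number reported by gpio readall), the Python side can simply print readings to its standard output:
#!/usr/bin/env python3
# gpio_watch.py: poll a GPIO pin via sysfs and print one reading per line.
# GPIO_PIN is a placeholder; use your kernel GPIO number. Needs permission
# on /sys/class/gpio (e.g. run with sudo, or add your user to the gpio group).
import sys
import time

GPIO_PIN = 12
VALUE_FILE = "/sys/class/gpio/gpio%d/value" % GPIO_PIN

# Export the pin; this fails harmlessly if it is already exported.
try:
    with open("/sys/class/gpio/export", "w") as f:
        f.write(str(GPIO_PIN))
except OSError:
    pass

while True:
    with open(VALUE_FILE) as f:
        print(f.read().strip())  # "0" or "1"
    sys.stdout.flush()
    time.sleep(0.5)
A Node.js process can then start this script with child_process.spawn('python3', ['gpio_watch.py']) and listen to its stdout 'data' events, in the spirit of the linked tutorial.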

Related

Unpivoting data in Excel that contains multiple (15) categories in columns using VBA

I have code that unpivots columns into rows. There are 19 categories of data, 15 of which have been unpivoted. However, some of the tables that are unpivoted are not showing up in the new rows. I am asking for anyone's expertise, as this will be helpful for me in future endeavors. I have created a table; bear in mind it is extremely wide, with 131 columns (I believe) and only 7 rows. Below is the table of the original data (it is make-believe data, of course, but will be used for real data in the future). The second table is how I want it to look, and the third table is how it actually looks. Under that is my code. I will gladly upvote anyone who helps me. Thank you in advance.
Original data:
| usr | Company | Dept.# | Dept1 | Dept2 | Dept3 | Dept4 | Hr1 | Tr1 | F1 | A1 | HOH1 | M1 | R1 | SO1 | BIG1 | T1 | P1 | X1 | Y1 | Z1 | Tin1 | Hr1 | Tr1 | F1 | A1 | HOH1 | M1 | R1 | SO1 | BIG1 | T1 | P1 | X1 | Y1 | Z1 | Tin1 | Hr1 | Tr1 | F1 | A1 | HOH1 | M1 | R1 | SO1 | BIG1 | T1 | P1 | X1 | Y1 | Z1 | Tin1 | Hr1 | Tr1 | F1 | A1 | HOH1 | M1 | R1 | SO1 | BIG1 | T1 | P1 | X1 | Y1 | Z1 | Tin1 | Hr2 | Tr2 | F2 | A2 | HOH2 | M2 | R2 | SO2 | BIG2 | T2 | P2 | X2 | Y2 | Z2 | Tin2 | Hr2 | Tr2 | F2 | A2 | HOH2 | M2 | R2 | SO2 | BIG2 | T2 | P2 | X2 | Y2 | Z2 | Tin2 | Hr2 | Tr2 | F2 | A2 | HOH2 | M2 | R2 | SO2 | BIG2 | T2 | P2 | X2 | Y2 | Z2 | Tin2 | Hr3 | Tr3 | F3 | A3 | HOH3 | M3 | R3 | SO3 | BIG3 | T3 | P3 | X2 | Y2 | Z2 | Tin2 | Hr3 | Tr3 | F3 | A3 | HOH3 | M3 | R3 | SO3 | BIG3 | T3 | P3 | X3 | Y3 | Z3 | Tin3 | Hr4 | Tr4 | F4 | A4 | HOH4 | M4 | R4 | SO4 | BIG4 | T4 | P4 | X4 | Y4 | Z4 | Tin4 |
|------|---------|--------|-------|-------|-------|-------|-----|-----|-----|-----|------|----|----|-----|------|----|-----|-----|-----|----|------|-----|-----|----|-----|------|----|-----|-----|------|----|-----|-----|-----|----|------|-----|-----|----|-----|------|----|----|-----|------|-----|----|----|----|-----|------|-----|-----|----|-----|------|----|----|-----|------|----|----|-----|-----|----|------|-----|-----|-----|-----|------|----|----|-----|------|----|----|-----|----|-----|------|-----|-----|----|-----|------|----|----|-----|------|----|----|----|----|-----|------|-----|-----|-----|-----|------|----|----|-----|------|----|----|----|-----|----|------|-----|-----|-----|-----|------|----|-----|-----|------|----|----|-----|-----|----|------|-----|-----|-----|-----|------|----|----|-----|------|----|----|-----|-----|-----|------|-----|-----|-----|-----|------|----|----|-----|------|----|-----|-----|-----|-----|------|
| xxxx | OS | 1 | Train | | | | 20 | 89 | 355 | 123 | 435 | 90 | 5 | 55 | 676 | 34 | 43 | 984 | 345 | 74 | 846 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| xxxx | OPC | 2 | Poxy1 | Poxy2 | | | | | | | | | | | | | | | | | | 45 | 546 | 68 | 345 | 903 | 70 | 345 | 23 | 54 | 32 | 234 | 23 | 567 | 69 | 64 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 38 | 67 | 235 | 789 | 7 | 40 | 99 | 98 | 87 | 89 | 34 | 312 | 42 | 756 | 23 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| xxxx | Oxy R | 4 | H1 | H2 | H3 | H4 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 22 | 36 | 13 | 678 | 64 | 40 | 34 | 239 | 76 | 87 | 34 | 999 | 965 | 34 | 93 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 89 | 54 | 761 | 765 | 9 | 20 | 22 | 65 | 78 | 98 | 78 | 75 | 354 | 23 | 23 | | | | | | | | | | | | | | | | 36 | 80 | 123 | 543 | 17 | 20 | 11 | 908 | 988 | 7 | 86 | 245 | 546 | 763 | 324 | 25 | 90 | 111 | 432 | 84 | 25 | 63 | 784 | 98 | 78 | 854 | 754 | 234 | 865 | 43 |
| xxxx | HPK | 3 | Test1 | Test2 | Test3 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 99 | 456 | 39 | 567 | 223 | 50 | 5 | 32 | 549 | 435 | 34 | 87 | 64 | 348 | 942 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 52 | 21 | 47 | 876 | 1 | 30 | 46 | 92 | 78 | 12 | 34 | 12 | 12 | 421 | 23 | | | | | | | | | | | | | | | | 90 | 76 | 773 | 654 | 49 | 10 | 223 | 982 | 566 | 23 | 54 | 786 | 356 | 73 | 654 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| xxxx | Mano | 1 | Porp | | | | 42 | 657 | 645 | 234 | 344 | 80 | 45 | 364 | 97 | 23 | 634 | 34 | 23 | 87 | 84 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| xxxx | Macro | 2 | Otto1 | Otto2 | | | | | | | | | | | | | | | | | | 75 | 574 | 46 | 456 | 453 | 60 | 44 | 235 | 867 | 5 | 433 | 234 | 346 | 46 | 35 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 23 | 433 | 186 | 987 | 2 | 30 | 34 | 58 | 87 | 43 | 34 | 23 | 62 | 73 | 32 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
How I want it to look:
| usr | Company | Dept# | Dept | Hrs | Tr | F | A | HOH | M | R | SO | BIG | T | P | X | Y | Z | Tin |
|------|---------|-------|-------|-----|-----|-----|-----|-----|----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| xxxx | OS | 1 | Train | 20 | 89 | 355 | 123 | 435 | 90 | 5 | 55 | 676 | 34 | 43 | 984 | 345 | 74 | 846 |
| xxxx | OPC | 2 | Poxy1 | 45 | 546 | 68 | 345 | 903 | 70 | 345 | 23 | 54 | 32 | 234 | 23 | 567 | 69 | 64 |
| xxxx | OPC | 2 | Poxy2 | 38 | 67 | 235 | 789 | 7 | 40 | 99 | 98 | 87 | 89 | 34 | 312 | 42 | 756 | 23 |
| xxxx | Oxy R | 4 | H1 | 22 | 36 | 13 | 678 | 64 | 40 | 34 | 239 | 76 | 87 | 34 | 999 | 965 | 34 | 93 |
| xxxx | Oxy R | 4 | H2 | 89 | 54 | 761 | 765 | 9 | 20 | 22 | 65 | 78 | 98 | 78 | 75 | 354 | 23 | 23 |
| xxxx | Oxy R | 4 | H3 | 36 | 80 | 123 | 543 | 17 | 20 | 11 | 908 | 988 | 7 | 86 | 245 | 546 | 763 | 324 |
| xxxx | Oxy R | 4 | H4 | 25 | 90 | 111 | 432 | 84 | 25 | 63 | 784 | 98 | 78 | 854 | 754 | 234 | 865 | 43 |
| xxxx | HPK | 3 | Test1 | 99 | 456 | 39 | 567 | 223 | 50 | 5 | 32 | 549 | 435 | 34 | 87 | 64 | 348 | 942 |
| xxxx | HPK | 3 | Test2 | 52 | 21 | 47 | 876 | 1 | 30 | 46 | 92 | 78 | 12 | 34 | 12 | 12 | 421 | 23 |
| xxxx | HPK | 3 | Test3 | 90 | 76 | 773 | 654 | 49 | 10 | 223 | 982 | 566 | 23 | 54 | 786 | 356 | 73 | 654 |
| xxxx | Mano | 1 | Porp | 42 | 657 | 645 | 234 | 344 | 80 | 45 | 364 | 97 | 23 | 634 | 34 | 23 | 87 | 84 |
| xxxx | Macro | 2 | Otto1 | 73 | 574 | 46 | 456 | 453 | 60 | 44 | 235 | 867 | 5 | 433 | 234 | 346 | 46 | 35 |
| xxxx | Macro | 2 | Otto2 | 23 | 433 | 186 | 987 | 2 | 30 | 34 | 58 | 87 | 43 | 34 | 23 | 62 | 73 | 32 |
This is how it actually looks, which is wrong. As you can see, data is missing for some reason:
| usr | Company | Dept# | Dept | Hrs | Tr | F | A | HOH | M | R | SO | BIG | T | P | X | Y | Z | Tin |
|------|---------|-------|-------|-----|-----|-----|-----|-----|----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| xxxx | OS | 1 | Train | 20 | 89 | 355 | 123 | 435 | 90 | 5 | 55 | 676 | 34 | 43 | 984 | 345 | 74 | 846 |
| xxxx | OPC | 2 | Poxy1 | 45 | 546 | 68 | 345 | 903 | 70 | 345 | 23 | 54 | 32 | 234 | 23 | 567 | 69 | 64 |
| xxxx | OPC | 2 | Poxy2 | 38 | 67 | 235 | 789 | 7 | 40 | 99 | 98 | 87 | 89 | 34 | 312 | 42 | 756 | 23 |
| xxxx | Oxy R | 4 | H1 | 22 | 36 | 13 | 678 | 64 | 40 | 34 | 239 | 76 | 87 | 34 | 999 | 965 | 34 | 93 |
| xxxx | Oxy R | 4 | H2 | 89 | 54 | 761 | 765 | 9 | 20 | 22 | 65 | 78 | 98 | 78 | 75 | 354 | 23 | 23 |
| xxxx | Oxy R | 4 | H3 | | | | | | | | | | | | | | | |
| xxxx | Oxy R | 4 | H4 | | | | | | | | | | | | | | | |
| xxxx | HPK | 3 | Test1 | 99 | 456 | 39 | 567 | 223 | 50 | 5 | 32 | 549 | 435 | 34 | 87 | 64 | 348 | 942 |
| xxxx | HPK | 3 | Test2 | 52 | 21 | 47 | 876 | 1 | 30 | 46 | 92 | 78 | 12 | 34 | 12 | 12 | 421 | 23 |
| xxxx | HPK | 3 | Test3 | | | | | | | | | | | | | | | |
| xxxx | Mano | 1 | Porp | 42 | 657 | 645 | 234 | 344 | 80 | 45 | 364 | 97 | 23 | 634 | 34 | 23 | 87 | 84 |
| xxxx | Macro | 2 | Otto1 | 73 | 574 | 46 | 456 | 453 | 60 | 44 | 235 | 867 | 5 | 433 | 234 | 346 | 46 | 35 |
| xxxx | Macro | 2 | Otto2 | 23 | 433 | 186 | 987 | 2 | 30 | 34 | 58 | 87 | 43 | 34 | 23 | 62 | 73 | 32 |
Here is my code:
Sub buttonclick()
    Dim Ary As Variant, Nary As Variant, Cary As Variant
    Dim r As Long, c As Long, nr As Long, cc As Long, k As Long

    ' First/last source-column numbers for each Dept block, packed as digits
    Cary = Array("0853", 6898, 113128, 143143)

    With Sheets("Sheet1")
        Ary = .Range("A2:DM" & .Range("A" & Rows.Count).End(xlUp).Row).Value2
    End With

    ReDim Nary(1 To UBound(Ary) * 4, 1 To 19)

    For r = 1 To UBound(Ary)
        For c = 4 To 7                              ' Dept1..Dept4
            If Ary(r, c) = "" Then Exit For
            nr = nr + 1
            Nary(nr, 1) = Ary(r, 1): Nary(nr, 2) = Ary(r, 2): Nary(nr, 3) = Ary(r, 3)
            Nary(nr, 4) = Ary(r, c)
            ' Walk this Dept's 15-column sub-blocks and concatenate the values
            For cc = Left(Cary(c - 4), 2) To Right(Cary(c - 4), 2) Step 15
                For k = 0 To 14
                    Nary(nr, 5 + k) = Nary(nr, 5 + k) & Ary(r, cc + k)
                Next k
            Next cc
        Next c
    Next r

    With Sheets("Sheet2")
        .UsedRange.ClearContents
        .Range("A1").Resize(, 19).Value = Array("usr", "Company", "Dept.#", "Dept", "Hrs", _
                                                "Tr", "F", "A", "HOH", "M", "R", "SO", "BIG", _
                                                "T", "P", "X", "Y", "Z", "Tin")
        .Range("A2").Resize(nr, 19).Value = Nary
    End With
End Sub
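One likely cause of the missing rows, offered as a diagnosis rather than a certainty: Cary packs each Dept block's first and last column numbers together as digits, and Left(Cary(c - 4), 2) / Right(Cary(c - 4), 2) recover only two digits from each end. That works for "0853" (8 to 53) and 6898 (68 to 98), but for the six-digit entries 113128 and 143143 it produces 11 to 28 and 14 to 43 rather than 113 to 128 and 143 to 143, so the Dept3/Dept4 passes read the wrong (blank) columns. That matches exactly the rows that come out empty (H3, H4 and Test3). Zero-padding every bound to three digits (for example "008053", "068098", "113128", "143143") and taking Left and Right with a length of 3 should fix the lookup; also verify that the range being read (A2:DM spans only 117 columns) actually reaches the last block's columns.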

How to create a calculated column in Access 2013 to detect duplicates

I'm recreating a tool I made in Excel, as it's getting bigger and performance is getting out of hand.
The issue is that I only have MS Access 2013 on my work laptop, and I'm fairly new to the Expression Builder in Access 2013, which, to be honest, has a very limited function base.
My data has duplicates in the [Location] column, meaning that I have multiple SKUs in the same warehouse location. However, some of my calculations need to be done only once per [Location]. My solution in Excel was a formula (see below) that puts 1 on only the first appearance of a location and 0 on subsequent appearances. That works like a charm, because summing over that [Duplicate] column while imposing multiple criteria returns the number of occurrences matching those criteria, counting each location only once.
Now, the MS Access 2013 Expression Builder has no SUM or COUNT functions with which to create a calculated column emulating my [Duplicate] column from Excel. Preferably, I would input just the raw data and let Access populate the calculated fields, rather than inputting the calculated fields as well, since that would defeat my original purpose of reducing the computational cost of creating my dashboard.
The question is: how would you create a calculated column in the MS Access 2013 Expression Builder to recreate the Excel function below?
= IF($D$2:$D3=$D4,0,1)
For the sake of reducing the file size (over 100K rows), I even replace the 0 with a blank character "".
Thanks in advance for your help
Y
First and foremost, understand that MS Access's Expression Builder is a convenience tool for building an SQL expression; everything in Query Design ultimately builds an SQL query. For this reason, you have to adopt a set-based mentality, seeing data as whole sets of related tables rather than cell by cell.
Specifically, to achieve:
putting 1 only on the first appearance of that location, putting 0 on next appearances
consider a whole set-based approach: join to a separate aggregate query that identifies the first record of your needed grouping, then compute the needed IIF expression. The below assumes your table has an AutoNumber or other primary key field (a standard in relational databases):
Aggregate Query (save as a separate query, adjust columns as needed)
SELECT ColumnD, MIN(AutoNumberID) As MinID
FROM myTable
GROUP BY ColumnD
Final Query (join to original table and build final IIF expression)
SELECT m.*, IIF(agg.MinID = AutoNumberID, 1, 0) As Dup_Indicator
FROM myTable m
INNER JOIN myAggregateQuery agg
ON m.[ColumnD] = agg.ColumnD
To demonstrate with random data:
Original
| ID | GROUP | INT | NUM | CHAR | BOOL | DATE |
|----|--------|-----|--------------|------|-------|------------|
| 1 | r | 9 | 1.424490258 | B6z | TRUE | 7/4/1994 |
| 2 | stata | 10 | 2.591235683 | h7J | FALSE | 10/5/1971 |
| 3 | spss | 6 | 0.560461966 | Hrn | TRUE | 11/27/1990 |
| 4 | stata | 10 | -1.499272175 | eXL | FALSE | 4/17/2010 |
| 5 | stata | 15 | 1.470269177 | Vas | TRUE | 6/13/2010 |
| 6 | r | 14 | -0.072238898 | puP | TRUE | 4/1/1994 |
| 7 | julia | 2 | -1.370405263 | S2l | FALSE | 12/11/1999 |
| 8 | spss | 6 | -0.153684675 | mAw | FALSE | 7/28/1977 |
| 9 | spss | 10 | -0.861482674 | cxC | FALSE | 7/17/1994 |
| 10 | spss | 2 | -0.817222582 | GRn | FALSE | 10/19/2012 |
| 11 | stata | 2 | 0.949287754 | xgc | TRUE | 1/18/2003 |
| 12 | stata | 5 | -1.580841322 | Y1D | TRUE | 6/3/2011 |
| 13 | r | 14 | -1.671303816 | JCP | FALSE | 5/15/1981 |
| 14 | r | 7 | 0.904181025 | Rct | TRUE | 7/24/1977 |
| 15 | stata | 10 | -1.198211174 | qJY | FALSE | 5/6/1982 |
| 16 | julia | 10 | -0.265808162 | 10s | FALSE | 3/18/1975 |
| 17 | r | 13 | -0.264955027 | 8Md | TRUE | 6/11/1974 |
| 18 | r | 4 | 0.518302149 | 4KW | FALSE | 9/12/1980 |
| 19 | r | 5 | -0.053620183 | 8An | FALSE | 4/17/2004 |
| 20 | r | 14 | -0.359197116 | F8Q | TRUE | 6/14/2005 |
| 21 | spss | 11 | -2.211875193 | AgS | TRUE | 4/11/1973 |
| 22 | stata | 4 | -1.718749471 | Zqr | FALSE | 2/20/1999 |
| 23 | python | 10 | 1.207878576 | tcC | FALSE | 4/18/2008 |
| 24 | stata | 11 | 0.548902226 | PFJ | TRUE | 9/20/1994 |
| 25 | stata | 6 | 1.479125922 | 7a7 | FALSE | 3/2/1989 |
| 26 | python | 10 | -0.437245299 | r32 | TRUE | 6/7/1997 |
| 27 | sas | 14 | 0.404746106 | 6NJ | TRUE | 9/23/2013 |
| 28 | stata | 8 | 2.206741458 | Ive | TRUE | 5/26/2008 |
| 29 | spss | 12 | -0.470694096 | dPS | TRUE | 5/4/1983 |
| 30 | sas | 15 | -0.57169507 | yle | TRUE | 6/20/1979 |
SQL (uses the aggregate as a subquery, but it can be a stored query; GROUP must be wrapped in brackets because it is a reserved word in Access)
SELECT r.*, IIF(sub.MinID = r.ID, 1, 0) AS Dup
FROM Random_Data r
LEFT JOIN
(
    SELECT [GROUP], MIN(ID) As MinID
    FROM Random_Data
    GROUP BY [GROUP]
) sub
ON r.[GROUP] = sub.[GROUP]
Output (notice the first GROUP value is tagged 1, all else 0)
| ID | GROUP | INT | NUM | CHAR | BOOL | DATE | Dup |
|----|--------|-----|--------------|------|-------|------------|-----|
| 1 | r | 9 | 1.424490258 | B6z | TRUE | 7/4/1994 | 1 |
| 2 | stata | 10 | 2.591235683 | h7J | FALSE | 10/5/1971 | 1 |
| 3 | spss | 6 | 0.560461966 | Hrn | TRUE | 11/27/1990 | 1 |
| 4 | stata | 10 | -1.499272175 | eXL | FALSE | 4/17/2010 | 0 |
| 5 | stata | 15 | 1.470269177 | Vas | TRUE | 6/13/2010 | 0 |
| 6 | r | 14 | -0.072238898 | puP | TRUE | 4/1/1994 | 0 |
| 7 | julia | 2 | -1.370405263 | S2l | FALSE | 12/11/1999 | 1 |
| 8 | spss | 6 | -0.153684675 | mAw | FALSE | 7/28/1977 | 0 |
| 9 | spss | 10 | -0.861482674 | cxC | FALSE | 7/17/1994 | 0 |
| 10 | spss | 2 | -0.817222582 | GRn | FALSE | 10/19/2012 | 0 |
| 11 | stata | 2 | 0.949287754 | xgc | TRUE | 1/18/2003 | 0 |
| 12 | stata | 5 | -1.580841322 | Y1D | TRUE | 6/3/2011 | 0 |
| 13 | r | 14 | -1.671303816 | JCP | FALSE | 5/15/1981 | 0 |
| 14 | r | 7 | 0.904181025 | Rct | TRUE | 7/24/1977 | 0 |
| 15 | stata | 10 | -1.198211174 | qJY | FALSE | 5/6/1982 | 0 |
| 16 | julia | 10 | -0.265808162 | 10s | FALSE | 3/18/1975 | 0 |
| 17 | r | 13 | -0.264955027 | 8Md | TRUE | 6/11/1974 | 0 |
| 18 | r | 4 | 0.518302149 | 4KW | FALSE | 9/12/1980 | 0 |
| 19 | r | 5 | -0.053620183 | 8An | FALSE | 4/17/2004 | 0 |
| 20 | r | 14 | -0.359197116 | F8Q | TRUE | 6/14/2005 | 0 |
| 21 | spss | 11 | -2.211875193 | AgS | TRUE | 4/11/1973 | 0 |
| 22 | stata | 4 | -1.718749471 | Zqr | FALSE | 2/20/1999 | 0 |
| 23 | python | 10 | 1.207878576 | tcC | FALSE | 4/18/2008 | 1 |
| 24 | stata | 11 | 0.548902226 | PFJ | TRUE | 9/20/1994 | 0 |
| 25 | stata | 6 | 1.479125922 | 7a7 | FALSE | 3/2/1989 | 0 |
| 26 | python | 10 | -0.437245299 | r32 | TRUE | 6/7/1997 | 0 |
| 27 | sas | 14 | 0.404746106 | 6NJ | TRUE | 9/23/2013 | 1 |
| 28 | stata | 8 | 2.206741458 | Ive | TRUE | 5/26/2008 | 0 |
| 29 | spss | 12 | -0.470694096 | dPS | TRUE | 5/4/1983 | 0 |
| 30 | sas | 15 | -0.57169507 | yle | TRUE | 6/20/1979 | 0 |

Count specific value for IDs in two dataframes

I have two dataframes
df1
+----+-------+
| | Key |
|----+-------|
| 0 | 30 |
| 1 | 31 |
| 2 | 32 |
| 3 | 33 |
| 4 | 34 |
| 5 | 35 |
+----+-------+
df2
+----+-------+--------+
| | Key | Test |
|----+-------+--------|
| 0 | 30 | Test4 |
| 1 | 30 | Test5 |
| 2 | 30 | Test6 |
| 3 | 31 | Test4 |
| 4 | 31 | Test5 |
| 5 | 31 | Test6 |
| 6 | 32 | Test3 |
| 7 | 33 | Test3 |
| 8 | 33 | Test3 |
| 9 | 34 | Test1 |
| 10 | 34 | Test1 |
| 11 | 34 | Test2 |
| 12 | 34 | Test3 |
| 13 | 34 | Test3 |
| 14 | 34 | Test3 |
| 15 | 35 | Test3 |
| 16 | 35 | Test3 |
| 17 | 35 | Test3 |
| 18 | 35 | Test3 |
| 19 | 35 | Test3 |
+----+-------+--------+
I want to count how many times each Test is listed for each Key.
+----+-------+-------+-------+-------+-------+-------+-------+
| | Key | Test1 | Test2 | Test3 | Test4 | Test5 | Test6 |
|----+-------|-------|-------|-------|-------|-------|-------|
| 0 | 30 | | | | 1 | 1 | 1 |
| 1 | 31 | | | | 1 | 1 | 1 |
| 2 | 32 | | | 1 | | | |
| 3 | 33 | | | 2 | | | |
| 4 | 34 | 2 | 1 | 3 | | | |
| 5 | 35 | | | 5 | | | |
+----+-------+-------+-------+-------+-------+-------+-------+
What I've tried
Using join and groupby, I first got the count for each Key, regardless of Test.
result_df = df1.join(df2.groupby('Key').size().rename('Count'), on='Key')
+----+-------+---------+
| | Key | Count |
|----+-------+---------|
| 0 | 30 | 3 |
| 1 | 31 | 3 |
| 2 | 32 | 1 |
| 3 | 33 | 2 |
| 4 | 34 | 6 |
| 5 | 35 | 5 |
+----+-------+---------+
I then tried to group by both Key and Test
result_df = df1.join(df2.groupby(['Key', 'Test']).size().rename('Count'), on='Key')
but this returns an error
ValueError: len(left_on) must equal the number of levels in the index of "right"
Check with crosstab:
pd.crosstab(df2.Key,df2.Test).reindex(df1.Key).replace({0:''})
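For reference, here is a self-contained run of that one-liner, with the two sample frames transcribed from the tables above:
import pandas as pd

df1 = pd.DataFrame({'Key': [30, 31, 32, 33, 34, 35]})
df2 = pd.DataFrame({
    'Key':  [30, 30, 30, 31, 31, 31, 32, 33, 33, 34,
             34, 34, 34, 34, 34, 35, 35, 35, 35, 35],
    'Test': ['Test4', 'Test5', 'Test6', 'Test4', 'Test5', 'Test6',
             'Test3', 'Test3', 'Test3', 'Test1', 'Test1', 'Test2',
             'Test3', 'Test3', 'Test3', 'Test3', 'Test3', 'Test3',
             'Test3', 'Test3'],
})

# Rows come from df1's keys, one column per Test, zeros blanked out.
print(pd.crosstab(df2.Key, df2.Test).reindex(df1.Key).replace({0: ''}))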
Here is another solution, with groupby & pivot. With this solution you don't need df1 at all.
# | create some dummy data
import numpy as np
import pandas as pd

tests = ['Test' + str(i) for i in range(1, 7)]
df = pd.DataFrame({'Test': np.random.choice(tests, size=100),
                   'Key': np.random.randint(30, 36, size=100)})  # high end is exclusive, so keys are 30-35
df['Count Variable'] = 1
# | group & count aggregation
df = df.groupby(['Key', 'Test']).count().reset_index()  # flatten the MultiIndex so pivot can see the columns
df = df.pivot(index="Key", columns="Test", values="Count Variable").reset_index()
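A side note tying this back to the ValueError in the question: groupby(['Key', 'Test']).size() returns a Series with a two-level MultiIndex (Key, Test), while join with on='Key' expects a single-level index on the right, hence the length mismatch. Unstacking the Test level first flattens the index, after which the original join works on the real frames:
counts = df2.groupby(['Key', 'Test']).size().unstack(fill_value=0)
result_df = df1.join(counts, on='Key')  # zero counts appear as 0; chain .replace({0: ''}) to blank them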

How to add space between rows and sum up automatically in Excel

Let's say that I have a table like the one below:
| | Value 1 | Value 2 | Value 3 | |
|---|---------|---------|---------|---|
| A | 22 | 12 | 3 | |
| A | 5 | 6 | 12 | |
| A | 19 | 9 | 13 | |
| A | 22 | 43 | 31 | |
| B | 7 | 12 | 23 | |
| B | 5 | 5 | 8 | |
| B | 35 | 78 | 9 | |
| B | 45 | 1 | 8 | |
| C | 34 | 56 | 0 | |
| C | 22 | 1 | 14 | |
| C | 13 | 46 | 45 | |
and that I need to transform it into the one below:
| | Value 1 | Value 2 | Value 3 | |
|---|---------|---------|---------|---|
| A | 22 | 12 | 3 | |
| A | 5 | 6 | 12 | |
| A | 19 | 9 | 13 | |
| A | 22 | 43 | 31 | |
| | 68 | 70 | 59 | |
| | | | | |
| B | 7 | 12 | 23 | |
| B | 5 | 5 | 8 | |
| B | 35 | 78 | 9 | |
| B | 45 | 1 | 8 | |
| | 92 | 96 | 48 | |
| | | | | |
| C | 34 | 56 | 0 | |
| C | 22 | 1 | 14 | |
| C | 13 | 46 | 45 | |
| | 69 | 103 | 59 | |
How could I obtain the desired effect automatically?
There would be n empty rows after each group and the sums of each column within the group.
You can use the Subtotal feature of Excel (in the Data tab of the ribbon) to automatically add the totals between groupings. I don't think it adds the blank row, though. If you absolutely need the blank row, I can generate some VBA that will work.
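For this example the Subtotal dialog would be: select the data, go to Data > Subtotal, set "At each change in" to the first (letter) column, "Use function" to Sum, and tick Value 1, Value 2 and Value 3 under "Add subtotal to". Excel then inserts a sum row after each run of A, B and C rows; only the extra blank spacer row still needs VBA or a manual step.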

How to pretty print a CSV that has long columns from the command line?

I want to view and pretty print this CSV file from the command line. For this purpose I am using the command csvlook nupic_out.csv | less -#2 -N -S. The problem is that this CSV file has one very long column (the 5th, multiStepPredictions.1). Everything up to this column is displayed properly:
1 -----------------+--------------------+-----------------------------+------------------------------------------------------------------------------------------------------------------------------------
2 angle | sine | multiStepPredictions.actual | multiStepPredictions.1
3 -----------------+--------------------+-----------------------------+------------------------------------------------------------------------------------------------------------------------------------
4 string | string | string | string
5 | | |
6 0.0 | 0.0 | 0.0 | None
7 0.0314159265359 | 0.0314107590781 | 0.0314107590781 | {0.0: 1.0}
8 0.0628318530718 | 0.0627905195293 | 0.0627905195293 | {0.0: 0.0039840637450199202 0.03141075907812829: 0.99601593625497931}
9 0.0942477796077 | 0.0941083133185 | 0.0941083133185 | {0.03141075907812829: 1.0}
10 0.125663706144 | 0.125333233564 | 0.125333233564 | {0.06279051952931337: 0.98942669172932329 0.03141075907812829: 0.010573308270676691}
11 0.157079632679 | 0.15643446504 | 0.15643446504 | {0.03141075907812829: 0.0040463956041429626 0.09410831331851431: 0.94917381047888194 0.06279051952931337: 0.04677979391
12 0.188495559215 | 0.187381314586 | 0.187381314586 | {0.12533323356430426: 0.85789473684210527 0.09410831331851431: 0.14210526315789476}
13 0.219911485751 | 0.218143241397 | 0.218143241397 | {0.15643446504023087: 0.63177315983686211 0.12533323356430426: 0.26859584385317475 0.09410831331851431: 0.09963099630
14 0.251327412287 | 0.248689887165 | 0.248689887165 | {0.06279051952931337: 0.3300438596491227 0.1873813145857246: 0.47381368550527647 0.15643446504023087: 0.12643231695
15 0.282743338823 | 0.278991106039 | 0.278991106039 | {0.21814324139654254: 0.56140350877192935 0.03141075907812829: 0.0032894736842105313 0.1873813145857246: 0.105263157894
16 0.314159265359 | 0.309016994375 | 0.309016994375 | {0.2486898871648548: 0.8228480378168288 0.03141075907812829: 0.0029688002160632981 0.1873813145857246: 0.022936632244020292
17 0.345575191895 | 0.338737920245 | 0.338737920245 | {0.2486898871648548: 0.13291723147401985 0.2789911060392293: 0.77025390613412514 0.21814324139654254: 0.06654338668
18 0.376991118431 | 0.368124552685 | 0.368124552685 | {0.2486898871648548: 0.10230061459892241 0.2789911060392293: 0.14992465949587844 0.21814324139654254: 0.06517018413
19 0.408407044967 | 0.397147890635 | 0.397147890635 | {0.33873792024529137: 0.67450197451277849 0.2486898871648548: 0.028274124758268366 0.2789911060392293: 0.077399230934
20 0.439822971503 | 0.425779291565 | 0.425779291565 | {0.33873792024529137: 0.17676914536466748 0.3681245526846779: 0.6509556160617509 0.2486898871648548: 0.04784688995215327
21 0.471238898038 | 0.45399049974 | 0.45399049974 | {0.33873792024529137: 0.038582651338955089 0.3681245526846779: 0.14813277049357607 0.2486898871648548: 0.029239766081
22 0.502654824574 | 0.481753674102 | 0.481753674102 | {0.3681245526846779: 0.035163881050575212 0.42577929156507266: 0.61447711863333254 0.2486898871648548: 0.015554881705
23 0.53407075111 | 0.50904141575 | 0.50904141575 | {0.33873792024529137: 0.076923076923077108 0.42577929156507266: 0.11307647489430354 0.45399049973954675: 0.66410206612
24 0.565486677646 | 0.535826794979 | 0.535826794979 | {0.42577929156507266: 0.035628438284964516 0.45399049973954675: 0.22906083786048709 0.3971478906347806: 0.014132015120
25 0.596902604182 | 0.562083377852 | 0.562083377852 | {0.5090414157503713: 0.51578106597362727 0.45399049973954675: 0.095000708551421106 0.06279051952931337: 0.08649420683
26 0.628318530718 | 0.587785252292 | 0.587785252292 | {0.5090414157503713: 0.10561370056909389 0.45399049973954675: 0.063130123291224485 0.5358267949789967: 0.617348556187
27 0.659734457254 | 0.612907053653 | 0.612907053653 | {0.5090414157503713: 0.036017118165629407 0.45399049973954675: 0.013316643552779454 0.5358267949789967: 0.236874795987
28 0.69115038379 | 0.637423989749 | 0.637423989749 | {0.2486898871648548: 0.037593984962406228 0.21814324139654254: 0.033834586466165564 0.5358267949789967: 0.085397996837
29 0.722566310326 | 0.661311865324 | 0.661311865324 | {0.6129070536529765: 0.49088597257034694 0.2486898871648548: 0.072573707671854309 0.06279051952931337: 0.04684445139
30 0.753982236862 | 0.684547105929 | 0.684547105929 | {0.6129070536529765: 0.16399317807418579 0.2486898871648548: 0.066194656736965368 0.2789911060392293: 0.015074193295
But everything displayed beyond this column is garbage:
1 --------------------------------------------------------------------------------------------------------+--------------+---------------------------------+----------------------------+--------------+---
2 | anomalyScore | multiStepBestPredictions.actual | multiStepBestPredictions.1 | anomalyLabel | mu
3 --------------------------------------------------------------------------------------------------------+--------------+---------------------------------+----------------------------+--------------+---
4 | string | string | string | string | fl
5 | | | | |
6 | 1.0 | 0.0 | None | [] | 0
7 | 1.0 | 0.0314107590781 | 0.0 | [] | 10
8 | 1.0 | 0.0627905195293 | 0.0314107590781 | []
9 | 1.0 | 0.0941083133185 | 0.0314107590781 | [] | 66
10 | 1.0 | 0.125333233564 | 0.0627905195293 | []
11 | 1.0 | 0.15643446504 | 0.0941083133185 | []
12 | 1.0 | 0.187381314586 | 0.125333233564 | []
13 | 1.0 | 0.218143241397 | 0.15643446504 | []
14 | 1.0 | 0.248689887165 | 0.187381314586
15 | 1.0 | 0.278991106039 | 0.218143241397 |
16 | 1.0 | 0.309016994375 | 0.248689887165 | []
17 | 1.0 | 0.338737920245 | 0.278991106039
18 075907812829: 0.0008726186745285988 0.3090169943749474: 0.36571033632089267 0.15643446504023087: 0.15263157894736851} | 1.0 | 0.368124552685 | 0.30
19 69943749474: 0.12243639244611626 0.15643446504023087: 0.076923076923077024} | 1.0 | 0.397147890635 | 0.33873792
20 474: 0.042824288244468607} | 1.0 | 0.425779291565 | 0.368124552685
21 78906347806: 0.72014752277063943 0.3090169943749474: 0.019779736758565116} | 1.0 | 0.45399049974 | 0.39714789
22 323356430426: 0.030959752321981428 0.09410831331851431: 0.027863777089783253} | 1.0 | 0.481753674102 | 0.425779291
23 831331851431: 0.036437246963562819} | 1.0 | 0.50904141575 | 0.45399049974
24 831331851431: 0.011027980232581683} | 1.0 | 0.535826794979 | 0.481753674102
25 929156507266: 0.027856989831229011 0.15643446504023087: 0.02066616653788458 0.09410831331851431: 0.016739594895686508} | 1.0 | 0.562083377852 | 0.5090
26 13145857246: 0.08333333333333337 0.42577929156507266: 0.025020076940584089} | 1.0 | 0.587785252292 | 0.5358
27 075907812829: 0.0025974025974026035 0.5620833778521306: 0.59566175023106149} | 1.0 | 0.612907053653 | 0.5620833778
28 33778521306: 0.19639042255084313} one | 1.0 | 0.637423989749 | 0.587785252292
29 13145857246: 0.0046487548012272466 0.21814324139654254: 0.070071166027997234 0.5620833778521306: 0.087432430700408653} | 1.0 | 0.661311865324 | 0.612
30 39897486896: 0.53158336716673826 0.3090169943749474: 0.016749103661249369 0.5620833778521306: 0.027323827946545261} | 1.0 | 0.684547105929 | 0.6
How can I pretty print the whole CSV?
PS: The following commands (inspiration here) produce similar garbage:
column -s, -t < nupic_out.csv | less -#2 -N -S
csvtool readable nupic_out.csv | less -#2 -N -S
I believe that csvlook is treating the tab characters in that column just like any other character, and doesn't know about their special behaviour.
The easiest way to get the columns to line up is to minimally expand the tabs:
expand -t1 nupic_out.csv | csvlook
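If you would rather hand the layout problem to pandas, a rough alternative along the same lines (assuming Python 3 with pandas installed; the type rows visible in the output above are simply treated as data, which is fine for viewing) is:
expand -t1 nupic_out.csv | python3 -c "import sys, pandas as pd; print(pd.read_csv(sys.stdin).to_string())" | less -#2 -N -S
expand still does the tab work here; pandas just computes uniform column widths for the whole file.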
