Pandas Concat new column - python-3.x

Why do I get NaN in the 'ACTION' column?
It seems strange to me that I am getting that result. I have tried using ignore_index=True, but that raises a freq error.
C H L O OI V WAP ACTION
datetime
2017-03-14 00:52:00 8.25 8.25 8.19 8.21 302.0 1769.0 8.22 NaN
2017-03-13 23:54:00 8.09 8.10 8.09 8.10 6.0 65.0 8.10 NaN
2017-03-14 01:03:00 8.29 8.32 8.28 8.29 175.0 1084.0 8.30 NaN
2017-03-14 00:03:00 8.15 8.15 8.14 8.15 13.0 50.0 8.15 NaN
2017-03-13 23:57:00 8.13 8.13 8.12 8.12 3.0 6.0 8.12 NaN
I want to get:
C H L O OI V WAP ACTION
datetime
2017-03-14 00:52:00 8.25 8.25 8.19 8.21 302.0 1769.0 8.22 100
2017-03-13 23:54:00 8.09 8.10 8.09 8.10 6.0 65.0 8.10 200
2017-03-14 01:03:00 8.29 8.32 8.28 8.29 175.0 1084.0 8.30 300
2017-03-14 00:03:00 8.15 8.15 8.14 8.15 13.0 50.0 8.15 400
2017-03-13 23:57:00 8.13 8.13 8.12 8.12 3.0 6.0 8.12 500
buy_stp = pd.Series([100, 200, 300, 400, 500], name='ACTION')
print(buy_stp)
df10 = pd.concat([df_concat_results, buy_stp],
                 axis=1,
                 join_axes=[df_concat_results.index])
print(df10)

You need matching indexes when aligning a Series with a DataFrame; otherwise you get NaNs:
buy_stp.index = df.index
df['ACTION'] = buy_stp
print (df)
C H L O OI V WAP ACTION
datetime
2017-03-14 00:52:00 8.25 8.25 8.19 8.21 302.0 1769.0 8.22 100
2017-03-13 23:54:00 8.09 8.10 8.09 8.10 6.0 65.0 8.10 200
2017-03-14 01:03:00 8.29 8.32 8.28 8.29 175.0 1084.0 8.30 300
2017-03-14 00:03:00 8.15 8.15 8.14 8.15 13.0 50.0 8.15 400
2017-03-13 23:57:00 8.13 8.13 8.12 8.12 3.0 6.0 8.12 500
Or:
buy_stp = pd.Series([100,200,300,400,500],name= 'ACTION', index=df.index)
print(buy_stp)
datetime
2017-03-14 00:52:00 100
2017-03-13 23:54:00 200
2017-03-14 01:03:00 300
2017-03-14 00:03:00 400
2017-03-13 23:57:00 500
Name: ACTION, dtype: int64
df['ACTION'] = buy_stp
print (df)
C H L O OI V WAP ACTION
datetime
2017-03-14 00:52:00 8.25 8.25 8.19 8.21 302.0 1769.0 8.22 100
2017-03-13 23:54:00 8.09 8.10 8.09 8.10 6.0 65.0 8.10 200
2017-03-14 01:03:00 8.29 8.32 8.28 8.29 175.0 1084.0 8.30 300
2017-03-14 00:03:00 8.15 8.15 8.14 8.15 13.0 50.0 8.15 400
2017-03-13 23:57:00 8.13 8.13 8.12 8.12 3.0 6.0 8.12 500
It also works if you convert the Series to a NumPy array with .values or to a list; then only df and buy_stp need to have the same length:
df['ACTION'] = buy_stp.values
print (df)
C H L O OI V WAP ACTION
datetime
2017-03-14 00:52:00 8.25 8.25 8.19 8.21 302.0 1769.0 8.22 100
2017-03-13 23:54:00 8.09 8.10 8.09 8.10 6.0 65.0 8.10 200
2017-03-14 01:03:00 8.29 8.32 8.28 8.29 175.0 1084.0 8.30 300
2017-03-14 00:03:00 8.15 8.15 8.14 8.15 13.0 50.0 8.15 400
2017-03-13 23:57:00 8.13 8.13 8.12 8.12 3.0 6.0 8.12 500
df['ACTION'] = buy_stp.tolist()
print (df)
C H L O OI V WAP ACTION
datetime
2017-03-14 00:52:00 8.25 8.25 8.19 8.21 302.0 1769.0 8.22 100
2017-03-13 23:54:00 8.09 8.10 8.09 8.10 6.0 65.0 8.10 200
2017-03-14 01:03:00 8.29 8.32 8.28 8.29 175.0 1084.0 8.30 300
2017-03-14 00:03:00 8.15 8.15 8.14 8.15 13.0 50.0 8.15 400
2017-03-13 23:57:00 8.13 8.13 8.12 8.12 3.0 6.0 8.12 500

If I understand you correctly, you just want to add a column to a data frame. If so, this is the easiest way to do it (note that the indexes still need to align, as the other answer explains):
df['Action'] = buy_stp
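To see the alignment behaviour concretely, here is a minimal, self-contained sketch (the data is made up, not the question's full frame):

```python
import pandas as pd

# A DataFrame with a datetime index, mimicking the question's setup
df = pd.DataFrame(
    {'WAP': [8.22, 8.10, 8.30]},
    index=pd.to_datetime(['2017-03-14 00:52', '2017-03-13 23:54', '2017-03-14 01:03']),
)

# A Series with the default RangeIndex (0, 1, 2) -- no labels in common with df
buy_stp = pd.Series([100, 200, 300], name='ACTION')

df['ACTION'] = buy_stp            # aligns on index labels -> all NaN
print(df['ACTION'].isna().all())  # True

df['ACTION'] = buy_stp.values     # positional assignment -> values land in order
print(df['ACTION'].tolist())      # [100, 200, 300]
```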

Related

Can we replace outliers with the predicted values in pyspark?

I have a df in Spark:
(I am actually working on this dataset; it is not possible to paste the whole data, so here is the link)
df = https://www.kaggle.com/schirmerchad/bostonhoustingmlnd?select=housing.csv
Now I found the outliers as below (22 rows in total):
def IQR(df, column):
    quantiles = df.approxQuantile(column, [0.25, 0.75], 0)
    q1 = quantiles[0]
    q3 = quantiles[1]
    iqr = q3 - q1
    lower = q1 - 1.5 * iqr
    upper = q3 + 1.5 * iqr
    return (lower, upper)

lower, upper = IQR(df, 'RM')
# lower, upper = 4.8374999999999995, 7.617500000000001
outliers = df.filter((df['RM'] > upper) | (df['RM'] < lower))
Now below are the outliers detected :
RM LSTAT PTRATIO MEDV
8.069 4.21 18 812700
7.82 3.57 18 919800
7.765 7.56 17.8 835800
7.853 3.81 14.7 1018500
8.266 4.14 17.4 940800
8.04 3.13 17.4 789600
7.686 3.92 17.4 980700
8.337 2.47 17.4 875700
8.247 3.95 17.4 1014300
8.259 3.54 19.1 898800
8.398 5.91 13 1024800
7.691 6.58 18.6 739200
7.82 3.76 14.9 953400
7.645 3.01 14.9 966000
3.561 7.12 20.2 577500
3.863 13.33 20.2 485100
4.138 37.97 20.2 289800
4.368 30.63 20.2 184800
4.652 28.28 20.2 220500
4.138 23.34 20.2 249900
4.628 34.37 20.2 375900
4.519 36.98 20.2 147000
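The IQR bounds above can be sanity-checked locally with NumPy (a sketch on made-up numbers, not the Boston housing data):

```python
import numpy as np

# Hypothetical RM-like values
rm = np.array([5.0, 6.1, 6.3, 6.5, 8.3, 3.5])

# Same 1.5*IQR rule as the Spark function in the question
q1, q3 = np.quantile(rm, [0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = rm[(rm > upper) | (rm < lower)]
print(outliers.tolist())  # [8.3, 3.5]
```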
Now I want to replace the outliers with the ML-predicted values. After the ML process I got the predicted values below:
RM LSTAT PTRATIO MEDV column_assem column prediction
8.069 4.21 18 812700 {"vectorType":"dense","length":3,"values":[4.21,18,812700]} {"vectorType":"dense","length":3,"values":[812699.9991344779,32.9872628621034,25.697942748362507]} 7.138307692307692
7.82 3.57 18 919800 {"vectorType":"dense","length":3,"values":[3.57,18,919800]} {"vectorType":"dense","length":3,"values":[919799.999082192,36.25675952004636,26.656936598060938]} 7.138307692307692
7.765 7.56 17.8 835800 {"vectorType":"dense","length":3,"values":[7.56,17.8,835800]} {"vectorType":"dense","length":3,"values":[835799.9989959698,37.18609141885786,25.87518521779868]} 7.138307692307692
7.853 3.81 14.7 1018500 {"vectorType":"dense","length":3,"values":[3.81,14.7,1018500]} {"vectorType":"dense","length":3,"values":[1018499.9990279829,40.25963007114179,24.285126110831364]} 7.138307692307692
8.266 4.14 17.4 940800 {"vectorType":"dense","length":3,"values":[4.14,17.4,940800]} {"vectorType":"dense","length":3,"values":[940799.9990507461,37.621770135316275,26.279618209844216]} 7.138307692307692
8.04 3.13 17.4 789600 {"vectorType":"dense","length":3,"values":[3.13,17.4,789600]} {"vectorType":"dense","length":3,"values":[789599.999195178,31.094759131505864,24.832393813608636]} 7.138307692307692
7.686 3.92 17.4 980700 {"vectorType":"dense","length":3,"values":[3.92,17.4,980700]} {"vectorType":"dense","length":3,"values":[980699.9990305867,38.858227336579965,26.637789595102927]} 7.138307692307692
8.337 2.47 17.4 875700 {"vectorType":"dense","length":3,"values":[2.47,17.4,875700]} {"vectorType":"dense","length":3,"values":[875699.9991585133,33.577861049146954,25.59625197564997]} 7.138307692307692
8.247 3.95 17.4 1014300 {"vectorType":"dense","length":3,"values":[3.95,17.4,1014300]} {"vectorType":"dense","length":3,"values":[1014299.9990056665,40.11446130241714,26.949909126197]} 7.138307692307692
8.259 3.54 19.1 898800 {"vectorType":"dense","length":3,"values":[3.54,19.1,898800]} {"vectorType":"dense","length":3,"values":[898799.9990899825,35.406713649671325,27.56000332051734]} 7.138307692307692
8.398 5.91 13 1024800 {"vectorType":"dense","length":3,"values":[5.91,13,1024800]} {"vectorType":"dense","length":3,"values":[1024799.9989586923,42.669988999612016,22.74784587477886]} 7.138307692307692
7.691 6.58 18.6 739200 {"vectorType":"dense","length":3,"values":[6.58,18.6,739200]} {"vectorType":"dense","length":3,"values":[739199.9990946348,32.64270527156902,25.73328780757773]} 7.138307692307692
7.82 3.76 14.9 953400 {"vectorType":"dense","length":3,"values":[3.76,14.9,953400]} {"vectorType":"dense","length":3,"values":[953399.9990744753,37.82403517229104,23.880552758747136]} 7.138307692307692
7.645 3.01 14.9 966000 {"vectorType":"dense","length":3,"values":[3.01,14.9,966000]} {"vectorType":"dense","length":3,"values":[965999.9990932231,37.53477931241747,23.960460322415766]} 7.138307692307692
3.561 7.12 20.2 577500 {"vectorType":"dense","length":3,"values":[7.12,20.2,577500]} {"vectorType":"dense","length":3,"values":[577499.9991773808,27.20258411502299,25.862694427868608]} 6.376732394366198
3.863 13.33 20.2 485100 {"vectorType":"dense","length":3,"values":[13.33,20.2,485100]} {"vectorType":"dense","length":3,"values":[485099.999013695,30.032948373359417,25.311342678468208]} 6.043858108108108
4.138 37.97 20.2 289800 {"vectorType":"dense","length":3,"values":[37.97,20.2,289800]} {"vectorType":"dense","length":3,"values":[289799.99824280146,47.51591753902686,24.707706732637366]} 5.2370714285714275
4.368 30.63 20.2 184800 {"vectorType":"dense","length":3,"values":[30.63,20.2,184800]} {"vectorType":"dense","length":3,"values":[184799.99858809082,36.35256433967503,23.378827944979733]} 5.2370714285714275
4.652 28.28 20.2 220500 {"vectorType":"dense","length":3,"values":[28.28,20.2,220500]} {"vectorType":"dense","length":3,"values":[220499.9986495131,35.3082739723793,23.59425617851294]} 5.2370714285714275
4.138 23.34 20.2 249900 {"vectorType":"dense","length":3,"values":[23.34,20.2,249900]} {"vectorType":"dense","length":3,"values":[249899.99881098093,31.44714189260281,23.625084354536643]} 6.043858108108108
4.628 34.37 20.2 375900 {"vectorType":"dense","length":3,"values":[34.37,20.2,375900]} {"vectorType":"dense","length":3,"values":[375899.9983146336,47.06252004732307,25.328138233469573]} 5.2370714285714275
4.519 36.98 20.2 147000 {"vectorType":"dense","length":3,"values":[36.98,20.2,147000]} {"vectorType":"dense","length":3,"values":[146999.99838054206,41.31545014321207,23.33912202640834]} 5.2370714285714275
If it were a single value I would use lit() to replace it, but when there are multiple values, how do we replace them in the original dataframe?
Assuming that the original dataframe is called df and the machine-learning transformed dataframe is called ml, you can do a join and replace the RM column with the prediction value wherever the row satisfies the outlier condition:
import pyspark.sql.functions as F

df2 = df.join(ml, df.columns, 'left').withColumn(
    'RM',
    F.when(
        (F.col('RM') > upper) | (F.col('RM') < lower),
        F.col('prediction')
    ).otherwise(F.col('RM'))
).select(df.columns)
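For anyone doing the same replacement in plain pandas, the conditional swap can be sketched with numpy.where (made-up values; lower and upper stand in for the IQR bounds computed above):

```python
import numpy as np
import pandas as pd

# Hypothetical data standing in for the Spark dataframes
df = pd.DataFrame({'RM': [6.0, 8.3, 3.5]})
pred = pd.Series([7.1, 7.1, 6.4], name='prediction')
lower, upper = 4.84, 7.62

# Replace RM with the prediction wherever RM falls outside [lower, upper]
df['RM'] = np.where((df['RM'] > upper) | (df['RM'] < lower), pred, df['RM'])
print(df['RM'].tolist())  # [6.0, 7.1, 6.4]
```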

Issue with datetime formatting

I am having an issue with the datetime format of a set of data. The hour of day ranges from 1-24, and rows with the 24th hour carry the wrong day (specifically, the previous day). I have a sample of the data below:
1/1/2019,14:00,0.2,0.1,0.0,0.2,3.0,36.7,3,153
1/1/2019,15:00,0.2,0.6,0.2,0.4,3.9,36.7,1,199
1/1/2019,16:00,1.8,2.4,0.8,1.6,1.1,33.0,0,307
1/1/2019,17:00,3.0,3.2,0.6,2.6,6.0,32.8,1,310
1/1/2019,18:00,1.6,2.2,0.5,1.7,7.9,33.1,4,293
1/1/2019,19:00,1.7,1.1,0.6,0.6,5.9,35.0,5,262
1/1/2019,20:00,1.0,0.5,0.2,0.2,2.9,32.6,5,201
1/1/2019,21:00,0.6,0.3,0.0,0.4,2.1,31.8,6,182
1/1/2019,22:00,0.4,0.3,0.0,0.4,5.1,31.4,6,187
1/1/2019,23:00,0.8,0.6,0.3,0.3,9.9,30.2,5,227
1/1/2019,24:00,1.0,0.7,0.3,0.4,6.9,27.9,4,225 --- Here the date should be 1/2/2019
1/2/2019,01:00,1.3,0.9,0.5,0.4,4.0,26.9,6,236
1/2/2019,02:00,0.4,0.4,0.2,0.2,5.0,27.3,6,168
1/2/2019,03:00,0.7,0.5,0.3,0.3,6.9,30.2,4,219
1/2/2019,04:00,1.3,0.8,0.5,0.3,5.9,32.3,4,242
1/2/2019,05:00,0.7,0.2,0.0,0.2,3.0,33.8,4,177
1/2/2019,06:00,0.5,0.2,0.2,0.1,5.1,36.1,4,195
1/2/2019,07:00,0.6,0.3,0.2,0.2,9.9,38.0,4,200
1/2/2019,08:00,0.5,0.6,0.4,0.3,6.8,38.9,4,179
1/2/2019,09:00,0.5,0.2,0.0,0.2,3.0,39.0,4,193
1/2/2019,10:00,0.3,0.3,0.2,0.1,4.0,38.7,5,198
1/2/2019,11:00,0.3,0.3,0.2,0.0,4.9,38.4,5,170
1/2/2019,12:00,0.6,0.3,0.3,0.0,2.0,38.4,4,172
1/2/2019,13:00,0.2,0.3,0.2,0.0,2.0,38.8,4,154
1/2/2019,14:00,0.3,0.1,0.0,0.2,1.9,39.3,4,145
This is a fairly large set of data from which I need to make a time-series plot, so I need a way to fix this formatting issue. I tried iterating through the rows of a pandas dataframe to fix the problematic rows, but that did not produce any results. Thank you for any help.
You can convert the date column to datetimes with to_datetime, then add the time column converted to timedeltas with to_timedelta:
df['date'] = pd.to_datetime(df['date']) + pd.to_timedelta(df['time'] + ':00')
Or, if you also need to remove the time column:
print (df)
date time a b c d e f g h
0 1/1/2019 14:00 0.2 0.1 0.0 0.2 3.0 36.7 3 153
1 1/1/2019 15:00 0.2 0.6 0.2 0.4 3.9 36.7 1 199
2 1/1/2019 16:00 1.8 2.4 0.8 1.6 1.1 33.0 0 307
3 1/1/2019 17:00 3.0 3.2 0.6 2.6 6.0 32.8 1 310
4 1/1/2019 18:00 1.6 2.2 0.5 1.7 7.9 33.1 4 293
5 1/1/2019 19:00 1.7 1.1 0.6 0.6 5.9 35.0 5 262
6 1/1/2019 20:00 1.0 0.5 0.2 0.2 2.9 32.6 5 201
7 1/1/2019 21:00 0.6 0.3 0.0 0.4 2.1 31.8 6 182
8 1/1/2019 22:00 0.4 0.3 0.0 0.4 5.1 31.4 6 187
9 1/1/2019 23:00 0.8 0.6 0.3 0.3 9.9 30.2 5 227
10 1/1/2019 24:00 1.0 0.7 0.3 0.4 6.9 27.9 4 225
11 1/2/2019 01:00 1.3 0.9 0.5 0.4 4.0 26.9 6 236
12 1/2/2019 02:00 0.4 0.4 0.2 0.2 5.0 27.3 6 168
13 1/2/2019 03:00 0.7 0.5 0.3 0.3 6.9 30.2 4 219
14 1/2/2019 04:00 1.3 0.8 0.5 0.3 5.9 32.3 4 242
15 1/2/2019 05:00 0.7 0.2 0.0 0.2 3.0 33.8 4 177
16 1/2/2019 06:00 0.5 0.2 0.2 0.1 5.1 36.1 4 195
17 1/2/2019 07:00 0.6 0.3 0.2 0.2 9.9 38.0 4 200
18 1/2/2019 08:00 0.5 0.6 0.4 0.3 6.8 38.9 4 179
19 1/2/2019 09:00 0.5 0.2 0.0 0.2 3.0 39.0 4 193
20 1/2/2019 10:00 0.3 0.3 0.2 0.1 4.0 38.7 5 198
21 1/2/2019 11:00 0.3 0.3 0.2 0.0 4.9 38.4 5 170
22 1/2/2019 12:00 0.6 0.3 0.3 0.0 2.0 38.4 4 172
23 1/2/2019 13:00 0.2 0.3 0.2 0.0 2.0 38.8 4 154
24 1/2/2019 14:00 0.3 0.1 0.0 0.2 1.9 39.3 4 145
df['date'] = pd.to_datetime(df['date']) + pd.to_timedelta(df.pop('time') + ':00')
print (df)
date a b c d e f g h
0 2019-01-01 14:00:00 0.2 0.1 0.0 0.2 3.0 36.7 3 153
1 2019-01-01 15:00:00 0.2 0.6 0.2 0.4 3.9 36.7 1 199
2 2019-01-01 16:00:00 1.8 2.4 0.8 1.6 1.1 33.0 0 307
3 2019-01-01 17:00:00 3.0 3.2 0.6 2.6 6.0 32.8 1 310
4 2019-01-01 18:00:00 1.6 2.2 0.5 1.7 7.9 33.1 4 293
5 2019-01-01 19:00:00 1.7 1.1 0.6 0.6 5.9 35.0 5 262
6 2019-01-01 20:00:00 1.0 0.5 0.2 0.2 2.9 32.6 5 201
7 2019-01-01 21:00:00 0.6 0.3 0.0 0.4 2.1 31.8 6 182
8 2019-01-01 22:00:00 0.4 0.3 0.0 0.4 5.1 31.4 6 187
9 2019-01-01 23:00:00 0.8 0.6 0.3 0.3 9.9 30.2 5 227
10 2019-01-02 00:00:00 1.0 0.7 0.3 0.4 6.9 27.9 4 225
11 2019-01-02 01:00:00 1.3 0.9 0.5 0.4 4.0 26.9 6 236
12 2019-01-02 02:00:00 0.4 0.4 0.2 0.2 5.0 27.3 6 168
13 2019-01-02 03:00:00 0.7 0.5 0.3 0.3 6.9 30.2 4 219
14 2019-01-02 04:00:00 1.3 0.8 0.5 0.3 5.9 32.3 4 242
15 2019-01-02 05:00:00 0.7 0.2 0.0 0.2 3.0 33.8 4 177
16 2019-01-02 06:00:00 0.5 0.2 0.2 0.1 5.1 36.1 4 195
17 2019-01-02 07:00:00 0.6 0.3 0.2 0.2 9.9 38.0 4 200
18 2019-01-02 08:00:00 0.5 0.6 0.4 0.3 6.8 38.9 4 179
19 2019-01-02 09:00:00 0.5 0.2 0.0 0.2 3.0 39.0 4 193
20 2019-01-02 10:00:00 0.3 0.3 0.2 0.1 4.0 38.7 5 198
21 2019-01-02 11:00:00 0.3 0.3 0.2 0.0 4.9 38.4 5 170
22 2019-01-02 12:00:00 0.6 0.3 0.3 0.0 2.0 38.4 4 172
23 2019-01-02 13:00:00 0.2 0.3 0.2 0.0 2.0 38.8 4 154
24 2019-01-02 14:00:00 0.3 0.1 0.0 0.2 1.9 39.3 4 145
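The reason the 24:00 row rolls over is that to_timedelta reads '24:00:00' as a full day, which pushes the date forward when added. A minimal sketch with two made-up rows:

```python
import pandas as pd

# Two sample rows mirroring the question's format
df = pd.DataFrame({'date': ['1/1/2019', '1/1/2019'],
                   'time': ['23:00', '24:00']})

# '24:00' + ':00' -> Timedelta('1 days'), so the date advances to 1/2/2019
df['date'] = pd.to_datetime(df['date']) + pd.to_timedelta(df.pop('time') + ':00')
print(df['date'].tolist())
# [Timestamp('2019-01-01 23:00:00'), Timestamp('2019-01-02 00:00:00')]
```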

BeautifulSoup and urlopen aren't fetching the right table

I'm trying to practice BeautifulSoup and urlopen using Basketball-Reference datasets. When I try to get an individual player's stats, everything works fine, but when I use the same code for team stats, urlopen apparently isn't finding the right table.
The following code is to get the "headers" from the page.
def fetch_years():
    # Determine the url
    url = "https://www.basketball-reference.com/leagues/NBA_2000.html?sr&utm_source=direct&utm_medium=Share&utm_campaign=ShareTool#team-stats-per_game::none"
    html = urlopen(url)
    soup = BeautifulSoup(html)
    soup.find_all('tr')
    headers = [th.get_text() for th in soup.find_all('tr')[0].find_all('th')]
    headers = headers[1:]
    print(headers)
I'm trying to get the Team's stats per game data, in a format like:
['Tm', 'G', 'MP', 'FG', ...]
Instead, the header data I'm getting is:
['W', 'L', 'W/L%', ...]
which is the very first table in the 1999-2000 season information about the teams (under the name 'Division Standings').
If you use that same code for a player's data such as this one, you get the result I'm looking for:
Age Tm Lg Pos G GS MP FG ... DRB TRB AST STL BLK TOV PF PTS
0 20 OKC NBA PG 82 65 32.5 5.3 ... 2.7 4.9 5.3 1.3 0.2 3.3 2.3 15.3
1 21 OKC NBA PG 82 82 34.3 5.9 ... 3.1 4.9 8.0 1.3 0.4 3.3 2.5 16.1
2 22 OKC NBA PG 82 82 34.7 7.5 ... 3.1 4.6 8.2 1.9 0.4 3.9 2.5 21.9
3 23 OKC NBA PG 66 66 35.3 8.8 ... 3.1 4.6 5.5 1.7 0.3 3.6 2.2 23.6
4 24 OKC NBA PG 82 82 34.9 8.2 ... 3.9 5.2 7.4 1.8 0.3 3.3 2.3 23.2
The code to webscrape came originally from here.
The sports-reference.com sites are trickier than your standard ones. Most of the tables are rendered after the page loads (with the exception of a few tables on each page), so you'd need Selenium to let the page render first and then pull the HTML source.
However, there is another option: if you look at the HTML source, you'll see those tables sit inside comments. You can use BeautifulSoup to pull out the comment tags, then search through those for the table tags.
This returns a list of dataframes, and the Team Per Game stats are the table at index position 1:
import requests
from bs4 import BeautifulSoup
from bs4 import Comment
import pandas as pd

def fetch_years():
    # Determine the url
    url = "https://www.basketball-reference.com/leagues/NBA_2000.html?sr&utm_source=direct&utm_medium=Share&utm_campaign=ShareTool#team-stats-per_game::none"
    html = requests.get(url)
    soup = BeautifulSoup(html.text, 'html.parser')
    comments = soup.find_all(string=lambda text: isinstance(text, Comment))
    tables = []
    for each in comments:
        if 'table' in each:
            try:
                tables.append(pd.read_html(each)[0])
            except ValueError:
                continue
    return tables
tables = fetch_years()
Output:
print (tables[1].to_string())
Rk Team G MP FG FGA FG% 3P 3PA 3P% 2P 2PA 2P% FT FTA FT% ORB DRB TRB AST STL BLK TOV PF PTS
0 1.0 Sacramento Kings* 82 241.5 40.0 88.9 0.450 6.5 20.2 0.322 33.4 68.7 0.487 18.5 24.6 0.754 12.9 32.1 45.0 23.8 9.6 4.6 16.2 21.1 105.0
1 2.0 Detroit Pistons* 82 241.8 37.1 80.9 0.459 5.4 14.9 0.359 31.8 66.0 0.481 23.9 30.6 0.781 11.2 30.0 41.2 20.8 8.1 3.3 15.7 24.5 103.5
2 3.0 Dallas Mavericks 82 240.6 39.0 85.9 0.453 6.3 16.2 0.391 32.6 69.8 0.468 17.2 21.4 0.804 11.4 29.8 41.2 22.1 7.2 5.1 13.7 21.6 101.4
3 4.0 Indiana Pacers* 82 240.6 37.2 81.0 0.459 7.1 18.1 0.392 30.0 62.8 0.478 19.9 24.5 0.811 10.3 31.9 42.1 22.6 6.8 5.1 14.1 21.8 101.3
4 5.0 Milwaukee Bucks* 82 242.1 38.7 83.3 0.465 4.8 13.0 0.369 33.9 70.2 0.483 19.0 24.2 0.786 12.4 28.9 41.3 22.6 8.2 4.6 15.0 24.6 101.2
5 6.0 Los Angeles Lakers* 82 241.5 38.3 83.4 0.459 4.2 12.8 0.329 34.1 70.6 0.482 20.1 28.9 0.696 13.6 33.4 47.0 23.4 7.5 6.5 13.9 22.5 100.8
6 7.0 Orlando Magic 82 240.9 38.6 85.5 0.452 3.6 10.6 0.338 35.1 74.9 0.468 19.2 26.1 0.735 14.0 31.0 44.9 20.8 9.1 5.7 17.6 24.0 100.1
7 8.0 Houston Rockets 82 241.8 36.6 81.3 0.450 7.1 19.8 0.358 29.5 61.5 0.480 19.2 26.2 0.733 12.3 31.5 43.8 21.6 7.5 5.3 17.4 20.3 99.5
8 9.0 Boston Celtics 82 240.6 37.2 83.9 0.444 5.1 15.4 0.331 32.2 68.5 0.469 19.8 26.5 0.745 13.5 29.5 43.0 21.2 9.7 3.5 15.4 27.1 99.3
9 10.0 Seattle SuperSonics* 82 241.2 37.9 84.7 0.447 6.7 19.6 0.339 31.2 65.1 0.480 16.6 23.9 0.695 12.7 30.3 43.0 22.9 8.0 4.2 14.0 21.7 99.1
10 11.0 Denver Nuggets 82 242.1 37.3 84.3 0.442 5.7 17.0 0.336 31.5 67.2 0.469 18.7 25.8 0.724 13.1 31.6 44.7 23.3 6.8 7.5 15.6 23.9 99.0
11 12.0 Phoenix Suns* 82 241.5 37.7 82.6 0.457 5.6 15.2 0.368 32.1 67.4 0.477 17.9 23.6 0.759 12.5 31.2 43.7 25.6 9.1 5.3 16.7 24.1 98.9
12 13.0 Minnesota Timberwolves* 82 242.7 39.3 84.3 0.467 3.0 8.7 0.346 36.3 75.5 0.481 16.8 21.6 0.780 12.4 30.1 42.5 26.9 7.6 5.4 13.9 23.3 98.5
13 14.0 Charlotte Hornets* 82 241.2 35.8 79.7 0.449 4.1 12.2 0.339 31.7 67.5 0.469 22.7 30.0 0.758 10.8 32.1 42.9 24.7 8.9 5.9 14.7 20.4 98.4
14 15.0 New Jersey Nets 82 241.8 36.3 83.9 0.433 5.8 16.8 0.347 30.5 67.2 0.454 19.5 24.9 0.784 12.7 28.2 40.9 20.6 8.8 4.8 13.6 23.3 98.0
15 16.0 Portland Trail Blazers* 82 241.2 36.8 78.4 0.470 5.0 13.8 0.361 31.9 64.7 0.493 18.8 24.7 0.760 11.8 31.2 43.0 23.5 7.7 4.8 15.2 22.7 97.5
16 17.0 Toronto Raptors* 82 240.9 36.3 83.9 0.433 5.2 14.3 0.363 31.2 69.6 0.447 19.3 25.2 0.765 13.4 29.9 43.3 23.7 8.1 6.6 13.9 24.3 97.2
17 18.0 Cleveland Cavaliers 82 242.1 36.3 82.1 0.442 4.2 11.2 0.373 32.1 70.9 0.453 20.2 26.9 0.750 12.3 30.5 42.8 23.7 8.7 4.4 17.4 27.1 97.0
18 19.0 Washington Wizards 82 241.5 36.7 81.5 0.451 4.1 10.9 0.376 32.6 70.6 0.462 19.1 25.7 0.743 13.0 29.7 42.7 21.6 7.2 4.7 16.1 26.2 96.6
19 20.0 Utah Jazz* 82 240.9 36.1 77.8 0.464 4.0 10.4 0.385 32.1 67.4 0.476 20.3 26.2 0.773 11.4 29.6 41.0 24.9 7.7 5.4 14.9 24.5 96.5
20 21.0 San Antonio Spurs* 82 242.1 36.0 78.0 0.462 4.0 10.8 0.374 32.0 67.2 0.476 20.1 27.0 0.746 11.3 32.5 43.8 22.2 7.5 6.7 15.0 20.9 96.2
21 22.0 Golden State Warriors 82 240.9 36.5 87.1 0.420 4.2 13.0 0.323 32.3 74.0 0.437 18.3 26.2 0.697 15.9 29.7 45.6 22.6 8.9 4.3 15.9 24.9 95.5
22 23.0 Philadelphia 76ers* 82 241.8 36.5 82.6 0.442 2.5 7.8 0.323 34.0 74.8 0.454 19.2 27.1 0.708 14.0 30.1 44.1 22.2 9.6 4.7 15.7 23.6 94.8
23 24.0 Miami Heat* 82 241.8 36.3 78.8 0.460 5.4 14.7 0.371 30.8 64.1 0.481 16.4 22.3 0.736 11.2 31.9 43.2 23.5 7.1 6.4 15.0 23.7 94.4
24 25.0 Atlanta Hawks 82 241.8 36.6 83.0 0.441 3.1 9.9 0.317 33.4 73.1 0.458 18.0 24.2 0.743 14.0 31.3 45.3 18.9 6.1 5.6 15.4 21.0 94.3
25 26.0 Vancouver Grizzlies 82 242.1 35.3 78.5 0.449 4.0 11.0 0.361 31.3 67.6 0.463 19.4 25.1 0.774 12.3 28.3 40.6 20.7 7.4 4.2 16.8 22.9 93.9
26 27.0 New York Knicks* 82 241.8 35.3 77.7 0.455 4.3 11.4 0.375 31.0 66.3 0.468 17.2 22.0 0.781 9.8 30.7 40.5 19.4 6.3 4.3 14.6 24.2 92.1
27 28.0 Los Angeles Clippers 82 240.3 35.1 82.4 0.426 5.2 15.5 0.339 29.9 67.0 0.446 16.6 22.3 0.746 11.6 29.0 40.6 18.0 7.0 6.0 16.2 22.2 92.0
28 29.0 Chicago Bulls 82 241.5 31.3 75.4 0.415 4.1 12.6 0.329 27.1 62.8 0.432 18.1 25.5 0.709 12.6 28.3 40.9 20.1 7.9 4.7 19.0 23.3 84.8
29 NaN League Average 82 241.5 36.8 82.1 0.449 4.8 13.7 0.353 32.0 68.4 0.468 19.0 25.3 0.750 12.4 30.5 42.9 22.3 7.9 5.2 15.5 23.3 97.5
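The comment-parsing trick can be demonstrated offline with a tiny HTML snippet (hypothetical markup mimicking how basketball-reference hides tables inside comments):

```python
import io

import pandas as pd
from bs4 import BeautifulSoup, Comment

# A table hidden inside an HTML comment, as on the real pages
html = """<div><!-- <table><tr><th>Tm</th><th>G</th></tr>
<tr><td>SAC</td><td>82</td></tr></table> --></div>"""

soup = BeautifulSoup(html, 'html.parser')
comments = soup.find_all(string=lambda text: isinstance(text, Comment))
# Parse each comment that contains a table into a dataframe
tables = [pd.read_html(io.StringIO(c))[0] for c in comments if 'table' in c]
print(tables[0].columns.tolist())  # ['Tm', 'G']
print(tables[0].iloc[0].tolist())  # ['SAC', 82]
```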

Do files have to be csv to differentiate between columns to plot a graph with gnuplot

I am trying to plot a line graph using data from a file that has several columns in it (16, in fact). I have been trying to use the command
plot 'snr.dat' using 2:16 with lines
but I do not seem to be getting the result I would like.
I have attached an extract from the file I am using.
2014/10/30 0:00:28.847 00000 159.9 71.6 -12.51 .40 64.1 217.1 3 23.1 15 1 3511. .055 -9.99 11.4
2014/10/30 0:00:28.847 00000 229.9 103.9 -12.51 .40 64.1 217.1 3 23.1 15 1 3511. .055 -9.99 11.4
2014/10/30 0:00:28.847 00000 159.9 81.7 -12.51 .40 59.9 92.6 3 29.4 23 1 3511. .055 -9.99 11.4
2014/10/30 0:00:28.847 00001 159.9 71.6 -12.51 .40 64.0 217.1 3 23.4 25 1 3508. .055 -9.99 11.3
2014/10/30 0:00:28.847 00001 229.9 103.9 -12.51 .40 64.0 217.1 3 23.4 25 1 3508. .055 -9.99 11.3
2014/10/30 0:00:28.847 00001 159.9 81.7 -12.51 .40 59.9 92.6 3 29.6 14 1 3508. .055 -9.99 11.3
2014/10/30 0:01:30.114 00002 229.9 92.3 1.02 1.62 67.3 138.7 2 27.2 25 1 1746. .138 -9.99 5.7
2014/10/30 0:01:30.114 00002 159.9 89.9 1.02 1.62 56.4 97.4 2 26.5 35 1 1746. .138 -9.99 5.7
2014/10/30 0:02:30.504 00005 96.0 90.1 -25.64 1.18 20.3 120.5 1 17.2 45 1 2553. .165 -9.99 8.7
2014/10/30 0:02:52.896 00007 102.0 91.5 2.23 .03 26.4 140.8 1 11.8 35 1 19393. .098 -9.99 23.6
2014/10/30 0:02:52.890 00008 100.0 89.6 3.52 .57 26.5 139.9 1 10.9 35 1 4394. .214 -9.99 13.0
2014/10/30 0:02:52.894 00009 104.0 93.3 2.39 .52 26.4 141.0 1 10.1 13 1 4376. .110 -9.99 12.5
2014/10/30 0:03:20.093 0000B 106.0 84.5 5.30 2.01 37.4 202.2 1 25.8 45 1 2306. .095 -9.99 7.8
2014/10/30 0:04:08.515 0000D 102.0 88.1 13.20 1.92 30.5 180.6 3 28.4 15 1 3200. .061 -9.99 9.9
2014/10/30 0:04:08.515 0000D 102.0 99.4 13.20 1.92 12.9 68.6 3 26.1 45 1 3200. .061 -9.99 9.9
2014/10/30 0:04:08.515 0000D 102.0 88.2 13.20 1.92 30.3 128.4 3 38.2 13 1 3200. .061 -9.99 9.9
2014/10/30 0:04:12.642 0000E 108.0 91.9 -38.85 .20 31.9 222.0 1 23.8 15 1 9636. .084 -9.99 20.2
2014/10/30 0:04:12.640 0000F 110.0 93.6 -38.17 .51 31.9 221.9 1 23.6 25 1 4974. .086 -9.99 14.7
2014/10/30 0:04:40.580 0000G 201.9 93.0 -20.01 .41 63.4 38.1 1 24.7 15 1 2716. .244 -9.99 9.3
I would like to have the time (in the second column) on the x axis and the SNR values (in the 16th column) on the y axis, with a line joining them.
Thanks for any help, and if you need any more info just ask please.
gnuplot treats any whitespace-separated file as columnar data, so the file does not need to be CSV. Note that the date is column 1 and the time is column 2, so with 17 fields per line the SNR value is actually in column 17, not 16. You must then tell gnuplot that you want to plot time data on the x-axis with
set xdata time
and in which format the time should be parsed:
set timefmt '%H:%M:%S'
So, a complete minimal script could be
set timefmt '%H:%M:%S'
set xdata time
plot 'snr.dat' using 2:17 with lines title 'SNR'
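If you'd rather verify the column layout programmatically, pandas reads the same whitespace-separated file directly, no CSV conversion needed (a sketch using two sample lines in place of the real snr.dat):

```python
import io

import pandas as pd

# Two sample lines from the question; in practice pass 'snr.dat' instead
data = io.StringIO(
    "2014/10/30 0:00:28.847 00000 159.9 71.6 -12.51 .40 64.1 217.1 3 23.1 15 1 3511. .055 -9.99 11.4\n"
    "2014/10/30 0:01:30.114 00002 229.9 92.3 1.02 1.62 67.3 138.7 2 27.2 25 1 1746. .138 -9.99 5.7\n"
)
df = pd.read_csv(data, sep=r'\s+', header=None)
print(df.shape)         # (2, 17) -- 17 whitespace-separated columns
print(df[16].tolist())  # [11.4, 5.7] -- the SNR column is the 17th (index 16)
```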

Extending macro from 1 row to 56 rows. Application defined error

I have never done Excel VBA macros.
The data I'm trying to organize into a single column is in Excel rows 22-78.
0 0.04 0.08 0.12 0.16 0.2 0.24 0.28 0.32 0.36 0.4 0.44 0.48 0.52 0.56 0.6 0.64 0.68 0.72 0.76 0.8 0.84 0.88 0.92 0.96 1 1.04 1.08 1.12 1.16 1.2 1.24 1.28 1.32 1.36 1.4 1.44 1.48 1.52 1.56 1.6 1.64 1.68 1.72 1.76 1.8 1.84 1.88 1.92 1.96 2 2.04 2.08 2.12 2.16 2.2 2.24 2.28 2.32 2.36 2.4 2.44 2.48 2.52 2.56 2.6 2.64 2.68 2.72 2.76 2.8 2.84 2.88 2.92 2.96 3 3.04 3.08 3.12 3.16 3.2 3.24 3.28 3.32 3.36 3.4 3.44 3.48 3.52 3.56 3.6 3.64 3.68 3.72 3.76 3.8 3.84 3.88 3.92 3.96 4 4.04 4.08 4.12 4.16 4.2 4.24 4.28 4.32 4.36 4.4 4.44 4.48 4.52 4.56 4.6 4.64 4.68 4.72 4.76 4.8 4.84 4.88 4.92 4.96 5 5.04 5.08 5.12 5.16 5.2 5.24 5.28 5.32 5.36 5.4 5.44 5.48 5.52 5.56 5.6 5.64 5.68 5.72 5.76 5.8 5.84 5.88 5.92 5.96 6 6.04 6.08 6.12 6.16 6.2 6.24 6.28 6.32 6.36 6.4 6.44 6.48 6.52 6.56 6.6 6.64 6.68 6.72 6.76 6.8 6.84 6.88 6.92 6.96 7 7.04 7.08 7.12 7.16 7.2 7.24 7.28 7.32 7.36 7.4 7.44 7.48 7.52 7.56 7.6 7.64 7.68 7.72 7.76 7.8 7.84 7.88 7.92 7.96 8 8.04 8.08 8.12 8.16 8.2 8.24 8.28 8.32 8.36 8.4 8.44 8.48 8.52 8.56 8.6 8.64 8.68 8.72 8.76 8.8 8.84 8.88 8.92 8.96 9 9.04 9.08 9.12 9.16 9.2 9.24 9.28 9.32 9.36 9.4 9.44 9.48 9.52 9.56 9.6 9.64 9.68 9.72 9.76 9.8 9.84 9.88 9.92 9.96 10 10.04 10.08 10.12 10.16 10.2 10.24 10.28 10.32 10.36 10.4 10.44 10.48 10.52 10.56 10.6 10.64 10.68 10.72 10.76 10.8 10.84 10.88 10.92 10.96 11 11.04 11.08 11.12 11.16 11.2 11.24 11.28 11.32 11.36 11.4 11.44 11.48 11.52 11.56 11.6 11.64 11.68 11.72 11.76 11.8 11.84 11.88 11.92 11.96 12 12.04 12.08 12.12 12.16 12.2 12.24 12.28 12.32 12.36 12.4 12.44 12.48 12.52 12.56 12.6 12.64 12.68 12.72 12.76 12.8 12.84 12.88 12.92 12.96 13 13.04 13.08 13.12 13.16 13.2 13.24 13.28 13.32 13.36 13.4 13.44 13.48 13.52 13.56 13.6 13.64 13.68 13.72 13.76 13.8 13.84 13.88 13.92 13.96 14 14.04 14.08 14.12 14.16 14.2 14.24 14.28 14.32 14.36 14.4 14.44 14.48 14.52 14.56 14.6 14.64 14.68 14.72 14.76 14.8 14.84 14.88 14.92 14.96 15 15.04 15.08 15.12 15.16 15.2 15.24 15.28 15.32
This is the data in one row, and I have such rows from 22 to 78. The final files have a similar number of columns but many more rows.
I am not sure what would be a good way to organize this into a single column in Excel.
I got this working for one row; here's the code:
Sub RowsToColumn()
    Dim RN As Range
    Dim RI As Range
    Dim r As Long
    Dim LR As Long
    Application.ScreenUpdating = False
    Columns(1).Insert
    r = 0
    LR = Range("A" & Rows.Count).End(xlUp).Row
    For Each RN In Range("A1:A" & LR)
        r = r + 1
        For Each RI In Range(RN, Range("XFD" & RN.Row).End(xlToLeft))
            r = r + 1
            Cells(r, 1) = RI
            RI.Clear
        Next RI
    Next RN
    Columns("A:A").SpecialCells(xlCellTypeBlanks).Delete Shift:=xlUp
End Sub
But when I extend this to rows 22-78:
Sub RowsToColumn_Second()
    Dim RN As Range
    Dim RI As Range
    Dim r As Long
    Dim LR As Long
    Dim row As Range
    Dim rng As Range
    Dim cell As Range
    Application.ScreenUpdating = False
    Set rng = Range("A22:A78")
    For Each row In rng.Rows
        Columns(1, rng).Insert
        r = 0
        LR = Range("A" & Rows.Count).End(xlUp).Row
        For Each RN In Range("A1:A" & LR)
            r = r + 1
            For Each RI In Range(RN, Range("XFD" & RN.Row).End(xlToLeft))
                r = r + 1
                Cells(r, 1) = RI
                RI.Clear
            Next RI
        Next RN
    Next row
    Columns("A:A").SpecialCells(xlCellTypeBlanks).Delete Shift:=xlUp
End Sub
This is where it raises "Application-defined error 1004". It doesn't like Columns(1, rng).Insert.
Columns expects a single column argument, so Columns(1, rng) is invalid, which is what raises error 1004. But you may not need a macro at all: copy the data
and use Paste Special -> Transpose; this will change rows to columns, or vice versa.
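If the data can also be read outside Excel, the same row-by-row flattening is short in pandas (a sketch with a tiny stand-in frame; in practice you would load the sheet with read_excel and slice rows 22-78):

```python
import pandas as pd

# Stand-in for rows 22-78 of the sheet: each row holds many values
df = pd.DataFrame([[0.0, 0.04, 0.08],
                   [0.12, 0.16, 0.2]])

# Flatten row by row into a single column, dropping any empty cells
column = pd.Series(df.to_numpy().ravel()).dropna()
print(column.tolist())  # [0.0, 0.04, 0.08, 0.12, 0.16, 0.2]
```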