Get second column of a data frame using pandas - python-3.x

I am new to pandas in Python, and I am having difficulty returning the second column of a dataframe that has no column names, only integer indexes.
import pandas as pd
import os
directory = 'A://'
sample = 'test.txt'
# Test with Air Sample
fileAir = os.path.join(directory,sample)
dataAir = pd.read_csv(fileAir,skiprows=3)
print(dataAir.iloc[:,1])
The data I am working with would be similar to:
data = [[1,2,3],[1,2,3],[1,2,3]]
Then, using pandas, I want to get only
[[2,2,2]].

You can use
dataframe_name[column_index].values
like
df[1].values
or
dataframe_name['column_name'].values
like
df['col1'].values
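Both selections can be checked on a small headerless frame like the one in the question; a minimal sketch:

```python
import pandas as pd

# A headerless frame like the one read with read_csv: columns are labeled 0, 1, 2.
df = pd.DataFrame([[1, 2, 3], [1, 2, 3], [1, 2, 3]])

# By position (works regardless of what the columns are labeled):
second = df.iloc[:, 1]
# By integer label (works here because the columns are literally named 0, 1, 2):
same = df[1]

print(second.tolist())  # [2, 2, 2]
```

Note that `df[1]` selects by column *label*, so it only coincides with `df.iloc[:, 1]` when the columns happen to be labeled 0, 1, 2, ...; `iloc` is the safer choice for positional access.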


Formatting xlwings view output

I am using xlwings to write a dataframe to an excel sheet. Nothing special, and all works perfectly.
xw.view(
dataframe,
abook.sheets.add(after = abook.sheets[-1]),
table=True
)
My issue is that the output excel sheet has filters in the top two rows, which I have to manually disable (by selecting the rows and clearing contents).
Thanks to https://github.com/xlwings/xlwings/issues/679#issuecomment-369138719
I changed my code to the following:
abook = xw.books.active
xw.view(
dataframe,
abook.sheets.add(after = abook.sheets[-1]),
table=True
)
sheetname = abook.sheets.active.name
if abook.sheets[sheetname].api.AutoFilterMode == True:
    abook.sheets[sheetname].api.AutoFilterMode = False
which looked promising, but it didn't resolve my issue.
I would appreciate any pointers on how I can have the filters turned off by default. I am using the latest xlwings on Windows 10/11.
Thanks
The solution was to add the
table=False
parameter to the xw.view(df) method. According to the docs:
table (bool, default True) – If your object is a pandas DataFrame, by default it is formatted as an Excel Table
Now to write a dataframe df, I call:
import xlwings as xw
import pandas as pd
df = pd.DataFrame(...)
xw.view(df, table=False)
Updated on 14 January 2023:
Just for completeness, using the argument table=True in view adds a table with a filter. If you would like to keep the table, but remove the filter, you can remove the filter with ws.tables[0].show_autofilter = False:
import xlwings as xw
import pandas as pd
df = pd._testing.makeDataFrame()
xw.view(df, table=True)
ws = xw.sheets.active
ws.tables[0].show_autofilter = False
Or with api.AutoFilter(Field=[...], VisibleDropDown=False), where Field is a list of integers giving the column numbers concerned:
import xlwings as xw
import pandas as pd
df = pd._testing.makeDataFrame()
xw.view(df, table=True)
ws = xw.sheets.active
ws.used_range.api.AutoFilter(Field=list(range(1, ws.used_range[-1].column + 1)), VisibleDropDown=False)

I want to copy one excel column data to another excel row data using python

import numpy as np
import pandas as pd

dfs = pd.read_excel('input.xlsx', sheet_name=None, header=None)
tester = dfs['Sheet1'].values.tolist()

# Unique keys from column 0, preserving first-seen order
keys = list(zip(*tester))[0]
seen = set()
seen_add = seen.add
keysu = [x for x in keys if not (x in seen or seen_add(x))]

# Values from column 1, reshaped into one row per group of keys
values = list(zip(*tester))[1]
a = np.array(values).reshape(int(len(values) / len(keysu)), len(keysu))

list1 = [keysu]
for i in a:
    list1.append(list(i))
df = pd.DataFrame(list1)
df.to_excel('output.xlsx', index=False, header=False)
How can I copy the Excel column data into rows like this so that the script runs end to end?
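The zip/reshape logic above can be exercised without Excel files; a self-contained sketch with made-up data (the two-column key/value layout is an assumption about the input sheet):

```python
import numpy as np
import pandas as pd

# Two-column input: repeated keys in column 0, their values in column 1.
tester = [["x", 1], ["y", 2], ["z", 3], ["x", 4], ["y", 5], ["z", 6]]
keys, values = zip(*tester)

# dict.fromkeys preserves first-seen order, replacing the seen-set idiom.
keysu = list(dict.fromkeys(keys))

# One output row per group of len(keysu) values, unique keys as the header.
out = pd.DataFrame(np.array(values).reshape(-1, len(keysu)), columns=keysu)
print(out)
```

Writing `out` with `out.to_excel('output.xlsx', index=False)` then produces the transposed layout directly, with the keys as the header row.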

Use Pandas to extract the values from a column based on some condition

I'm trying to pick a particular column from a csv file using Python's Pandas module, where I would like to fetch the Hostname if the column Group is SJ or DC.
Below is what I'm trying but it's not printing anything:
import csv
import pandas as pd
pd.set_option('display.height', 500)
pd.set_option('display.max_rows', 5000)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 500)
low_memory=False
data = pd.read_csv('splnk.csv', usecols=['Hostname', 'Group'])
for line in data:
if 'DC' and 'SJ' in line:
print(line)
The data variable contains the values for Hostname & Group columns as follows:
11960 NaN DB-Server
11961 DC Sap-Server
11962 SJ comput-server
Note: while printing the data it stripped the data and does not print complete data.
PS: I have used the pandas.set_option to get the complete data on the terminal!
for line in data: doesn't iterate over row contents; it iterates over the column names. Pandas has several good ways to filter rows by their contents.
For example, you can use Series.isin() to select rows matching one of several values:
print(data[data['Group'].isin(['DC', 'SJ'])]['Hostname'])
If it's important that you iterate over rows, you can use df.iterrows():
for index, row in data.iterrows():
    if row['Group'] == 'DC' or row['Group'] == 'SJ':
        print(row['Hostname'])
If you're just getting started with Pandas, I'd recommend trying a tutorial to get familiar with the basic structure.
Try this:
import pandas as pd

# low_memory is a read_csv keyword argument, not a standalone statement
data = pd.read_csv('splnk.csv', usecols=['Hostname', 'Group'], low_memory=False)
hostnames = data[(data['Group']=='DC') | (data['Group']=='SJ')]['Hostname'] # corrected the `hostname` to `Hostname`
print(hostnames)
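Both answers boil down to the same boolean-masking idea; a runnable sketch with toy data standing in for splnk.csv:

```python
import pandas as pd

data = pd.DataFrame({
    "Hostname": ["DB-Server", "Sap-Server", "comput-server"],
    "Group": [None, "DC", "SJ"],
})

# isin builds a boolean mask; indexing with it keeps only the matching rows.
hostnames = data[data["Group"].isin(["DC", "SJ"])]["Hostname"]
print(hostnames.tolist())  # ['Sap-Server', 'comput-server']
```

The `(data['Group']=='DC') | (data['Group']=='SJ')` form from the second answer builds the same mask; `isin` just scales better as the list of accepted values grows.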

Python Pandas Merge two CSV based on Time Stamp

Could someone give me a tip on how to merge two CSV files based on a timestamp? Concat works, but I also need to organize the data on one shared column, DateTime. In the shell output snip below, both DateTime columns are visible. Thank you
import pandas as pd
import numpy as np
import datetime
WUdata = pd.read_csv('C:\\Users\\bbartling\\Documents\\Python\\test_SP data\\3rd GoRound k Nearest\\WB data\\WU\\WUdata.csv')
print(WUdata.describe())
print(WUdata.shape)
print(WUdata.columns)
print(WUdata.info())
kWdata = pd.read_csv('C:\\Users\\bbartling\\Documents\\Python\\test_SP data\\3rd GoRound k Nearest\\WB data\\WU\\kWdata.csv')
print(kWdata.describe())
print(kWdata.shape)
print(kWdata.columns)
print(kWdata.info())
merged = pd.concat([WUdata, kWdata], axis=1)
print(merged)
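concat with axis=1 only pastes the two frames side by side by row position, which is why both DateTime columns survive. To align rows on the shared timestamp, pd.merge on the DateTime column is the usual tool; a sketch with made-up frames (the non-DateTime column names are assumptions):

```python
import pandas as pd

wu = pd.DataFrame({
    "DateTime": pd.to_datetime(["2018-01-01 00:00", "2018-01-01 01:00"]),
    "temp": [20.1, 19.8],
})
kw = pd.DataFrame({
    "DateTime": pd.to_datetime(["2018-01-01 00:00", "2018-01-01 01:00"]),
    "kW": [5.2, 4.9],
})

# Inner join on the shared timestamp; the result has a single DateTime column.
merged = pd.merge(wu, kw, on="DateTime", how="inner")
print(merged.columns.tolist())  # ['DateTime', 'temp', 'kW']
```

If the two loggers sample at slightly different times, pd.merge_asof (both frames sorted on DateTime) matches each row to the nearest earlier timestamp instead of requiring exact equality.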

Pandas: Generating a data frame from each spreadsheet in a large excel file

I have a large excel file which I have imported into pandas, made up of 92 sheets.
I want to use a loop or some tool to generate dataframes from the data in each spreadsheet (one dataframe from each spreadsheet), which also automatically names each dataframe.
I have only just started using pandas and jupyter so I am not very experienced at all.
This is the code I have so far:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime
%matplotlib inline
concdata = pd.ExcelFile('Documents/Research Project/Data-Ana/11July-27Dec.xlsx')
I also have a list of all the spreadsheet names:
concdata.sheet_names
Thanks!
Instead of making each DataFrame its own variable you can assign each sheet a name in a Python dictionary like so:
dfs = {}
for sheet in concdata.sheet_names:
    dfs[sheet] = concdata.parse(sheet)
And then access each DataFrame with the sheet name:
dfs['sheet_name_here']
Doing it this way allows you to have amortised O(1) lookup of sheets.
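The same dict-of-DataFrames is also built in: pd.read_excel with sheet_name=None reads every sheet at once and returns a {sheet_name: DataFrame} mapping. A minimal sketch using an in-memory two-sheet workbook (the openpyxl engine and the sheet names here are illustrative):

```python
import io
import pandas as pd

# Build a small two-sheet workbook in memory (requires the openpyxl engine).
buf = io.BytesIO()
with pd.ExcelWriter(buf, engine="openpyxl") as writer:
    pd.DataFrame({"a": [1, 2]}).to_excel(writer, sheet_name="first", index=False)
    pd.DataFrame({"b": [3, 4]}).to_excel(writer, sheet_name="second", index=False)
buf.seek(0)

# sheet_name=None returns every sheet as a dict keyed by sheet name.
dfs = pd.read_excel(buf, sheet_name=None)
print(sorted(dfs))  # ['first', 'second']
```

For the question's file this collapses the ExcelFile/parse loop into a single call: `dfs = pd.read_excel('Documents/Research Project/Data-Ana/11July-27Dec.xlsx', sheet_name=None)`.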
