I'm trying to make a program that prints out a two-column table (Term and Definition), something like this (table width should be 80 characters):
+--------------------------------------+-----------------------------------+
| Term                                 | Definition                        |
+--------------------------------------+-----------------------------------+
| this is the first term.              | This is the definition for the    |
|                                      | first term that wraps around      |
|                                      | because the definition is longer  |
|                                      | than the width of the column.     |
+--------------------------------------+-----------------------------------+
| The term may also be longer than the | This is the definition for the    |
| width of the column and should wrap  | second term.                      |
| around as well.                      |                                   |
+--------------------------------------+-----------------------------------+
I have existing code for this, but it prints out "this is the first term" on every line because I have used a nested for loop (I also tried other variations with the textwrap module). Here is the code that I have:
import csv
import textwrap

# read file
with open(setsList[selectedSet - 1], "r", newline="") as setFile:
    cardList = list(csv.reader(setFile))

for i in range(len(cardList)):
    wrapped_term = textwrap.wrap(cardList[i][0], 30)
    wrapped_definition = textwrap.wrap(cardList[i][1], 30)
    for line in wrapped_term:
        for line2 in wrapped_definition:
            print(line, " ", line2)
    print("- - - - - - - - - - - - - - - - - - - - - - - - - - -")
Can anyone suggest a solution? Thank you.
After a lot of trial & error and some random YouTube videos, here is the solution (in case anyone has a similar problem):
with open("table.csv", "r", newline="") as setFile:
cardList = list(csv.reader(setFile))
setFile.close()
print("+------------------------------------------------------------------------------+")
print("| Term | Definition |")
print("+------------------------------------------------------------------------------+")
print()
for x in range(len(cardList)):
wrapped_term = textwrap.wrap(cardList[x][0], 30)
wrapped_definition = textwrap.wrap(cardList[x][1], 30)
wrapped_list = []
for i in range(len(wrapped_term)):
try:
wrapped_list.append([wrapped_term[i], wrapped_definition[i]])
except IndexError:
if len(wrapped_term) > len(wrapped_definition):
wrapped_list.append([wrapped_term[i], ""])
elif len(wrapped_term) < len(wrapped_definition):
wrapped_list.append(["", wrapped_definition[i]])
column1 = len(" Term ")
column2 = len(" Definition ")
print("+--------------------------------------+---------------------------------------+")
for item in wrapped_list:
print("|", item[0], " "*(column1 - len(item[0])),"|", item[1], " "*(column2-len(item[1])), "|")
print("+--------------------------------------+---------------------------------------+")
print("* *")
Basically, I created a wrapped version of each of my terms and definitions.
Then the try/except block checks whether the term is longer than the definition (in terms of lines) and, if so, adds blank lines for the definition, and vice versa.
The paired lines go into wrapped_list (the combined terms and definitions).
With help from this video: (https://www.youtube.com/watch?v=B9BRuhqEb2Q), I formatted the table.
Hope this helps anyone struggling with a similar problem. This approach can be applied to any number of columns in a table, and any length of CSV file.
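Incidentally, the padding that the try/except block does by hand is also available in the standard library: itertools.zip_longest pairs two lists and fills the shorter one with a given value. A minimal sketch of the same row-building step, assuming wrapped_term and wrapped_definition as above:

from itertools import zip_longest

# Pair each wrapped term line with the matching definition line,
# padding whichever column is shorter with empty strings.
wrapped_list = [
    [term, definition]
    for term, definition in zip_longest(wrapped_term, wrapped_definition, fillvalue="")
]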
I was working on a project which needed me to extract addresses from a sentence.
For example, input sentence: Hi, Mr. Sam D. Richards lives here Shop No / 123, 3rd Floor, ABC Building, Behind CDE Mart, Aloha Road, 12345. If you need any help, call me on 12345678
I am trying to extract just the address i.e. Shop No / 123, 3rd Floor, ABC Building, Behind CDE Mart, Aloha Road, 12345
What I have tried so far:
I tried Pyap, which also works on regex, so it is not able to generalize well to addresses from countries other than the US/Canada/UK. I realized that we cannot use regex, as there is no pattern to the addresses or the sentences whatsoever. I also tried locationtagger, which only manages to return the country or the city.
Is there any better way of doing it?
If there is no obvious pattern for regex, you can try an ML-based approach. There is a well-known problem called named entity recognition (NER), and it is typically solved as a sequence-tagging problem: a model is trained to predict, for each token (e.g. a word or a subword), whether it is part of an address or not.
You can look for a model that is already trained to extract addresses (e.g. here https://huggingface.co/models?search=address), or fine-tune a BERT-based model on your own dataset (here is a recipe).
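As a rough illustration of how such a model is called once you have picked one, here is a sketch using the transformers pipeline API. The model id below is a placeholder, not a verified checkpoint; substitute one found via the search link above:

from transformers import pipeline

# "some-org/address-ner" is a hypothetical model id -- replace it with a real
# address-extraction checkpoint from the Hugging Face Hub.
ner = pipeline("token-classification",
               model="some-org/address-ner",
               aggregation_strategy="simple")

text = ("Hi, Mr. Sam D. Richards lives here Shop No / 123, 3rd Floor, "
        "ABC Building, Behind CDE Mart, Aloha Road, 12345.")

for entity in ner(text):
    # each prediction carries a label, the matched text span and a confidence score
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))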
Addresses have a well-known structure. With a grammar parser it should be possible to parse them.
PyParsing has a scanning feature that searches for a pattern without parsing all the rest of the file. You can try this feature. Here is an example for you that detects the three addresses in the example string.
#!/bin/python3
from pyparsing import *

GermanWord = Word("ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÜ", alphas + "ß")
GermanWordComposition = GermanWord + ZeroOrMore(Optional(Literal("-")) + GermanWord)
GermanName = GermanWordComposition
GermanStreet = GermanWordComposition
GermanHouseNumber = Word(nums) + Optional(Word(alphas, exact=1) + FollowedBy(White()))
GermanAddressSeparator = Literal(",") | Literal("in")
GermanPostCode = Word(nums, exact=5)
GermanTown = GermanWordComposition

German_Address = GermanName + GermanAddressSeparator + GermanStreet + GermanHouseNumber \
    + GermanAddressSeparator + GermanPostCode + GermanTown

EnglishWord = Word("ABCDEFGHIJKLMNOPQRSTUVWXYZ", alphanums)
EnglishNumber = Word(nums)
EnglishComposition = OneOrMore(EnglishWord)
EnglishExtension = Word("-/", exact=1) + (EnglishComposition | EnglishNumber)
EnglishAddressSeparator = Literal(",")
EnglishFloor = (Literal("1st") | Literal("2nd") | Literal("3rd")
                | Combine(EnglishNumber + Literal("th"))) + Literal("Floor")
EnglishWhere = EnglishComposition
EnglishStreet = EnglishComposition

EnglishAddress = EnglishComposition + Optional(EnglishExtension) \
    + EnglishAddressSeparator + Optional(EnglishFloor) \
    + Optional(EnglishAddressSeparator + EnglishWhere) \
    + Optional(EnglishAddressSeparator + EnglishWhere) \
    + EnglishAddressSeparator + EnglishStreet + EnglishAddressSeparator + EnglishNumber

Address = EnglishAddress | German_Address

test_1 = "I am writing to Peter Meyer, Moritzstraße 22, 54543 Musterdorf a letter. But the letter arrived at \
Hubert Figge, Große Straße 14 in 45434 Berlin. In the letter was written: Hi, Mr. Sam D. Richards lives here \
Shop No / 123, 3rd Floor, ABC Building, Behind CDE Mart, Aloha Road, 12345. If you need any help, call \
me on 12345678."

for i in Address.scanString(test_1):
    print(i)
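Note that scanString yields (tokens, start, end) tuples, so each printed result also tells you where in the test string the address was found.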
I need to split a column value on '|' and take all items except the first for a new column 'address'.
What makes it more complicated is that the number of items is not always the same!
df1 = spark.createDataFrame([
    ["Luc Krier|2363 Ryan Road"],
    ["Jeanny Thorn|2263 Patton Lane|Raleigh North Carolina"],
    ["Teddy E Beecher|2839 Hartland Avenue|Fond Du Lac Wisconsin|US"],
    ["Philippe Schauss|1 Im Oberdor|Allemagne"],
    ["Meindert I Tholen|Hagedoornweg 138|Amsterdam|NL"]
]).toDF("s")
I've already tried split, size, and substring, but I can't get it done. Any help much appreciated!
Expected output:
address
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
2363 Ryan Road
2263 Patton Lane|Raleigh North Carolina
2839 Hartland Avenue|Fond Du Lac Wisconsin|US
1 Im Oberdor|Allemagne
Hagedoornweg 138|Amsterdam|NL
Try this:

from pyspark.sql.functions import concat_ws, slice, split

df1.select(concat_ws('|', slice(split('s', r'\|'), 2, 1000))).show(truncate=False)
+---------------------------------------------+
|concat_ws(|, slice(split(s, \|), 2, 1000))   |
+---------------------------------------------+
|2363 Ryan Road                               |
|2263 Patton Lane|Raleigh North Carolina      |
|2839 Hartland Avenue|Fond Du Lac Wisconsin|US|
|1 Im Oberdor|Allemagne                       |
|Hagedoornweg 138|Amsterdam|NL                |
+---------------------------------------------+
where 1000 is the maximum length of your array; an arbitrarily large int for now.
The instr function can be used to find the first '|', and substring to get the result:

df1.selectExpr(
    "substring(s, instr(s, '|') + 1, length(s))"
)

Or a regexp_replace that removes everything from the start of the string up to and including the first '|':

from pyspark.sql.functions import regexp_replace

df1.select(
    regexp_replace('s', r'^[^\|]+\|', '')
)
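If you are on Spark 3.0+, another option is to pass a limit to split, so that everything after the first '|' stays in one piece. A sketch:

from pyspark.sql.functions import split

# limit=2 splits only at the first '|'; the remainder of the string stays intact
df1.select(split('s', r'\|', 2).getItem(1).alias('address')).show(truncate=False)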
I have an XML file with a structure like this one:
<cat>
  <foo>
    <fooID>1</fooID>
    <fooName>One</fooName>
    <bar>
      <barID>a</barID>
      <barName>small_a</barName>
      <barClass>
        <baz>
          <qux>
            <corge>
              <corgeName>...</corgeName>
              <corgeType>
                <corgeReport>
                  <corgeReportRes Reference="x" Channel="High">
                    <Pos>1</Pos>
                  </corgeReportRes>
                </corgeReport>
              </corgeType>
            </corge>
          </qux>
        </baz>
      </barClass>
    </bar>
    <bar>
      <barID>b</barID>
      <barName>small_b</barName>
      <barClass>
        <baz>
          <qux>
            <corge>
              <corgeName>...</corgeName>
              <corgeType>
                <corgeReport>
                  <corgeReportRes Reference="y" Channel="High">
                    <Pos>1</Pos>
                  </corgeReportRes>
                </corgeReport>
              </corgeType>
            </corge>
          </qux>
        </baz>
      </barClass>
    </bar>
  </foo>
  <foo>
    <fooID>2</fooID>
    <fooName>Two</fooName>
    <bar>
      <barID>c</barID>
      <barName>small_c</barName>
      <barClass>
        <baz>
          <qux>
            <corge>
              <corgeName>...</corgeName>
              <corgeType>
                <corgeReport>
                  <corgeReportRes Reference="z" Channel="High">
                    <Pos>1</Pos>
                  </corgeReportRes>
                </corgeReport>
              </corgeType>
            </corge>
          </qux>
        </baz>
      </barClass>
    </bar>
  </foo>
</cat>
And I would like to obtain the values of specific parent/grandparent/great-grandparent tags of the nodes that have the attribute Channel="High": only the fooID, fooName, barID, and barName values.
I have the following code in Python 3:
import xml.etree.ElementTree as xmlET
root = xmlET.parse('file.xml').getroot()
test = root.findall(".//*[@Channel='High']")
This actually gives me a list of the matching elements; however, I still need the information from their specific parents/grandparents/great-grandparents.
How could I do that? This is what I would like to obtain:
fooID | fooName | barID | barName
- - - - - - - - - - - - - - - - -
1     | One     | a     | small_a <-- This is the information I'm interested in
1 | One | b | small_b <-- Also this
2 | Two | c | small_c <-- And this
Edit: fooID and fooName are siblings of the bar ancestor that contains the Channel="High" node. It's almost the same for barID and barName: they are siblings of the barClass ancestor that contains the Channel="High" node. Also, what I want is to obtain the values 1, One, a and small_a, not to filter by them, since there will be multiple foo blocks.
If I understand you correctly, you are probably looking for something like this (using python and lxml):

from lxml import etree

foos = """[your xml above]"""
doc = etree.fromstring(foos)

print('fooID | fooName | barID | barName')
for entry in doc.xpath('//foo[.//corgeReportRes[@Channel="High"]]'):
    foo_id = entry.xpath('./fooID/text()')[0]
    foo_name = entry.xpath('./fooName/text()')[0]
    # a foo can contain more than one matching bar, so report each one
    for bar in entry.xpath('./bar[.//corgeReportRes[@Channel="High"]]'):
        bar_id = bar.xpath('./barID/text()')[0]
        bar_name = bar.xpath('./barName/text()')[0]
        print(' | '.join([foo_id, foo_name, bar_id, bar_name]))
Output:
fooID | fooName | barID | barName
1 | One | a | small_a
1 | One | b | small_b
2 | Two | c | small_c
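If you would rather stay with the standard library's xml.etree.ElementTree (which has no parent axis), you can invert the search: iterate over each foo and bar and test for a Channel="High" descendant. A sketch along those lines:

import xml.etree.ElementTree as xmlET

root = xmlET.parse('file.xml').getroot()

print('fooID | fooName | barID | barName')
for foo in root.findall('foo'):
    for bar in foo.findall('bar'):
        # keep only bars that contain a Channel="High" report
        if bar.findall(".//corgeReportRes[@Channel='High']"):
            print(' | '.join([foo.findtext('fooID'), foo.findtext('fooName'),
                              bar.findtext('barID'), bar.findtext('barName')]))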
I am working on a project about gentrification. My teammates pulled data from the census and cleaned it to get the values we need. The issue is that the zip code values won't print the leading 0's (i.e. "2322" when it should be "02322"). We managed to find the tract value that prints the full zip code along with the tract code ("ZCTA5 02322"). I want to remove "ZCTA5" to get the zip code alone.
I've tried the code below, but it only gets rid of "ZCTA" instead of "ZCTA5" (i.e. "502322"). I'm also concerned that if I manage to remove the 5 along with the other characters, it will remove all 5's in the zip codes as well.
From there I will be pulling from pgeocode to access the respective lat & lng values to create the heatmap. Please help?
I've tried the .replace() and .translate() functions. Replace still prints the zip codes with the 5. Translate gets an AttributeError.
Sample data
Zipcode | Name | Change_In_Value | Change_In_Income | Change_In_Degree | Change_In_Rent
2322 | ZCTA5 02322 | -0.050242 | -0.010953 | 0.528509 | -0.013263
2324 | ZCTA5 02324 | 0.012279 | -0.022949 | -0.040456 | 0.210664
2330 | ZCTA5 02330 | 0.020438 | 0.087415 | -0.095076 | -0.147382
2332 | ZCTA5 02332 | 0.035024 | 0.054745 | 0.044315 | 1.273772
2333 | ZCTA5 02333 | -0.012588 | 0.079819 | 0.182517 | 0.156093
Translate
zipcode = []
test2 = gent_df['Name'] = gent_df['Name'].astype(str).translate({ord('ZCTA5'): None}).astype(int)
zipcode.append(test2)
test2.head()
Replace
zipcode = []
test2 = gent_df['Name'] = gent_df['Name'].astype(str).replace(r'\D', '').astype(int)
zipcode.append(test2)
test2.head()
Replace
Expected:
24093
26039
34785
38944
29826
Actual:
524093
526039
534785
538944
529826
Translate
Expected:
24093
26039
34785
38944
29826
Actual:
AttributeError Traceback (most recent call last)
<ipython-input-71-0e5ff4660e45> in <module>
3 zipcode = []
4
----> 5 test2 = gent_df['Name'] = gent_df['Name'].astype(str).translate({ord('ZCTA5'): None}).astype(int)
6 # zipcode.append(test2)
7 test2.head()
~\Anaconda3\envs\MyPyEnv\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
5178 if self._info_axis._can_hold_identifiers_and_holds_name(name):
5179 return self[name]
-> 5180 return object.__getattribute__(self, name)
5181
5182 def __setattr__(self, name, value):
AttributeError: 'Series' object has no attribute 'translate'
It looks like you are using pandas, so you should be able to use the .str.lstrip() method. Note that to_strip is treated as a set of characters rather than a literal prefix, so include the trailing space in it (and be aware that a zip code that itself starts with a 5 would also get stripped). I tried this on a sample df and it worked for me:

gent_df.Name = gent_df.Name.str.lstrip(to_strip='ZCTA5 ')

Here is a link to the library page for .str.strip(), .str.lstrip(), and .str.rstrip().
I hope this helps!
There are many ways to do this; two come to mind right away.
If you want to keep the last 5 characters of the zip code string, regardless of whether they are digits or not:
gent_df['Name'] = gent_df['Name'].str[-5:]
If you want to get the last 5 digits of the zip code string:
gent_df['Name'] = gent_df['Name'].str.extract(r'(\d{5})$')[0]
Include some sample data for a more specific answer.
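Once the zip codes are clean, the pgeocode lookup mentioned in the question looks roughly like this (a sketch, assuming US postal codes):

import pgeocode

nomi = pgeocode.Nominatim("us")
# query_postal_code returns a pandas Series with latitude/longitude fields
location = nomi.query_postal_code("02322")
print(location.latitude, location.longitude)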
I am cleaning a restaurant data set using Pandas' read_csv.
I have columns like this:
name, online_order, book_table, rate, votes
xxxx, Yes, Yes, 4.5/5, 705
I expect them to be like this:
name, online_order, book_table, rate, votes
xxxx, Yes, Yes, 4.5, 705
You basically need to split each item of dataframe["rate"] on "/" and take the part you need. Apply this to your dataframe using .apply(lambda x: getRate(x)):
def getRate(x):
    return str(x).split("/")[0]
To use it with column name rate, we can use:
dataframe["rate"] = dataframe["rate"].apply(lambda x: getRate(x))
You can use Python's .split() function to remove specific text, given that the text is consistently going to be "/5" and there are no instances of "/5" that you want to keep in the string. You can use it like this:
num = "4.5/5"
num.split("/5")[0]
output: '4.5'
If this isn't exactly what you need, there are more regex Python functions here.
You can use Series.apply() to make your replacement operation on the rate column:
def clean(x):
    if "/" not in x:
        return x
    else:
        return x[0:x.index('/')]

df.rate = df.rate.apply(lambda x: clean(x))
print(df)
Output
+----+-------+---------------+-------------+-------+-------+
| | name | online_order | book_table | rate | votes |
+----+-------+---------------+-------------+-------+-------+
| 0 | xxxx | Yes | Yes | 4.5 | 705 |
+----+-------+---------------+-------------+-------+-------+
EDIT
Edited to handle situations in which there could be multiple /'s, or the denominator could be a number other than 5 (i.e. /4 or /1/3 ...).
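For completeness, the same cleanup can be done without .apply, using pandas' vectorized string methods. A minimal sketch:

# keep everything before the first '/'; rows without a '/' pass through unchanged
df["rate"] = df["rate"].astype(str).str.split("/").str[0]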