Clean the text data and save in CSV format using Python (python-3.x)

I have a text file of about 7000 sentences, one sentence per line. A sample of the file's format is given below. I want to clean the data and change its format using Python.
(input.txt)
I\PP.sg.n.n am\VM.3.fut.sim.dcl.fin.n.n.n going\VER.0.gen.n.n to\NC.0.0.n.n school\JQ.n.n.crd .\PU
When\PPR.pl.1.0.n.n.n.n I\PP.0.y go\VM.0.0.0.0.nfn.n.n.n outside\NC.0.0.n.n ,\PU I\NST.0.n.n saw\NN.loc.n.n something\DAB.sg.y .\PU
I\PP.0.y eat\JQ.n.n.nnm rice\NC.0.loc.n.n .\PU
I want to convert the data above into the following CSV format.
(input.csv)
Sentences,Tags
"I am going to school .","PP VM VER NC JQ PU"
"When I go outside , I saw something .","PPR PP VM NC PU NST NN DAB PU"
"I eat rice .","PP JQ NC PU"
I have tried several approaches, but none of them produces the desired format, and I am quite confused. Any help would be greatly appreciated.
Thanks in advance.

Python Code:
txt = r"""
I\PP.sg.n.n am\VM.3.fut.sim.dcl.fin.n.n.n going\VER.0.gen.n.n to\NC.0.0.n.n school\JQ.n.n.crd .\PU
When\PPR.pl.1.0.n.n.n.n I\PP.0.y go\VM.0.0.0.0.nfn.n.n.n outside\NC.0.0.n.n ,\PU I\NST.0.n.n saw\NN.loc.n.n something\DAB.sg.y .\PU
I\PP.0.y eat\JQ.n.n.nnm rice\NC.0.loc.n.n .\PU
"""
for line in txt.strip().split('\n'):
    words, tags = [], []
    for wordtag in line.strip().split():
        # "word\TAG.attr1.attr2..." -> the word and its full tag string
        word, tag = wordtag.split('\\', 1)
        words.append(word)
        tags.append(tag.split('.')[0])  # keep only the coarse tag before the first '.'
    print(f"\"{' '.join(words)}\",\"{' '.join(tags)}\"")
Output:
"I am going to school .","PP VM VER NC JQ PU"
"When I go outside , I saw something .","PPR PP VM NC PU NST NN DAB PU"
"I eat rice .","PP JQ NC PU"

Related

Python: How to extract addresses from a sentence/paragraph (non-Regex approach)?

I was working on a project that required me to extract addresses from sentences.
For example, input sentence: Hi, Mr. Sam D. Richards lives here Shop No / 123, 3rd Floor, ABC Building, Behind CDE Mart, Aloha Road, 12345. If you need any help, call me on 12345678
I am trying to extract just the address, i.e. Shop No / 123, 3rd Floor, ABC Building, Behind CDE Mart, Aloha Road, 12345
What I have tried so far:
I tried Pyap, which also works on regex, so it is not able to generalize well for addresses of countries other than the US/Canada/UK. I realized that we cannot use regex, as there is no consistent pattern to the addresses or the sentences whatsoever. I also tried locationtagger, which only manages to return the country or the city.
Is there any better way of doing it?
If there is no obvious pattern for a regex, you can try an ML-based approach. This is a well-known problem called named entity recognition (NER), and it is typically solved as a sequence-tagging problem: a model is trained to predict, for each token (e.g. a word or a subword), whether it is part of an address or not.
You can look for a model that is already trained to extract addresses (e.g. here: https://huggingface.co/models?search=address), or fine-tune a BERT-based model on your own dataset (here is a recipe).
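To make that concrete, here is a rough, untested sketch using the Hugging Face transformers pipeline; the model id is a placeholder for whichever address model you pick from the search linked above:
from transformers import pipeline

# Placeholder model id -- substitute a real address-extraction model
# found via https://huggingface.co/models?search=address
ner = pipeline("token-classification",
               model="<address-ner-model>",
               aggregation_strategy="simple")  # merge subword pieces into spans

text = ("Hi, Mr. Sam D. Richards lives here Shop No / 123, 3rd Floor, "
        "ABC Building, Behind CDE Mart, Aloha Road, 12345.")
for span in ner(text):
    print(span["entity_group"], span["word"], span["score"])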
Addresses have a well-known structure, so with a grammar parser it should be possible to parse them.
PyParsing has a scanning feature that searches for a pattern without parsing the rest of the input. You can try this feature. Here is an example that detects three addresses in a sample string.
#!/bin/python3
from pyparsing import *

# German addresses: Name, Straße Hausnummer, PLZ Ort
GermanWord = Word("ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÜ", alphas + "ß")
GermanWordComposition = GermanWord + ZeroOrMore(Optional(Literal("-")) + GermanWord)
GermanName = GermanWordComposition
GermanStreet = GermanWordComposition
GermanHouseNumber = Word(nums) + Optional(Word(alphas, exact=1) + FollowedBy(White()))
GermanAddressSeparator = Literal(",") | Literal("in")
GermanPostCode = Word(nums, exact=5)
GermanTown = GermanWordComposition
German_Address = GermanName + GermanAddressSeparator + GermanStreet + GermanHouseNumber \
    + GermanAddressSeparator + GermanPostCode + GermanTown

# English addresses: building, optional floor, optional landmarks, street, number
EnglishWord = Word("ABCDEFGHIJKLMNOPQRSTUVWXYZ", alphanums)
EnglishNumber = Word(nums)
EnglishComposition = OneOrMore(EnglishWord)
EnglishExtension = Word("-/", exact=1) + (EnglishComposition | EnglishNumber)
EnglishAddressSeparator = Literal(",")
EnglishFloor = (Literal("1st") | Literal("2nd") | Literal("3rd")
                | Combine(EnglishNumber + Literal("th"))) + Literal("Floor")
EnglishWhere = EnglishComposition
EnglishStreet = EnglishComposition
EnglishAddress = EnglishComposition + Optional(EnglishExtension) \
    + EnglishAddressSeparator + Optional(EnglishFloor) \
    + Optional(EnglishAddressSeparator + EnglishWhere) \
    + Optional(EnglishAddressSeparator + EnglishWhere) \
    + EnglishAddressSeparator + EnglishStreet + EnglishAddressSeparator + EnglishNumber

Address = EnglishAddress | German_Address

test_1 = "I am writing to Peter Meyer, Moritzstraße 22, 54543 Musterdorf a letter. But the letter arrived at \
Hubert Figge, Große Straße 14 in 45434 Berlin. In the letter was written: Hi, Mr. Sam D. Richards lives here \
Shop No / 123, 3rd Floor, ABC Building, Behind CDE Mart, Aloha Road, 12345. If you need any help, call \
me on 12345678."

for i in Address.scanString(test_1):
    print(i)

Why does my PySpark regular expression not give more than the first row?

Taking inspiration from this answer: https://stackoverflow.com/a/61444594/4367851 I have been able to split my .txt file into columns in a Spark DataFrame. However, it only gives me the first game, even though the sample .txt file contains many more.
My code:
basefile = spark.sparkContext.wholeTextFiles("example copy 2.txt").toDF().\
    selectExpr("""split(replace(regexp_replace(_2, '\\\\n', ','), ""),",") as new""").\
    withColumn("Event", col("new")[0]).\
    withColumn("White", col("new")[2]).\
    withColumn("Black", col("new")[3]).\
    withColumn("Result", col("new")[4]).\
    withColumn("UTCDate", col("new")[5]).\
    withColumn("UTCTime", col("new")[6]).\
    withColumn("WhiteElo", col("new")[7]).\
    withColumn("BlackElo", col("new")[8]).\
    withColumn("WhiteRatingDiff", col("new")[9]).\
    withColumn("BlackRatingDiff", col("new")[10]).\
    withColumn("ECO", col("new")[11]).\
    withColumn("Opening", col("new")[12]).\
    withColumn("TimeControl", col("new")[13]).\
    withColumn("Termination", col("new")[14]).\
    drop("new")
basefile.show()
Output:
+--------------------+---------------+-----------------+--------------+--------------------+--------------------+-----------------+-----------------+--------------------+--------------------+-----------+--------------------+--------------------+--------------------+
| Event| White| Black| Result| UTCDate| UTCTime| WhiteElo| BlackElo| WhiteRatingDiff| BlackRatingDiff| ECO| Opening| TimeControl| Termination|
+--------------------+---------------+-----------------+--------------+--------------------+--------------------+-----------------+-----------------+--------------------+--------------------+-----------+--------------------+--------------------+--------------------+
|[Event "Rated Cla...|[White "BFG9k"]|[Black "mamalak"]|[Result "1-0"]|[UTCDate "2012.12...|[UTCTime "23:01:03"]|[WhiteElo "1639"]|[BlackElo "1403"]|[WhiteRatingDiff ...|[BlackRatingDiff ...|[ECO "C00"]|[Opening "French ...|[TimeControl "600...|[Termination "Nor...|
+--------------------+---------------+-----------------+--------------+--------------------+--------------------+-----------------+-----------------+--------------------+--------------------+-----------+--------------------+--------------------+--------------------+
Input file:
[Event "Rated Classical game"]
[Site "https://lichess.org/j1dkb5dw"]
[White "BFG9k"]
[Black "mamalak"]
[Result "1-0"]
[UTCDate "2012.12.31"]
[UTCTime "23:01:03"]
[WhiteElo "1639"]
[BlackElo "1403"]
[WhiteRatingDiff "+5"]
[BlackRatingDiff "-8"]
[ECO "C00"]
[Opening "French Defense: Normal Variation"]
[TimeControl "600+8"]
[Termination "Normal"]
1. e4 e6 2. d4 b6 3. a3 Bb7 4. Nc3 Nh6 5. Bxh6 gxh6 6. Be2 Qg5 7. Bg4 h5 8. Nf3 Qg6 9. Nh4 Qg5 10. Bxh5 Qxh4 11. Qf3 Kd8 12. Qxf7 Nc6 13. Qe8# 1-0
[Event "Rated Classical game"]
.
.
.
Each game starts with [Event, so I feel like it should be doable, as the file has a repeating structure; alas, I can't get it to work.
Extra points:
I don't actually need the move list, so if it's easier, it can be deleted.
I only want the content of what is inside the " " for each line once it has been converted to a Spark DataFrame.
Many thanks.
wholeTextFiles reads each file into a single record. If you read only one file, the result will be an RDD with only one row, containing the whole text file. The regexp logic in the question returns only one result per row, and this will be the first entry in the file.
Probably the best solution would be to split the file at the OS level into one file per game (for example here; see also the plain-Python sketch at the end of this answer), so that Spark can read the multiple games in parallel. But if a single file is not too big, splitting the games can also be done within PySpark:
Read the file(s):
basefile = spark.sparkContext.wholeTextFiles(<....>).toDF()
Create a list of columns and convert this list into a list of column expressions using regexp_extract:
from pyspark.sql import functions as F
cols = ['Event', 'White', 'Black', 'Result', 'UTCDate', 'UTCTime', 'WhiteElo', 'BlackElo', 'WhiteRatingDiff', 'BlackRatingDiff', 'ECO', 'Opening', 'TimeControl', 'Termination']
cols = [F.regexp_extract('game', rf'{col} \"(.*)\"',1).alias(col) for col in cols]
Extract the data:
split the whole file into an array of games
explode this array into single records
delete the line breaks within each record so that the regular expression works
use the column expressions defined above to extract the data
basefile.selectExpr("split(_2,'\\\\[Event ') as game") \
    .selectExpr("explode(game) as game") \
    .withColumn("game", F.expr("concat('Event ', replace(game, '\\\\n', ''))")) \
    .select(cols) \
    .show(truncate=False)
Output (for an input file containing three copies of the game):
+---------------------+-----+-------+------+----------+--------+--------+--------+---------------+---------------+---+--------------------------------+-----------+-----------+
|Event |White|Black |Result|UTCDate |UTCTime |WhiteElo|BlackElo|WhiteRatingDiff|BlackRatingDiff|ECO|Opening |TimeControl|Termination|
+---------------------+-----+-------+------+----------+--------+--------+--------+---------------+---------------+---+--------------------------------+-----------+-----------+
|Rated Classical game |BFG9k|mamalak|1-0 |2012.12.31|23:01:03|1639 |1403 |+5 |-8 |C00|French Defense: Normal Variation|600+8 |Normal |
|Rated Classical game2|BFG9k|mamalak|1-0 |2012.12.31|23:01:03|1639 |1403 |+5 |-8 |C00|French Defense: Normal Variation|600+8 |Normal |
|Rated Classical game3|BFG9k|mamalak|1-0 |2012.12.31|23:01:03|1639 |1403 |+5 |-8 |C00|French Defense: Normal Variation|600+8 |Normal |
+---------------------+-----+-------+------+----------+--------+--------+--------+---------------+---------------+---+--------------------------------+-----------+-----------+
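For the OS-level split suggested at the top of this answer, here is a minimal plain-Python sketch (file names are placeholders, and it assumes every game starts with a line beginning with [Event ):
# Split one big file into one file per game so Spark can read the games
# in parallel. "games.txt" and the output name pattern are placeholders.
out, game_no = None, 0
with open("games.txt", encoding="utf-8") as src:
    for line in src:
        if line.startswith("[Event "):      # a new game begins here
            if out is not None:
                out.close()
            game_no += 1
            out = open(f"game_{game_no:05d}.txt", "w", encoding="utf-8")
        if out is not None:                 # ignore anything before the first game
            out.write(line)
if out is not None:
    out.close()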

Insert a new line or an empty line between two sentences

I have a text file as the output of another program. I want to insert a blank line above every line that starts with 'Sentence #'.
This is what I currently have:
Sentence #26024 (5 tokens):
Today is a good day
[Text=Today CharacterOffsetBegin=1607176 CharacterOffsetEnd=1607178 PartOfSpeech=IN Lemma=if]
[Text=is CharacterOffsetBegin=1607179 CharacterOffsetEnd=1607181
PartOfSpeech=NN Lemma=yo]
[Text=a CharacterOffsetBegin=1607182 CharacterOffsetEnd=1607186 PartOfSpeech=NN Lemma=girl]
[Text=good CharacterOffsetBegin=1607187 CharacterOffsetEnd=1607193 PartOfSpeech=JJ Lemma=doesnt]
[Text=day CharacterOffsetBegin=1607202 CharacterOffsetEnd=1607205
root(ROOT-0, today-1)
root(today-1, is-2)
dobj(a-2, good-3)
amod(day-3, good-4)
Sentence #26025 (4 tokens):
if you can help
[Text=if CharacterOffsetBegin=1607223 CharacterOffsetEnd=1607225 PartOfSpeech=IN Lemma=if]
[Text=you CharacterOffsetBegin=1607226 CharacterOffsetEnd=1607229 PartOfSpeech=PRP Lemma=you]
[Text=can CharacterOffsetBegin=1607230 CharacterOffsetEnd=1607233 PartOfSpeech=MD Lemma=can
mark(help-4, if-1)
nsubj(help-4, you-2)
aux(help-4, can-3)
This is what I want it to look like:
Sentence #26024 (5 tokens):
Today is a good day
[Text=Today CharacterOffsetBegin=1607176 CharacterOffsetEnd=1607178 PartOfSpeech=IN Lemma=if]
[Text=is CharacterOffsetBegin=1607179 CharacterOffsetEnd=1607181
PartOfSpeech=NN Lemma=yo]
[Text=a CharacterOffsetBegin=1607182 CharacterOffsetEnd=1607186 PartOfSpeech=NN Lemma=girl]
[Text=good CharacterOffsetBegin=1607187 CharacterOffsetEnd=1607193 PartOfSpeech=JJ Lemma=doesnt]
[Text=day CharacterOffsetBegin=1607202 CharacterOffsetEnd=1607205
root(ROOT-0, today-1)
root(today-1, is-2)
dobj(a-2, good-3)
amod(day-3, good-4)

Sentence #26025 (4 tokens):
if you can help
[Text=if CharacterOffsetBegin=1607223 CharacterOffsetEnd=1607225 PartOfSpeech=IN Lemma=if]
[Text=you CharacterOffsetBegin=1607226 CharacterOffsetEnd=1607229 PartOfSpeech=PRP Lemma=you]
[Text=can CharacterOffsetBegin=1607230 CharacterOffsetEnd=1607233 PartOfSpeech=MD Lemma=can
mark(help-4, if-1)
nsubj(help-4, you-2)
aux(help-4, can-3)
Can anyone please provide pointers? Thanks.
I can't do it manually because it's a large file that would need thousands of blank lines inserted.
This is what I did, in case anyone else has a similar problem:
file = open("nameofoldfile.txt", 'r')
filelines = file.readlines()
for lines in filelines:
    lines = lines.strip()
    if lines.startswith('Sentence #'):
        print('\n')  # note: '\n' plus print's own newline gives two line breaks; print() alone would give a single blank line
        print(lines)
    else:
        print(lines)
Then I saved the output to a new text file by running it from the command prompt:
python nameoffile.py > nameoftextfile.txt
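The same thing can be done without shell redirection, writing straight to a new file (a sketch; "nameofnewfile.txt" is a placeholder):
# Insert a blank line before every 'Sentence #' header and write the
# result directly to a new file instead of redirecting stdout.
with open("nameofoldfile.txt") as src, open("nameofnewfile.txt", "w") as dst:
    for line in src:
        line = line.strip()
        if line.startswith("Sentence #"):
            dst.write("\n")
        dst.write(line + "\n")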

Failure when using CRF++ 0.58 to train an NE model

When I use CRF++ 0.58 to train an NE model, the program stops with:
"reading training data: tagger.cpp(399) [feature_index_->buildFeatures(this)] 0.00s"
Development environment:
Red Hat Linux 6.5, gcc 5.0, CRF++ 0.58
Written feature template:
template
Dataset:
Boson_train.txt
Boson_test.txt
The first column is the word, the second column is the POS tag, and the third column is the NER tag.
The problem:
When I want to train the NER model, I type "crf_learn -f 3 -c 4.0 template Boson_train crf_model" and get the notification "reading training data: tagger.cpp(399) [feature_index_->buildFeatures(this)] 0.00s". I don't know C++, so I can't fix the problem myself.
The methods I tried:
1. Changing the encoding of the dataset. I used Notepad++ to change "UTF-8 without BOM" to "UTF-8". It didn't work.
2. Changing the delimiter from '\t' to ' ' (space). It didn't work.
3. I thought maybe the template was wrong, so I tried crf++0.58/example/seg/template as a test. It worked, but that template is simple, so I used /example/JapaneseNE/template, which is more similar to my feature template. It didn't work. Then I checked the JapaneseNE example itself and it works well. So I am confused. Can someone help me?
template
U00:%x[-2,0]
U01:%x[-1,0]
U02:%x[0,0]
U03:%x[1,0]
U04:%x[2,0]
U05:%x[-2,0]/%x[-1,0]/%x[0,0]
U06:%x[-1,0]/%x[0,0]/%x[1,0]
U07:%x[0,0]/%x[1,0]/%x[2,0]
U08:%x[-1,0]/%x[0,0]
U09:%x[0,0]/%x[1,0]
U10:%x[-2,1]/%x[0,1]
U11:%x[-2,1]/%x[1,1]
U11:%x[-1,1]/%x[0,1]
U12:%x[0,0]/%x[0,1]
U13:%x[0,1]/%x[1,1]
U14:%x[0,1]/%x[2,1]
U15:%x[-1,0]/%x[0,1]
U16:%x[-1,0]/%x[-1,1]
U17:%x[1,0]/%x[1,1]
U18:%x[1,0]/%x[1,1]
U19:%x[2,0]/%x[2,1]
U20:%x[-1,2]
U21:%x[-2,2]
U22:%x[0,1]/%x[-1,2]
U23:%x[0,1]/%x[-2,2]
U24:%x[0,0]/%x[-1,2]
U25:%x[0,0]/%x[-2,2]
U26:%x[-1,2]/%x[-2,2]/%x[0,1]
U27:%x[-2,2]/%x[0,1]/%x[1,1]
U28:%x[-1,1]/%x[-1,2]/%x[0,1]
U29:%x[-1,2]/%x[0,0]/%x[0,1]
Boson_train
浙江 ns B_product_name
在线 b I_product_name
杭州 ns I_product_name
4 m B_time
月 m I_time
25 m I_time
日 m I_time
讯 ng Out
( x Out
记者 n Out
x Out
x B_person_name
施宇翔 nr I_person_name
x Out
通讯员 n B_person_name
x Out
方英 nr B_person_name
) x Out
毒贩 n Out
很 zg Out
“ x Out
时髦 nr Out
” x Out
, x Out
用 p Out
微信 vn B_product_name
交易 n Out
毒品 n Out
。 x Out
没 v Out
料想 v Out
警方 n B_person_name
也 d Out
You were debugging in the right direction: the issue is indeed with your template file.
Your training data has 3 columns (column 0: word, column 1: POS tag, column 2: NER tag).
You cannot use the tag itself as a feature, but your template references it (i.e., column 2) in many feature definitions (see U20 through U29). Your training should work after removing or correcting these.
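For example, U20:%x[-1,2] reads column 2 (the NER tag) of the previous token. You would either delete U20 through U29 or rewrite them against the observable columns only, e.g. (illustrative only):
U20:%x[-1,1]
U21:%x[-2,1]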
Hope this helps :)
You can also check out these video tutorials for a better understanding of template files and training NER with CRF++:
1) https://youtu.be/GJHeTvDkIaE
2) https://youtu.be/Ur5umC4BwN4

Reading a specific txt file and re-arranging it into a given format

Below is the output of a chemical analysis instrument. I need to rearrange the format and sort it so that the percentage figure for each element goes below its name. My question is: how do I read this file word by word? How can I choose, for instance, word number 12?
txt file format:
Header_1 Date Time Method_Name (Filter_Name) Calc_Mode Heat No. Quality Anal. Code Sample ID C Si Mn P S Cr Mo Ni Al Co Cu Nb Ti V W Pb Sn As Bi Ca Sb Se B Zn N Fe Place Code Work Phase
Single 13.01.13 09:51:10 Fe-10 Test AutoResult 12A 00001.040 00000.437 00000.292 00000.023 00000.007 00001.505 00000.263 00000.081 00000.012 00000.014 00000.110 00000.155 00000.040 00000.098 00000.015 00000.014 00000.013 00000.012 00000.002 00000.001 00000.016 00000.014 00000.005 00000.001 00000.016 00095.813
To find word 12, read the line character by character until you have seen 11 instances of whatever is being used to separate words (which you have not specified); what follows, until the next such separator, will be the 12th word.
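In Python, assuming the words are separated by plain whitespace (an assumption, since the file's separator is not specified), str.split does this in one step:
# Hypothetical sketch: grab word number 12 from the data line, assuming
# whitespace-separated words ("results.txt" is a placeholder file name).
with open("results.txt", encoding="utf-8") as f:
    header = f.readline().split()  # first line: column/element names
    values = f.readline().split()  # second line: the measurements
print(values[11])                  # word number 12 == index 11 (zero-based)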
