I am working on renaming movie titles that contain unwanted words. string.gsub can replace a string with "" (an empty string), but I have around 200 string patterns that need to be replaced with "".
Right now I have to call string.gsub for every pattern. I was wondering whether there is a way to put all the string patterns into a single string.gsub call. I have searched around the web for a solution but haven't found anything.
The movie title looks like this: B.A.Pass 2013 Hindi 720p DvDRip CROPPED AAC x264 RickyKT
and I want to remove the extra parts such as 2013, Hindi, 720p, DvDRip, CROPPED, AAC, x264, and RickyKT.
You can pass a table to string.gsub as the third argument, like this:
local movie = "B.A.Pass 2013 Hindi 720p DvDRip CROPPED AAC x264 RickyKT"
movie = movie:gsub("%S+", {["2013"] = "", ["Hindi"] = "", ["720p"] = "",
["DvDRip"] = "", ["CROPPED"] = "", ["AAC"] = "",
["x264"] = "", ["RickyKT"] = ""})
print(movie)
Put all of the patterns in a table and then enumerate the table, calling string.gsub() for each pattern:
str = "B.A.Pass 2013 Hindi 720p DvDRip CROPPED AAC x264 RickyKT"
patterns = {"pattern1", "pattern2", "pattern3"}
for i, v in ipairs(patterns) do
    str = string.gsub(str, v, "")
end
This still requires one string.gsub() invocation per pattern, but the code is much more maintainable than a long series of hard-coded string.gsub() calls.
To avoid writing keys and values in a table for every new entry, I'd write a function that works on a numerically indexed table (the patterns being the values).
This way I don't need to write {["pattern_n"] = ""} for every new pattern.
Ex:
PATTERNS = {"2013", "Hindi", "720p", "DvDRip", "CROPPED", "AAC", "x264", "RickyKT"}
function replace(match)
    -- returns "" for tokens found in PATTERNS, nil to keep the token unchanged
    local ret = nil
    for i, v in ipairs(PATTERNS) do
        if v:find(match, 1, true) then  -- plain find, so dots in the token are not treated as pattern magic
            ret = ""
        end
    end
    return ret
end
local movie = "B.A.Pass 2013 Hindi 720p DvDRip CROPPED AAC x264 RickyKT"
movie = movie:gsub("%S+", replace)
print(movie)
You could do it in a simple function, so you do not need to repeat the code for every string, or just use a single string.gsub call with the replacement value for the string you need.
Function:
local large_name = "B.A.Pass 2013 Hindi 720p DvDRip CROPPED AAC x264 RickyKT"
function clean_name(str)
    local v = string.gsub(str, "(.-)%s([%(%[']?%d%d%d?%d?[%)%]]?)%s*(.*)", "%1")
    return v
end
print(clean_name(large_name))
Using only string.gsub on the value:
local large_name = "B.A.Pass 2013 Hindi 720p DvDRip CROPPED AAC x264 RickyKT"
local clean_name = string.gsub(large_name, "(.-)%s([%(%[']?%d%d%d?%d?[%)%]]?)%s*(.*)", "%1")
print(clean_name)
The pattern captures the first part (the movie name) up to the space before the year, treats the year as the second capture, and keeps only the name. This way it is not necessary to list every value that can appear within a movie name, which avoids many false positives.
I added a test loop to try it on different movie names:
local testing = {"Whiplash 2014 [1080p]",
"Anon (2018) [WEBRip] [1080p] [YTS.AM]",
"Maze Runner The Death Cure 2018 [WEBRip] [1080p] [YTS.AM]",
"12 Strong [2018] [WEBRip] [1080p] [YTS.AM]",
"Kingsman The Secret Service (2014) [1080p]",
"The Equalizer [2014] [1080p]",
"Annihilation 2018 [WEBRip] [1080p] [YTS.AM]",
"The Shawshank Redemption '94",
"Assassin's Creed 2016 HC 720p HDRip 850 MB - iExTV",
"Captain Marvel (2019) [WEBRip] [1080p] [YTS.AM]",}
for k, v in pairs(testing) do
    local result = string.gsub(v, "(.-)%s([%(%[']?%d%d%d?%d?[%)%]]?)%s*(.*)", "%1")
    print(result)
end
Output:
Whiplash
Anon
Maze Runner The Death Cure
12 Strong
Kingsman The Secret Service
The Equalizer
Annihilation
The Shawshank Redemption
Assassin's Creed
Captain Marvel
I am trying to create an anime series search using the tool anilistpython, but I am not able to remove the newline characters in the plot field and need help aligning the output in a proper view format.
Code I have tried:
from AnilistPython import Anilist
import pandas as pd
import re
# db access online
anilist = Anilist()
# User input
ani_search = anilist.get_anime(input('Enter the Anime Name\t:\t'), manual_select=True)
df = ani_search
# for Genres split
cate = []
for gen in df['genres']:
    cate.append(gen)
cate1 = (' , '.join(cate))
# for Checking Episode
if df['airing_status'] == 'RELEASING':
    print('Ongoing')
    x = 'Ongoing'
    y = df['next_airing_ep']
    print(y['episode'])
    y1 = y['episode']
elif df['airing_status'] == 'FINISHED':
    print('Ended')
    x = 'Ended'
    y = df['airing_episodes']
    print(y)
    y1 = y
else:
    print('None')
# print other details
print(f"\nTitle_Name\t:\t{df['name_english']}\nRomji_Title\t:\t{df['name_romaji']}\nPlot\t:\t{re.split('<br>', df['desc'])}\nAiring_Format\t:\t{df['airing_format']}\nStatus\t:\t{x}\nEpisodes_Count\t:\t{y1}\nGenres\t:\t{cate1}\nRating\t:\t{df['average_score']}/100\n")
The output it generated:
Enter the Anime Name : Bleach
1. BLEACH
2. BEACH
3. Akkanbee da
Please select the anime that you are searching for in number: 1
Title_Name : Bleach
Romji_Title : BLEACH
Plot : ["Ichigo Kurosaki is a rather normal high school student apart from the fact he has the ability to see ghosts. This ability never impacted his life in a major way until the day he encounters the Shinigami Kuchiki Rukia, who saves him and his family's lives from a Hollow, a corrupt spirit that devours human souls. \n", '', '\nWounded during the fight against the Hollow, Rukia chooses the only option available to defeat the monster and passes her Shinigami powers to Ichigo. Now forced to act as a substitute until Rukia recovers, Ichigo hunts down the Hollows that plague his town. \n\n\n']
Airing_Format : TV
Status : Ended
Episodes_Count : 366
Genres : Action , Adventure , Supernatural
Rating : 76/100
I want the output to look like this:
Title_Name : Bleach
Romji_Title : BLEACH
Plot : Ichigo Kurosaki is a rather normal high school student apart from the fact he has the ability to see ghosts. This ability never impacted his life in a major way until the day he encounters the Shinigami Kuchiki Rukia, who saves him and his family's lives from a Hollow, a corrupt spirit that devours human souls. Wounded during the fight against the Hollow, Rukia chooses the only option available to defeat the monster and passes her Shinigami powers to Ichigo. Now forced to act as a substitute until Rukia recovers, Ichigo hunts down the Hollows that plague his town.
Airing_Format : TV
Status : Ended
Episodes_Count : 366
Genres : Action , Adventure , Supernatural
Rating : 76/100
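One way to get that format is to flatten the description before printing it. A minimal sketch, assuming df['desc'] is the raw description string containing the <br> tags and newline characters seen in the output above:
import re

plot = re.sub(r'<br\s*/?>|\n', ' ', df['desc'])   # drop <br> tags and newlines
plot = re.sub(r'\s+', ' ', plot).strip()          # collapse runs of whitespace
print(f"Plot\t:\t{plot}")
The cleaned plot string can then be substituted into the larger f-string in place of re.split('<br>', df['desc']).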
I have a string that looks like this:
string=""
( 2021-07-10 01:24:55 PM GMT )TEST
---
Badminton is a racquet sport played using racquets to hit a shuttlecock across
a net. Although it may be played with larger teams, the most common forms of
the game are "singles" (with one player per side) and "doubles" (with two
players per side).
( 2021-07-10 01:27:55 PM GMT )PATRICKWARR
---
Good morning, I am doing well. And you?
---
---
* * *""
I am trying to split the string up into parts like this:
text=['Badminton is a racquet sport played using racquets to hit a
shuttlecock across a net. Although it may be played with larger teams,
the most common forms of the game are "singles" (with one player per
side) and "doubles" (with two players per side).','Good morning, I am
doing well. And you?']
What I have tried:
text=re.findall(r'\( \d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2} PM GMT \)\w+ [\S\n]--- .*',string)
I'm not able to figure out how to extract the multi-line parts.
You can use
(?m)^\(\s*\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\s*[AP]M\s+GMT\s*\)\w+\s*\n---\s*\n(.*(?:\n(?!(?:\(\s*\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\s*[AP]M\s+GMT\s*\)\w+\s*\n)?---).*)*)
See the regex demo. Details:
^ - start of line
{left_rx} - left boundary
--- - three hyphens
\s*\n - zero or more whitespaces and then an LF char
(.*(?:\n(?!(?:{left_rx})?---).*)*) - Group 1:
.* - zero or more chars other than line break chars as many as possible
(?:\n(?!(?:{left_rx})?---).*)* - zero or more (even empty, due to .*) lines that do not start with the (optional) left boundary pattern followed with ---
The boundary pattern defined in left_rx is \(\s*\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\s*[AP]M\s+GMT\s*\)\w+\s*\n; it is basically the same as the original, but I used \s* to match zero or more whitespace characters and \s+ to match one or more whitespace characters between "words".
See the Python demo:
import re
text = '''string=""\n( 2021-07-10 01:24:55 PM GMT )TEST \n--- \nBadminton is a racquet sport played using racquets to hit a shuttlecock across\na net. Although it may be played with larger teams, the most common forms of\nthe game are "singles" (with one player per side) and "doubles" (with two\nplayers per side). \n \n \n\n \n\n( 2021-07-10 01:27:55 PM GMT )PATRICKWARR \n--- \nGood morning, I am doing well. And you? \n \n \n\n \n \n \n--- \n \n \n \n \n--- \n \n* * *""'''
left_rx = r"\(\s*\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}\s*[AP]M\s+GMT\s*\)\w+\s*\n"
rx = re.compile(fr"^{left_rx}---\s*\n(.*(?:\n(?!(?:{left_rx})?---).*)*)", re.M)
print ( [x.strip().replace('\n', ' ') for x in rx.findall(text)] )
Output:
['Badminton is a racquet sport played using racquets to hit a shuttlecock across a net. Although it may be played with larger teams, the most common forms of the game are "singles" (with one player per side) and "doubles" (with two players per side).', 'Good morning, I am doing well. And you?']
One of the approaches:
import re
# Replace all \n with ''
string = string.replace('\n', '')
# Replace the date string '( 2021-07-10 01:27:55 PM GMT )PATRICKWARR ' and string like '* * *' with ''
string = re.sub(r"\(\s*\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2} [AP]M GMT\s*\)\w+|\*+", '', string)
data = string.split('---')
data = [item.strip() for item in data if item.strip()]
print (data)
Output:
['Badminton is a racquet sport played using racquets to hit a shuttlecock acrossa net. Although it may be played with larger teams, the most common forms ofthe game are "singles" (with one player per side) and "doubles" (with twoplayers per side).', 'Good morning, I am doing well. And you?']
I'm still new to Node.js and am currently developing a small app for my kitchen. This app can scan recipes and uses OCR to extract the data. For the OCR extraction I'm using the ocr-space web API. Afterwards I need to parse the raw text into a JSON structure and send it to my database. I've also tested this recipe with AWS Textract, which gave me an even poorer result.
Currently I'm struggling with the parsing part using RegEx in Node.js.
Here is the JSON structure into which I want to parse the recipe data:
receipt = {
    title: 'title of recipe',
    items: [
        'item1',
        'item2',
        'item3'
    ],
    preparation: 'preparation text'
}
As most recipes have an items part followed by a preparation part, my general approach so far is the following:
Search for keywords like 'items' and 'preparation' in the raw text
Parse the text between these keywords
Do further string processing, such as fixing missing whitespace, trimming, etc.
This approach doesn't work if these keywords are missing. Take, for example, the following recipe, which I'm struggling to parse into my JSON structure. The recipe is in German and there are no corresponding keywords ('items' or 'Zutaten', 'preparation' or 'Zubereitung').
The following information from the raw text is needed:
title: line 1
items: line 2 - 8
preparation: line 9 until end
Do you have any hints or tips on how to get closer to a solution? Or do you have any other ideas on how to handle such situations?
Quinoa-Brot
30 g Chiasamen
350 g Quinoa
70 ml Olivenöl
1/2 TL Speisenatron
1 Prise Salz
Saft von 1/2 Zitrone
1 Handvoll Sonnenblumenkerne
30 g Schwarzkümmelsamen
1 Chiasamen mit 100 ml Wasser
verrühren und 30 Minuten quel-
len lassen. Den Ofen auf 200 oc
vorheizen, eine kleine Kastenform
mit Backpapier auslegen.
2 Quinoa mit der dreifachen
Menge Wasser in einen Topf ge-
ben, einmal aufkochen und dann
3 Minuten köcheln lassen - die
Quinoa wird so nur teilweise ge-
gegart. In ein Sieb abgießen, kalt
abschrecken und anschließend
gut abtropfen lassen.
Between each line there is a \n character.
The parsed recipe should look like this:
receipt = {
title: 'Quinoa-Brot',
items: [
'30 g Chiasamen',
'350 g Quinoa',
'70 ml Olivenöl',
'1/2 TL Speisenatron',
'1 Prise Salz',
'Saft von 1/2 Zitrone',
'1 Handvoll Sonnenblumenkerne',
'30 g Schwarzkümmelsamen',
],
preparation: '1 Chiasamen mit 100 ml Wasser verrühren und 30 Minuten quellen lassen. Den Ofen auf 200 oc vorheizen, eine kleine Kastenform mit Backpapier auslegen. 2 Quinoa mit der dreifachen Menge Wasser in einen Topf geben, einmal aufkochen und dann 3 Minuten köcheln lassen - die Quinoa wird so nur teilweise gegegart. In ein Sieb abgießen, kalt abschrecken und anschließend gut abtropfen lassen.'
}
Pattern-matching solutions like RegExp don't sound suitable for this sort of categorization problem. You might want to consider clustering (k-means, etc.) - training a model to differentiate between ingredients and instructions. This can be done by labeling a number of recipes (the more the better), or using unsupervised ML by clustering line by line.
If you need to stick to RegExp for some reason, you could keep track of repeated words. It's a weak methodology, but ingredient names (Chiasamen, Quinoa, ...) will be referenced in the instructions, so you can match in multiline mode to find where the same word is repeated later on:
(?<=\b| )([^ ]+)(?= |$).+(\1)
If you run this in a loop, plus some logic, you can find ingredient-instruction pairs and work through the document with that silhouette information.
You might also be able to take advantage of ingredient lines containing numeric data such as quantities, or words like "piece(s)", "sticks", "leaves", which you could store in a dictionary. That can enrich the word-boundary matches.
I would reconsider using RegExp here at all...
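A rough sketch of that repeated-word idea, written in Python for brevity (the question's context is Node.js, but the same logic ports directly to a JavaScript loop). The crude word regex, the minimum word length, and the tiny stop-word set are assumptions for illustration, not a tested parser:
import re

text = """Quinoa-Brot
30 g Chiasamen
350 g Quinoa
70 ml Olivenöl
1/2 TL Speisenatron
1 Prise Salz
Saft von 1/2 Zitrone
1 Handvoll Sonnenblumenkerne
30 g Schwarzkümmelsamen
1 Chiasamen mit 100 ml Wasser verrühren und 30 Minuten quellen lassen.
2 Quinoa mit der dreifachen Menge Wasser in einen Topf geben."""

# Words that occur again later in the document are candidates for
# ingredient terms that the preparation text refers back to.
words = re.findall(r"[A-Za-zÄÖÜäöüß-]{4,}", text)
seen, repeated = set(), set()
for w in words:
    key = w.lower()
    if key in seen and key not in {"und", "einen"}:   # tiny stop-word list (assumption)
        repeated.add(w)
    seen.add(key)

print(sorted(repeated, key=str.lower))   # Chiasamen, Quinoa (and Wasser) repeat
Combined with a small dictionary of quantity and unit words (g, ml, TL, Prise, ...), those candidates could help separate the ingredient block from the preparation block, though as noted above a trained classifier is likely the more robust route.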
' '.join(token_list) does not reconstruct the original text in cases with multiple whitespaces and punctuation in a row.
For example:
from spacy.tokenizer import Tokenizer
from spacy.lang.en import English
nlp = English()
# Create a blank Tokenizer with just the English vocab
tokenizerSpaCy = Tokenizer(nlp.vocab)
context_text = 'this is a test \n \n \t\t test for \n testing - ./l \t'
contextSpaCyToksSpaCyObj = tokenizerSpaCy(context_text)
spaCy_toks = [i.text for i in contextSpaCyToksSpaCyObj]
reconstruct = ' '.join(spaCy_toks)
reconstruct == context_text
>False
Is there an established way of reconstructing original text from spaCy tokens?
An established answer should work with this edge-case text:
" UNCLASSIFIED U.S. Department of State Case No. F-2014-20439 Doc No. C05795279 Date: 01/07/2016\n\n\n RELEASE IN PART\n B5, B6\n\n\n\n\nFrom: H <hrod17#clintonemail.com>\nSent: Monday, July 23, 2012 7:26 AM\nTo: 'millscd #state.gov'\nCc: 'DanielJJ#state.gov.; 'hanleymr#state.gov'\nSubject Re: S speech this morning\n\n\n\n Waiting to hear if Monica can come by and pick up at 8 to take to Josh. If I don't hear from her, can you send B5\nsomeone else?\n\n Original Message ----\nFrom: Mills, Cheryl D [MillsCD#state.gov]\nSent: Monday, July 23, 2012 07:23 AM\nTo: H\nCc: Daniel, Joshua J <Daniel1.1#state.gov>\nSubject: FW: S speech this morning\n\nSee below\n\n B5\n\ncdm\n\n Original Message\nFrom: Shah, Rajiv (AID/A) B6\nSent: Monday, July 23, 2012 7:19 AM\nTo: Mills, Cheryl D\nCc: Daniel, Joshua.'\nSubject: S speech this morning\n\nHi cheryl,\n\nI look fwd to attending the speech this morning.\n\nI had one last minute request - I understand that in the final version there is no reference to the child survival call to\naction, but their is a reference to family planning efforts. Could you and josh try to make sure there is some specific\nreference to the call to action?\n\nAlso, in terms of acknowledgements it would be good to note torn friedan's leadership as everyone is sensitive to our ghi\ntransition and we want to continue to send the usaid-pepfar-cdc working together public message. I don't know if he is\nthere, but wanted to flag.\n\nLook forward to it.\n\nRaj\n\n\n\n\n UNCLASSIFIED U.S. Department of State Case No. F-2014-20439 Doc No. C05795279 Date: 01/07/2016\n\x0c"
You can very easily accomplish this by changing two lines in your code:
spaCy_toks = [i.text + i.whitespace_ for i in contextSpaCyToksSpaCyObj]
reconstruct = ''.join(spaCy_toks)
Basically, each spaCy token stores the whitespace that follows it, so you append token.whitespace_ to each token instead of joining them with a single space.
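For completeness, here is a minimal end-to-end version of the corrected snippet; for the example string above it should print True:
from spacy.tokenizer import Tokenizer
from spacy.lang.en import English

nlp = English()
tokenizerSpaCy = Tokenizer(nlp.vocab)

context_text = 'this is a test \n \n \t\t test for \n testing - ./l \t'
doc = tokenizerSpaCy(context_text)

# each token records the space that follows it (token.whitespace_); runs of
# extra whitespace are kept as their own tokens, so this join restores the input
reconstruct = ''.join(tok.text + tok.whitespace_ for tok in doc)
print(reconstruct == context_text)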
I have a text file that looks something like this (but with hundreds of lines):
1147-1 SYRUP: DR.PEPPER 5GALLON/BOX
1653-1 SYRUP: DIET DR.PEPPER 5GAL/BOX
2011-2 WATER DISTILLED 6 / 1 GA
1217-2 ALL PURPOSE RASPBERRY FIL 40#
1273-1 STRAWBERRY PIE FILLING 38#
2893-1 BREAD: SOURDOUGH 12/1# OVAL
2287-1 BREAD SQUAW: 8/2.25#LF
1929-1 VINEGAR HERB CONT GRDN 12/12.7
1949-2 KETCHUP: 16/14OZ-PLASTIC BTLS
2408-1 CONE 3 NAB SAMPLER 28/45
2939-1 DULCE LECH FLVR PKT 3/12 EA CS
3017-1 GINGRBRD FLVR PKT 3/12 EA CS
3055-2 EGGNOG FLVR PKT 3/12 EA CS
3192-1 ORIGINAL MRS. DASH SEASONING
I've written code that pulls everything in from the text file line by line, strips out the numbers at the beginning, and saves the next portion (e.g. SYRUP: DR.PEPPER 5GALLON/BOX, ALL PURPOSE RASPBERRY FIL 40#) using Mid(nextLine, 10, 30). I want to take that portion and split it up by pulling out the name (SYRUP: DR.PEPPER, ALL PURPOSE RASPBERRY FIL), the size (5GALLON, 40#), the number of that size (if it is 12/1#, that means 12x 1 LB), and the unit (BOX, LB). As you can see, almost every line is different but with many similarities. I'm not really sure what to do next. I have been trying to use:
re.Pattern = "GALLON|BOX|#|LF|GAL|EA|CS|BTLS"
to pull out the unit portion but I don't know what else to do.
Here is the code I have so far for this portion:
Function NumericOnly(s As String) As String
    Dim StrUnit As String
    Static re As RegExp
    If re Is Nothing Then Set re = New RegExp
    re.IgnoreCase = True
    re.Global = True
    re.Pattern = "GALLON|BOX|#|LF|GAL|EA|CS|BTLS"
    ' Note: this removes every unit keyword from the input rather than extracting it
    StrUnit = re.Replace(s, "")
    NumericOnly = StrUnit   ' assign the return value
End Function
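To sketch the name/size/count/unit split itself, here is an illustrative pattern in Python (named groups are a Python convenience; with VBScript's RegExp object you would read numbered SubMatches instead). The alternation of unit keywords and the field names are assumptions based on the sample lines, not a parser that covers every row of the file:
import re

# lazy name, optional "count/" prefix (the 12 in 12/1#), numeric size, unit keyword
pattern = re.compile(
    r"^(?P<name>.+?)\s+(?:(?P<count>\d+)\s*/\s*)?(?P<size>[\d.]+)\s*"
    r"(?P<unit>GALLON|GAL|OZ|LB|EA|CS|LF|BTLS|BOX|#)",
    re.IGNORECASE,
)

samples = [
    "SYRUP: DR.PEPPER 5GALLON/BOX",
    "ALL PURPOSE RASPBERRY FIL 40#",
    "BREAD: SOURDOUGH 12/1# OVAL",
    "KETCHUP: 16/14OZ-PLASTIC BTLS",
]
# the first line yields {'name': 'SYRUP: DR.PEPPER', 'count': None, 'size': '5', 'unit': 'GALLON'}
for text in samples:
    m = pattern.search(text)
    if m:
        print(m.groupdict())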