I have a string:
test_string="lots of other html tags ,'https://news.sky.net/upload_files/image/2022/202209_166293.png',and still 'https://news.sky.net/upload_files/image/2022/202209_166293.jpg'"
How can I get both complete URLs out of the string using a Python regex?
I tried:
pattern = 'https://news.sky.net/upload_files/image'
result = re.findall(pattern, test_string)
I can get a list:
['https://news.sky.net/upload_files/image','https://news.sky.net/upload_files/image']
but not the whole URL, so I tried:
pattern = 'https://news.sky.net/upload_files/image...$png'
result = re.findall(pattern, test_string)
But received an empty list.
You could match a minimal number of characters after image up to a . and either png or jpg:
test_string = "lots of other html tags ,'https://news.sky.net/upload_files/image/2022/202209_166293.png',and still 'https://news.sky.net/upload_files/image/2022/202209_166293.jpg'"
pattern = r'https://news.sky.net/upload_files/image.*?\.(?:png|jpg)'
re.findall(pattern, test_string)
Output:
[
'https://news.sky.net/upload_files/image/2022/202209_166293.png',
'https://news.sky.net/upload_files/image/2022/202209_166293.jpg'
]
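One caveat worth noting (not in the original answer): the unescaped dots in news.sky.net are wildcards that can match any character. For a literal prefix, re.escape is safer; a small sketch using the same string:

```python
import re

test_string = ("lots of other html tags ,"
               "'https://news.sky.net/upload_files/image/2022/202209_166293.png',"
               "and still 'https://news.sky.net/upload_files/image/2022/202209_166293.jpg'")
# Escape the literal prefix so its dots do not act as wildcards
prefix = re.escape('https://news.sky.net/upload_files/image')
pattern = prefix + r'.*?\.(?:png|jpg)'
print(re.findall(pattern, test_string))
```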
Assuming you would always expect the URLs to appear inside single quotes, we can use re.findall as follows:
test_string = "lots of other html tags ,'https://news.sky.net/upload_files/image/2022/202209_166293.png',and still 'https://news.sky.net/upload_files/image/2022/202209_166293.jpg'"
urls = re.findall(r"'(https?:\S+?)'", test_string)
print(urls)
This prints:
['https://news.sky.net/upload_files/image/2022/202209_166293.png',
'https://news.sky.net/upload_files/image/2022/202209_166293.jpg']
You could match any URL inside the string by using the regex https?://[^\s']+
Applying this to your code, it would look something like this:
import re
string = "Some string here'https://news.sky.net/upload_files/image/2022/202209_166293.png' And here as well 'https://news.sky.net/upload_files/image/2022/202209_166293.jpg' that's it tho"
res = re.findall(r"https?://[^\s']+", string)
print(res)
This will return a list of the URLs collected from the string:
[
'https://news.sky.net/upload_files/image/2022/202209_166293.png',
'https://news.sky.net/upload_files/image/2022/202209_166293.jpg'
]
Regex explanation:
https?://[^\s']+
https? - matches http with an optional s, so both http and https work
[^\s']+ - any character that is not whitespace or a single quote, one or more times
So this matches http or https, then ://, and then takes every following character up to the first whitespace or single quote.
Hope you find this helpful.
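A related re.findall detail worth knowing (the URLs below are illustrative, not from the question): when a pattern contains capturing groups, such as an optional (s)? group, findall returns the groups as tuples instead of the whole match. A group-free pattern, or a non-capturing (?:...) group, keeps the results as plain strings:

```python
import re

s = "see 'https://example.com/a.png' and 'http://example.com/b.jpg'"
# Capturing groups make findall return a tuple per match
print(re.findall(r"(http(s)?://[^\s']+)", s))
# A pattern without capturing groups returns plain strings
print(re.findall(r"https?://[^\s']+", s))
```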
I have a string, I have to get digits only from that string.
url = "www.mylocalurl.com/edit/1987"
Now from that string, I need to get 1987 only.
I have been trying this approach,
id = [int(i) for i in url.split() if i.isdigit()]
But I am getting [] list only.
You can use regex and get the digit alone in the list.
import re
url = "www.mylocalurl.com/edit/1987"
digit = re.findall(r'\d+', url)
output:
['1987']
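If the string contained more than one run of digits, findall would return all of them, so index the list if only one is wanted (the second number here is made up for illustration):

```python
import re

url = "www.mylocalurl.com/edit/1987/photo/42"
digits = re.findall(r'\d+', url)
print(digits)     # every run of digits in the string
print(digits[0])  # just the first one
```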
Replace all non-digits with blank (effectively "deleting" them):
import re
num = re.sub(r'\D', '', url)
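A minimal sketch of this substitution approach, including the conversion to an integer:

```python
import re

url = "www.mylocalurl.com/edit/1987"
num = re.sub(r'\D', '', url)  # delete every non-digit character
print(num)       # the digits as a string
print(int(num))  # the digits as an integer
```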
You aren't getting anything because, by default, the .split() method splits a string where there are spaces. Since the hyperlink you are trying to split contains no spaces, nothing is split up. What you can do instead is a capture using regex. For example:
import re
url = "www.mylocalurl.com/edit/1987"
regex = r'(\d+)'
numbers = re.search(regex, url)
captured = numbers.groups()[0]
If you do not know what regular expressions are, the code is basically saying: using the regex defined as r'(\d+)', which means capture any run of digits, search through the url. Then captured holds the first captured group, which is 1987.
If you don't want to use this, you can stay with the .split() method but provide '/' as the separator, for example url.split('/').
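A short sketch of that split-based alternative:

```python
url = "www.mylocalurl.com/edit/1987"
parts = url.split('/')
print(parts)      # the path broken into segments
print(parts[-1])  # the last segment, which holds the digits
```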
strings = [
r"C:\Photos\Selfies\1|",
r"C:\HDPhotos\Landscapes\2|",
r"C:\Filters\Pics\12345678|",
r"C:\Filters\Pics2\00000000|",
r"C:\Filters\Pics2\00000000|XAV7"
]
for string in strings:
    matchptrn = re.match(r"(?P<file_path>.*)(?!\d{8})", string)
    if matchptrn:
        print("FILE PATH = " + matchptrn.group('file_path'))
I am trying to get this regular expression with a lookahead to work the way I thought it would. Examples of lookaheads on most websites tend to be pretty basic string matches, i.e. not matching 'bar' if it is preceded by 'foo' as an example of a negative lookbehind.
My goal is to capture in the group file_path the actual file path only if the string does NOT have an 8 character length number in it just before the pipe symbol | and match anything after the pipe symbol in another group (something I haven't implemented here).
So in the above example it should match only the first two strings
C:\Photos\Selfies\1
C:\HDPhotos\Landscapes\2
In case of the last string
C:\Filters\Pics2\00000000|XAV7
I'd like to match C:\Filters\Pics2\00000000 in <file_path> and match XAV7 in another named group.
(This is something I can figure out on my own if I get some help with the negative look ahead)
Currently <file_path> matches everything, which makes sense since (.*) is greedy.
I want it to only capture if the last part of the string before the pipe symbol is NOT an 8 length character.
OUTPUT OF CODE SNIPPET PASTED BELOW
FILE PATH = C:\Photos\Selfies\1|
FILE PATH = C:\HDPhotos\Landscapes\2|
FILE PATH = C:\Filters\Pics\12345678|
FILE PATH = C:\Filters\Pics2\00000000|
FILE PATH = C:\Filters\Pics2\00000000|XAV7
Making this modification, adding \\ before the lookahead:
matchptrn = re.match(r"(?P<file_path>.*)\\(?!\d{8})", string)
if matchptrn:
    print("FILE PATH = " + matchptrn.group('file_path'))
makes things worse, as the output is:
FILE PATH = C:\Photos\Selfies
FILE PATH = C:\HDPhotos\Landscapes
FILE PATH = C:\Filters
FILE PATH = C:\Filters
FILE PATH = C:\Filters
Can someone please explain this as well ?
You can use
^(?!.*\\\d{8}\|$)(?P<file_path>.*)\|(?P<suffix>.*)
Details
^ - start of a string
(?!.*\\\d{8}\|$) - fail the match if the string contains \ followed with eight digits and then | at the end of string
(?P<file_path>.*) - Group "file_path": any zero or more chars other than line break chars as many as possible
\| - a pipe
(?P<suffix>.*) - Group "suffix": the rest of the string, any zero or more chars other than line break chars, as many as possible.
See the Python demo:
import re
strings = [
r"C:\Photos\Selfies\1|",
r"C:\HDPhotos\Landscapes\2|",
r"C:\Filters\Pics\12345678|",
r"C:\Filters\Pics2\00000000|",
r"C:\Filters\Pics2\00000000|XAV7"
]
for string in strings:
    matchptrn = re.match(r"(?!.*\\\d{8}\|$)(?P<file_path>.*)\|(?P<suffix>.*)", string)
    if matchptrn:
        print("FILE PATH = {}, SUFFIX = {}".format(*matchptrn.groups()))
Output:
FILE PATH = C:\Photos\Selfies\1, SUFFIX =
FILE PATH = C:\HDPhotos\Landscapes\2, SUFFIX =
FILE PATH = C:\Filters\Pics2\00000000, SUFFIX = XAV7
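As for why (?P<file_path>.*)\\(?!\d{8}) made things worse: the greedy .* first grabs the whole string, then backtracks until the \\ plus the lookahead can succeed at some earlier backslash, so the match silently shrinks instead of failing. A small sketch of that behaviour:

```python
import re

s = r"C:\Filters\Pics\12345678|"
# .* grabs everything, then gives characters back: the backslash
# before "12345678" fails the (?!\d{8}) check, but the one before
# "Pics" passes, so the group quietly shrinks to C:\Filters.
m = re.match(r"(?P<file_path>.*)\\(?!\d{8})", s)
print(m.group('file_path'))
```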
I have a list of websites which unfortunately look like "rs--google.com--plain". How can I remove 'rs--' and '--plain' from the url? I tried strip() but it didn't remove anything.
The way to remove "rs--" and "--plain" from that url (which is most likely a string) is to use some basic regex on it:
import re
url = 'rs--google.com--plain'
cleaned_url = re.search('rs--(.*)--plain', url).group(1)
print(cleaned_url)
Which prints out:
google.com
What is done here is to use re's search function to check whether anything exists between "rs--" and "--plain"; if it does, the match is captured as group 1, which we then retrieve with .group(1) and assign to our cleaned url:
cleaned_url = re.search('rs--(.*)--plain', url).group(1)
And now we have only "google.com" in cleaned_url.
This assumes "rs--" and "--plain" are always in the url.
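Note that re.search returns None when the pattern is absent, so calling .group(1) directly would raise an AttributeError on such input. A hedged sketch (the helper name is made up) that falls back to the original string:

```python
import re

def clean_url(url):
    # Hypothetical helper: return the middle part when the pattern
    # matches, otherwise return the url unchanged
    m = re.search(r'rs--(.*)--plain', url)
    return m.group(1) if m else url

print(clean_url('rs--google.com--plain'))  # the cleaned url
print(clean_url('google.com'))             # no match, returned unchanged
```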
Updated to handle any letters on either side of --:
import re
url = 'po--google.com--plain'
cleaned_url = re.search('[A-Za-z]+--(.*)--[A-Za-z]+', url).group(1)
print(cleaned_url)
This will handle anything that has letters before -- and after --, and extract only the url in the middle. It checks for one or more letters on either side of --, regardless of how many there are, so it will match any string of the form letters--myurl.com--letters.
A great resource for working on regex is regex101
You can use replace function in python.
>>> val = "rs--google.com--plain"
>>> newval =val.replace("rs--","").replace("--plain","")
>>> newval
'google.com'
I'm very new to python and tried to parse the URL from the following line. How can I get the URL?
application_url: https://hafaf.daff.io
I tried to use split but could not get it to work.
So split works as such:
mystring = "Hello, my name is Sam!"
print(mystring.split('Hello')[1])
That will output:
", my name is Sam!"
What split does is quite literally what it sounds like: it splits a string on a specific string or character.
So to get the url there you'd do the following:
my_url = "application_url: https://hafaf.daff.io".split("application_url: ")[1]
Which would result in the variable my_url being "https://hafaf.daff.io"
Do note the inclusion of the spaces when splitting.
Split breaks a string into a LIST object, which you can then access by index. Splitting on "application_url: " produces ['', 'https://hafaf.daff.io'] (the empty string before the separator sits at position 0), so the URL is at index 1.
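str.partition is another option here; it splits on the first occurrence of the separator and always returns a 3-tuple, which avoids indexing into a list:

```python
line = "application_url: https://hafaf.daff.io"
# partition returns (text before, separator, text after)
label, sep, url = line.partition(': ')
print(url)  # the part after the separator
```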
I am trying to extract the main article from a web page. I can accomplish the main text extraction using Python's readability module. However, the text I get back often contains several &#169 strings (there is a ; at the end of this string, but this editor won't allow the full string to be entered (strange!)). I have tried using the python replace function, I have also tried using regular expression's replace function, and I have also tried using the unicode encode and decode functions. None of these approaches has worked. For the replace and regular expression approaches I just get back my original text with the &#169 strings still present, and with the unicode encode/decode approach I get back the error message:
UnicodeEncodeError: 'ascii' codec can't encode character u'\xa9' in position 2099: ordinal not in range(128)
Here is the code I am using that takes the initial URL and, using readability, extracts the main article. I have left in all my commented-out code corresponding to the different approaches I have tried to remove the &#169 string. It appears as though &#169 is interpreted to be u'\xa9'.
import urllib
import re
from readability.readability import Document

def find_main_article_text_2():
    #url = 'http://finance.yahoo.com/news/questcor-pharmaceuticals-closes-transaction-acquire-130000695.html'
    url = "http://us.rd.yahoo.com/finance/industry/news/latestnews/*http://us.rd.yahoo.com/finance/external/cbsm/SIG=11iiumket/*http://www.marketwatch.com/News/Story/Story.aspx?guid=4D9D3170-CE63-4570-B95B-9B16ABD0391C&siteid=yhoof2"
    html = urllib.urlopen(url).read()
    readable_article = Document(html).summary()
    readable_title = Document(html).short_title()
    #readable_article.replace("u'\xa9'", " ")
    #print re.sub("&#169", '', readable_article)
    #unicodedata.normalize('NFKD', readable_article).encode('ascii', 'ignore')
    print readable_article
    #print readable_article.decode('latin9').encode('utf8'),
    print "There are ", readable_article.count("&#169"), " &#169 's"
    #print readable_article.encode(sys.stdout.encoding, '')
    #sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
    #sents = sent_tokenizer.tokenize(readable_article)
    #new_sents = []
    #for sent in sents:
    #    unicode_sent = sent.decode('utf-8')
    #    s1 = unicode_sent.encode('ascii', 'ignore')
    #    s2 = s1.replace("\n", "")
    #    new_sents.append(s1)
    #print new_sents
    # u'\xa9'
I have a URL that I have been testing the code with included inside the def. If anybody has any ideas on how to remove this &#169 string, I would appreciate the help. Thanks, George
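Since the error message shows u'\xa9' (the copyright sign, whose decimal code is 169), the stray strings are almost certainly the HTML character reference for it. A hedged Python 3 sketch of one way to strip such references (the question's code is Python 2 and the sample text below is invented, so this is illustrative only): first decode the reference to a real character, then drop non-ASCII characters with the 'ignore' error handler:

```python
import html

text = "Main article text. Copyright &#169; 2013 MarketWatch."
# Decode numeric character references (&#169; becomes u'\xa9'),
# then drop any non-ASCII characters that remain
decoded = html.unescape(text)
cleaned = decoded.encode('ascii', 'ignore').decode('ascii')
print(cleaned)
```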