Is there a way to select a certain "link" in a list of links (e.g. link1 - link2 - link3) on a web page and "click" on it, thus downloading the content of that link only?
I would like to select "MAX" and, once it is selected, download the content of "Download data".
I have already created a program using Selenium, but it is too slow for the number of downloads I have to run.
Here is a link to the web page from which I want to extract the data:
https://www.nasdaq.com/market-activity/stocks/clvs/historical
Just use the API on the backend and play around with the dates and limit. I changed the limit to 10000 (it defaults to just 18). Then handle the JSON response however you wish. You don't need BeautifulSoup for this, just requests:
https://api.nasdaq.com/api/quote/CLVS/historical?assetclass=stocks&fromdate=2012-01-17&limit=10000&todate=2022-01-17
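A minimal sketch with requests; the User-Agent header and the JSON key path are assumptions (the Nasdaq API tends to reject clients without a browser-like User-Agent), so verify them against the actual response:

import requests

# Endpoint from above; tweak the dates and limit as needed.
url = ("https://api.nasdaq.com/api/quote/CLVS/historical"
       "?assetclass=stocks&fromdate=2012-01-17&limit=10000&todate=2022-01-17")

# Assumption: a browser-like User-Agent, since bare clients are often blocked.
headers = {"User-Agent": "Mozilla/5.0"}

data = requests.get(url, headers=headers).json()

# Assumption: the rows live under data -> tradesTable -> rows; confirm this
# key path in the live response before relying on it.
rows = data["data"]["tradesTable"]["rows"]
for row in rows[:5]:
    print(row)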
I'm building an Instagram bot using Selenium.
How do I extract the URL of a page using Python?
For example, Selenium is loading a webpage, and I want to extract the URL of that particular page (suppose: https://instagram.com/as80df67s4).
If you still don't understand what I'm talking about, please check the image below, where I have highlighted the page link. How do I extract that link?
From webdriver.py:
@property
def current_url(self):
    """
    Gets the URL of the current page.

    :Usage:
        driver.current_url
    """
    return self.execute(Command.GET_CURRENT_URL)['value']
This means that in order to get the current URL you can use:
your_url = driver.current_url
But first you need the page to be open.
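A minimal sketch putting it together (the example URL is the one from the question):

from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://instagram.com/as80df67s4')
# After any redirects, current_url holds the address the browser ended up on.
your_url = driver.current_url
print(your_url)
driver.quit()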
Using BeautifulSoup, it's easy to fetch URLs that follow a certain numeric order. But how do I fetch URLs when they are organized differently, as on https://mongolia.mid.ru/en_US/novosti, which has articles like
https://mongolia.mid.ru/en_US/novosti/-/asset_publisher/hfCjAfLBKGW0/content/24-avgusta-sostoalas-vstreca-crezvycajnogo-i-polnomocnogo-posla-rossijskoj-federacii-v-mongolii-i-k-azizova-s-ministrom-energetiki-mongolii-n-tavinbeh?inheritRedirect=false&redirect=https%3A%2F%2Fmongolia.mid.ru%3A443%2Fen_US%2Fnovosti%3Fp_p_id%3D101_INSTANCE_hfCjAfLBKGW0%26p_p_lifecycle%3D0%26p_p_state%3Dnormal%26p_p_mode%3Dview%26p_p_col_id%3Dcolumn-1%26p_p_col_count%3D1?
Websites such as these are awkward because when you first open the link, there is a » Бусад мэдээ ("Other news") button to go to the next page of articles, but once you click it, you get Previous and Next buttons instead, which is quite disorganized.
How do I fetch all the news articles from websites like these (https://mongolia.mid.ru/en_US/novosti or https://mongolia.mid.ru/ru_RU/)?
It seems that the » Бусад мэдээ button on https://mongolia.mid.ru/ru_RU/ just redirects to https://mongolia.mid.ru/en_US/novosti, so why not start from the latter?
To scrape all the news, just go page by page using the link from the Next button.
If you want it to be more programmatic, just compare the query parameters between pages and you'll see that _101_INSTANCE_hfCjAfLBKGW0_cur is set to the current page number (starting from 1), as in the sketch below.
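A hedged sketch of that approach: the portlet parameters are copied from the URL above, but the link selector is a placeholder you would need to confirm by inspecting the page markup.

import requests
from bs4 import BeautifulSoup

base = 'https://mongolia.mid.ru/en_US/novosti'

for page in range(1, 6):  # first five pages, as an example
    params = {
        'p_p_id': '101_INSTANCE_hfCjAfLBKGW0',
        'p_p_lifecycle': '0',
        '_101_INSTANCE_hfCjAfLBKGW0_cur': page,  # the page-number parameter
    }
    resp = requests.get(base, params=params)
    soup = BeautifulSoup(resp.text, 'html.parser')
    # Placeholder selector: adjust to the actual markup of the article list.
    for a in soup.select('a.asset-link'):
        print(a.get('href'))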
Sorry for bothering you with my request. I have started to get acquainted with web scraping using the BeautifulSoup library. Because I have to download some data from the OECD's websites, I wanted to try some web-scraping approaches. More specifically, I wanted to download a .csv file from the following page:
https://goingdigital.oecd.org/en/indicator/50/
As you can see, the data can easily be downloaded by clicking on 'Download data'. However, because I will have to deal with a recursive download in a loop, I tried to download it directly from the Python console. Therefore, by inspecting the page, I found the download URL, which I have reported in the following picture:
Hence, I wrote the following code:
from bs4 import BeautifulSoup
from requests import get

url = 'https://goingdigital.oecd.org/en/indicator/50/'
response = get(url)
print(response.text[:500])

html_soup = BeautifulSoup(response.text, 'html.parser')
print(type(html_soup))

containers = html_soup.find_all('div', {'class': 'css-cqestz e12cimw51'})
print(type(containers))
print(len(containers))

d = []
for a in containers[0].find_all('a', href=True):
    print(a['href'])
    d.append(a['href'])
The object containers is composed of three elements, since there are three divs with the specified class. The first one (the one I selected in the loop) should be the one containing the URL I am interested in. However, I get no result. Conversely, when I select the third element of containers, I get the following output:
https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgoingdigital.oecd.org%2Fen%2Findicator%2F50%2F
https://twitter.com/intent/tweet?text=OECD%20Going%20Digital%20Toolkit&url=https%3A%2F%2Fgoingdigital.oecd.org%2Fen%2Findicator%2F50%2F
https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgoingdigital.oecd.org%2Fen%2Findicator%2F50%2F
mailto:?subject=OECD%20Going%20Digital%20Toolkit%3A%20Percentage%20of%20individuals%20aged%2055-74%20using%20the%20Internet&body=Percentage%20of%20individuals%20aged%2055-74%20using%20the%20Internet%0A%0Ahttps%3A%2F%2Fgoingdigital.oecd.org%2Fen%2Findicator%2F50%2F
By the way, I guess this download could be related to the following thread. Thank you in advance!
When you pull data from a website, you should first check whether the content you are looking for is in the page source. If it's not in the page source, you should try web scraping with Selenium.
When I examined the site you mentioned, I could not see the link in the page source, which shows that the link you want is created dynamically on this page.
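A minimal Selenium sketch, assuming the 'Download data' control can be located by its visible text; the XPath below is an assumption, not a selector verified against the live page.

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://goingdigital.oecd.org/en/indicator/50/')
driver.implicitly_wait(10)  # give the JavaScript time to render the link

# Hypothetical locator: adjust it after inspecting the rendered DOM.
button = driver.find_element(By.XPATH, "//*[contains(text(), 'Download data')]")
button.click()

time.sleep(5)  # crude wait so the download can finish before quitting
driver.quit()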
I am trying to scrape data from a website with Beautiful Soup, but to scrape all the content I have to click a button
<button class="show-more">view all 102 items</button>
to load every item. I have heard that it could be done with Selenium, but that means I would have to open a browser with the script and then scrape the data. Are there any other ways to solve this problem?
You can use the same API endpoint the page does, which returns all the info in JSON form. Set the record count higher than the total expected number. I show how to parse the album titles/URLs out of the JSON. You can explore the response here. You can find this endpoint in the browser's network tab when refreshing the URL you supplied.
import requests

# Ask for more records than the collection contains so everything
# comes back in a single response.
data = {"fan_id": 1812622,
        "older_than_token": "1557167238:2897209009:a::",
        "count": 1000}
r = requests.post('https://bandcamp.com/api/fancollection/1/wishlist_items',
                  json=data).json()

# Pull the album title and URL out of each wishlist item.
details = [(item['album_title'], item['item_url']) for item in r['items']]
print(details)
I have used Python Scrapy to extract data from a website, and I am now able to scrape most of the details of a site with it. But my main problem is that I am not able to extract all the reviews of the products on the site. I can only extract the top 4 reviews that are displayed on the page; to get the other reviews I have to go to a pop-up window that contains all of them. I looked for an 'href' for the pop-up window but I am not able to find one. This is the link I tried to scrape; the reviews and ratings are at the bottom of the page: https://www.coursera.org/learn/big-data-introduction
Can anyone help me by explaining how to extract the reviews from this pop-up window? Another thing to note is that the pop-up has infinite scrolling.
Thanks in advance.
Scrapy, unlike tools like Selenium and PhantomJS, does not drive a full web browser in the background. You cannot just click a button.
You need to understand what the button does (e.g. does it simply submit a form? Does it do something with JavaScript? Etc.) and reproduce the functionality in your own code.
For example, you might need to read the content of a script element, apply regular expressions to it to pull a URL from a string literal, then make a new HTTP request to that URL, then pull the data you want from the new DOM.
... and then repeat for the next “page” of the infinite scroll.
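In practice an infinite scroll is usually backed by a paginated endpoint, so reproducing it by hand looks something like this sketch; the URL, parameter names, and JSON keys are placeholders, not Coursera's real API.

import requests

url = 'https://example.com/api/reviews'  # hypothetical endpoint
page = 1
reviews = []

while True:
    resp = requests.get(url, params={'page': page, 'limit': 100})
    batch = resp.json().get('reviews', [])  # placeholder key
    if not batch:
        break  # no more pages: the "scroll" is exhausted
    reviews.extend(batch)
    page += 1

print(len(reviews))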