Logging into a website and then getting asp data in BeautifulSoup4 in Python3 - python-3.x

I have to do the following tasks:
a. Log in to a router GUI with a user and password.
b. You'll land on http://192.168.10.1/index.asp
c. Navigate to http://192.168.10.1/html/common/getneighbourAPinfo.asp and save all the data to a JSON or text file.
I can go directly to the link in point "c" if I have already logged into the router in another tab; otherwise it doesn't go ahead.
Is there any way to start a logged-in session, then go to the link and get the data, all from the terminal?
Currently I am using PyAutoGUI to get the data via the GUI.
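One way to avoid driving the GUI is to keep a single requests.Session so the login cookie carries over to the second URL. This is only a minimal sketch: the login endpoint (/login.cgi) and the form field names (Username/Password) are assumptions and would need to be checked against the actual login form in the router's page (e.g. via the browser's network tab).

```python
import requests
from bs4 import BeautifulSoup

BASE = "http://192.168.10.1"

session = requests.Session()

# Assumed login endpoint and form field names -- inspect the router's
# login page / network tab and adjust these to match your device.
login_payload = {"Username": "admin", "Password": "secret"}
resp = session.post(f"{BASE}/login.cgi", data=login_payload)
resp.raise_for_status()

# The session now carries the auth cookie, so the .asp page should load
# the same way it does in a second browser tab.
resp = session.get(f"{BASE}/html/common/getneighbourAPinfo.asp")
resp.raise_for_status()

# Parse whatever the page returns; here we just dump the text, but
# BeautifulSoup can pull out specific elements if it is HTML.
soup = BeautifulSoup(resp.text, "html.parser")
with open("neighbour_ap_info.txt", "w") as f:
    f.write(soup.get_text())
```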

Related

Python Selenium: redirect to another page after logging in to a webpage

I am using Selenium to automate form filling. For this, I have to log in first, then navigate to another page (or browse through the web portal's section-wise options and then search from a list of IDs). I am able to log in successfully but unable to change the URL to the page I want to browse to and land on.
Is there a way to browse and navigate through the section tabs after the login page is opened? Browsing might be difficult (the target is deep inside nested sections), which is why I am trying to pass the URL of the required ID directly.
When I use
WebDriverWait(driver,DELAY)
driver.get(desired_URL)
to navigate after login, it makes me end up on the homepage from before logging in. I also tried another method:
WebDriverWait(driver,DELAY).until(driver.get(desired_URL))
This gives the error TypeError: 'NoneType' object is not callable. Nothing has worked for me, but this should not happen: after a successful login, it should navigate to the page I specify in driver.get(desired_URL). I know that in JavaScript there is an option like driver.navigate().to(<desiredURL>), but I am using Python for my automation.
I would be really grateful for any leads in solving my problem.
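For what it's worth, WebDriverWait.until() expects a callable, and driver.get(desired_URL) returns None, which explains the TypeError. A common pattern is to wait for an element that only exists after login and then navigate. Below is a minimal sketch under that assumption; the URLs and the "logout-link" locator are placeholders that would have to match the real post-login page.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

DELAY = 10
desired_URL = "https://example.com/section/required-id"  # placeholder

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder login page

# ... fill in the username/password fields and submit the form here ...

# Wait until something that only exists after a successful login is
# present; the locator is a placeholder for a real post-login element.
WebDriverWait(driver, DELAY).until(
    EC.presence_of_element_located((By.ID, "logout-link"))
)

# Only now jump straight to the deep link; the session cookies set by
# the login are kept by the same driver instance.
driver.get(desired_URL)
```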

Scraping a LinkedIn profile to get a user's email using Node.js, Puppeteer, and scrapedin

I'm working on a project that requires me to scrape public LinkedIn profiles for some basic information like email, name, company, title, and photo.
I wrote it in Node.js, using Puppeteer and a library called scrapedin.
This library requires me to log in to LinkedIn with an e-mail and password, so I created a dummy LinkedIn account for it. It worked fine on localhost, but once I uploaded it to the server I had to use cookies to log in.
Here is the problem: after maybe 30 minutes LinkedIn restricted the account and I can't use it anymore!
How can I solve this problem?
Is there another library for LinkedIn scraping I can use?
Thank you.
LinkedIn likely restricted your account because they don't want bot activity on their website. You could try to make your script act more human, for example by inserting random wait times between actions using setTimeout(). It could also help to spend some time creating a new, more human-like dummy account that is harder to detect as a bot, for example by uploading a profile picture and writing some text.
You could also make the script perform some human actions, like pressing random like buttons between the scraping actions.

Is it possible to prompt the user with simple HTML/CSS from Node.js and return data on submission?

One way is to start a whole server, open a localhost page, and wait for a response, but I was wondering whether it's possible to create a simple popup, read the input, and send it back, etc.
Scenario: a web scraper that backs up Hacker News comments. You enter credentials, it logs in, scrapes, and saves them. All of this happens locally; nothing online, nothing public. Right now the user would have to enter everything in the terminal/cmd, which is not a great experience for non-devs.
Or let's say there is some Node script that needs human input to quickly categorize data: you type node app.js, it runs, opens a popup, you select settings, click next, next, and finally submit, and the data gets returned to the Node script.

Set redirect URL for Spotify API

I know there have been multiple questions regarding this issue... however, I'm not sure how to handle my case.
I am using spotipy to access the Spotify API. In my Python notebook, I entered:
util.prompt_for_user_token('<user_id>',client_id='<client_id>',client_secret='<client_secret>',redirect_uri='localhost:3000/callback/')
On the Spotify developer website, I have listed localhost:3000/callback/ as my redirect URI.
When I run the prompt, I am redirected to the Spotify page where I would click 'okay' to authorize the account. However, each time I click the 'okay' button, nothing happens. I tried using a separate browser and restarting my computer... I'm not sure what to do.
Thank you!
After being redirected, the library should prompt you to copy the URL you're redirected to and paste it back into your Python notebook. It then grabs the access token from the URL and uses it to authenticate.
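A minimal sketch of how that flow usually looks with spotipy's util helper. Note the redirect URI includes the http:// scheme and must match the dashboard entry character for character; the scope shown is just an example:

```python
import spotipy
import spotipy.util as util

# The redirect URI must match the one registered in the Spotify
# developer dashboard exactly, including the scheme and trailing slash.
token = util.prompt_for_user_token(
    "<user_id>",
    scope="user-library-read",          # example scope
    client_id="<client_id>",
    client_secret="<client_secret>",
    redirect_uri="http://localhost:3000/callback/",
)

# After authorizing in the browser, paste the URL you were redirected
# to back into the prompt; spotipy extracts the access token from it.
sp = spotipy.Spotify(auth=token)
print(sp.current_user()["id"])
```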

How do I output someone's Steam 64 ID when they click on a button?

I'm trying to create a program where you log in with Steam and there is a button; when it's clicked, the following happens:
If you are not logged in, it takes you to the login.
If you are logged in, it writes the user's Steam64ID to a txt file somewhere in the site folder.
I have read the Steam Web API documentation but I don't know how I can output someone's Steam64ID with a button click. Can someone please help?
You could use this API to log in the user; it returns their Steam64ID. You can then store this (perhaps as a cookie) and output it when the button is clicked (assuming you use PHP for the cookies, you could use Ajax for this).
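The answer assumes PHP, but the general idea is stack-agnostic: Steam's OpenID login redirects back with an openid.claimed_id parameter whose last path segment is the Steam64 ID. A rough sketch of that extraction in Python/Flask (a substitute for the PHP flow the answer describes; the route name is a placeholder, and a real implementation must verify the OpenID assertion with Steam before trusting the value):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/steam/return")          # placeholder callback route
def steam_return():
    # Steam's OpenID provider sends the user back with a claimed_id of
    # the form https://steamcommunity.com/openid/id/<steam64id>.
    claimed_id = request.args.get("openid.claimed_id", "")
    steam64_id = claimed_id.rsplit("/", 1)[-1]

    # NOTE: verify the assertion with Steam (check_authentication)
    # before trusting this value in any real deployment.
    with open("steam_ids.txt", "a") as f:
        f.write(steam64_id + "\n")
    return f"Logged in as {steam64_id}"

if __name__ == "__main__":
    app.run(port=3000)
```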
