Why doesn't my default browser open? Microsoft Edge opens instead - python-3.x

If I pass a full web address like https://gaana.com,
it opens in my default browser, Chrome.
But when I pass gaana.com, the page opens fine, only in Microsoft Edge.
Any idea why the browser changes?
import webbrowser

alink = input('Enter the name: ')
new = 2  # new=2 asks webbrowser to open a new tab, if possible
webbrowser.open(alink, new=new)
There is no error as such, but the browser changes from the default (Chrome) to Microsoft Edge.

That's most likely because your system doesn't know how to open the URL properly when no protocol is specified. Why don't you just prepend the protocol when the user hasn't passed one?
Example:
import webbrowser

alink = input('Enter the name: ')
new = 2

# The protocol always comes before "://"
temp = alink.split('://')
if len(temp) == 1:  # no protocol was specified
    # let's use HTTP then
    alink = "http://" + alink
webbrowser.open(alink, new=new)
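An alternative sketch using urlparse from the standard library, which avoids the manual split (my suggestion, not part of the original answer):

from urllib.parse import urlparse
import webbrowser

alink = input('Enter the name: ')
# urlparse('gaana.com').scheme is '' while
# urlparse('https://gaana.com').scheme is 'https'
if not urlparse(alink).scheme:
    alink = 'http://' + alink
webbrowser.open(alink, new=2)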

Related

Attempting to open browser with Specific Profile in Incognito Mode

Attempting to open a browser with a specific profile in incognito mode.
After the browser is opened, I need to verify that the URL opened the proper page.
I am using the following code:
import subprocess

browserName = set()
profile_name = 'Person 1'
rpa_string = 'https://www.google.ca'
path_br_ch = '/Google/Chrome/Application/chrome.exe'
browserName.add("chrome")
browser_profile = f"--profile-directory={profile_name}"
browser_path = path_br_ch
browser_privacy = "--incognito"
# Launch Chrome with the chosen profile, in incognito mode
rpa_command = [browser_path, browser_privacy, browser_profile, rpa_string]
browser_process = subprocess.Popen(rpa_command)
# Next step: need to verify that the Google page is opened
# Please help
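A hedged sketch of one way to do the verification (my suggestion, not from the thread): subprocess.Popen gives no handle on the page itself, but Selenium can launch Chrome with the same flags and then expose the loaded URL and title. Assumes selenium and a matching chromedriver are installed:

from selenium import webdriver

profile_name = 'Person 1'
rpa_string = 'https://www.google.ca'

options = webdriver.ChromeOptions()
options.add_argument('--incognito')
options.add_argument(f'--profile-directory={profile_name}')

driver = webdriver.Chrome(options=options)
driver.get(rpa_string)

# Verify the loaded page via its URL and/or title
assert driver.current_url.startswith('https://www.google.ca')
assert 'Google' in driver.title

driver.quit()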

Login to a website then open it in browser

I am trying to write Python 3 code that logs in to a website and then opens it in a web browser so that I can take a screenshot of it.
Looking online, I found that I could do webbrowser.open('example.com').
This opens the website, but it cannot log in.
Then I found that it is possible to log in to a website using the requests library, or urllib.
But the problem with both is that they do not seem to provide the option of opening a web page.
So how is it possible to log in to a web page and then display it, so that a screenshot of that page can be taken?
Thanks
Have you considered Selenium? It drives a browser natively, as a user would, and its Python client is pretty easy to use.
Here is one of my latest works with Selenium: a script that scrapes multiple pages from a certain website and saves their data into a CSV file:
import os
import time
import csv
from selenium import webdriver

cols = [
    'ies', 'campus', 'curso', 'grau_turno', 'modalidade',
    'classificacao', 'nome', 'inscricao', 'nota'
]
codigos = [
    96518, 96519, 96520, 96521, 96522, 96523, 96524, 96525, 96527, 96528
]

if not os.path.exists('arquivos_csv'):
    os.makedirs('arquivos_csv')

options = webdriver.ChromeOptions()
prefs = {
    'profile.default_content_setting_values.automatic_downloads': 1,
    'profile.managed_default_content_settings.images': 2
}
options.add_experimental_option('prefs', prefs)

# Here you choose a webdriver ("the browser")
browser = webdriver.Chrome('chromedriver', chrome_options=options)

for codigo in codigos:
    time.sleep(0.1)
    # Here is where I set the URL
    browser.get(f'http://www.sisu.mec.gov.br/selecionados?co_oferta={codigo}')
    with open('arquivos_csv/sisu_resultados_usp_final.csv', 'a') as file:
        dw = csv.DictWriter(file, fieldnames=cols, lineterminator='\n')
        dw.writeheader()
        ies = browser.find_element_by_xpath('//div[@class="nome_ies_p"]').text.strip()
        campus = browser.find_element_by_xpath('//div[@class="nome_campus_p"]').text.strip()
        curso = browser.find_element_by_xpath('//div[@class="nome_curso_p"]').text.strip()
        grau_turno = browser.find_element_by_xpath('//div[@class="grau_turno_p"]').text.strip()
        tabelas = browser.find_elements_by_xpath('//table[@class="resultado_selecionados"]')
        for t in tabelas:
            modalidade = t.find_element_by_xpath('tbody//tr//th[@colspan="4"]').text.strip()
            aprovados = t.find_elements_by_xpath('tbody//tr')
            for a in aprovados[2:]:
                linha = a.find_elements_by_class_name('no_candidato')
                classificacao = linha[0].text.strip()
                nome = linha[1].text.strip()
                inscricao = linha[2].text.strip()
                nota = linha[3].text.strip().replace(',', '.')
                dw.writerow({
                    'ies': ies, 'campus': campus, 'curso': curso,
                    'grau_turno': grau_turno, 'modalidade': modalidade,
                    'classificacao': classificacao, 'nome': nome,
                    'inscricao': inscricao, 'nota': nota
                })

browser.quit()
In short, you set preferences, choose a webdriver (I recommend Chrome), point it at the URL, and that's it. The browser opens automatically and starts executing your instructions.
I have tested using it to log in and it works fine, but I have never tried to take a screenshot. In theory it should work.
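A minimal sketch of the login-then-screenshot flow the question asks about; the URL and the form field names ('username', 'password', 'submit') are placeholders, not from any real site:

from selenium import webdriver

browser = webdriver.Chrome('chromedriver')
browser.get('https://example.com/login')  # placeholder login URL

# Fill in and submit the login form (hypothetical field names)
browser.find_element_by_name('username').send_keys('my_user')
browser.find_element_by_name('password').send_keys('my_pass')
browser.find_element_by_name('submit').click()

# Selenium can capture the rendered page directly
browser.save_screenshot('after_login.png')
browser.quit()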

OpenCV can't use VideoCapture with URL

I'm trying to open a video from a Google Cloud Storage URL and process it in a cloud function; the file is publicly available. But v.read() returns None.
Sample video URL: https://storage.googleapis.com/dev-brdu1976/268.mov
import cv2

v = cv2.VideoCapture(request.json['Source_Storage_Path'])
print(v)
frameNum = -1
while True:
    ret_value, frame = v.read()
    if not ret_value or frame is None:
        print('Frame is None')
        break
    frameNum += 1
    # do stuff
I figured out how to make it work, though I didn't dig into the specifics: the video doesn't open when requested over https, but it did work once I changed the URL scheme to http.
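A sketch of that workaround, downgrading the scheme before handing the URL to OpenCV (variable names follow the question's code; 'request' is the cloud function's request object):

import cv2

url = request.json['Source_Storage_Path']
if url.startswith('https://'):
    url = 'http://' + url[len('https://'):]
v = cv2.VideoCapture(url)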

Google Slides API: no "client_secret.json"

I'm new to the Google Slides API and am trying to build a slide deck for daily news headlines by replacing image and text placeholders (for reference, see https://www.youtube.com/watch?v=8LSUbKZq4ZY and http://wescpy.blogspot.com/2016/11/using-google-slides-api-with-python.html).
But when I try to run my modified program, I get an error message saying that no file or directory called "client_secret.json" exists (it is referenced in the API tutorial's code). The tutorial code is from two years ago, so I'm not sure whether there have been any updates to the Google Slides API, but I'd really appreciate help navigating this issue. Below is my code (note: scrapedlist is a list of dictionaries, each containing a value for the keys "headline" and "imgURL").
from __future__ import print_function
from apiclient import discovery
from httplib2 import Http
from oauth2client import file, client, tools
from datetime import date
from scrapef2 import scrape

scrapedlist = scrape()

TMPLFILE = 'CrimsonTemplate'  # use your own!
SCOPES = (
    'https://www.googleapis.com/auth/drive',
    'https://www.googleapis.com/auth/presentations',
)
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
    flow = client.flow_from_clientsecrets('client_secret.json', SCOPES)
    creds = tools.run_flow(flow, store)
HTTP = creds.authorize(Http())
DRIVE = discovery.build('drive', 'v3', http=HTTP)
SLIDES = discovery.build('slides', 'v1', http=HTTP)

rsp = DRIVE.files().list(q="name='%s'" % TMPLFILE).execute().get('files')[0]
DATA = {'name': '[DN] ' + str(date.today())}
print('** Copying template %r as %r' % (rsp['name'], DATA['name']))
DECK_ID = DRIVE.files().copy(body=DATA, fileId=rsp['id']).execute().get('id')  # TODO: how to copy into a specific folder

for i in range(3):
    print('** Get slide objects, search for image placeholder')
    slide = SLIDES.presentations().get(presentationId=DECK_ID,
                                       fields='slides').execute().get('slides')[i]
    obj = None
    for obj in slide['pageElements']:
        if obj['shape']['shapeType'] == 'RECTANGLE':
            break
    print('** Replacing placeholder text and icon')
    reqs = [
        {'replaceAllText': {
            'containsText': {'text': '{{Headline}}'},
            'replaceText': scrapedlist[i]["headline"]
        }},
        {'createImage': {
            'url': scrapedlist[i]["imgURL"],
            'elementProperties': {
                'pageObjectId': slide['objectId'],
                'size': obj['size'],
                'transform': obj['transform'],
            }
        }},
        {'deleteObject': {'objectId': obj['objectId']}},
    ]
    SLIDES.presentations().batchUpdate(body={'requests': reqs},
                                       presentationId=DECK_ID).execute()
print('DONE')
I have never used the Python Google API, but the error indicates that you don't have your 'client_secret.json' file, or that it is in the wrong place.
Scenario 1 - you don't have the 'client_secret.json' file
This file is used by the API to automatically verify that you are you. With it, all API calls are made on your behalf. To get this file:
1. Go to the Google API console.
2. Open your project (or create a new one).
3. Click "Enable APIs and services" to find and enable the Google Slides API.
4. Click "Credentials" in the left menu, then "Create credentials" -> "OAuth client ID".
5. Choose "Web application" and accept all the dialogs.
6. You should now see the new credentials in the list. Click on them and there will be a button in the top menu named "Download JSON"; that gives you your credentials (which, as the name suggests, are secret, so keep them somewhere safe).
Scenario 2 - your 'client_secret.json' file is in the wrong place
In this case I can't be very helpful; just try to inspect the library to find out where it looks for the file and put it there (the library directory, the project root directory - hard to tell). One likely fix is sketched below.
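A hedged note on Scenario 2: oauth2client's flow_from_clientsecrets simply opens the path it is given, so a bare 'client_secret.json' is resolved against the current working directory, not the script's directory. Anchoring the path to the script makes the lookup deterministic (client and SCOPES as in the question's code):

import os

# Resolve client_secret.json next to this script instead of relying
# on whatever the current working directory happens to be.
CLIENT_SECRET = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                             'client_secret.json')
flow = client.flow_from_clientsecrets(CLIENT_SECRET, SCOPES)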
Let me know if it worked, as Google APIs and their libraries sometimes act unexpectedly.

Python 3.6.3 - Send MozillaCookieJar File and read HTML source code

I'm very new to Python (I've been learning for only about a day).
I need to send cookies (I exported them from my Google Chrome browser to a *.txt file) and be redirected after login to my account page, so that I can then read the page's HTML source and do what I want to do. After much searching around the internet, I already have this piece of code:
import os
import time
import urllib.request
import http.cookiejar

while 1:
    cj = http.cookiejar.MozillaCookieJar('cookies.txt')
    cj.load()
    print(len(cj))  # output: 9
    print(cj)  # output: <MozillaCookieJar[<Cookie .../>, <Cookie .../>, ... , <Cookie .../>]>
    # push every cookie's expiry two weeks into the future
    for cookie in cj:
        cookie.expires = time.time() + 14 * 24 * 3600
    cookieProcessor = urllib.request.HTTPCookieProcessor(cj)
    opener = urllib.request.build_opener(cookieProcessor)
    request = urllib.request.Request(url='https://.../')
    response = opener.open(request, timeout=100)
    s = str(response.read(), 'utf-8')
    print(s)
    if 'class' in s:
        os.startfile('test.mp3')
    time.sleep(5)
With this code I believe (I hope I'm not mistaken) that I am sending the cookies correctly. My main question is: how can I wait for and capture the HTML source code after the server redirects my login to my personal page? I can't call my Request again with the same URL.
Thank you in advance.
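One relevant detail, sketched below under the assumption that the login endpoint answers with an ordinary HTTP redirect: urllib openers follow redirects automatically (HTTPRedirectHandler is part of build_opener's default handler chain), so by the time opener.open() returns, the response already belongs to the final page (opener and request as in the code above):

response = opener.open(request, timeout=100)
print(response.geturl())              # the final URL after any redirects
html = str(response.read(), 'utf-8')  # HTML source of that final page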
