Selenium Python error DesiredCapabilities - python-3.x

Some of my bots started raising this error today:
selenium.common.exceptions.WebDriverException: Message: unknown error: cannot find dict 'desiredCapabilities'
(Driver info: chromedriver=2.24.417431 (9aea000394714d2fbb20850021f6204f2256b9cf),platform=Windows NT 10.0.22000 x86_64)
I've already tried some possibilities that I read about in forums and in the Selenium documentation. For example:
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

get = "http://198.0.0.1:4444/wd/hub"
# ...

capabilities = DesiredCapabilities.CHROME.copy()  # CHROME is a plain dict, not a callable
capabilities['platform'] = "WINDOWS"
capabilities['version'] = "10"

driver = webdriver.Remote(desired_capabilities=capabilities,
                          command_executor=get)
This problem occurs even in simpler code like:
from selenium import webdriver
driver = webdriver.Chrome(r'path\to\chromedriver.exe')  # path to the local chromedriver
driver.get('https://stackoverflow.com')
Does anyone know any documentation or solution for this problem?

Related

Error about Selenium version using Python's webbot

I'm using Python 3.9 on macOS. I'm trying to start using webbot, but every time I try, I get this error:
selenium.common.exceptions.SessionNotCreatedException: Message: session not created
exception: Missing or invalid capabilities
(Driver info: chromedriver=2.39.562713
(dd642283e958a93ebf6891600db055f1f1b4f3b2),platform=Mac OS X 10.14.6 x86_64)
I'm using macOS version 10.14 because I use 32-bit software. The part that really puzzles me is why it says chromedriver=2.39.562713. According to pip, the driver's version is 103.0.5060.53. If I import selenium and try the command help(selenium), towards the end of the output, I get:
VERSION
4.3.0
Where does this lower version come from? I'm pretty sure that's why I have "missing or invalid capabilities." If I start selenium with:
from selenium import webdriver
driver = webdriver.Chrome()
It starts Chrome as expected. Obviously I'm missing something.
I used to start webbot with:
from webbot import Browser
driver = Browser()
But then, just to be sure, I changed it to:
from webbot import Browser
driver = Browser(True, None, '/usr/local/bin/')
'/usr/local/bin/' being the location of a chromedriver installed by brew that is explicitly version 103. No difference.
Solution
The accepted answer was not the solution by itself, but it led me to the solution.
My version of webbot is the latest, but it has a very different __init__ method:
def __init__(self, showWindow=True, proxy=None , downloadPath:str=None):
Upon further inspection, I saw that the driverPath parameter (which I had tried to use earlier) was gone entirely, by design. So I printed the value of the internal variable driverPath inside the __init__ method. It returned the following:
project_root/virtualenvironment/lib/python3.9/site-packages/webbot/drivers/chrome_mac
There was my culprit! I renamed that executable and put a symbolic link to the correct binary in its place. That worked.
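For reference, a minimal sketch of that rename-and-symlink workaround (both paths are illustrative; substitute your own virtualenv and Homebrew locations):
import os

bundled = 'project_root/virtualenvironment/lib/python3.9/site-packages/webbot/drivers/chrome_mac'  # path webbot resolves internally
brew_driver = '/usr/local/bin/chromedriver'  # the up-to-date driver installed by brew

os.rename(bundled, bundled + '.bak')  # keep the outdated binary as a backup
os.symlink(brew_driver, bundled)      # webbot now launches the version-103 driver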
driver = Browser(True, None, '/usr/local/bin/')
actually sets the downloadPath, not the driverPath. Use the parameter name explicitly:
driver = Browser(driverPath='/usr/local/bin/')
From webbot.py
class Browser:
    def __init__(self, showWindow=True, proxy=None, downloadPath:str=None, driverPath:str=None, arguments=["--disable-dev-shm-usage", "--no-sandbox"]):
        if driverPath is not None and isinstance(driverPath, str):
            driverPath = os.path.abspath(driverPath)
            if(not os.path.isdir(driverPath)):
                raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), driverPath)
        if driverPath is None:
            driverfilename = ''
            if sys.platform == 'linux' or sys.platform == 'linux2':
                driverfilename = 'chrome_linux'
            elif sys.platform == 'win32':
                driverfilename = 'chrome_windows.exe'
            elif sys.platform == 'darwin':
                driverfilename = 'chrome_mac'
            driverPath = os.path.join(os.path.split(__file__)[0], 'drivers{0}{1}'.format(os.path.sep, driverfilename))
        self.driver = webdriver.Chrome(executable_path=driverPath, options=options)
If driverPath is None, it falls back to
{parent_folder_abs_path}/drivers/chrome_mac (or the matching file for your platform under {parent_folder_abs_path}/drivers/); I'm guessing you have an older chromedriver version sitting there.
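A quick way to confirm which binary webbot will fall back to, and which chromedriver version it reports, is to resolve that path yourself (a small sketch, assuming the drivers/ layout shown above):
import os
import subprocess

import webbot

# Resolve the bundled driver the same way webbot's __init__ does on macOS.
bundled = os.path.join(os.path.dirname(webbot.__file__), 'drivers', 'chrome_mac')
print(bundled)
print(subprocess.run([bundled, '--version'], capture_output=True, text=True).stdout)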

Translating Chrome Page using Selenium is not working

I'm trying to translate a webpage from Japanese to English. Below is a code snippet that used to work two weeks ago and is not working anymore.
# Initializing Chrome reference
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

chrome_path = Service(r"C:\Python\Anaconda\chromedriver.exe")
custom_options = webdriver.ChromeOptions()
prefs = {
    "translate_whitelists": {"ja": "en"},
    "translate": {"enabled": "True"}
}
custom_options.add_experimental_option('prefs', prefs)
driver = webdriver.Chrome(service=chrome_path, options=custom_options)
I've also tried
custom_options.add_argument('--lang=en')
driver = webdriver.Chrome(service = chrome_path, options=custom_options)
None of these seems to work anymore, and no other changes have been made to this part of the code.
The website I'm trying to translate is https://media.japanmetaldaily.com/market/list/
Any help is greatly appreciated.
I can't get that to work via Selenium/chromedriver either... You could try using a hidden Google Translate API: https://danpetrov.xyz/programming/2021/12/30/telegram-google-translate.html
I've put it into some Python code that translates that table for you, using that hidden API with source language "ja" and target language "en":
import urllib.parse

import requests
from bs4 import BeautifulSoup

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36'
}
endpoint = "https://translate.googleapis.com/translate_a/single?client=gtx&sl=ja&tl=en&dt=t&ie=UTF-8&oe=UTF-8&otf=1&ssel=0&tsel=0&kc=7&dt=at&dt=bd&dt=ex&dt=ld&dt=md&dt=qca&dt=rw&dt=rm&dt=ss&q="

jap_page = requests.get('https://media.japanmetaldaily.com/market/list/', headers=headers)
soup = BeautifulSoup(jap_page.text, 'html.parser')
table = soup.find('table', class_='rateListTable')

for line in table.find_all('a'):
    # Print "<original Japanese> = <English translation>" for each linked row label.
    text = urllib.parse.quote_plus(line.text.strip())
    data = requests.get(endpoint + text).json()
    print(data[0][0][1] + ' = ' + data[0][0][0])
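If you need the same call elsewhere, it folds neatly into a helper. This is only a sketch: the endpoint parameters are trimmed down to dt=t, which is enough for plain text, and the sample string is mine, not from the question.
import urllib.parse

import requests

def translate_ja_to_en(text: str) -> str:
    # Translate a Japanese string to English via the same unofficial endpoint as above.
    endpoint = ("https://translate.googleapis.com/translate_a/single"
                "?client=gtx&sl=ja&tl=en&dt=t&ie=UTF-8&oe=UTF-8&q=")
    data = requests.get(endpoint + urllib.parse.quote_plus(text)).json()
    return ''.join(part[0] for part in data[0] if part[0])

print(translate_ja_to_en('鉄鋼'))  # expected to print something like "Steel"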
Try Chrome browser and chromedriver version 96; in version 97 this is not working.

Cannot import name 'Instalysis' from 'instagramy'

I cannot figure out what this error requires... any ideas for a Python newbie? All prerequisites are installed; this is Python 3.9, 64-bit.
Details: "ADO.NET: Python script error.
ImportError: cannot import name 'Instalysis' from 'instagramy' (C:\Python\Python39\lib\site-packages\instagramy\__init__.py)
"
Here's the test script I'm running:
from instagramy import Instalysis

# Instagram user_id of IPL teams
teams = ["chennaiipl", "mumbaiindians",
         "royalchallengersbangalore", "kkriders",
         "delhicapitals", "sunrisershyd",
         "kxipofficial"]

data = Instalysis(teams)

# return the dataframe
data_frame = data.analyis()
data_frame
The instagramy package doesn't have an 'Instalysis' class. But it does have this:
from instagramy import InstagramUser
This will give you all of the info. I also saw that you had parentheses at data_frame = data.analyis(); you will receive an error if you keep them, as it isn't a callable function.
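A minimal sketch of that approach (the attribute names such as number_of_followers follow the instagramy README as I recall it; verify them against your installed version):
from instagramy import InstagramUser

teams = ["chennaiipl", "mumbaiindians", "royalchallengersbangalore",
         "kkriders", "delhicapitals", "sunrisershyd", "kxipofficial"]

for team in teams:
    user = InstagramUser(team)  # scrapes the public profile for this username
    print(team, user.number_of_followers, user.number_of_posts)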
Go have a look at the PyPI page here: PyPI - instagramy
Hope this helps!

ValueError: character U+590048 is not in range [U+0000; U+10ffff] - MAC OS

Could someone help me with the below error while connecting to Teradata from my Python environment? I'm using the ODBC driver method and I've tried all of the existing methods below to connect, but no luck.
Note: if you are using Windows, you can use these methods directly; the problem comes when you are on macOS (though not for everyone).
Using the teradata module and SQLAlchemy:
import teradata
import pyodbc

server = '111.00.00.00'
username = 'user'
password = 'pwd'

udaExec = teradata.UdaExec(appName="test", version="1.0", logConsole=True)
ndw_con = udaExec.connect(method='odbc', authentication="LDAP",
                          system=server, username=username, password=password)
# SQLAlchemy with the teradata dialect
from sqlalchemy import create_engine

user = 'user'
pwd = 'pwd'
host = '111.00.00.00'

td_engine = create_engine('teradata://' + user + ':' + pwd + '@' + host + ':22/')
result = td_engine.execute('select top 100 * from temp.sampledata')
# USING PYODBC: the below code gave me a new error saying
# ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'Teradata' : file not found (0) (SQLDriverConnect)")
import pyodbc

td_conn = pyodbc.connect('DRIVER={Teradata};DBCName=' + server + ';UID=' + username + ';PWD=' + password,
                         autocommit=True)
cursor = td_conn.cursor()
Regardless, I was unable to make a connection to Teradata. Could someone let me know what's going on here and how to fix this issue once and for all?
Thanks!
Found the answer using the pyodbc module. Replace the DRIVER={Teradata} parameter with the full path to where the driver is located; check below for the full connection string. Please note that this can only be used on macOS.
td_conn = pyodbc.connect(
    'DRIVER={/Library/Application Support/teradata/client/16.20/lib/tdataodbc_sbu.dylib};'
    'DBCName=' + server + ';UID=' + username + ';PWD=' + password,
    autocommit=True, authentication="LDAP")
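Once that connection succeeds, queries run through a cursor as usual. A short follow-up sketch (the table name is the one from the question):
cursor = td_conn.cursor()
cursor.execute('select top 100 * from temp.sampledata')
for row in cursor.fetchall():
    print(row)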

Clicking a button with Selenium in Python

Goal: use Selenium and Python to search for a company name in LinkedIn's search bar, then click on the "Companies" button in the navigation to arrive at information about companies similar to the keyword (rather than individuals at that company). For example, "CalSTRS" is the company I search for in the search bar; then I want to click on the "Companies" navigation button.
My Helper Functions: I have defined the following helper functions (including here for reproducibility).
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
from random import randint
from selenium.webdriver.common.action_chains import ActionChains

browser = webdriver.Chrome()

def li_bot_login(usrnm, pwrd):
    ##-----Log into LinkedIn and get to your feed-----
    browser.get('https://www.linkedin.com')
    ##-----Find the username and password fields-----
    u = browser.find_element_by_name('session_key')
    ##-----Enter username and password, then Enter-----
    u.send_keys(usrnm)
    p = browser.find_element_by_name('session_password')
    p.send_keys(pwrd + Keys.ENTER)

def li_bot_search(search_term):
    #------Search for term in the LinkedIn search box and land at the search results page------
    search_box = browser.find_element_by_css_selector('artdeco-typeahead-input.ember-view > input')
    search_box.send_keys(str(search_term) + Keys.ENTER)

def li_bot_close():
    ##-----Close the webdriver-----
    browser.close()

li_bot_login('your_username', 'your_password')  # placeholder credentials
li_bot_search('calstrs')
time.sleep(5)
li_bot_close()
Here is the HTML of the "Companies" button element:
<button data-vertical="COMPANIES" data-ember-action="" data-ember-action-7255="7255">
Companies
</button>
And the XPath:
//*[@id="ember1202"]/div[5]/div[3]/div[1]/ul/li[5]/button
What I have tried: Admittedly, I am not very experienced with HTML and CSS so I am probably missing something obvious. Clearly, I am not selecting / interacting with the right element. So far, I have tried...
companies_btn = browser.find_element_by_link_text('Companies')
companies_btn.click()
which returns this traceback:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"link text","selector":"Companies"}
(Session info: chrome=63.0.3239.132)
(Driver info: chromedriver=2.35.528161 (5b82f2d2aae0ca24b877009200ced9065a772e73),platform=Windows NT 10.0.16299 x86_64)
and
companies_btn_xpath = '//*[@id="ember1202"]/div[5]/div[3]/div[1]/ul/li[5]/button'
browser.find_element_by_xpath(companies_btn_xpath).click()
with this traceback...
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="ember1202"]/div[5]/div[3]/div[1]/ul/li[5]/button"}
(Session info: chrome=63.0.3239.132)
(Driver info: chromedriver=2.35.528161 (5b82f2d2aae0ca24b877009200ced9065a772e73),platform=Windows NT 10.0.16299 x86_64)
and
browser.find_element_by_css_selector('#ember1202 > div.application-outlet > div.authentication-outlet > div.neptune-grid.two-column > ul > li:nth-child(5) > button').click()
which returns this traceback...
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"#ember1202 > div.application-outlet > div.authentication-outlet > div.neptune-grid.two-column > ul > li:nth-child(5) > button"}
(Session info: chrome=63.0.3239.132)
(Driver info: chromedriver=2.35.528161 (5b82f2d2aae0ca24b877009200ced9065a772e73),platform=Windows NT 10.0.16299 x86_64)
It seems that you simply used incorrect selectors.
Note that:
the id of a div like "ember1202" is a dynamic value, so it will be different each time you visit the page: "ember1920", "ember1202", etc.
find_element_by_link_text() can be applied to links only, e.g. <a>Companies</a>, but not to buttons.
Try to find the button by its text content:
browser.find_element_by_xpath('//button[normalize-space()="Companies"]').click()
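If the click still fails intermittently, the results page may simply not have rendered the tab yet; an explicit wait on the same XPath is a common safeguard (a sketch, not part of the original answer):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the Companies tab to become clickable, then click it.
WebDriverWait(browser, 10).until(
    EC.element_to_be_clickable((By.XPATH, '//button[normalize-space()="Companies"]'))
).click()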
With capybara-py (which can be used to drive Selenium), this is as easy as:
page.click_button("Companies")
Bonus: This will be resilient against changes in the implementation of the button, e.g., using <input type="submit" />, etc. It will also be resilient in the face of a delay before the button appears, as click_button() will wait for it to be visible and enabled.

Resources