The following code (with variables changed) is what I am trying to use to contact a SharePoint server (and ultimately download files from it). However, I am getting a urllib-related error whose cause I can't work out.
from sharepoint import SharePointSite, basic_auth_opener

server_url = "http://sharepoint.domain.com/"
opener = basic_auth_opener(server_url, "domain/username", "password")

site = SharePointSite(server_url, opener)
sp_list = site.lists
for sp_list in site.lists:
    print(sp_list.id, sp_list.meta['Title'])
The error is as follows
urllib.error.URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
Could you try the code below? It authenticates with NTLM via the requests_ntlm package:

import requests
from requests_ntlm import HttpNtlmAuth

requests.get("http://sharepoint.domain.com", auth=HttpNtlmAuth('DOMAIN\\USERNAME', 'PASSWORD'))
I am trying to read and send emails (O365) using OAuth2. This code works fine on my local machine, but when I deploy it on a server that needs a proxy to access the internet, I get the error below.
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /2d11ad74-bbf6-403f-32c7-6ff0e039a923/oauth2/v2.0/token (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000002507067C348>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
Below is the code I tried:
import settings
from exchangelib import Credentials, Account, DELEGATE, BASIC, \
    Configuration, NTLM, Message, Mailbox, FileAttachment, HTMLBody, FaultTolerance, IMPERSONATION
from exchangelib.protocol import BaseProtocol, NoVerifyHTTPAdapter
from exchangelib import OAuth2LegacyCredentials

credentials = OAuth2LegacyCredentials(
    client_id=settings.clientId, client_secret=settings.clientSecret,
    tenant_id=settings.tenentId,
    username=settings.eUserName, password=settings.ePassword
)
config = Configuration(server='smtp.office365.com', credentials=credentials)
account = Account(settings.eUserName, config=config, access_type=DELEGATE)

m = Message(
    account=account,
    subject='Any Subject OAuth2 Authentication',
    body=HTMLBody('OAuth2 Authentication'),
    to_recipients=map(lambda x: Mailbox(email_address=x), ['myEmail@gmail.com'])
)

for item in account.inbox.all()[:10]:
    print(item.sender.email_address)

m.send()
print('Email is being sent...')
You probably need to set up a proxy configuration. Here's how to do that in exchangelib:
https://ecederstrand.github.io/exchangelib/#proxies-and-custom-tls-validation
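In short, exchangelib lets you install a custom requests HTTP adapter before creating the Account. A minimal sketch along the lines of the documented approach, with a placeholder proxy URL for your environment:

import requests.adapters
from exchangelib.protocol import BaseProtocol

class ProxyAdapter(requests.adapters.HTTPAdapter):
    # Inject the corporate proxy into every outgoing request
    def send(self, *args, **kwargs):
        kwargs['proxies'] = {
            'http': 'http://proxy.example.com:8080',   # placeholder
            'https': 'http://proxy.example.com:8080',  # placeholder
        }
        return super().send(*args, **kwargs)

# Must be set before the Configuration/Account objects are created
BaseProtocol.HTTP_ADAPTER_CLS = ProxyAdapter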
I am trying to connect to a FileMaker database server via a Python script. My code was working before but has suddenly stopped, and I didn't make any changes to the portion of code that no longer works. I am encountering the following error:
Request error: HTTPSConnectionPool(host='**.**.*.*', port=443): Read timed out. (read timeout=30)
I have pared the code down to just creating the server instance, connecting/logging in, and then logging out without making any changes in the database, and I am still receiving the same error. However, I can connect to the FileMaker server and database via the FileMaker application with no issues, and I can connect to the server using Telnet commands. I am on Windows 10 and writing the code in PyCharm CE. I have reinstalled PyCharm, created a new virtual environment, and tried reinstalling the fmrest module, as well as using older versions. I have also increased the timeout to allow more time to log in, which hasn't worked. I'm basically stumped as to why I can no longer log in via the script when it has been working perfectly in testing for the past couple of weeks. My code is below.
import fmrest
from fmrest.exceptions import FileMakerError
from fmrest.exceptions import RequestException
import sys
import requests

# connect to the FileMaker Server
requests.packages.urllib3.disable_warnings()
fmrest.utils.TIMEOUT = 30

try:
    fms = fmrest.Server('https://**.**.*.*',
                        user='***',
                        password='******',
                        database='Hangtag Order Management',
                        layout='OrderAdmin',
                        verify_ssl=False)
except ValueError as err:
    print('Failed to connect to server. Please check server credentials and status and try again\n\n' + str(err))
    sys.exit()

print(fms)
print('Attempting to connect to FileMaker Server...')

try:
    fms.login()
    print('Login Successful\n')
except FileMakerError as err:
    print(err)
    sys.exit()
except RequestException as err:
    print('There was an error connecting to the server, the request timed out\n\n' + str(err))
    sys.exit()

fms.logout()
This should successfully log in to the database, print 'Login Successful', and log out. Calling print(fms) returns
<Server logged_in=False database=Hangtag Order Management layout=OrderAdmin>
but I receive the connection error upon the login attempt. I am assuming the error is server-side, but I don't know enough about servers to accurately troubleshoot it. Could the server have blacklisted my IP for making so many login attempts during my testing? If so, where would I undo that and prevent it from happening again?
A couple of server reboots fixed the error; I'm not really sure of the ultimate cause.
I need to run some code that contains these lines:
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
There seems to be a problem with executing it.
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
As the code tries to download something and my internet connection works fine, I assume the server it wants to access is down.
How can I set up the dataset manually?
fetch_mldata will by default check ~/scikit_learn_data/mldata to see if the dataset has already been downloaded.
According to the source code:
# if the file does not exist, download it
if not exists(filename):
    urlname = MLDATA_BASE_URL % quote(dataname)
So in your case, it will check the location
~/scikit_learn_data/mldata/mnist-original.mat
and if not found, it will download from
http://mldata.org/repository/data/download/matlab/mnist-original.mat
which, as you suspected, is currently down.
So what you can do is download the dataset from another location, for example:
https://github.com/amplab/datascience-sp14/blob/master/lab7/mldata/mnist-original.mat
and place it in the folder above.
After that, when you run fetch_mldata() it should pick up the downloaded dataset without connecting to mldata.org.
Update:
Here ~ refers to the user's home folder. You can use the following code to find its default location on your system:
from sklearn.datasets import get_data_home
print(get_data_home())
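Once mnist-original.mat is in place, a quick sanity check (a sketch assuming the default data home and a scikit-learn version that still provides fetch_mldata) could look like:

import os
from sklearn.datasets import get_data_home, fetch_mldata

# Confirm the .mat file sits where fetch_mldata expects it
expected = os.path.join(get_data_home(), 'mldata', 'mnist-original.mat')
print(expected, os.path.exists(expected))

# With the file present, this loads from disk and needs no network access
mnist = fetch_mldata('MNIST original')
print(mnist.data.shape)  # (70000, 784)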
Anaconda - Python 3.6
OpenSSL 1.0.2
Operating System: Windows 7
Phase 1 (completed): Using Selenium, I launched and navigated the site and extracted various data elements, including a table. I extracted the hyperlinks contained in the table, which are direct links to documents.
Phase 2: Taking the extracted hyperlinks from the table, I need to download the files to a specified folder on a shared drive.
Tried:
import urllib.request

url = 'tts website/test.doc'
# destination must be a file path, not just a folder
urllib.request.urlretrieve(url, r'C:\Users\User\Desktop\test.doc')
The error I get is an sslv3 alert handshake failure.
With the site opened, I have clicked on the Lock icon and clicked "Install Certificate". I have saved the certificate to my "Trusted Root Certification Authorities" in the Certificate store.
I can see the name of the certificate I installed in the step above among the 58 CA certificates shown by running the following code:
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
context.verify_mode = ssl.CERT_REQUIRED
context.load_default_certs()

ca_certs = context.get_ca_certs()
print('{} CA Certificates loaded: '.format(len(ca_certs)))
for cert_dict in ca_certs:
    print(cert_dict['subject'])
    print()
I can't figure out how to secure an SSL connection to the website/server in order to download the file from each of the hyperlinks. This website uses single sign-on (SSO) and automatically logs me in when I first launch it.
I have tried connecting to server.net on port 443, but I can't seem to get the scripting right to connect and retrieve the document.
I have connected directly to the server and extracted the certificate details shown here:
import socket
import ssl

HOST, PORT = 'server.net', 443

ctx = ssl.create_default_context()
s = ctx.wrap_socket(socket.socket(), server_hostname=HOST)
s.connect((HOST, PORT))
cert = s.getpeercert()
print(cert)
When I run urlretrieve I am still getting the same handshake error. When reviewing my CA certificates I see there is a personal certificate for my Windows login (username) listed there, which must be how it automatically logs me in using SSO. How do I take all of this information, connect to the website using my SSO, and retrieve the documents?
Latest UPDATE:
I am finding pycurl to be promising; however, I feel like I need a little assistance making a few tweaks to get it working.
import sys
import pycurl

fp = open('Test.doc', 'wb')
curl = pycurl.Curl()
curl.setopt(pycurl.URL, url)  # see url link to go to word doc
curl.setopt(pycurl.FOLLOWLOCATION, 1)
curl.setopt(pycurl.MAXREDIRS, 5)
curl.setopt(pycurl.CONNECTTIMEOUT, 30)
curl.setopt(pycurl.TIMEOUT, 300)

try:
    curl.setopt(pycurl.WRITEDATA, fp)
    curl.perform()
except:
    import traceback
    traceback.print_exc(file=sys.stderr)
    sys.stderr.flush()

curl.close()
fp.close()
This code yields no error; however, instead of the document, the created Word doc contains a capture of the website's logon page.
Main problem: an HTTPS connection using single sign-on (SSO) from behind a corporate network proxy server.
I have been trying to get CA certificate validation working with the following options:
curl.setopt(pycurl.SSL_VERIFYPEER, 1)
curl.setopt(pycurl.SSL_VERIFYHOST, 2)
curl.setopt(pycurl.CAINFO, certifi.where())
but now I am getting ERROR: 51, CERT_TRUST_IS_UNTRUSTED_ROOT.
How do I add a proxy if that is what's causing the error? And secondly, how do I attach the CA certificate file directly?
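For what it's worth, pycurl exposes options for both. A rough sketch, where the proxy host, port, credentials, document URL, and CA bundle path are all placeholders for your environment:

import pycurl

url = 'https://server.net/path/to/test.doc'  # placeholder document URL

curl = pycurl.Curl()
curl.setopt(pycurl.URL, url)

# Route the request through the corporate proxy (placeholders)
curl.setopt(pycurl.PROXY, 'http://proxy.example.com')
curl.setopt(pycurl.PROXYPORT, 8080)
curl.setopt(pycurl.PROXYUSERPWD, 'DOMAIN\\user:password')
curl.setopt(pycurl.PROXYAUTH, pycurl.HTTPAUTH_NTLM)  # if the proxy expects NTLM/SSO credentials

# Verify the server certificate against a specific CA bundle
curl.setopt(pycurl.SSL_VERIFYPEER, 1)
curl.setopt(pycurl.SSL_VERIFYHOST, 2)
curl.setopt(pycurl.CAINFO, r'C:\certs\corporate_root.pem')  # hypothetical exported root cert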
I am attempting to make a web scraper in Python 3. I keep getting a WinError 10060 stating that the connection failed because the connected party did not properly respond or the connected host failed to respond. Both urllib and requests produce the 10060 error. When using requests, the error states that the max retries were exceeded with the URL.
import urllib.request

with urllib.request.urlopen('http://python.org') as response:
    html = response.read()
People have mentioned that it is likely a proxy or firewall issue as I am attempting to do this on my work network.
Turns out this was a proxy authentication error. Simply passing the proxy with your credentials to the get call solved it:
proxies = {'http': 'http://user:pass@url:8080', 'https': 'http://user:pass@url:8080'}
page = requests.get(webpage, proxies=proxies)
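As an aside, both requests and urllib honor the standard proxy environment variables, so an alternative sketch (same placeholder proxy URL and credentials) avoids hard-coding the proxy into every call:

import os
import requests

# Placeholder proxy URL with credentials; picked up automatically by requests and urllib
os.environ['HTTP_PROXY'] = 'http://user:pass@url:8080'
os.environ['HTTPS_PROXY'] = 'http://user:pass@url:8080'

page = requests.get('http://python.org')
print(page.status_code)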