Get Access to Sharepoint site using sharepy - python-3.x

I am trying to connect to a SharePoint site, but I get AttributeError: 'NoneType' object has no attribute 'text'.
My code is as follows:
import sharepy
s = sharepy.connect("https://example.sharepoint.com/",
                    username='username', password='password')
Any idea why I get this error? I have access to the site in the browser, but not to the https://example.sharepoint.com/ server root URL, and I cannot get in through Python either. I have also tried several other ways in, using urllib.request and also requests; there my issue is an HTTPError: Forbidden. The code I run in that case is:
from sharepoint import SharePointSite, basic_auth_opener

server_url = 'https://example.sharepoint.com/'
opener = basic_auth_opener(server_url, "username", "password")
site = SharePointSite(server_url, opener)

for sp_lists in site.lists:
    print(sp_lists.id)
It seems like I have a permission issue on the server, or could it be something different?

sharepy shows this error when SharePoint returns an error message during authentication; it is a shortcoming of the library, which does not know how to handle all of the error codes.
In your case the site URL is wrong: use "https://example.sharepoint.com" instead of "https://example.sharepoint.com/".
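A minimal sketch of the connection with that corrected address, assuming a recent sharepy version (the REST call at the end is only an illustration of using the returned session):
import sharepy

s = sharepy.connect("https://example.sharepoint.com",
                    username='username', password='password')

# The returned session behaves like a requests session
r = s.get("https://example.sharepoint.com/_api/web")
print(r.status_code)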

It might be that you are missing a header or something similar; the link below has more ideas on HTTP Error 403: Forbidden:
https://mediatemple.net/community/products/dv/204644980/why-am-i-seeing-a-403-forbidden-error-message
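As an illustration only, here is a hedged sketch of sending explicit request headers with requests; the header values and the endpoint are assumptions, not taken from the question:
import requests

headers = {
    "User-Agent": "Mozilla/5.0",
    "Accept": "application/json;odata=verbose",
}
r = requests.get("https://example.sharepoint.com/_api/web",
                 headers=headers, auth=("username", "password"))
print(r.status_code)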

Related

When I use pycurl to execute a curl command, I get error 3, "Illegal characters found in URL", but when I paste the same URL into Chrome, it resolves fine

Good day. I'm writing a Python program that requests some posts from my Facebook page. For this, Facebook offers a tool called the Graph API Explorer. Using something similar to a GET request, I can get anything I want (granted that I have access and a valid token). I've come up with my own solution for the Graph API Explorer, which is generating the URLs myself. After generating a URL, I use pycurl to get a JSON object from Facebook that contains all of my data.
When I use pycurl, I get the following error:
pycurl.error: (3, 'Illegal characters found in URL')
but when printing said URL and pasting it to a browser, I got a valid response.
URL: https://graph.facebook.com/v7.0/me?fields=posts%7Bmessage%2Cfrom%7D&access_token=<and my access token which is valid>
my code looks like this:
import pycurl
import certifi
from io import BytesIO


def get_posts_curl(nodes=['posts'], fields=[['message', 'from']], token_file='Facebook/token.txt'):
    # get_token_from_file, parse_facebook_url_request and convert_to_curl
    # are helpers defined elsewhere in the program.
    curl = pycurl.Curl()
    response = BytesIO()
    token = get_token_from_file(token_file)

    # constructing request.
    url = parse_facebook_url_request(nodes, fields, token)
    url = convert_to_curl(url)
    print("---URL---: " + url)

    # curl session and settings.
    curl.setopt(curl.CAINFO, certifi.where())
    curl.setopt(curl.URL, url)
    curl.setopt(curl.WRITEDATA, response)
    curl.perform()
    curl.close()

    return response.getvalue().decode('utf-8')
The error pops up at curl.perform().
Some info that might be relevant:
It was all working great a while ago. After transferring my program from my workstation (running Windows 10) to my server (Ubuntu 18.04 Server), everything still worked fine, and I put the project aside. Only now has this error popped up, and I haven't touched the project in a while.
It seems that the token is causing the issue. I've tried about 100 tokens; some cause the problem and some don't. Also, a fix that solved it all was using urllib.parse.unquote:
from urllib.parse import unquote
...
url = unquote(url)
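To show where that fix slots in, here is a minimal, self-contained sketch of the download step with the unquoting applied before the URL is handed to pycurl (the function name and structure are illustrative, not the original program):
from io import BytesIO
from urllib.parse import unquote

import certifi
import pycurl


def fetch_url(url):
    # Decode percent-encoded characters; tokens containing encoded
    # characters were what triggered pycurl error 3 above.
    url = unquote(url)
    buffer = BytesIO()
    curl = pycurl.Curl()
    curl.setopt(curl.CAINFO, certifi.where())
    curl.setopt(curl.URL, url)
    curl.setopt(curl.WRITEDATA, buffer)
    curl.perform()
    curl.close()
    return buffer.getvalue().decode('utf-8')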

Can MechanicalSoup log into page requiring SAML Auth?

I'm trying to download some files from behind an SSO (Single Sign-On) site. It seems to use SAML authentication, and that's where I'm stuck. Once authenticated, I'll be able to perform API requests that return JSON, so there's no need to interpret/scrape HTML.
I'm not really sure how to deal with that in MechanicalSoup (and I'm relatively unfamiliar with web programming in general); help would be much appreciated.
Here's what I've got so far:
import mechanicalsoup
from getpass import getpass
import json
login_url = ...
verbose = True  # print intermediate responses while debugging

br = mechanicalsoup.StatefulBrowser()
response = br.open(login_url)
if verbose: print(response)
# provide the username + password
br.select_form('form[id="loginForm"]')
print(br.get_current_form().print_summary()) # Just to see what's there.
br['UserName'] = input('Email: ')
br['Password'] = getpass()
response = br.submit_selected().text
if verbose: print(response)
At this point I get a page telling me javascript is disabled and that I must click submit to continue. So I do:
br.select_form()
response = br.submit_selected().text
if verbose: print(response)
That's where I get a complaint about state information being lost.
Output:
<h2>State information lost</h2>
State information lost, and no way to restart the request<h3>Suggestions for resolving this problem:</h3><ul><li>Go back to the previous page and try again.</li><li>Close the web browser, and try again.</li></ul><h3>This error may be caused by:</h3><ul><li>Using the back and forward buttons in the web browser.</li><li>Opened the web browser with tabs saved from the previous session.</li><li>Cookies may be disabled in the web browser.</li></ul>
The only hits I've found on scraping behind SAML logins are all going with a selenium approach (and sometimes dropping down to requests).
Is this possible with mechanicalsoup?
My situation turned out to require JavaScript for the login, so my original question about getting through SAML auth did not reflect the real environment, and it hasn't truly been answered.
Thanks to @Daniel Hemberger for helping me figure that out in the comments.
In this situation MechanicalSoup is not the right tool (because of the JavaScript requirement); I ended up using selenium to get through authentication and then switching to requests.
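For reference, a rough sketch of that selenium-to-requests hand-off, with placeholder URLs and without the provider-specific login steps (those depend entirely on the identity provider):
import requests
from selenium import webdriver

login_url = "https://sso.example.com/login"   # placeholder
api_url = "https://api.example.com/items"     # placeholder

driver = webdriver.Firefox()
driver.get(login_url)
# ... complete the JavaScript-based login here, either interactively or
# with driver.find_element(...) calls specific to the identity provider ...

# Copy the authenticated cookies into a requests session
session = requests.Session()
for cookie in driver.get_cookies():
    session.cookies.set(cookie["name"], cookie["value"],
                        domain=cookie.get("domain"))
driver.quit()

# Subsequent API calls return JSON, as described above
data = session.get(api_url).json()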

Python requests.put() is giving error: Invalid API call, Wrong method or URL

I have been trying to make a PUT request via the BambooHR API to add a time off request, but it returns 404 and the response header says "Invalid API call, Wrong method or URL".
I can't see what's wrong with my code. The "Add a Time Off Request" path in the BambooHR documentation is /api/gateway.php/{company}/v1/employees/{employee id}/time_off/request/ (sample: PUT /api/gateway.php/test/v1/employees/1/time_off/request/).
import requests

url = 'https://api.bamboohr.com/api/gateway.php/johnsnowlabs/v1/employees/96/time_off/requests/?status=requested'
response1 = requests.put(url, auth=('68d4165c9262fcf2302745a6d791b23dsfsd4107', 'John11'))
print(response1.status_code)
It gives 404, and response1.headers shows the error message "Invalid API call, Wrong method or URL".
I tried different combinations of inputs in the URL, but none seems to work. All the GET requests work; only the PUT one is failing.
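For comparison, a hedged sketch of a PUT against the documented path quoted above (singular request/, no query string); the XML body is only a placeholder to be filled in from the BambooHR documentation, and the Content-Type header is an assumption:
import requests

# Documented endpoint shape from above: .../time_off/request/ (not "requests/?status=...")
url = 'https://api.bamboohr.com/api/gateway.php/johnsnowlabs/v1/employees/96/time_off/request/'

# Placeholder body; fill in the fields required by the BambooHR docs
body = '<request>...</request>'

response = requests.put(url, data=body,
                        headers={'Content-Type': 'text/xml'},
                        auth=('68d4165c9262fcf2302745a6d791b23dsfsd4107', 'John11'))
print(response.status_code, response.headers)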

urlopen(url) 403 Forbidden error

I'm using Python to open a URL with the following code, and sometimes I get this error:
from urllib import urlopen
url = "http://www.gutenberg.org/files/2554/2554.txt"
raw = urlopen(url).read()
error:'\n\n403 Forbidden\n\nForbidden\nYou don\'t have permission to access /files/2554/2554.txt\non this server.\n\nApache Server at www.gutenberg.org Port 80\n\n'
What is this?
Thank you
This is the web page blocking Python access because of the default 'User-Agent' header that urllib sends with its requests.
To get around this, use the 'urllib2' module (part of the Python 2 standard library) and send your own header:
import urllib2

req = urllib2.Request(url, headers={'User-Agent': 'Chrome'})
raw = urllib2.urlopen(req).read()
You are now accessing the site with the User-Agent 'Chrome' and should no longer be forbidden (I tried it myself and it worked).
Hope this helps.
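Since the main question is tagged python-3.x, a roughly equivalent Python 3 version using the built-in urllib.request would look like this (the header value is chosen for illustration):
from urllib.request import Request, urlopen

url = "http://www.gutenberg.org/files/2554/2554.txt"
# Send a browser-like User-Agent instead of the default Python-urllib one
req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
raw = urlopen(req).read().decode("utf-8")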

401 Unauthorized error when trying to access schema

I am getting a 401 Unauthorized error when trying to import a schema.
<xsd:import namespace="http://niem.gov/niem/structures/2.0" schemaLocation="http://yak/NSI_FCS_Bin/niem-constrained/structures/2.0/structures.xsd"/>
I have uploaded the schema and the correct folder structure using a module. However, when the parent schema tries to import this schema (and all others) I get a 401 Unauthorized error. But if I type the exact same url into a web browser, I can view/download the file just fine. I made sure that I was logged into my sharepoint site before executing the code. Anyone have any ideas?
One workaround for this problem is to add the schemas to your Template\Layouts folder and reference them from there. I am still unsure why I was getting the unauthorized errors, but I did not get them when calling the schemas from the Template\Layouts folder. Hope this helps!
