Docusign API: GET userID by e-mail address

I'm creating an app in Python that uses the Docusign API to delete user accounts. It appears I'll need the user's userID to accomplish this, so I need to make two calls: one to get the userID and then one to delete the user.
The problem is that when I make a requests.get() for the user, I get every user account.
import sys
import requests

email = sys.argv[1]  # e-mail address passed on the command line
account_id = "<account_id goes here>"
auth = 'Bearer <long token goes here>'
head = {
    'Accept': 'application/json',
    'Accept-Encoding': 'gzip,deflate,sdch',
    'Accept-Language': 'en-US,en;q=0.8,fa;q=0.6,sv;q=0.4',
    'Cache-Control': 'no-cache',
    'Origin': 'https://apiexplorer.docusign.com',
    'Referer': 'https://apiexplorer.docusign.com/',
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36',
    'Authorization': auth,
    'Content-Type': 'application/json'}
url = 'https://demo.docusign.net/restapi/v2/accounts/{}/users'.format(account_id)
data = {"users": [{"email": email}]}
response = requests.get(url, headers=head, json=data)
print(response.text)
Why do I get a response.text with every user? And how can I just get a single user's information based on the e-mail address?

With the v2 API you can do this using the email_substring query parameter, which lets you search for specific users.
GET /v2/accounts/{accountId}/users?email_substring=youremail
https://developers.docusign.com/esign-rest-api/v2/reference/Users/Users/list
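For instance, here is a minimal Python sketch of the two calls, reusing the placeholders from the question (error handling and result paging are omitted, and the shape of the delete body follows the Users:delete reference):
import requests

account_id = "<account_id goes here>"
email = "user@example.com"
headers = {
    'Authorization': 'Bearer <long token goes here>',
    'Accept': 'application/json',
}
base = 'https://demo.docusign.net/restapi/v2/accounts/{}'.format(account_id)

# 1) Look the user up by e-mail address via the email_substring filter.
resp = requests.get(base + '/users', headers=headers,
                    params={'email_substring': email})
users = resp.json().get('users', [])
if users:
    user_id = users[0]['userId']
    # 2) Delete the user; the delete endpoint takes a list of userIds in the body.
    requests.delete(base + '/users', headers=headers,
                    json={'users': [{'userId': user_id}]})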

Related

Steam API Store Sales

I would like to write a little script with Node.js that tracks interesting Steam promos.
First of all, I would like to retrieve the list of games on sale.
I tried several things, without success:
A GET request on the store.steampowered.com page (works, but only shows the first 50 results, because the rest only appear when you scroll to the bottom of the page)
Using the API, but that would mean retrieving the list of all games, and it would take too long to check whether each one is on sale
If anyone has a solution, I'm interested.
Thanks a lot
You can get the list of featured games by sending a GET request to https://store.steampowered.com/api/featuredcategories, though this may not give you all of the results you're looking for.
import requests

# The featured categories include the specials currently shown on the store front page.
url = "https://store.steampowered.com/api/featuredcategories/?l=english"
res = requests.get(url)
print(res.json())
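If you only want the discounted titles, you can filter the specials category out of that JSON. The key names below reflect the response shape at the time of writing and should be treated as assumptions:
import requests

res = requests.get("https://store.steampowered.com/api/featuredcategories/?l=english")

# "specials" and its item fields are assumptions about the current response shape.
for item in res.json().get("specials", {}).get("items", []):
    print(item.get("name"), item.get("discount_percent"))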
You can also get all the games on sale by sending a GET request to https://steamdb.info/sales/ and doing some extensive HTML parsing. Note that SteamDB is not maintained by Valve at all.
Edit: The following script does the GET request.
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'DNT': '1',
    'Alt-Used': 'steamdb.info',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1',
    'Cache-Control': 'max-age=0',
    'TE': 'Trailers',
}

# A browser-like set of headers avoids basic bot filtering.
response = requests.get('https://steamdb.info/sales/', headers=headers)
print(response)  # prints the status, e.g. <Response [200]>
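From there, the parsing step might look something like the sketch below with BeautifulSoup. The selectors are assumptions about SteamDB's markup and will break whenever the site changes its layout:
from bs4 import BeautifulSoup

# Continuing from the script above, where `response` holds the sales page.
soup = BeautifulSoup(response.text, "html.parser")

# The row/cell structure here is an assumption about SteamDB's sales table.
for row in soup.select("table tr"):
    cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
    if cells:
        print(cells)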

KRS ekrs.ms.gov.pl: get documents with requests

I want to get information about documents when I enter the company id 0000000155.
Here is my pseudo code; I don't know where I should pass the company id.
url = "https://ekrs.ms.gov.pl/rdf/pd/search_df"
payload={}
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
}
response = requests.request("GET", url, headers=headers, data=payload)
print(response.text)
First of all, you forgot to close the string after the 'Accept' dictionary value. That is to say, your headers should look like this:
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:88.0) Gecko/20100101 Firefox/88.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'
}
As for the payload, after checking the website you linked, I noticed that the ID is sent in the unloggedForm:krs2 parameter. You can add this to the payload like so:
payload = {
    # quoted: an integer literal with leading zeros is a SyntaxError in Python 3
    'unloggedForm:krs2': '0000000155'
}
However, in reality it's nearly impossible to scrape the website this way, because ReCaptcha is built into the site. Your only options are either to use Selenium and hope that ReCaptcha doesn't block you, or to somehow reverse engineer ReCaptcha (unlikely).
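If you do go the Selenium route, a minimal sketch could look like this. The element locator assumes the form input's name attribute matches the unloggedForm:krs2 parameter, which is an assumption about the page's markup:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://ekrs.ms.gov.pl/rdf/pd/search_df")

# Assumption: the input's name attribute matches the request parameter above.
field = driver.find_element(By.NAME, "unloggedForm:krs2")
field.send_keys("0000000155")
# Solving the ReCaptcha and submitting the form are left out of this sketch.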

Requests vs Curl

I have an application running on AWS that makes a request to a page to pull meta tags using requests. I'm finding that the page allows curl requests but not requests made with the requests library.
Works:
curl https://www.seattletimes.com/nation-world/mount-st-helens-which-erupted-41-years-ago-starts-reopening-after-covid-closures/
Hangs Forever:
import requests
requests.get('https://www.seattletimes.com/nation-world/mount-st-helens-which-erupted-41-years-ago-starts-reopening-after-covid-closures/')
What is the difference between curl and requests here? Should I just spawn a curl process to make my requests?
Either of the agents below does indeed work. One can also use the user_agent module (available on PyPI) to generate random, valid web user agents.
import requests

agent = (
    "Mozilla/5.0 (X11; Linux x86_64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/85.0.4183.102 Safari/537.36"
)
# or one can use
# agent = "curl/7.61.1"
url = ("https://www.seattletimes.com/nation-world/"
       "mount-st-helens-which-erupted-41-years-ago-starts-reopening-after-covid-closures/")
r = requests.get(url, headers={'user-agent': agent})
Or, using the user_agent module:
import requests
from user_agent import generate_user_agent

agent = generate_user_agent()
url = ("https://www.seattletimes.com/nation-world/"
       "mount-st-helens-which-erupted-41-years-ago-starts-reopening-after-covid-closures/")
r = requests.get(url, headers={'user-agent': agent})
To further explain: requests sets a default user agent, and The Seattle Times is blocking that user agent. However, with python-requests one can easily change the header parameters of the request, as shown above.
To illustrate the default parameters:
r = requests.get('https://google.com/')
print(r.request.headers)
>>> {'User-Agent': 'python-requests/2.25.1', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}
vs. the updated header parameter
agent = "curl/7.61.1"
r = requests.get('https://google.com/', headers={'user-agent': agent})
print(r.request.headers)
>>> {'user-agent': 'curl/7.61.1', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'}

Discord.py Register request headers?

I am attempting to register a user in Discord using the following code:
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.12) Gecko/20050915 Firefox/1.0.7',
    'Content-Type': 'application/json'
}
r = requests.post('https://canary.discordapp.com/api/v6/auth/register', headers=headers)
print(r)
The output I am getting is HTTP Error 400.
Question: what headers should I use for this to succeed?

I get the error "The header content contains invalid characters" when sending an HTTP request

I have a function getBody, which fetches the body of a URL. For some URL (I don't know exactly which one) I always get this error:
_http_outgoing.js:494
throw new TypeError('The header content contains invalid characters');
Those URLs mostly contain Danish accented characters, which may be the problem. I have set the header 'Content-Type': 'text/plain; charset=UTF-8', which sets the charset to UTF-8. The Host header is probably the problem.
I have tried using punycode, and the url module, to convert the URL to ASCII, but those converted URLs did not work.
function getBody(n) {
    var url = n; //urls[n];
    url = (url.indexOf('http://') == -1 && url.indexOf('https://') == -1) ? 'http://' + url : url;
    // "instance" is presumably an axios instance; response handling is omitted in the question.
    instance.get(url, {
        headers: {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
            'Content-Type': 'text/plain; charset=UTF-8'
        }
    });
}
