Retrieving Facebook Lead Data

I tried to use this code to retrieve Facebook lead data:
FacebookAdsApi.init(app_id, app_secret, access_token)
me = AdUser(fbid='me')
my_accounts = list(me.get_ad_accounts(fields=['name']))
my_account = my_accounts[3]
ads = my_account.get_ads(params=params)
for ad in ads[10:20]:
    print(ad.get_leads())
But I get an empty response {"data": []} for each ad.
In Ads Manager, I can see that there are leads.
App permissions:
pages_show_list,
ads_management,
leads_retrieval,
pages_read_engagement,
pages_read_user_content,
pages_manage_ads.
Second try:
params = {
    'access_token': access_token
}
adgroup_id = ad_id
if access_token:
    url = 'https://graph.facebook.com/v12.0/%s/leads' % adgroup_id
    resp = session.get(url=url, params=params)
    print(resp.text)
Answer: {"data":[]}
What needs to be added to get the correct result?
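One thing worth checking (a sketch of a possible cause, not a confirmed fix): the /{ad-id}/leads edge generally has to be called with a Page access token carrying leads_retrieval; with a plain user or app token it can return {"data": []} even though Ads Manager shows leads. All ids and tokens below are placeholders, and the session is passed in explicitly so the helpers can be exercised without the real API:

```python
import requests

GRAPH = "https://graph.facebook.com/v12.0"

def get_page_token(session, page_id, user_token):
    """Exchange the user token for the Page's own access token."""
    resp = session.get(
        f"{GRAPH}/{page_id}",
        params={"fields": "access_token", "access_token": user_token},
    )
    return resp.json()["access_token"]

def get_leads(session, ad_id, page_token):
    """Fetch the leads of one ad using the Page token."""
    resp = session.get(
        f"{GRAPH}/{ad_id}/leads",
        params={"access_token": page_token},
    )
    return resp.json().get("data", [])
```

With a real `requests.Session()`, `get_page_token` would be called first and its result fed into `get_leads` for each ad.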

PowerQuery (Excel) - POST rest api webservice call - fail get contents (500): internal server error

I am trying to call a web service (SAP Business One) to log in, so I can obtain a session id.
I tried the following:
let
    url = "https://WEB/b1s/v1/Login",
    headers = [#"Content-type"="application/json", #"Connection"="keep-alive"],
    postData = Json.FromValue([CompanyDB = "Company", Password = "12345", UserName = "test"]),
    response = Web.Contents(
        url,
        [
            Headers = headers,
            Content = postData
        ]
    ),
    jsonResponse = Json.Document(response)
in
    jsonResponse
This gives me the response described in the title.
However, once I obtained a session id and manually issued a GET request with it, I do get a result, e.g.:
let
    url = "https://WEB/b1s/v1/Items?$select=ItemCode,ItemName",
    headers = [#"Prefer"="odata.maxpagesize=0", #"Cookie"="B1SESSION=1bba9408-dd9e-11ec-8000-000d3a83435c; ROUTEID=.node1"],
    response = Web.Contents(
        url,
        [
            Headers = headers
        ]
    ),
    jsonResponse = Json.Document(response),
    value = jsonResponse[value],
[...]
This gives the list of items as a result.
Doing all of the above in Postman returns a proper result. This is what I would expect from the "LOGIN" call:
{
    "odata.metadata": "https://WEB/b1s/v1/$metadata#B1Sessions/#Element",
    "SessionId": "7c01d84a-dda2-11ec-8000-000d3a83435c",
    "Version": "1000141",
    "SessionTimeout": 30
}
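As an offline cross-check (a sketch only; the host "WEB" and the credentials are the placeholders from the question), the same Login POST can be assembled in Python and inspected before it is sent, to compare the headers and body byte-for-byte with what Postman transmits:

```python
import requests

# Build (but do not send) the same Login POST, then inspect the prepared
# request.  Nothing touches the network here, so it can be run anywhere.
req = requests.Request(
    "POST",
    "https://WEB/b1s/v1/Login",
    json={"CompanyDB": "Company", "UserName": "test", "Password": "12345"},
)
prepared = req.prepare()
print(prepared.headers["Content-Type"])  # application/json
print(prepared.body)
```

If this body matches Postman's, the difference most likely lies in what Power Query actually puts on the wire rather than in the payload itself.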
Any idea what else I can try?
Cheers and thank you
Andreas

Authlib's Azure login throws an invalid_claim: Invalid claim "iss"

I can currently log in with Google with no problems, using Authlib for my Starlette app, but Azure throws this invalid claim "iss" error when doing:
await client.parse_id_token(request, token)
Please, any help would be wonderful. Googling it, I didn't find anything.
The complete code snippet is:
async def do_azure_login(request: Request) -> Any:
    redirect_uri = request.url_for('authz_azure').replace(' ', '')
    azure = OAUTH.create_client('azure')
    return await azure.authorize_redirect(request, redirect_uri)

async def authz_azure(request: Request) -> HTMLResponse:
    return await authz(request, OAUTH.azure)

async def authz(request: Request, client: OAuth) -> HTMLResponse:
    token = await client.authorize_access_token(request)
    user = dict(await client.parse_id_token(request, token))
    request.session['username'] = user['email']
    request.session['first_name'] = user.get('given_name', '')
    request.session['last_name'] = user.get('family_name', '')
    response = TEMPLATING_ENGINE.TemplateResponse(
        name='app.html',
        context={
            'request': request
        }
    )
    return response
I think the problem may be in using these:
AZURE_CONF_URL = (
    'https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration'
)
AZURE_AUTHZ_URL = (
    'https://login.microsoftonline.com/common/oauth2/authorize'
)
The problem should be this "issuer": "https://login.microsoftonline.com/{tenantid}/v2.0" advertised in the metadata behind AZURE_CONF_URL. I've seen people having this same issue.
I'm still researching. These may be useful:
https://github.com/MicrosoftDocs/azure-docs/issues/38427
https://github.com/authlib/loginpass/issues/65
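Given the issuer mismatch described above, one possible fix (a sketch, assuming a single-tenant app; the tenant id is a placeholder you would take from the Azure portal) is to point the client at a tenant-specific metadata URL instead of /common, so the advertised issuer is a concrete value that can actually equal the id_token's iss claim:

```python
# Hypothetical tenant id; with /common the metadata advertises
#   "issuer": "https://login.microsoftonline.com/{tenantid}/v2.0"
# with the literal placeholder {tenantid}, which never matches the concrete
# iss claim in the token -- hence the invalid_claim error.
TENANT_ID = "your-tenant-id"

AZURE_CONF_URL = (
    f'https://login.microsoftonline.com/{TENANT_ID}/v2.0/'
    '.well-known/openid-configuration'
)
AZURE_AUTHZ_URL = (
    f'https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/authorize'
)

print(AZURE_CONF_URL)
```

These URLs would then be passed to the same OAUTH registration the snippet above already uses.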

How to get access_token from fyers API?

I'm looking to get an access_token from the fyers API.
I'm able to get the authorization_code and build the authorization_url to open in a browser so the user can enter their credentials. The access_token is displayed in the browser's address bar when the user logs in, but my program is unable to retrieve it.
Your help is much appreciated.
My code is as follows:
from fyers_api import accessToken
from fyers_api import fyersModel
import requests
import webbrowser
import urllib.request as ur

app_id = "XXXXXXXXX"
app_secret = "XXXXXXXXX"
app_session = accessToken.SessionModel(app_id, app_secret)
response = app_session.auth()
if response['code'] != 200:
    print('CODE=' + str(response['code']))
    print('MESSAGE=' + str(response['message']))
    print('Exiting program...')
    exit(0)
authorization_code = response['data']['authorization_code']
app_session.set_token(authorization_code)
authorization_url = app_session.generate_token('XXXXXX')
token = webbrowser.open(authorization_url)
# Following authorization url is opened in browser:
# https://api.fyers.in/api/v1/genrateToken?authorization_code=xxxxxxxxxxxxx&appId=xxxxxxxxx&user_id=xxxxxx
# User is redirected to following url after successful log-in:
# https://trade.fyers.in/?access_token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=&user_id=xxxxxx
print(token)
# token = "your_access_token"
# is_async = False  # (By default False, change to True for async API calls.)
# fyers = fyersModel.FyersModel(is_async)
# fyers.get_profile(token=token)
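Note that webbrowser.open only returns a bool, never the URL the browser ends up on, so the redirect URL has to be captured some other way (e.g. pasted back by the user, or caught by a small local HTTP server). Once it is in hand, the token is just a query parameter; a sketch with placeholder values:

```python
# The browser lands on something like
#   https://trade.fyers.in/?access_token=...&user_id=...
# and the token can be pulled out of that URL with the standard library.
from urllib.parse import urlparse, parse_qs

redirect_url = "https://trade.fyers.in/?access_token=xxxxtokenxxxx=&user_id=XX0000"
query = parse_qs(urlparse(redirect_url).query)
access_token = query["access_token"][0]
print(access_token)  # → xxxxtokenxxxx=
```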
Instead of writing the code above, it is better to call the Fyers API directly.
import requests

url = 'https://api.fyers.in/api/v1/token'
requestParams = {
    "fyers_id": "Your Client ID",
    "password": "Your Password",
    "pan_dob": "Your PAN card or DOB (DD-MM-YYYY)",
    "appId": "Your APP ID",
    "create_cookie": False}
response = requests.post(url, json=requestParams)
print(response.text)
from fyers_api import accessToken
from fyers_api import fyersModel

app_id = "xxxxxxxxxx"
app_secret = "xxxxxxxxxx"
app_session = accessToken.SessionModel(app_id, app_secret)
response = app_session.auth()
print(app_session)
print(response)
authorization_code = response['data']['authorization_code']
app_session.set_token(authorization_code)
gen_token = app_session.generate_token()
print("Token URL: copy-paste this URL in a browser and copy the access token, excluding your id at the end")
print(gen_token)
print("token printed, thanks")
token = "gAAAAABeTWk7AnufuuQQx0D0NkgABinWk7AnufuuQQx0DQ3ctAFWk7AnufuuQQx0DMQQwacJ-_xUVnrTu2Pk5K5QCLF0SZmw7nlpaWk7AnufuuQQx0DG4_3EGCYw92-iAh8="
is_async = False
fyers = fyersModel.FyersModel(is_async)
print(fyers.get_profile(token=token))
fyers.funds(token=token)
print(fyers.funds(token=token))

Scraping from site that requires login, how to access the contents?

So I am trying to scrape a website that requires a login. I have used requests and submitted my login details, but when I try to extract the data, I am not getting the page I am looking for.
import requests
from lxml import html
from bs4 import BeautifulSoup

USERNAME = "test@gmail.com"
PASSWORD = "test"
#MIDDLEWARE_TOKEN = "TESTTOKEN"
LOGIN_URL = "https://vrdistribution.com.au/auth/login/process"
VR_URL = "https://vrdistribution.com.au/categories/tabletop-gaming?page=1"

def main():
    session_requests = requests.session()

    # Get login csrf token
    result = session_requests.get(LOGIN_URL)
    tree = html.fromstring(result.text)
    authenticity_token = list(set(tree.xpath("//input[@name='_token']/@value")))

    # Create payload
    payload = {
        "email": USERNAME,
        "password": PASSWORD,
        "csrfmiddlewaretoken": authenticity_token
    }

    # Perform login
    result = session_requests.post(LOGIN_URL, data=payload, headers=dict(referer=LOGIN_URL))

    # Scrape
    result = session_requests.get(VR_URL, headers=dict(referer=VR_URL))
    response = requests.get(VR_URL)
    soup = BeautifulSoup(response.text, 'lxml')
    print(soup)
The output does not match the contents of the VR_URL (https://vrdistribution.com.au/categories/tabletop-gaming?page=1): when I inspect the page I want to scrape and compare it with the output of the soup object, they are completely different.
Is there a way for me to access and scrape the contents of the VR_URL?
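A likely culprit, sketched under the assumption that the login itself succeeds: the final `requests.get(VR_URL)` is a brand-new, unauthenticated request, so it never sees the session's cookies; the page should be parsed from the `session_requests` response instead. A minimal self-contained demonstration of the difference, using a throwaway local server that stands in for the real site (hypothetical, not the site's actual behaviour):

```python
# A requests.Session carries the login cookie into later requests, while a
# fresh requests.get() does not -- which is why the scraped page differs
# from what the logged-in browser shows.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/login":
            self.send_response(200)
            self.send_header("Set-Cookie", "session=ok")
            self.end_headers()
            self.wfile.write(b"logged in")
        else:  # any other path plays the role of the protected page
            authed = "session=ok" in (self.headers.get("Cookie") or "")
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"members page" if authed else b"login wall")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

session = requests.Session()
session.get(base + "/login")                          # cookie stored on the session
with_session = session.get(base + "/data").text       # "members page"
without_session = requests.get(base + "/data").text   # "login wall"
print(with_session, "/", without_session)
server.shutdown()
```

Applied to the code above, that means dropping the `response = requests.get(VR_URL)` line and parsing `result.text` from the session instead.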

Trying to scrape a website that requires login

So I'm new to this and have been at it for almost a week now, trying to scrape a website I use to collect analytics data (think of it like Google Analytics).
I tried playing around with XPath to figure out what this script is able to pull, but all I get is "[]" as output after running it.
Please help me find what I'm missing.
import requests
from lxml import html

# credentials
payload = {
    'username': '<my username>',
    'password': '<my password>',
    'csrf-token': '<auth token>'
}

# open a session with login
session_requests = requests.session()
login_url = '<my website>'
result = session_requests.get(login_url)

# passing the auth token
tree = html.fromstring(result.text)
authenticity_token = list(set(tree.xpath("//input[@name='form_token']/@value")))[0]
result = session_requests.post(
    login_url,
    data=payload,
    headers=dict(referer=login_url)
)

# scrape the analytics dashboard from this event link
url = '<my analytics webpage url>'
result = session_requests.get(
    url,
    headers=dict(referer=url)
)

# print output using xpath to find and load what i need
trees = html.fromstring(result.content)
bucket_names = trees.xpath("//*[@id='statistics_dashboard']/div[1]/text()")
print(bucket_names)
print(result.ok)
print(result.status_code)
This is what I get as a result:
[]
True
200
Process finished with exit code 0
which is a big step for me, because I've been getting so many errors just to get to this point.
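A debugging sketch for this symptom (the page below is hypothetical, standing in for the real dashboard): before blaming the XPath, check whether the target id exists in the raw HTML at all. An empty [] alongside a 200 response often means the dashboard is rendered client-side by JavaScript, so the data never appears in the HTML that requests receives. It is also worth checking that the extracted authenticity_token actually ends up in the login payload: the payload uses a hard-coded 'csrf-token' key while the form field is named 'form_token'.

```python
# Simulate what requests sees from a JS-rendered page: the dashboard element
# is absent from the raw HTML, so any XPath for it returns [].
from lxml import html

raw = b"<html><body><div id='app'></div><script>/* JS builds the dashboard here */</script></body></html>"
tree = html.fromstring(raw)
print(b"statistics_dashboard" in raw)                               # False: not in the HTML
print(tree.xpath("//*[@id='statistics_dashboard']/div[1]/text()"))  # [] -- same symptom
```

If the id is genuinely missing from `result.content`, a browser-driving tool (e.g. Selenium) or the site's underlying JSON/XHR endpoints are the usual ways forward.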
