https://developers.google.com/youtube/v3/code_samples/apps-script#subscribe_to_channel
Hello,
I can't figure out how to subscribe to a YouTube channel with a POST request. I'm not looking to use YouTubeSubscriptions as shown above; I'm simply looking to pass an API key, but I can't seem to figure it out. Any suggestions?
If you don't want to use YouTubeSubscriptions, you have to get the session_token after logging in to your YouTube account.
The session_token is stored in the hidden input tag:
document.querySelector('input[name=session_token]').value
Alternatively, do a full-text search of the page source for the XSRF_TOKEN field; its corresponding value is the session_token. A reference regex:
const regex = /\'XSRF_TOKEN\':(.*?)\"(.*?)\"/g
Below is an implementation in Python:
import re
import json

def YouTubeSubscribe(url, SessionManager):
    # SessionManager: a requests.Session that carries the cookies of a logged-in YouTube account
    while True:
        try:
            html = SessionManager.get(url).text
            # Pull the session_token (XSRF_TOKEN) out of the page source
            session_token = (re.findall(r"XSRF_TOKEN\W*(.*)=", html, re.IGNORECASE)[0]).split('"')[0]
            id_yt = url.replace("https://www.youtube.com/channel/", "")
            params = (('name', 'subscribeEndpoint'),)
            data = [
                ('sej', '{"clickTrackingParams":"","commandMetadata":{"webCommandMetadata":{"url":"/service_ajax","sendPost":true}},"subscribeEndpoint":{"channelIds":["' + id_yt + '"],"params":"EgIIAg%3D%3D"}}'),
                ('session_token', session_token + "=="),
            ]
            response = SessionManager.post('https://www.youtube.com/service_ajax', params=params, data=data)
            check_state = json.loads(response.content)['code']
            if check_state == "SUCCESS":
                return 1
            else:
                return 0
        except Exception as e:
            print("[E] YouTubeSubscribe: " + str(e))
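For completeness, a minimal usage sketch; it assumes SessionManager is a requests.Session that already carries the cookies of a logged-in YouTube account (getting those cookies into the session is up to you), and the channel URL below is just an example:

import requests

session = requests.Session()
# Hypothetical: load cookies exported from a logged-in browser session
# session.cookies.update(my_logged_in_cookies)

ok = YouTubeSubscribe("https://www.youtube.com/channel/UC_x5XG1OV2P6uZZ5FSM9Ttw", session)
print("Subscribed" if ok else "Failed")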
I have a script using Python and PyVimeo that calls GET https://api.vimeo.com/videos/{video_id} so I can get the file name. When I try to run my app, I am getting the error {'error': "The requested video couldn't be found."}. However, when I use this same video ID under the Try it out section (https://developer.vimeo.com/api/reference/videos#get_video), it works fine.
I am assuming there is something wrong with my code, but if I use the demo from the GitHub example (about_me = v.get('/me')), it works fine, and that needs authentication as well.
Is there something simple I am missing? Thank you so much.
import vimeo

v = vimeo.VimeoClient(
    token='VimeoToken',
    key='VimeoKey',
    secret='VimeoSecret'
)

class Vimeo:
    def get_vimeo_data(video_file):
        uri = 'https://api.vimeo.com/videos/{video_file}'
        # uri = 'https://api.vimeo.com/me/videos' - This response works
        response = v.get(uri)
        data = response.json()
        print(data)

Vimeo.get_vimeo_data(55555)
You forgot to add an f before your f-string.
class Vimeo:
    def get_vimeo_data(video_file):
        # THIS f
        uri = f"https://api.vimeo.com/videos/{video_file}"
        # uri = 'https://api.vimeo.com/me/videos' - This response works
        response = v.get(uri)
        data = response.json()
        print(data)
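A quick way to see what the missing prefix changes:

video_file = 55555
print('https://api.vimeo.com/videos/{video_file}')   # literal braces, no substitution
print(f'https://api.vimeo.com/videos/{video_file}')  # https://api.vimeo.com/videos/55555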
I am trying to build a Discord music bot and I need to search YouTube using keywords given by the user. Currently I know how to play from a URL:
loop = loop or asyncio.get_event_loop()
data = await loop.run_in_executor( None, lambda: ytdl.extract_info(url, download=not stream))
if "entries" in data:
data = data["entries"][0]
filename = data["url"] if stream else ytdl.prepare_filename(data)
return cls(discord.FFmpegPCMAudio(filename, **ffmpeg_options), data=data)
youtube_dl has an extract_info method that you can use. Instead of giving it a link, you just have to pass it ytsearch:<your keywords>, like so:
from requests import get
from youtube_dl import YoutubeDL

YDL_OPTIONS = {'format': 'bestaudio', 'noplaylist': 'True'}

def search(arg):
    with YoutubeDL(YDL_OPTIONS) as ydl:
        try:
            # Check whether arg is a reachable URL
            get(arg)
        except:
            # Not a URL: treat it as search keywords and take the first result
            video = ydl.extract_info(f"ytsearch:{arg}", download=False)['entries'][0]
        else:
            # It is a URL: extract it directly
            video = ydl.extract_info(arg, download=False)
    return video
A few important things about this function:
It works with both keywords and URLs.
If you make a YouTube search, the output is a list of dictionaries; in that case the function returns the first result.
It returns a dictionary containing, among others, the following information:
video = search("30 sec video")

# Doesn't contain all the data; some keys are not very important
cleared_data = {
    'channel': video['uploader'],
    'channel_url': video['uploader_url'],
    'title': video['title'],
    'description': video['description'],
    'video_url': video['webpage_url'],
    'duration': video['duration'],        # in seconds
    'upload_date': video['upload_date'],  # YYYYMMDD
    'thumbnail': video['thumbnail'],
    'audio_source': video['formats'][0]['url'],
    'view_count': video['view_count'],
    'like_count': video['like_count'],
    'dislike_count': video['dislike_count'],
}
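To tie this back to the bot, here is a minimal sketch of how the search result could be played; it assumes voice_client is an already-connected discord.VoiceClient, and the ffmpeg_options here are just a typical placeholder for whatever your bot already defines:

import discord

# Typical ffmpeg options (adjust to match your bot)
ffmpeg_options = {'options': '-vn'}

async def play_from_keywords(voice_client, keywords):
    video = search(keywords)
    # Stream the first audio format youtube_dl reports for this video
    source = discord.FFmpegPCMAudio(video['formats'][0]['url'], **ffmpeg_options)
    voice_client.play(source)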
I'm not sure youtube-dl is the best fit for searching YouTube by keywords. You should probably take a look at youtube-search for this.
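A minimal sketch, assuming the youtube-search package (pip install youtube-search) and the result keys it documents (title, url_suffix, ...):

from youtube_search import YoutubeSearch

results = YoutubeSearch("30 sec video", max_results=5).to_dict()
for video in results:
    # Each result is a plain dict; url_suffix is the path part of the watch URL
    print(video['title'], "https://www.youtube.com" + video['url_suffix'])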
I'm querying Facebook with the following code, iterating over a list of page names to get each page's numeric ID and store it in a dictionary. I keep catching an HTTP 500 error; it doesn't show up with the short list I present here, though. See the code:
import json
import urllib.request

def FB_IDs(page_name, access_token=access_token):
    """ get page's numeric information """
    # construct URL
    base = "https://graph.facebook.com/v2.4"
    node = "/" + str(page_name)
    parameters = "/?access_token=%s" % access_token
    url = base + node + parameters
    # retrieve data
    with urllib.request.urlopen(url) as url:
        data = json.loads(url.read().decode())
    return data

pages_ids_dict = {}
for page in pages:
    pages_ids_dict[page] = FB_IDs(page, access_token)['id']
How can I automate this and avoid the error?
There is a pretty standard helper function for this, which you might want to look at:
### HELPER FUNCTION ###
import time
import datetime
import urllib.request

def request_until_succeed(url):
    """ helper function to catch HTTP error 500 """
    req = urllib.request.Request(url)
    success = False
    while success is False:
        try:
            response = urllib.request.urlopen(req)
            if response.getcode() == 200:
                success = True
        except Exception as e:
            print(e)
            time.sleep(5)
            print("Error for URL")
            # or, to include the URL and timestamp:
            # print("Error for URL %s: %s" % (url, datetime.datetime.now()))
    return response.read()
Plug that helper into your function so the request goes through it, and you should be fine.
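For example (a sketch reusing the names from your question, with access_token defined as before), FB_IDs could fetch through the helper instead of calling urllib.request.urlopen directly:

import json

def FB_IDs(page_name, access_token=access_token):
    """ get page's numeric information, retrying until the request succeeds """
    url = "https://graph.facebook.com/v2.4/%s/?access_token=%s" % (page_name, access_token)
    return json.loads(request_until_succeed(url).decode())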
I'm trying to extract emails from web pages; here is my email grabber function:
import re
import requests
import bs4 as bs

def emlgrb(x):
    email_set = set()
    for url in x:
        try:
            response = requests.get(url)
            soup = bs.BeautifulSoup(response.text, "lxml")
            emails = set(re.findall(r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+", soup.text, re.I))
            email_set.update(emails)
        except (requests.exceptions.MissingSchema, requests.exceptions.ConnectionError):
            continue
    return email_set
This function should be fed by another function that creates a list of URLs. The feeder function:
def handle_local_links(url, link):
    if link.startswith("/"):
        return "".join([url, link])
    return link

def get_links(url):
    try:
        response = requests.get(url, timeout=5)
        soup = bs.BeautifulSoup(response.text, "lxml")
        body = soup.body
        links = [link.get("href") for link in body.find_all("a")]
        links = [handle_local_links(url, link) for link in links]
        links = [str(link.encode("ascii")) for link in links]
        return links
    except Exception:
        # (the real code catches several specific exceptions; each returns [])
        return []
The function continues with many except clauses which, if raised, return an empty list (not important here). However, the return value from get_links() looks like this:
["b'https://pythonprogramming.net/parsememcparseface//'"]
Of course there are many links in the list (I cannot post it all because of reputation). The emlgrb() function is not able to process that list (InvalidSchema: No connection adapters were found). However, if I manually remove the b and the redundant quotes, so that the list looks like this:
['https://pythonprogramming.net/parsememcparseface//']
emlgrb() works. Any suggestion about where the problem is, or how to create a "cleaning function" that turns the first list into the second, is welcome.
Thanks
The solution is to drop .encode('ascii'):
def get_links(url):
    try:
        response = requests.get(url, timeout=5)
        soup = bs.BeautifulSoup(response.text, "lxml")
        body = soup.body
        links = [link.get("href") for link in body.find_all("a")]
        links = [handle_local_links(url, link) for link in links]
        links = [str(link) for link in links]
        return links
    except Exception:
        # keep your original exception handling here
        return []
You can pass an encoding to str(), as shown in the docs: str(object=b'', encoding='utf-8', errors='strict').
That's because str() calls .__repr__() or .__str__() on the object, so if it is bytes, the output is "b'string'". That is also what gets printed when you do print(bytes_obj). And calling .encode() on a str object creates a bytes object!
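A tiny illustration using one of the links above:

link = "https://pythonprogramming.net/parsememcparseface//"
as_bytes = link.encode("ascii")          # bytes object
print(str(as_bytes))                     # b'https://pythonprogramming.net/parsememcparseface//'
print(str(as_bytes, encoding="utf-8"))   # https://pythonprogramming.net/parsememcparseface//
print(as_bytes.decode("ascii"))          # same as the line above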
I'm trying to query Facebook for different information, for example the friends list, and it works fine, but of course it only gives a limited number of results. How do I access the next batch of results?
import facebook
import json

ACCESS_TOKEN = ''

def pp(o):
    with open('facebook.txt', 'a') as f:
        json.dump(o, f, indent=4)

g = facebook.GraphAPI(ACCESS_TOKEN)
pp(g.get_connections('me', 'friends'))
The result JSON does give me paging.cursors.before and paging.cursors.after values, but where do I put them?
I'm exploring the Facebook Graph API through the facepy library for Python (it works on Python 3 too), but I think I can help.
TL;DR:
You need to append &after=YOUR_AFTER_CODE to the URL you've called (e.g. https://graph.facebook.com/v2.8/YOUR_FB_ID/friends/?fields=id,name), giving you a link like https://graph.facebook.com/v2.8/YOUR_FB_ID/friends/?fields=id,name&after=YOUR_AFTER_CODE, to which you then make a GET request.
You'll need requests in order to make a GET request to the Graph API using your user ID (I'm assuming you know how to find it programmatically) and a URL similar to the one I give you below (see the URL variable).
import facebook
import json
import requests

ACCESS_TOKEN = ''
YOUR_FB_ID = ''
URL = ("https://graph.facebook.com/v2.8/{}/friends"
       "?access_token={}&fields=id,name&limit=50&after=").format(YOUR_FB_ID, ACCESS_TOKEN)

def pp(o):
    all_friends = []
    if 'data' in o:
        all_friends.extend(o['data'])
        paging = o.get('paging', {})
        if 'next' in paging:
            # Graph API gives you a ready-made URL for the next page
            resp = requests.get(paging['next'])
            all_friends.extend(resp.json().get('data', []))
        elif 'after' in paging.get('cursors', {}):
            # ...or build it yourself from the "after" cursor
            new_url = URL + paging['cursors']['after']
            resp = requests.get(new_url)
            all_friends.extend(resp.json().get('data', []))
        else:
            print("Something went wrong")
    # Do whatever you want with all_friends...
    with open('facebook.txt', 'a') as f:
        json.dump(o, f, indent=4)

g = facebook.GraphAPI(ACCESS_TOKEN)
pp(g.get_connections('me', 'friends'))
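If you want every page rather than just the next one, you can keep following paging['next'] until it disappears; a rough sketch built on the same imports as above:

def get_all_friends(first_page):
    """Follow Graph API cursor paging until there is no 'next' link."""
    friends = []
    page = first_page
    while True:
        friends.extend(page.get('data', []))
        next_url = page.get('paging', {}).get('next')
        if not next_url:
            break
        page = requests.get(next_url).json()
    return friends

all_friends = get_all_friends(g.get_connections('me', 'friends'))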
Hope this helps!