Why can't I get the price pair (USDT/KGS, USDT/KZT) by ticker in the Binance API? - python-3.x

So, this is my code:
# Import libraries
import json
import requests
# defining key/request url
key = "https://api.binance.com/api/v3/ticker/price?symbol=USDTKGS"
# requesting data from url
data = requests.get(key)
data = data.json()
print(f"{data['symbol']} price is {data['price']}")
But for some reason I get this error:
Traceback (most recent call last):
  File "rate.py", line 11, in <module>
    print(f"{data['symbol']} price is {data['price']}")
KeyError: 'symbol'
Probably this pair doesn't exist, but what should I do in that situation? I need to get the pair through the API and don't see any other way to do so. Please help!
I tried common pairs like USDT/UAH and EUR/USDT and they work, but USDT/KGS and USDT/KZT don't: they raise this error, and I still need their prices.

There is no such pair in the Binance API currently (as of 12/10/2022).
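If you need your script to cope with symbols that Binance may not list, one option is to check the response before indexing into it. Below is a minimal sketch (my own illustration, not part of the answer above); the error payload with 'code' and 'msg' keys is an assumption based on how the Binance REST API typically reports an invalid symbol.

import requests

def get_price(symbol):
    url = "https://api.binance.com/api/v3/ticker/price"
    data = requests.get(url, params={"symbol": symbol}).json()
    if "price" in data:
        return float(data["price"])
    # For unlisted pairs Binance returns an error object instead,
    # e.g. {"code": -1121, "msg": "Invalid symbol."} (assumption).
    print(f"{symbol}: {data.get('msg', 'unknown error')}")
    return None

for symbol in ("USDTUAH", "USDTKGS", "USDTKZT"):
    print(symbol, get_price(symbol))

You can also pull the full list of listed symbols from the /api/v3/exchangeInfo endpoint and check membership before querying prices.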

Related

How to count the number of methods other than OPTIONS on AWS API Gateway with Python?

I need to count the number of methods on an AWS API Gateway. I'm very new to Python scripting.
The path doesn't matter; I just don't want the OPTIONS method to be counted. In other words, count the GET, POST, PUT, etc. methods and exclude OPTIONS.
Here is the code that I'm currently trying to implement:
import boto3

# Create a client for the API Gateway service
client = boto3.client('apigateway')

# Get the ID of the API Gateway
api_id = 'my_api_id'

# Get the list of resources in the API Gateway
resources = client.get_resources(restApiId=api_id)

# Initialize a variable to store the number of methods
method_count = 0

# Get the list of methods for each resource
for resource in resources['items']:
    resource_methods = client.get_methods(restApiId=api_id, resourceId=resource['id'])
    for method in resource_methods['items']:
        if method['httpMethod'] != 'OPTIONS':
            method_count += 1

# Print the number of methods
print(method_count)
I get the error below and am still confused about how to do the count. Please help; your response is appreciated.
Traceback (most recent call last):
  File "aws_count_endpoint_api-2.py", line 17, in <module>
    resource_methods = client.get_methods(restApiId=api_id, resourceId=resource['id'])
  File "/home/users/.local/lib/python3.8/site-packages/botocore/client.py", line 646, in __getattr__
    raise AttributeError(
AttributeError: 'APIGateway' object has no attribute 'get_methods'
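The traceback points at the core problem: the boto3 API Gateway client has no get_methods call. A minimal sketch of one way to do the count, assuming get_resources is called with embed=['methods'] so that each resource carries its resourceMethods mapping (and keeping the placeholder API id from the question):

import boto3

client = boto3.client('apigateway')
api_id = 'my_api_id'  # placeholder, as in the question

# embed=['methods'] asks API Gateway to include each resource's methods.
# Pagination of get_resources is ignored here for brevity.
resources = client.get_resources(restApiId=api_id, embed=['methods'])

method_count = 0
for resource in resources['items']:
    # 'resourceMethods' is a dict keyed by HTTP method name (GET, POST, OPTIONS, ...).
    for http_method in resource.get('resourceMethods', {}):
        if http_method != 'OPTIONS':
            method_count += 1

print(method_count)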

Python3 traceback issue

First, let me disclaim that I am extremely new to the coding world and work requires me to use Python going forward. My experience is limited to having just completed SANS 573.
I'm trying to obtain the Date and Time from image files in their various formats. Excuse the example, I'm just using this to try and get it working.
This is what I currently have:
from pathlib import Path
from PIL import Image

L88 = Path("L88.jpg")

def corvette(L88):
    imgobj = Image.open(L88)
    info = imgobj._getexif()
    return info[36867]

>>> corvette(L88)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in corvette
KeyError: 36867
>>>
I am running the function from the desktop, which is where the image is currently saved. I don't understand the error messages. KeyError: 36867 also has me confused, because when I looked up the tag, that number is what I found and what worked when I did my course.
The _getexif() method returns a Python data structure called a dictionary. The values in a dictionary are accessed by their key; in this case, the keys are numbers. Not all keys are mandatory. You can use the keys() method to see which ones exist:
print(info.keys())
You can also test if a key exists:
if 36867 in info:
    return info[36867]
else:
    return None
Note that there is a mapping between these numbers and their EXIF tag names available in PIL.ExifTags; you can use it to create a readable dictionary. Also note that not all JPEG images have EXIF information. In that case, the _getexif() method returns None, so you should take that into account and test for it:
from PIL import Image, ExifTags

def image_info(path):
    imgobj = Image.open(path)
    info = imgobj._getexif()
    if info is None:
        return None
    rv = {
        ExifTags.TAGS[key]: info[key]
        for key in info.keys()
    }
    return rv
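As a usage sketch (assuming the L88.jpg file from the question sits next to the script): tag 36867 maps to the name 'DateTimeOriginal' in ExifTags.TAGS, so the original lookup becomes a readable key:

info = image_info("L88.jpg")
if info is None:
    print("No EXIF data in this image")
else:
    # 36867 corresponds to 'DateTimeOriginal' in ExifTags.TAGS.
    print(info.get("DateTimeOriginal", "DateTimeOriginal tag not present"))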

pymodm can't find an object, while pymongo successfully finds it

I have a problem getting an object from a MongoDB instance. If I search for this object with the pymongo interface, everything is fine and the object is found. If I try to do the very same thing with pymodm, it fails with an error.
Here is what I'm doing:
from pymodm import connect, MongoModel, fields
from pymongo import MongoClient

class detection_object(MongoModel):
    legacy_id = fields.IntegerField()

client = MongoClient(MONGODB_URI)
db = client[MONGODB_DEFAULT_SCHEME]
collection = db['detection_object']
do = collection.find_one({'legacy_id': 1437424})
print(do)

connect(MONGODB_URI)
do = detection_object.objects.raw({'legacy_id': 1437424}).first()
print(do)
The first print outputs this: {'_id': ObjectId('5c4099dcffa4fb11494d983d'), 'legacy_id': 1437424}. However, during the execution of this command: do = detection_object.objects.raw({'legacy_id': 1437424}).first() interpreter fails with the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/pymodm/queryset.py", line 127, in first
    return next(iter(self.limit(-1)))
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/konsof01/PycharmProjects/testthisfuckingshit/settings.py", line 29, in <module>
    do = detection_object.objects.raw({'legacy_id': 1437424}).first()
  File "/usr/local/lib/python3.7/site-packages/pymodm/queryset.py", line 129, in first
    raise self._model.DoesNotExist()
__main__.DoesNotExist
How can this be? I'm trying to query the very same object, with the same connection and collection. Any ideas, please?
You could try it as follows:
detection_object.objects.raw({'legacy_id': "1437424"}).first()
Probably legacy_id is stored as a string. Otherwise, make sure the database name is present at the end of MONGODB_URI, as is emphasized in the docs.
Each document in your 'detection_object' collection is required to have a '_cls' attribute. The string value stored in this attribute should be __main__.classname (the class name, according to your code, is detection_object). For example, a document in your database needs to look like this:
{'_id': ObjectId('5c4099dcffa4fb11494d983d'), 'legacy_id': 1437424, '_cls': '__main__.detection_object'}
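If adding _cls to every existing document is not practical, pymodm models can also opt out of that check. A minimal sketch, assuming pymodm's Meta options final (which, per the pymodm docs, prevents the _cls field from being stored or required) and collection_name, plus a placeholder connection string:

from pymodm import connect, MongoModel, fields

MONGODB_URI = 'mongodb://localhost:27017/mydb'  # placeholder
connect(MONGODB_URI)

class detection_object(MongoModel):
    legacy_id = fields.IntegerField()

    class Meta:
        # 'final' marks the model as non-inheritable, so pymodm does not
        # store or filter on _cls (assumption based on the pymodm docs).
        final = True
        collection_name = 'detection_object'

do = detection_object.objects.raw({'legacy_id': 1437424}).first()
print(do.legacy_id)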

Youtube video download using python script

I have this script for downloading a YouTube video:
from pytube import YouTube

yt = YouTube('https://www.youtube.com/watch?v=kAGacI3JwS4')
#yt.title
#yt.thumbnail_url
#yt.streams.all()
stream = yt.streams.first()
#stream
stream.download('C:\\Users\\')
But I wanted this to happen based on a user prompt, so it should ask the user to enter the URL and then take it from there and download the video. So I did this:
>>> pk = input("Enter the url:")
Enter the url:https://www.youtube.com/watch?v=GhklL_kStto
>>> pk
'https://www.youtube.com/watch?v=GhklL_kStto'
>>> pk.title
<built-in method title of str object at 0x02362770>
>>> pk.stream()
Traceback (most recent call last):
  File "<pyshell#44>", line 1, in <module>
    pk.stream()
AttributeError: 'str' object has no attribute 'stream'
This is the error I am getting. Can someone help me solve this issue?
I appreciate your support!
I hope this answer isn't too late, but the problem is that pk is a string because of this:
pk = input("Enter the url:")
pk is assigned to a string here (whatever you type in), so you have never created the relevant YouTube object.
The description you get when you enter pk.title attests to that: it says title is a built-in method of a str object. You haven't made a YouTube object, so there is nothing with a stream method.
You can fix it like this:
url = input("Enter the url:")
pkl = YouTube(url)
stream = pkl.streams.first()
stream.download()
Hope this helps
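Putting the whole prompt-based flow together, here is a short sketch (the get_highest_resolution() convenience and the output_path value are my additions, not part of the answer above):

from pytube import YouTube

# Prompt for a URL, then download the video.
url = input("Enter the url:")
yt = YouTube(url)
print("Downloading:", yt.title)
stream = yt.streams.get_highest_resolution()
stream.download(output_path='downloads')  # example output folder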

Can you please show me how to fix this "ValueError: not enough values to unpack" in Beautiful Soup?

I am going through the Beautiful Soup page of this book: Python for Secret Agents by Steven Lott, 2nd edition (Dec 11, 2015).
http://imgur.com/EgBnXmm
I ran the code from the page and got this error:
Traceback (most recent call last):
  File "C:\Python35\Draft 1.py", line 12, in
    timestamp_tag, *forecast_list = strong_list
ValueError: not enough values to unpack (expected at least 1, got 0)
For the life of me I cannot figure out the correct way to fix the code listed here in its entirety:
from bs4 import BeautifulSoup
import urllib.request

query = "http://forecast.weather.gov/shmrn.php?mz=amz117&syn=amz101"
with urllib.request.urlopen(query) as amz117:
    document = BeautifulSoup(amz117.read())
content = document.body.find('div', id='content').div
synopsis = content.contents[4]
forecast = content.contents[5]
strong_list = list(forecast.findAll('strong'))
timestamp_tag, *forecast_list = strong_list
for strong in forecast_list:
    desc = strong.string.strip()
    print(desc, strong.nextSibling.string.strip())
Thanks a million.
You are experiencing the differences between parsers. And, since you have not provided one explicitly, BeautifulSoup picked one automatically based on internal ranking and what you have installed in the Python environment - I suspect, in your case, it picked up lxml or html5lib. Switch to html.parser:
document = BeautifulSoup(amz117.read(), "html.parser")
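A defensive variant as a sketch (the explicit parser argument comes from the answer above; the empty-list check is my addition):

from bs4 import BeautifulSoup
import urllib.request

query = "http://forecast.weather.gov/shmrn.php?mz=amz117&syn=amz101"
with urllib.request.urlopen(query) as amz117:
    # Name the parser explicitly so the result does not depend on what happens to be installed.
    document = BeautifulSoup(amz117.read(), "html.parser")

content = document.body.find('div', id='content').div
forecast = content.contents[5]
strong_list = list(forecast.findAll('strong'))

if not strong_list:
    raise SystemExit("No <strong> tags found - the page layout may have changed, "
                     "or a different parser produced a different tree.")

timestamp_tag, *forecast_list = strong_list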
