Binance API in python APIError(code=-1121): Invalid symbol - python-3.x

I am trying to code a Binance buy function. However, the code

from binance.enums import *
order = client.create_order(symbol='DOGEUSDT', side='BUY', type='MARKET', quantity=475, timeInForce='GTC')

outputs > APIError(code=-1121): Invalid symbol.
Also, for the same symbol,

print(client.get_symbol_info(symbol="DOGEUSDT"))

outputs > None
The symbol DOGEUSDT exists in the order book: https://api.binance.com/api/v3/ticker/bookTicker
I don't know why the same symbol I get from Binance is invalid.

Are you using the testnet? I had the same error; it was resolved when I removed testnet=True from the client initialization.

Two things can cause this:

client.API_URL = 'https://testnet.binance.vision/api'

in that case, change it to

client.API_URL = 'https://api.binance.com/api'

or

Client(api_key, secret_key, testnet=True)

in that case, set testnet=False or remove the argument.
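The testnet and production endpoints expose different symbol lists, which is why a symbol that exists on production (like DOGEUSDT) can come back as "Invalid symbol" against the testnet. A minimal sketch of the two base URLs involved (the resolve_base_url helper is an illustrative assumption, not part of python-binance):

```python
# Hypothetical helper: pick the REST base URL depending on whether the
# client should talk to the Binance testnet or to production. Production
# symbols are not guaranteed to exist on the testnet.
PRODUCTION_URL = "https://api.binance.com/api"
TESTNET_URL = "https://testnet.binance.vision/api"

def resolve_base_url(testnet: bool) -> str:
    """Return the base URL matching the testnet flag."""
    return TESTNET_URL if testnet else PRODUCTION_URL

print(resolve_base_url(False))  # https://api.binance.com/api
print(resolve_base_url(True))   # https://testnet.binance.vision/api
```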


bioMart Package error: error in function useDataset

I am trying to use the biomaRt package to access data from Ensembl, but I get an error message when using the useDataset() function. My code is shown below.

library(biomaRt)
listMarts()
ensembl = useMart("ENSEMBL_MART_ENSEMBL")
listDatasets(ensembl)
ensembl = useDataset("hsapiens_gene_ensembl", mart = ensembl)
When I call the useDataset() function I get an error message like this:
> ensembl = useDataset("hsapiens_gene_ensembl",mart = ensembl)
Ensembl site unresponsive, trying asia mirror
Error in textConnection(text, encoding = "UTF-8") :
invalid 'text' argument
and sometimes a different error message appears:
> ensembl = useDataset("hsapiens_gene_ensembl",mart = ensembl)
Ensembl site unresponsive, trying asia mirror
Error in textConnection(bmResult) : invalid 'text' argument
It seems that the mirror automatically changes to asia, useast, or uswest, but the error message still shows up over and over again, and I don't know what to do.
Could anyone help me with this? I will be very grateful for any help or suggestions.
Kind regards Riley Qiu, Dongguan, China

Error occurred on routing_api __get _route response status code 403

I'd like to compute the travel time between two points, so I use the RoutingApi from the herepy library (as shown in the example at https://github.com/abdullahselek/HerePy/blob/master/examples/routing_api.py):
from herepy import (
    RoutingApi,
    RouteMode,
    MatrixRoutingType,
    MatrixSummaryAttribute,
    RoutingTransportMode,
    RoutingMode,
    RoutingApiReturnField,
    RoutingMetric,
    RoutingApiSpanField,
    AvoidArea,
    AvoidFeature,
    Avoid,
    Truck,
    ShippedHazardousGood,
    TunnelCategory,
    TruckType,
)

routing_api = RoutingApi(api_key="my_key")
response = routing_api.truck_route(
    waypoint_a=[lat_a, lon_a],
    waypoint_b=[lat_b, lon_b],
    modes=[RouteMode.truck, RouteMode.fastest],
)
print(response.as_dict())
However, even though my API key is valid and "enabled" on the HERE developer platform, I get the following error message:
HEREError: Error occurred on routing_api __get _route response status code 403
Can anyone explain why this is happening and how to solve it? Thank you in advance.
The issue is with the coordinates.
Looking at the example at https://github.com/abdullahselek/HerePy/blob/master/examples/routing_api.py, if you try:

response = routing.car_route(
    waypoint_a=[41.9798, -87.8801],
    waypoint_b=[41.9043, -87.9216],
    modes=[herepy.RouteMode.car, herepy.RouteMode.fastest],
)
print(response.as_dict())

it should work.
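Since the answer points at the coordinates, a quick offline sanity check of the waypoints can rule out malformed input before the request is ever sent. This is only an illustrative sketch; the validate_waypoint helper is an assumption, not part of herepy:

```python
def validate_waypoint(waypoint):
    """Check that a [lat, lon] pair is well-formed: exactly two values,
    with latitude in [-90, 90] and longitude in [-180, 180]."""
    if len(waypoint) != 2:
        return False
    lat, lon = waypoint
    return -90 <= lat <= 90 and -180 <= lon <= 180

print(validate_waypoint([41.9798, -87.8801]))  # True
print(validate_waypoint([419798, -878801]))    # False: degrees missing the decimal point
```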

Google cloud function (python) does not deploy - Function failed on loading user code

I'm deploying a simple Python function to Google Cloud Functions but cannot get it to save. It shows this error:
"Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation."
The logs don't seem to show much that would indicate an error in the code. I followed this guide: https://blog.thereportapi.com/automate-a-daily-etl-of-currency-rates-into-bigquery/
The only differences are the environment variables and the endpoint I'm using.
The code is below; it is just a GET request followed by a push of data into a table.
import requests
import json
import time
import os
from google.cloud import bigquery

# Set default values for these variables if they are not found in environment variables
PROJECT_ID = os.environ.get("PROJECT_ID", "xxxxxxxxxxxxxx")
EXCHANGERATESAPI_KEY = os.environ.get("EXCHANGERATESAPI_KEY", "xxxxxxxxxxxxxxx")
REGIONAL_ENDPOINT = os.environ.get("REGIONAL_ENDPOINT", "europe-west1")
DATASET_ID = os.environ.get("DATASET_ID", "currency_rates")
TABLE_NAME = os.environ.get("TABLE_NAME", "currency_rates")
BASE_CURRENCY = os.environ.get("BASE_CURRENCY", "SEK")
SYMBOLS = os.environ.get("SYMBOLS", "NOK,EUR,USD,GBP")

def hello_world(request):
    latest_response = get_latest_currency_rates()
    write_to_bq(latest_response)
    return "Success"

def get_latest_currency_rates():
    PARAMS = {'access_key': EXCHANGERATESAPI_KEY, 'symbols': SYMBOLS, 'base': BASE_CURRENCY}
    response = requests.get("https://api.exchangeratesapi.io/v1/latest", params=PARAMS)
    print(response.json())
    return response.json()

def write_to_bq(response):
    # Instantiate a client
    bigquery_client = bigquery.Client(project=PROJECT_ID)

    # Prepare a reference to the dataset and table
    dataset_ref = bigquery_client.dataset(DATASET_ID)
    table_ref = dataset_ref.table(TABLE_NAME)
    table = bigquery_client.get_table(table_ref)

    # Get the current timestamp so we know how fresh the data is
    timestamp = time.time()
    jsondump = json.dumps(response)  # Returns a string

    # Ensure the response is a string, not JSON
    rows_to_insert = [{"timestamp": timestamp, "data": jsondump}]
    errors = bigquery_client.insert_rows(table, rows_to_insert)  # API request
    print(errors)
    assert errors == []
I tried just the part that does the GET request in an offline editor and can confirm the response works fine. I suspect it might have something to do with permissions or the way the script tries to access the database.
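"Function failed on loading user code" usually points at an import-time failure (a dependency missing from requirements.txt, a syntax error, or module-level code that raises) rather than a failure inside the handler itself. As a hedged sketch, the same failure can often be reproduced locally by importing the file before deploying; module_imports_cleanly is a hypothetical helper, not part of any Google library:

```python
import importlib.util
import pathlib
import tempfile

def module_imports_cleanly(path):
    """Return (True, None) if the file imports without raising, else (False, error).

    Importing executes all module-level code, which is exactly what Cloud
    Functions does when loading user code.
    """
    spec = importlib.util.spec_from_file_location("user_code", path)
    module = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(module)
        return True, None
    except Exception as exc:
        return False, exc

# Demo with a throwaway file that imports cleanly
with tempfile.TemporaryDirectory() as d:
    good = pathlib.Path(d) / "main.py"
    good.write_text("def hello_world(request):\n    return 'Success'\n")
    ok, err = module_imports_cleanly(good)
    print(ok)  # True
```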

400 Caller's project doesn't match parent project

I have this block of code that basically translates text from one language to another using the cloud translate API. The problem is that this code always throws the error: "Caller's project doesn't match parent project". What could be the problem?
translation_separator = "translated_text: "
language_separator = "detected_language_code: "
translate_client = translate.TranslationServiceClient()
# parent = translate_client.location_path(
#     self.translate_project_id, self.translate_location
# )
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = (
    os.getcwd()
    + "/translator_credentials.json"
)
# Text can also be a sequence of strings, in which case this method
# will return a sequence of results for each text.
try:
    result = str(
        translate_client.translate_text(
            request={
                "contents": [text],
                "target_language_code": self.target_language_code,
                "parent": f'projects/{self.translate_project_id}/'
                          f'locations/{self.translate_location}',
                "model": self.translate_model,
            }
        )
    )
    print(result)
except Exception as e:
    print("error here>>>>>", e)
Your issue seems to be related to the authentication method you are using in your application; please follow the guide on authentication methods for the Translate API. If you are trying to pass the credentials in code, you can explicitly point to your service account file like this:
def explicit():
    from google.cloud import storage

    # Explicitly use service account credentials by specifying the private key
    # file.
    storage_client = storage.Client.from_service_account_json(
        'service_account.json')
There is also a codelab for getting started with the Translation API in Python; it is a great step-by-step guide for running the Translate API.
If the issue persists, you can file an issue on Google's Public Issue Tracker.
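The error text itself describes a mismatch: the caller's project (the one behind the service-account credentials) differs from the project embedded in the request's parent path. That invariant can be checked offline; project_matches_parent and the inline JSON below are illustrative assumptions, not Google API calls:

```python
import json

def project_matches_parent(credentials_json, parent):
    """Check that the project_id in a service-account file matches the
    project embedded in a "projects/<id>/locations/<loc>" parent path."""
    creds_project = json.loads(credentials_json)["project_id"]
    parent_project = parent.split("/")[1]
    return creds_project == parent_project

creds = '{"project_id": "my-project"}'  # hypothetical minimal service-account file
print(project_matches_parent(creds, "projects/my-project/locations/global"))     # True
print(project_matches_parent(creds, "projects/other-project/locations/global"))  # False
```

If the check fails, either regenerate the credentials under the project named in the parent, or change translate_project_id to match the credentials.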

dnspython aws lambda function returning "dns.resolver.Answer object at 0x7fb830c2a450"

I originally had this script working using the socket module, which is why x is defined the way it is. However, I ended up needing to specify the DNS server being used.
This script is currently running successfully in AWS Lambda, however instead of an FQDN/IP address as an output, I am receiving the output defined at the bottom (which is also coming through in the SNS topic).
I can't seem to find any information about what might be going on here. I am hoping someone can shed some light on how I can fix this.
import boto3
import dns.resolver
import dns.query

my_resolver = dns.resolver.Resolver()
my_resolver.nameservers = ['0.0.0.0']  # IP omitted, but an internal DNS server is being used

sns = boto3.client('sns')

a = my_resolver.query('crl.godaddy.com')
x = ('gdcrl.godaddy.com.akadns.net', ['crl.godaddy.com'], ['72.167.18.237'])

def cb_crl_check(event=None, context=None):
    crlgodaddy(a, x)

def crlgodaddy(a, x):
    if a == x:
        pass
    else:
        response = sns.publish(
            TopicArn='arn:aws:sns:xxxxxxxxxxxxxx',
            Message=('crl.godaddy.com IP address has changed! \n \nThe old '
                     'information was: \n {0} \n \nThe new information is: \n {1}').format(x, a),
            Subject='"crl.godaddy.com IP change"')

if __name__ == '__main__':
    cb_crl_check()
AWS Lambda output:

START RequestId: b637c14d-38f0-46cf-83bf-fd1279aff9d8 Version: $LATEST
<dns.resolver.Answer object at 0x7fb830c2a450>
END RequestId: b637c14d-38f0-46cf-83bf-fd1279aff9d8
The answer: I needed to use rrset[0] from dnspython to get this to work. The code was changed as follows:

a = str(my_resolver.query('crl.godaddy.com').rrset[0])
x = '72.167.18.237'
