I want to download all my trades from Binance using the API. The problem I have is that the /api/v3/myTrades endpoint requires a market symbol. As there are hundreds of market symbols, I need to make hundreds of API calls to cover all possibilities.
Is there a more efficient way to achieve this? Either another API call that doesn't require a market symbol, or a way to get the symbols that I have traded in?
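For reference, this is the brute-force approach I'm trying to avoid, sketched here with the python-binance wrapper (client setup and rate-limit handling are assumed):

from binance.client import Client

client = Client(api_key, api_secret)  # keys assumed to be defined elsewhere

# One exchangeInfo call to enumerate every symbol, then one myTrades call
# per symbol: hundreds of requests, most of which return an empty list.
all_trades = []
for info in client.get_exchange_info()['symbols']:
    all_trades.extend(client.get_my_trades(symbol=info['symbol']))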
You can use https://explorer.bitquery.io to get historical info, though this isn't precisely what you're asking for, since it's a different API.
I am using the python-binance API wrapper.
After successfully sending a 'normal' MARKET order, I want to send a STOP_LOSS_LIMIT order. If I'm not mistaken, this is a subtype of Stop-Limit orders, which is what they are called in the Binance app's UI.
Anyway, this is my code for the STOP_LOSS_LIMIT order:
order2 = client.create_order(
    symbol='BTCUSDT',
    side=SIDE_SELL,
    type=ORDER_TYPE_STOP_LOSS_LIMIT,
    TimeInForce=TIME_IN_FORCE_GTC,
    stopPrice='33000',
    price='30000'
)
I get the following response:
Not all sent parameters were read; read '7' parameter(s) but was sent '8'.
Obviously there is something fundamentally wrong with the code. Can someone provide me with an example for this type of order? What are the necessary parameters, and what do they do? Please do not just link the official documentation; sadly, it has no examples for these order types.
Seems like what I was trying to achieve is not possible with Spot trading. Once I switched to Futures, it all worked out.
This is how I set the leverage to 1:
client.futures_change_leverage(symbol='BNBUSDT', leverage=1)
I conclude that Stop Loss/Take Profit orders do not work with Spot trading, either by design (which actually makes sense), or because of some bugs.
Anyway, if anyone ever hits the same wall, this is how to set a stop loss order on existing Futures (buy) orders in python-binance:
FuturesStopLoss = client.futures_create_order(
    symbol='BNBUSDT',
    type='STOP_MARKET',
    side='SELL',
    stopPrice=220,        # trigger price
    closePosition=True    # close the entire position when triggered
)
Changing side to BUY sets a stop loss order on existing sell orders.
P.S. Achieving the same with Spot trading is most likely possible by using Websocket streams and executing Market orders when the desired prices are reached. However, I did not want to go that route.
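For anyone who does want that route, here is a rough, untested sketch using python-binance's ThreadedWebsocketManager (the symbol, trigger price, and quantity are made-up illustrations):

from binance import ThreadedWebsocketManager
from binance.client import Client
from binance.enums import SIDE_SELL, ORDER_TYPE_MARKET

client = Client(api_key, api_secret)  # keys assumed to be defined elsewhere
twm = ThreadedWebsocketManager(api_key=api_key, api_secret=api_secret)

STOP_PRICE = 30000.0  # hypothetical trigger price
QUANTITY = 0.001      # hypothetical position size

def on_ticker(msg):
    # 'c' is the last price in the ticker payload
    if float(msg['c']) <= STOP_PRICE:
        client.create_order(symbol='BTCUSDT', side=SIDE_SELL,
                            type=ORDER_TYPE_MARKET, quantity=QUANTITY)
        twm.stop()

twm.start()
twm.start_symbol_ticker_socket(callback=on_ticker, symbol='BTCUSDT')
twm.join()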
Using the raw REST API, you can do it with this structure.
StopLoss:
POST https://fapi.binance.com/fapi/v1/order
{
    "symbol": "BTCUSDT",
    "side": "BUY",
    "type": "STOP_MARKET",
    "stopPrice": 40100,
    "closePosition": true
}
I am building a package that uses the Google Analytics API for Python.
However, in several cases where I have multiple dimensions, the extraction by day is sampled.
I know that if I use sampling_level = LARGE, the API will use a more accurate sample.
But does anybody know of a way to shape a request so that a single day can be extracted without sampling?
Thanks in advance.
Setting the sampling level to LARGE is the only method we have to influence the amount of sampling, but as you already know, it doesn't prevent sampling altogether.
The only way to reduce the chances of sampling is to request less data. A reduced number of dimensions and metrics, as well as a shorter date range, are the best ways to ensure that you don't get sampled data.
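A common workaround along those lines is to issue one request per day, since sampling is applied per request. A rough, untested sketch with the Reporting API v4 Python client (the view ID, metric, and dimension are placeholders; credentials setup is assumed):

from datetime import date, timedelta
from googleapiclient.discovery import build

# 'credentials' assumed to be created elsewhere (service account or OAuth)
analytics = build('analyticsreporting', 'v4', credentials=credentials)

def fetch_day(day):
    body = {'reportRequests': [{
        'viewId': '12345678',  # placeholder view ID
        'dateRanges': [{'startDate': day.isoformat(), 'endDate': day.isoformat()}],
        'metrics': [{'expression': 'ga:sessions'}],
        'dimensions': [{'name': 'ga:pagePath'}],
        'samplingLevel': 'LARGE',
    }]}
    return analytics.reports().batchGet(body=body).execute()

day = date(2019, 1, 1)
while day <= date(2019, 1, 31):
    report = fetch_day(day)['reports'][0]
    # 'samplesReadCounts' only appears in the response when it IS sampled
    if 'samplesReadCounts' in report['data']:
        print(day, 'still sampled; trim dimensions or metrics further')
    day += timedelta(days=1)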
This is probably not the answer you want to hear, but one way of getting unsampled data from Google Analytics is to use Unsampled Reports. However, this requires that you sign up for Google Marketing Platform. With these you can create an unsampled report request using the API or the UI.
There is also a way to export the data to BigQuery, but you lose the analysis that Google provides and will have to do that yourself. This, too, requires that you sign up for Google Marketing Platform.
There are several tactics for building unsampled reports; the most popular is splitting your report into shorter time ranges, down to hours. Mark Edmondson did great work on anti-sampling in his R package, so you might find it useful. You could start with this blog post: https://code.markedmondson.me/anti-sampling-google-analytics-api/
I am trying to create an app that will help users find restaurants/movie theaters/malls/etc. to hang out at, based on ratings and distance. Beyond the place itself, I would also like more detailed information about it. For example, if I were to look for parks, I would also like to know if there's a basketball or tennis court there. Ratings and popularity would also be important for prioritizing suggestions.
After looking through all three of the APIs, I could not really find any substantial differences other than their search limits. Could anyone really differentiate each API for me? Maybe even recommend one based on my specific need?
Thanks!
The Foursquare API would fit this use case perfectly because you can supply very specific filters through the API. Also, they have extensive coverage around the world, unlike Google or Yelp.
I would check out the venues/explore endpoint and use a categoryId of Parks. You can use a query parameter of "basketball" or "tennis" to find parks that have courts for these.
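A rough sketch of such a request with Python's requests library (the credentials and the Parks category ID are placeholders; look the real ID up in Foursquare's category list):

import requests

params = {
    'client_id': 'YOUR_CLIENT_ID',          # placeholder credentials
    'client_secret': 'YOUR_CLIENT_SECRET',
    'v': '20180323',                        # API version date
    'near': 'Chicago, IL',
    'categoryId': 'PARKS_CATEGORY_ID',      # hypothetical; see the category list
    'query': 'basketball',
    'limit': 20,
}
resp = requests.get('https://api.foursquare.com/v2/venues/explore', params=params)
for item in resp.json()['response']['groups'][0]['items']:
    venue = item['venue']
    print(venue['name'], venue.get('rating'))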
What queries can we use with media/popular? Can we localize it by country or geolocation?
Also, is there a way to get the discovery feature's results with the API?
This API is no longer supported.
Ref : https://www.instagram.com/developer/endpoints/media/
I was recently struggling with the same problem and came to the conclusion that there is no way other than the hard one.
If you want location-based popular images, you must go with the locations endpoint.
https://api.instagram.com/v1/locations/214413140/media/recent
This link brings up recent media from a given location, the key being the location ID. Your job is now to follow the simple pagination API and merge the returned arrays into one big bunch of JSON. The $response['pagination']['next_max_id'] parameter drives pagination: you simply send each subsequent request with the max_id from the previous response.
https://api.instagram.com/v1/locations/214413140/media/recent?max_id=1093665959941411696
The end result will depend on the amount of information you gathered. Finally, you just need to sort the array by like count and you're good to go with whatever you were planning to do.
Of course, an important part is to cache the results locally rather than regenerating them every time a user opens the webpage, not only because of generation time but also because of the limited number of requests per hour.
Hopefully someone will come up with a better solution, or the Instagram API will finally support media/popular by location.
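Putting it together, a rough Python sketch of that fetch-merge-sort loop (assuming the legacy endpoint above and a valid access token; error handling and rate limiting are left out):

import requests

BASE = 'https://api.instagram.com/v1/locations/214413140/media/recent'
params = {'access_token': 'YOUR_ACCESS_TOKEN'}  # placeholder token

media = []
while True:
    resp = requests.get(BASE, params=params).json()
    media.extend(resp.get('data', []))
    next_max_id = resp.get('pagination', {}).get('next_max_id')
    if not next_max_id:
        break
    params['max_id'] = next_max_id  # pass the previous response's id

# Sort the merged pages by like count, most liked first.
media.sort(key=lambda m: m['likes']['count'], reverse=True)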
I've just gotten into the AdWords API for an upcoming project. What I need is actually quite simple, but I want to go about it in the most efficient way.
I need code to retrieve the Global Monthly Search Volume for multiple keywords (in the millions). Reading about BulkMutateJobService, the Google documentation says:
If you want to perform a very large number of operations (up to 500,000) on your AdWords campaigns and child objects, use BulkMutateJobService
But later on the same page they give these limits:
No more than 25 OperationStream objects are allowed.
No more than 10,000 operations are allowed per BulkMutateRequest.
No more than 100 request parts are allowed.
as well as a few others. See the source here: http://code.google.com/apis/adwords/docs/bulkjobs.html
Now, my questions:
What do these numbers mean? If I have 1 million keywords I need information on, do I only need to perform two requests with 500K keywords each?
Also, are there examples of code that does this task?
I only need the Global Monthly Search Volume and CPC for each keyword. I've searched online, but to no avail: I haven't found any good example, or anything leaning in that direction, that utilizes BulkMutateJobService.
Any links, resources, code, or advice you can offer would be appreciated.
The BulkMutateJobService only allows mutates, i.e. changes, to the account. It does not provide bulk retrieval of information.
You can fetch monthly search volume for keywords using the TargetingIdeaService. If you use it in STATS mode, you can include up to 2,500 keywords per request.
Estimated CPC values are obtained from the TrafficEstimatorService. You can request up to 500 keywords per request.
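For scale: at 2,500 keywords per STATS request, a million keywords works out to roughly 400 requests. An untested sketch of the batching with the googleads Python client library (the API version string and the load_keywords helper are placeholders):

from googleads import adwords

CHUNK = 2500  # STATS-mode keyword limit per request

def batches(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# Assumes credentials in a googleads.yaml file.
client = adwords.AdWordsClient.LoadFromStorage()
service = client.GetService('TargetingIdeaService', version='v201809')

keywords = load_keywords()  # hypothetical helper returning your keyword list

for batch in batches(keywords, CHUNK):
    selector = {
        'ideaType': 'KEYWORD',
        'requestType': 'STATS',
        'requestedAttributeTypes': ['KEYWORD_TEXT', 'SEARCH_VOLUME'],
        'searchParameters': [{
            'xsi_type': 'RelatedToQuerySearchParameter',
            'queries': batch,
        }],
        'paging': {'startIndex': 0, 'numberResults': CHUNK},
    }
    page = service.get(selector)
    if 'entries' in page:
        for entry in page['entries']:
            # each entry's 'data' maps attribute types to their values
            ...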
FYI, there is an official AdWords API Forum that you can ask questions on.