I would like to create a candle graph using the GDAX API. I am currently using the HTTP request for historical data (https://docs.gdax.com/#get-historic-rates), but that endpoint is marked as one where I should use the websocket API instead. Unfortunately I don't know how to handle historic data through the GDAX websocket API (https://github.com/coinbase/gdax-node). Could someone help me?
Here are 1-minute candlesticks built from the matches channel of the GDAX websocket feed:
"use strict";
const
WebSocket = require('ws'),
PRECISION = 8
function _getPair(pair) {
return pair.split('-')
}
let ws = new WebSocket('wss://ws-feed.pro.coinbase.com')
ws.on('open', () => {
ws.send(JSON.stringify({
"type": "subscribe",
"product_ids": [
"ETH-USD",
"BTC-USD"
],
"channels": [
"matches"
]
}))
})
let candles = {}
let lastCandleMap = {}
ws.on('message', msg => {
msg = JSON.parse(msg);
if (!msg.price)
return;
if (!msg.size)
return;
// Price and volume are sent as strings by the API
msg.price = parseFloat(msg.price)
msg.size = parseFloat(msg.size)
let productId = msg.product_id;
let [base, quote] = _getPair(productId);
// Round the time to the nearest minute, Change as per your resolution
let roundedTime = Math.floor(new Date(msg.time) / 60000.0) * 60
// If the candles hashmap doesnt have this product id create an empty object for that id
if (!candles[productId]) {
candles[productId] = {}
}
// If the current product's candle at the latest rounded timestamp doesnt exist, create it
if (!candles[productId][roundedTime]) {
//Before creating a new candle, lets mark the old one as closed
let lastCandle = lastCandleMap[productId]
if (lastCandle) {
lastCandle.closed = true;
delete candles[productId][lastCandle.timestamp]
}
// Set Quote Volume to -1 as GDAX doesnt supply it
candles[productId][roundedTime] = {
timestamp: roundedTime,
open: msg.price,
high: msg.price,
low: msg.price,
close: msg.price,
baseVolume: msg.size,
quoteVolume: -1,
closed: false
}
}
// If this timestamp exists in our map for the product id, we need to update an existing candle
else {
let candle = candles[productId][roundedTime]
candle.high = msg.price > candle.high ? msg.price : candle.high
candle.low = msg.price < candle.low ? msg.price : candle.low
candle.close = msg.price
candle.baseVolume = parseFloat((candle.baseVolume + msg.size).toFixed(PRECISION))
// Set the last candle as the one we just updated
lastCandleMap[productId] = candle
}
})
What they're suggesting is to fetch the historic rates once from that endpoint (https://docs.gdax.com/#get-historic-rates) and then keep your candles up to date using the websocket feed messages: whenever a 'match'/ticker message is received, you update the last candle accordingly.
From the docs: "The maximum number of data points for a single request is 300 candles. If your selection of start/end time and granularity will result in more than 300 data points, your request will be rejected." That is probably why you can't get more than 2 days' worth of data in a single request.
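For example, here is a minimal sketch of seeding the candles from the historic-rates REST endpoint before the websocket takes over. The api.pro.coinbase.com host and the [ time, low, high, open, close, volume ] bucket format come from the docs linked above, so verify them against the current API version; error handling is omitted:

const https = require('https')

// Fetch candle buckets for a product; granularity 60 = 1-minute candles
function getHistoricRates(productId, granularity = 60) {
    const url = `https://api.pro.coinbase.com/products/${productId}/candles?granularity=${granularity}`
    return new Promise((resolve, reject) => {
        https.get(url, { headers: { 'User-Agent': 'candle-example' } }, res => {
            let body = ''
            res.on('data', chunk => body += chunk)
            res.on('end', () => resolve(JSON.parse(body)))
        }).on('error', reject)
    })
}

// Seed the in-memory candles map that the websocket handler above keeps updating
getHistoricRates('BTC-USD').then(buckets => {
    for (const [time, low, high, open, close, volume] of buckets) {
        // convert each bucket into the same candle shape the websocket code maintains
    }
})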
P.S. I have a live order book and a basic candlestick chart implemented here: https://github.com/robevansuk/gdax-java/ - not everything is fully hooked together yet, but it's at least available for preview until it's complete.
Is there a way to query the results to show only data that has been published and is not in draft state? I looked in the documentation and didn't quite find it.
This is what I currently have:
export const getAllPages = async (context?) => {
    const client = createClient({
        space: process.env.CONTENTFUL_SPACE_ID,
        accessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
    });
    const pages = await client.getEntries({
        content_type: "page",
        include: 10,
        "fields.slug[in]": `/${context.join().replace(",", "/")}`,
    });
    return pages?.items?.map((item) => {
        const fields = item.fields;
        return {
            title: fields["title"],
        };
    });
};
You can detect that the entries you get are in Published state:
function isPublished(entity) {
    return !!entity.sys.publishedVersion &&
        entity.sys.version == entity.sys.publishedVersion + 1;
}
In your case, I would look for both Published and Changed:
function isPublishedChanged(entity) {
    return !!entity.sys.publishedVersion &&
        entity.sys.version >= entity.sys.publishedVersion + 1;
}
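For example, assuming the entries you inspect actually expose sys.publishedVersion (the Management API does; check whether your client returns it), you could filter a result set you already fetched with the helpers above:

// `entries` stands for whatever getEntries-style response you already have
const publishedOrChanged = entries.items.filter(isPublishedChanged);
const publishedOnly = entries.items.filter(isPublished);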
Check the documentation:
https://www.contentful.com/developers/docs/tutorials/general/determine-entry-asset-state/
To get only the published data you will need to use the Content Delivery API token. If you use the Content Preview API Token, you will receive both the published and draft entries.
You can read more about it here: https://www.contentful.com/developers/docs/references/content-delivery-api/
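With the contentful.js SDK the difference is just which token and host you pass to createClient; here is a minimal sketch (the CONTENTFUL_PREVIEW_ACCESS_TOKEN variable name is an assumption):

import { createClient } from "contentful";

// Published entries only (Content Delivery API)
const deliveryClient = createClient({
    space: process.env.CONTENTFUL_SPACE_ID,
    accessToken: process.env.CONTENTFUL_ACCESS_TOKEN,
});

// Published + draft entries (Content Preview API)
const previewClient = createClient({
    space: process.env.CONTENTFUL_SPACE_ID,
    accessToken: process.env.CONTENTFUL_PREVIEW_ACCESS_TOKEN, // assumed env var name
    host: "preview.contentful.com",
});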
If using the Content Delivery API you need to filter on the sys.revision attribute for each item. A published item should have its revision attribute set to greater than 0.
const publishedItems = data.items.filter(item => item.sys.revision > 0)
After reading the docs on ServerValue.TIMESTAMP, I was under the impression that once the object hits the database, the timestamp placeholder evaluates once and remains the same, but this was not the case for me:
// Example on Node:
> const db = f.FIREBASE_APP.database();
> const timestamp = f.FIREBASE_APP.database.ServerValue.TIMESTAMP;
> const ref = db.ref('/test');
> ref.on(
... 'child_added',
... function(snapshot) {
..... console.log(`Timestamp from listener: ${snapshot.val().timestamp}`);
..... }
... )
> var child_key = "";
> ref.push({timestamp: timestamp}).then(
... function(thenable_ref) {
..... child_key = thenable_ref.key;
..... }
... );
Timestamp from listener: 1534373384299
> ref.child(child_key).once('value').then(
... function(snapshot) {
..... console.log(`Timestamp after querying: ${snapshot.val().timestamp}`);
..... }
... );
> Timestamp after querying: 1534373384381
> 1534373384299 < 1534373384381
true
The timestamp seen by the on listener is different from the one returned by the later query.
Is this like this by design and I just missed some parts of the documentation? If this is the case, when does the ServerValue.TIMESTAMP stabilize?
I am building a CQRS/ES library on the Realtime Database, and just wanted to avoid the expected_version (or sequence numbers) of events.
UPDATE
The proof for Frank's explanation below:
/* `db`, `ref` and `timestamp` are defined above,
and the test path ("/test") has been deleted
from DB beforehand to avoid noise.
*/
> ref.on(
... 'child_added',
... function(snapshot) {
..... console.log(`Timestamp from listener: ${snapshot.val().timestamp}`);
..... }
... )
> ref.on(
... 'value',
... function(snapshot) {
..... console.log(snapshot.val());
..... }
... )
> ref.push({timestamp: timestamp}); null;
Timestamp from listener: 1534434409034
{ '-LK2Pjd8FS_L8hKqIpiE': { timestamp: 1534434409034 } }
{ '-LK2Pjd8FS_L8hKqIpiE': { timestamp: 1534434409114 } }
Bottom line: if one needs to rely on immutable server-side timestamps, keep this behavior in mind or work around it.
When you perform the ref.push({timestamp: timestamp}), the Firebase client immediately makes an estimate of the timestamp on the client and fires an event for that locally. It then sends the command off to the server.
Once the Firebase client receives the response from the server, it checks if the actual timestamp is different from its estimate. If it is indeed different, the client fires reconciliatory events.
You can most easily see this by attaching your value listener before setting the value. You'll see it fire with both the initial estimated value and the final value from the server.
Also see:
How to use the Firebase server timestamp to generate date created?
Trying to convert Firebase timestamp to NSDate in Swift
firebase.database.ServerValue.TIMESTAMP return an Object
CAVEAT: After wasting another day, the ultimate solution is not to use Firebase server timestamps at all, if you have to compare them in a use case that is similar to the one below. When the events come in fast enough, the second 'value' update may not trigger at all.
One solution to the double-update condition Frank describes in his answer, in order to get the final server timestamp value, is (1) to embed an on('value', ...) listener inside the on('child_added', ...) callback and (2) to remove that 'value' listener as soon as the specific use case permits.
> const db = f.FIREBASE_APP.database();
> const ref = db.ref('/test');
> const timestamp = f.FIREBASE_APP.database.ServerValue.TIMESTAMP;
> ref.on(
    'child_added',
    function(child_snapshot) {
      console.log(`Timestamp in 'child_added': ${child_snapshot.val().timestamp}`);
      ref.child(child_snapshot.key).on(
        'value',
        function(child_value_snapshot) {
          // Do a timestamp comparison here and remove the `on('value',...)`
          // listener here, but keep in mind:
          // + it will fire TWICE when a new child is added
          // + but only ONCE for previously added children!
          console.log(`Timestamp in embedded 'event': ${child_value_snapshot.val().timestamp}`);
        }
      )
    }
  )
// One child was already in the bank, when above code was invoked:
Timestamp in 'child_added': 1534530688785
Timestamp in embedded 'event': 1534530688785
// Adding a new event:
> ref.push({timestamp: timestamp});null;
Timestamp in 'child_added': 1534530867511
Timestamp in embedded 'event': 1534530867511
Timestamp in embedded 'event': 1534530867606
In my CQRS/ES case, events get written into the "/event_store" path, and the 'child_added' listener updates the accumulated state whenever new events come in, where each event carries a ServerValue.TIMESTAMP. The listener compares the new event's timestamp with the state's timestamp to decide whether the new event still needs to be applied or already has been (this mostly matters when the server has been restarted and rebuilds its internal in-memory state). Link to the full implementation, but here's a shortened outline of how single/double firing is handled:
event_store.on(
    'child_added',
    function(event_snapshot) {
        // `state`, `state_timestamp`, `stream_id`, `db` and `event_handler`
        // are defined elsewhere in the full implementation
        const event_id = event_snapshot.key;
        const event_ref = event_store.child(event_id);
        event_ref.on(
            'value',
            function(event_value_snapshot) {
                const event_timestamp = event_value_snapshot.val().timestamp;
                if ( event_timestamp <= state_timestamp ) {
                    // === 1 =======
                    event_ref.off();
                    // =============
                } else {
                    var next_state = {};
                    if ( event_id === state.latest_event_id ) {
                        next_state["timestamp"] = event_timestamp;
                        Object.assign(state, next_state);
                        db.ref("/state").child(stream_id).update(state);
                        // === 2 =======
                        event_ref.off();
                        // =============
                    } else {
                        next_state = event_handler(event_snapshot, state);
                        next_state["latest_event_id"] = event_id;
                        Object.assign(state, next_state);
                    }
                }
            }
        );
    }
);
When the server is restarted, on('child_added', ...) goes through all events already in "/event_store", dynamically attaching on('value', ...) to each child and comparing the event's timestamp to the current state's.
If the event is older than the current state (event_timestamp <= state_timestamp is true), the only action is detaching the 'value' listener. That callback fires only once, because the ServerValue.TIMESTAMP placeholder has already been resolved in the past.
Otherwise the event is newer, which means it hasn't been applied to the current state yet and its ServerValue.TIMESTAMP hasn't been evaluated yet either, so the callback fires twice. To handle the double update, the block saves the child's key (event_id here) to the state (as latest_event_id) and compares it to the incoming event's key on the second firing.
I'm coding a messaging app with Node.js and I need to detect when the same user sends N consecutive messages in a group (to avoid spammers). I'm using a bacon.js Bus where I push the incoming messages from all users.
A message looks like this:
{
    "text": "My message",
    "user": { "id": 1, "name": "Pep" }
}
And this is my working code so far:
const Bacon = require('baconjs');
const _ = require('lodash');

const bus = new Bacon.Bus();
const CONSECUTIVE_MESSAGES = 5;

bus.slidingWindow(CONSECUTIVE_MESSAGES)
    .filter((messages) => {
        return messages.length === CONSECUTIVE_MESSAGES &&
            _.uniqBy(messages, 'user.id').length === 1;
    })
    .onValue((messages) => {
        console.log(`User ${_.last(messages).user.id}`);
    });

// ... on every message
bus.push(message);
It creates a sliding window to keep only the number of consecutive messages I want to detect. On every event, it filters the array so the data flows to the next step only if all the messages in the window belong to the same user. Finally, in onValue, it takes the last message to get the user id.
The code looks quite dirty/complex to me:
The filter doesn't look very natural with streams. Is there a better way to emit an event when N consecutive events match some criteria?
Is there a better way to receive just a single event with the user (instead of an array of messages) in the onValue function?
It doesn't really throttle. If a user sends N messages in one year, he or she shouldn't be detected. The stream should forget old events somehow.
Any ideas to improve it? I'm open to migrating it to rxjs if that helps.
Maybe start with:
let latestMsgsP = bus.slidingWindow(CONSECUTIVE_MESSAGES)
    .map(msgs => msgs.filter(msg => msgAge(msg) < AGE_LIMIT))
Then see if we should be blocking someone:
let blockedUserIdP = latestMsgsP.map(getUserToBlock)
Where you can use something shamelessly imperative such as
function getUserToBlock(msgs) {
    if (msgs.length < CONSECUTIVE_MESSAGES) return;
    let prevUserId;
    for (var i = 0; i < msgs.length; i++) {
        let userId = msgs[i].user.id;
        if (prevUserId && prevUserId != userId) return;
        prevUserId = userId;
    }
    return prevUserId;
}
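msgAge and AGE_LIMIT aren't defined above; here is a minimal sketch, assuming each message is stamped with a receivedAt field when it is pushed onto the bus (that field is not part of the original message shape):

const AGE_LIMIT = 60 * 1000; // only consider messages from the last minute

function msgAge(msg) {
    return Date.now() - msg.receivedAt; // receivedAt is the assumed arrival timestamp
}

// ... on every message
bus.push(Object.assign({ receivedAt: Date.now() }, message));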
Consider mapping the property you’re interested in as early as possible, then the rest of the stream can be simpler. Also, equality checks on every item in the sliding window won’t scale well as you increase the threshold. Consider using scan instead, so you simply keep a count which resets when the current and previous values don’t match.
bus
    .map('.user.id')
    .scan([0], ([n, a], b) => [a === b ? n + 1 : 1, b])
    .filter(([n]) => n >= CONSECUTIVE_MESSAGES)
    .onValue(([count, userId]) => void console.log(`User ${userId}`));
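A hypothetical usage sketch, feeding messages of the shape from the question into this pipeline:

// Simulate one user flooding the group
for (let i = 0; i < CONSECUTIVE_MESSAGES; i++) {
    bus.push({ text: `spam ${i}`, user: { id: 1, name: 'Pep' } });
}
// Once the count reaches the threshold, the onValue handler above logs: User 1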
I have been stuck with this problem for a month.
I've been trying to use Firebase to create a FIFO inventory, using a Firebase Cloud Function to update it. However, if I stress test the following code just 10 times with a for loop for both insert (push) and remove (pop), it breaks because of concurrent updates.
Does anyone have another solution for this?
Insert FIFO/PUSH:
let fifoRef = admin.database().ref('fifo/' + item.itemId + '/').push();
let fifo = {
    price: data.items[uniqueId].price,
    in: data.items[uniqueId].quantity,
    quantity: data.items[uniqueId].quantity,
};
fifoRef.set(fifo);
Get FIFO Value/POP (Here I simply update the quantity for POP):
// get fifo
// quantitySubtotal (the amount to pop) and calculator() are defined outside this snippet
let fifoReference = 'fifo/' + item.itemId;
let fifoRef = admin.database().ref(fifoReference);
fifoRef.once('value').then(currentData => {
    let fifo = currentData.val();
    for (let key in fifo) {
        let val = fifo[key];
        if (val.quantity > 0) {
            // get fifo quantity
            let fifoQuantityRef = admin.database().ref(fifoReference + '/' + key + '/quantity/');
            // get local cache value
            let fifoQuantityListener = fifoQuantityRef.on('value', () => {
                // transaction start
                fifoQuantityRef.transaction(function (quantity) {
                    if (quantity) {
                        if (quantity > 0 && quantitySubtotal > 0) {
                            if (quantity >= quantitySubtotal) {
                                // minus inventory amount
                                let amount = calculator(quantitySubtotal + "*" + val.price);
                                quantity = calculator(quantity + "-" + quantitySubtotal);
                                quantitySubtotal = 0;
                                // update quantity
                                return quantity;
                            } else {
                                let amount = calculator(quantity + "*" + val.price);
                                quantitySubtotal = calculator(quantitySubtotal + "-" + quantity);
                                return 0;
                            }
                        }
                    }
                    return quantity;
                }, (error, committed, result) => {
                    fifoQuantityRef.off('value', fifoQuantityListener);
                }, true);
            });
        }
    }
});
Brainstorming:
I just need insight on how to get the value out using FIFO. From my understanding, Firebase is best suited to inserts and removals, not transactions. But if I only use insert and remove, how do I create a FIFO? If I create FIFO entries with a quantity of 1 each, the amount of data stored becomes too large.
I did try Google Datastore; however, Datastore persistence is extremely slow (over 2 seconds for a write), which doesn't work alongside Firebase, whose persistence takes less than 1 second. The problem arises when a PUSH and a POP happen within 1 second and the Datastore insert isn't persisted yet.
Any other brainstorming ideas? One direction I sketched is shown below.
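This is only an idea, not tested against the code above: instead of one transaction per quantity child wrapped in an on('value') listener, run a single transaction over the whole fifo/<itemId> node, so concurrent pops are serialized and retried by Firebase as a unit. Push keys are chronological, so iterating them in key order preserves FIFO:

function popFifo(itemId, quantityToPop) {
    const itemRef = admin.database().ref('fifo/' + itemId);
    return itemRef.transaction(fifo => {
        if (fifo === null) return fifo; // nothing in stock yet
        let remaining = quantityToPop;
        // push keys sort chronologically, so key order == insertion order
        for (const key of Object.keys(fifo).sort()) {
            if (remaining <= 0) break;
            const taken = Math.min(fifo[key].quantity, remaining);
            fifo[key].quantity -= taken;
            remaining -= taken;
        }
        return fifo; // the whole node is written back atomically
    });
}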
Hi SO, I have created a class for fetching a user's tweets from Twitter using the screen name. My problem is that I'm hitting the rate limit very frequently.
I have created a table for screen names, in which I save all screen names, and
another table to store the users' tweets.
Below is my Code:
public List<TwitterProfileDetails> GetAllTweets(Func<SingleUserAuthorizer> AuthenticateCredentials, string screenname)
{
    List<TwitterProfileDetails> lstofTweets = new List<TwitterProfileDetails>();
    TwitterProfileDetails details = new TwitterProfileDetails();
    var twitterCtx = new LinqToTwitter.TwitterContext(AuthenticateCredentials());

    var helpResult =
        (from help in twitterCtx.Help
         where help.Type == HelpType.RateLimits &&
               help.Resources == "search,users,socialgraph"
         select help)
        .SingleOrDefault();

    foreach (var category in helpResult.RateLimits)
    {
        Console.WriteLine("\nCategory: {0}", category.Key);
        foreach (var limit in category.Value)
        {
            Console.WriteLine(
                "\n Resource: {0}\n Remaining: {1}\n Reset: {2}\n Limit: {3}",
                limit.Resource, limit.Remaining, limit.Reset, limit.Limit);
        }
    }

    var tweets = from t in twitterCtx.Status
                 where t.Type == StatusType.User && t.ScreenName == screenname && t.Count == 15
                 select t;

    if (tweets != null)
    {
        foreach (var tweetStatus in tweets)
        {
            if (tweetStatus != null)
            {
                lstofTweets.Add(new TwitterProfileDetails
                {
                    Name = tweetStatus.User.Name,
                    ProfileImagePath = tweetStatus.User.ProfileImageUrl,
                    Tweets = tweetStatus.Text,
                    UserID = tweetStatus.User.Identifier.UserID,
                    PostedDate = Convert.ToDateTime(tweetStatus.CreatedAt),
                    ScreenName = screenname
                });
            }
        }
    }

    return lstofTweets;
}
I am using the above method as below:
foreach (var screenObj in screenName)
{
    var getTweets = api.GetAllTweets(api.AuthenticateCredentials, screenObj.UserName);
    foreach (var obj in getTweets)
    {
        using (var context = new DBContext())
        {
            // `tweets` is an entity instance created outside this snippet
            tweets.Name = obj.Name;
            tweets.ProfileImage = obj.ProfileImagePath;
            tweets.PostedOn = obj.PostedDate;
            tweets.Tweets = obj.Tweets;
            tweets.CreatedOn = DateTime.Now;
            tweets.ModifiedOn = DateTime.Now;
            tweets.Status = EntityStatus.Active;
            tweets.ScreenName = obj.ScreenName;

            var exist = context.UserTweets.Any(user => user.Tweets.Equals(obj.Tweets));
            if (!exist)
                context.UserTweets.Add(tweets);
            context.SaveChanges();
        }
    }
}
I see that you found the Help/RateLimits query. There are various approaches you can take, e.g. adding a delay between queries, delaying the next query if the limit has been exceeded, or catching the exception and delaying until the next 15-minute window.
If you want to monitor interactively, you can watch the rate limit for each query. The TwitterContext instance you use for performing the query contains RateLimitXxx properties that populate after every query. You'll need to read those values after the query, which appears to be inside your GetAllTweets method. You have to expose those values to your loop somehow, via return object, out params, static field, or whatever logic you feel is necessary.
// the first time through, you have the whole rate limit for the 15 minute window
foreach (var screenObj in screenName)
{
    var getTweets = api.GetAllTweets(api.AuthenticateCredentials, screenObj.UserName);

    // your processing logic ...

    // assuming you have the RateLimitXxx values in scope
    if (rateLimitRemaining == 0)
        Thread.Sleep(CalculateRemainingMilliseconds(RateLimitReset));
}
RateLimitRemaining is how many queries you can do in the current 15 minute window and RateLimitReset is the number of epoch seconds remaining until the rate limit resets (when you can start querying again).
It would be helpful to review the Twitter docs on Rate Limiting.
For reference, here are a couple other questions that might provide more ideas:
Twitter rate limiting
Get all followers using LINQ to Twitter