Using Redis Pattern Subscribe - node.js

I am working on a NodeJS application.
I am new to Redis (I just installed it yesterday), but I'd like to be able to publish this data and subscribe to it from another process.
Suppose I have the following data:
var Exchanges = [
  {
    _id: 'tsx',
    name: 'Toronto Stock Exchange',
    data: {
      instrument: [
        {
          symbol: 'MBT',
          markPrice: 0,
        },
        {
          symbol: 'ACQ',
          markPrice: 0,
        }
      ],
      orderBooks: [
        {
          symbol: 'MBT',
          bids: [],
          asks: [],
        },
        {
          symbol: 'ACQ',
          bids: [],
          asks: [],
        }
      ],
      trades: [
        {
          timestamp: "2014-11-06T20:53:00.000Z",
          symbol: "MBT",
          side: "Buy",
          size: 0,
          price: 352.80,
        },
        {
          timestamp: "2014-11-06T20:53:00.000Z",
          symbol: "ACQ",
          side: "Sell",
          size: 0,
          price: 382.90,
        }
      ],
    },
  },
  {
    _id: 'nyse',
    name: 'New York Stock Exchange',
    data: {
      instrument: [
        {
          symbol: 'IBM',
          markPrice: 0,
        },
        {
          symbol: 'WMT',
          markPrice: 0,
        }
      ],
      orderBooks: [
        /* Orderbook Data Here */
      ],
      trades: [
        /* Trades Data Here */
      ],
    },
  }
];
I am saving this with something like:
exchange.websocket_conn.on('message', function (updateData) {
  // Use 'updateData' (a diff) to update the exchange.data object.
  // ...
  // Then persist and broadcast it. PUBLISH takes a string message,
  // so the object is serialized first.
  redisClient.hmset(exchange._id.toString(), exchange.data);
  redisClient.publish(exchange._id.toString(), JSON.stringify(exchange.data));
});
This works and does publish the data. However, I've been reading about PSUBSCRIBE and I'm wondering if this can be broken down a bit further.
I'd like to be able to do something like:
someOtherRedisClient.subscribe('tsx');
// Receive all data from the exchange whenever anything changes.
someOtherRedisClient.subscribe('tsx.instrument');
// Receive the 'instrument' array of all instruments on the exchange when any instrument changes.
someOtherRedisClient.subscribe('tsx.instrument:MBT');
// Get back only the 'MBT' instrument whenever it changes.
Can the 'Pattern Subscribe' function be used to achieve this?
Thanks,

I'd break that one big JSON document down into many smaller ones, one for each type of content, e.g.
Level 1 (e.g. last trade price, best bid/ask)
Order book
Trades
and create a separate channel for each, e.g.
mktdata:tsx:level1:MBT would have the market price for MBT on the TSX exchange
mktdata:tsx:orderbook:MBT would be the order book for MBT on the TSX exchange
mktdata:tsx:trades:MBT could carry all the trades, but more likely, due to volume, it would be better used as a notification telling the client to make a separate query to fetch the last N trades it needs from a list
You don't say how many instruments you're writing into Redis, or how many different client applications are consuming the data, but assuming you haven't got a huge number of instruments, you could indeed use PSUBSCRIBE to get all orderbook updates across the exchange, and so on. Given a list of symbols you can also subscribe to a long list of channels (e.g. mktdata:tsx:level1:MBT mktdata:tsx:orderbook:MBT mktdata:tsx:level1:ACQ), which can run to tens or hundreds without problems.
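Yes, PSUBSCRIBE matches channel names against glob-style patterns, so with a naming scheme like the above each client picks exactly the granularity it wants. A minimal sketch, assuming the node_redis client and the illustrative mktdata:* channel names from above:
var redis = require('redis');
// Publisher: every update goes out on its own fine-grained channel.
var pub = redis.createClient();
pub.publish('mktdata:tsx:level1:MBT', JSON.stringify({ markPrice: 352.80 }));
// Subscriber: each matching channel fires a 'pmessage' event.
var sub = redis.createClient();
sub.on('pmessage', function (pattern, channel, message) {
  console.log(pattern, channel, JSON.parse(message));
});
sub.psubscribe('mktdata:tsx:level1:*'); // all level-1 updates on the TSX
sub.psubscribe('mktdata:tsx:*:MBT');    // everything about MBT on the TSX
Note that once a connection issues SUBSCRIBE/PSUBSCRIBE it enters subscriber mode and can no longer run regular commands, hence the separate publisher and subscriber clients.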

Related

Merging of API response data

I am currently working on a full-stack React.js application with an Express back-end. I have a question regarding a design decision for the API calls. I have three API endpoints at the moment:
GET /airports/
{
  "total_count": 269,
  "items": [
    {
      "airport_code": "ABJ",
      "city": "ABJ",
      "country": "CI",
      "name": "Port Bouet Airport",
      "city_name": "Abidjan",
      "country_name": "Cote d'Ivoire",
      "lat": 5.261390209,
      "lon": -3.926290035,
      "alt": 21,
      "utc_offset": 0.0
    },
    {
      "airport_code": "ABV",
      "city": "ABV",
      "country": "NG",
      "name": "Nnamdi Azikiwe International Airport",
      "city_name": "Abuja",
      "country_name": "Nigeria",
      "lat": 9.006790161,
      "lon": 7.263169765,
      "alt": 1123,
      "utc_offset": 1.0
    },
    ........
  ]
}
GET /airports/{airport_code}
GET /flights/
{
  "total_count": 898,
  "items": [
    {
      "flight_number": "ZG6304",
      "aircraft_registration": "ZGAJG",
      "departure_airport": "BAH",
      "arrival_airport": "LHR",
      "scheduled_departure_time": "2020-01-01T20:50:00",
      "scheduled_takeoff_time": "2020-01-01T21:00:00",
      "scheduled_landing_time": "2020-01-02T03:00:00",
      "scheduled_arrival_time": "2020-01-02T03:10:00"
    },
    {
      "flight_number": "ZG6311",
      "aircraft_registration": "ZGAJH",
      "departure_airport": "CDG",
      "arrival_airport": "FRA",
      "scheduled_departure_time": "2020-01-01T06:45:00",
      "scheduled_takeoff_time": "2020-01-01T06:55:00",
      "scheduled_landing_time": "2020-01-01T07:50:00",
      "scheduled_arrival_time": "2020-01-01T08:00:00"
    },
    ........
  ]
}
I am working on building an airport arrivals and departures web application using the above data. My idea was to combine the data of the /flights/ and /airports/ API calls based on departure_airport and arrival_airport, so that a single array carries more information, such as the city_name, lat, lon, etc., for visualizing the data. I wanted to know a good approach to this, keeping in mind the computational overhead of filtering and merging large sets of data. I looked into using RxJS, but I have not worked with it before, so I am not sure whether it would be a good solution.
I recommend converting the airports array to an object keyed by airport code. You can then access each airport directly by key.
const airports = {
  total_count: 269,
  items: [
    {
      airport_code: 'ABJ',
      city: 'ABJ',
      country: 'CI',
      name: 'Port Bouet Airport',
      city_name: 'Abidjan',
      country_name: "Cote d'Ivoire",
      lat: 5.261390209,
      lon: -3.926290035,
      alt: 21,
      utc_offset: 0.0,
    },
    {
      airport_code: 'ABV',
      city: 'ABV',
      country: 'NG',
      name: 'Nnamdi Azikiwe International Airport',
      city_name: 'Abuja',
      country_name: 'Nigeria',
      lat: 9.006790161,
      lon: 7.263169765,
      alt: 1123,
      utc_offset: 1.0,
    },
  ],
};
const mappedAirports = airports.items.reduce(
  (result, airport) => ({ ...result, [airport.airport_code]: airport }),
  {}
);
console.log(mappedAirports);
Output:
{"ABJ":{"airport_code":"ABJ","city":"ABJ","country":"CI","name":"Port Bouet Airport","city_name":"Abidjan","country_name":"Cote d'Ivoire","lat":5.261390209,"lon":-3.926290035,"alt":21,"utc_offset":0},"ABV":{"airport_code":"ABV","city":"ABV","country":"NG","name":"Nnamdi Azikiwe International Airport","city_name":"Abuja","country_name":"Nigeria","lat":9.006790161,"lon":7.263169765,"alt":1123,"utc_offset":1}}

MongoDB aggregation $group stage by already created values / variable from outside

Imagine I have an array of objects, available before the aggregate query:
const groupBy = [
  {
    realm: 1,
    latest_timestamp: 1318874398, // Date.now() values, usually different from each other
    item_id: 1234, // always the same
  },
  {
    realm: 2,
    latest_timestamp: 1312467986, // actually it's the $max timestamp field from the collection
    item_id: 1234,
  },
  {
    realm: ..., // there are many of them
    latest_timestamp: ...,
    item_id: 1234,
  },
  {
    realm: 10,
    latest_timestamp: 1318874398, // but sometimes they can be the same
    item_id: 1234,
  },
]
And collection (example set available on MongoPlayground) with the following schema:
{
  realm: Number,
  timestamp: Number,
  item_id: Number,
  field: Number, // any other useless fields in this case
}
My problem is: how do I $group the values from the collection via the aggregation framework using the already available set of data (from groupBy)?
What has been tried already? Okay, let's skip the crude ideas, like:
for (const element of groupBy) {
  // array of `find` queries
}
My current working aggregation query looks something like this:
// first stage
{
  $match: {
    item_id: 1234,
    realm: { $in: [1, 2, 3, 4, ..., 10] },
  },
},
{
  $group: {
    _id: {
      realm: '$realm',
    },
    latest_timestamp: {
      $max: '$timestamp',
    },
    data: {
      $push: '$$ROOT',
    },
  },
},
{
  $unwind: '$data',
},
{
  $addFields: {
    'data.latest_timestamp': {
      $cond: {
        if: {
          $eq: ['$data.timestamp', '$latest_timestamp'],
        },
        then: '$latest_timestamp',
        else: '$$REMOVE',
      },
    },
  },
},
{
  $replaceRoot: {
    newRoot: '$data',
  },
},
// At last, after these stages, I can do the useful work
but I find it a bit clumsy, and I have heard that using mapReduce could solve my problem a bit faster than this query (though the official docs don't sound promising about it). Is that true?
For now, I am using 4 or 5 stages before I can start working with the documents that are actually useful to me.
Recent update:
I have checked the $facet stage and it looks interesting for this particular case. It will probably help me out.
For what it's worth: after receiving the documents from the necessary stages, I build a representative cluster chart, which you may also know as a heatmap. After that I was iterating over each document (or array of objects) one by one to find its correct x and y coordinates, which should end up in this shape:
[
  {
    x: x (number, actual $price),
    y: y (number, actual $realm),
    value: price * quantity,
    quantity: sum_of_quantity_on_price_level
  }
]
For now it's old, ugly code with nested for loops, but in the future I will be using the $facet => $bucket operators for that kind of job.
So, I have found an answer to my question in a different but related way. I was thinking about using the $facet operator, and to be honest it's still an option, but using it as below is bad practice:
// building the $facet query before aggregation
const ObjectQuery = {};
for (const realm of realms) {
  Object.assign(ObjectQuery, { [realm.name]: [ ... ] });
}
// mongoose query here
aggregation([
  {
    $facet: ObjectQuery
  },
  ...
])
So I have chosen a $project stage with the $switch operator to filter results, much as $group does.
Also, mapReduce could solve this problem, but the official Mongo docs recommend avoiding it in favor of aggregation with the $group and $merge operators instead.
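For illustration, here is a minimal sketch of that $switch idea, assuming a Mongoose-style Model.aggregate and the groupBy array from the question; the keep flag (attached via $addFields, the shorthand form of $project) is my own name. It keeps exactly the documents whose realm/timestamp pair matches a precomputed { realm, latest_timestamp } entry, mimicking the $group/$max result:
// Build one $switch branch per precomputed { realm, latest_timestamp } pair.
const branches = groupBy.map((g) => ({
  case: {
    $and: [
      { $eq: ['$realm', g.realm] },
      { $eq: ['$timestamp', g.latest_timestamp] },
    ],
  },
  then: true,
}));
const docs = await Model.aggregate([
  { $match: { item_id: 1234, realm: { $in: groupBy.map((g) => g.realm) } } },
  // Flag each document whose realm/timestamp matches a precomputed pair...
  { $addFields: { keep: { $switch: { branches: branches, default: false } } } },
  // ...and keep only the flagged ones, replacing the $group/$unwind dance.
  { $match: { keep: true } },
]);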

"Transaction was aborted due to detection of concurrent modification" in FaunaDB

I have a document that could be written to by many different concurrent requests. The same section of the document isn't altered, but it could see concurrent writes (from a Node.js app).
example:
{
  name: "testing",
  results: {
    a: { ... },
    b: { ... },
  }
}
I could update the document with "c", and so on.
If I don't await the transactions one after another (in a test, for example), I get partial writes and the error "transaction was aborted due to detection of concurrent modification". What's the best way to go about this? I feel like Fauna's main selling point is dealing with issues like this, but I don't have enough knowledge to find my way around it.
Anyone have any queue strategies/ideas/suggestions?
index:
CreateIndex({
  "name": "byName",
  "unique": true,
  "source": Collection("Testing"),
  "serialized": true,
  "terms": [
    { "field": [ "data", "name" ] }
  ]
})
A JS AWS Lambda function is what is doing the writing.
Currently the unit of contention in Fauna is the document: two transactions writing the same document at the same time will conflict. So in this case I'd recommend something like the following:
CreateCollection({ name: "result" })
CreateCollection({ name: "sub-result" })
CreateIndex({
  name: "result-agg",
  source: Collection("sub-result"),
  terms: [{ "field": ["data", "parent"] }]
})
This assumes parent contains the ref of the main result. Then, given $ref as a result ref:
Let(
  {
    subs: Select(
      "data",
      Map(Paginate(Match(Index("result-agg"), $ref)), Lambda("x", Get(Var("x"))))
    ),
    main: Select("data", Get($ref))
  },
  Merge(Var("main"), { results: Var("subs") })
)
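On the write side, each concurrent request then creates its own sub-result document instead of updating the shared one, so writers never contend on the same document. A minimal sketch, assuming the faunadb JS driver; mainResultRef and the data fields are illustrative:
const faunadb = require('faunadb');
const q = faunadb.query;
const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET });
// Inside the Lambda handler: a new sub-result pointing at the main result doc.
await client.query(
  q.Create(q.Collection('sub-result'), {
    data: { parent: mainResultRef, key: 'c', value: { /* ... */ } },
  })
);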

How to use scope in loopback filter in json format

I am trying to make a call from my Angular service to a LoopBack API. I have a parcelStatuses collection that contains a parcelId, so I am able to include the parcel collection too, but I also need to check against a particular vendorId, which exists in the parcel collection. I am trying to use scope to check against that vendorId, but I don't think I am writing the correct JSON syntax/call. Here is the function inside my service:
private getParcelsByFilter(
  limit: number,
  skip: number,
  vendorId: string,
  filter: string
) {
  const checkFilter = {
    "where": {
      "and": [{ "statusRepositoryId": filter }]
    },
    "include": [
      {
        "parcel": [
          {
            "scope": { "vendorId": vendorId }
          },
          "parcelStatuses",
          { "customerData": "customer" }
        ]
      }
    ],
    "limit": limit,
    "skip": skip,
  }
  return this._http.get<IParcel[]>(
    `${environment.url}/ParcelStatuses?filter=${encodeURIComponent(JSON.stringify(checkFilter))}`
  );
}
Here is a demo view of a parcelStatus collection object:
[{
  "id": "lbh24214",
  "statusRepositoryId": "3214fsad",
  "parcelId": "LH21421"
}]
And a demo JSON of a parcel:
[{
  "id": "LHE21421",
  "customerDataId": "214fdsas",
  "customerId": "412dsf",
  "vendorId": "123421"
}]
Please help me write the correct call.
Formatting aside, there are several issues with the query:
Unnecessary and
This line:
where: {
  and: [{ statusRepositoryId: filter }]
}
Can be simplified to:
where: {
  statusRepositoryId: filter
}
As there is only one where condition, the and becomes redundant.
Misuse of include and scope
include is used to include relations, while scope applies filters to those relations. They can work in tandem to create a comprehensive query:
include: [
  {
    relation: "parcels",
    scope: {
      where: { vendorId: vendorId },
    }
  }
],
This will include the parcels relation as part of the response, while filtering the parcels relation with a where filter.
That means the final code should look similar to the following:
private getParcelsByFilter(
  limit: number,
  skip: number,
  vendorId: string,
  filter: string
) {
  const checkFilter = {
    where: { statusRepositoryId: filter },
    include: [
      {
        relation: "parcels",
        scope: {
          where: { vendorId: vendorId },
        }
      }
    ],
    limit: limit,
    skip: skip,
  }
  return this._http.get<IParcel[]>(
    `${environment.url}/ParcelStatuses?filter=${encodeURIComponent(JSON.stringify(checkFilter))}`
  );
}
Further reading
Please review these resources to get a better understanding of how to use filters:
https://loopback.io/doc/en/lb4/Include-filter.html

Create subscription with addon using node-recurly

Using node-recurly, I can create a subscription object and pass it to recurly.subscriptions.create call:
const subscription = {
  plan_code: plan.code,
  currency: 'USD',
  account: {
    account_code: activationCode,
    first_name: billingInfo.first_name,
    last_name: billingInfo.last_name,
    email: billingInfo.email,
    billing_info: {
      token_id: paymentToken,
    },
  },
};
I would also like to add the subscription_add_ons property, which, according to the documentation, is supposed to be an array of add-ons. I tried passing it like this:
subscription_add_ons: [
  {
    add_on_code: shippingMethod.servicelevel_token,
    unit_amount_in_cents: parseFloat(shippingMethod.amount) * 100,
  },
],
The server returned an error:
Tag <subscription_add_ons> must consist only of sub-tags named
<subscription_add_on>
I attempted this:
subscription_add_ons: [
  {
    subscription_add_on: {
      add_on_code: shippingMethod.servicelevel_token,
      unit_amount_in_cents: parseFloat(shippingMethod.amount) * 100,
    },
  },
],
I got back another error. What's the proper format to pass a subscription add-on in this scenario?
The proper format is:
subscription_add_ons: {
  subscription_add_on: [{
    add_on_code: shippingMethod.servicelevel_token,
    unit_amount_in_cents: parseFloat(shippingMethod.amount) * 100,
  }],
},
I ended up doing this, which works whether you have one add-on or several. subscription_add_ons is an array that can contain one or more subscription add-ons. I then send the details over (along with other info) in the subscription update call. This is similar to what you attempted in your original post, so I'm not sure why that didn't work for you.
details.subscription_add_ons = [
  { subscription_add_on: { add_on_code: "stream", quantity: 3 } },
  { subscription_add_on: { add_on_code: "hold", quantity: 2 } }
];
