The goal is to split the items in a stream into multiple groups, run transformations on each group separately, and then re-combine all the groups into one stream.
The only call I can use after group() seems to be each(), and that doesn't pass individual groups to my callback; it passes the entire dictionary of grouped objects. Calling map() doesn't pass anything to my callback. Example:
const groups = stream.group(func);
groups.map(item => console.log(item)); // prints nothing
groups.each(item => console.log(item)); // instead of item being one of the groups created, it's a dictionary with all the groups included.
How can I do that?
Are you looking to do something like this?
groups.on('error', handleError)
    .pipe(transformObject(async item => {
        // some code...
If not, please try to give an example of what you want to do with your data.
import highland from 'highland';

const source = highland([
  { age: 21, name: 'Alice' },
  { age: 42, name: 'Bob' },
  { age: 42, name: 'Carol' }
]);

source
  // group() emits a single object: { 21: [...], 42: [...] }
  .group(person => person.age)
  // highland.values() turns that object into a stream of group arrays
  .flatMap(highland.values)
  // Apply your transformation to each group here; each `group` is an array
  .map(group => group)
  // flatten() merges the group arrays back into a single stream of items
  .flatten()
  .doto(console.log) // { age: 21, name: 'Alice' }, { age: 42, name: 'Bob' }, ...
  .done(() => {});
I'm looking to construct a query against a Firestore collection ('parent') where the documents have a nested map (2 logical levels deep), specifically when the first map has dynamic keys which are not known at the time of running the query. As an example:
Document 1
{
  codes: {
    abc: {
      id: 'hi'
    },
    def: {
      id: 'there'
    }
  }
}
Document 2
{
  codes: {
    ghi: {
      id: 'you'
    },
    zmp: {
      id: 'guys'
    }
  }
}
What I would like to do is have a WHERE clause that takes a wildcard for a key in the document, i.e.:
firestore.collection('parent').WHERE('codes.*.id', '==', 'there')
// Results in Document 1
or
firestore.collection('parent').WHERE('codes.*.id', '==', 'you')
// Results in Document 2
Is there any way to achieve this behavior without having to resort to generating subcollection documents to be used for indexing, or polluting the document itself with a second map that maps ids to codes?
== Not ideal solution 1 (subcollections) ==
Build out the server so that when these documents are filed, a subcollection ('child') is maintained with documents that contain the related information. As an example, filing Document 1 above would require filing two documents in the child subcollection:
{
  id: 'hi',
  code: 'abc'
}
{
  id: 'there',
  code: 'def'
}
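For illustration, a minimal sketch of that server-side maintenance as a first-gen Cloud Function (the function name is an assumption, and cleanup of stale child entries is left out):
const functions = require('firebase-functions');

// Hypothetical trigger that mirrors the `codes` map into the `child` subcollection
exports.syncChildCodes = functions.firestore
  .document('parent/{parentId}')
  .onWrite(async (change) => {
    const codes = change.after.exists ? (change.after.data().codes || {}) : {};
    const childColl = change.after.ref.collection('child');
    // Upsert one child doc per code entry; deleting stale docs is omitted here
    await Promise.all(Object.entries(codes).map(([code, value]) =>
      childColl.doc(code).set({ id: value.id, code })
    ));
  });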
Now we can query for the id we want, get the parent reference, and follow that all the way back to the parent...
firestore.collectionGroup('child').where('id', '==', 'there')
  .get()
  .then(snapshot => {
    for (const doc of snapshot.docs) {
      return doc.ref.parent.parent;
    }
    return Promise.reject('no parents, how sad.');
  })
  .then(ref => ref.get())
  .then(snapshot => snapshot.data())
  .then(parent => {
    // Thank goodness, the parent is Document 1!
  });
The downside to this is the maintenance of the subcollections, as well as a number of extra operations against Firestore.
== Not ideal solution 2 (model pollution) ==
Another way to achieve this is to add another map or an array in the document itself which simply contains the ids, which would then let us query on those values, i.e.:
{
  codes: {
    abc: {
      id: 'hi'
    },
    def: {
      id: 'there'
    }
  },
  codeids: ['hi', 'there']
}
Although this is easy to query:
.WHERE('codeids', 'ARRAY CONTAINS', 'hi')
I don't like the idea of adding fields that are not meaningful to the consumer of the document (the purpose of the field being only to facilitate a document's ability to be queried, due to system constraints).
Open to suggestions!
I just can't figure out the query, and whether it's even allowed, to push 4 different objects into 4 different arrays deeply nested inside the user object with a single query.
I receive a PATCH request from the front end whose body looks like this:
{
  bodyweight: 80,
  waist: 60,
  biceps: 20,
  benchpress: 50,
  timestamp: 1645996168125
}
I want to create 4 objects and push them into the user's data in MongoDB Atlas:
{ date: 1645996168125, value: 80 } into user.stats.bodyweight (an array)
{ date: 1645996168125, value: 60 } into user.stats.waist (an array)
...etc.
I am trying to figure out the second argument for:
let user = await User.findOneAndUpdate({id:req.params.id}, ???)
But I am happy to update it with any other Mongoose method if possible.
PS: I am not using the _id given by MongoDB, on purpose.
You'll want to use the $push operator. It accepts paths as the field names, so you can specify a path to each of the arrays.
I assume the fields included in your request are fixed (the same four property names / arrays for every request):
let user = await User.findOneAndUpdate(
  { id: req.params.id },
  {
    $push: {
      // values taken from the example request body
      "stats.bodyweight": {
        date: 1645996168125,
        value: 80,
      },
      "stats.waist": {
        date: 1645996168125,
        value: 60,
      },
      // ...
    },
  }
);
If the fields are dynamic, use an object and if conditions, like this:
const update = {};
if ("bodyweight" in req.body) {
  update["stats.bodyweight"] = {
    date: req.body.timestamp, // read from the request instead of hard-coding
    value: req.body.bodyweight,
  };
}
// ...

let user = await User.findOneAndUpdate(
  { id: req.params.id },
  {
    $push: update,
  }
);
The if condition is just to demonstrate the principle; you'll probably want to use stricter type checking / validation.
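To avoid repeating the if blocks, one way to generalize this (the FIELDS whitelist is my own assumption, not part of the original answer) could be:
const FIELDS = ['bodyweight', 'waist', 'biceps', 'benchpress'];

const update = {};
for (const field of FIELDS) {
  if (typeof req.body[field] === 'number') {
    // $push accepts dotted paths, so build one path per stat array
    update[`stats.${field}`] = { date: req.body.timestamp, value: req.body[field] };
  }
}

let user = await User.findOneAndUpdate({ id: req.params.id }, { $push: update });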
Try this (note that $addToSet only appends the entry if an identical one is not already in the array; otherwise it behaves like $push):
await User.findOneAndUpdate(
  { id: req.params.id },
  {
    $addToSet: {
      "stats.bodyweight": { date: 1645996168125, value: 80 }
    }
  }
);
I'm trying to figure out how to update many elements at once. Suppose I have the following array:
[
  {
    id: 100,
    order: 1,
  },
  {
    id: 101,
    order: 2,
  },
  {
    id: 102,
    order: 3,
  },
]
I then transform this array, replacing the values of order. The resulting array becomes the following:
[
  {
    id: 102,
    order: 1,
  },
  {
    id: 101,
    order: 2,
  },
  {
    id: 100,
    order: 3,
  },
]
I use this on the frontend to render a list in the appropriate order, based on the value of order.
But how can I update these 3 entities in my database?
I can obviously make 3 UPDATE statements:
const promises = [];
newArray.forEach(({ id, order }) => {
  promises.push(
    // executeMutation is just a custom query builder;
    // "order" must be quoted because it's a reserved word in SQL
    executeMutation({
      query: `UPDATE my_table SET "order" = ${order} WHERE id = ${id}`
    })
  );
});
await Promise.all(promises);
But is it possible to do this in one query?
You can do this using the UNNEST function. First, you'll need to handle the query parameters properly. https://www.atdatabases.org/ does this for you; otherwise you need to separately pass a string with placeholders and then the values. If you use @databases, the code could look like:
await database.query(sql`
  UPDATE my_table
  SET "order" = updates_table."order"
  FROM (
    SELECT
      UNNEST(${newArray.map(v => v.id)}::INT[]) AS id,
      UNNEST(${newArray.map(v => v.order)}::INT[]) AS "order"
  ) AS updates_table
  WHERE my_table.id = updates_table.id
`);
The trick here is that UNNEST lets you take an array for each column and turn that into a kind of temporary table. You can then use that table to filter & update the records.
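For reference, a roughly equivalent query with plain node-postgres could look like the sketch below; the pg Pool instance named pool is an assumption, not part of the original answer:
// node-postgres serializes JS arrays to Postgres arrays for parameterized queries
await pool.query(
  `UPDATE my_table
   SET "order" = updates_table."order"
   FROM (
     SELECT
       UNNEST($1::INT[]) AS id,
       UNNEST($2::INT[]) AS "order"
   ) AS updates_table
   WHERE my_table.id = updates_table.id`,
  [newArray.map(v => v.id), newArray.map(v => v.order)]
);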
I am new to DynamoDB.
I want to increment the sort key: if id=0, the next item should get id=1, and so on.
Each time the user (partition key) adds an item, the id (sort key) of the next item should increase by 1.
The code uses PutItem with DynamoDB.
Is it possible to do that?
I do not want to use a UUID (unique key).
Most situations don't need an auto-incrementing attribute and DynamoDB doesn't provide this feature out of the box. This is considered to be an anti-pattern in distributed systems.
But, see How to autoincrement in DynamoDB if you really need to.
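One common approach there is an atomic counter: keep a counter item per user, let UpdateItem's ADD action increment it atomically, and use the returned value as the new sort key. A minimal sketch with the AWS SDK v3; the Counters table and its attribute names are assumptions for illustration:
const { DynamoDBClient, UpdateItemCommand } = require('@aws-sdk/client-dynamodb');

const client = new DynamoDBClient({});

// Atomically increments and returns the per-user counter (hypothetical 'Counters' table)
async function nextId(user) {
  const result = await client.send(new UpdateItemCommand({
    TableName: 'Counters',
    Key: { pk: { S: user } },
    UpdateExpression: 'ADD #v :incr',
    ExpressionAttributeNames: { '#v': 'value' },
    ExpressionAttributeValues: { ':incr': { N: '1' } },
    ReturnValues: 'UPDATED_NEW',
  }));
  return Number(result.Attributes.value.N); // use this as the new item's sort key
}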
I understand that you may need this number because, for example, it is a legal obligation to have sequential invoice numbers.
One way would be to create a table to store your number sequences.
Add fields like:
{
  name: "invoices",
  prefix: "INV",
  numberOfDigits: 5,
  leasedValue: 1,
  appliedValue: 1,
  lastUpdatedTime: '2022-08-05'
},
{
  name: "deliveryNotes",
  prefix: "DN",
  numberOfDigits: 5,
  leasedValue: 1,
  appliedValue: 1,
  lastUpdatedTime: '2022-08-05'
}
You need 2 values (a leased and an applied value) to make sure you never skip a beat, even when things go wrong.
That check-lease-apply-release/rollback logic looks as follows:
async function useSequence(name: string, cb: (uniqueNumber: string) => Promise<void>) {
  // 1. GET THE SEQUENCE FROM THE DATABASE
  const sequence = await getSequence(name);
  validateSequence(sequence);

  // 2. INCREASE THE LEASED VALUE
  const oldValue = sequence.appliedValue;
  const leasedValue = oldValue + 1;
  sequence.leasedValue = leasedValue;
  await saveSequence(sequence);

  try {
    // 3. CREATE AND SAVE YOUR DOCUMENT
    await cb(format(leasedValue));

    // 4. INCREASE THE APPLIED VALUE
    sequence.appliedValue++;
    await saveSequence(sequence);
  } catch (err) {
    // 4B. ROLLBACK WHEN THINGS ARE BROKEN
    console.error(err);
    try {
      const sequence = await getSequence(name);
      sequence.leasedValue--;
      validateSequence(sequence);
      await saveSequence(sequence);
    } catch (err2) {
      console.error(err2);
    }
    throw err;
  }
}

function validateSequence(sequence) {
  // A CLEAN STATE MEANS THAT THE NUMBERS ARE IN SYNC
  if (sequence.leasedValue !== sequence.appliedValue) {
    throw new Error("sequence is broken.");
  }
}
Then, whenever you need a unique number, you can use the above function to work in a protected scope, where the number will be rolled back when something goes wrong.
const details = ...;
await useSequence("invoices", async (uniqueNumber) => {
  const invoiceData = { ...details, id: uniqueNumber };
  const invoice = await this.createInvoice(invoiceData);
  await this.saveInvoice(invoice);
});
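The format helper is referenced but not shown above. A minimal sketch, assuming it applies the sequence's prefix and numberOfDigits fields (and, unlike the call above, takes the sequence as an extra argument):
function format(sequence, value) {
  // e.g. prefix "INV", numberOfDigits 5, value 42 -> "INV00042"
  return sequence.prefix + String(value).padStart(sequence.numberOfDigits, '0');
}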
Can it scale? Can it run on multiple instances? No, it can't, and it never will, because in most countries it's just not legal to do so. You're not allowed to send out invoice 6 before invoice 5, or to cancel invoice 5 after you've sent invoice 6.
The only exception is if you have multiple sequences: e.g. in some cases you're allowed to have a sequence per customer, or a sequence per payment system, ... Hence, you want them in your database.
I have an object array in a reducer that looks like this:
[
  { id: 1, name: 'Mark', email: 'mark@email.com' },
  { id: 2, name: 'Paul', email: 'paul@gmail.com' },
  { id: 3, name: 'sally', email: 'sally@email.com' }
]
Below is my reducer. So far, I can add a new object to the currentPeople reducer via the following:
const INITIAL_STATE = { currentPeople: [] };

export default function (state = INITIAL_STATE, action) {
  switch (action.type) {
    case ADD_PERSON:
      return { ...state, currentPeople: [...state.currentPeople, action.payload] };
  }
  return state;
}
But here is where I'm stuck. Can I UPDATE a person via the reducer using lodash?
If I sent an action payload that looked like this:
{ id: 1, name: 'Eric', email: 'eric@email.com' }
Would I be able to replace the object with the id of 1 with the new fields?
Yes, you can absolutely update an object in an array like you want to, and you don't need to change your data structure if you don't want to. You could add a case like this to your reducer:
case UPDATE_PERSON:
  return {
    ...state,
    currentPeople: state.currentPeople.map(person => {
      if (person.id === action.payload.id) {
        return action.payload;
      }
      return person;
    }),
  };
This can be shortened as well, using an implicit return and a ternary:
case UPDATE_PERSON:
  return {
    ...state,
    currentPeople: state.currentPeople.map(person =>
      person.id === action.payload.id ? action.payload : person
    ),
  };
Mihir's idea about mapping your data to an object with normalizr is certainly a possibility, and technically it'd be faster to update the user by reference instead of looping (after the initial mapping is done). But if you want to keep your data structure, this approach will work.
Also, mapping like this is just one of many ways to update the object, and it requires browser support for Array.prototype.map(). You could use lodash's findIndex() to get the index of the user you want (this is nice because it stops as soon as it finds a match, instead of continuing the way .map() does). Once you have the index, you can overwrite the object directly. Make sure you don't mutate the Redux state, though: you'll need to be working on a clone if you want to assign like this: clonedArray[foundIndex] = action.payload;.
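A minimal sketch of that index-based variant, assuming lodash is imported as _ (the clonedArray name is just for illustration):
case UPDATE_PERSON: {
  const clonedArray = [...state.currentPeople]; // shallow clone so the state isn't mutated
  const foundIndex = _.findIndex(clonedArray, { id: action.payload.id });
  if (foundIndex !== -1) {
    clonedArray[foundIndex] = action.payload; // overwrite in the clone, not the state
  }
  return { ...state, currentPeople: clonedArray };
}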
This is a good candidate for data normalization. You can effectively replace your data with the new one, if you normalize the data before storing it in your state tree.
This example is straight from Normalizr.
[{
  id: 1,
  title: 'Some Article',
  author: {
    id: 1,
    name: 'Dan'
  }
}, {
  id: 2,
  title: 'Other Article',
  author: {
    id: 1,
    name: 'Dan'
  }
}]
It can be normalized this way:
{
  result: [1, 2],
  entities: {
    articles: {
      1: {
        id: 1,
        title: 'Some Article',
        author: 1
      },
      2: {
        id: 2,
        title: 'Other Article',
        author: 1
      }
    },
    users: {
      1: {
        id: 1,
        name: 'Dan'
      }
    }
  }
}
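For reference, the normalizr calls that produce this shape look roughly like this (per the library's README; originalData stands for the article array shown above):
import { normalize, schema } from 'normalizr';

// Define the users entity and nest it inside articles
const user = new schema.Entity('users');
const article = new schema.Entity('articles', { author: user });

// originalData is the array of articles shown above
const normalizedData = normalize(originalData, [article]);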
What's the advantage of normalization?
You get to extract the exact part of your state tree that you want.
For instance, you have an array of objects containing information about the articles. If you want to select a particular object from that array, you'll have to iterate through the entire array; in the worst case, the desired object is not present in the array at all. To overcome this, we normalize the data.
To normalize the data, store the unique identifiers of each object in a separate array. Let's call that array result.
result: [1, 2, 3 ..]
And transform the array of objects into an object whose keys are the ids (see the second snippet). Call that object entities.
Ultimately, to access the object with id 1, simply do this: entities.articles["1"].
If you want to replace the old data with new data, you can do this:
entities.articles["1"] = newObj;
Use the native splice method of arrays:
/* Find the item's index using lodash */
var index = _.indexOf(currentPeople, _.find(currentPeople, { id: 1 }));

/* Replace the item at that index using splice */
// Note: splice mutates the array, so work on a clone inside a Redux reducer
currentPeople.splice(index, 1, { id: 1, name: 'Mark', email: 'mark@email.com' });