I am using DynamoDB Streams to do some work on records as they are added to or modified in my table. I am also using Dynamoose models in my application.
The DynamoDB stream passes an event object to my Node.js Lambda handler that includes the objects record.dynamodb.NewImage and record.dynamodb.OldImage. However, these objects are in DynamoDB's AttributeValue format, where every attribute is wrapped in its data type ('S' for string, and so on), rather than being normal JavaScript objects. So record.id becomes record.id.S.
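For illustration (these attribute names are made up, not from my actual table), the stream hands me something shaped like this:

{ id: { S: "abc123" }, count: { N: "5" } }

whereas the plain JavaScript object I want would be:

{ id: "abc123", count: 5 }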
Dynamoose models allow you to instantiate a model from an object, like so: new Model(object). However, it expects that argument to be a normal object.
I know that Dynamoose has a DynamoDB parser, I think it's Schema.prototype.dynamodbparse(). However, that doesn't work as expected.
import { DYNAMODB_EVENT } from '../../constant';
import _get from 'lodash/get';
import { Entry } from '../../model/entry';
import { applyEntry } from './applyEntry';

async function entryStream(event) {
  await Promise.all(
    event.Records.map(async record => {
      // If this record is being deleted, do nothing
      if (DYNAMODB_EVENT.Remove === record.eventName) {
        return;
      }

      // What I've tried:
      let entry = new Entry(record.dynamodb.NewImage);

      // What I wish I could do
      entry = new Entry(Entry.schema.dynamodbparse(record.dynamodb.NewImage));

      await applyEntry(entry);
    })
  );
}

export const handler = entryStream;
So is there a way to instantiate a Dynamoose model from DynamoDB's AttributeValue format? Has anyone else done this?
The alternative is that I simply extract the key from the record and then make a trip to the database using Model.get({ id: idFromEvent }). But I think that would be inefficient, since the record was just handed to me by the stream.
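Roughly, that fallback would look something like this (assuming id is the table's hash key):

const idFromEvent = _get(record, 'dynamodb.Keys.id.S');
const entry = await Entry.get({ id: idFromEvent });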
I solved it by using AWS.DynamoDB.Converter.unmarshall to parse the object before passing it to Dynamoose.
import { DYNAMODB_EVENT } from '../../constant';
import _get from 'lodash/get';
import { Entry } from '../../model/entry';
import { applyEntry } from './applyEntry';

// https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/DynamoDB/Converter.html#unmarshall-property
var AWS = require('aws-sdk');
var parseDynamo = AWS.DynamoDB.Converter.unmarshall;

async function entryStream(event) {
  await Promise.all(
    event.Records.map(async record => {
      // If this record is being deleted, do nothing
      if (DYNAMODB_EVENT.Remove === record.eventName) {
        return;
      }

      const entry = new Entry(parseDynamo(record.dynamodb.NewImage));
      await applyEntry(entry);
    })
  );
}

export const handler = entryStream;
Is there a way to implement custom logic right after the graphql query has been parsed, but before any of the resolvers have executed?
Given this query schema
type Query {
  products(...): ProductConnection!
  productByHandle(handle: String!): Product
}
How can I accomplish the task of logging the info object for the products and productByHandle queries, before their resolvers have had a chance to execute?
I'm basically looking to "hook up" to an imaginary event like query:parsed, but it doesn't appear to exist. I'm using the express-graphql package.
Props to @xadm for figuring this out.
The express-graphql package accepts a custom execute function, which is the function that gets called after the query has been parsed. Its return value is what gets returned from the /graphql endpoint.
import { graphqlHTTP } from 'express-graphql'
import { execute } from 'graphql'

app.use('/graphql', graphqlHTTP((req, res) => {
  return {
    ...,
    async customExecuteFn(ExecutionArgs) {
      // The data the `info` object is built from (schema, parsed document, variables, etc.)
      // is available on ExecutionArgs
      const result = await execute(ExecutionArgs)
      // result looks like { data: {...}, errors: [...] }
      return result
    }
  }
}))
I will still leave this here, as it might be useful for something more specific, but you should probably use the code above.
// This returns an object whose keys are the query names and the values are the field definitions (name, resolve, etc.)
const queryFields = graphqlSchema.getQueryType().getFields()

// They can then be iterated, and the original `resolve` method can be monkey-patched
for (const queryName in queryFields) {
  const queryInfo = queryFields[queryName]

  // Grab a copy of the original method
  const originalResolve = queryInfo.resolve

  // Overwrite the original `resolve` method
  queryInfo.resolve = function patchedResolve(src, args, context, info) {
    // Your custom logic goes here
    console.log(info);

    // Call the original `resolve` method, preserving the context and
    // passing in the arguments
    return originalResolve.apply(this, arguments)
  }
}
I am trying to:
Poll a public API every 5 seconds
Store the resulting JSON in a variable
Store the next query to this same API in a second variable
Compare the first variable to the second
Print the second variable if it is different from the first
Otherwise, print the phrase 'The objects are the same'
Unfortunately, the comparison part appears to fail. I am realizing that this implementation is probably lacking the appropriate variable scoping but I can't put my finger on it. Any advice would be highly appreciated.
// The axios response being compared has roughly this shape:
{
  data: {
    chatters: {
      viewers: {
      },
    },
  },
}
// prints out pretty JSON
function prettyJSON(obj) {
  console.log(JSON.stringify(obj, null, 2));
}

// Gets users from the Twitch API endpoint via an axios request
const getUsers = async () => {
  try {
    return await axios.get("http://tmi.twitch.tv/group/user/sixteenbitninja/chatters");
  } catch (error) {
    console.error(error);
  }
};

// Intended to display the viewers
const displayViewers = async (previousResponse) => {
  const usersInChannel = await getUsers();
  if (usersInChannel.data.chatters.viewers === previousResponse) {
    console.log("The objects are the same");
  } else {
    if (usersInChannel.data.chatters) {
      prettyJSON(usersInChannel.data.chatters.viewers);
      const previousResponse = usersInChannel.data.chatters.viewers;
      console.log(previousResponse);
      intervalFunction(previousResponse);
    }
  }
};

// polls the display function every 5 seconds
const interval = setInterval(function () {
  // Calls the display function
  displayViewers();
}, 5000);
The issue is that you are using the strict equality operator === on objects. Two objects are only equal under === when they are the same reference, while you want to know whether their contents are identical. Check this:
console.log({} === {})
For your use case, you might want to store a stringified version of previousResponse and compare it with the stringified version of the new object (usersInChannel.data.chatters.viewers), like:
console.log(JSON.stringify({}) === JSON.stringify({}))
Note: there can be issues with this approach too. If the order of the properties changes in the response, the strings will differ even though the data is the same, in which case you'd have to compare the individual properties within the response objects.
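Applied to your code, a rough sketch (reusing your getUsers and prettyJSON helpers, and keeping the previous value in an outer variable so it survives between polls) could look like this:

let previousViewers; // stringified viewers from the last poll

const displayViewers = async () => {
  const usersInChannel = await getUsers();
  if (!usersInChannel || !usersInChannel.data.chatters) return;

  const viewers = JSON.stringify(usersInChannel.data.chatters.viewers);
  if (viewers === previousViewers) {
    console.log("The objects are the same");
  } else {
    prettyJSON(usersInChannel.data.chatters.viewers);
    previousViewers = viewers;
  }
};

setInterval(displayViewers, 5000);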
Maybe you can use an npm package like the following:
https://www.npmjs.com/package/@radarlabs/api-diff
My problem is the following: I want to test a method that uploads a bunch of data into an AWS S3 bucket. The problem is: I don't want to really upload data every time I am testing, and I don't want to care about credentials sitting in the env. So I want to set up Sinon's fake server module to simulate the upload and return the same results that S3 would. Sadly, it seems to be difficult to find a working example with code using async/await.
My test looks like this:
import {skip, test, suite} from "mocha-typescript";
import Chai from "chai";
import {S3Uploader} from "./s3-uploader.class";
import Sinon from "sinon";

@suite
class S3UploaderTest {
  public server : Sinon.SinonFakeServer | undefined;

  before() {
    this.server = Sinon.fakeServer.create();
  }

  after() {
    if (this.server != null) this.server.restore();
  }

  @test
  async "should upload a file to s3 correctly"() {
    let spy = Sinon.spy();
    const uploader : S3Uploader = new S3Uploader();
    const upload = await uploader.send("HalloWelt").toBucket("onetimeupload.test").toFolder("test/hw.txt").upload();
    Chai.expect(upload).to.be.a("object");
  }
}
Inside of the uploader.upload() method, I resolve a promise from a callback. So how can I simulate the upload process?
Edit: Here is the code of the s3-uploader:
import AWS from "aws-sdk";

export class S3Uploader {
  private s3 = new AWS.S3({ accessKeyId : process.env.ACCESS_KEY_ID, secretAccessKey : process.env.SECRET_ACCESS_KEY });

  private params = {
    Body: null || Object,
    Bucket: "",
    Key: ""
  };

  public send(stream : any) {
    this.params.Body = stream;
    return this;
  }

  public toBucket(bucket : string) {
    this.params.Bucket = bucket;
    return this;
  }

  public toFolder(path : string) {
    this.params.Key = path;
    return this;
  }

  public upload() {
    return new Promise((resolve, reject) => {
      if (process.env.ACCESS_KEY_ID == null || process.env.SECRET_ACCESS_KEY == null) {
        return reject("ERR_NO_AWS_CREDENTIALS");
      }

      this.s3.upload(this.params, (error : any, data : any) => {
        return error ? reject(error) : resolve(data);
      });
    });
  }
}
Sinon fake servers are something you might use when developing a client that itself makes HTTP requests, rather than a wrapper around an existing client like AWS.S3, which is what you have here. In this case, you're better off just stubbing the behavior of AWS.S3 instead of testing the actual requests it makes. That way you avoid testing the implementation details of AWS.S3.
Since you're using TypeScript and you've made your s3 client private, you're going to need to make some changes to expose it to your tests. Otherwise, you won't be able to stub its methods without the TS compiler complaining about it. You also won't be able to write assertions using the params object, for similar reasons.
Since I don't use TS regularly, I'm not too familiar with its common dependency injection techniques, but one thing you could do is add optional constructor arguments to your S3Uploader class that can overwrite the default s3 and params properties, like so:
constructor(s3, params) {
  if (s3) this.s3 = s3;
  if (params) this.params = params;
}
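In TypeScript that might look something like this (the parameter types here are just my guess at what fits your class):

constructor(s3?: AWS.S3, params?: { Body: any; Bucket: string; Key: string }) {
  if (s3) this.s3 = s3;
  if (params) this.params = params;
}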
After which, you can create a stub instance and pass it to your test instance like this:
const s3 = sinon.createStubInstance(AWS.S3);
const params = { foo: 'bar' };
const uploader = new S3Uploader(s3, params);
Once you have the stub instance in place, you can write assertions to make sure the upload method was called the way you want it to be:
sinon.assert.calledOnce(s3.upload);
sinon.assert.calledWith(s3.upload, sinon.match.same(params), sinon.match.func);
You can also affect the behavior of the upload method using the sinon stub API. For example, to make it fail by having it call the callback with an error:
s3.upload.callsArgWith(1, new Error('Test Error'));
Or make it succeed like so:
const data = { whatever: 'data', you: 'want' };
s3.upload.callsArgWith(1, null, data);
You'll probably want a completely separate test for each of these cases, using an instance-level before hook to avoid duplicating the common setup. Testing for success involves simply awaiting the promise and checking that its result is the data. Testing for failure involves a try/catch that ensures the promise was rejected with the proper error.
Also, since you seem to be doing actual unit tests here, I'll recommend testing each S3Uploader method separately instead of calling them all in one big test. This drastically reduces the number of possible cases you need to cover, making your tests a lot more straightforward. Something like this:
@suite
class S3UploaderTest {
  params: any; // Not sure the best way to type this.
  s3: any; // Same. Sorry, not too experienced with TS.
  uploader: S3Uploader | undefined;

  before() {
    this.params = {};
    this.s3 = sinon.createStubInstance(AWS.S3);
    this.uploader = new S3Uploader(this.s3, this.params);
  }

  @test
  "send should set Body param and return instance"() {
    const stream = "HalloWelt";
    const result = this.uploader.send(stream);
    Chai.expect(this.params.Body).to.equal(stream);
    Chai.expect(result).to.equal(this.uploader);
  }

  @test
  "toBucket should set Bucket param and return instance"() {
    const bucket = "onetimeupload.test";
    const result = this.uploader.toBucket(bucket);
    Chai.expect(this.params.Bucket).to.equal(bucket);
    Chai.expect(result).to.equal(this.uploader);
  }

  @test
  "toFolder should set Key param and return instance"() {
    const path = "onetimeupload.test";
    const result = this.uploader.toFolder(path);
    Chai.expect(this.params.Key).to.equal(path);
    Chai.expect(result).to.equal(this.uploader);
  }

  @test
  "upload should attempt upload to s3"() {
    this.uploader.upload();
    sinon.assert.calledOnce(this.s3.upload);
    sinon.assert.calledWith(
      this.s3.upload,
      sinon.match.same(this.params),
      sinon.match.func
    );
  }

  @test
  async "upload should resolve with response if successful"() {
    const data = { foo: 'bar' };
    this.s3.upload.callsArgWith(1, null, data);
    const result = await this.uploader.upload();
    Chai.expect(result).to.equal(data);
  }

  @test
  async "upload should reject with error if not"() {
    const error = new Error('Test Error');
    this.s3.upload.callsArgWith(1, error, null);
    try {
      await this.uploader.upload();
      throw new Error('Promise should have rejected.');
    } catch (err) {
      Chai.expect(err).to.equal(error);
    }
  }
}
If I were doing this with mocha proper, I'd group each method's tests into a nested describe block. I'm not sure if that's encouraged or even possible with mocha-typescript, but if so you might consider it.
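With plain mocha, a rough sketch of that grouping (using the same sinon, Chai, AWS, and S3Uploader imports as above) might look like:

describe('S3Uploader', () => {
  let params;
  let s3;
  let uploader;

  beforeEach(() => {
    params = {};
    s3 = sinon.createStubInstance(AWS.S3);
    uploader = new S3Uploader(s3, params);
  });

  describe('upload', () => {
    it('resolves with the response if successful', async () => {
      const data = { foo: 'bar' };
      s3.upload.callsArgWith(1, null, data);
      const result = await uploader.upload();
      Chai.expect(result).to.equal(data);
    });
  });
});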
I have a very basic understanding of the TypeScript language, but I would like to know how I can copy multiple documents from one Firestore collection to another collection.
I know how to send the request from the app's code along with the relevant data (a string and the Firebase Auth user ID), but I'm unsure about the TypeScript code to handle the request...
That's a very broad question, but something like this can move moderate amounts of data from one collection to another:
import * as _ from 'lodash';
import {firestore} from 'firebase-admin';

export async function moveFromCollection(collectionPath1: string, collectionPath2: string): Promise<void> {
  try {
    const collectionSnapshot1Ref = firestore().collection(collectionPath1);
    const collectionSnapshot2Ref = firestore().collection(collectionPath2);

    // Here we get all the snapshots from collection 1. This is ok if you only need
    // to move moderate amounts of data (since all data will be stored in memory)
    const collectionSnapshot1Snapshot = await collectionSnapshot1Ref.get();

    // Now let's use lodash chunk to insert data in batches of 500
    const chunkedArray = _.chunk(collectionSnapshot1Snapshot.docs, 500);
    // chunkedArray is now an array of arrays, with max 500 docs in each

    for (const chunk of chunkedArray) {
      const batch = firestore().batch();

      // Use the batch to insert many firestore docs
      chunk.forEach(doc => {
        // You might need some business logic to handle the new address,
        // but maybe something like this is enough
        const newDocRef = collectionSnapshot2Ref.doc(doc.id);
        batch.set(newDocRef, doc.data(), {merge: false});
      });

      // Commit the batch
      await batch.commit();
    }

    console.log('Done!');
  } catch (error) {
    console.log(`something went wrong: ${error.message}`);
  }
}
But maybe you can tell more about the use case?
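If the copy is triggered from your app, a callable Cloud Function is one way to handle the request. A minimal sketch, where the module path and the from/to payload fields are assumptions rather than something from your question:

import * as functions from 'firebase-functions';
// hypothetical path to the function defined above
import { moveFromCollection } from './move-from-collection';

export const copyCollection = functions.https.onCall(async (data, context) => {
  // Only allow signed-in users to trigger the copy
  if (!context.auth) {
    throw new functions.https.HttpsError('unauthenticated', 'You must be signed in.');
  }

  // data.from and data.to are hypothetical payload fields sent by the app
  await moveFromCollection(data.from, data.to);
  return { done: true };
});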
I have the following queries, which start with the getById method firing. Once that runs and extracts data from another document, it saves the result into the race document.
I want to be able to cache the data for ten minutes after I save it. I have taken a look at the cacheman library and I'm not sure if it is the right tool for the job. What would be the best way to approach this?
getById: function(opts, callback) {
  var id = opts.action;
  var raceData = { };
  var self = this;

  this.getService().findById(id, function(err, resp) {
    if (err)
      callback(null);
    else {
      raceData = resp;
      self.getService().getPositions(id, function(err, positions) {
        self.savePositions(positions, raceData, callback);
      });
    }
  });
},

savePositions: function(positions, raceData, callback) {
  var race = [];
  _.each(positions, function(item) {
    _.each(item.position, function(el) {
      race.push(el);
    });
  });
  raceData.positions = race;
  this.getService().modelClass.update({ '_id' : raceData._id }, { 'positions' : raceData.positions }, callback(raceData));
}
I have recently coded and published a module called Monc. You can find the source code over here. It offers several useful methods to store, delete and retrieve data held in memory.
You may use it to cache Mongoose queries with simple chaining, as:
test.find({}).lean().cache().exec(function(err, docs) {
  // docs are fetched into the cache.
});
Otherwise you may need to take a look at the core of Mongoose and override the prototype in order to provide a way to use cacheman as you originally suggested.
Create a node module and force it to extend Mongoose as:
monc.hellocache(mongoose, {});
Inside your module you should extend Mongoose.Query.prototype:
exports.hellocache = module.exports.hellocache = function(mongoose, options, Aggregate) {
  // require cacheman
  var CachemanMemory = require('cacheman-memory');
  var cache = new CachemanMemory();
  var m = mongoose;

  m.execAlter = function(caller, args) {
    // do your stuff here
  };

  m.Query.prototype.exec = function(arg1, arg2) {
    return m.execAlter.call(this, 'exec', arguments);
  };
};
Take a look at Monc's source code, as it may be a good reference for how to extend and chain Mongoose methods.
I will explain with the npm redis package, which stores key/value pairs on a cache server. The keys are derived from queries, and Redis stores only strings.
We have to make sure that keys are unique and consistent, so the key should encode both the query itself and the name of the collection the query is applied to.
When you query, inside the Mongoose library there is
function Query(conditions, options, model, collection) {} //constructor function
which is responsible for building queries. On its prototype there is
Query.prototype.exec = function exec(op, callback) {}
This function is responsible for executing the queries, so we have to patch it and have it perform these tasks:
first, check whether we have any cached data related to the query
if yes, respond to the request right away with the cached data and return
if not, execute the original query, update our cache, and then respond
const redis = require("client");
const redisUrl = "redis://127.0.0.1:6379";
const client = redis.createClient(redisUrl);
const util = require("util");
//client.get does not return promise
client.get = util.promisify(client.get);
const exec = mongoose.Query.prototype.exec;
//mongoose code is written using classical prototype inheritance for setting up objects and classes inside the library.
mongoose.Query.prototype.exec = async function() {
//crate a unique and consistent key
const key = JSON.stringify(
Object.assign({}, this.getQuery(), {
collection: this.mongooseCollection.name
})
);
//see if we have value for key in redis
const cachedValue = await redis.get(key);
//if we do return that as a mongoose model.
//the exec function expects us to return mongoose documents
if (cachedValue) {
const doc = JSON.parse(cacheValue);
return Array.isArray(doc)
? doc.map(d => new this.model(d))
: new this.model(doc);
}
const result = await exec.apply(this, arguments); //now exec function's original task.
client.set(key, JSON.stringify(result),"EX",6000);//it is saved to cache server make sure capital letters EX and time as seconds
};
If we store values as an array of objects, we need to make sure that each object is individually converted back into a mongoose document.
this.model is the model constructor available on the Query instance, and new this.model(obj) converts a plain object into a mongoose document.
Note that if you are storing nested values, use client.hset and client.hget instead of client.get and client.set, as sketched below.
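A rough sketch of that hashed variant, using the collection name as the hash key (that choice is mine, not something required by redis):

const redis = require("redis");
const util = require("util");
const mongoose = require("mongoose");

const client = redis.createClient("redis://127.0.0.1:6379");
client.hget = util.promisify(client.hget);

const exec = mongoose.Query.prototype.exec;

mongoose.Query.prototype.exec = async function() {
  // one hash per collection, with the stringified query as the field
  const hashKey = this.mongooseCollection.name;
  const field = JSON.stringify(this.getQuery());

  const cachedValue = await client.hget(hashKey, field);
  if (cachedValue) {
    const doc = JSON.parse(cachedValue);
    return Array.isArray(doc)
      ? doc.map(d => new this.model(d))
      : new this.model(doc);
  }

  const result = await exec.apply(this, arguments);
  client.hset(hashKey, field, JSON.stringify(result));
  return result;
};

One benefit of this layout is that the whole cache for a collection can be dropped at once with client.del(collectionName).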
Now that we have monkey-patched Query.prototype.exec, you do not need to export this function. Wherever you have a query operation in your code, mongoose will run the patched exec above.