How to save session data in an Alexa skill without using session attributes? - alexa-skill

In my skill I read an RSS feed and I want to reuse that data in the current session, but I can't use SessionAttributes because the object is bigger than 8,000 characters.

You can use persistent attributes instead; the SDK stores them in DynamoDB, and the data will be available between sessions too.
https://developer.amazon.com/en-US/docs/alexa/alexa-skills-kit-sdk-for-nodejs/manage-attributes.html#persistent-attributes
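For example, with the ASK SDK v2 for Node.js it can look roughly like the sketch below. The table name, intent name, and the fetchRssFeed helper are placeholders for illustration, not part of the question.

// Sketch: persistent attributes with the ASK SDK v2 for Node.js.
// Table name, intent name, and fetchRssFeed are placeholders.
const Alexa = require('ask-sdk-core');
const { DynamoDbPersistenceAdapter } = require('ask-sdk-dynamodb-persistence-adapter');

async function fetchRssFeed() {
  // stand-in for the code that downloads and parses the RSS feed
  return [{ title: 'First item' }];
}

const ReadFeedHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
      && Alexa.getIntentName(handlerInput.requestEnvelope) === 'ReadFeedIntent';
  },
  async handle(handlerInput) {
    const attributesManager = handlerInput.attributesManager;
    // Persistent attributes are loaded from and saved to DynamoDB,
    // so the session attribute size problem does not apply.
    const attributes = await attributesManager.getPersistentAttributes() || {};
    attributes.feedItems = attributes.feedItems || await fetchRssFeed();
    attributesManager.setPersistentAttributes(attributes);
    await attributesManager.savePersistentAttributes();
    return handlerInput.responseBuilder.speak(attributes.feedItems[0].title).getResponse();
  },
};

exports.handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(ReadFeedHandler)
  .withPersistenceAdapter(new DynamoDbPersistenceAdapter({ tableName: 'alexa-skill-attributes', createTable: true }))
  .lambda();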

If you are using AWS Lambda for your backend, you can cache the data in a variable declared outside the handler; it will only persist between invocations while the same execution environment is kept warm. Otherwise, use DynamoDB.
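A minimal sketch of that caching idea, assuming a fetchRssFeed helper (a placeholder, not from the question):

// Declared outside the handler, so the value survives between invocations
// only while this Lambda execution environment is reused (not guaranteed).
let cachedFeed = null;

async function fetchRssFeed() {
  // stand-in for the code that downloads and parses the RSS feed
  return [];
}

exports.handler = async (event) => {
  if (!cachedFeed) {
    cachedFeed = await fetchRssFeed();
  }
  // build the Alexa response from cachedFeed here
};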

Related

Storing data temporarily in a Node.js application

I'm developing a Node.js back-end application that fetches data from a third-party hotel API provider based on user input from an Angular application. The user should be able to filter and sort the received data (e.g. filter by price or hotel rating, sort by price, hotel name, etc.), but unfortunately the API doesn't support this. So I thought of storing that data temporarily in Node.js, but I'm not sure what the right approach is. Would Redis support this? A good suggestion would be really appreciated.
Redis should be able to support something like this. Alternatively, you could do all of the sorting client side and save the hotel information in local or session storage. Whichever route you go with, make sure to save the entire response under a unique key so that it is easy to fetch, or, if you save individual values to Redis, make sure each has a key to query against. Also keep in mind that Redis is best for caching information for short periods, rather than as a long-term store like PostgreSQL or MySQL. But for temporary responses it should be a fine approach.
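A minimal sketch of that idea with the node-redis client (v4 API assumed; the key format and 15-minute TTL are illustrative):

// Cache the whole hotel API response in Redis under a unique key.
const { createClient } = require('redis');
const client = createClient();
client.connect(); // connect once at startup (await this before issuing commands in real code)

async function cacheHotelResponse(searchId, hotels) {
  // Store the entire response with a short TTL, so later filter/sort
  // requests reuse it instead of calling the hotel API again.
  await client.set(`hotels:${searchId}`, JSON.stringify(hotels), { EX: 900 });
}

async function getCachedHotels(searchId) {
  const raw = await client.get(`hotels:${searchId}`);
  return raw ? JSON.parse(raw) : null;
}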

Switch Databases dynamically

I'm building a POS (point of sale) as SaaS with React on the frontend, Node.js on the backend (REST API), and MongoDB as the database.
I've finished a basic version, and now I want every registered user to have their own database.
After reading some articles and questions online, my conclusion was to switch between databases each time the frontend calls the backend (API).
General Logic:
User logs in
In the backend, I use a general database to check the user's credentials and also to look up the name of that user's database.
Each time the frontend calls the API, the following code runs in a middleware to determine which database the API should use:
var dbUser = db.useDb('nameDataBaseUser');
var Product = dbUser.model('Product', ProductSquema);
The schemas and the variable 'db' are defined once in the code:
var db = mongoose.createConnection('mongodb://localhost');
Problem:
I don't know if this is the correct solution for what I am trying to build, but it seems inefficient to me that the model is regenerated every time the API is called, because some endpoints (i.e. some middlewares) use up to 4 different models.
Question:
Is this the best way, or is there a better approach to this problem?
Not sure about the idea of creating a new database for each new user. That seems to add a lot of complexity, makes it difficult to maintain, and makes it difficult to access the data later for analytics and the like. Why not use a new collection per user? That way you only need one set of database access credentials. Furthermore, creating a new collection happens automatically the first time you store data in it.
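A rough sketch of that collection-per-user idea with a single Mongoose connection. The getProductModel helper, the 'pos' database name, and the collection naming scheme are illustrative; ProductSquema stands in for the schema from the question.

// One connection, one schema, collection name derived from the user.
const mongoose = require('mongoose');
const conn = mongoose.createConnection('mongodb://localhost/pos');

// stand-in for the schema defined in the question
const ProductSquema = new mongoose.Schema({ name: String, price: Number });

const modelCache = {};
function getProductModel(userId) {
  const collectionName = `products_${userId}`;
  if (!modelCache[collectionName]) {
    // The third argument pins the collection name; caching the compiled
    // model avoids re-registering it on every request.
    modelCache[collectionName] = conn.model(collectionName, ProductSquema, collectionName);
  }
  return modelCache[collectionName];
}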

Sharing memory-only NeDB instance in multiple Electron BrowserWindows

We are developing an app using Electron and Vue.js. In our app we use NeDB to temporarily store JSON documents after receiving (and decrypting) them from a Firebase database. A requirement of the app is that the decrypted data stays in memory and is not saved to disk during the user session. Therefore we use NeDB with the inMemoryOnly flag.
Our goal is to show reports on the contents of the database in a different BrowserWindow, in order to print them or save them as a PDF.
We tried to initialize the database using a global variable, but unfortunately the database content was empty. Is there another way to access the database from within a different BrowserWindow?
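One thing worth noting: each BrowserWindow runs in its own renderer process, so a plain JavaScript variable is not shared between windows, which would explain the empty content. A common pattern, sketched below as an assumption rather than a confirmed fix (the 'db-find' channel name is made up), is to keep the in-memory NeDB instance in the main process and query it over IPC:

// Main process: own the in-memory NeDB instance and answer IPC queries.
const { ipcMain } = require('electron');
const Datastore = require('nedb');
const db = new Datastore({ inMemoryOnly: true });

ipcMain.handle('db-find', (event, query) => {
  return new Promise((resolve, reject) => {
    db.find(query, (err, docs) => (err ? reject(err) : resolve(docs)));
  });
});

// Renderer process (any BrowserWindow):
// const docs = await ipcRenderer.invoke('db-find', { type: 'report' });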

Scan AWS DynamoDB records only when there is new information

I am struggling to work out something that seems like it would be so simple.
Here is some context:
I have a web app with 6 graphs powered by D3, and this data is stored in one table in DynamoDB. I am using AWS and Node.js with the aws-sdk.
I need to have the graphs updating in real-time when new information is added.
I currently have it set up so that the scan function runs every 30 seconds for each graph; however, when I have multiple users, the database is hit so many times that it maxes out the read capacity.
I want it so that when data in the database is updated, the server saves that data to a document that users can poll instead of hitting the database itself, and that document simply updates when new information is added to the database.
Basically, is there any way to have DynamoDB scanned only when there is new information?
I was looking into using streams, however I am completely lost on where to start and whether that is the best approach to take.
You would want to configure a DynamoDB Stream on your table to trigger something like an AWS Lambda function. That function could then scan the table, generate your new file, and store it somewhere like S3.
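Roughly, such a function could look like the sketch below (AWS SDK for JavaScript v2; the table, bucket, and key names are placeholders, and pagination of the scan is omitted):

// Lambda triggered by the table's DynamoDB Stream whenever items change.
const AWS = require('aws-sdk');
const dynamo = new AWS.DynamoDB.DocumentClient();
const s3 = new AWS.S3();

exports.handler = async (event) => {
  // Re-scan the table (single page only in this sketch).
  const data = await dynamo.scan({ TableName: 'GraphData' }).promise();

  // Write the fresh snapshot to S3; the web app polls this file
  // instead of scanning DynamoDB directly.
  await s3.putObject({
    Bucket: 'my-graph-snapshots',
    Key: 'latest.json',
    Body: JSON.stringify(data.Items),
    ContentType: 'application/json',
  }).promise();
};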

Generating result sets on the server by userId - is this something I should offload to AWS Lambda?

My backend stack is basically node (express) and mongo. Nothing too fancy.
However, I'm generating search and browse page results for requests from my client side by userId. For example, if a user favorites an item, that item is added to a list of favorite itemIds on the back end for that particular user. So, if the user happens to search for "green scarf" and there's a green scarf they'd already favorited, the resulting JSON will show this via an isFavorite boolean.
Favorites are just one aspect - there are a few other tags as well, such as whether a friend has favorited an item, etc.
Is this a use case that warrants offloading to AWS lambda? The only things I need to do are to connect to my database, execute a query, and return the results.
Thanks
You can do this from AWS Lambda or not. What I would consider here is using Redis to hold the relevant results and tags. You can use Redis in addition to Mongo, or you can use Redis alone with persistence enabled.
You didn't describe your code or your load in any detail, but if you're getting a lot of queries that need to check the database to annotate results for every user, then keeping those tags in an in-memory data store can help with performance regardless of whether you use AWS Lambda or a traditional Node process.
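As a sketch of that suggestion, each user's favorite itemIds could live in a Redis set and be used to annotate the results in memory (node-redis v4 API assumed; key names are illustrative, and the client is assumed to be connected at startup):

// Annotate query results with isFavorite using a Redis set per user.
const { createClient } = require('redis');
const redis = createClient();
redis.connect(); // connect once at startup (await this before issuing commands in real code)

async function annotateWithFavorites(userId, items) {
  // Fetch the user's favorite itemIds once per request.
  const favorites = new Set(await redis.sMembers(`favorites:${userId}`));
  return items.map((item) => ({
    ...item,
    isFavorite: favorites.has(String(item._id)),
  }));
}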
