How to apply a microservices architecture to a voting application? - node.js

I am developing the FreeCodeCamp full stack voting application and would like to apply a microservices architecture. The user stories of the voting application are as follows:
As an authenticated user, I can keep my polls and come back later to access them.
As an authenticated user, I can share my polls with my friends.
As an authenticated user, I can see the aggregate results of my polls.
As an authenticated user, I can delete polls that I decide I don't want anymore.
As an authenticated user, I can create a poll with any number of possible items.
As an unauthenticated or authenticated user, I can see and vote on everyone's polls.
I am conceptualizing an architecture and came up with this:
The application is composed of 6 microservices:
1. UI
2. Aggregator
3. Authorization (login, logout)
4. Social Media (sharing)
5. Polls (with db)
6. Users (with db)
I'm curious how a developer who has built microservices would break these user stories down into services. Thank you!

It looks like you have already done a good job splitting the application into microservices. Each of them has its own persistence, and they should communicate over a technology-agnostic protocol or via asynchronous events.
I would do it pretty much like you did. Maybe Auth should be split into Authentication (e.g., using a stateless JWT) and Authorization (with its own database).
The Authentication service would ensure that the user is who they say they are.
The Authorization service would verify that a user may modify only their own polls.
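A minimal sketch of that split in Express, assuming the jsonwebtoken package; the middleware handles authentication (who you are) and the route handler handles authorization (what you may touch). The polls client module and the token payload shape are hypothetical:

const express = require('express');
const jwt = require('jsonwebtoken');
const polls = require('./polls-client'); // hypothetical client for the Polls service

const app = express();
const JWT_SECRET = process.env.JWT_SECRET; // shared with the Authentication service

// Authentication: verify the stateless JWT and attach its payload to the request.
function authenticate(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.replace(/^Bearer /, '');
  try {
    req.user = jwt.verify(token, JWT_SECRET); // e.g. { id: 'u123', name: 'Ada' }
    next();
  } catch (err) {
    res.status(401).json({ error: 'invalid or missing token' });
  }
}

// Authorization: only the poll's owner may delete it.
app.delete('/polls/:pollId', authenticate, async (req, res) => {
  const poll = await polls.findById(req.params.pollId);
  if (!poll) return res.sendStatus(404);
  if (poll.ownerId !== req.user.id) return res.sendStatus(403);
  await polls.remove(poll.id);
  res.sendStatus(204);
});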

As it stands, the Polls module seems to be responsible for creating polls, conducting them, managing a poll session (given that a poll might contain more than one question to answer), and keeping track of results.
You might keep all of these things in one database for now, but they are separate concerns, and (if you scale this up) different business teams may end up owning them. So I would suggest splitting Polls along those lines; that will help when you start scaling up.
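If you do split along those lines, the pieces can stay loosely coupled by publishing domain events rather than calling each other directly. A minimal sketch with RabbitMQ via the amqplib package; the exchange name, event shape, and broker URL are assumptions:

const amqp = require('amqplib');

// Polls service: publish an event whenever a vote is cast.
async function publishVoteCast(pollId, optionId) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertExchange('poll-events', 'fanout', { durable: false });
  ch.publish('poll-events', '',
    Buffer.from(JSON.stringify({ type: 'VoteCast', pollId, optionId })));
  await ch.close();
  await conn.close();
}

// Results service: consume the event and update its own tally.
async function subscribeToVotes(onVote) {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertExchange('poll-events', 'fanout', { durable: false });
  const q = await ch.assertQueue('', { exclusive: true });
  await ch.bindQueue(q.queue, 'poll-events', '');
  ch.consume(q.queue, (msg) => {
    if (msg) onVote(JSON.parse(msg.content.toString()));
  }, { noAck: true });
}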

Related

Rest API real time Tricky Question - Need Answer

I was recently interviewed by an MNC technical panel and they asked me various questions related to REST APIs. I was able to answer all but the two below; I gave answers but am not sure they were correct. Can somebody answer my queries with real-world examples?
1) How can I secure my REST API when somebody sends a request from Postman? The user provides all the correct information in the headers, like session ID, token, etc.
My answer was: the token sent in the request header should be associated with the successfully authenticated user's info, and only then is the user granted access, whether the request comes from Postman or from the application calling the API. (The panel said no to my answer.)
2) How can I handle concurrency in a REST API? That is, if multiple users are accessing the API at the same time (e.g., multiple POST requests coming in to update data in a table), how do you make sure one request is served at a time and the values are updated as requested by the different users?
My answer was: in Entity Framework we have a class called DbUpdateConcurrencyException, which takes care of handling concurrency so that one request is served at a time.
I am not sure about either of my answers, and I could not find anything definitive by googling. Your expert help is appreciated. Thanks.
1) It is not possible. Requests from Postman or any other client or proxy (Burp, ZAP, etc.) are indistinguishable from browser requests if the user has the appropriate credentials (which, for example, they can get by observing and copying normal requests). It is not possible to authenticate the client application, only the client user.
2) It would be really bad if a web application could only serve one client at a time; think of large-traffic sites like Facebook. :) In many (maybe most?) stacks, each request gets its own thread (or similar) to run on, which finishes when the request-response cycle ends. These threads are not supposed to communicate directly with each other while running.
Data consistency is a responsibility of the persistence technology: if you are using a database, for example, it must guarantee that concurrent queries behave as if they ran one after the other. Note that if an application runs multiple queries, database transactions or locks need to be used at the database level to maintain consistency. But this is not about client requests at all; it's about how you use your persistence technology to achieve consistent data.
With a traditional RDBMS this is mostly easy; with other persistence technologies (for example, plaintext files as storage) it's much harder, because file operations typically don't offer a facility similar to transactions (though they do support locks, which you have to manage manually).
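To make the transaction point concrete, here is a minimal sketch using node-postgres (pg); the counters table and its columns are made up. The row lock taken by SELECT ... FOR UPDATE makes the read-modify-write sequence safe under concurrent requests:

const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from environment variables

// Safely increment a counter under concurrent requests: the row lock taken by
// SELECT ... FOR UPDATE serializes the read-modify-write sequence.
async function incrementCounter(id) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const { rows } = await client.query(
      'SELECT value FROM counters WHERE id = $1 FOR UPDATE', [id]);
    await client.query(
      'UPDATE counters SET value = $1 WHERE id = $2', [rows[0].value + 1, id]);
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}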

Microservices: how to effectively deal with data dependencies between microservices

I am developing an application using the microservices approach with the MEAN stack. I am running into a situation where data needs to be shared between multiple microservices. For example, let's say I have user, video, and message (sending/receiving, inbox, etc.) services. The video and message records belong to a user account, so as users create videos and send/receive messages, a foreign key (userId) has to be associated with the video and message records they create. I also have scenarios where I need to display, for example, the first, middle, and last name associated with each video. Now say that on the front end a user is scrolling through a list of videos uploaded to the system, 50 at a time. In the worst case, each of those 50 videos is tied to a unique user.
There seem to be two approaches to this issue:
One: I make an API call to the user service to get the user tied to each video in the list. This seems inefficient, as it could get really chatty if I am making one call per video. In a second variant of this approach, I would get the list of videos and send one distinct list of user foreign keys to query, getting all the users tied to the videos at once (sketched below, after the second option). This seems more efficient, but it still feels like I am losing performance putting everything back together for display or whatever other manipulation is needed.
Two: whenever a new user is created, the account service sends a message with the user information each other service needs to a fanout queue, and it is then the responsibility of the individual services to add the new user to a table in their own databases, thus maintaining loose coupling. The big downside here is the data duplication and needing the fanout queue to propagate updates and ensure eventual consistency. In the long run, though, this approach seems like it would be the most efficient from a performance perspective.
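To make the batched variant of option one concrete, a minimal sketch; the service URLs and the /users/batch endpoint are assumptions, not an existing API:

const fetch = require('node-fetch'); // or the global fetch on Node 18+

// Fetch a page of videos, then resolve their owners with a single batched
// call to the user service instead of one call per video.
async function getVideosWithOwners(page) {
  const videos = await (await fetch(
    `http://video-service/videos?page=${page}&limit=50`)).json();

  // Distinct set of owner IDs: at worst 50 of them, but only one round trip.
  const userIds = [...new Set(videos.map(v => v.userId))];
  const users = await (await fetch(
    `http://user-service/users/batch?ids=${userIds.join(',')}`)).json();

  const byId = new Map(users.map(u => [u.id, u]));
  return videos.map(v => ({ ...v, owner: byId.get(v.userId) }));
}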
I am torn between these two approaches, as they both have their share of tradeoffs. Which approach makes the most sense to implement and why?
I'm also interested in this question.
First of all, the scenario you described is very common. Users, videos, and messages are definitely three different microservices; there is no issue with how you broke the system down into pieces.
Secondly, there are multiple options for solving the data-sharing problem. Take a look at this great article from auth0: https://auth0.com/blog/introduction-to-microservices-part-4-dependencies/
Don't restrict your design decision to those 2 options you've outlined. The hardest thing about microservices is to get your head around what a service is and how to cut your application into chunks/services that make sense to be implemented as a 'microservice'.
Just because you have those 3 entities (user, video & message) doesn't mean you have to implement 3 services. If your actual use case shows that these services (or entities) depend heavily on each other to fulfill a simple request from the front end, then that's a clear signal that your cutting was not right.
From what I see in your example, I'd design one microservice that fulfills that request. Remember that one of the design fundamentals of a microservice is to be as independent as possible.
There's no need to overcomplicate services; it's not SOA.
https://martinfowler.com/articles/microservices.html -> great read!
Regards,
Lars

Check user permissions in RESTful API

I'm developing a SaaS API with Node.js, Express, and MongoDB. It implements a JWT authentication/security scheme.
In my personal case, I have (for now) two collections: User and Client.
You can see the fields that each collection has (for definition purposes). In terms of endpoint design I'm using a truly RESTful approach, so:
/api/users/{userId}/clients: to insert clients, for example.
This is exactly the point: before posting a new client, I want to check whether the user's price plan allows them to do that. In terms of logic:
function post(req, res) {
  // Check that the user id from the JWT matches the one in the endpoint URL
  if (req.user._id == req.params.userId) {
    // Here I need the user's price plan and a count of the Clients the user has
  }
}
I have thought of a couple of hypotheses, but I truly don't know which one is best:
Query the User collection to get the price plan, run a count query on the Clients collection, validate, and then post the new Client.
Put the user's price plan in the JWT, run a count query on the user's Clients collection, validate, and then post the new Client.
These are the two main solutions I have thought about, but I have serious doubts, security- and performance-wise, about which one I should implement.
Thank you in advance.
I had the same doubts. If you put anything into your tokens, then when that information changes you will have to reissue the tokens (forcing users to log out and log in again) or implement complex token-update logic. The application also evolves: today you need the price plan, tomorrow something else. Rewriting every user's token each time (in effect using the JWT as distributed storage) is probably not a good idea. That's why it is better to keep the JWT as small as possible.
Your question is somewhat opinion-based, but my own opinion is that I would store only the userId in the JWT (plus meta information if needed), not app-specific things. Reading from the database is the way to go.
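Following that advice (only the userId in the JWT, everything else read from the database), a minimal sketch of the first option with Mongoose; the plan names, limits, and model files are assumptions:

const User = require('./models/user');     // assumed Mongoose models
const Client = require('./models/client');

const PLAN_LIMITS = { free: 5, pro: 50, enterprise: Infinity }; // assumed limits

async function post(req, res) {
  // Check that the JWT user id matches the one in the endpoint URL
  if (req.user._id !== req.params.userId) return res.sendStatus(403);

  // One read for the plan, one count for the existing clients
  const user = await User.findById(req.user._id);
  const clientCount = await Client.countDocuments({ owner: user._id });

  if (clientCount >= PLAN_LIMITS[user.pricePlan]) {
    return res.status(403).json({ error: 'price plan limit reached' });
  }
  const client = await Client.create({ ...req.body, owner: user._id });
  res.status(201).json(client);
}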

Updating per user stats in SignalR hub

I'm working on a simple game using SignalR 2 and MVC 5. I need to track the number of "deaths" and "kills" for each user. These stats need to be readable and writable from multiple game instances (a user can have multiple concurrent sessions) across multiple servers.
Is adding fields to ApplicationUser : IdentityUser a reasonable solution? I'm planning on using the built-in authentication system because I like how easy it is to support Facebook and other OAuth providers.
How should I update these stats in an optimized manner that reduces multi-user/thread/server issues and is highly scalable? The stats themselves are simple and probably only update once every few seconds per user, but I'd like a design that can support millions of users across multiple servers.
For example, I know I could add this code inside an MVC controller to update the stats:
var um = HttpContext.GetOwinContext().GetUserManager<ApplicationUserManager>();
var user = um.FindById(User.Identity.GetUserId());
user.Deaths++;
um.Update(user);
However, that doesn't seem very safe/transactional. If another process/connection/server is updating that user at the same time, bad things are likely.
In a pure SQL design I'd probably have a stored procedure that runs in a SQL transaction to get the current counter and increment it. I'm not sure how to translate that into a good SignalR design that takes advantage of all that the various API layers have to offer (OWIN, MVC, ASP.NET, etc.). Ideally it would be something I can easily add Redis to down the road, if direct SQL access becomes an issue.
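The stored-procedure idea often reduces to a single atomic statement, which sidesteps the read-modify-write race in the controller code above. A minimal sketch, written with node-postgres purely for illustration (the question's stack is ASP.NET, but the SQL idea carries over unchanged); the table and column names are made up:

const { Pool } = require('pg');
const pool = new Pool();

// One atomic UPDATE: the database serializes concurrent increments itself,
// so there is no read-modify-write window to race in.
async function recordDeath(userId) {
  await pool.query(
    'UPDATE user_stats SET deaths = deaths + 1 WHERE user_id = $1', [userId]);
}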

Securing RESTful API: Is it possible to disallow XHR requests from the JS Console?

My application (mostly client-side code written in backbone) interfaces with a Node.js server. The sole purpose of my server is to provide API endpoints for my backbone application.
GET requests are pretty safe; attackers can't do much there. But I do have a few POST and PUT requests. One of the PUT requests is responsible for updating the vote count for a particular user, e.g.:
app.put('/api/vote', function(req, res) {
  // Form data from the client
  var winningPerson = req.body.winner; // userID
  var losingPerson = req.body.loser;   // userID
});
I have noticed that some people were spamming PUT requests for one particular user via the JS console or some kind of REST API console, bypassing the intent of the application as enforced by the user interface. If you use this application as intended, it never allows you to vote for the same person multiple times in a row, let alone for any arbitrary user in the database (assuming you know their user ID).
But yes, yes, I know: "Don't trust the client." So how can I fix the above problem? Would some kind of IP-address check help prevent voting multiple times within a span of 3-5 minutes? What can I do to disallow access to my API from the console, so that users cannot arbitrarily vote for anyone they wish, but can only vote by clicking on an image with the mouse, or at the very least can only vote from the console for those two people rather than any arbitrary person?
The answer lies within your server: it shouldn't allow a user to vote more than once within the specified timespan. This is the kind of business rule you can enforce only on the server, because only the server is under your control.
Any enforcement in the UI is good and useful, but it is not bullet-proof; you definitely have to check on the server to be sure. There is much more to the server's business logic than:
The sole purpose of my server is to provide API endpoints for my backbone application.
Don't try to control something that is out of your control: the client side of your application. Some people vote multiple times because you (your API) ALLOW them to do so. As soon as your server replies "Try again in 5 minutes, dude," they'll stop doing it, or at least there will be no harm when they do.
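A minimal sketch of that server-side rule in Express; the in-memory Map is a stand-in for a real store (it only works within a single process, so a database or Redis would be needed across servers), and how you identify voters is an assumption:

const express = require('express');
const app = express();
app.use(express.json());

const FIVE_MINUTES = 5 * 60 * 1000;
const lastVoteAt = new Map(); // voter id -> timestamp; single-process only

app.put('/api/vote', function (req, res) {
  const voterId = req.user ? req.user._id : req.ip; // however you identify voters
  const last = lastVoteAt.get(voterId) || 0;

  // Enforce the business rule on the server, regardless of which client sent it
  if (Date.now() - last < FIVE_MINUTES) {
    return res.status(429).json({ error: 'Try again in 5 minutes' });
  }
  lastVoteAt.set(voterId, Date.now());
  // ...record req.body.winner / req.body.loser here...
  res.sendStatus(204);
});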
