I want to implement API request rate limiting per account plan. Say we have users, and every user has a plan that limits how many API requests per day they can make.
So now, how can I implement such an API limit policy in LoopBack 3.x?
Thanks
If you're planning on using LoopBack on IBM Bluemix hosting, you can use their API Connect service, which includes customer plan-based policies with API-level throttling, monitoring, API billing, and many other API management features.
The StrongLoop API Microgateway used by API Connect is now open source (April 2017).
Since LoopBack is just a layer on top of Express, you can alternatively just use an Express lib.
For rate limiting on a single standalone LoopBack server you can use one of these Express libs (see the sketch after this list):
express-rate-limit
express-throttle
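For example, here is a minimal sketch of express-rate-limit mounted on a LoopBack 3 app. The daily quota and the /api mount path are assumptions for illustration, and the exact option names have varied a bit between express-rate-limit versions:

```js
// server/server.js (LoopBack 3) - a minimal sketch, not a drop-in config
var loopback = require('loopback');
var boot = require('loopback-boot');
var rateLimit = require('express-rate-limit');

var app = module.exports = loopback();

// Limit every REST endpoint to an assumed quota of 1000 requests/day per client
app.use('/api', rateLimit({
  windowMs: 24 * 60 * 60 * 1000, // 1-day window
  max: 1000,                     // assumed daily quota
  message: 'Daily request quota exceeded'
}));

boot(app, __dirname, function (err) {
  if (err) throw err;
  app.listen();
});
```

To make the limit depend on the user's plan, you'd key the counter by user rather than by IP and look the quota up per request, which is what the Redis-backed options below are better suited for.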
If you plan to use this on a cluster of LoopBack servers, you'll need to store the API call counts as shared server state per user or user session. The weapon of choice for this is Redis, since it's a high-performance in-memory data store that scales well. Rate-limiting Express libs that support Redis include the following (a sketch follows the list):
strict-rate-limiter
express-brute
express-limiter
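For instance, express-limiter keeps its counts in Redis, so every node in the cluster sees the same totals. A rough sketch, with the quota numbers assumed:

```js
var express = require('express');
var app = express();
var client = require('redis').createClient(); // shared by all cluster nodes
var limiter = require('express-limiter')(app, client);

// Allow an assumed 1000 API calls per user per day, counted in Redis
limiter({
  path: '/api/*',
  method: 'all',
  lookup: ['user.id'],          // key the counter by the authenticated user
  total: 1000,
  expire: 1000 * 60 * 60 * 24   // window length in ms
});
```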
Finally, you could also implement rate limiting on a reverse proxy. See Nginx Rate Limiting.
This is an access control policy.
You can handle this with a custom role created via a role resolver: register a custom role and, in the resolver callback, check whether the current user has exceeded the rate limit.
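A rough sketch of such a resolver in a LoopBack 3 boot script. The AppUser model, the plan.dailyLimit property, and the getTodaysRequestCount helper are assumptions standing in for your own schema and counter store:

```js
// server/boot/rate-limit-role.js
module.exports = function (app) {
  var Role = app.models.Role;

  Role.registerResolver('withinRateLimit', function (role, context, cb) {
    var userId = context.accessToken && context.accessToken.userId;
    if (!userId) return cb(null, false); // anonymous callers are denied

    app.models.AppUser.findById(userId, function (err, user) {
      if (err || !user) return cb(err, false);
      // getTodaysRequestCount is a hypothetical helper over your counter store
      getTodaysRequestCount(userId, function (err, count) {
        if (err) return cb(err, false);
        cb(null, count < user.plan.dailyLimit);
      });
    });
  });
};
```

You would then reference withinRateLimit as the principalId of an ALLOW rule in the model's ACLs.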
Such a policy can only* be implemented with a database, such as Redis or Memcached. For my projects I rely on redback, which is based on Redis. It has a built-in RateLimit helper (among others) and takes care of some race conditions and atomic transactions.
* If you don't have a database, you could store the counts in memory (in a hash or array) and flush them on an interval, but I'd go with redback :)
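For reference, this is roughly the idea redback's RateLimit helper wraps, shown here with raw node_redis (key naming and limits are assumptions; note that redback additionally guards the INCR/EXPIRE pair against races):

```js
var redis = require('redis').createClient();

// Fixed-window counter: true if userId has exceeded `limit` calls in the window
function isRateLimited(userId, limit, windowSeconds, cb) {
  var key = 'ratelimit:' + userId;
  redis.incr(key, function (err, count) {
    if (err) return cb(err);
    if (count === 1) redis.expire(key, windowSeconds); // first hit opens the window
    cb(null, count > limit);
  });
}

// Usage: at most 1000 requests per user per day (assumed quota)
isRateLimited('user42', 1000, 86400, function (err, limited) {
  if (limited) console.log('reply with 429 Too Many Requests');
});
```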
1st approach
Implement the user profile in every microservice.
2nd approach: user profile service
Implement the user profile check in a single microservice.
What are other factors I might consider when making a decision? What would you do?
Actually, you haven't mentioned yet another approach, which I can recommend considering:
Introduce a gateway: a special service that takes care of authorization / authentication between the "outer world" and your backend services:
Client ---> Gateway -----> Service 1
                    |----> Service 2
                    ...
It will be impossible to access Service 1, Service 2, etc. from the outer world directly; only the gateway will be exposed, and it will also take care of routing.
On the other hand, all requests coming to the backend can be considered already authorized (they might carry additional headers with the "verified" roles list, or use some "standard" technology like JWT).
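A minimal sketch of such a gateway in Node, assuming JWT as the "standard" technology. The route table, header name, and secret handling are illustrative only:

```js
var express = require('express');
var jwt = require('jsonwebtoken');   // verifies the client's JWT
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer();
var routes = {                        // assumed internal addresses
  '/service1': 'http://service1.internal:3001',
  '/service2': 'http://service2.internal:3002'
};

var app = express();

// AuthN/AuthZ happens once, here, for every backend service
app.use(function (req, res, next) {
  var token = (req.headers.authorization || '').replace('Bearer ', '');
  jwt.verify(token, process.env.JWT_SECRET, function (err, claims) {
    if (err) return res.status(401).end();
    // pass the "verified" roles list downstream in a header
    req.headers['x-verified-roles'] = (claims.roles || []).join(',');
    next();
  });
});

// Routing: forward to the matching backend service
app.use(function (req, res) {
  var target = routes['/' + req.url.split('/')[1]];
  if (!target) return res.status(404).end();
  proxy.web(req, res, { target: target });
});

app.listen(8080);
```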
Besides the separation of concerns (backend services "think" only about the business logic implementation), this approach has the following benefits:
All the logic is in one place: easy to fix, upgrade, etc. Your first approach, by contrast, might suffer in a more diverse ecosystem (what if services are written in different languages, using different frameworks, etc.?): you'd have to re-implement the authZ in each technology stack.
The user is not "aware" of the whole variety of services (only the gateway is an entry point; routing is done in the gateway).
No "redundant" calls (read: CPU / memory / IO) by backend services for the authZ. Compare with the second presented approach: you'd have to call an external service upon each request.
You can scale the authZ service (gateway) and backend services separately.
For example, if you introduce a new service, you don't have to think about how much overhead it will add to your authZ component (Redis, database, etc.), so you can scale out purely by business requirements.
We produce an enterprise DLT (blockchain) application for big banks, FMIs, exchanges, etc.
Being distributed, each instance of the application is installed on-prem for each customer, as they must remain the sovereign owners of their private keys.
We want to integrate with a SaaS application that is widely used in the banking sector. We intend to achieve this by writing a "connector" which will also run on-prem and be able to communicate and marshal data between the SaaS system and our on-prem system.
Events occur in the SaaS app, which must then trigger something to happen in our on-prem app.
The SaaS app has a RESTful API as well as webhooks. So there are two options in my eyes:
Poll the RESTful API (a polling sketch follows these points)
Con: This is inefficient as most traffic will simply be "any new events?" "no"
Con: There will be some latency between the event occurring on the SaaS system and our on-prem app being triggered
Pro: This is stable. If the connector (the thing doing the polling) goes down, it will pick up any missed "events" from the SaaS system when it comes back up and process them
Pro: There is no requirement to allow internet traffic into the firewall - the comms are all outbound.
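A sketch of the polling loop described above. The /events endpoint, the since parameter, and the loadCursor/saveCursor persistence helpers are hypothetical; the key point is that the cursor is persisted, so a restarted connector resumes where it left off:

```js
var request = require('request'); // any HTTP client would do

function pollOnce(cursor, cb) {
  request.get({
    url: 'https://saas.example.com/api/events', // hypothetical endpoint
    qs: { since: cursor },                      // resume after the last seen event
    json: true
  }, function (err, res, events) {
    if (err) return cb(err);
    events.forEach(processEvent);               // processEvent must be idempotent
    cb(null, events.length ? events[events.length - 1].id : cursor);
  });
}

setInterval(function () {
  pollOnce(loadCursor(), function (err, newCursor) {
    if (!err) saveCursor(newCursor);            // durable cursor = no missed events
  });
}, 30 * 1000);
```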
Use the webhooks (a deduplication sketch follows these points)
Pro: Very efficient
Pro: Get events in near real-time
Con: What happens if the connector is down and we miss a webhook? Does the SaaS system need a retry mechanism? We need to ensure that we process each message exactly once (this is important because the action we perform moves large amounts of funds, so double processing would be extremely bad!).
Con: The bank would need to punch a hole in the firewall to allow the SaaS app to communicate into the connector - the bank's security teams won't like this IMO.
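For the exactly-once concern, the usual pattern is at-least-once delivery (SaaS-side retries) plus deduplication in the connector. A sketch, assuming the SaaS system sends a unique event id in the payload and using a Redis SET NX as the dedupe store:

```js
var express = require('express');
var redis = require('redis').createClient();
var app = express();

app.post('/webhook', express.json(), function (req, res) {
  var eventId = req.body.id; // unique id assumed to be supplied by the SaaS system
  // SET ... NX records the id only if unseen; EX expires it after the retry horizon
  redis.set('seen:' + eventId, '1', 'NX', 'EX', 7 * 24 * 3600, function (err, ok) {
    if (err) return res.status(500).end();         // 5xx so the SaaS side retries
    if (ok === null) return res.status(200).end(); // duplicate: ack and do nothing
    processEvent(req.body);                        // hypothetical funds-moving action
    res.status(200).end();
  });
});

app.listen(9090);
```

Retries plus dedupe give exactly-once processing even though delivery itself is only at-least-once.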
Is there a common, enterprise-ready, security-policy-friendly way to deal with this?
I think here you can use the RESTful API with an enterprise-ready solution for API management. I would recommend that you explore APIGEE and see if it fits your use case.
APIGEE is a platform for developing and managing API proxies.
An API proxy is an interface to developers that want to use backend services. Rather than having them consume those services directly, they access an Edge API proxy that you create. You can have it on cloud and also on-premises.
Here, you will solve your two main issues, which are event management and latency.
DocumentDB on Azure can, besides the data, hold JavaScript app logic in stored procedures, triggers, and user-defined functions.
If the app logic is computationally fairly simple (or even if it is not), would it then be a usable solution to have the entire backend in the DocumentDB instance and have the client apps connect directly via the DocumentDB REST interface? Or am I missing something in terms of security or performance here?
Yes, there are scenarios where you don't need a middle tier and can perform queries directly from your JavaScript client against DocumentDB.
However, you don't want to expose a master key to the client; instead you want to work with resource tokens, so you need a small middle-tier service that issues a time-bound token.
Also see Securing access to DocumentDB data.
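A sketch of that small middle tier. Only this service holds the master key; createResourceTokenForUser is a hypothetical wrapper around DocumentDB's permissions API, and requireLogin stands in for your authentication middleware:

```js
var express = require('express');
var app = express();

app.get('/token', requireLogin, function (req, res) {
  // issue a token scoped to this user's own documents only
  createResourceTokenForUser(req.user.id, function (err, token) {
    if (err) return res.status(500).end();
    res.json({ token: token, expiresInSeconds: 3600 }); // time-bound
  });
});

app.listen(3000);
```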
I'm building a game for Windows Phone 8 and would like to use Windows Azure SQL Database for storing my users' data (mostly scores and rankings).
I have been reading Azure's documentation on SQL Database and found this link which describes just the scenario I'm looking for (it's Scenario B in the picture): I want my clients (the game running in a user's windows phone) to get data from an SQL Server through a middle application also hosted on Windows Azure.
By reading further into the documentation (personally I think it's really messy and it's hard to find what you're looking for in there), I've learned that I could use Cloud Services for this middle application; however, I'm not sure if I should use a background worker which provides an HTTP API or a worker with a Service Bus Relay (I discovered that I can use Service Bus in WP8 in this link).
I've got a few questions that I couldn't find an answer to:
1) What would be the "standard" way to go in this case?
2) If both ways are acceptable, are there other advantages to using a Service Bus other than an easier way to connect and send messages to my middle application? What are the disadvantages?
3) Is a cloud service really what I'm looking for (and not just a VM with the middle application code running in it)?
It's difficult to answer this sort of question, as there are lots of considerations. I don't believe there is necessarily a "standard way".
The Service Bus relay service's purpose is to help traverse firewalls and NATs, which is not something that directly relates to your scenario, I suspect.
The Service Bus, though, also includes a messaging capability which provides queues, topics and subscriptions to use to exchange messages between clients or client/server.
You could use the phone client to write and read messages to/from queues. You would then have a worker role hosting your application logic and accessing the database as needed. A sketch of this pattern follows the trade-offs below.
Some of the advantages of using messaging include load levelling, which helps handle peaks in traffic (at the expense of latency), helping to separate concerns, and allowing you to accept requests from clients even when the backend is down, which helps with resiliency.
In theory they can also help you deliver messages to the client in the same fashion, by using a queue or subscription per client, but for a large number of clients this may become a management issue.
On the downside, you would have to work with what is a proprietary protocol and will need to understand the characteristics and limitations of the Service Bus. You will need to manage the queues and topics over time. There will also be some increased latency, although that is typically not an issue. Finally, you will have to implement asynchronous messaging on the client side, which has advantages but is also harder to implement.
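A rough sketch of the queue pattern using the classic azure Node SDK, shown in Node for brevity (the WP8 client would use the .NET Service Bus client instead). The queue name, payload, and saveScoreToSql helper are placeholders:

```js
var azure = require('azure');
var sb = azure.createServiceBusService(process.env.SB_CONNECTION_STRING);

// Client side: enqueue a score submission
sb.sendQueueMessage('scores', JSON.stringify({ userId: 42, score: 9001 }), function (err) {
  if (err) console.error('send failed', err);
});

// Worker role side: poll the queue and persist to SQL Database
setInterval(function () {
  sb.receiveQueueMessage('scores', function (err, message) {
    if (err) return; // the SDK reports an empty queue as an error
    saveScoreToSql(JSON.parse(message.body)); // hypothetical data-access helper
  });
}, 1000);
```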
I would imagine that many architectures follow the WEB API route by using a web role cloud service exposing the API. The web role can then perform any business logic and connect to the database in the background.
A third option, which you didn't mention, is to use Windows Azure Mobile Services and implement your business logic as a service API there.
We have our own application that stores contacts in an SQL database. What all is involved in getting up and running in the cloud so that each user of the application can have his own, private list of contacts, which will be synced with both his computer and his phone?
I am trying to get a feeling for what Azure might cost in this regard, but I am finding more abstract talk than I am concrete scenarios.
Let's say there are 1,000 users, and each user has 1,000 contacts that he keeps in his contacts book. No user can see the contacts set up by any other user. Syncing should occur any time the user changes his contact information.
Thanks.
While the Windows Azure Cloud Platform is not intended to compete directly with consumer-oriented services such as Dropbox, it is certainly intended as a platform for building applications that do that. So your particular use case is a good one for Windows Azure: creating a service for keeping contacts in sync, scalable across many users, scalable in the amount of data it holds, and so forth.
Making your solution multi-tenant friendly (per the comment from @BrentDaCodeMonkey) is key to cost-efficiency. Your data needs are 1K users x 1K contacts/user = 1M contacts. If each contact is approx. 1KB, then we are talking about approx. 1GB of storage.
Checking out the pricing calculator, the at-rest storage cost is $9.99/month for a Windows Azure SQL Database instance for 1GB (then $13.99 if you go up to 2GB, etc. - refer to calculator for add'l projections and current pricing).
Then you have data transmission (bandwidth) charges. But since the pricing calculator says "The first 5 GB of outbound data transfers per billing month are also free", you probably won't have any bandwidth costs at your current user count, assuming moderate smarts in the sync.
This does not include the costs of your application. What is your application, how does it run, etc? Assuming there is a client-side component, (typically) this component cannot be trusted to have the database connection. This would therefore require a server-side component running that could serve as a gatekeeper for the database. (You also, usually, don't expose the database to all IP addresses - another motivation for channeling data through a server-side component.) This component will also cost money to operate. The costs are also in the pricing calculator - but if you chose to use a Windows Azure Web Site that could be free. An excellent approach might be the nifty ASP.NET Web API stack that has recently been released. Using the Web API, you can implement a nice REST API that your client application can access securely. Windows Azure Web Sites can host Web API endpoints. Check out the "reserved instance" capability too.
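To make the gatekeeper idea concrete, here is a sketch of one endpoint, written in Node/Express for brevity rather than the ASP.NET Web API suggested above; requireAuth and getContactsForUser are hypothetical stand-ins for your auth middleware and data-access layer:

```js
var express = require('express');
var app = express();

app.get('/api/contacts', requireAuth, function (req, res) {
  // Multi-tenancy rule: the user id comes from the auth token, never from the
  // request itself, so no user can read another user's contacts.
  getContactsForUser(req.user.id, function (err, contacts) {
    if (err) return res.status(500).end();
    res.json(contacts);
  });
});

app.listen(process.env.PORT || 3000);
```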
I would start out with Windows Azure Web Sites, and then, as the service grew in complexity/sophistication, check out Windows Azure Cloud Services (as a more advanced approach to building server-side components).