Writing all your functions in one cloud function - node.js

What if I put multiple functions inside a single cloud function, so that its instance stays warm as long as possible and I only have to deal with a cold start once?
Why is this a bad idea?
export const shop = functions.https.onCall(async (data, context) => {
  switch (data.type) {
    case "get_fruits":
      return await getFruits();
    case "place_order":
      return await placeOrder();
    case "add_to_cart":
      return await addToCart();
    default:
      return;
  }
});

It will work but, IMO, it's not a good thing to do. There are many principles and patterns that exist today that your solution does not follow.
Microservice
One of them is the split into microservices. There is nothing wrong with building a monolith, but when I look at your example (get_fruits, place_order, add_to_cart), I see different roles and responsibilities. I love the separation of concerns: one service does one thing.
Routing
But maybe your service is only a routing service that calls functions deployed independently (so you do enforce the microservice principle). If so, your service can become a bottleneck when there are many entry points and many queries.
In addition, there are services dedicated to routing: load balancers. They use the URL path of the request to reach the correct microservice to serve it.
Developer usage
Yes, a URL, not a field in the body of your message, to route the traffic. Today, developers are familiar with REST APIs. To get the fruits, they perform a GET request to the /fruits URL and they know they will get the fruits. If they want to add to the cart, they perform a POST request to the /cart URL and it works!
So: use URLs, standard REST definitions, load balancers, and microservices.
You can imagine other benefits:
Each microservice can scale independently (you can have more get_fruits requests than place_order requests, so the services scale differently)
Security is easier to control (no authentication needed to get the catalog (fruits), but you have to be authenticated to place an order)
Evolution velocity can be decoupled between the services
...

Related

Is it good practice to internally call an API within the server?

I have some code as follows:
// [DELETE] /api/v1/authors/:id
async deleteAuthor(req, res) {
  const author = await Author.findByIdAndRemove(req.params.id);
  // delete blogs of the author
  axios.delete(
    `http://localhost:${process.env.PORT}/api/v1/blogs/author/${author._id}`,
    {
      headers: {
        Authorization: req.headers.authorization
      }
    }
  );
  res.status(200).send();
}
By this, I want to delete an author and all their blogs. I know the naming of the URI is not good, but is this overall a good way to code, or are there other ways to do the same thing? I'm using Node.js and Mongoose.
I think it is not good practice; you should avoid making circular HTTP calls (it's not optimal, it can unnecessarily duplicate logic, and it makes your code more difficult to read).
What do I suggest? Following DDD & Hexagonal Architecture:
Have separated services, e.g. RemoveAuthor & RemoveAuthorBlogs (following the SRP principle of SOLID: one service does only one thing)
Your HTTP endpoints (e.g. DELETE /author/:id & DELETE /blogs/author/:id) will invoke those services.
If you need to delete the author and his blogs in the same request:
a. Create a higher-level service that calls RemoveAuthor and RemoveAuthorBlogs (e.g. RemoveAuthorReferences)
b. (My vote is for this) => The service RemoveAuthor removes the author from the DB as the first step, and as the second step dispatches a domain event (e.g. AuthorDeleted) that will be listened to by an event handler (e.g. DeleteBlogsOnAuthorDeleted) which will remove the blogs of the author
In my opinion, your services shouldn't make internal HTTP requests when you can call your own services directly; this way, if the logic changes, you'll only need to modify the service.

Feathers JS nested Routing or creating alternate services

The project I'm working on uses the Feathers JS framework server-side. Many of the services have hooks (or middleware) that make other calls and attach data before sending back to the client. I have a new feature that needs to query a database, but only for a few specific things, so I don't want to use the already built-out "find" method for this query: that "find" method has many unneeded hooks and calls to other databases that fetch data I do not need for this new feature.
My two solutions so far:
I could use the standard "find" query and write if statements in all the hooks that check for a specific string parameter passed in from the client side, so those hooks are deactivated on this specific call. But that seems tedious, especially if I find this need in several other services that have already been built out.
I could initialize a second service below my main service, so if my main service is:
app.use('/comments', new JHService(options));
right underneath I write:
app.use('/comments/allParticipants', new JHService(options));
And then attach a whole new set of hooks for that service. Basically it's a whole new service whose only relation to the original is that the first part of its name is 'comments'. Since I'm new to Feathers, I'm not sure whether that is a performant or optimal solution.
Is there a better solution than those options, or is option 1 or option 2 the most correct way to solve my current issue?
You can always wrap the population hooks into a conditional hook:
const hooks = require('feathers-hooks-common');

app.service('myservice').after({
  create: hooks.iff(hook => hook.params.populate !== false, populateEntries)
});
Now population will only run if params.populate is not false.
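To see what the conditional is doing, here is a hypothetical, simplified stand-in for the feathers-hooks-common iff() helper (the real one also handles async hooks and else branches); the populateEntries hook here is purely illustrative:

```javascript
// Simplified iff(): wrap a hook so it only runs
// when the predicate holds for the hook context.
function iff(predicate, hookFn) {
  return (context) => (predicate(context) ? hookFn(context) : context);
}

// An illustrative "populate" hook that just marks the context
const populateEntries = (context) => ({ ...context, populated: true });

// Runs populateEntries unless the caller sets params.populate = false
const maybePopulate = iff(
  (context) => context.params.populate !== false,
  populateEntries
);
```

This keeps one service with one set of hooks, and lets individual calls opt out, instead of duplicating the service under a second path.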

Katana+OWIN Multithreaded Performance?

Where can I find info on how many requests per second a Katana-on-OWIN implementation (Azure-hosted) can support?
There are performance benchmarks all over the place for IIS but I can't seem to find comparable data anywhere.
I am concerned that if I do something like this in a vacuum
public async Task Invoke(IDictionary<string, object> environment)
{
    var response = environment["owin.ResponseBody"] as Stream;
    using (var writer = new StreamWriter(response))
    {
        if (_options.IncludeTimestamp)
        {
            await writer.WriteAsync(DateTime.Now.ToLongTimeString());
        }
        await writer.WriteAsync("Hello, " + _options.Name + "!");
    }
}
(taken from http://odetocode.com/blogs/scott/archive/2013/11/11/writing-owin-middleware.aspx) and compare it to a simple .aspx.cs page that writes "Hello world" I will not get an apples-to-apples performance metric.
The way IIS handles threading and pooling is well-documented. But I am not sure about how Katana-on-OWIN (self hosted or under Azure) handles simultaneous requests and works "under load".
Thanks.
OWIN is merely an abstraction for running web applications on different web servers. Katana is one implementation. The most important performance numbers for requests/second are those for web servers, not OWIN or Katana.
OWIN performance comparisons would only make sense if you wanted to know how much overhead a framework added to your web app and could be tested using the Microsoft.Owin.Testing TestServer in isolation of network latency. Here, you could compare differences in Katana, Dyfrig, NancyFx, Web API, and others.

Handling large number of same requests in Azure/IIS WebRole

I have an Azure Cloud Service based HTTP API which currently serves its data out of an Azure SQL database. We also have an in-role cache on the WebRole side.
Generally this model works fine for us, but sometimes we get a large number of requests for the same resource within a short time span, and if that resource is not in the cache, all the requests go directly to our DB, which is a problem for us, as many times the DB is not able to take that much load.
Looking at the nature of the problem, it seems like it should be a pretty common problem that most people building APIs would face. I was thinking that somehow I could send only the first request to the DB and hold all the remaining ones until the first one completes, to control the load going to the DB, but I couldn't find a good way of doing it. Is there any standard/recommended way of doing this in Azure/IIS?
The way we're handling this kind of scenario is by putting the calls to the DB inside a lock statement. That way only one caller will hit the DB. Here's pseudo code that you can try:
private static readonly object _cacheLock = new object();

public Items GetItems()
{
    var cachedItem = ReadFromCache();
    if (cachedItem != null)
    {
        return cachedItem;
    }
    lock (_cacheLock)
    {
        // Re-check: another thread may have filled the cache
        // while we were waiting on the lock.
        cachedItem = ReadFromCache();
        if (cachedItem != null)
        {
            return cachedItem;
        }
        var itemsFromDB = ReadFromDB();
        PutItemsInCache(itemsFromDB);
        return itemsFromDB;
    }
}
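The same "only the first caller hits the DB" idea translates to Node (used elsewhere in this document): instead of a lock, concurrent callers can be coalesced onto a single in-flight promise. This is a minimal sketch with illustrative names; a production version would also need expiry and error handling:

```javascript
// Coalesce concurrent cache misses for the same key onto
// one DB load; later callers reuse the in-flight promise.
const cache = new Map();    // key -> cached value
const inFlight = new Map(); // key -> Promise of the ongoing load

function getOrLoad(key, loadFromDb) {
  if (cache.has(key)) return Promise.resolve(cache.get(key));
  if (inFlight.has(key)) return inFlight.get(key); // join the ongoing load

  const pending = loadFromDb(key).then((value) => {
    cache.set(key, value);
    inFlight.delete(key);
    return value;
  });
  inFlight.set(key, pending);
  return pending;
}
```

Because the second check happens before a new load starts, a burst of identical requests results in exactly one DB call, mirroring the double-checked lock above.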

What is lost from the stack when a service handles async messages in ServiceStack?

I'm using the messaging feature of ServiceStack for back end transactions I expect to involve database locks where consistency is very important.
I've registered handlers as explained in the documentation:
mqHost.RegisterHandler<Hello>(m => {
    return this.ServiceController.ExecuteMessage(m);
});
I've noticed the Filters don't get called. Presumably they're really HTTP filters, similar to MVC, so it makes sense that they're ignored.
How does Authorization work with message handlers? Is it ignored too?
And as I want to keep my async services internal, and always async, is there any benefit in making them inherit from ServiceBase at all?
I'm also thinking of creating another envelope layer between IMessage and the Body for some identity data that can be passed from my public services, out of the AuthSession, to the async service.

Resources