I have some concerns about migrating our current microservices system to serverless.
Right now, the services communicate with each other over HTTP APIs.
Serverless functions such as Lambda can talk to each other through direct function invocations, so one approach is to replace all of the HTTP client code in every service with Lambda invocation calls.
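For illustration, a direct invocation could look something like this minimal sketch with the AWS SDK for JavaScript (v3); the function name and payload are placeholders:

```javascript
const { LambdaClient, InvokeCommand } = require("@aws-sdk/client-lambda");

const lambda = new LambdaClient({ region: "us-east-1" });

async function callNeighbourService(payload) {
  // Replaces what used to be an HTTP request to the neighbouring service.
  const response = await lambda.send(new InvokeCommand({
    FunctionName: "neighbour-service",    // placeholder function name
    Payload: JSON.stringify(payload)      // what used to be the HTTP request body
  }));
  return JSON.parse(Buffer.from(response.Payload).toString());
}
```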
Another way is to keep using HTTP requests to call the neighbouring service on Lambda through API Gateway. This method of calling seems bad to me because the request goes out to the Internet and comes back in through API Gateway before the neighbouring service receives it. That round trip is too long and does not make sense to me.
I would be glad if one Lambda app could call another Lambda app with an HTTP request over a local network; how to do that is still part of my research.
I would like to hear about your experience migrating microservices that use HTTP communication between services to serverless platforms such as Lambda or Functions.
Did you change all your code to use direct Lambda function invocations?
Do you still use HTTP over the Internet, through API Gateway, to call a neighbouring service?
Have you figured out Lambda calls over a local/private network?
Thank You
Am I correct that you're talking about the orchestration of your microservices/functions?
If so, have you looked at AWS Step Functions or Durable Functions on Azure?
AWS Step Functions
AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly. Using Step Functions, you can design and run workflows that stitch together services such as AWS Lambda and Amazon ECS into feature-rich applications. Workflows are made up of a series of steps, with the output of one step acting as input into the next. Application development is simpler and more intuitive using Step Functions, because it translates your workflow into a state machine diagram that is easy to understand, easy to explain to others, and easy to change. You can monitor each step of execution as it happens, which means you can identify and fix problems quickly. Step Functions automatically triggers and tracks each step, and retries when there are errors, so your application executes in order and as expected.
Source: https://aws.amazon.com/step-functions/
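To make that concrete, here is a minimal sketch of a two-step Lambda workflow created through the AWS SDK for JavaScript (v3); every name and ARN below is a placeholder:

```javascript
const { SFNClient, CreateStateMachineCommand } = require("@aws-sdk/client-sfn");

// Two Lambda tasks chained in series: the output of FetchOrder becomes the
// input of ChargeCustomer, and Step Functions handles retries between steps.
const definition = {
  StartAt: "FetchOrder",
  States: {
    FetchOrder: {
      Type: "Task",
      Resource: "arn:aws:lambda:us-east-1:123456789012:function:fetch-order", // placeholder
      Retry: [{ ErrorEquals: ["States.TaskFailed"], MaxAttempts: 2 }],
      Next: "ChargeCustomer"
    },
    ChargeCustomer: {
      Type: "Task",
      Resource: "arn:aws:lambda:us-east-1:123456789012:function:charge-customer", // placeholder
      End: true
    }
  }
};

async function createWorkflow() {
  const sfn = new SFNClient({ region: "us-east-1" });
  await sfn.send(new CreateStateMachineCommand({
    name: "order-workflow",
    roleArn: "arn:aws:iam::123456789012:role/StepFunctionsRole", // placeholder role
    definition: JSON.stringify(definition)
  }));
}
```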
Azure Durable Functions
The primary use case for Durable Functions is simplifying complex, stateful coordination problems in serverless applications. The following sections describe some typical application patterns that can benefit from Durable Functions: Function Chaining, Fan-out/Fan-in, Async HTTP APIs, Monitoring.
Source: https://learn.microsoft.com/en-us/azure/azure-functions/durable-functions-overview
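As a sketch of the Function Chaining pattern in Node.js (the activity names "F1", "F2", "F3" are placeholders for your own functions):

```javascript
const df = require("durable-functions");

// Each yield calls an activity function and waits for its result; the output
// of one step feeds the next, and the orchestrator's state survives restarts.
module.exports = df.orchestrator(function* (context) {
  const x = yield context.df.callActivity("F1", context.df.getInput());
  const y = yield context.df.callActivity("F2", x);
  return yield context.df.callActivity("F3", y);
});
```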
You should consider communicating using queues. When one function finishes, it puts its result into an Azure Storage queue, where it is picked up by the next function. That way there is no direct communication between functions beyond the queue message that triggers the next one.
In other words, it may look like this:
function1 ==> queue1 <== function2 ==> queue2 <== function3 ==> somewhere else, e.g. storage
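A minimal sketch of function2 in Node.js, assuming function.json declares a queue trigger on queue1 and a queue output binding named outputQueueItem pointing at queue2:

```javascript
// Triggered by a message on queue1; writes the result to queue2 through the
// output binding, so function3 never talks to function2 directly.
module.exports = async function (context, queueItem) {
  context.log("processing message from queue1:", queueItem);
  const result = { processed: true, payload: queueItem }; // business logic goes here
  context.bindings.outputQueueItem = result;              // lands on queue2
};
```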
Our existing system uses App Services with API controllers.
This is not a good setup because our scaling support is poor; it's basically all or nothing
I am looking at changing over to use Azure Functions
So effectively each method in a controller would become a new function
Let's say that we have a taxi booking system
So we have the following
Taxis
GetTaxis
GetTaxiDrivers
Drivers
GetDrivers
GetDriversAvailableNow
In the app service approach we would simply have a TaxiController and a DriverController with the methods as routes
How can I achieve the same thing with Azure Functions?
Ideally, I would have 2 function apps - Taxis and Drivers - with the functions inside each
The problem with that approach is that 2 function apps mean 2 sets of config settings, and if that is expanded throughout the system it's far too big a change to make right now
Some of our routes are already quite long, so I can't really add the "controller" name to my function name because I would exceed the 32-character limit
Has anyone had similar issues migrating from App Services to Azure Functions?
Paul
> The problem with that approach is that 2 function apps mean 2 sets of config settings, and if that is expanded throughout the system it's far too big a change to make right now
This is why application settings should be part of the release process. You should compile once and deploy as many times as you want, to different environments, using the same binaries from the build. If you're not there yet, I strongly recommend you start by automating your CI/CD pipeline.
Now answering your question: the proper way (IMHO) is to decouple taxis and drivers. When a taxi is requested, your controller should add a message to a queue; an Azure Function listening on that queue is then triggered automatically to dequeue and process whatever needs to be processed (see the sketch after the list below).
Advantages:
Your controller's response time gets faster because it hands the processing off to another process
The more messages in the queue, the more instances of the function are spun up to consume them, so it scales only when needed
HTTP requests (from one controller to another) are not reliable unless you properly implement a circuit breaker and a retry policy. With the proposed architecture, if something goes wrong, the message either remains in the queue or is returned to the queue when the Azure Function fails to complete it
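A minimal sketch of the controller side, assuming the @azure/storage-queue package and a placeholder queue named taxi-bookings:

```javascript
const { QueueClient } = require("@azure/storage-queue");

const queueClient = new QueueClient(
  process.env.AZURE_STORAGE_CONNECTION_STRING, // connection string from app settings
  "taxi-bookings"                              // placeholder queue name
);

async function requestTaxi(booking) {
  // Queue-triggered Azure Functions expect Base64-encoded message bodies by default.
  const body = Buffer.from(JSON.stringify(booking)).toString("base64");
  await queueClient.sendMessage(body);
  return { status: "accepted" }; // respond immediately; the function processes the booking later
}
```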
On AWS, is it possible to have one HTTP request execute a Lambda, which in turn triggers a cascade of Lambdas running in serial, where the final Lambda returns the result to the user?
I know one way to achieve this is for the initial Lambda to "stay running" and orchestrate the other Lambdas, but I'd be paying for that orchestration Lambda to effectively do nothing most of the time, i.e. paying for the time it's waiting on the others. If it were non-lambda code, that would be like blocking (and paying for) an entire thread while the other threads do their work.
Unless AWS stops the billing clock while async Lambdas are "sleeping"/waiting on network IO?
Unfortunately, as you've found, only a single Lambda function can be invoked by the HTTP request, so that function becomes an orchestrator.
This is not ideal, but it has to be the case if you want to use multiple Lambda functions while serving an HTTP request. You can either use that Lambda to call a number of other Lambdas, or create a Step Function which can orchestrate the individual steps. You would still need the Lambda to start the Step Function execution and then poll its status before returning the result.
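A minimal sketch of that start-and-poll approach with the AWS SDK for JavaScript (v3); the state machine ARN is a placeholder, and the poll loop has to fit inside your HTTP timeout:

```javascript
const {
  SFNClient,
  StartExecutionCommand,
  DescribeExecutionCommand
} = require("@aws-sdk/client-sfn");

const sfn = new SFNClient({ region: "us-east-1" });

exports.handler = async (event) => {
  const { executionArn } = await sfn.send(new StartExecutionCommand({
    stateMachineArn: "arn:aws:states:us-east-1:123456789012:stateMachine:cascade", // placeholder
    input: JSON.stringify(event)
  }));

  // Poll until the workflow leaves the RUNNING state.
  for (;;) {
    const res = await sfn.send(new DescribeExecutionCommand({ executionArn }));
    if (res.status !== "RUNNING") {
      return { statusCode: res.status === "SUCCEEDED" ? 200 : 500, body: res.output };
    }
    await new Promise((resolve) => setTimeout(resolve, 500)); // brief pause between polls
  }
};
```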
We are using a microservices approach in our backend.
We have a Node.js service which provides a REST endpoint that grabs some data from MongoDB and applies some business logic to it.
We need to add a scheduled job every 15 minutes to sync the MongoDB data with a 3rd-party data source.
The question here is: would adding such a scheduled job to this microservice be considered an anti-pattern?
On the other hand, having a separate service that just does the sync job feels like over-engineering for a simple thing: another repo, another build/deployment cycle, more hardware, more complicated maintenance, and so on.
Would love to hear more thoughts around it
You can use an AWS CloudWatch Events rule to generate an event every 15 minutes. Make a Lambda function a target of that rule so it executes every 15 minutes to sync your data. Be aware of the VPC/NAT issues when calling your 3rd-party resources from Lambda if they are external to your VPC/account.
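A minimal sketch with the AWS SDK for JavaScript (v3); the rule name and ARNs are placeholders, and the Lambda also needs a resource-based permission allowing events.amazonaws.com to invoke it:

```javascript
const {
  CloudWatchEventsClient,
  PutRuleCommand,
  PutTargetsCommand
} = require("@aws-sdk/client-cloudwatch-events");

const events = new CloudWatchEventsClient({ region: "us-east-1" });

async function scheduleSync() {
  // Fire an event every 15 minutes.
  await events.send(new PutRuleCommand({
    Name: "sync-every-15-min",
    ScheduleExpression: "rate(15 minutes)"
  }));
  // Point the rule at the sync Lambda.
  await events.send(new PutTargetsCommand({
    Rule: "sync-every-15-min",
    Targets: [{
      Id: "sync-lambda",
      Arn: "arn:aws:lambda:us-east-1:123456789012:function:sync-mongo-data" // placeholder
    }]
  }));
}
```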
Ideally, if it is like an ETL job, you can offload it to a Lambda function (if you are using AWS) or another provider's serverless function to do the same.
Also, look into MongoDB Stitch, which can do something similar.
I have two HTTP-triggered functions in one Azure Function App project:
PUT
DELETE
Under a certain condition, I would like to call the DELETE function from the PUT function.
Is it possible to run the DELETE function directly, since both reside in the same Function App project?
I wouldn't recommend trying to call the actual function directly, but you can certainly refactor the DELETE functionality into a normal method and then call that from both the DELETE and PUT functions.
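A minimal sketch of that refactoring in Node.js (file layout and names are placeholders):

```javascript
// shared/deleteItem.js -- the extracted DELETE logic, shared by both functions
async function deleteItem(id) {
  // ... delete the record from your data store ...
}
module.exports = { deleteItem };

// DeleteFunction/index.js -- the original DELETE endpoint
const { deleteItem } = require("../shared/deleteItem");
module.exports = async function (context, req) {
  await deleteItem(req.params.id);
  context.res = { status: 204 };
};

// PutFunction/index.js -- reuses the same helper when the condition applies
const { deleteItem: removeItem } = require("../shared/deleteItem");
module.exports = async function (context, req) {
  if (req.body && req.body.remove) {
    await removeItem(req.params.id); // no cross-function HTTP call needed
    context.res = { status: 204 };
    return;
  }
  // ... normal PUT handling ...
  context.res = { status: 200 };
};
```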
There are a few ways to call a function from another function:
HTTP request - it's simple: execute a normal HTTP request to your second function. It's not recommended, because it extends the first function's execution time and introduces additional problems, such as the possibility of a timeout or the service being unavailable.
Storage queues - communicate through queues (recommended), e.g. the first function (in your situation the "PUT" function) can insert a message into a queue, and the second function (the "DELETE" function) can listen on that queue and process the message.
Azure Durable Functions - this extension allows you to create rich, easy-to-understand workflows that are cheap and reliable. Another advantage is that they retain their own internal state, which can be used for communication between functions.
Read more about cross function communication here.
I want to use Node.js in AWS Lambda in production.
The question is how to make this reliable.
For me, reliability means:
Retrying some parts of code - this exists out of the box in AWS Lambda
Exception notification - I tried Airbrake, but it does not work in AWS Lambda; process.on('uncaughtException') does not work
The possibility of knowing that something is down even when exception notification does not work - in a usual app, I have a healthcheck endpoint
So how can I implement points 2 and 3?
One idea is to use an SQS queue as a Dead Letter Queue, which can be set up for that Lambda (http://docs.aws.amazon.com/lambda/latest/dg/dlq.html). You can then monitor that queue to analyze the inputs that made the function fail and take some action.
For logging, you can use winston (https://github.com/winstonjs/winston). It works fine with AWS Lambdas.
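A minimal sketch of winston inside a Lambda handler; console output is captured by CloudWatch Logs automatically, and rethrowing keeps Lambda's retry/DLQ behaviour intact:

```javascript
const winston = require("winston");

const logger = winston.createLogger({
  level: "info",
  format: winston.format.json(),
  transports: [new winston.transports.Console()] // stdout -> CloudWatch Logs
});

exports.handler = async (event, context) => {
  try {
    logger.info("invocation started", { requestId: context.awsRequestId });
    // ... your business logic here ...
    return { statusCode: 200 };
  } catch (err) {
    logger.error("unhandled error", { message: err.message, stack: err.stack });
    throw err; // rethrow so the invocation is marked failed (retries / DLQ still apply)
  }
};
```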
In your Lambda workflow I guess you have some kind of error-handler function where, if anything happens, it ends up. You can use that error handler to send yourself emails with the event and context inputs and other error details.
Also, you can monitor Lambdas via CloudWatch and create an alarm that notifies you when something goes wrong.
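For example, a minimal sketch of an alarm on Lambda's built-in Errors metric (the function name and SNS topic ARN are placeholders):

```javascript
const { CloudWatchClient, PutMetricAlarmCommand } = require("@aws-sdk/client-cloudwatch");

const cw = new CloudWatchClient({ region: "us-east-1" });

async function createErrorAlarm() {
  // Alarm whenever the function reports one or more errors in a 5-minute window.
  await cw.send(new PutMetricAlarmCommand({
    AlarmName: "my-function-errors",
    Namespace: "AWS/Lambda",
    MetricName: "Errors",
    Dimensions: [{ Name: "FunctionName", Value: "my-function" }], // placeholder
    Statistic: "Sum",
    Period: 300,
    EvaluationPeriods: 1,
    Threshold: 1,
    ComparisonOperator: "GreaterThanOrEqualToThreshold",
    AlarmActions: ["arn:aws:sns:us-east-1:123456789012:alerts"]   // placeholder SNS topic
  }));
}
```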