Calling a Web API from a Web Job, or using a class? (Azure)

I am working on creating a Web Job on Azure, and its purpose is to handle some workload and perform background tasks for my website.
My website has several Web API methods which are used by the website itself, but I also want the Web Job to perform the same tasks as those Web API methods once they have finished.
My question is: should the Web Job call this Web API (if that is possible), or should I move the Web API code into a class and have both the Web API and the Web Job call that class?
I just wondered what standard practice is here.
Thanks

I would suggest you put the common logic in a DLL and have both of them share that library, instead of trying to get the WebJob to call the Web API.
I think that is the simplest way to get what you want (plus it will help you keep things separated so they can be shared, instead of putting too much logic in your Web API).
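A minimal sketch of that layout, assuming a hypothetical OrderProcessor class in a shared class library; both the Web API controller and the WebJob delegate to it:

using System.Web.Http;
using Microsoft.Azure.WebJobs;

// Shared class library (e.g. MyApp.Core.dll), referenced by both projects.
// OrderProcessor and its method are placeholder names for your own logic.
public class OrderProcessor
{
    public void ProcessPendingOrders()
    {
        // ... the logic currently living in your Web API method ...
    }
}

// Web API project: the controller stays thin and delegates.
public class OrdersController : ApiController
{
    [HttpPost]
    public IHttpActionResult ProcessPending()
    {
        new OrderProcessor().ProcessPendingOrders();
        return Ok();
    }
}

// WebJob project: the same shared class is invoked from a triggered function.
public class Functions
{
    public static void ProcessOrders([QueueTrigger("orders")] string message)
    {
        new OrderProcessor().ProcessPendingOrders();
    }
}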

I think it's player's choice here. Both will run on the same instance(s) in Azure if you choose to deploy them that way. You can either reuse by dogfooding your API, or reuse by sharing a class via a DLL. We started off mixed but eventually went with dogfooding the API as the number of WebJobs we use grew and they became more complex. Here are a couple of reasons:
No coupling to the libraries/code used by the API
Easier to move the Web Job into its own solution(s), dependent only on the API and any other libraries we pick for it
Almost free API testing (dogfooding via our own client to the API)
We already have logging and other concerns wired up in the API (more reuse)
Both approaches work, though; in reality it comes down to managing the dependencies and the size of the app/solution you are building.
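A sketch of the dogfooding approach, assuming a hypothetical api/orders/process endpoint and base URL; the WebJob is just another client of the API:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class Functions
{
    // Base address is a placeholder; point it at your own Web API.
    private static readonly HttpClient Client = new HttpClient
    {
        BaseAddress = new Uri("https://myapp.azurewebsites.net/")
    };

    public static async Task ProcessOrdersAsync()
    {
        // The WebJob reuses the API exactly as any external client would,
        // which also exercises the API surface (the "almost free testing" above).
        var response = await Client.PostAsync("api/orders/process", null);
        response.EnsureSuccessStatusCode();
    }
}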

Azure Functions: how much code can be done in one?

I am a complete newbie to Azure and Azure Functions, but my team plans to move to Azure soon. Now I'm researching how I could use Azure Functions to do what I would normally do in a .NET console application.
My question is: can Azure Functions handle quite a bit of code processing?
Our team uses several console apps that effectively pick up a pipe-delimited file, apply some business logic, update a database with the data, and log everything along the way. From what I've read so far, Azure Functions are typically used for little pieces of code. How little do they mean? Is it best practice to have a bunch of Azure Functions replace a console app (e.g. one function that reads a file and creates a list of objects, another that loops through those items and applies business logic, and another that writes the data to a database), or can I use one Azure Function to do all of that?
The direct answer is yes: you can run bigger pieces of code as an Azure Function; this is not a problem as long as you stay within the platform's limits. You can even have dependency injection. For chained scenarios, you can use Durable Functions. However, Microsoft does not recommend long-running functions because of unexpected timeouts. See the best practices for Azure Functions.
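For the chained scenario the question describes, a minimal Durable Functions sketch; the orchestrator and activity names here are hypothetical placeholders:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class Pipeline
{
    [FunctionName("RunPipeline")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Each step is a separate activity function; the framework
        // checkpoints between steps, so a long pipeline survives restarts.
        var lines  = await context.CallActivityAsync<List<string>>("ReadFile", null);
        var orders = await context.CallActivityAsync<List<string>>("ApplyBusinessLogic", lines);
        await context.CallActivityAsync("SaveToDatabase", orders);
    }
}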
Because of that, I would consider alternatives:
If all you need is to run a console app in Azure, you can use WebJobs. There are examples of how to deploy a console app directly to Azure via Visual Studio.
For more complex logic you can use a .NET Core Worker Service, which behaves like a Windows Service and can be deployed to Azure as an App Service.
If you need long-running jobs, but with scheduled runs only, I have had a really great experience with Hangfire, which can be hosted in Azure as well (see the sketch after this list).
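A minimal Hangfire sketch of a recurring (scheduled) job; the storage connection string, job id, and importer class are placeholders:

using Hangfire;

public static class JobSetup
{
    public static void Configure()
    {
        // Placeholder connection string for the Hangfire job storage.
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("<your-connection-string>");

        // Run the import every day at 02:00; full cron expressions work too.
        RecurringJob.AddOrUpdate(
            "nightly-import",
            () => FileImporter.Run(),
            Cron.Daily(2));
    }
}

public static class FileImporter
{
    public static void Run()
    {
        // ... read the pipe-delimited file, apply business logic, update the DB ...
    }
}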
This is really hard to answer because we don't know what kind of console app you have over there. I usually try to apply the same SOLID principles I use for any class to my functions too. And whenever you need to coordinate actions, or run things in parallel, you can always use the Durable Functions framework.
The only concern is execution time. Your functions can get pretty expensive if you're running on the Consumption plan and don't pay attention to it. I recommend reading the following great article:
https://dev.to/azure/is-serverless-really-as-cheap-as-everyone-claims-4i9n
You can do all of that in one function.
If you need on-the-fly data processing, you can safely use Azure Functions, even if that involves reading files or talking to a database.
What you do need to be careful about and configure, though, is the timeout. Their scalability is an interesting topic as well.
If you need to host a long-lived application, you need a machine (or part of one) in Azure to do that.
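To make the one-function option concrete, here is a sketch of a single blob-triggered function doing the whole read/process/write flow; the container name, parsing, and database step are placeholders:

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ImportFunction
{
    [FunctionName("ImportFile")]
    public static void Run(
        [BlobTrigger("incoming/{name}")] Stream file,
        string name,
        ILogger log)
    {
        log.LogInformation("Processing {name}", name);
        using (var reader = new StreamReader(file))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                var fields = line.Split('|'); // pipe-delimited input
                // ... apply business logic, then write to the database ...
            }
        }
        // Keep total time within the plan's timeout (5-10 minutes on Consumption).
    }
}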

Is Logic Apps performance slower than a direct .NET REST call?

I use ASP.NET 4.7 MVC 5 on an Azure App Service.
I currently get JSON response data by calling REST APIs directly from my .NET code and then deserialising this JSON using
var order = JsonConvert.DeserializeObject<Order>(json.ToString());
This works fine and is pretty fast. However, I am now looking into Azure Logic Apps to see whether they could be used to call 3rd-party APIs and transform each API's native schema into my standard schema.
Would the use of Logic Apps slow down the retrieval of data from the API endpoints compared to my current native .NET approach? I have a feeling that it will, as it may be much more asynchronous/fire-and-forget. I am hoping that I could just call into the Logic App and get the same response as if I had done it natively, but with greater flexibility and scalability.
Thanks.
EDIT: My question is about the use of a Logic App versus calling natively, so assume one job each. I confused the matter by talking about transformation as well. Apologies.
I think the easy answer is yes. The API call in your native code is fired immediately and returns the payload to your application directly, whereas a Logic App step carries all the plumbing required by activity orchestration, plus infrastructure costs. You could probably measure a difference, but it may not affect the quality of your application, depending on what it is trying to do.
The question I would ask is “does it matter”? What are you giving up by using logic apps over c# code? Is it worth the trade off to enjoy the benefits of serverless computing (scale out, no infrastructure to maintain, focus on the what instead of the how, etc)?
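If you want a number rather than a feeling, here is a quick sketch for timing both paths; the two URLs are placeholders for your direct endpoint and your Logic App's HTTP trigger:

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

public static class LatencyCheck
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<TimeSpan> TimeCallAsync(string url)
    {
        var sw = Stopwatch.StartNew();
        var response = await Client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        sw.Stop();
        return sw.Elapsed;
    }

    public static async Task CompareAsync()
    {
        // Direct call to the third-party API vs. the Logic App in front of it.
        var direct   = await TimeCallAsync("https://thirdparty.example.com/api/orders/1");
        var logicApp = await TimeCallAsync("https://prod-00.westus.logic.azure.com/workflows/...");
        Console.WriteLine($"Direct: {direct.TotalMilliseconds} ms, Logic App: {logicApp.TotalMilliseconds} ms");
    }
}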

Approaches to building a Node.js back end: use one or several instances?

I am planning to build a Node.js application as the back end of a web application.
This server application will provide a REST API and a WebSocket service, and will also scrape some sites with headless navigation (like zombie.js) every n hours to feed a database.
I would like to ask whether it is a good approach to build all of this in one Node.js instance, or whether it is better to use a separate Node.js application for each task.
If you have a small application (which does not need to scale in the future), you can keep all the parts (REST, sockets, scraping) in the same project.
Note: if you process the HTML content synchronously after scraping, it will take some time, and the event loop will be blocked while it runs, so the REST API will not handle any requests. In that scenario, you can keep the REST and socket servers together in one project and the scraping in another project.
If you are planning to scale in the near future, I suggest keeping separate instances, given the benefit of SOA for scaling.
In my opinion, the better way is to use different Node.js applications. The first will serve your API. Another will work as a socket server on a different port.
As for the scraper, it can be a PHP (or Node.js) script that runs as a cron job. To see how to set up a cron job, check this question:
https://askubuntu.com/questions/2368/how-do-i-set-up-a-cron-job
or this tutorial for Ubuntu Server: https://help.ubuntu.com/community/CronHowto
I think that would be the best approach.

Using Node.js alongside an old Java web application (Spring MVC)

I have an existing web application. It uses Spring MVC on Tomcat, with MySQL at the back end and transactions (multi-table updates). I am also using Spring Security for role-based authorization. It is a very typical website without any real-time notifications, chat, etc.
Now my client wants to add real-time notifications like Facebook's, a chat module, etc. Typically an action will be taken on the front end, and all (or specific) logged-in users need to be notified. After receiving a notification, I need to update some <div> content. The frequency would be high. (Currently the user needs to refresh the browser.)
I completed POCs using Node.js/Express, and it looks like it is easy to accomplish these two things with Node.js.
I have 3 options:
Move the front end to Node.js and maybe AngularJS. Manage form validations/security through Node.js/AngularJS, but keep all database calls handled by my old website (in a RESTful manner), since I can reuse my existing code. Most of the method calls currently return a Tiles view, but we can easily convert them to return JSON objects.
Keep the existing website as it is and plug Node.js into it just for real-time notifications, chat, etc.
Scrap the existing website and redo everything, including security and transactions, using Node.js. There are many posts saying that Node.js, being relatively new, is not preferable for enterprise applications, and this might also be too much work.
Approach 2 would be my preferred approach, as we have expertise in Spring and Node would be completely new for us, but I don't know whether it is a practical and doable approach, or what kind of issues I might face. I am also not sure how to integrate Spring MVC running on Tomcat with Node.js.
Could anybody please help me decide which of the three is the best way to go? Is there any other approach which might be easier?
You already have an existing Spring MVC codebase, and you can reuse it. You can go ahead with AngularJS as your front-end technology. Your AngularJS front end can talk to your existing Spring MVC application via its REST API, and to Node.js for the real-time features you plan to develop.
Within Node.js you would use Socket.IO, which provides the features you are looking for.
One major problem you might face is session management when talking to two different back-end stacks.

Need to poll a bunch of web services from a Linux server - What is the most efficient way?

So basically, I'm looking to build a web app that aggregates a bunch of data from various web services and presents the data visually. To achieve this, I will need to poll these web services regularly and store the resulting data from each call in a database. This data would then be queried by the web app, etc.
I'm looking to build the web app using PHP (CodeIgniter), but I'm not entirely sure how to approach the polling component. I'm coming from a .NET background and am still getting used to the Linux/web world. I would normally solve this problem by simply writing a .NET Windows Service... I want all of this to run on a Linux box, however, so if anyone could recommend any technologies for this sort of thing, that would be great.
Thanks in advance!
