ASP.NET WebAPI URL Versioning - Azure

We want to offer general public web services as well as customized APIs.
But how do we isolate, version, and bind those endpoints in a hyperscale system?
We want to have:
https://api.domain.tld/v1/..
https://api.domain.tld/v2/..
https://api.domain.tld/latest/..
https://api.domain.tld/bosch/v1/..
or
https://domain.tld/api/v1/..
https://domain.tld/api/v2/..
All endpoints should be isolated. Behind an endpoint like https://domain.tld/api/v2/.. there are at least 3 instances of an ASP.NET WebAPI. We do not want to separate versioning by namespaces inside the WebAPI project and use internal route configurations to resolve this.
We want this behavior on-premises as well as on Azure.
Is there any recommendation or best practice, with configuration samples, out there?
I could only find one thread here (How to version and configure WebApi with multiple aliases), which is very old and has no answer.

I always use the version in the API route, like you do:
https://domain.tld/api/v1/..
https://domain.tld/api/v2/..
which has always worked fine for me, but others use URL parameters, which I find to be less explicit:
Visual Studio Team Services - API
I would just not recommend going with this pattern:
https://api.domain.tld/latest/..
https://api.domain.tld/bosch/v1/..
It would make your API look a bit messy, but it could make sense if you use logical services in your API:
https://api.domain.tld/service1/v1
https://api.domain.tld/service1/v2
https://api.domain.tld/service2/v1
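Whichever convention you pick, the isolation you describe (independent instances behind /v1 and /v2, with no version namespaces or route tables inside the WebAPI projects) is normally handled by a routing tier in front of the services: IIS ARR or nginx on-premises, Application Gateway / API Management / Front Door on Azure. Just to illustrate the idea, here is a rough Node/TypeScript sketch of such a path-based routing tier using http-proxy-middleware; the backend hostnames and ports are made up, and in a real deployment you would express the same rules in your gateway of choice rather than hand-rolling a proxy:

// Rough sketch of a path-based routing tier (hypothetical hostnames/ports).
// Each version prefix is forwarded to its own, independently deployed pool
// of WebAPI instances, so no version-specific routing lives inside the APIs.
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// /api/v1/** -> load balancer in front of the v1 instances
app.use("/api/v1", createProxyMiddleware({
  target: "http://webapi-v1.internal:5001",   // assumed internal address
  changeOrigin: true,
  // pathRewrite: { "^/api/v1": "/api" },     // if the backend doesn't expect the version prefix
}));

// /api/v2/** -> load balancer in front of the v2 instances
app.use("/api/v2", createProxyMiddleware({
  target: "http://webapi-v2.internal:5002",   // assumed internal address
  changeOrigin: true,
}));

// "latest" can simply alias the newest version at the proxy level
app.use("/api/latest", createProxyMiddleware({
  target: "http://webapi-v2.internal:5002",
  changeOrigin: true,
}));

app.listen(8080);

The point is that each version is just another upstream to the gateway, so you can scale, deploy and retire /v1 and /v2 completely independently.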

Related

"Dependency hell" when using same npm module for both sdk and service itself

I'm currently developing a project in Node.js that uses a microservices architecture, in which each service has its own repository containing both the code for the service itself (a Node.js Express server) and an SDK that I publish for other services to use, with the methods this service exposes and TypeScript definitions.
So for instance I have a users-service that handles all of the user related actions, and a reports-service that handles all of the reports that users can CRUD.
users-service has a method called "deleteUser" that also goes through the reports-service SDK in order to delete all of that user's reports. On the other hand, reports-service uses the users-service SDK to "getUserById", for instance. So users-service has reports-service-sdk as one of its dependencies, and reports-service has users-service-sdk as one of its dependencies. Because the SDK lives in the same npm module as the service, I end up with users-service-sdk as a (transitive) dependency of users-service.
I thought of separating the SDK out with its own package.json file, but I wanted to know whether that's the right way to go or whether I'm doing something really wrong in my architecture :)
Thanks.
This sounds like a circular dependency, which, as you state in the title, is tough to deal with. Microservices are great, but this sort of architecture sounds like a lot of extra, unnecessary work without any added benefit.
You should look into running your services/packages/repositories as Cloud Functions (or Firebase Functions); AWS also has its own solution for microservices architectures. The reason is that each service can communicate with other services using internal authorized calls or authorized REST API calls, or you can make them totally public.
The great thing about Google Cloud Functions is that each function is automatically an Express endpoint that accepts GET, POST, DELETE and PUT. Or, if you use Firebase's callable functions, each function automatically receives relevant context from the frontend (such as the user's authentication details).
You can also configure IAM permissions to control exactly which users and services are allowed to execute your cloud functions, so you have full control over access.
To answer your question directly though: I would definitely avoid package A having package B as a dependency while package B has package A as a dependency. You absolutely can make that work, but there's no upside and a lot of downside.
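To make the "authorized REST calls instead of cross-importing SDKs" idea concrete, here is a small hedged sketch: instead of users-service depending on reports-service-sdk (and vice versa), each service only knows the other's HTTP endpoint. The URL, route and token handling below are invented for illustration, not taken from your codebase.

// users-service: delete a user, then ask reports-service (over HTTP) to
// remove that user's reports. No reports-service-sdk import, so the
// package-level cycle disappears. URL and route are hypothetical.
import axios from "axios";

const REPORTS_SERVICE_URL =
  process.env.REPORTS_SERVICE_URL ?? "http://reports-service:3001";

export async function deleteUser(userId: string): Promise<void> {
  // ... delete the user from this service's own database first ...

  // Then tell reports-service to clean up, authenticating with an
  // internal service token (however you issue those in your setup).
  await axios.delete(`${REPORTS_SERVICE_URL}/users/${userId}/reports`, {
    headers: { Authorization: `Bearer ${process.env.SERVICE_TOKEN}` },
  });
}

If you still want typed clients, publishing each SDK from its own package.json (for example as a workspace package in a monorepo) also breaks the cycle, because the SDK package then no longer drags in the service that consumes it.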
Here's an old thread which covers the topic.

Using Azure SDK for JS to create .NET 4.x App Service

I'm starting to wonder whether this is the right tool for the job; still, here goes.
I'm attempting to automate the creation of our Azure Test environment using Azure SDK for JS. The environment spans many services (as you can imagine), including Classic ASP.NET app services.
Node is my safe space, so that is why I started with the JS SDK.
I have started scripting the creation of an app service using WebSiteManagementClient.webApps.createOrUpdate. I'm confused, though: there is seemingly no way to configure any of the following:
Which app service plan the app service should be connected to. This feels fundamental.
The operating system, Windows or Linux.
The stack version, .NET 4.8, .NET Core, or whatever.
Is it possible to configure the above using the JS SDK, or am I going to have find another approach?
Update 23/03/21
Untested, but these are my findings so far:
App Service Plan - The plan is set using the serverFarmId property of the Site interface.
Operating system - Assuming Windows as the default, if you want a Linux app service, you change the kind property of Site from app to app,linux.
Stack & version - In the SiteConfig interface, you have linuxFxVersion and windowsFxVersion. Again, I think the assumption is 'latest .NET' (e.g. .NET 4.8). For .NET Core 3.1, the setting looks to be DOTNETCORE|3.1.
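Putting those (still untested) findings together, a rough sketch of the call might look like the following. It assumes the @azure/arm-appservice and @azure/identity packages; depending on the SDK version the client takes an @azure/identity credential or an msRestNodeAuth credential, and the method may be createOrUpdate or beginCreateOrUpdateAndWait. All resource names are placeholders.

// Untested sketch based on the findings above: plan via serverFarmId,
// OS via kind, stack via siteConfig. Subscription, resource group, plan
// and app names are placeholders.
import { WebSiteManagementClient } from "@azure/arm-appservice";
import { DefaultAzureCredential } from "@azure/identity";

async function createTestApp(): Promise<void> {
  const client = new WebSiteManagementClient(
    new DefaultAzureCredential(),
    "<subscription-id>"
  );

  await client.webApps.createOrUpdate("my-resource-group", "my-test-app", {
    location: "westeurope",
    // App Service plan the app is attached to
    serverFarmId:
      "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Web/serverfarms/my-plan",
    // "app" = Windows (default), "app,linux" = Linux
    kind: "app,linux",
    siteConfig: {
      // Stack + version; for Windows/.NET 4.x you would instead rely on the
      // default or set netFrameworkVersion
      linuxFxVersion: "DOTNETCORE|3.1",
    },
  });
}

createTestApp().catch(console.error);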
It can be achieved using the JS SDK; I checked the source code and it is possible. But I don't recommend using the JS SDK to do this.
When you call the SDK, there is a lot of internal logic that you need to code yourself, which will waste a lot of your time. So I recommend using the REST API instead.
The REST API operation names are similar to the naming in the SDK, and the main advantage is that you can test the API operations online, so you can selectively pick the operations that achieve what you want.
Official doc
Web Apps - Create Or Update
As for your concerns, you only need to write all of the configuration in JSON format and put it in the request body.
Tips:
First use the online interface to compose the JSON, create a web app according to your needs, and then integrate the call into your code.
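For that REST route, the same configuration goes into the body of a PUT against the Web Apps - Create Or Update operation. Here is a hedged sketch; the subscription, resource group, app name and api-version are placeholders you would take from the official doc, token acquisition is via @azure/identity, and it assumes a global fetch (Node 18+) or an equivalent HTTP client.

// Sketch of calling the ARM "Web Apps - Create Or Update" REST operation
// directly. Resource names and api-version are placeholders; check the
// official doc for the current api-version.
import { DefaultAzureCredential } from "@azure/identity";

async function createViaRest(): Promise<void> {
  const token = await new DefaultAzureCredential().getToken(
    "https://management.azure.com/.default"
  );

  const url =
    "https://management.azure.com/subscriptions/<subscription-id>" +
    "/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-test-app" +
    "?api-version=2021-02-01";

  const response = await fetch(url, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${token.token}`,
      "Content-Type": "application/json",
    },
    // Everything worked out in the online interface goes here as JSON
    body: JSON.stringify({
      location: "westeurope",
      kind: "app,linux",
      properties: {
        serverFarmId:
          "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Web/serverfarms/my-plan",
        siteConfig: { linuxFxVersion: "DOTNETCORE|3.1" },
      },
    }),
  });

  console.log(response.status, await response.json());
}

createViaRest().catch(console.error);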

Is it possible to publish a .net core web api to azure functions?

I've recently switched jobs from an AWS shop to an Azure shop, both using dotnet. AWS publishes Amazon.Lambda.AspNetCoreServer, which is a magic NuGet package that allows you to write a plain ol' ASP.NET Core Web API and deploy it into a Lambda with only a few lines of config. I really loved this pattern because it lets developers just write a normal Web API without the host runtime leaking into their code.
Does anything like this exist in Azure? Even something that is community supported? Or is there some any way to achieve something like this in Azure Functions?
Unfortunately, there is no simple way to do that, since the Azure Functions programming model is a bit different:
[FunctionName(nameof(GetAll))]
public IActionResult GetAll([HttpTrigger("get", Route = "entity")] HttpRequest request)
    => new OkObjectResult(new object[0]); // handler body: query and return the entities
The build then generates the function.json metadata that Azure Functions needs for each endpoint.
If you wish to host a pure .NET Core app without any changes, I would look into the containers option.
PS0: Theoretically it would be possible to do it with a little bit of reflection. For instance, you create a project with all your ASP.NET Core APIs, which you can use with normal ASP.NET Core hosting. Then you write a tool which grabs your DLL, uses reflection to find all the actions in your controllers, and generates the corresponding code for Azure Functions.
PS1: Have a look at https://github.com/tntwist/NL.Serverless.AspNetCore

How should I architect an API and front-end app in Google App Engine?

I'm developing my first Node.js app to deploy to GAE.
It'll be organized as an API service and a front-end web app developed with Next.js
I'm looking at this architecture and, although I have the app separated into two repositories, I could have one merged repo to create two different microservices:
https://medium.com/this-dot-labs/node-js-microservices-on-google-app-engine-b1193497fb4b
To me it seems like overkill to create a new repo just to merge them and deploy (doesn't it break one of the basic ideas of microservices, namely isolated deploys?).
I also have to rule out static hosting, because we need SEO in some parts and should use Next.js (or similar):
https://cloud.google.com/storage/docs/hosting-static-website
Another idea I've been working on is to create different GAE projects for the front end and the API, so they can be deployed independently. To me this seems like the best option, but I would like to know your opinion as GAE experts.
Which one should I use?
Thanks!
GAE doesn't care how the code deployed to its services is mapped to one or more VCS repositories (or to no repository at all). That's entirely up to you.
With a single repository you may encounter difficulties deploying from CI/CD pipelines - for example unnecessary deployments to one service when only the other one is changed.
Many examples out there focus on applications rather than services, but those are nothing more than the default services of those applications. Personally, I like keeping the code for different services in separate directories; see the image captured in Can a default service/module in a Google App Engine app be a sibling of a non-default one in terms of folder structure? (it's no longer present on the updated documentation page). This also allows for easy mapping to multiple, separate VCS repositories.
As for multiple projects vs multiple services, this might be of help: Advantages of implementing CI/CD environments at GAE project/app level vs service/module level?
The static website link you mentioned isn't part of GAE, it's part of GCS - a different GCP product. It's fine to use by itself for a static website, but it might be difficult/impossible to:
communicate between a service running on it and one running on GAE - if you need that
make the 2 services appear as one (for example serve under the same custom domain name)

REST API Versioning with Swagger 2.0

I need my Node REST APIs to be versioned. I am using Swagger 2.0 for the validation middleware and the documentation. Currently I have only a single Swagger YML file that is used for all purposes.
I am using URL prefixes (version number: /v1/..., /v2/..., etc.) to support versioning in my Node REST API, and I need to support multiple versions at any point in time.
Should I create a separate Swagger YML file for each API version? If yes, how do I load/manage multiple Swagger YML files in the swagger validation middleware?
Does the Swagger 2.0 specification allow defining versioned paths within the same file?
Swagger does not specify a versioning scheme, simply because there is no single solution, and forcing one approach on users of the spec would not make sense. Here are common techniques that I've seen:
1) Tie your authentication to a version. I think this is the coolest way to handle versioning, but it is also the most expensive to support and maintain. Based on, for example, the API key being used to access your service, you keep track of the version that client expects to access and route the request to the right server. In this case, you can simply have multiple services running, with different Swagger definitions.
2) Use a path segment to indicate the version. That means you have a /v2 or /v3 in your path, and based on that, some routing logic points you to the right server. Again, a separate Swagger definition per version.
3) Based on some header, let the user choose which server to talk to. This is pretty unintuitive, but it can work. You should always have a default version (typically the latest).
That said, all of the above solutions mean multiple swagger files. You can use the $ref syntax to link and reuse portions of the spec.
I believe with swagger-tools, you can have multiple instances listening for requests. You just need a routing tier in front of them to handle the different versioning that you choose.
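To illustrate the "multiple Swagger files, one routing tier" point in Node, here is a rough sketch that mounts one swagger-tools middleware chain per version under its own path prefix in a single Express process. The file names and controller directories are invented; it assumes each version's YML has basePath set to / (the mount prefix supplies the version), and running each version as a separate process behind a real proxy follows the same pattern.

// Rough sketch: one Swagger 2.0 spec (and swagger-tools middleware chain)
// per API version, each mounted under its own /vN prefix. File names and
// controller paths are hypothetical.
import express from "express";
import { readFileSync } from "fs";
import * as yaml from "js-yaml";
// swagger-tools ships no TypeScript types, so require() it
const swaggerTools = require("swagger-tools");

function mountVersion(root: express.Express, version: string, specFile: string) {
  const spec = yaml.load(readFileSync(specFile, "utf8"));
  const versionApp = express();

  // initializeMiddleware is async; the sub-app starts serving once it's wired up
  swaggerTools.initializeMiddleware(spec, (middleware: any) => {
    versionApp.use(middleware.swaggerMetadata());
    versionApp.use(middleware.swaggerValidator());
    versionApp.use(middleware.swaggerRouter({ controllers: `./controllers/${version}` }));
    versionApp.use(middleware.swaggerUi());
  });

  root.use(`/${version}`, versionApp); // e.g. /v1/users, /v2/users
}

const app = express();
mountVersion(app, "v1", "./api/swagger-v1.yml");
mountVersion(app, "v2", "./api/swagger-v2.yml");
app.listen(3000);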
