Using API Apps with Swagger in dev/test/production environments (Azure)

I'm migrating a combined Azure Website (with both Controllers and ApiControllers) to a split Web App and API App. Let's call it MyApp.
I've created MyAppDevApi, MyAppTestApi, and MyAppProductionApi API Apps (in different App Services) to host the three environments, expecting to promote code from one environment to another.
So far, I've only deployed to MyAppDevApi, since I'm just getting started.
When I do Add/Azure API App Client to my UI-only project to start referring to the API app, and I point it to MyAppDevApi, it uses AutoRest to create classes in my code. These classes now all have the name MyAppDevApi, rather than just MyAppApi, which is the actual namespace of the code I'm deploying to every environment. Obviously, I can't check that in... how can I promote that through Test and Prod?
There's nothing in the Swagger JSON that refers to this name, so it must be on the AutoRest side (I think).
Has anyone come up with a strategy or work-around to deal with this multiple-environment promotion issue with API Apps?
Edit
So far the best thing I've come up with is to download the Swagger from the API App to a local file (which, again, has only the namespace from the original code, not from the name of the API App), and then import it into the Web App. This will generate classes in the Web App that have the naming I expect.
The problem is that I then have to edit the generated MyAppApi.cs file's _baseUri property to pull from an AppSetting, have the different web.config.dev, .test, .prod files, and then do the web.config transform. I mean, that'll work, but then every time I change the API App's interface, I'll regenerate... and then I'll have to remember to change the _baseUri again... and someone, sometime, is going to forget to do this and then deploy to production. It's really, really fragile.
So... anyone have a better idea?

I'm not quite sure why you're creating three different apps, one for each environment? One application is fine; use web.config transforms for each environment. This is the general way I do all of my apps, and it works fine.
Information about how to apply web.config transforms can be found here, which may help in your situation.
Hope that helps.

Well, here's how I've solved this:
Download Swagger file from API App to local hard drive.
Import local Swagger file into Web App to generate classes that have the naming from my code, not from the environment.
Use AppSettings to specify the environment-specific settings to point to the API App. This can be either a web.config transform, or you can just specify them in the Azure Portal on the Web App in Application Settings.
Instantiate the generated API App Client using the constructor that takes in a URL to point to the API App (these are at class level, hence static):
private readonly static Uri apiAppUrl = new Uri(CloudConfigurationManager.GetSetting("ApiAppUrl"));
private readonly static MyAppApi myAppApi = new MyAppApi(apiAppUrl);
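For reference, here's a rough sketch of what the web.config transform route could look like (this is only an illustration; the ApiAppUrl key matches the GetSetting call above, and the azurewebsites.net URLs are placeholders for your actual API App addresses). In the base web.config:

<appSettings>
  <add key="ApiAppUrl" value="https://myappdevapi.azurewebsites.net" />
</appSettings>

And in Web.Production.config, a transform that swaps the value at deploy time:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- Replace the dev URL with the production API App URL -->
    <add key="ApiAppUrl" value="https://myappproductionapi.azurewebsites.net"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>

Alternatively, an App Setting configured in the Azure Portal overrides a matching appSettings key from web.config at runtime, so in that case no transform is needed at all.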
I'd still love a solution to this that doesn't require downloading the Swagger file, but, all in all, if that's the only workaround necessary, it's not all that bad.

Related

Swagger codegen to Google Cloud: best workflow

I'm developing a small website with my dev partner. He's doing the front end and I'm doing the backend (API). I'm imagining a workflow like the following:
We both collaborate on basic API structure and requirements using Swagger.io.
I generate the server stub and publish it to a public Google Cloud service.
Initially this just serves example data from the OpenAPI YAML file.
This gives my partner something to work with as he gets started.
I update the server code to use faker.js data and give him the ability to trigger various server responses like 500, 404, 200, etc. This allows him to further develop the frontend to handle various issues (see the sketch at the end of this question).
I fill in the stub with actual, working code that we can both test before going live.
Is this a realistic workflow? If so, any hints on how to approach it? I was hoping I could take the Swagger server stub code and easily publish it to some Google Cloud service like Cloud Functions, App Engine, Containers, Cloud Endpoints, etc., but nothing seems straightforward.
We are a two-man show and this will be an iterative process.
If this is too open-ended a question, then I'd like to ask the following:
What is the simplest way to host Swagger Server Stub code on a public server using faker.js data?
Thanks.
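For what it's worth, here is a minimal sketch of the kind of faker-backed stub described above (the /v1/users route, the status query parameter, and the classic faker package are illustrative assumptions, not a prescribed setup):

// Illustrative stub only: serves fake data and lets the frontend force error codes.
// Assumes the npm packages `express` and `faker` are installed; faker's API names
// vary between versions, so adjust the calls to match the version you use.
const express = require('express');
const faker = require('faker');

const app = express();

app.get('/v1/users', (req, res) => {
  // Let the frontend trigger specific responses, e.g. GET /v1/users?status=500
  const forced = parseInt(req.query.status, 10);
  if (forced >= 400) {
    return res.status(forced).json({ error: 'forced error response for testing' });
  }

  // Otherwise return a page of fake users
  const users = Array.from({ length: 10 }, () => ({
    id: faker.random.uuid(),
    name: faker.name.findName(),
    email: faker.internet.email(),
  }));
  res.json(users);
});

// Most cloud hosts inject the listening port via the PORT environment variable
app.listen(process.env.PORT || 8080);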

Deploy a React app + Node server with Heroku

I want to deploy a project (React app + Node server), but I'm new to deployment.
I wanted to know: do I need to have the React app in one GitHub repo and the Node server in another, or can I deploy it all in one?
Currently, I have one GitHub repository with a "frontend" folder and a "backend" folder.
I want to have my React app on -> nameofmyapp.herokuapp.com
and the Node server on -> api-nameofmyapp.herokuapp.com.
If someone has ideas... Thanks
While in theory that's not a problem, I would suggest keeping things on one domain, for reasons such as additional latency and connection trouble, as well as path issues like the ones you are facing. It sounds like you would ideally just like to prefix the name of your app with 'backend' or similar, and in that case I would consider setting up a subdomain on a domain you control, i.e. mydomain.com and backend.mydomain.com. While developing on Heroku this model can prove tricky, because each 'site' or app is separate and not actually intended to work together, although they certainly could.
Consider setting up separate routes and an endpoint for 'backend' on your app, similar to your frontend login; then, when you are finished developing your app and happy with it, you could register your domain name, point it at your app, and point a subdomain, e.g. backend.mysite.com or login.mysite.com, at your endpoint on Heroku, e.g. mysite.com/backend. Unless you have a specific reason for separating them into their own repos, with separate source control and URLs, splitting them might make debugging much harder. Apologies if I missed your point. Most web hosting companies should allow you to register a subdomain or vanity domain free of charge because you own the primary domain. Just some considerations.
Anything is possible, you just need to understand how things work... my advice would be to start simple and have a single repo that contains front + back; you can then deploy that as a single Heroku app.
One app can only have a single Heroku URL, so you cannot have what you mention (nameofmyapp + api-nameofmyapp) hosted by a single Heroku instance; this would need to be hosted by two different instances, which means code from two repos.
Usually for a Node app, you would create an /api route that is hosted by the same app, so you have your frontend served at nameofmyapp.herokuapp.com and your API at nameofmyapp.herokuapp.com/api with some sub-routes, for example nameofmyapp.herokuapp.com/api/items.
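To illustrate that single-app setup, here is a rough sketch (the frontend/build folder and the /api/items route are assumptions; adjust them to your repo layout):

// One Express app serving both the built React frontend and the API.
const path = require('path');
const express = require('express');

const app = express();

// API routes live under /api
app.get('/api/items', (req, res) => {
  res.json([{ id: 1, name: 'example item' }]);
});

// Serve the static files produced by the React build
app.use(express.static(path.join(__dirname, 'frontend', 'build')));

// For any other route, return index.html so client-side routing keeps working
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'frontend', 'build', 'index.html'));
});

// Heroku injects the port through the PORT environment variable
app.listen(process.env.PORT || 3000);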
You should be able to easily find tons of Node/React/Heroku tutorials on the web, just play a bit with it to experiment and build some understanding of how those are working together.

How to access a different module in a multi-target application

I'm new to Cloud Foundry, so I'm not sure if my thoughts and plans are right. Maybe someone can explain or discuss it with me.
What I want to do:
Implement an MTA (Multitarget Application) with an HTML5 module as frontend and a Node.js module as backend. Furthermore, there should be a MongoDB instance, which will be accessed from the Node.js module. Later it should also become multitenant.
What I already did:
I implemented a simple Node.js app and connected it to the DB. Persisting and reading data via REST already works fine. I implemented a simple SAPUI5 app, which consumes data from the DB with AJAX. For now, the Node start script is in the HTML5 module, so it works somehow. But now I want to separate the modules.
So I created an MTA project with the two modules in Web IDE and imported the two apps.
What I expect I need to do:
For now, I have an approuter, which is in my Node.js module, but I cannot access the webapp folder in the HTML5 module from there: file not found error: /home/vcap/app//. Is there a possibility to access the webapp folder in another module via the path "/home/vcap/app/"? Or can I look up the app directory anywhere?
I have read that an approuter module (Node.js) can be needed, but I don't know exactly what it does. I think it serves the index.html file when opening the URL of the whole app?
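Not an authoritative answer, but a sketch of the usual pattern may help: each MTA module runs as its own Cloud Foundry app with its own filesystem, so the approuter cannot read another module's files under /home/vcap/app. Instead, it either serves UI resources packaged inside the approuter module itself (via localDir) or forwards requests to the backend through a destination. An xs-app.json along these lines is a common starting point (the destination name your-backend-destination and the /api prefix are placeholders, not names from this project):

{
  "welcomeFile": "/index.html",
  "routes": [
    {
      "source": "^/api/(.*)$",
      "target": "$1",
      "destination": "your-backend-destination"
    },
    {
      "source": "^(.*)$",
      "localDir": "webapp"
    }
  ]
}

The destination itself would then be declared in the mta.yaml so the approuter knows the backend module's URL.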

Google App Engine API + static architecture

I'm trying to (con)figure the best way to structure a JS client + NodeJS server app to host it on Google Cloud AppEngine (plus possibly other GCP resources). So I'm looking for advice / best practices here.
We have an API server running on a non-default AppEngine service and would like to be able to run several versions, e.g. development/staging/production, on the same project (if possible).
We would like to host / serve our static client app on this system because we want to use the same domain to point at it.
In our normal server based setup, the client app is proxied/served on domain.com/ and requests to the API are on domain.com/v1/
I've been working through different options - hosting a separate static site running on AppEngine and using dispatch.yaml to try to route requests - this option doesn't seem to work with domain prefixes, only wildcards, e.g.
dispatch:
  - url: "my-client-service-project.appspot.com/"
    service: my-client-service
  - url: "my-client-service-project.appspot.com/v1/*"
    service: my-backend-service
Doesn't work, but:
- url: "*/v1/*"
  service: my-backend-service
Does, which we didn't want because we'd like to run dev, staging & production if possible.
The other option I've been looking at is having the static folder hosted as part of my app, but I can't seem to get this working either; here is the snippet from my app.yaml:
handlers:
  - url: /.*
    static_files: client/dist/index.html
    upload: frontend/dist/index.html
  - url: /v1/*
    script: dist/index.js
My guess is that script may not work the same as for Python apps, but I could be wrong - the docs aren't very clear.
Ideally, I'd like to host the client front-end static files on storage and point to the AppEngine API server (without specifically pointing to a domain from the client, e.g. /v1/auth/login rather than my-backend-service-project.appspot.com/v1/).
References:
How can I use bucket storage to serve static files on google flex/app engine environment?
Node.js + static content served by Google App Engine
https://cloud.google.com/appengine/docs/flexible/nodejs/serving-static-files
https://cloud.google.com/appengine/docs/standard/python/how-requests-are-routed#routing_via_url
https://cloud.google.com/appengine/docs/standard/python/config/appref
https://cloud.google.com/appengine/docs/standard/python/config/dispatchref
To begin: you're mixing up standard and flexible env docs - not a good idea as they don't work the same way. See How to tell if a Google App Engine documentation page applies to the standard or the flexible environment.
Since your app is Node.js you have to use the flexible env, for which script and static_files aren't applicable inside app.yaml, which is why you can't get them to work.
The first reference in your list shows the options you have for serving the static files. But I kinda question your desire to use the shared GCS option - it will serve the same content regardless of the dev/staging/production environment, so:
you can't have different client-side environments
how do you see selecting a particular server-side environment, since the client-side references can only point in one direction (i.e. at one environment, if I understand your intention correctly)?
If your desire to use a single domain means that you'd still be OK with using different subdomains (of that domain) and if you'd be willing to use a custom domain this might be of interest: How to use GAE's dispatch.yaml with multiple development environments?
UPDATE:
Node.js is currently available in the standard environment as well, so you can use those features; see:
Now, you can deploy your Node.js app to App Engine standard environment
Google App Engine Node.js Standard Environment Documentation
Take this answer as a complement to the one from @Dan Corneliscu, which I think is pretty useful and summarizes what you are doing wrong and what can be achieved in the type of scenario you present. In any case, I would like to provide some more information which may be useful.
As for the reason why the dispatch rules approach you suggested does not work: you should update the paths in your application accordingly. They should now listen on /v1/your_endpoint instead of /your_endpoint as they probably did before; it is not enough to change your dispatch file. Also make sure that the Dispatch routes field is populated in your App Engine > Services tab in the Console.
Also, the alternative approach you suggested will indeed not work using static_files, but you can follow this guide explaining how to serve static files from a GAE Flexible application.
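As a rough sketch of that approach (the client/dist folder and the /v1/ping endpoint below are placeholders mirroring the layout in the question): in the flexible environment your Node server itself serves the static client, for example with express.static:

// The Node app serves the built client directly; no static_files handler involved.
const path = require('path');
const express = require('express');

const app = express();

// API endpoints stay under /v1, matching the routing discussed above
app.get('/v1/ping', (req, res) => res.json({ ok: true }));

// Everything else comes from the built client
app.use(express.static(path.join(__dirname, 'client', 'dist')));
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'client', 'dist', 'index.html'));
});

app.listen(process.env.PORT || 8080);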

Need to create an API doc for an existing application written with Node.js/Express

I have a few private APIs written in plain old Express. Time to let it out and provide some API documentation.
What I don't want (at least yet) is to re-write my Express app to integrate API documentation into the code. Mainly, since I am not sure what framework or spec to use to document my API, I don't really want to be locked into one particular thing.
I would like to serve out the docs as a sub-resource under my API (i.e. I do not want to run a different server or subdomain), maybe '/api/docs'. A plus would also be a UI that I could embed within my app that could parse the docs and at the very least provide a nice presentation of them in HTML (API interaction is a plus).
Things like swagger-node are cool, but would require me to re-write all my Express code to integrate Swagger. At that point I have a big investment and am tightly coupled to Swagger.
Is there a way to serve out Swagger or iodocs or maybe something else to document my API in a way that is minimally invasive to existing routes?
EDIT:
I could serve out the Swagger spec from a hand-written doc. The problem I see is that you have to define basePath in the Swagger doc, which does not really allow me to easily deploy under different domains.
There's a wide array of Node.js tools to integrate Swagger with your application, and I assume they offer different ways of doing so. You can find a list of such integrations here - https://github.com/webron/swagger-spec/#nodejs - but I can tell you that there are additional tools out there that are not listed. You can try searching GitHub for swagger and node/express.
As for the manual spec and the basePath - Swagger 2.0 actually solves that for you. You can use the online editor - http://editor.swagger.io - to write your specs in a more human-friendly YAML form, which you can then export to JSON.
Unlike Swagger 1.2 and previous versions, the basePath is now split into three properties - schemes (http, https), host (domain, port) and basePath (the root context of the application). None of these properties are mandatory, and they all default to whatever is serving the swagger.json file (the spec itself): schemes defaults to the scheme used to serve the swagger.json, host defaults to the host used for serving the swagger.json, and basePath will be / unless explicitly specified. I believe this should solve your concerns regarding the basePath.
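For example, a minimally invasive way to serve a hand-written spec plus a UI from an existing Express app could look like the sketch below (the swagger-ui-express package and the swagger.json file name are assumptions on my part, not part of the original question):

// Mount docs under /api/docs without touching existing routes.
// Assumes `npm install swagger-ui-express` and a hand-written ./swagger.json.
const express = require('express');
const swaggerUi = require('swagger-ui-express');
const swaggerDocument = require('./swagger.json');

const app = express();

// ... existing API routes stay exactly as they are ...

// Serve the raw spec; host/schemes/basePath can be omitted, as described above
app.get('/api/docs/swagger.json', (req, res) => res.json(swaggerDocument));

// Serve the interactive UI
app.use('/api/docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument));

app.listen(process.env.PORT || 3000);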
