Azure Function App project/repo structure?

I'm fairly new to Function Apps. We have about a dozen small programs currently running as Windows scheduled tasks on an Azure VM, and we are looking to migrate these to PaaS. Most of these are small console-type background processes that might make an API call, perform a calculation, and store the result in a database, or read some data from a database and then send out an email. We have a mixture of pwsh, Python, and .NET.
I was wondering how many repos I should have. I assume I would need at least three (one for each runtime stack), but I don't want to create a separate repo per app and end up with 50 git repos eventually. Would it be best to create root-level folders named after each app within a repo to keep the structure separated (something like the layout sketched below)?
Lastly, should each of these apps be hosted in its own function app (Azure resource), or can I have several of the apps hosted in a single function app? I'm not sure whether splitting everything up into separate function apps would make deployment easier or not. Ease of long-term support/maintenance is the most important aspect to me.
Short version: what is considered best practice for logically grouping and mapping your apps to git repos and Azure Function App resources?
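For illustration only, here is a hedged sketch of the kind of monorepo layout described above, assuming one repo per runtime stack and the folder-per-function programming model; all names are hypothetical:

    functions-python/                  (hypothetical repo, one per runtime stack)
        billing-app/                   (maps to one Function App resource)
            host.json
            BillingTimer/              (one function: __init__.py + function.json)
            BillingQueueWorker/
        reporting-app/                 (a second Function App in the same repo)
            host.json
            ReportMailer/
        azure-pipelines.yml            (or one workflow/pipeline per app)

A common rule of thumb is to group functions that share a deployment cadence and scaling profile into one function app, and to keep apps that need to deploy or scale independently in separate function apps.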

Related

Simplest setup for a staging server and a production server

What's the simplest way to manage a staging server vs production?
What's the point of having a staging server if you could just push changes to a different branch in production?
What's the best way to merge the staging server with production? Cron job?
Our current setup is a staging server which we don't use; we are just pushing straight to production, but we are trying to improve the process.
What's the simplest way to manage a staging server vs production?
The simplest and cheapest way is to get rid of your staging server. Staging servers don't inherently make deploys safer, but developers generally want at least a dev environment (functionally not necessarily distinct from the idea of a staging server) to host their code in a prod-like environment before they push it to prod.
What's the point of having a staging server if you could just push changes to a different branch in production?
If you have 2 branches running in production simultaneously, that's functionally equivalent to a staging server. Most shops prefer to have a staging environment, not just a staging server, so that their data tier, 3rd-party integrations, etc. are completely separate between staging and prod.
Simply deploying another copy of your application in prod is deceptively dangerous, because if you mess up the data tier or 3rd-party integrations you can easily affect prod.
trying to improve the process
Feature flags. If you can enable new features or even fixes for specific users, you can roll them out to your QA team (or the devs, whoever is going to test) and then, when you're happy with them, roll them out to the general user base. This isn't inherently safer than anything else, but it has the advantage that it front-loads the work of planning for multiple concurrent code paths and makes that planning explicit.
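As a minimal sketch of the idea, not tied to any particular flag service; the flag name, users, and code paths below are hypothetical:

    // Hypothetical per-user flag lookup; a real setup would read from a flag
    // service or config store instead of a hard-coded map.
    type UserId = string;

    const rollout: Record<string, Set<UserId>> = {
      "new-checkout-flow": new Set(["qa-tester-1", "dev-alice"]), // testers first
    };

    function isEnabled(flag: string, user: UserId): boolean {
      return rollout[flag]?.has(user) ?? false;
    }

    const legacyCheckout = (user: UserId) => `legacy checkout for ${user}`;
    const newCheckout = (user: UserId) => `new checkout for ${user}`;

    // Both code paths stay deployed; the flag decides who sees which one.
    function checkout(user: UserId): string {
      return isEnabled("new-checkout-flow", user) ? newCheckout(user) : legacyCheckout(user);
    }

Once the QA group is happy, widening the rollout is a config change rather than a new deploy.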
Unfortunately there's no magic bullet for having testing environments (dev, staging, whatever you want to call them) increase reliability.
What's the best way to merge the staging server with production? Cron job?
For code, usually the preferred method is to "promote" the artifact you deployed to staging over to prod without rebuilding, guaranteeing the same thing is shipped.
For the runtime environment, using containerization makes most of that part of the code artifact, and that's the simplest way. If you're running on container-centric hosting like ECS Fargate or Google's Docker-oriented services, there's nothing else on the app side to ship. This is what I recommend; it's straightforward and easy to reason about. Adding virtual servers into the mix just adds an OS level to manage, and there's little benefit to that. If you can make your app serverless, so it's not sitting waiting for connections but is instead invoked when connections come in, the same thing applies: no OS to manage (AWS Lambda, for example, has serverless Docker image support).
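As a hedged sketch of what promoting a container artifact can look like (the registry host and tags are hypothetical):

    # Pull the exact image that passed staging, re-tag it, and push the prod tag.
    docker pull registry.example.com/myapp:build-123
    docker tag registry.example.com/myapp:build-123 registry.example.com/myapp:prod
    docker push registry.example.com/myapp:prod

Nothing is rebuilt between staging and prod; prod runs the same bytes that were tested.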
Data is generally considered the tricky bit of having test environments, by those who have experience with them. If your production data is not at all sensitive you can copy it over, but that may or may not actually work depending on what's in the data and how distributed your data ends up being. Generally production data is sensitive enough that you don't want to expose it to dev environments, which makes it tricky to ensure the dev data is appropriate for testing features. One common methodology for overcoming that obstacle is automating end-to-end tests via something like Selenium for web browsers, and automated API tests for non-browser-centric endpoints. This allows you to write the test along with the app to prove it's working.
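For instance, a minimal Selenium sketch in Node/TypeScript; the URL paths, selectors, and credentials are hypothetical placeholders:

    // npm install selenium-webdriver (a matching chromedriver must be on PATH)
    import { Builder, By, until } from "selenium-webdriver";

    async function loginSmokeTest(baseUrl: string): Promise<void> {
      const driver = await new Builder().forBrowser("chrome").build();
      try {
        await driver.get(`${baseUrl}/login`);
        await driver.findElement(By.name("username")).sendKeys("test-user");
        await driver.findElement(By.name("password")).sendKeys("test-password");
        await driver.findElement(By.css("button[type=submit]")).click();
        await driver.wait(until.urlContains("/dashboard"), 10000); // did we land in the app?
      } finally {
        await driver.quit();
      }
    }

Tests like this run against seeded, non-sensitive test data in the dev environment rather than a copy of prod.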

Is there any way to perform rollback in firebase cloud functions?

We are running a larger backend application in NodeJS/TS on Firebase, with about 180 cloud functions and Firestore as the database. Firebase has been good for our needs so far, but we are getting to a level of usage where even small amounts of downtime can cause a lot of damage. Due to the number of cloud functions, a full deploy can take up to 30 minutes, so we usually only do partial deploys of changed functions, which still take about 10 minutes. I am trying to find a way to do a quick rollback to the previous version of a given function in case a bug is discovered after a production deploy.
Firebase does not seem to provide rollback functionality, so the only option is to re-deploy the code with the previous version. One issue is the deploy time (up to 10 min for a single function); the other is git versioning when there are partial deploys. Normally there would be a branch reflecting exactly what is in prod, but with partial deploys this is no longer the case. The only alternative for maintaining good git versioning, with a one-to-one branch matching prod, is to do a full deploy every time, but this takes a prohibitive amount of time (30+ minutes, not including retries). Firebase deploys also usually fail or exceed the deployment quota, which makes things like CI pipelines very difficult (the pipeline would have to automatically retry failed functions, and the time is still an issue, since 30+ minutes to deploy is not acceptable in the case of downtime).
Has anyone found a good solution for rollback (versioning) and a git structure that works well with Firebase at scale?
Cloud Functions for Firebase is based on Cloud Functions and their behavior is the same. Today, it's not possible to route traffic to a previous version (and so to perform a rollback). (I can also tell you that the Node.js 16 runtime is now GA, instead of Beta as still mentioned in the Cloud Functions for Firebase documentation.)
The next Cloud Functions runtime is cooking (and available in preview). That runtime is based on Cloud Run under the hood, which allows traffic splitting/routing and therefore supports rollback.
So, for now, you have no way to perform a simple rollback with Firebase Functions. A great change could be to use the Cloud Functions V2 runtime directly, or even Cloud Run, but that's a big change in your code base.
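For reference, a hedged sketch of what a function looks like when written against the newer firebase-functions v2 API (the function name and handler are hypothetical):

    // index.ts — assumes a recent firebase-functions release that ships the v2 API.
    import { onRequest } from "firebase-functions/v2/https";

    // v2 functions run on Cloud Run infrastructure, which is what makes
    // revision-based traffic splitting (and therefore rollback) possible.
    export const myApiV2 = onRequest((req, res) => {
      res.send("ok");
    });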
Another solution could be to use a load balancer in front of all your functions and to:
Deploy the new function under a new name (don't update the current deployment; create a new service each time you deploy a new version), as sketched after this answer
Create a new serverless backend with the new functions
Update the URL map to take the new backend into account
After a while, delete the old function versions
That also requires a lot of work to put in action. And the propagation delay when you update your URL map is typically between 3 and 5 minutes, so it's not such a great advantage compared to your current solution.
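A hedged illustration of the first step above, deploying each version under a new export name so the old one stays live (names and handlers are hypothetical):

    import * as functions from "firebase-functions";

    // The currently-routed version stays deployed and untouched.
    export const myApiV1 = functions.https.onRequest((req, res) => {
      res.send("v1 behaviour");
    });

    // The new version goes out under a new name; the load balancer's URL map
    // decides which backend actually receives production traffic.
    export const myApiV2 = functions.https.onRequest((req, res) => {
      res.send("v2 behaviour");
    });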
It looks like you're not the only one; previous questions have answered parts of this. I recommend setting up some version control. I would solve the failing-deploy issues first, which should reduce the deploy and redeploy times, especially if multiple functions are involved. You could use a different deploy branch or set up a staging environment as well. I would invest the time in getting the git setup working/turnkey.
Per user Ariel:
Each time you make a deploy to a cloud function you get an output line like this:
sourceArchiveUrl: gs://my-store-bucket/us-central1-function_name-xxoxtdxvxaxx.zip
I entered my Google Cloud Platform Developer Console -> Cloud Functions -> function_name -> Source tab
and there almost at the bottom it says: Source location
my-store-bucket/us-central1-function_name-xxoxtdxvxaxx.zip
the same as was shown in the CLI, but without gs://. That link led me to the following: https://storage.cloud.google.com/my-store-bucket/us-central1-function_name-........
I removed from the link everything that came after
https://storage.cloud.google.com/my-store-bucket
and that led me to a huge list of files, each of which represented an image of all my cloud functions at the point in time of each deploy I had made, exactly what I needed!
The only thing left to do was to locate the file with the last date before my mistaken deploy.
Source (as of 2019): Retrieving an old version of a Google Cloud function source
See also: Rolling back to an older version of a firebase function (google cloud function)
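The same bucket can also be browsed from the command line with gsutil, if you have the Cloud SDK installed (the bucket and archive names are the placeholders from the answer above):

    gsutil ls gs://my-store-bucket/                                              # list the per-deploy source archives
    gsutil cp gs://my-store-bucket/us-central1-function_name-xxoxtdxvxaxx.zip .  # download one to inspect or redeploy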
2021: Roll back Firebase hosting and functions deploy jointly?
You can roll back a Firebase Hosting deployment, but not the functions, without using git version control etc. Using partial deploys you can deploy multiple functions/groups (see the command sketch after the links below). You can check out Remote Config templates to roll back, and they are kept for up to 90 days.
https://firebase.google.com/docs/remote-config/templates
Firebase partial deploy multiple grouped functions
https://firebase.google.com/docs/cli#roll_back_deploys
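A hedged example of the partial-deploy syntax, with hypothetical function/group names:

    firebase deploy --only functions:myFunctionA,functions:myFunctionB
    firebase deploy --only functions:reportGroup   # assumes the functions are exported under a group named reportGroup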

What is the production deployment / runtime architecture of ResolveJS backend systems?

Does reSolveJS generally run as a single NodeJS application on the server in a production deployment?
Of course, event store and read models may be separate applications (e.g. databases) but are the CQRS read-side and write-side handled in the same NodeJS application?
If so, can / could these be split to enable them to scale separately, given the premise of CQRS is that the read-side is usually much more active than the write-side?
The reSolve Cloud Platform may alleviate these concerns, given the use of Lambdas that can scale naturally. Perhaps this is the recommended production deployment option?
That is, develop and test as monolith (single NodeJS application) and deploy in production to reSolve Cloud Platform to allow scaling?
Thanks again for developing and sharing an innovative platform.
Cheers,
Ashley.
A reSolve app can be scaled like any other NodeJS app, using containers or any other scaling mechanism.
So several instances can work with the same event store, and it is possible to configure several instances to work with the same read database, or for every instance to have its own read database.
reSolve config logic is specified in the app's run.js code, so you can extend it to have different configurations for different instance types.
Or you can have the same code in all instances and just route commands and queries to different instance pools.
Of course, reSolve Cloud frees you from these worries; in that case you use local reSolve as the dev and test environment and deploy there.
Please note that reSolve Cloud is not yet publicly released. Also, local reSolve may not have all the required database adapters at the moment, so some of those have yet to be written.

Use Google App Engine or Google Cloud Compute VM to Test Run My App?

I'm moving my Three.js app and its customized node.js environment, which I've been running on my local machine to Google Cloud. I want to test things out there, and hopefully soon get some early alpha testing going with other people.
I'm not sure which is the wiser way to go: upload the repo I've been running locally as-is onto a VM, which users would then access via the VM's external IP until I get a good name for this app, or merge my local node.js environment with what's available via Google App Engine and run it on GAE.
Issues I'm running into with the Linux VM approach: I'm not sure how to do on the VM the equivalent of what I've been doing locally. In Windows PowerShell I cd into the app directory and then enter node index.js. I'm assuming that with this method of deployment I can get the app running as soon as the browser hits the external IP. I should mention too that the app will allow users to save content as well as upload images and, eventually, 3D models and JSON datasets.
Issues I'm running into with the App Engine approach: it looks like I only have access to a linux-based command line, and have to install all the node.js modules manually. Meanwhile I have a bunch of files to upload, both the server-side node files and all the frontend stuff. I don't see where to upload those files, and ultimately what I'd like to do is have access to a visual, editable file-tree interface, as I have in Windows and FileZilla, so I can swap files in and out, etc. Alternatively I suppose I could import a repo from Github? Github would be fine as long as I can visually see what's happening. Is there a visual interface for file structure available in GAE somewhere? Am I missing something?
I went through the GAE "Hello World" tutorial and that worked fine, but was left scratching my head afterward regarding how to actually see and edit the guts of the tutorial app, or even where to look for the files.
So first off, I want to determine what's the better approach, and then if possible, determine how to make the experience of getting my app up there and running a more visual, user-friendly experience.
Thanks.
There are many things to consider when choosing how to run an app, but my instinct for your use case is to simply use a VM on GCE. The most compelling reason for this is that it's the most similar thing to what you have now. You can SSH into the machine and run nohup node index.js & (or node index.js inside tmux/screen if you prefer) and it will start the app and not stop it when you log out of SSH. You can use SCP / SFTP with whatever GUI client you want to upload files. You don't have to learn anything new! If you wanted to, you could even use a Windows VM (although I think you have to pay a little more than for a comparable Linux VM due to the licensing fees).
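As a rough sketch of that workflow (the VM name and app directory are hypothetical, and this assumes the gcloud CLI is configured):

    gcloud compute ssh my-vm              # SSH into the instance from your machine
    # then, on the VM:
    cd ~/my-app                           # the directory you copied your repo into
    nohup node index.js > app.log 2>&1 &  # keeps the server running after you log out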
That said, the other way is arguably more "correct" by modern development standards, but it will involve a lot more learning that will prevent you from getting your app running somewhere other than your laptop in the short term:
First, you'll need to learn about Docker and stateless containers, which is basically what your app runs inside of on AppEngine.
Next, you'll need to learn how to hook up a separate stateful service (database, file server, ...) to your app's container so you can store your files, etc. in it, and then probably rewrite your app somewhat to use it to store stuff (see the sketch after this answer).
Next, you'll probably want some way to automatically deploy this from code instead of manually doing it, which gets you into build systems, package managers, artifact storage, continuous integration systems, and on and on and on.
This latter path is certainly what you should choose for a long-running production service if you work with a big team of developers -- but that doesn't mean that it's necessarily the right path for your project today. If you don't care about scaling up automatically, load balancing between nodes, redundant copies of your app running in different regions in case there's a natural disaster, etc., then go with the easy way for now, and you can learn new ways to improve the service when they're actually needed.
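As a hedged illustration of the stateful-service point mentioned above: on App Engine the container's filesystem is ephemeral, so user uploads usually go to something like Cloud Storage (the bucket and function names are hypothetical):

    // npm install @google-cloud/storage
    import { Storage } from "@google-cloud/storage";

    const storage = new Storage();
    const bucket = storage.bucket("my-app-uploads"); // hypothetical bucket name

    // Save an uploaded image (or JSON/3D-model payload) outside the container,
    // so it survives restarts and is visible to every instance.
    export async function saveUpload(filename: string, data: Buffer): Promise<string> {
      await bucket.file(filename).save(data);
      return `gs://my-app-uploads/${filename}`;
    }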

Azure ARM Templates

I have created 5 x ARM templates that combined deploy my application. Currently I have separate template/parameter files for the various assets (1 x servicebus, 1 x sql server, 1 x eventhub, etc.).
Is this OK or should I merge them into 1 x template, 1 x parameter file that deploys everything?
Pros & cons? What is best practice here?
It's always advised to have separate JSON files for azuredeploy.json and azuredeploy.parameters.json.
Reason:
azuredeploy.json is the file which actually holds your resources, and parameters.json holds your parameters. You can have one azuredeploy.json file and multiple parameters.json files. For example, say you have different environments, Dev/Test/Prod; then you have separate azuredeploy-Dev.parameters.json, azuredeploy-Test.parameters.json, and so on; you get the idea.
You can also keep separate JSON files, one for the service bus, one for the VMs, etc.; this helps when you want multiple people to work on separate sections of your resource group. Otherwise you can merge them together.
Bottom line: you are the architect, do it as you want, whichever makes your life easy.
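A hedged example of how those per-environment parameter files are used at deploy time (the resource group name is hypothetical; this assumes the Azure CLI):

    az deployment group create \
        --resource-group my-app-dev-rg \
        --template-file azuredeploy.json \
        --parameters @azuredeploy-Dev.parameters.json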
You should approach this from the deployment view.
First, answer yourself a few questions:
How do separate resources such as ASB, SQL Server, and Event Hub impact your app? Can your app run independently while all of the above are unavailable?
How often do you plan to deploy? I assume you are going to implement some sort of continuous deployment.
How often will you provision a new environment?
So, long story short:
Aim for anything that gives you minimum (zero) downtime for your app during deployment/disaster recovery, along with the ability for anyone off the street to take your scripts and have your app running in a reasonable time, say 30 minutes max.
