I have created 5 x ARM templates that, combined, deploy my application. Currently I have separate template/parameter files for the various assets (1 x Service Bus, 1 x SQL Server, 1 x Event Hub, etc.).
Is this OK or should I merge them into 1 x template, 1 x parameter file that deploys everything?
Pros & cons? What is the best practice here?
It's always advised to have separate JSON files: azuredeploy.json and azuredeploy.parameters.json.
Reason:
azuredeploy.json is the JSON file which actually holds your resources, and parameters.json holds your parameters. You can have one azuredeploy.json file and multiple parameters.json files. For example, say you have different environments, Dev/Test/Prod; then you have separate azuredeploy-Dev.parameters.json, azuredeploy-Test.parameters.json, and so on; you get the idea.
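For illustration only, deploying the same template with a different parameters file per environment could look like this with the Azure CLI (the resource group names are placeholders):

```bash
# Same azuredeploy.json, different parameters file per environment
az deployment group create \
  --resource-group rg-myapp-dev \
  --template-file azuredeploy.json \
  --parameters @azuredeploy-Dev.parameters.json

az deployment group create \
  --resource-group rg-myapp-prod \
  --template-file azuredeploy.json \
  --parameters @azuredeploy-Prod.parameters.json
```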
You can either keep separate JSON files, one for service bus, one for VMs, etc., which helps when you want multiple people to work on separate sections of your resource group, or you can merge them together.
Bottom line: you are the architect; do it the way that makes your life easy.
You should approach this from the deployment view.
First, answer yourself a few questions:
How do separate resources such as Service Bus, SQL Server and Event Hub impact your app? Can your app run independently while any of the above are unavailable?
How often do you plan to deploy? I assume you are going to implement some sort of continuous deployment.
How often will you provision a new environment?
So, long story short:
Aim for a split where deployment or disaster recovery causes minimal (ideally zero) downtime for your app, and where anyone off the street can take your scripts and have your app running in a reasonable time, say 30 minutes max.
I'm fairly new to Function Apps. Anyway, we have about a dozen small programs currently running as Windows scheduled tasks on an Azure VM, and we are looking to migrate these to PaaS. Most of these are small console-type background processes that might make an API call, perform a calculation, and store the result in a db, or maybe read some data from a db and then send out an email. We have a mixture of pwsh, Python, and .NET.
Anyway, I was wondering how many repos I should have. I assume I would need at least 3 (one for each runtime stack). I also didn't want to create a separate repo per app and end up with 50 git repos eventually. Would it be best just to make some root-level folders named after the apps within each repo to keep the structure separate?
Lastly, should each of these apps be hosted in its own Function App (Azure resource), or can I have several of the apps hosted in a single Function App? I'm not sure whether splitting everything up into separate Function Apps would make deployment easier or not. I guess ease of long-term support/maintenance is the most important aspect to me.
Short version: what is considered best practice for logically grouping and mapping your apps to git repos and Azure Function App resources?
The infrastructure is provisioned using terraform code.
In our AWS environment, a new AMI is created for every commit made to the repository. We now want to configure autoscaling for the web servers behind an ALB using this new AMI.
How can we make sure that the ASG replaces existing instances with every change in the launch configuration? I believe that once you change the LC, only the instances created by scaling in/out are launched from the new AMI, and the existing ones are not replaced.
Also, do you have any idea how we can programmatically (via Terraform) get how many servers are running at any point in time with auto scaling?
Any help is highly appreciated here.
Thanks!
For the most part this is pretty straightforward, and there are already a dozen implementations around the web.
The tricky part is to set the 'create_before_destroy' lifecycle flag on both the LC and the ASG. You should also reference the LC in your ASG resource, so that once your LC changes you trigger a workflow that creates a new ASG which replaces your current one.
Very Good Documented Example
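Not the linked example itself, but a minimal sketch of that wiring with a launch configuration (the AMI variable, names, sizes and target group are placeholders):

```hcl
# Sketch: replacing the LC forces a new ASG, and create_before_destroy
# brings the new ASG up (and healthy) before the old one is destroyed.
resource "aws_launch_configuration" "web" {
  name_prefix   = "web-lc-"
  image_id      = var.ami_id          # the new AMI built per commit
  instance_type = "t3.micro"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  # Interpolating the LC name forces ASG replacement whenever the LC changes
  name                 = "web-asg-${aws_launch_configuration.web.name}"
  launch_configuration = aws_launch_configuration.web.name
  min_size             = 2
  max_size             = 4
  desired_capacity     = 2
  vpc_zone_identifier  = var.subnet_ids
  target_group_arns    = [var.alb_target_group_arn]

  # Wait until the new instances pass health checks before the old ASG goes away
  min_elb_capacity = 2

  lifecycle {
    create_before_destroy = true
  }
}
```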
Also, do you have any idea how we can programmatically (via Terraform) get how many servers are running at any point in time with auto scaling?
This depends on the context. If you have a static number it's easy: you could define it in your module and stick with it. If it's about passing the previous ASG value, the way to do it is again described in the guide above :) You need to write a custom external handler that reports how many instances are running against your target groups at the moment. There might of course be an AWS API that lets you query the health-check status of all your target groups and sum them up (I'm not aware of one). Then again, you might add some custom rules for scaling policies.
External Handler
Side note: in the example the deployment is happening with ELB.
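As a rough sketch of such an external handler, assuming the AWS CLI is available wherever Terraform runs (the ASG name below is a placeholder):

```hcl
# Sketch: ask the AWS CLI how many instances the ASG currently has.
# The "external" data source expects the program to print a JSON object of strings.
data "external" "asg_instance_count" {
  program = [
    "aws", "autoscaling", "describe-auto-scaling-groups",
    "--auto-scaling-group-names", "my-web-asg",
    "--query", "{count: to_string(length(AutoScalingGroups[0].Instances))}",
    "--output", "json"
  ]
}

output "running_instance_count" {
  value = data.external.asg_instance_count.result.count
}
```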
I have 200+ SFTP subfolders, and around 10 folders are added dynamically every month. We created a 'List rows in a table' step through OneDrive and started monitoring the SFTP location, but somehow this approach is missing some files at certain points. Is there a better way or a different approach to tackle this problem? Has anyone come across this in the past?
The first thing that comes to mind: is the OneDrive file with the table perhaps in a locked state, so it can't be accessed by Logic Apps? That is the most similar issue I have had in the past.
Is this one Logic App or multiple Logic Apps? If it is one Logic App, you could try to turn it into an app that only checks the list of dynamically added folders and kicks off an Azure Automation job that deploys a new Logic App for each line (folder), with each of those monitoring its own folder. With this approach you would end up with 200+ Logic Apps; there is nothing wrong with that as such, but keep in mind the limit of 800 resources per resource group.
I have a stateless app with 3 components, say x, y, z, in the same code base. Each component is run based on an environment variable check. I want to deploy it on Kubernetes on GCP using a kind: Deployment YAML config with 3 replica pods. How can I make sure each component has a single dedicated pod?
Can it be done in a single deployment file?
As Ivan Aracki mentioned in the comments, the best way would be to distinguish each application component with its own Deployment object in order to guarantee the Pod assignment here.
As Ivan suggested above, deploy three Deployments, one each for x, y and z.
You can use the same image for the three Deployments; just pass a different environment variable value to each Deployment to select the specific component. You might have to build some logic into the container start-up script to read that environment variable and start the desired component.
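As an illustrative sketch, two of the three Deployments could sit in one file like this, using the same image and only changing the environment variable (the image name and the COMPONENT variable are assumptions; the z Deployment follows the same pattern):

```yaml
# Sketch: same image for every Deployment, a different COMPONENT value per Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-x
spec:
  replicas: 1                       # one dedicated pod for component x
  selector:
    matchLabels:
      app: app-x
  template:
    metadata:
      labels:
        app: app-x
    spec:
      containers:
        - name: app
          image: gcr.io/my-project/my-app:latest   # placeholder image
          env:
            - name: COMPONENT        # assumed variable checked by the start-up script
              value: "x"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-y
spec:
  replicas: 1                       # one dedicated pod for component y
  selector:
    matchLabels:
      app: app-y
  template:
    metadata:
      labels:
        app: app-y
    spec:
      containers:
        - name: app
          image: gcr.io/my-project/my-app:latest
          env:
            - name: COMPONENT
              value: "y"
```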
As I understand it, your requirements state that you have a code base with three processes inside one solution. I'm not sure whether the three components you mention are independent processes, or just layers (front end, service, DAL, etc.), or even tiers (e.g. a typical 3-tier application with a web front end, an API and a backend tier), but let's call them three microservices or services for simplicity...
Whichever the case, the best practices for Docker and Kubernetes-hosted microservices recommend:
one container per small app process (not a monolith)
though there can be multiple containers per pod, the suggestion is to keep one container per pod - you could possibly have three containers inside one pod
You can have three pods, one for each of your component apps, provided these apps can be refactored into three separate independent processes.
Have one YAML file per service and include all related objects inside, separated by --- on its own line.
Three containers inside a single pod, or three pods (one per service), can easily reach each other.
Hope this helps.
The scenario is as follows: a large text file is put somewhere. At a certain time of the day (or manually, or after x number of files), a virtual machine with BizTalk installed should start automatically to process these files. Then the files should be put in some output location and the VM should be shut down. I don't know how long processing these files will take.
What is the best way to build such a solution? Preferably it should be reusable for similar scenarios in the future.
I was thinking of Logic Apps for the workflow, blob storage or FTP for input/output of the files, an API App for starting/shutting down the VM. Can Azure Functions be used in some way?
EDIT:
I also asked the question elsewhere, see link.
https://social.msdn.microsoft.com/Forums/en-US/19a69fe7-8e61-4b94-a3e7-b21c4c925195/automated-processing-of-large-text-files?forum=azurelogicapps
Just create an Azure Automation runbook with a schedule. Make that runbook check for specific files in a storage account; if they exist, start up the VM and wait until the files are gone. Once the files are gone (meaning BizTalk has processed them, deleted them, and put the results where they belong), the runbook would stop the VM.
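A rough PowerShell sketch of such a runbook, assuming the Automation account has a managed identity and that BizTalk deletes the input blobs once it has processed them (all resource names are placeholders):

```powershell
# Sketch of the scheduled runbook: start the VM while input files exist, stop it when they are gone.
Connect-AzAccount -Identity                          # sign in with the Automation account's managed identity

$ctx   = (Get-AzStorageAccount -ResourceGroupName "rg-biztalk" -Name "stbiztalkinput").Context
$blobs = Get-AzStorageBlob -Container "input" -Context $ctx

if ($blobs) {
    Start-AzVM -ResourceGroupName "rg-biztalk" -Name "vm-biztalk"

    # Poll until BizTalk has picked up (and deleted) all input files
    while (Get-AzStorageBlob -Container "input" -Context $ctx) {
        Start-Sleep -Seconds 300
    }

    Stop-AzVM -ResourceGroupName "rg-biztalk" -Name "vm-biztalk" -Force
}
```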