Let's say I have an Azure App Service web app at foo.azurewebsites.net. The code for the web app (a simple Node.js server and React frontend) is hosted on VSTS, and a custom deployment script is configured to build and deploy the web app every time code is pushed to the repository's master branch. In other words, the standard web app configuration.
Now, all of my API code (just a Node.js server) is in another repository on VSTS. I'd like to be able to do the following:
1. Have all requests to foo.azurewebsites.net/api be handled by the API server (an implication of this, which I would nonetheless like to state explicitly, is that the server can ask the browser to set cookies that the web app can then read, and vice versa).
2. Set up similar continuous deployment for the API server, such that it gets redeployed whenever there are code changes in the API repo.
3. Be able to maintain the web app and API repositories completely separately.
This seems like a fairly standard scenario... is there an accepted solution? I came across this, but it seems like a pretty hacky way to do it, not to mention that I have no idea what the correct URL for the VSTS web hook is, and can't seem to find any information on it. Also, that example doesn't cover how to deal with point (1) above.
EDIT: Additional clarification
Note that the accepted answer on this question is not what I'm looking for. It describes how to pull from a second repository at deployment time, but not how to have that second repository trigger deployments, or how to handle the fact that the second repository is its own server. Additionally, it introduces a dependency between the two repositories, since the deploy.cmd is presumably under source control in the first repository.
EDIT: Virtual Directories
Thanks to @CtrlDot for pointing out that Virtual Directories are the way to solve (1). Still seeking guidance on (2) and (3).
I think the concept you are referring to is called Virtual Directories
I'm not sure which VSTS task you are using to deploy, but based on the article provided, you should be able to configure it to target only the virtual directory you want to deploy to.
EDIT
Sorry for not being more clear. The AzureRmWebAppDeployment task has a parameter for virtual application name. You would simply set that in your deployment pipeline for the API project (/api) and for the main project (leave it blank)
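As a hedged sketch (the service connection name and package paths are assumptions, and the app name foo is taken from the question), the deployment steps in the two pipelines could look like this:

```yaml
# In the API repo's pipeline: deploy into the /api virtual application.
# 'my-azure-connection' is an assumed service connection name.
- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: 'my-azure-connection'
    appType: 'webApp'
    WebAppName: 'foo'
    VirtualApplication: '/api'
    packageForLinux: '$(Build.ArtifactStagingDirectory)/api.zip'

# In the web app repo's pipeline: omit VirtualApplication to target the site root.
- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: 'my-azure-connection'
    appType: 'webApp'
    WebAppName: 'foo'
    packageForLinux: '$(Build.ArtifactStagingDirectory)/web.zip'
```

Because each repository has its own pipeline, each one triggers independently on pushes to its own repo, which also covers points (2) and (3).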
Related
We have a site design that makes use of modules that are developed separately from the master site. Through reflection, we pick up the modules when the main app starts.
This works fine in local development and on a normal web server. But in the Azure environment, when we try to use FTP to deploy the modules to our Azure-hosted site, we are unable to, because the main Azure deployment is read-only (it is running from a package).
Is it possible to not have the main site running from a package? Is it acceptable to run it that way?
Is there another way to deploy Dlls to the Azure-hosted site without having them be part of the main site's build and deploy? Ultimately we are trying to avoid rebuilding the main site every time we want to add a module.
"our Azure-hosted site we are unable to because the main Azure deployment is read-only (because it is running from a package)"
You could set WEBSITE_RUN_FROM_PACKAGE=0 in your app settings to make it not read-only; WEBSITE_RUN_FROM_PACKAGE=1 is read-only.
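If you prefer to script it, here is a minimal sketch (the service connection, resource group, and app name are placeholders):

```yaml
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-azure-connection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # 0 = writable wwwroot; 1 = read-only, mounted from the package
      az webapp config appsettings set \
        --resource-group my-rg \
        --name my-site \
        --settings WEBSITE_RUN_FROM_PACKAGE=0
```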
"Is it possible to not have the main site running from a package? Is it acceptable to run it that way?"
You could consider switching your deployment method to Zip Deploy to make your Azure-hosted site not read-only.
Refer to this doc.
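A minimal zip-deploy sketch from a pipeline (resource group, app, and package names are placeholders); unlike run-from-package, this extracts the files into wwwroot, which stays writable:

```yaml
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-azure-connection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az webapp deployment source config-zip \
        --resource-group my-rg \
        --name my-site \
        --src '$(Build.ArtifactStagingDirectory)/site.zip'
```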
What can help in this situation is to build and publish the modules using Azure Artifacts.
There are several approaches; please check the best-practices page here:
https://learn.microsoft.com/en-us/azure/devops/artifacts/concepts/best-practices?view=azure-devops
Depending on your approach, the build, the release, and also local development can use these published modules.
Example
You can, for example, use a private NuGet feed:
Publish the modules from your modules pipeline:
https://learn.microsoft.com/en-us/azure/devops/pipelines/artifacts/nuget?toc=%2Fazure%2Fdevops%2Fartifacts%2Ftoc.json&view=azure-devops&tabs=yaml
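A minimal publish sketch from the modules pipeline (the feed name is a placeholder):

```yaml
- task: NuGetCommand@2
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: 'MyModulesFeed'   # assumed Azure Artifacts feed name
```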
Consume them from Visual Studio:
https://learn.microsoft.com/en-us/azure/devops/artifacts/nuget/consume?view=azure-devops&tabs=windows
And consume them from the website pipeline; this can be a build, but also a release if you want to side-load them:
https://learn.microsoft.com/en-us/azure/devops/pipelines/packages/nuget-restore?source=recommendations&view=azure-devops
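And a matching restore sketch in the website pipeline (same assumed feed):

```yaml
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'
    feedsToUse: 'select'
    vstsFeed: 'MyModulesFeed'          # assumed Azure Artifacts feed name
```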
To keep a record of which modules are used in the website, I advise building or releasing the website whenever the modules change.
We're currently in the process of potentially moving our sites to Azure. As it stands, we are testing deployment to Azure App Service; everything works and publishes fine using one computer. However, if someone else runs a publish from a different computer with an identical build, the publish operation sees fit to 'update' all of the files, of which there are a lot. Then, when the next publish occurs from the original computer, the same happens there. Further publishes from the same computer do not generate this 'updating' of all the files, which takes a long time.
Never had this issue previously when publishing to IIS on our Rackspace servers. Why is MSDeploy choosing to update these files even though they have not changed at all and seemingly only because the publish is coming from a different computer to the last publish that occurred?
Can anyone explain how I can stop this?
It seems your project is in a local or personal repository, and is perhaps published using zip deploy or Visual Studio.
Deploying like that establishes a connection between Azure and the location of your project. If you deploy from another computer, or another repository, the connection is refreshed to the new one, which updates all the files as if you were publishing a new project.
You could consider deploying continuously from a remote repository like GitHub, which you can access from any computer.
Here are some samples you could have a look at:
Deploy using GitHub Action
Deploy using DevOps
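For example, a minimal GitHub Actions sketch (the app name, secret name, and build step are assumptions; adjust the build command to your project type). Since every deployment builds from the same repository on a hosted runner, publishes are no longer tied to any particular computer:

```yaml
name: deploy-webapp
on:
  push:
    branches: [ main ]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: dotnet publish -c Release -o site   # assumes a .NET project
      - uses: azure/webapps-deploy@v2
        with:
          app-name: 'my-site'                                    # assumed app name
          publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}  # downloaded from the portal
          package: site
```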
I'm coming from a long SSIS background, and we're looking to use Azure Data Factory v2, but I'm struggling to find any clear way of working with multiple environments. In SSIS we would have project parameters tied to the Visual Studio project configuration (e.g. development/test/production). Say there were two parameters, SourceServerName and DestinationServerName: these would point to different servers depending on whether we were in development or test.
From my initial playing around I can't see any way to do this in Data Factory. I've searched Google, of course, but any information I've found seems to be about CI/CD, then talks about Git 'branches', and is difficult to follow.
I'm basically looking for a very simple explanation and example of how this would be achieved in Azure data factory v2 (if it is even possible).
It works differently. You create an instance of data factory per environment and your environments are effectively embedded in each instance.
So here's one simple approach:
Create three data factories: dev, test, prod
Create your linked services in the dev environment pointing at dev sources and targets
Create the same-named linked services in test, but of course these point at your test systems
Now when you "migrate" your pipelines from dev to test, they use the same logical name (just like a connection manager)
So you don't designate an environment at execution time or map variables or anything... everything in test just runs against test, because that's the way the linked services have been defined.
That's the first step.
The next step is to connect only the dev ADF instance to Git. If you're a newcomer to Git, it can be daunting, but it's just a version control system: you save your code to it and it remembers every change you made.
Once your pipeline code is in Git, the theory is that you migrate code out of Git into higher environments in an automated fashion.
If you go through the links provided in the other answer, you'll see how you set it up.
I do have an issue with this approach though: you have to look up all of your environment values in the key store, which to me is silly, because why do we need to designate the test server's hostname every time we deploy to test?
One last thing: if you have a pipeline that doesn't use a linked service (say, a REST pipeline), I haven't found a way to make it environment-aware. I ended up building logic around the current data factory's name (available as @{pipeline().DataFactory} in expressions) to dynamically change endpoints.
This is a bit of a brain dump, but feel free to ask questions.
Although it's not recommended - yes, you can do it.
Take a look at the Linked Service; in this case, I have a connection to an Azure SQL Database.
You can use dynamic content for both the server name and the database name.
Just add a parameter to your pipeline, pass it to the Linked Service, and use it in the required field.
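Here is a hedged sketch of what the parameterized linked service JSON might look like (all names are placeholders; since JSON has no comments, note here that the @{linkedService().…} tokens are the dynamic content):

```json
{
  "name": "AzureSqlDb",
  "properties": {
    "type": "AzureSqlDatabase",
    "parameters": {
      "serverName": { "type": "String" },
      "databaseName": { "type": "String" }
    },
    "typeProperties": {
      "connectionString": "Data Source=@{linkedService().serverName};Initial Catalog=@{linkedService().databaseName};Integrated Security=False;Encrypt=True;Connection Timeout=30"
    }
  }
}
```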
Let me know whether I explained it clearly enough.
Yes, it's possible, although not as simple as it was in VS for SSIS.
1) First of all: there is no desktop application for developing ADF, only the browser.
Therefore developers should make their changes in the DEV environment, and for many reasons the best way to do this is to work with a Git repository connected.
2) Then, you "only" need to:
a) Publish the changes (this creates/updates the adf_publish branch in Git)
b) With Azure DevOps, deploy the code from adf_publish, replacing the required parameters for the target environment.
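As a hedged sketch of step b), the release can be an ARM template deployment that points at the templates ADF generates in adf_publish (the service connection, subscription, resource group, factory, and parameter names are placeholders; generated parameters typically follow the <LinkedServiceName>_connectionString pattern):

```yaml
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-azure-connection'  # assumed service connection
    subscriptionId: '$(subscriptionId)'
    resourceGroupName: 'rg-adf-test'
    location: 'West Europe'
    csmFile: 'adf_publish/ARMTemplateForFactory.json'
    csmParametersFile: 'adf_publish/ARMTemplateParametersForFactory.json'
    # Override the environment-specific values for the target factory:
    overrideParameters: '-factoryName adf-test -AzureSqlDb_connectionString "$(TestSqlConnectionString)"'
```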
I know that at the beginning it sounds horrible, but the sooner you set up an environment like this the more time you save while developing pipelines.
How to do these things step by step?
I describe all the steps in the following posts:
- Setting up Code Repository for Azure Data Factory v2
- Deployment of Azure Data Factory with Azure DevOps
I hope this helps.
Suppose I built a Rails app with authentication as per Michael Hartl's tutorial. Now suppose I want to push the app to a private GitHub repository and deploy to Heroku for use by the public.
What step(s) should be undertaken prior to pushing to source control and heroku to ensure the app is secure?
To be clear, I am not talking about adding additional features (e.g. two-factor authentication, minimum password complexity requirements, etc.), but I would like to know anything and everything the app developer must do once the app has been built in development, prior to going live in production: e.g. moving or removing certain files or lines of code, what to do with 'secrets', etc.
As a starting point, I read this article (which is old), but it made me nervous about whether or not I'd completed the necessary steps to ensure the app is secure.
I'm looking for some advice concerning release management in Azure.
I've found a lot of resources, but I still have some questions.
I have an ASP.NET 4 solution (to keep it simple: one ASP.NET project, one database project, one test project).
I'm using Git in Visual Studio Online.
At this moment I have one App Service and one SQL Server database in Azure.
I have a build that downloads NuGet packages, builds, executes a DACPAC for the database, runs the tests from the project (I have integration tests that use the database), and finally deploys the app to an Azure App Service.
What I want to do seems like "normal stuff":
I want to create the build, then deploy it to a "dev" environment in Azure, then to a "qa" environment, then to "staging" and "prod".
In my web project, I created different web.config transformations (one for each environment).
I've seen the releases in Visual Studio Online and I get that they're for the deployment part to the different environments.
What I have questions about:
In Azure:
Do I create one App Service per environment, or do I create a single App Service and use slots?
Do I create one SQL Server for each environment, or is it best to use a single SQL Server and have one database per environment?
In Visual Studio Online:
How do I set up the tasks?
In the build part, what configuration do I use? Which environment do I select?
In the build, how do I manage the database project? I think the deployment part should be in the release part, but I don't see how to configure the connection string.
For the tests: do I execute them in the release part? How do I change the connection strings and app settings? There are no transformations for the app.config.
I've seen that there are test plans as well, but I don't really get them.
Can somebody help me see all of this a little more clearly?
I cannot answer all of these, but I can answer a few.
I would create separate Web App instances for your separate environments. With slots, your slots exist on the same Web App and share computing resources. If something goes horribly wrong (your staging code pegs CPU to 100% or eats all of your RAM), this will cause problems for your production slot. I see slots as part of A/B testing or to aid in deployment.
You will likely need to create a separate database per environment as well. This is almost always required if you will be upgrading your database schema at any point in the future and introducing breaking changes. For example, what happens if your production code requires a specific field in a database table, but your next version of the database removes that field?
You mentioned you're using web.config transforms, but I want to throw out another option that we've found to be easier and have fewer moving parts and sources for error. If you're just changing connection strings and AppSettings, consider using the Web App's application settings per environment. These override whatever is in your web.config. Doing so means you can forget about web.config transforms and not have one more thing that could possibly go wrong in a deployment.
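For example, here is a hedged sketch of a release step that sets those values per environment (the service connection, app name, setting names, and values are all placeholders):

```yaml
# Set per-environment app settings and connection strings on the Web App.
# These override whatever web.config contains at runtime.
- task: AzureAppServiceSettings@1
  inputs:
    azureSubscription: 'my-azure-connection'
    appName: 'my-app-qa'
    appSettings: |
      [
        { "name": "Environment", "value": "QA", "slotSetting": false }
      ]
    connectionStrings: |
      [
        { "name": "Default", "value": "$(QaConnectionString)", "type": "SQLAzure", "slotSetting": false }
      ]
```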
Since you're using a Database project, to deploy your database, check out the VSTS Azure SQL Database Deployment task. It'll use your database project to create a DACPAC, and then deploy that to your target server.
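A hedged sketch of that task in a release (the service connection, server, database, and credential values are placeholders; store the credentials as secret pipeline variables):

```yaml
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'my-azure-connection'
    ServerName: 'myserver-dev.database.windows.net'
    DatabaseName: 'MyDb-Dev'
    SqlUsername: '$(SqlUser)'
    SqlPassword: '$(SqlPassword)'
    # Pick up the DACPAC produced by the database project's build:
    DacpacFile: '$(Build.ArtifactStagingDirectory)/**/*.dacpac'
```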