We run our web application on Amazon EC2 with load-balanced, autoscaling web servers (IIS).
Before autoscaling, our deployment process was a simple file copy to a couple of big web servers.
Now, with autoscaling, we have anywhere from 5 to 12 web servers that appear and vanish at will, which makes deployment more difficult.
To address this, I wrote a PowerShell script that retrieves the IPs of the servers in an autoscaling group and uses MSDeploy to synchronise them with a designated deployment server (in the load balancer, but outside the autoscaling group). It then creates a new AMI and updates the autoscaling launch configuration.
All seemed to be good until, after rebuilding the deployment server, the sync script stopped updating the running (started/stopped) state of the web sites, which is what I rely on to put a site into maintenance mode.
I would like to know:
how other people approach this problem (specifically, syncing IIS servers in an autoscaling EC2 group, in the absence of Web Farm Framework (WFF) for IIS 8)
why the start/stop sync is failing
Code:
Set-AWSCredentials -AccessKey XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX -SecretKey XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Set-DefaultAWSRegion "us-west-2"
$date = get-date
$dateString = $date.ToString("yyyyMMdd-HHmm")
$name = $dateString + "Web"
$imageId = New-EC2Image -InstanceId x-xxxxxxxx -Name $name -NoReboot $true
$launchConfiguration = New-ASLaunchConfiguration -LaunchConfigurationName $name -ImageId $imageId -InstanceType "m3.medium" -SecurityGroups @('Web') -InstanceMonitoring_Enabled $false
Update-AsAutoScalingGroup -AutoScalingGroupName "XxxxxxxxxxxxXxxxxxxxxx" -LaunchConfigurationName $name
$a = Get-ASAutoScalingInstance | select -expandproperty InstanceId | Get-EC2Instance | select -expandproperty RunningInstance | select -property PrivateIpAddress
foreach($ip in $a)
{
$command = "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe"
$arg = "-dest:webServer,computerName=" + $ip.PrivateIpAddress
# $args is an automatic variable in PowerShell, so use a different name for the argument list
$msDeployArgs = @('-verb:sync', '-source:webServer', $arg)
& $command $msDeployArgs
}
Don't try to "sync" the web servers. Consider doing a one-time install, and let a bootstrapping tool manage the deployment and syncing.
What I've done in the past is use CloudFormation to create the environment, with a combination of cfn-init and cfn-hup to do the installation. The deployment process then becomes a case of deploying a new package to somewhere like S3 and using CloudFormation to bump the version.
This triggers a cfn-hup update, whereby each server pulls the package down from S3 and reinstalls it.
Also, if your scaling group scales out, the new instance will automatically use cfn-init to pull down and install the package completely before registering itself with the load balancer.
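For the Windows/IIS case, the instances' UserData only needs to kick off cfn-init at boot; here is a rough sketch (stack and resource names are placeholders, not from the original setup):
# Hypothetical UserData for the Windows web instances (run once at boot).
# cfn-init executes the AWS::CloudFormation::Init metadata for the resource - e.g. download
# the web package from S3 and install it - and also configures cfn-hup as a service.
# cfn-hup then re-runs the install whenever the stack metadata changes (the version bump).
cfn-init.exe -v --stack MyWebStack --resource WebServerLaunchConfig --region us-west-2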
A few similar StackOverflow questions:
AWS - Automatic deployment (.NET) to CloudFormation stack
Installing Windows applications/extensions with Amazon CloudFormation
Amazon EC2: load balancing / way to sync files / EC2 + CF
How do I execute UserData content in a Windows EC2 instance
I also wrote two articles about this many moons ago:
http://blog.kloud.com.au/2013/08/05/bootstrapping-on-aws/
http://blog.kloud.com.au/2013/08/19/bootstrap-update/
This should give you enough to get going.
Related
We are using DevOps to recreate our demo environment. Within the DevOps deployment we have an Azure PowerShell task that copies our production Azure SQL database to a "demo" database on the same server the prod db is located on.
We first search for the databases on the server, and if the "demo" database exists we delete it:
Remove-AzSqlDatabase -ResourceGroupName prdResource -ServerName prdServer -DatabaseName demoDb
Then we copy the prod db to the demo db:
New-AzSqlDatabaseCopy -ResourceGroupName prdResource -ServerName prdServer -DatabaseName prodDb -CopyDatabaseName demoDb
Finally we set the service level on the demoDb:
Set-AzSqlDatabase -ResourceGroupName prdResource -ServerName prdServer -DatabaseName demoDb -Edition "Standard" -RequestedServiceObjectiveName "S4"
This all works fine and the demo db is created correctly with the appropriate service level. The issue is that afterwards our Azure prod web app, which is connected to the prod database, struggles with performance: calls that took ~2 seconds just prior to the copy now take 30+ seconds. We found that restarting the web app clears the issue.
Just wondering why the copy db command is affecting our performance on the web app? Are there other settings we should be using with the copy command? We have run this process several times and get the same performance issues each time we run it.
From our understanding this process should not have any negative side effects on the prod db; is that a correct assumption? Are there any other ways of fixing the issue without having to restart the web app?
Scale out your SQL database tier, and locate the web app and database in the same region.
These two changes resulted in a massive performance increase.
Also, you could refer to this article to troubleshoot Azure SQL Database performance issues with Intelligent Insights.
The DTUs really don't seem to be the issue, as they don't go above 20% during the window when the DevOps deployment runs all the tasks (scheduled each Saturday at 1:00 AM).
Also, the DB and the web app are both in the East US region, so that should not be the issue either.
Again, restarting the web app clears up the issue, which points to it not being a DB/DTU issue.
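For now, the restart workaround can at least be scripted as a final Azure PowerShell task in the same deployment, so it happens automatically right after the copy (a sketch; the web app name is a placeholder):
# Restart the prod web app once the demo copy has completed (hypothetical app name)
Restart-AzWebApp -ResourceGroupName prdResource -Name prodWebApp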
I've been trying to find a way to run a simple command against one of my existing Azure VMs using Azure Data Factory V2.
Options so far:
Custom Activity / Azure Batch won't let me add existing VMs to the pool.
Azure Functions - I haven't played with this, but I haven't found any documentation on doing it with Azure Functions either.
Azure Cloud Shell - I've tried this using the browser UI and it works; however, I cannot find a way of doing it via ADF V2.
The use case is the following:
There are a few tasks running locally (on an Azure VM) in Task Scheduler that I'd like to orchestrate using ADF, since everything else is in ADF. These tasks are usually Python applications that restore a SQL backup and/or purge some folders, e.g.:
sqldb-restore -r myDatabase
where sqldb-restore is a command that is recognised locally after installing my local Python library. Unfortunately, the Python app needs to live locally on the VM.
Any suggestions? Thanks.
Thanks to @martin-esteban-zurita; his answer helped me get to what I needed, and this was a beautiful and fun experiment.
It is important to understand that Azure Automation is used for many things regarding resource orchestration in Azure (VMs, services, DevOps), and this automation can be done with PowerShell and/or Python.
In this particular case I did not need to modify/maintain/orchestrate any Azure resource; I needed to actually run a Bash/PowerShell command remotely on one of my existing VMs, where I have multiple PowerShell/Bash commands running recurrently in Task Scheduler.
Task Scheduler was adding unnecessary overhead to my data pipelines because it was unable to talk to ADF.
In addition, Azure Automation natively only runs PowerShell/Python commands in the Azure cloud, which is very useful for orchestrating resources (turning Azure VMs on/off, adding/removing permissions from other Azure services, running maintenance or purge processes, etc.), but I was still unable to run commands locally on an existing VM. This is where the Hybrid Runbook Worker came into the picture: a Hybrid Worker group lets a runbook execute directly on the VM.
These are the steps to accomplish this use case.
1. Create an Azure Automation Account
2. Install the Windows Hybrid Worker on my existing VM. In my case it was tricky because my proxy was giving me some errors; I ended up downloading the NuGet package and installing it manually.
.\New-OnPremiseHybridWorker.ps1 -AutomationAccountName <NameofAutomationAccount> -AAResourceGroupName <NameofResourceGroup> `
    -OMSResourceGroupName <NameofOResourceGroup> -HybridGroupName <NameofHRWGroup> `
    -SubscriptionId <AzureSubscriptionId> -WorkspaceName <NameOfLogAnalyticsWorkspace>
Keep in mind that in the above command you will need to supply your own parameter values; the only one that does not have to exist already (it will be created) is HybridGroupName, which defines the name of the Hybrid Worker group.
3. Create a PowerShell Runbook
[CmdletBinding()]
Param
([object]$WebhookData) #this parameter name needs to be called WebHookData otherwise the webhook does not work as expected.
$VerbosePreference = 'continue'
#region Verify if Runbook is started from Webhook.
# If runbook was called from Webhook, WebhookData will not be null.
if ($WebHookData){
    # Collect properties of WebhookData
    $WebhookName = $WebHookData.WebhookName
    # $WebhookHeaders = $WebHookData.RequestHeader
    $WebhookBody = $WebHookData.RequestBody

    # Convert the JSON request body. (Renamed from $Input to avoid clobbering
    # PowerShell's automatic $input variable.)
    $WebhookInput = (ConvertFrom-Json -InputObject $WebhookBody)
    # Write-Verbose "WebhookBody: $WebhookInput"
    #Write-Output -InputObject ('Runbook started from webhook {0} by {1}.' -f $WebhookName, $From)
}
else
{
    Write-Error -Message 'Runbook was not started from Webhook' -ErrorAction stop
}
#endregion
# This is where I run the commands that were in task scheduler
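# e.g. (hypothetical, based on the CLI mentioned in the question, since the hybrid
# worker executes this runbook directly on the VM):
# & sqldb-restore -r myDatabase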
$callBackUri = $WebhookInput.callBackUri
# This is extremely important for ADF: POSTing to the callback URI is what lets the
# webhook activity succeed/fail instead of timing out
Invoke-WebRequest -Uri $callBackUri -Method POST
4. Create a Runbook Webhook pointing to the Hybrid Worker's VM
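The webhook can be created in the portal or with PowerShell; the important part is targeting the Hybrid Worker group so the runbook runs on the VM rather than in Azure. A minimal sketch with the Az.Automation module (all names are placeholders):
# -RunOn points the webhook at the Hybrid Worker group created in step 2
New-AzAutomationWebhook -ResourceGroupName "MyRg" -AutomationAccountName "MyAutomationAccount" `
    -RunbookName "RunLocalTasks" -Name "AdfWebhook" -IsEnabled $true `
    -ExpiryTime (Get-Date).AddYears(1) -RunOn "<NameofHRWGroup>" -Force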
5. Create a webhook activity in ADF where the above PowerShell runbook will be called via a POST method
Important note: when I created the webhook activity it was timing out after 10 minutes (the default). I then noticed in the Azure Automation account that I was actually getting input data (WEBHOOKDATA) containing a JSON structure with the following elements:
WebhookName
RequestBody (This one contains whatever you add in the Body plus a default element called callBackUri)
All I had to do was invoke the callBackUri from Azure Automation; this is why the PowerShell runbook ends with Invoke-WebRequest -Uri $callBackUri -Method POST. With this, the ADF activity succeeds/fails instead of timing out.
There are many other details that I struggled with when installing the hybrid worker in my VM but those are more specific to your environment/company.
This looks like a use case that is supported with Azure Automation, using a hybrid worker. Try reading here: https://learn.microsoft.com/en-us/azure/automation/automation-hybrid-runbook-worker
You can call runbooks with webhooks in ADFv2, using the web activity.
Hope this helped!
We wish to implement CI using a TFS / Visual Studio Online-hosted build server. To run our unit/integration tests the build server needs to connect to a SQL Azure DB.
We've hit a stumbling block here because SQL Azure DBs use an IP address whitelist.
My understanding is that the hosted build agent is a VM that is spun up on demand, which almost certainly means we can't determine its IP address beforehand, or guarantee that it will be the same for each build agent.
So how can we have our hosted build agent run tests which connect to our IP-address-whitelisted SQL DB? Is it possible to programmatically add an IP to the whitelist and then remove it at the end of testing?
After a little research I found this (the sample uses PowerShell):
Log in to your Azure account
Select the relevant subscription
Then:
New-AzureRmSqlServerFirewallRule -EndIpAddress 1.0.0.1 -FirewallRuleName test1 -ResourceGroupName testrg-11 -ServerName mytestserver111 -StartIpAddress 1.0.0.0
To remove it:
Remove-AzureRmSqlServerFirewallRule -FirewallRuleName test1 -ServerName mytestserver111 -ResourceGroupName testrg-11 -Force
Found in PowerShell ISE for Windows. Alternatively, there should be something similar using the cross-platform CLI if you're not running on a Windows machine.
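Putting that together in a build/release step, something like the following could whitelist the agent's current public IP for the duration of the test run and remove it afterwards (a sketch only; the IP-lookup service and rule name are placeholders):
# Discover the hosted agent's current public IP (api.ipify.org used here purely as an example)
$agentIp = (Invoke-RestMethod -Uri 'https://api.ipify.org?format=json').ip

# Temporarily whitelist it on the SQL server
New-AzureRmSqlServerFirewallRule -ResourceGroupName testrg-11 -ServerName mytestserver111 `
    -FirewallRuleName "build-agent-temp" -StartIpAddress $agentIp -EndIpAddress $agentIp

# ... run the integration tests here ...

# Remove the rule once testing is done
Remove-AzureRmSqlServerFirewallRule -ResourceGroupName testrg-11 -ServerName mytestserver111 `
    -FirewallRuleName "build-agent-temp" -Force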
There is an Azure PowerShell task/step with which you can call Azure PowerShell cmdlets (e.g. New-AzureRmSqlServerFirewallRule).
On the other hand, you can manage server-level firewall rules through the REST API, so you can build a custom build/release task that gets the necessary information (e.g. authentication) from the selected Azure service endpoint and then calls the REST API to add or remove firewall rules.
The SqlAzureDacpacDeployment task contains source code for adding firewall rules through the REST API that you can refer to: the SqlAzureDacpacDeployment source code and the VstsAzureRestHelpers_.psm1 source code.
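As a rough illustration of the REST approach (a sketch only; the api-version shown and the way $token is obtained are assumptions, and the credentials would normally come from the selected service endpoint):
# Hypothetical sketch: add a server-level firewall rule via the ARM REST API
$subscriptionId = "<subscription-id>"
$uri = "https://management.azure.com/subscriptions/$subscriptionId/resourceGroups/testrg-11" +
       "/providers/Microsoft.Sql/servers/mytestserver111/firewallRules/test1?api-version=2014-04-01"
$body = @{ properties = @{ startIpAddress = "1.0.0.0"; endIpAddress = "1.0.0.1" } } | ConvertTo-Json
Invoke-RestMethod -Method Put -Uri $uri -ContentType "application/json" `
    -Headers @{ Authorization = "Bearer $token" } -Body $body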
There is now an "Azure SQL InlineSqlTask" build task which you can use to automatically set firewall rules on the Azure server. Just make sure "Delete Rule After Task Ends" is not checked, and add some dummy query like "select top 1 * from ..." as the "Inline SQL Script".
I'm using VSO to continuously deploy to azure.
I have three slots :
Staging (for automated tests)
AutoSwap (if the version passes the automated tests in Staging, it's deployed to AutoSwap)
Production (when AutoSwap is deployed, it auto-swaps with Production)
The problem is that my deployments are done using FTP (I can't do it otherwise because it's an ASP.NET Core 1.0 app), so when I deploy to AutoSwap it's not detected as an actual deployment, and no auto swap is done with Production.
My question: is there any PowerShell command that I can call from the TFS task to start that auto-swapping? (For example, a command to tell Azure that a deployment has been done, which I could call when the FTP upload ends.)
EDIT
I have found and tried this, but it simply does nothing (it doesn't fail):
Switch-AzureWebsiteSlot -Name "MyApp" -Slot1 "production" -Slot2 "AutoSwap" -Force
Try to use Move-AzureDeployment, which swaps the deployments between production and staging.
Parameter Set: Default
Move-AzureDeployment [-ServiceName] <String> [ <CommonParameters>]
For more details you can refer to the MSDN link: Move-AzureDeployment
Note: this applies only to cloud services, not web apps. For the difference between a web app and a cloud service, see: Web App vs Cloud Service
Update
This may be caused by the Azure PowerShell module that VSO Online loads not supporting slot swapping. Try removing the Azure PowerShell module and importing a different one.
See the answer from Ryan P in this MSDN link: https://social.msdn.microsoft.com/Forums/windows/en-US/0f30b76b-7954-4558-a10d-6a2b6635765a/switchazurewebsiteslot-does-not-work-in-vso-online
The problem was that I'm using this utility to upload the files via FTP:
https://marketplace.visualstudio.com/items?itemName=januskamphansen.ftpupload-task
In its code there is this line, which should be commented out so that it doesn't break the Azure commands in the tasks following it:
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$CurrentSession.ignoreCert}
Now everything works great. It took me a week to find this; I hope this answer saves someone some time.
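Alternatively, if you'd rather not patch the extension's code, an inline PowerShell step run right after the FTP task could reset the callback so the Azure tasks that follow behave normally (an untested sketch):
# Clear the global certificate validation override set by the FTP task
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = $null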
I am a developer and I have arrived at a solution to a webservice authentication problem that involved ensuring Kerberos was maintained because of multiple network hops. In short:
A separate application pool for the virtual directory hosting the webservice was established
The identity of this application pool is set to a configurable account (DOMAINname\username, which will remain constant, although the strong password is changed every 90 days or so; at any given point in time the password is known or obtainable by our system admin).
Is there a scripting language that could be used to set up a new application pool for this application and then set the identity as described (rather than manual data entry into property pages in IIS)?
I think our system admin knows a little PowerShell, but can someone help me offer him something to use? (He will need to repeat this on two more servers as the app is rolled out.) Thanks.
You can use a PowerShell script like this (it requires the WebAdministration module that ships with IIS):
Import-Module WebAdministration
$appPool = New-WebAppPool -Name "MyAppPool"
$appPool.processModel.userName = "domain\username"
$appPool.processModel.password = "ReallyStrongPassword"
$appPool.processModel.identityType = "SpecificUser"
$appPool | Set-Item
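To double-check the result and point the web service's application at the new pool, something like the following could follow (the site/application path is just an example):
# Verify the identity that was applied to the pool
Get-ItemProperty "IIS:\AppPools\MyAppPool" -Name processModel | Select-Object identityType, userName

# Assign the application hosting the web service to the new pool (hypothetical path)
Set-ItemProperty "IIS:\Sites\Default Web Site\MyWebService" -Name applicationPool -Value "MyAppPool"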