How to give a 200 response from Elasticsearch without authentication to an app gateway? - Azure

I have an issue setting up an ES cluster on Azure. I'd like my cluster to be behind an application gateway and also use Shield authentication.
The problem is that the Azure Application Gateway needs to send a health ping to the cluster and get back a 200 response, otherwise it returns a 502 "bad gateway". If I create an anonymous user then I can get the cluster to return a 200, but I'd rather not enable an anonymous user and would prefer to use basic authentication instead.
Is there some endpoint on the cluster that will return a 200 even if the user is not authenticated and anonymous users are turned off?
Thanks!

There is no such endpoint in Elasticsearch. There is status.allowAnonymous in Kibana for the stats API endpoint, but nothing similar in Elasticsearch.
You'd have to define your own user that has access to a specific healthcheck URL and use that, or enable anonymous access.
The healthcheck story can have variations: you can check the health of a specific node (/_cluster/health?local=true) or the health of the cluster as a whole. You can also get a 200 by sending a _search request (with preference=_local) to a specific node even if the cluster doesn't have an elected master node, because by default a _search operation is permitted on a node even in that situation.
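As a rough sketch of the kind of healthcheck such a dedicated user enables (the node address, user name and password below are placeholders, not values from any template), a basic-authenticated probe from PowerShell could look like:
# Build a basic-auth header for the dedicated healthcheck user (placeholder credentials)
$pair = "healthcheck_user:healthcheck_password"
$headers = @{ Authorization = "Basic " + [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($pair)) }
# Ask the local node for its view of cluster health; a reachable, authenticated node returns HTTP 200
$response = Invoke-WebRequest -Uri "http://10.0.0.4:9200/_cluster/health?local=true" -Headers $headers -UseBasicParsing
Write-Host "Status code:" $response.StatusCode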

In addition to @Andrei's answer, I'd recommend taking a look at Elastic's Azure ARM template, which can deploy a cluster with an Application Gateway configured for load balancing and SSL offload to the cluster.
This also works with X-Pack Security by setting up anonymous access for the Application Gateway ping health check, assigned a role with access only to
cluster:
- cluster:monitor/main
It would be great if the ping check supported supplying credentials in the future, in which case anonymous access would not be required, locking things down further.
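As a rough illustration (the anonymous user and role names below are made up; the xpack.security.authc.anonymous.* settings are the X-Pack ones used for this), the anonymous access on each node would be configured in elasticsearch.yml along these lines:
xpack.security.authc.anonymous.username: anonymous_healthcheck
xpack.security.authc.anonymous.roles: gateway_monitor
xpack.security.authc.anonymous.authz_exception: true
where gateway_monitor is the role granted only the cluster:monitor/main privilege shown above.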
To deploy a cluster with Application Gateway using Azure PowerShell:
function PromptCustom($title, $optionValues, $optionDescriptions)
{
    Write-Host $title
    Write-Host
    for($i = 0; $i -lt $optionValues.Length; $i++) {
        Write-Host "$($i+1))" $optionDescriptions[$i]
    }
    Write-Host
    while($true)
    {
        Write-Host "Choose an option: "
        $option = Read-Host
        $option = $option -as [int]
        if($option -ge 1 -and $option -le $optionValues.Length)
        {
            return $optionValues[$option-1]
        }
    }
}
function Prompt-Subscription() {
    # Choose subscription. If there's only one we will choose automatically
    $subs = Get-AzureRmSubscription
    $subscriptionId = ""
    if($subs.Length -eq 0) {
        Write-Error "No subscriptions bound to this account."
        return
    }
    if($subs.Length -eq 1) {
        $subscriptionId = $subs[0].SubscriptionId
    }
    else {
        $subscriptionChoices = @()
        $subscriptionValues = @()
        foreach($subscription in $subs) {
            $subscriptionChoices += "$($subscription.SubscriptionName) ($($subscription.SubscriptionId))";
            $subscriptionValues += ($subscription.SubscriptionId);
        }
        $subscriptionId = PromptCustom "Choose a subscription" $subscriptionValues $subscriptionChoices
    }
    return $subscriptionId
}
$subscriptionId = "{YOUR SUBSCRIPTION ID}"
try {
    Select-AzureRmSubscription -SubscriptionId $subscriptionId -ErrorAction Stop
}
catch {
    Write-Host "Please Login"
    Login-AzureRmAccount
    $subscriptionId = Prompt-Subscription
    Select-AzureRmSubscription -SubscriptionId $subscriptionId
}
# Specify the template version to use. This can be a branch name, commit hash, tag, etc.
# NOTE: different template versions may require different parameters to be passed, so be sure to check
# the parameters/password.parameters.json file in the respective tag branch
$templateVersion = "master"
$templateSrc = "https://raw.githubusercontent.com/elastic/azure-marketplace/$templateVersion/src"
$elasticTemplate = "$templateSrc/mainTemplate.json"
$location = "Australia Southeast"
$resourceGroup = "app-gateway-cluster"
$name = $resourceGroup
$cert = [System.Convert]::ToBase64String([System.IO.File]::ReadAllBytes("{PATH TO cert.pfx}"))
$clusterParameters = @{
    "artifactsBaseUrl" = $templateSrc
    "esVersion" = "5.1.2"
    "esClusterName" = $name
    # Deploy to same location as Resource Group location
    "location" = "ResourceGroup"
    # Install X-Pack plugins.
    # Will install trial license
    "xpackPlugins" = "Yes"
    # Use Application Gateway
    "loadBalancerType" = "gateway"
    "vmDataDiskCount" = 2
    "dataNodesAreMasterEligible" = "Yes"
    "adminUsername" = "russ"
    "authenticationType" = "password"
    "adminPassword" = "{Super Secret Password}"
    "securityAdminPassword" = "{Super Secret Admin User Password}"
    "securityReadPassword" = "{Super Secret Read User Password}"
    "securityKibanaPassword" = "{Super Secret Kibana User Password}"
    "appGatewayCertBlob" = $cert
    "appGatewayCertPassword" = "{Password for cert.pfx (if it has one)}"
}
Write-Host "[$(Get-Date -format 'u')] Deploying cluster"
New-AzureRmResourceGroup -Name $resourceGroup -Location $location
New-AzureRmResourceGroupDeployment -Name $name -ResourceGroupName $resourceGroup -TemplateUri $elasticTemplate -TemplateParameterObject $clusterParameters
Write-Host "[$(Get-Date -format 'u')] Deployed cluster"
Take a look at the other parameters available for the template, which can be used to control other elements such as the size and number of disks to attach to each data node, setting up the Azure Cloud plugin for snapshot/restore, etc.

Related

Passing multiple Parameters in single Azure Storage Script for various environments

I have a PowerShell script that creates the storage and blob account for a given subscription, and it works fine. The subscription name and resource group keep changing for different environments like DEV, UAT and PROD.
STRUCTURE OF MY TEMPLATE / CODE :
param(
    [string] $subscriptionName = "ABC",
    [string] $resourceGroupName = "XYZ",
    [string] $resourceGroupLocation = "westus",
    [string] $templateFilePath = "template.json",
    [string] $parametersFilePath = "parameters.json"
)
Function RegisterRP {
    Param(
        [string]$ResourceProviderNamespace
    )
    Write-Host "Registering resource provider '$ResourceProviderNamespace'";
    Register-AzureRmResourceProvider -ProviderNamespace $ResourceProviderNamespace;
}
$ErrorActionPreference = "Stop"
$confirmExecution = Read-Host -Prompt "Hit Enter to continue."
if($confirmExecution -ne '') {
Write-Host "Script was stopped by user." -ForegroundColor Yellow
exit
}
# sign in
Write-Host "Logging in...";
Login-AzureRmAccount;
# select subscription
Write-Host "Selecting subscription '$subscriptionName'";
Select-AzureRmSubscription -SubscriptionName $subscriptionName;
# Register RPs
$resourceProviders = @("microsoft.storage");
if($resourceProviders.length) {
    Write-Host "Registering resource providers"
    foreach($resourceProvider in $resourceProviders) {
        RegisterRP($resourceProvider);
    }
}
#Create or check for existing resource group
$resourceGroup = Get-AzureRmResourceGroup -Name $resourceGroupName -ErrorAction SilentlyContinue
if(!$resourceGroup)
{
    Write-Host "Resource group '$resourceGroupName' does not exist. To create a new resource group, please enter a location.";
    if(!$resourceGroupLocation) {
        $resourceGroupLocation = Read-Host "resourceGroupLocation";
    }
    Write-Host "Creating resource group '$resourceGroupName' in location '$resourceGroupLocation'";
    New-AzureRmResourceGroup -Name $resourceGroupName -Location $resourceGroupLocation
}
else {
    Write-Host "Using existing resource group '$resourceGroupName'";
}
# Start the deployment
Write-Host "Starting deployment...";
if(Test-Path $parametersFilePath) {
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -Name $deploymentName -TemplateFile $templateFilePath -TemplateParameterFile $parametersFilePath -storageAccounts_name $storageAccountName
} else {
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -Name $deploymentName -TemplateFile $templateFilePath; -storageAccounts_name $storageAccountName
}
Approach 1:
Created multiple PowerShell scripts, one for each environment.
Created one menu-based PowerShell script that calls the other scripts, e.g. select 1 for DEV, 2 for UAT, 3 for PROD. This approach works but is not effective.
Approach 2:
I would like to combine everything into a single script for all environments, which creates the storage accounts based on a selection. Only the subscription and resource group change; the rest of the PowerShell structure remains the same.
I tried using Get-* cmdlets as parameter defaults; the selection works, but it still throws the error:
[string] $subscriptionName = Get-AzureSubscription,
[string] $resourceGroupName = Get-AzureRmLocation,
If I try an array-based approach, passing the values as below, I can't understand how to pass these array values into the code and get it to work.
$environment = @('DEV','TEST','QA','PROD')
$resourcegroupname = @('test','test1','test2','test3')
$subscriptionName = @('devsub1','devsub2','test3','prod4')
I'm trying to call the functions using:
$environment[0]
$subscriptionName[0]
It returns the values below if I execute it separately, but how do I pass these values to my script to create the storage account?
DEV
devsub1
Requesting expert help if anyone has come across such a scenario earlier; if you can help in changing the above code and provide tested code, that would be of great help.
APPROACH 3:
$subscription = @(Get-AzureRmSubscription)
$resourcegroup = @(Get-AzureRmResourceGroup)
$Environment = @('DEV','TEST','QA','PROD')
$resourceGroupName = $resourcegroup | Out-GridView -PassThru -Title 'Pick the environment'
$subscriptionName = $subscription | Out-GridView -PassThru -Title 'Pick the subscription'
Write-Host "Subscription:" $subscriptionName
Write-Host "ResourceGroup:" $resourcegroup
OUTPUT:
If you look at the resource group, it fails to give the selection option for the resource group.
Subscription: < it returns the subscription name >
ResourceGroup: Microsoft.Azure.Commands.ResourceManager.Cmdlets.SdkModels.PSResourceGroup Microsoft.Azure.Commands.ResourceManager.Cmdlets.SdkModels.PSResourceGroup Microsoft.Azure.Commands.ResourceManager.Cmdlets.SdkModels.PSResourceGroup Microsoft.Azure.Commands.ResourceManager.Cmdlets.SdkModels.PSResourceGroup
What you are proposing is an interesting approach. I would likely use an input parameter that defines which environment the work will be done in, and then have a conditional block that sets the dynamic variables for that environment. There would be some duplication of initialization code for each environment, but the main code block would still be unified.
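A minimal sketch of that idea, assuming the script is saved as Deploy-Storage.ps1 (the environment-to-subscription/resource-group mappings below are made-up placeholders):
param(
    [Parameter(Mandatory = $true)]
    [ValidateSet('DEV','UAT','PROD')]
    [string] $environment,
    [string] $templateFilePath = "template.json",
    [string] $parametersFilePath = "parameters.json"
)
# Environment-specific values; replace the placeholders with your real names
switch ($environment) {
    'DEV'  { $subscriptionName = 'devsub1';  $resourceGroupName = 'dev-rg';  $storageAccountName = 'devstorage01'  }
    'UAT'  { $subscriptionName = 'uatsub1';  $resourceGroupName = 'uat-rg';  $storageAccountName = 'uatstorage01'  }
    'PROD' { $subscriptionName = 'prodsub1'; $resourceGroupName = 'prod-rg'; $storageAccountName = 'prodstorage01' }
}
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName $subscriptionName
# ...the rest of the original script (RegisterRP, resource group creation, deployment) stays the same
It would then be invoked as, for example, .\Deploy-Storage.ps1 -environment DEV.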

In Azure, how can you configure an alert or notification when a SQL Server failover happens?

In Azure, how can you configure an alert or notification when a SQL Server failover happens, if you set up a SQL server with failover groups and the failover policy is set to automatic? If it can't be set up in Monitor, can it be scripted elsewhere?
Found a way to script this in Azure using Automation Accounts > Runbooks with PowerShell. A simple script like this should do it. You just need to figure out the Run As account and trigger it by a schedule or an alert.
function sendEmailAlert
{
    # Send email
}
function checkFailover
{
    $list = Get-AzSqlDatabaseFailoverGroup -ResourceGroupName "my-resourceGroup" -ServerName "my-sql-server"
    if ($list.ReplicationRole -ne 'Primary')
    {
        sendEmailAlert
    }
}
checkFailover
Azure SQL Database only supports a fixed set of alert metrics, so you cannot create an alert for when a SQL Server failover happens. You can see this in this document: Create alerts for Azure SQL Database and Data Warehouse using Azure portal.
Hope this helps.
Thanks CKelly - that gave me a good kick start to something that should be standard in Azure. I created an Azure Automation account, added the Az.Accounts, Az.Automation and Az.Sql modules, then added a bit more to your code. In Azure I created a SendGrid account.
#use the Azure Account Automation details to login to Azure
$Conn = Get-AutomationConnection -Name AzureRunAsConnection
Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint
#create email alert
function sendEmailAlert
{
    # Send email
    $From = "<email from>"
    $To = "<email of stakeholders to receive this message>"
    $SMTPServer = "smtp.sendgrid.net"
    $SMTPPort = "587"
    $Username = "<sendgrid username>"
    $Password = "<sendgridpassword>"
    $subject = "<email subject>"
    $body = "<text to go in email body>"
    $smtp = New-Object System.Net.Mail.SmtpClient($SMTPServer, $SMTPPort)
    $smtp.EnableSSL = $true
    $smtp.Credentials = New-Object System.Net.NetworkCredential($Username, $Password)
    $smtp.Send($From, $To, $subject, $body)
}
#create failover check and send if the primary server has changed
function checkFailover
{
    $list = Get-AzSqlDatabaseFailoverGroup -ResourceGroupName "<the resourcegroup>" -ServerName "<SQL Database server>"
    if ($list.ReplicationRole -ne 'Primary')
    {
        sendEmailAlert
    }
}
checkFailover
This process may help others.
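For the scheduling part mentioned above, a rough sketch using the Az.Automation cmdlets (the resource group, automation account, runbook and schedule names are placeholders) could look like:
# Create an hourly schedule in the Automation account and link the failover-check runbook to it
$rgName = "my-automation-rg"
$accountName = "my-automation-account"
New-AzAutomationSchedule -ResourceGroupName $rgName -AutomationAccountName $accountName `
    -Name "HourlyFailoverCheck" -StartTime (Get-Date).AddMinutes(10) -HourInterval 1
Register-AzAutomationScheduledRunbook -ResourceGroupName $rgName -AutomationAccountName $accountName `
    -RunbookName "Check-SqlFailover" -ScheduleName "HourlyFailoverCheck"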

Azure Logic App - How can I restart my failed operations?

I have many failed operations in my Azure Logic App.
I see that if you click on a single operation in the Azure portal you can resubmit the operation:
Is it possible to select ALL of these failed operations, and re-run them all together?
Thanks a lot guys
If you want to resubmit one or more logic app runs that failed, succeeded, or are still running, you could bulk resubmit Logic Apps from the Runs Dashboard.
For how to use this feature, refer to this article: Monitor logic apps with Azure Monitor logs. Under the tile View logic app run information, you can find the Resubmit description.
As an alternative to bulk resubmitting Logic Apps from the Runs Dashboard, you can use PowerShell. Take a look at the script below, which automates listing failed logic app runs, identifying the triggers and actions responsible, and restarting the apps for a given resource group name. You can change some of those bits as per your needs (for example, skip the interactive prompts and just restart the apps); I have shown it this way for understanding.
It uses the Get-AzLogicApp, Get-AzLogicAppRunHistory, Get-AzLogicAppRunAction, Get-AzLogicAppTrigger and Start-AzLogicApp cmdlets.
Script using the Az PowerShell 6.2 module Az.LogicApp (copy the script below to a file, say restart.ps1, and run it). Make sure you assign $rg your actual resource group name.
$rg = "MyResourceGrp"
#get logic apps
$logicapps = Get-AzLogicApp -ResourceGroupName $rg
Write-Host "logicapps:" -ForegroundColor "Yellow"
write-output $logicapps.Name
#list all logicapp runs failed
$failedruns = @(foreach($name in $logicapps.Name){
    Get-AzLogicAppRunHistory -ResourceGroupName $rg -Name $name | Where {$_.Status -eq 'Failed'}
})
Write-Host "failedruns:" -ForegroundColor "Yellow"
Write-Output $failedruns.Name | select -Unique
Write-Host "failedruns: LogicAppNames" -ForegroundColor "Yellow"
Write-Output $failedruns.Workflow.Name | select -Unique
#list all logicappRunsActions failed
foreach($i in $logicapps){
    foreach($y in $failedruns){
        if ($i.Version -eq $y.Workflow.Name) {
            $resultsB = Get-AzLogicAppRunAction -ResourceGroupName $rg -Name $i.Name -RunName $y.Name -FollowNextPageLink | Where {$_.Status -eq 'Failed'}
        }
    }
}
foreach($item in $resultsB){
    Write-Host "Action:" $item.Name " " -NoNewline "Time:" $item.EndTime
    Write-Output " "
}
#get logicapp triggers
foreach($ii in $logicapps){
    foreach($yy in $failedruns){
        if ($ii.Version -eq $yy.Workflow.Name) {
            $triggers = Get-AzLogicAppTrigger -ResourceGroupName $rg -Name $ii.Name
        }
    }
}
Write-Host "triggers:" -ForegroundColor "Yellow"
Write-Output $triggers.Name
#start logic apps with triggers
Write-Host "Starting logic apps....." -ForegroundColor "green"
foreach($p in $logicapps){
    foreach($tri in $triggers){
        if ($p.Version -eq $triggers.Workflow.Name) {
            Start-AzLogicApp -ResourceGroupName $rg -Name $p.Name -TriggerName $tri.Name
        }
    }
}
$verify = Read-Host "Verify running apps? y or n"
if ($verify -eq 'y') {
    $running = @(foreach($name2 in $logicapps.Name){
        Get-AzLogicAppRunHistory -ResourceGroupName $rg -Name $name2 | Where {$_.Status -eq 'Running' -or $_.Status -eq 'Waiting'}
    })
    Write-Host $running
}
else {
    Write-Host "Bye!"
}
Although my Logic App failed again, you can see it was triggered in time by the script.
Note: If your logic app trigger expects inputs (unlike a recurrence or scheduled trigger), please edit the Start-AzLogicApp command accordingly so it executes successfully.
Here I am assuming all logic apps are enabled; use the -State Enabled parameter for the Get-AzLogicApp command if you want to run this only on currently enabled apps.
Example: Get-AzLogicApp -ResourceGroupName "rg" | where {$_.State -eq 'Enabled'}
You can also try the advanced settings for triggers in the workflow, such as a retry policy.
You can specify it to retry at custom intervals in case of failures due to intermittent issues.
You can submit feedback or upvote a similar feedback item: ability-to-continue-from-a-particular-point-as-in
Refer: help topics for the Logic Apps cmdlets

Problems Running Azure Automation Powershell to Scale Database Back After Restore Operation

I am trying to scale back a database after the restore operation has completed and am running into some problems. I am getting this exception and wonder if there is something in this script not supported by Azure Automation Workflows?
Parameter set cannot be resolved using the specified named parameters.
workflow insertflowname
{
<#
.SYNOPSIS
The purpose of this runbook is to demonstrate how to restore a particular database to a new database using an Azure Automation workflow. Then it is scaled back to Basic.
.NOTES
#>
# Specify Azure Subscription Name
$subName = 'insertsubscription name'
# Connect to Azure Subscription
Connect-Azure -AzureConnectionName $subName
Select-AzureSubscription -SubscriptionName $subName
# Define source databasename
$SourceDatabaseName = 'insert database name'
# Define source server
$SourceServerName = 'insert source server'
# Define destination server
$TargetServerName = 'insert destination server'
Write-Output "`$SourceServerName [$SourceServerName]"
Write-Output "`$TargetServerName [$TargetServerName]"
Write-Output "`$SourceDatabaseName [$SourceDatabaseName]"
Write-Output "Retrieving recoverable database details for database [$SourceDatabaseName] on server [$SourceServerName]."
$RecoverableDatabase = Get-AzureSqlRecoverableDatabase -ServerName $SourceServerName -DatabaseName $SourceDatabaseName
$TargetDatabaseName = "$SourceDatabaseName-$($RecoverableDatabase.LastAvailableBackupDate.ToString('O'))"
Write-Output "`$TargetDatabaseName [$TargetDatabaseName]"
Write-Output "Starting recovery of database [$SourceDatabaseName] to server [$TargetServerName] as database [$TargetDatabaseName]."
Start-AzureSqlDatabaseRecovery -SourceDatabase $RecoverableDatabase -TargetServerName $TargetServerName -TargetDatabaseName $TargetDatabaseName
$PollingInterval = 10
Write-Output "Monitoring status of recovery operation, polling every [$PollingInterval] second(s)."
$KeepGoing = $true
while ($KeepGoing) {
    $operation = Get-AzureSqlDatabaseOperation -ServerName $TargetServerName -DatabaseName $TargetDatabaseName | Where-Object {$_.Name -eq "DATABASE RECOVERY"} | Sort-Object StartTime -Descending
    if ($operation) {
        $operation[0]
        if ($operation[0].State -eq "COMPLETED") { $KeepGoing = $false }
        if ($operation[0].State -eq "FAILED") {
            # Throw error
            $KeepGoing = $false
        }
    } else {
        # Throw error since something went wrong and object was not created
        # May want to have this retry a few times before giving up or at least notify somebody
        # since at this point the recovery has been kicked off and you don't want the database
        # restore to remain at the elevated service level.
        $KeepGoing = $false
    }
    if ($KeepGoing) { Start-Sleep -Seconds $PollingInterval }
}
if ($operation[0].State -eq "COMPLETED") {
    Write-Output "Setting service level for database [$TargetDatabaseName] on server [$TargetServerName] to Basic."
    $ServiceObjective = Get-AzureSQLDatabaseServiceObjective -ServerName $TargetServerName -ServiceObjectiveName "Basic"
    $ServiceObjective
    Set-AzureSqlDatabase -ServerName $TargetServerName -DatabaseName $TargetDatabaseName -Edition "Basic" -ServiceObjective $ServiceObjective -MaxSizeGB 2 -Force
}
}
You are probably hitting the issue described here: https://social.msdn.microsoft.com/Forums/en-US/ce6412b8-5cce-4573-befb-6017924ce0d0/whereobject-fails-with-parameter-set-cannot-be-resolved-using-the-specified-named-parameters?forum=azureautomation
Summary:
Use parameter names, don't rely on positional parameters, in PowerShell Workflow. In this case, you need to add the -FilterScript parameter name to Where-Object.
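Applied to the Where-Object call in the runbook above, the fix would look something like this (Sort-Object's -Property is also named here for the same reason):
$operation = Get-AzureSqlDatabaseOperation -ServerName $TargetServerName -DatabaseName $TargetDatabaseName |
    Where-Object -FilterScript { $_.Name -eq "DATABASE RECOVERY" } |
    Sort-Object -Property StartTime -Descending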

How to update an Azure Cloud Service setting using Azure Powershell

Is it possible to update the value of a setting in an Azure Cloud Service with Azure Powershell?
So far there is no way to update just a single setting (the Service Management API does not allow it - it only accepts the whole service configuration). So, in order to update a single setting, you will have to update the entire configuration. And you can do this with PowerShell:
# Add the Azure account first - this will create a login prompt
Add-AzureAccount
# when you have more than one subscription you have to explicitly select the one
# which holds the cloud service you want to update
Select-AzureSubscription "<Subscription name with spaces goes here>"
# then Update the configuration for the cloud service
Set-AzureDeployment -Config -ServiceName "<cloud_service_name_goes_here>" `
-Configuration "D:/tmp/cloud/ServiceConfiguration.Cloud.cscfg" `
-Slot "Production"
For the -Configuration parameter I have provided the full local path to the new config file I want to use with my cloud service.
This is a verified and working solution.
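If you don't already have the deployed .cscfg locally, one way to get it (a sketch; the service name and path reuse the placeholders above) is to pull the current configuration from the deployment first, edit the setting in that file, and then push it back with Set-AzureDeployment as above:
# Save the configuration currently deployed to the Production slot to a local file
$deployment = Get-AzureDeployment -ServiceName "<cloud_service_name_goes_here>" -Slot "Production"
$deployment.Configuration | Out-File "D:/tmp/cloud/ServiceConfiguration.Cloud.cscfg" -Encoding utf8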
As astaykov says, you can't update a single cloud config value using PowerShell.
But you can read all of the settings, update the one you wish to change, save it to a temp file, and then set all the settings again, like so:
UpdateCloudConfig.ps1:
param
(
    [string] $cloudService,
    [string] $publishSettings,
    [string] $subscription,
    [string] $role,
    [string] $setting,
    [string] $value
)
# param checking code removed for brevity
Import-AzurePublishSettingsFile $publishSettings -ErrorAction Stop | Out-Null
function SaveNewSettingInXmlFile($cloudService, [xml]$configuration, $setting, [string]$value)
{
    # get the <Role name="Customer.Api"> or <Role name="Customer.NewsletterSubscription.Api"> or <Role name="Identity.Web"> element
    $roleElement = $configuration.ServiceConfiguration.Role | ? { $_.name -eq $role }
    if (-not($roleElement))
    {
        Throw "Could not find role $role in existing cloud config"
    }
    # get the existing AzureServiceBusQueueConfig.ConnectionString element
    $settingElement = $roleElement.ConfigurationSettings.Setting | ? { $_.name -eq $setting }
    if (-not($settingElement))
    {
        Throw "Could not find existing element in cloud config with name $setting"
    }
    if ($settingElement.value -eq $value)
    {
        Write-Host "No change detected, so will not update cloud config"
        return $null
    }
    # update the value
    $settingElement.value = $value
    # write configuration out to a file
    $filename = $cloudService + ".cscfg"
    $configuration.Save("$pwd\$filename")
    return $filename
}
Write-Host "Updating setting for $cloudService" -ForegroundColor Green
Select-AzureSubscription -SubscriptionName $subscription -ErrorAction Stop
# get the current settings from Azure
$deployment = Get-AzureDeployment $cloudService -ErrorAction Stop
# save settings with new value to a .cscfg file
$filename = SaveNewSettingInXmlFile $cloudService $deployment.Configuration $setting $value
if (-not($filename)) # there was no change to the cloud config so we can exit nicely
{
return
}
# change the settings in Azure
Set-AzureDeployment -Config -ServiceName $cloudService -Configuration "$pwd\$filename" -Slot Production
# clean up - delete .cscfg file
Remove-Item ("$pwd\$filename")
Write-Host "done"
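A hypothetical invocation of the script (every argument value here is a placeholder; the role and setting names are taken from the comments in the script) might look like:
.\UpdateCloudConfig.ps1 -cloudService "mycloudservice" `
    -publishSettings "C:\secure\mysubscription.publishsettings" `
    -subscription "My Subscription" `
    -role "Customer.Api" `
    -setting "AzureServiceBusQueueConfig.ConnectionString" `
    -value "<new connection string>"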
