Clearing Azure Redis Cache using PowerShell during deployment

When deploying new versions of our web application to Azure App Service, I have a requirement to clear out the data in the associated Azure Redis Cache. This is to ensure that we don't return old versions of items which have schema changes in the new version.
We're deploying using Octopus Deploy, and I have previously tried executing the following PowerShell command to Reset the cache:
Reset-AzureRmRedisCache -ResourceGroupName "$ResourceGroup" -Name "$PrimaryCacheName" -RebootType "AllNodes" -Force
This works successfully but it's a bit heavy-handed and we're having intermittent connection issues which I suspect are caused by the fact that we're rebooting Redis and dropping existing connections.
Ideally, I'd just like to execute a FLUSHALL command via PowerShell. Is this a better approach, and is it possible to execute in PowerShell using the StackExchange.Redis library?

The Reset-AzureRmRedisCache cmdlet restarts nodes of an Azure Redis Cache instance, which I agree is a bit overkill for your requirement.
Yes, it is possible to execute a Redis FLUSHALL command in PowerShell.
As a prerequisite, you should install the Redis CLI and set an environment variable to point to the Redis CLI executable/binary path in your environment.
Then you can run the redis-cli commands from PowerShell as shown below.
# Pass flushall as an argument so it runs against the remote Azure cache rather than a local Redis instance
Invoke-Command -ScriptBlock { redis-cli -h <hostname>.redis.cache.windows.net -p <redisPort> -a <password> flushall }
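Note that Azure Cache for Redis has the non-TLS port (6379) disabled by default, so this assumes you have either enabled the non-SSL port or are using a redis-cli build with TLS support (redis-cli 6.0+), for example:
redis-cli -h <hostname>.redis.cache.windows.net -p 6380 -a <password> --tls flushall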

The way I eventually implemented this is to call the StackExchange.Redis library via PowerShell, so you'll need to have a copy of this DLL somewhere handy. During my deployment, I have access to the connection string, so this function strips out the host and port to connect to the server. This works without the need to open the non-SSL port, and the connection string allows admin access to the cache:
function FlushCache($RedisConnString)
{
    # Extract the Host/Port from the start of the connection string (ignore the remainder)
    # e.g. MyUrl.net:6380,password=abc123,ssl=True,abortConnect=False
    $hostAndPort = $RedisConnString.Substring(0, $RedisConnString.IndexOf(","))
    # Split the Host and Port e.g. "MyUrl.net:6380" --> ["MyUrl.net", "6380"]
    $RedisCacheHost, $RedisCachePort = $hostAndPort.Split(':')
    Write-Host "Flushing cache on host - $RedisCacheHost - Port $RedisCachePort" -ForegroundColor Yellow
    # Add the Redis types from the assembly (LoadFile needs the full path to the DLL)
    $asm = [System.Reflection.Assembly]::LoadFile((Resolve-Path "StackExchange.Redis.dll").Path)
    # Open a connection; allowAdmin=true is required for FLUSH commands
    [object]$redis_cache = [StackExchange.Redis.ConnectionMultiplexer]::Connect("$RedisConnString,allowAdmin=true", $null)
    # Flush all databases on the server
    $redisServer = $redis_cache.GetServer($RedisCacheHost, $RedisCachePort, $null)
    $redisServer.FlushAllDatabases()
    # Dispose connection
    $redis_cache.Dispose()
    Write-Host "Cache flush done" -ForegroundColor Yellow
}
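For example, a deployment step could invoke it like this (a sketch with placeholder values; the cache name and access key are made up):
$RedisConnString = "mycache.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False"
FlushCache $RedisConnString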

I have used the Windows port of netcat to clear Redis cache remotely from my Windows machine, like so:
$redisCommands = "SELECT $redisDBIndex`r`nFLUSHDB`r`nQUIT`r`n"
$redisCommands | .\nc $redisServer 6379
Here $redisDBIndex is the Redis database index you want to clear; use the command FLUSHALL instead if you want to clear everything. $redisServer is your Redis server, and the commands are simply piped to nc.
I have also documented it here: https://jaeyow.github.io/fullstack-developer/automate-redis-cache-flush-in-powershell/#

Related

Azure startup script is not executed

I've learned how to deploy .sh scripts to Azure with Azure CLI. But it seems like I have no clear understanding of how they work.
I'm creating a script that simply extracts a .tgz archive in the current directory of the Azure Web App, and then deletes the archive. Quite simple:
New-Item ./startup.sh
Set-Content ./startup.sh '#!/bin/sh'
Add-Content ./startup.sh 'tar zxvf archive.tgz; rm -rf ./archive.tgz'
And then I deploy the script like this:
az webapp deploy --resource-group Group --name Name --src-path ./startup.sh --target-path /home/site/wwwroot/startup.sh --type=startup
Supposedly, it should appear in /home/site/wwwroot/, but for some reason it never does, no matter what I try. I thought it just gets executed and then deleted automatically (since I specified it as a startup script), but the archive is still there, not extracted at all.
My stack is .NET Core.
What am I doing wrong, and what's the right way to do what I need to do? Thank you.
I don't know if it makes sense, but I think the problem might be that you're using the target-path parameter while you should be using path instead.
From the documentation you cited, when describing the Azure CLI functionality, they state:
The CLI command uses the Kudu publish API to deploy the package and can be
fully customized.
The Kudu publish API reference indicates, when describing the different values for type and especially startup:
type=startup: Deploy a script that App Service automatically uses as the
startup script for your app. By default, the script is deployed to
D:\home\site\scripts\<name-of-source> for Windows and
home/site/wwwroot/startup.sh for Linux. The target path can be specified
with path.
Note the use of path:
The absolute path to deploy the artifact to. For example,
"/home/site/deployments/tools/driver.jar", "/home/site/scripts/helper.sh".
I never tested it, and I am aware that the option is not described for the az webapp deploy command itself, so it may just be an error in the documentation, but it may work:
az webapp deploy --resource-group Group --name Name --src-path ./startup.sh --path /home/site/wwwroot/startup.sh --type=startup
Note that the path you are providing is already the default one; as a consequence, you could safely omit it if required:
az webapp deploy --resource-group Group --name Name --src-path ./startup.sh --type=startup
Finally, try including some debug or echo commands in your script: perhaps the problem is caused by a permissions issue, and having some traces in the logs could be helpful as well.
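For instance, following the same Set-Content/Add-Content pattern from the question, a couple of echo lines (purely illustrative) would leave traces in the App Service log stream:
Set-Content ./startup.sh '#!/bin/sh'
Add-Content ./startup.sh 'echo "startup.sh: running in $(pwd) as $(whoami)"'
Add-Content ./startup.sh 'tar zxvf archive.tgz && rm -rf ./archive.tgz'
Add-Content ./startup.sh 'echo "startup.sh: done"'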

Alternative to New-SqlAzureKeyVaultColumnMasterKeySettings

I currently have a script that is similar to the script described in https://learn.microsoft.com/en-us/sql/relational-databases/security/encryption/configure-always-encrypted-keys-using-powershell?view=sql-server-ver15
However, the script uses PowerShell 5, which isn't available to me on the Linux agents in our Azure DevOps environment. Because of this, a majority of the Azure SQL commands aren't available to us. Is there an alternative to these SqlServer module cmdlets?
The recommended alternative is to use the cloud console, but you may be able to get away with Linux PowerShell like so:
I can't guarantee that this all works, but that specific cmdlet only creates a SqlColumnMasterKeySettings object that contains information about the location of your column master key. You can probably just create one manually, but you'll need to know the exact values. I would recommend running it from a Windows machine first to see what the values should be for your environment.
# On Windows [Optional]
$cmkSettings = New-SqlAzureKeyVaultColumnMasterKeySettings -KeyURL $akvKey.Key.Kid
$cmkSettings | Format-List KeystoreProviderName,KeyPath
KeystoreProviderName : # Take these strings
KeyPath : # And use them in your script on linux
# Now on Linux, using the values above:
$cmkSettings = [Microsoft.SqlServer.Management.PowerShell.AlwaysEncrypted.SqlColumnMasterKeySettings]::new("YourKeystoreProviderName","YourKeyPath")
New-SqlColumnMasterKey -Name 'CMK1' -InputObject $database -ColumnMasterKeySettings $cmkSettings
# Success!
The key settings properties are just strings that get saved to your SQL Instance, so this should work fine. The harder part is authenticating to Azure to create keys from your master key, but you can try importing the desktop version of the commands like so:
# Start a NEW powershell session without the sqlserver module:
pwsh
# Get the module directory:
$d = (Get-Item (Get-Module SqlServer).path).DirectoryName
# Import the desktop version of these assemblies:
Import-Module "$d/Microsoft.SqlServer.Diagnostics.Strace.dll"
Import-Module "$d/Microsoft.SqlServer.Management.PSSnapins.dll"
Import-Module "$d/Microsoft.SqlServer.Management.AzureAuthenticationManagement.dll"
Import-Module "$d/Microsoft.SqlServer.Management.AlwaysEncrypted.Types.dll"
# Then import the module normally (there will be errors - you can ignore these)
Import-Module SqlServer
# Now you may be able to authenticate to Azure to generate new keys:
# Required to generate new keys
# NOTE: -Interactive fails on linux
Add-SqlAzureAuthenticationContext -ClientID "YourID" -Secret "YourSecret" -Tenant "YourTenant"
# Create a key using your master key:
New-SqlColumnEncryptionKey -Name 'CEK1' -InputObject $database -ColumnMasterKey 'CMK1'
This worked on my installation of CentOS 7 / pwsh 7.1.3 - make sure you have SqlServer module version 21.1.18245 (only 10 days old at the moment), as many new SQL commands were ported to pwsh 7.1.
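To double-check what the Linux agent actually has, you can list the installed module version and update it if it is older (version number taken from above):
# See which SqlServer module versions are available on the agent
Get-Module -ListAvailable SqlServer | Select-Object Name, Version
# Install or update it for the current user if needed
Install-Module SqlServer -MinimumVersion 21.1.18245 -Scope CurrentUser -Force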

Azure: Deprovision a linux instance using Custom Script Extension

I am attempting to deprovision an Azure Linux instance using the Custom Script Extension.
My Script stored in an Anonymous Access Blob:
sudo waagent -deprovision+user -verbose -force
exit
My Command to apply the extension:
az vm extension set --resource-group rg01 --vm-name vm01 --name CustomScript --publisher Microsoft.Azure.Extensions --version 2.0 --settings "{'fileUris': [ 'https://mystorageaccount.blob.core.windows.net/scripts/Deprovision.sh' ], 'commandToExecute': 'sh Deprovision.sh'}"
When I run the az command, all the /var/log/azure subdirectories and logs disappear! I can tell from the bastion window that something tried to delete my user account, so I am confident the extension is getting provisioned and run.
Unfortunately, the extension shows all status information as unavailable, and my az command just sits there. The "Create or Update Virtual Machine Extension" item in the VM's activity log also shows no activity once the deprovision starts. The Azure activity log suggests a restart occurred and my account is no longer valid.
Hopefully Linux/Azure folks have a recipe for this...
I saw similar behavior on Windows and ended up using the script below to Sysprep. The -Wait was critical: it forced the PowerShell process to wait for completion, preventing the agent from reporting success/failure until the process finished, and this prevents a restart. I can then script deallocation to occur when the extension completes. I suspect something similar is going on here.
Start-Process -FilePath C:\Windows\System32\Sysprep\Sysprep.exe -ArgumentList '/generalize /oobe /quiet /quit' -Wait
While the command:
sudo waagent -deprovision+user -verbose -force
works via SSH, when run via CustomScript extension this command basically kills everything on the machine. The CustomScript extension is not able to acknowledge completion.
Using this script:
sudo shutdown +3
sudo waagent -deprovision+user -verbose -force -start
Line 1 shuts the VM down in 3 minutes (the deprovision command seems very fast).
Line 2, by adding '-start', runs waagent as a background process. This allows the CustomScript extension to acknowledge completion.
Now this command completes (instead of hanging):
var cmd = "sh Deprovision.sh";
var result = vm.Update()
    .DefineNewExtension("deprovision")
        .WithPublisher("Microsoft.Azure.Extensions")
        .WithType("CustomScript")
        .WithVersion("2.1")
        .WithMinorVersionAutoUpgrade()
        .WithPublicSetting("fileUris", new string[] { blobUri })
        .WithPublicSetting("commandToExecute", cmd)
        .Attach()
    .Apply();
After completion, we must poll Azure for when the VM is Stopped.
WaitForVMToStop(vm);
private void WaitForVMToStop(IVirtualMachine newVM)
{
    Context.Logger.LogInformation($"WaitForVMToStop...");
    bool stopped = false;
    int cnt = 0;
    do
    {
        var stoppedVM = Context.AzureInstance.VirtualMachines.GetById(newVM.Id);
        stopped = (stoppedVM.PowerState == PowerState.Stopped);
        if (!stopped)
        {
            cnt++;
            Context.Logger.LogInformation($"\tPolling 60 seconds for 'PowerState = Stopped on [{newVM.Name}]...");
            System.Threading.Thread.Sleep(60000);
            if (cnt > 20)
            {
                Context.Logger.LogInformation($"\tSysPrep Extension exceeded 20 minutes. Aborting...");
                throw new Exception($"SysPrep Extension exceeded 20 minutes on [{newVM.Name}]");
            }
        }
    } while (!stopped);
    Context.Logger.LogInformation($"\tWaited {cnt} minutes for 'PowerState = Stopped...");
}
This seems way too complicated to me, but it works. I especially do not like assuming deprovision will occur in 3 minutes or less. If anybody has a better way, please share.

How to automate Azure P2S VPN connection on Windows 10 with Jenkins pipeline

I have set up an Azure P2S VPN connection on my Windows computer and I can connect to it manually. I also have a PowerShell script to do the job.
Here's the script:
rasphone "Azure-VPN"
$wshell = New-Object -ComObject wscript.shell;
$wshell.AppActivate('Network Connections')
Sleep 2
$wshell.SendKeys('~')
Sleep 2
$wshell.SendKeys('~')
The $wshell.SendKeys('~') calls replace pressing the Enter key when I connect manually.
I can run this script to connect VPN successfully from command line:
> powershell C:\myScript.ps1
True
Now I want to run this script on a Jenkins pipeline. But it seems like this cannot be achieved.
stage('VPN'){
    bat "powershell C:\\myScript.ps1"
}
It returns False on the Jenkins console output.
I also tried following the accepted answer here but still no luck (it doesn't run either from the command line or on Jenkins):
> rasdial Azure-VPN /phonebook:%userprofile%\AppData\Roaming\Microsoft\Network\Connections\Cm\<aLongNumber>\<aLongNumber>.pbk
Remote Access error 623 - The system could not find the phone book entry for this connection.
Is there any workaround for this? My purpose is to use Jenkins pipeline to turn on the VPN, send some files over the network and then turn it off.
You could use Jenkins' PowerShell plugin to run PowerShell scripts on Windows directly via Jenkins. You can find more references in this blog.
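For example, with that plugin in place you could call the script through the powershell pipeline step instead of wrapping it in bat (a sketch based on the stage from the question; it assumes the powershell step is available on your agent):
stage('VPN'){
    powershell 'C:\\myScript.ps1'
}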
Alternatively, as in this SO answer, you could invoke a batch file with Jenkins like this for Windows paths:
stage('build') {
    dir("build_folder"){
        bat "run_build_windows.bat"
    }
}
or
stage('build') {
    bat "c://some/folder/run_build_windows.bat"
}

Conditionally detecting whether a Node server is running inside a Docker Container

I have my node.js code where I establish mongodb connections like this: mongodb://localhost:27017/mycollection
Now, I put my server in one container and db in another container and I am able to connect to my db from the server like this: mongodb://mycontainer:27017/mycollection
I have this connection string configured in my server code/config.
Now, how do I detect whether the server is running in a container or not, and pick the connection string for the db accordingly?
If it is running on the host machine, I want to use the first connection string with localhost and connect to the db on the host machine; if it runs in a container, I want to use the container link name to connect, as in the second case.
Is there any way to do this?
Personally, when I want to accomplish that, I set an ENV variable in the Dockerfile like the following:
ENV DATABASE_HOST db
You can have the full documentation on the Dockerfile reference.
Then, in your Node.js source code, you need to check whether DATABASE_HOST is set or not (I can refer you to Jayesh's Stack Overflow post: Read environment variables in Node.js):
var dbHost = 'localhost';
if (process.env.DATABASE_HOST) {
    dbHost = process.env.DATABASE_HOST;
}
or in one line:
var dbHost = process.env.DATABASE_HOST || 'localhost';
Then, for MongoDB connection:
var mongodbConnection = 'mongodb://' + dbHost + ':27017/mycollection'
Now, when you run the container, you must link the container in the docker run command with --link <your mongodb container>:db (since db is the value set in the ENV variable).
But you can also use the option -e DATABASE_HOST=<something else> (again with the docker run command) and use a MongoDB container under another name: -e DATABASE_HOST=anotherOne --link mongo:anotherOne.
And again, you can use an external MongoDB without linking any container if you want (which is not in another container maybe): -e DATABASE_HOST=www.mymongo.com.
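For example (the container and image names below are made up), the docker run commands for these options could look like:
# Run MongoDB and link it under the alias "db" (the ENV default)
docker run -d --name mongo mongo
docker run -d --link mongo:db my-node-app
# Or point the app at an external MongoDB without linking any container
docker run -d -e DATABASE_HOST=www.mymongo.com my-node-app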
EDIT: This solution may be better than just detecting whether the application runs in a Docker container, because with this approach your code is usable anywhere.
is-docker is a popular npm package to accomplish this.
import isDocker from 'is-docker';
if (isDocker()) {
    console.log('Running inside a Docker container');
}
My purpose in using the dependency may be relevant to those who are trying to determine which host to use for their database.
import isDocker from "is-docker";
const host = !!isDocker() ? "host.docker.internal" : env.NODE_DB_HOST;
