Install SCOM (System Center Operations Manager) agent on an Azure VM with an ARM template

Does anyone have a sample ARM template to install the SCOM agent on an Azure VM?
I searched through the Microsoft docs but couldn't find an example.
Also, what are the other critical points to watch out for while performing this task?
Could you go through the steps?
Any help is appreciated.
Thanks

• You can certainly install the SCOM agent through a custom script extension in an ARM template, as shown below. Use a SAS token to download the SCOM agent installation package (MOMAgent.msi) onto the Azure VM during deployment itself, and then use a PowerShell script to invoke the silent install of the SCOM agent.
ARM template:
Start from the default quickstart template for deploying an Azure VM through an ARM template, as given in this link:
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-template?toc=/azure/azure-resource-manager/templates/toc.json
In this template, add the custom script extension resource shown further below to the 'resources' section. Check the formatting of the ARM template code carefully (commas, curly brackets, square brackets, etc.). Also ensure that inbound HTTPS port 443 is open, and that the ports required for communication between the SCOM management server and the agent on the Azure VM (by default TCP 5723 for the agent channel) are opened by adding security rules such as the ones below; a connectivity check is sketched after the rules:
"securityRules": [
{
"name": "default-allow-3389",
"properties": {
"priority": 1000,
"access": "Allow",
"direction": "Inbound",
"destinationPortRange": "3389",
"protocol": "Tcp",
"sourcePortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*"
}
},
{
"name": "AllowHTTPSInBound",
"properties": {
"priority": 1010,
"access": "Allow",
"direction": "Inbound",
"destinationPortRange": "443",
"protocol": "Tcp",
"sourcePortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*"
}
}
]
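Once the VM is up, you can quickly confirm from inside it that the agent will be able to reach the management server on the default SCOM agent channel (TCP 5723). This is just a sketch; MSname stands for the management server DNS name placeholder used later in the install script:
# Run on the Azure VM after deployment: check TCP connectivity to the SCOM management server
# on the default agent communication port (5723); replace MSname with your server's DNS name.
Test-NetConnection -ComputerName MSname -Port 5723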
To include the custom script extension in your Azure VM deployment, add the following resource to the 'resources' section mentioned above:
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2021-04-01",
  "name": "[concat(parameters('vmName'),'/', 'InstallWebServer')]",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/',parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.7",
    "autoUpgradeMinorVersion": true,
    "protectedSettings": {
      "storageAccountName": "SCOM",
      "storageAccountKey": "EN6iUzOfVe8Ht0xvyxnqK/iXEGTEunznASsumuz0FR4SCvc2mFFHUJfbMy1/GSK7gXk0MB38MMo7+AStoKxC/w==",
      "fileUris": [
        "https://SCOM.blob.core.windows.net/SCOMAgent/Testdemo2.ps1"
      ],
      "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File Testdemo2.ps1"
    }
  }
}
Also, note that you need to provision a storage account container beforehand to hold the PowerShell script and the agent package, so that you can substitute that storage account's name and key and the script's blob URI into the template above. Remember to change the script name in 'commandToExecute' to match; I named my script 'Testdemo2.ps1' and entered its blob URI and name accordingly in the ARM template above.
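If the storage account and container do not exist yet, a minimal provisioning sketch with the Az PowerShell modules could look like the following; the resource group, account and container names here are placeholders of my own, not the ones used in the template above:
# Sketch, assuming the Az.Storage module is installed; all names below are placeholders
$rg = "scom-demo-rg"
$sa = "scomagentstore01"    # storage account names must be 3-24 lowercase letters and digits
New-AzStorageAccount -ResourceGroupName $rg -Name $sa -Location "eastus" -SkuName Standard_LRS
$ctx = (Get-AzStorageAccount -ResourceGroupName $rg -Name $sa).Context
New-AzStorageContainer -Name "scomagent" -Context $ctx
# Upload the install script referenced by the extension and the agent package downloaded by the script
Set-AzStorageBlobContent -File ".\Testdemo2.ps1" -Container "scomagent" -Context $ctx
Set-AzStorageBlobContent -File ".\MOMAgent.msi" -Container "scomagent" -Context $ctx
# Generate a short-lived, read-only SAS token for MOMAgent.msi to paste into the download script
New-AzStorageBlobSASToken -Container "scomagent" -Blob "MOMAgent.msi" -Permission r -ExpiryTime (Get-Date).AddHours(8) -Context $ctx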
Once the above has been done, confirm that the silent installation command for the SCOM agent runs successfully locally, so that it can be adapted in the PowerShell script. My PowerShell script is below. Ensure that this script and MOMAgent.msi are uploaded beforehand, and that the container's access level is set to allow anonymous public read access:
# Download the SCOM agent package from blob storage, then install it silently
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Install-Module -Name Az.Storage -AllowClobber -Force
Import-Module -Name Az.Storage -Force
# Storage details for the blob holding MOMAgent.msi
$StorageAccountName = "SCOM"
$ContainerName = "SCOMAgent"
$Blob1Name = "MOMAgent.msi"
$TargetFolderPath = "C:\"
# Authenticate to the container with a read-only SAS token and download the package
$context = New-AzStorageContext -StorageAccountName $StorageAccountName -SASToken "sp=r&st=2022-02-10T08:40:34Z&se=2022-02-10T16:40:34Z&spr=https&sv=2020-08-04&sr=b&sig=DRDulljKTJiRbVPAXAJkTHi8QlnlbjPpVR3aueEf9xU%3D"
Get-AzStorageBlobContent -Blob $Blob1Name -Container $ContainerName -Context $context -Destination $TargetFolderPath
# Silent install of the SCOM agent; replace MGname, MSname, PortNumber and the action account details with your own
$arg="/I C:\MOMAgent.msi /QN USE_SETTINGS_FROM_AD=1 MANAGEMENT_GROUP=MGname MANAGEMENT_SERVER_DNS=MSname SECURE_PORT=PortNumber ACTIONS_USE_COMPUTER_ACCOUNT=0 ACTIONSUSER=UserName ACTIONSDOMAIN=DomainName ACTIONSPASSWORD=Password INSTALLDIR=C:\ProgramFiles\ AcceptEndUserLicenseAgreement=1"
Start-Process msiexec.exe -Wait -ArgumentList $arg
If you intend to modify the arguments I used for the SCOM agent installation on the Azure VM, refer to the documentation link below; it explains the command-line arguments that can be passed to the SCOM agent installer. Note that these arguments depend on your existing SCOM server setup and configuration, so open or modify the port settings accordingly for the Azure VM as well as for the other components in your SCOM environment.
https://learn.microsoft.com/en-us/system-center/scom/manage-deploy-windows-agent-manually?view=sc-om-2019#to-deploy-the-operations-manager-agent-from-the-command-line
Then edit the parameters file with the desired values for 'adminUsername', 'adminPassword' and 'location', save it in the same folder as the template file, and run the commands below from an elevated PowerShell console after changing to that folder:
az login
az deployment group create -n <name of the deployment> -g <name of the resource group> --template-file "azuredeployVM.json" --parameters "azuredeployVM.parameters.json"
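Alternatively, if you prefer to stay in PowerShell rather than the Azure CLI, an equivalent deployment command with the Az module (same placeholder names, just a sketch) is:
# Az PowerShell equivalent of the 'az deployment group create' command above
New-AzResourceGroupDeployment -Name <name of the deployment> -ResourceGroupName <name of the resource group> -TemplateFile ".\azuredeployVM.json" -TemplateParameterFile ".\azuredeployVM.parameters.json"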
Thus, after a successful deployment, you will see the SCOM agent installed as part of the VM creation itself. In this way, you can install the SCOM agent on an Azure VM through an ARM template together with a provisioned storage account.
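To double-check from inside the VM, you can extend the last lines of the install script to capture the installer's exit code and then look for the agent's Windows service (the SCOM agent registers the HealthService service). A sketch:
# Variation of the script's last line: capture the msiexec exit code and verify the agent service
$proc = Start-Process msiexec.exe -Wait -PassThru -ArgumentList $arg
if ($proc.ExitCode -ne 0) { Write-Warning "MOMAgent.msi install returned exit code $($proc.ExitCode)" }
Get-Service -Name HealthService    # present once the SCOM agent is installed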

Related

How to use a PowerShell script in an Azure classic release pipeline - script file stored in Azure DevOps Secure Files

I am using a custom script extension for the VM in an ARM template:
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vm-Name'),'-0',copyIndex(1),'/script')]",
  "apiVersion": "2015-05-01-preview",
  "location": "[resourceGroup().location]",
  "copy": {
    "name": "storagepoolloop",
    "count": "[parameters('virtualMachineCount')]"
  },
  "dependsOn": [
    "virtualMachineLoop",
    "nicLoop"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.4",
    "settings": {
      "fileUris": [
      ],
      "commandToExecute": "[parameters('commandToExecute')]"
    }
  }
}
where the parameter value is "powershell.exe $(Agent.TempDirectory)/$(script.secureFilePath)".
I am using Azure DevOps Secure Files to store my script, and I have a Download Secure File task before deploying the VM.
I have also tried referencing the script file name directly:
"powershell.exe $(Agent.TempDirectory)/puscript.ps1"
I am using a classic release pipeline. If this is not the right way, please guide me on how to use a PowerShell script stored in Secure Files.
Any help is appreciated. Thanks in advance.
The script will need to be downloaded onto the VM you're creating, not onto the machine that is deploying the ARM template. That command does not actually get executed until the VM starts the extension, so the variable $(Agent.TempDirectory) refers to a directory on the machine executing the pipeline and won't exist when the VM starts up.
I did the same thing for a VM custom extension by including the script in the image that I was using to create the VM. If you're not using a custom image, you can add the storage account information needed to download it in the protectedSettings, like this:
"protectedSettings": {
"commandToExecute": "powershell.exe puscript.ps1",
"storageAccountName": "yourstorageaccount",
"storageAccountKey": "<account key>",
"fileUris": [
"https://yourstorageaccount.blob.core.windows.net/container/puscript.ps1"
]
}
ref: https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows#extension-schema
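If you do want to keep the script in Secure Files, one workaround (just a sketch, assuming an Azure PowerShell task placed after the Download Secure File task, whose reference name is 'puscript', and a storage account you control) is to copy the downloaded file into a blob container that the extension's fileUris can then point at:
# Runs on the release agent: push the secure file to blob storage so the VM-side extension can fetch it.
# 'puscript' is the reference name configured on the Download Secure File task (hypothetical).
$ctx = New-AzStorageContext -StorageAccountName "yourstorageaccount" -StorageAccountKey "<account key>"
Set-AzStorageBlobContent -File "$(puscript.secureFilePath)" -Container "container" -Blob "puscript.ps1" -Context $ctx -Force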
You can try the steps below:
Use the Download Secure File task to download the PowerShell script file. On the task, set a reference name so the downloaded file can be referenced later.
Use the PowerShell task (or Azure PowerShell task) to execute the PowerShell script.
If you want to execute the PowerShell script to run an ARM template deployment, use the Azure PowerShell task.

Install applications or software on Azure VM with ARM Template

We're trying to install Kaspersky Network Agent on an Azure VM using an ARM template.
We also need to get the .exe or .msi file from storage using a SAS token. I searched for template examples but couldn't find one that accomplishes this. Do you know if it's possible to do it this way?
If so:
Can you share a template that does a similar task?
Also, please explain how to modify the template for this case.
Thanks in advance
• Yes, you can install an application using the custom script extension in an ARM template on an Azure VM as follows. Please check the ARM template I deployed for this purpose. I used a SAS token to download the application package onto the Azure VM during deployment itself, and a PowerShell script to invoke the silent install of the application.
ARM template:
I am using the default quickstart template for deploying an Azure VM through an ARM template, as given in this link: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-template?toc=/azure/azure-resource-manager/templates/toc.json
In this template, I added the custom script extension content below to the 'resources' section. Check the formatting of the ARM template code carefully (commas, curly brackets, square brackets, etc.). Also ensure that inbound HTTPS port 443 is open, as below:
"securityRules": [
{
"name": "default-allow-3389",
"properties": {
"priority": 1000,
"access": "Allow",
"direction": "Inbound",
"destinationPortRange": "3389",
"protocol": "Tcp",
"sourcePortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*"
}
},
{
"name": "AllowHTTPSInBound",
"properties": {
"priority": 1010,
"access": "Allow",
"direction": "Inbound",
"destinationPortRange": "443",
"protocol": "Tcp",
"sourcePortRange": "*",
"sourceAddressPrefix": "*",
"destinationAddressPrefix": "*"
}
}
]
Custom script VM extension:
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2021-04-01",
  "name": "[concat(parameters('vmName'),'/', 'InstallWebServer')]",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/',parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.7",
    "autoUpgradeMinorVersion": true,
    "protectedSettings": {
      "storageAccountName": "techtrix",
      "storageAccountKey": "EN6iUzOfVe8Ht0xvyxnqK/iXEGTEunznASsumuz0FR4SCvc2mFFHUJfbMy1/GSK7gXk0MB38MMo7+AStoKxC/w==",
      "fileUris": [
        "https://techtrix.blob.core.windows.net/executable/Testdemo2.ps1"
      ],
      "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File Testdemo2.ps1"
    }
  }
}
Also, note that you need to provision a storage account container beforehand to hold the PowerShell script and the application package, so that you can substitute that storage account's name and key and the script's blob URI into the template above. Remember to change the script name in 'commandToExecute' to match.
Once the above has been done, confirm that the silent installation command for the application package runs successfully locally, so that it can be adapted in the PowerShell script. I have installed the '7-Zip' application here for demonstration purposes. My PowerShell script is below. Ensure that this script and the application package are uploaded beforehand, and that the container's access level is set to allow anonymous public read access:
# Download the application package from blob storage, then install it silently
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
Install-Module -Name Az.Storage -AllowClobber -Force
Import-Module -Name Az.Storage -Force
# Storage details for the blob holding the installer
$StorageAccountName = "techtrix"
$ContainerName = "executable"
$Blob1Name = "7z2107-x64.exe"
$TargetFolderPath = "C:\"
# Authenticate to the container with a read-only SAS token and download the installer
$context = New-AzStorageContext -StorageAccountName $StorageAccountName -SASToken "sp=r&st=2022-02-10T08:40:34Z&se=2022-02-10T16:40:34Z&spr=https&sv=2020-08-04&sr=b&sig=DRDulljKTJiRbVPAXAJkTHi8QlnlbjPpVR3aueEf9xU%3D"
Get-AzStorageBlobContent -Blob $Blob1Name -Container $ContainerName -Context $context -Destination $TargetFolderPath
# Silent install of 7-Zip (/S is the installer's silent switch)
$arg="/S"
Start-Process -FilePath "C:\7z2107-x64.exe" -ArgumentList $arg
Then edit the parameters file with the desired values for 'adminUsername', 'adminPassword' and 'location' and save it in the same folder as the template file. Now run the commands below from an elevated PowerShell console after changing to that folder:
az login
az deployment group create -n <name of the deployment> -g <name of the resource group> --template-file "azuredeployVM.json" --parameters "azuredeployVM.parameters.json"
After a successful deployment, you will see the application installed as part of the VM creation itself.
In this way, you can install an '.exe' or '.msi' package through an ARM template with the custom script extension.
Installing .exe files requires elevated permissions (run as administrator).
The custom script is not working when it tries to run the .exe file.
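If the custom script appears to run but the .exe never installs, one way to troubleshoot (a sketch assuming the extension's default log location) is to RDP to the VM and read the extension's log output; also note the extension runs the script as the LocalSystem account, so installers that expect an interactive elevation prompt can fail silently:
# On the target VM: tail the most recent Custom Script Extension log under C:\WindowsAzure\Logs
Get-ChildItem "C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension" -Recurse -Filter *.log |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content -Tail 50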

Dataset not getting created via PowerShell for type=AzureSqlMITable

Issue: error in ADF when trying to create ADF components such as a dataset for AzureSqlMITable via PowerShell.
Analysis:
The error is reproducible on the build server (run via DevOps) and locally via Windows PowerShell.
The error is not reproducible in Azure Cloud Shell or PowerShell Core with the same set of commands.
Error in ADF for the dataset:
Could not load resource #datasetname. Please ensure no mistakes in the JSON and that referenced resources exist. Status: UnknownError, Possible reason: Fetch failed for named: dataset$#datasetname. Adapter not found. Type: dataset.
If the JSON file is manually pasted into ADF, it works as expected without error.
Expected resolution: how to make it work with Windows PowerShell?
JSON file:
{
  "name": "#datasetname",
  "properties": {
    "linkedServiceName": {
      "referenceName": "<connection name>",
      "type": "LinkedServiceReference"
    },
    "annotations": [],
    "type": "AzureSqlMITable",
    "schema": [],
    "typeProperties": {
      "tableName": {
        "value": "<StoredProcedure_Name_Name>",
        "type": "Expression"
      }
    }
  },
  "type": "Microsoft.DataFactory/factories/datasets"
}
PowerShell commands:
Connect-AzureRmAccount
$BaseFolder = "<FilePath>"
$file = Get-ChildItem $BaseFolder -Recurse -Include *.json -Filter "<somefilter>" -ErrorAction Stop
Set-AzureRmDataFactoryV2Dataset -DataFactoryName "<datafactoryname>" -Name $file.BaseName -ResourceGroupName "<resourcegroupname>" -DefinitionFile $file.FullName -Force -ErrorAction Stop
Resolved with Az commands using PowerShell Core (via the Azure PowerShell task in CI/CD instead of a PowerShell task running Windows PowerShell).
The point here is that, going forward, people should use PowerShell Core with the Az module, as it will get new features, while Windows PowerShell with AzureRM will only get bug fixes.
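For reference, the Az equivalents of the AzureRM commands above (a sketch with the same placeholders, assuming the Az.DataFactory module is installed) are:
# Az replacements for the AzureRm cmdlets above; placeholders are unchanged from the original
Connect-AzAccount
$BaseFolder = "<FilePath>"
$file = Get-ChildItem $BaseFolder -Recurse -Include *.json -Filter "<somefilter>" -ErrorAction Stop
Set-AzDataFactoryV2Dataset -DataFactoryName "<datafactoryname>" -Name $file.BaseName -ResourceGroupName "<resourcegroupname>" -DefinitionFile $file.FullName -Force -ErrorAction Stop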

Enabling long file paths on Azure Service Fabric VMSS cluster

My Azure Service Fabric application sometimes requires paths longer than MAX_PATH, especially given the length of the work directory. As such, I'd like to enable long file paths (via the registry's LongPathsEnabled value, via group policy, or via some other mechanism, see https://superuser.com/questions/1119883/windows-10-enable-ntfs-long-paths-policy-option-missing). But I can't figure out how to do that.
The cluster runs on an Azure VMSS, so I can remote into the individual instances and set it manually, but that doesn't scale well of course.
UPDATE:
#4c74356b41's answer got me most of where I needed to be. My VMSS already had a customScript extension installed, so I actually had to modify it to include the PS command, here's my final command:
# Get the existing VMSS configuration
$vmss = Get-AzVmss -ResourceGroupName <resourceGroup> -Name <vmss>
# inspect $vmss to determine which extension is the customScript, in ours it's at index 3. Note the existing commandToExecute blob, you'll need to modify it to add the additional PS command
# modify the existing Settings.commandToExecute blob to add the reg set command
$vmss.VirtualMachineProfile.ExtensionProfile.Extensions[3].Settings.commandToExecute = 'powershell -ExecutionPolicy Unrestricted -File AzureQualysCloudAgentPowerShell_v2.ps1 && powershell -c "Set-ItemProperty -Path HKLM:\System\ControlSet001\Control\FileSystem -Name LongPathsEnabled -Value 1"'
# update the VMSS with the new config
Update-AzVmss -ResourceGroupName $vmss.ResourceGroupName -Name $vmss.Name -VirtualMachineScaleSet $vmss
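One note on the update above: if the scale set's upgrade policy is Manual, existing instances keep running the old model until they are upgraded, so the new command only reaches them after something like this sketch:
# Only needed when the VMSS upgrade policy is Manual: apply the updated model to each existing instance
$instances = Get-AzVmssVM -ResourceGroupName $vmss.ResourceGroupName -VMScaleSetName $vmss.Name
foreach ($instance in $instances) {
    Update-AzVmssInstance -ResourceGroupName $vmss.ResourceGroupName -VMScaleSetName $vmss.Name -InstanceId $instance.InstanceId
}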
I'd suggest using the custom script extension and a simple PowerShell script to set this value. This will automatically get applied to all the instances (including when you scale).
{
  "apiVersion": "2018-06-01",
  "type": "Microsoft.Compute/virtualMachineScaleSets/extensions",
  "name": "config-app",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.9",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": []
    },
    "protectedSettings": {
      "commandToExecute": "powershell -c 'Set-Item HKLM:\System\CurrentControlSet\Policies\LongPathsEnabled -Value 1'"
    }
  }
}
The command itself is probably a bit off, but you can experiment on your local machine, get it right, and then put it into the script extension (a corrected form is sketched after the link below).
https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows
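For reference, the value that enables Win32 long paths lives under HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem (value name LongPathsEnabled, DWORD set to 1), so a working form of the command for commandToExecute could be the sketch below; newly started processes pick up the change, and applications still need to be long-path aware:
# Enables Win32 long path support on the instance
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" -Name LongPathsEnabled -Value 1 -Type DWord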

How to find if a Virtual Machine is using managed/Unmanaged disks in Azure

Is there a way in Azure to find out whether a VM was created with managed or unmanaged disks?
We can use PowerShell to list the Azure VM's information.
Here is the output for a VM with unmanaged disks:
PS C:\Users> (get-azurermvm -ResourceGroupName jasonvn -Name jasonvm1).StorageProfile.OsDisk
OsType : Linux
EncryptionSettings :
Name : jasonvm1
Vhd : Microsoft.Azure.Management.Compute.Models.VirtualHardDisk
Image :
Caching : ReadWrite
CreateOption : FromImage
DiskSizeGB :
ManagedDisk :
Here is the output for a VM with managed disks:
PS C:\Users> (get-azurermvm -ResourceGroupName jasonvn -Name jasonvm).StorageProfile.OsDisk
OsType : Linux
EncryptionSettings :
Name : jasonvm
Vhd :
Image :
Caching : ReadWrite
CreateOption : FromImage
DiskSizeGB : 30
ManagedDisk : Microsoft.Azure.Management.Compute.Models.ManagedDiskParameters
Another way: in the Azure portal, open the VM's automation script (the exported template) and check it there.
This information is available in another area of the Azure portal as well. Go to the 'Virtual machines' list in the portal, click the 'Columns' button, and add a column called "Uses Managed Disks".
To add to Jason Ye's answer, you can also run a similar command in Azure CLI 2.0. The command is:
az vm show -g rg_name -n vm_name
And the output for non-managed disk is:
...
"osDisk": {
"caching": "ReadWrite",
"createOption": "fromImage",
"diskSizeGb": 32,
"encryptionSettings": null,
"image": null,
"managedDisk": null,
"name": "rhel-un",
"osType": "Linux",
"vhd": {
"uri": "https://storageaccountname.blob.core.windows.net/vhds/....vhd"
}
And for managed disk:
...
"osDisk": {
"caching": "ReadWrite",
"createOption": "fromImage",
"diskSizeGb": 32,
"encryptionSettings": null,
"image": null,
"managedDisk": {
"id": "/subscriptions/sub_id/resourceGroups/rg_name/providers/Microsoft.Compute/disks/rhel_OsDisk_1...",
"resourceGroup": "rg_name",
"storageAccountType": "Standard_LRS"
},
"name": "rhel_OsDisk_1...",
"osType": "Linux",
"vhd": null
}
If you're looking at the OS disk, this will work; it can be modified for data disks (a sketch follows after the snippet).
$VmName="vmNameHere" #vmNameHere
$RGName="rgnameHere" #resourceGroupname
if((Get-AzureRmVM -Name $VmName -ResourceGroupName $RGName).StorageProfile.OsDisk.ManagedDisk -like ''){"$vmName,OS Disk,Unmanaged"}else{"$Vmname,OS Disk,Managed"}
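A sketch of the data-disk variant mentioned above, using the same AzureRM cmdlets and placeholders:
# Reports each data disk on the VM as Managed or Unmanaged
foreach ($disk in (Get-AzureRmVM -Name $VmName -ResourceGroupName $RGName).StorageProfile.DataDisks) {
    if ($disk.ManagedDisk) { "$VmName,$($disk.Name),Managed" } else { "$VmName,$($disk.Name),Unmanaged" }
}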
Similar to Scottge's answer, but if you just go to the VM > Disks > select the disk, it opens a blade showing the disk information. At the top of this blade, "(unmanaged)" is displayed after the disk name if it is unmanaged. Nothing is displayed if it is managed.
Here is one way I found; follow the steps below to determine the disk type:
Log in to the Azure portal.
Select the VM in question, then select the disk to check, and look at the disk's URL.
An unmanaged disk's URL will look like:
/storage_account_name.blob.core.windows.net/VM_name/VM_name.vhd
A managed disk's URL will look like:
/subscriptions/0cbded86-6088-430c-a320-xxxxxxxxxxxx/resourceGroups/Resource_Group_name/providers/Microsoft.Compute/disks/Disk_name
This thread is a few years old, but I've found a better way to check them all at once using PowerShell. Microsoft just emailed out that they're retiring unmanaged disks in two years, so hopefully this helps others before then!
This will try to connect to Azure, install the module if needed, get all your VMs, and list each disk's name and ManagedDisk field. If you have a lot of VMs, I'd suggest adding a filter to show only disks where the ManagedDisk field is empty (see the sketch after the script).
Try{ Get-AzSubscription > Out-Null }
Catch{
    Try{ Get-Module -ListAvailable -Name Az.Compute }
    Catch{
        Write-Host "Azure module does not exist, installing, importing, and connecting" -ForegroundColor Cyan
        [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
        Install-Module Az -Repository PSGallery -Force
    }
    Finally{
        Import-Module Az
        Write-Host "Enter your Azure Admin Account creds" -ForegroundColor Cyan
        Connect-AzAccount
        Set-AzContext -Subscription "fe95db19-b91b-4492-990e-8c7c9ac355a4"
    }
}
Finally{
    Write-Host "You are connected to Azure" -ForegroundColor Cyan
}
#Get All VMs
$VMs = Get-AzVM
#Go through each VM one at a time
ForEach($VM in $VMs){
    $VM.StorageProfile.OsDisk | Select Name,ManagedDisk
    $VM.StorageProfile.DataDisks | Select Name,ManagedDisk
}
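A sketch of the filter suggested above, listing only disks whose ManagedDisk field is still empty:
# Show only the disks that are still unmanaged (ManagedDisk is null)
foreach ($VM in $VMs) {
    $allDisks = @($VM.StorageProfile.OsDisk) + @($VM.StorageProfile.DataDisks)
    $allDisks | Where-Object { $_ -and -not $_.ManagedDisk } |
        Select-Object @{ n = 'VM'; e = { $VM.Name } }, Name
}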
It is shown in the virtual machine's JSON view as well, under osDisk.
