I have a script that needs to call child scripts in parallel. The child scripts use az cli to create/modify Azure PaaS objects in different Azure subscriptions. The problem is that because each script calls az account set --subscription <subscription-for-script>, they overlap: something that should be created in subscription A by script A gets created in subscription B instead, because a moment earlier script B set the context to subscription B.
As az cli stores its context in AzureProfile.json, I tried to create a new folder per script and point $Env:AZURE_CONFIG_DIR at a different value for each script. But I cannot find a way to isolate environment variables in child scripts, or to specify the AzureProfile context without using environment variables.
In parent script:
$listOfScripts | Foreach-Object -Parallel {
    <block to run script with arguments>
} -AsJob -ThrottleLimit 50
and in each child script:
$Env:AZURE_CONFIG_DIR = "$RootPath\..\AzureProfiles\folderForScript"
az login --service-principal -u ${env:ARM_CLIENT_ID} -p ${env:ARM_CLIENT_SECRET} --tenant ${env:ARM_TENANT_ID}
az account set --subscription $subscription_id
I would appreciate advice on how to run parallel, independent scripts that use different subscriptions to modify Azure PaaS objects.
Update: The only solution I found is not to use az login and az account set inside the scripts that run in parallel. Instead, connect via the SPN once in the parent script and use the --subscription parameter in each az command.
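For example (a rough sketch; az group create stands in for whatever command a child script actually runs, and $resourceGroup and $location are illustrative):
# In the parent script: log in once with the service principal
az login --service-principal -u ${env:ARM_CLIENT_ID} -p ${env:ARM_CLIENT_SECRET} --tenant ${env:ARM_TENANT_ID}
# In each child script: no az account set; target the subscription per command instead
az group create --name $resourceGroup --location $location --subscription $subscription_id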
Run script block in parallel
Script blocks run in a runspace context in PowerShell. A runspace contains all the defined variables, functions, and loaded modules. Initializing a runspace for a script to run in takes time and resources. When scripts run in parallel, each must run in its own runspace. Each runspace must load whatever modules are needed and have any variables explicitly passed in from the calling script.
The only variable that automatically appears in the parallel script block is the piped-in object. Other variables are passed in using the $using: keyword.
Example
$computers = 'computerA','computerB','computerC','computerD'
$logsToGet = 'LogA','LogB','LogC'
# Read specified logs on each machine, using custom module
$logs = $computers | ForEach-Object -ThrottleLimit 10 -Parallel {
    Import-Module MyLogsModule
    Get-Logs -ComputerName $_ -LogName $using:logsToGet
}
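Because the parent script in the question uses -AsJob, note that ForEach-Object -Parallel then returns a single job object whose output has to be collected explicitly. A minimal sketch (invoking each piped-in item with the call operator assumes $listOfScripts holds runnable script paths):
$job = $listOfScripts | ForEach-Object -AsJob -ThrottleLimit 50 -Parallel {
    & $_   # assumption: each piped-in item is a script path taking no arguments
}
$job | Wait-Job | Receive-Job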
References
PowerShell ForEach-Object -Parallel Feature
SO Thread for implementation
Related
I am running a Node.js app that executes some PowerShell commands to set the DNS servers for my WiFi interface.
Unfortunately, the script fails without admin permissions, so I am trying to elevate the script within Node.js:
Start-Process -FilePath powershell.exe -ArgumentList {
    Write-Host "Hello"
    Set-DnsClientServerAddress -InterfaceIndex 10 -ServerAddresses ("127.0.0.1", "8.8.8.8")
    sleep 5
} -Verb RunAs
For some reason, this isn't applying the changes, despite the script outputting "hello" and waiting for 5 seconds. Does anyone know why that might be?
I am trying to run a shell script on a Unix server using Azure Logic Apps.
I tried several approaches to execute shell script 1 (in the diagram). Can anyone suggest a new approach or any idea to execute shell 2 from shell 1?
#!/bin/sh
touch testing.txt
HOST='10.2.166.122'
USER='johndoe'
PASSWD='abc#123'
FILE='shell2.sh'
# renamed from PATH: assigning PATH would shadow the shell's command search path and break the ftp call below
DIR='/appdata/files/samplefile/bin'
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
cd $DIR
execute $FILE
quit
END_SCRIPT
exit 0
Basically I need to pass the server credentials as well as the server shell script location path as parameters.
You could run your shell scripts remotely with Run Command, which uses the VM agent. Run Command can be used through the Azure portal, REST API, or Azure CLI for Linux VMs.
For more details you could refer to this doc: Run shell scripts in your Linux VM with Run Command.
And in your situation, I think what you want is the REST API: post the request, and in the Logic App you can likewise use the REST API to send it.
And this is the REST API: Virtual Machines Run Commands - Run Command.
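As a rough sketch with the Azure CLI instead (the resource group and VM name are placeholders; the script path comes from the question):
az vm run-command invoke \
  --resource-group MyResourceGroup \
  --name MyLinuxVM \
  --command-id RunShellScript \
  --scripts "sh /appdata/files/samplefile/bin/shell2.sh"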
I have a PowerShell Azure Runbook in a Running state.
Pressing Stop via the Azure Portal results in an error message:
"Job could not be stopped".
Using the PowerShell cmdlet Stop-AzureRMAutomationJob results in the error message:
"InternalServerError: {"Message":"An error has occurred."}
From the documentation it looks like the job will be stopped after 3hrs, but is there any other way to stop a Runbook job or deal with a situation like this?
This issue went to the Microsoft product group for a fix and should be fixed now.
First, find the job ID of the job with Failed status:
Get-AzureRmAutomationJob -ResourceGroupName $RG -AutomationAccountName $AA | where {$_.RunbookName -eq "runbook name" -and $_.Status -eq "Failed"}
Then stop the job:
Stop-AzureRmAutomationJob -ResourceGroupName $RG -AutomationAccountName $AA -id JOBID
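Or as a single pipeline (a sketch; JobId is the job's ID property in the AzureRM Automation module):
Get-AzureRmAutomationJob -ResourceGroupName $RG -AutomationAccountName $AA |
    where {$_.RunbookName -eq "runbook name" -and $_.Status -eq "Failed"} |
    foreach {Stop-AzureRmAutomationJob -ResourceGroupName $RG -AutomationAccountName $AA -Id $_.JobId}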
Hi, after my VM gets created I run the Azure Custom Script Extension, which runs a PowerShell script. The script does some basic tasks: it creates a share, creates some files, and creates a PowerShell script that it stores on the VM. As the last step, the custom script extension attempts to set up a Task Scheduler job to call the PowerShell script created above, after which the extension script should end.
The problem is the extension gets stuck in a running state and never stops. I wrote a log to track the progress of the script, and it shows the script went through all its steps fine, but the extension stays stuck in running and the Task Scheduler job does not run.
When I log in to the VM and run the script manually, it works fine. So I am not sure what is causing it to hang, or what user the VM extension runs as. I also tried a PowerShell background job instead of the scheduler and got the same symptoms.
The code below is the Task Scheduler setup that does not work when run through the custom extension but runs fine when set up manually on the VM.
$jobname = "MasterFileWatcher"
$script = "c:\test\MasterFileWatcher.ps1"
$repeat = (New-TimeSpan -Minutes 1)
$action = New-ScheduledTaskAction -Execute "$pshome\powershell.exe" -Argument "$script; quit"
$duration = ([timeSpan]::maxvalue)
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date).Date -RepetitionInterval $repeat -RepetitionDuration $duration
$msg = "Enter the username and password that will run the task";
$credential = $Host.UI.PromptForCredential("Task username and password",$msg,"$env:userdomain\$env:username",$env:userdomain)
$username = $credential.UserName
$password = $credential.GetNetworkCredential().Password
$username = "$env:userdomain\testuser"
$password = "testpass"
$settings = New-ScheduledTaskSettingsSet -AllowStartIfOnBatteries -DontStopIfGoingOnBatteries -StartWhenAvailable -RunOnlyIfNetworkAvailable -DontStopOnIdleEnd
Register-ScheduledTask -TaskName $jobname -Action $action -Trigger $trigger -RunLevel Highest -User $username -Password $password -Settings $settings
I found out the issue.
I had to set the user to "System" so the scheduled task runs under the LocalSystem account:
Register-ScheduledTask -TaskName $jobname -Action $action -Trigger $trigger -RunLevel Highest -User "System" -Settings $settings
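To verify the task was registered under LocalSystem (a quick sketch using the task name from the snippet above):
Get-ScheduledTask -TaskName $jobname |
    Select-Object TaskName, State, @{Name='RunAsUser'; Expression={$_.Principal.UserId}}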
I have a PowerShell script that is run automatically when our monitoring service detects that a website is down.
It is supposed to stop the AppPool (using Stop-WebAppPool -name $AppPool;), wait until it is really stopped and then restart it.
Sometimes the process does not actually stop, which manifests as the error
Cannot Start Application Pool:
The service cannot accept control messages at this time.
(Exception from HRESULT: 0x80070425)
when you try to start it again.
If it takes longer than a certain number of seconds to stop (I will choose that amount of time after I have timed several stops to see how long it usually takes), I want to just kill the process.
I know that I can get the list of processes used by workers in the AppPool by doing dir IIS:\AppPools\MyAppPool\WorkerProcesses\,
Process ID State Handles Start Time
---------- ----- ------- ----------
7124 Running
but I can't figure out how to actually capture the process id so I can kill it.
In case that Process ID really is the ID of the process to kill, you can:
$id = dir IIS:\AppPools\MyAppPool\WorkerProcesses\ | Select-Object -expand processId
Stop-Process -id $id
or
dir IIS:\AppPools\MyAppPool\WorkerProcesses\ | % { Stop-Process -id $_.processId }
In Command Prompt on the server, I just do the following for a list of running AppPool PIDs so I can kill them with taskkill or Task Mgr:
cd c:\windows\system32\inetsrv
appcmd list wp
taskkill /f /pid *PIDhere*
(Adding an answer from Roman's comment, since there may be cache issues with stej's solution.)
Open Powershell as an Administrator on the web server, then run:
gwmi -NS 'root\WebAdministration' -class 'WorkerProcess' | select AppPoolName,ProcessId
You should see something like:
AppPoolName ProcessId
----------- ---------
AppPool_1 8020
AppPool_2 8568
You can then use Task Manager to kill it or in Powershell use:
Stop-Process -Id xxxx
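Or do both steps in one pipeline (a sketch assuming AppPool_1 from the sample output above):
gwmi -NS 'root\WebAdministration' -class 'WorkerProcess' |
    where {$_.AppPoolName -eq 'AppPool_1'} |
    foreach {Stop-Process -Id $_.ProcessId -Force}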
If you get "Get-WmiObject : Could not get objects from namespace root/WebAdministration. Invalid namespace", then you need to enable the IIS Management Scripts and Tools feature using:
ipmo ServerManager
Add-WindowsFeature Web-Scripting-Tools