Get-AzVirtualNetworkGatewayConnection status for multiple VNet gateways - Azure

I need to gather the connection status of multiple virtual network gateways across multiple resource groups.
Let's say I have one VNet gateway in rg1 and another VNet gateway in rg2, and I have established connections between them (vnet1tovnet2 and vice versa). The connections themselves are already in place.
I know the command Get-AzVirtualNetworkGatewayConnection -Name <connection name> -ResourceGroupName $rg, but it requires the connection name.
My question is: how can I fetch those connection names themselves and pass them to Get-AzVirtualNetworkGatewayConnection -Name as a parameter or variable?
I can't find a command for that. To learn the name of a connection I currently have to go to the portal, look up the details, and then embed them in the PowerShell code. That is fine for one or two connections, but I have multiple VNet gateways and multiple connections.
Being able to list those connection names and then check each connection's status (connected or disconnected) is what I need.
Does anyone have an idea on this? Thanks.

Well, this is solved.
I just had to use Get-AzVirtualNetworkGatewayConnection -ResourceGroupName 'myresourcegroup' and store its output in a variable, say $vnet.
Then I used a foreach loop:
foreach ($v in $vnet.Name) {
    Get-AzVirtualNetworkGatewayConnection -Name $v -ResourceGroupName 'myresourcegroup'
}
That's it. This pulls out the connection details including their status, and I then appended Select-Object Name, ResourceGroupName, ConnectionStatus at the end to keep only the properties I need in the output. That did the job easily.
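For multiple resource groups, a minimal sketch of the same idea (assuming the Az.Network module is installed and you are signed in with Connect-AzAccount; the resource group names are placeholders):

$resourceGroups = @('rg1', 'rg2')   # placeholder resource group names

foreach ($rg in $resourceGroups) {
    # List every connection in the resource group, then query each one by
    # name so that ConnectionStatus is populated.
    $connections = Get-AzVirtualNetworkGatewayConnection -ResourceGroupName $rg
    foreach ($name in $connections.Name) {
        Get-AzVirtualNetworkGatewayConnection -Name $name -ResourceGroupName $rg |
            Select-Object Name, ResourceGroupName, ConnectionStatus
    }
}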
Thank you.
Posting this as it might be helpful for someone.
If someone is looking for a whole script for this, please leave a comment and I can update it here. Thanks.

Related

Session anti-affinity in Kubernetes

I am running an EKS cluster, and I have a use case which I will explain below.
I am trying to create a scalable CTF (Capture the Flag). The problem is that in a few challenges the participants have to write files within the pod. Obviously, I don't want another participant to get a remote session on that pod while the first user is writing those files; if that happens, the second user automatically gets the solution.
To avoid this, we thought of implementing something like "session anti-affinity": if a pod already has a session with one user, the ingress should send new requests to another pod. However, we cannot figure out how to implement this.
Please help us out.
If you are just looking for a session-affinity solution using ingress, you need to enable the proxy protocol first, which carries the source IP; the ingress can then use that information to achieve affinity.
But the problem you describe is really a form of locking: at any given point only one user should be serviced. I'm not sure session affinity will solve that.

VM: Programatically keep a VM "awake"?

I have an Azure VM that I use as a workhorse to connect to a database. If I am remotely connected, e.g. through Bastion (or SSH/RDP), my service connects and runs with no problem. If I close my connection, my service keeps connecting and running for some limited time, but after several hours the connection to the database fails.
If I remote back into the VM, I notice it's exactly where I left it, all windows still open, etc. If I open a connection to my database, it succeeds. Again, if I leave the VM alone for a while, the connection to my database fails.
I have tried running PowerShell commands akin to the one below to keep the machine "alive". What can I do to keep it talking to the internet? The problem seems to be that it stops connecting to the internet after a lengthy period of nobody being logged in.
# try to keep awake
$wsh = New-Object -ComObject WScript.Shell
while (1) {
    $wsh.SendKeys('+{F15}')
    Start-Sleep -Seconds 59
}
To keep the VM awake, you can change the Windows setting under Power & sleep to Never.
For additional settings, you can adjust the advanced power options so the VM never falls asleep.
For more information, please refer to this GitHub link.
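If you would rather script this than click through the Settings UI, the built-in powercfg tool can apply the same "never sleep" behaviour; a minimal sketch, run from an elevated prompt inside the VM:

# Never sleep, hibernate or turn off the display while on AC power (0 = never)
powercfg /change standby-timeout-ac 0
powercfg /change hibernate-timeout-ac 0
powercfg /change monitor-timeout-ac 0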

IIS ApplicationPool fails to start properly without error

We are facing a strange issue with our IIS Deployments.
ApplicationPools sometimes fail to start properly but do not throw errors when doing so.
The only site contained in the Application Pool is not responsive (it does not even return a 500 or the like; it just times out after some time).
The ApplicationPool and Sites are up and running (not stopped) as far as IIS is concerned.
Restarting the Site or the ApplicationPool does not fix the issue.
However, removing the site and ApplicationPool and recreating them with identical properties does fix it.
Once any ApplicationPool has reached this state, the only way to solve this (as far as we know) is recreating the entire ApplicationPool.
We would gladly do so in an automated way, but there is no error to catch and react to.
Some background data:
We are using IIS Version 10
The ApplicationPool appears to start correctly. EventLog states that Application '<OUR_APP>' started successfully.
We suspect that the problem might be multiple ApplicationPool starts happening simultaneously (as they are automatically triggered by our CI/CD Pipeline).
Now, I am by no means an IIS Expert, so my questions are:
Would it be possible that many app pool starts (circa 20-60) happening at roughly the same time cause such behaviour?
What could I do to investigate this further?
Would it be possible that many app pool starts (circa 20-60) happening at roughly the same time cause such behaviour?
Difficult to say. An app pool is just an empty container; what mostly takes the time and places limits on this number is what your application code and dependencies are doing at startup and runtime, plus a little .NET precompilation overhead.
What could I do to investigate this further?
Check the HTTPERR logs in the Windows folder - they might provide a clue if you're not seeing the request logged elsewhere.
Monitor the w3wp.exe processes themselves - those are your app pools (AKA "app domains"). It's possible for them to get stuck and not "properly" crash, which sounds like your case.
Assuming all your apps normally work and you just want a way to recover random failures, try this...
When you have a broken app pool, run the following on your server from PowerShell or ISE (as an Administrator) to view the running IIS worker processes:
Get-WmiObject Win32_Process -Filter "name = 'w3wp.exe'" | Select-Object ProcessId,CommandLine
The above outputs the worker process IDs and the arguments used to start them. Among the arguments you can see the site's name - use the corresponding ProcessId with the command Stop-Process -Force -Id X (replacing X with the ProcessId number) to forcibly kill the process. Does the app start successfully once you access it after killing the process?
If you know the name of the app pool to kill you can use this code to terminate the process:
$AppPoolName = 'NAMEOFMYAPPPOOL';
Stop-Process -Force -id (Get-WmiObject Win32_Process -Filter "name = 'w3wp.exe' AND CommandLine like '%-in%$($AppPoolName)%'").ProcessId
(substitute NAMEOFMYAPPPOOL for the name of the app pool, and run as Administrator)
If killing the stalled process is sufficient to let it restart successfully, it would be fairly easy to script a simple health check. I would read the bindings of each site, make an HTTP request to each binding and confirm the app pool really is running/responsive and returns a 200 OK response. If the request fails after some reasonable timeout, try terminating the process and re-issuing the HTTP request to restart the app pool. Add some retry logic and maybe a delay between attempts so it doesn't get stuck in a loop.
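A rough sketch of such a health check, assuming the WebAdministration module is available and that killing the stuck w3wp.exe is enough to recover (the localhost URL derivation and the single-binding assumption are simplifications):

Import-Module WebAdministration

foreach ($site in Get-Website) {
    # Derive a local URL from the site's first binding ("IP:port:hostname").
    $port = $site.Bindings.Collection[0].bindingInformation.Split(':')[1]
    $url  = "http://localhost:$port/"
    try {
        # Healthy if the site answers with a success status within 30 seconds.
        Invoke-WebRequest -Uri $url -UseBasicParsing -TimeoutSec 30 | Out-Null
    }
    catch {
        # Kill any stuck worker process for this site's app pool, then re-request.
        $pool = $site.applicationPool
        $proc = Get-WmiObject Win32_Process -Filter "name = 'w3wp.exe' AND CommandLine like '%$pool%'"
        if ($proc) { Stop-Process -Force -Id $proc.ProcessId }
        Start-Sleep -Seconds 5
        Invoke-WebRequest -Uri $url -UseBasicParsing -TimeoutSec 30 | Out-Null
    }
}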
Just a thought - try giving each app pool its own temp folder - configured in web.config per site:
<system.web>
  <compilation tempDirectory="D:\tempfiles\apppoolname" />
</system.web>
Cross talk in here during startup is a possible source of weirdness.
The problem seemed to be caused by our deployment scripts not waiting for Application Pools to actually be in the Stopped state before removing the old application files, replacing them with the new ones and immediately starting the ApplicationPools again.
We had already noticed related issues earlier this year, when files could not be deleted because they were still in use even after stopping the ApplicationPool (which we "solved" by implementing a retry mechanism)...
Solution
Calling the following code after stopping the ApplicationPool seems to solve the issue:
$stopWaitCount = 0
while ((Get-WebAppPoolState -Name $appPool).Value -ne "Stopped" -and $stopWaitCount -lt 12)
{
    $stopWaitCount++
    Write-Log "Waiting for Application-Pool '$appPool' to stop..."
    Start-Sleep -Seconds $stopWaitCount
}
We implemented this 2 days ago, and the problem has not occurred in 100+ deployments since.
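For context, a rough outline of where this wait loop fits in such a deployment (assuming the WebAdministration module; $appPool, $stagingPath and $sitePath are placeholders):

Import-Module WebAdministration

Stop-WebAppPool -Name $appPool

# ... the wait loop above runs here, until the pool actually reports "Stopped" ...

# Only now replace the application files and start the pool again.
Copy-Item -Path "$stagingPath\*" -Destination $sitePath -Recurse -Force
Start-WebAppPool -Name $appPool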

Azure: how can I make sure a Runbook is invoked only one time per server

I have virtual machines on Azure. Each master VM has a few compute nodes running Slurm.
When a job finishes, the compute nodes are removed by the Slurm shutdown script and the master stays running. I want to shut down the master too.
I created an Azure Runbook that shuts down the master server.
I can add a line to the script that shuts down the compute nodes to invoke that Runbook.
The problem:
Each compute node will send a request to shut down its master, and several compute nodes share the same master. This causes the same request to be sent many times.
Is there any way for the Runbook to know that a request to shut down a specific master was already received, and to skip all the other requests to shut down the same master?
Can I lock the Runbook or turn on a flag that the request was submitted?
Thanks.
Perhaps the easiest thing to do is to use Automation variables, like this:
$alreadySent = Get-AutomationVariable -Name ShutdownRequestSent
if (-not $alreadySent) {
    # Send shutdown request
    ...
    Set-AutomationVariable -Name ShutdownRequestSent -Value $true
}
You will have to figure out when and how to reset the variable to $false again.
This solution is simple but not entirely bullet-proof, and it may not be enough in your case. For example, if two jobs for this runbook reach the Get-AutomationVariable command at around the same time, they will both send a shutdown request. Or, if the job sending a shutdown request crashes for any reason before executing Set-AutomationVariable, the next job will send a second shutdown request. I don't know if this is a problem for you. If you need a stronger run-once guarantee, consider using a blob lease.
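A rough sketch of the blob-lease idea using the Az.Storage module and its older ICloudBlob-style API (the storage account, container and blob names are placeholders, and the blob must already exist):

$ctx  = New-AzStorageContext -StorageAccountName 'mystorageacct' -UseConnectedAccount
$blob = Get-AzStorageBlob -Container 'locks' -Blob 'shutdown-lock' -Context $ctx
try {
    # Only one caller can hold the lease at a time; the others get an error here.
    $leaseId = $blob.ICloudBlob.AcquireLease([TimeSpan]::FromSeconds(60), $null)
    # ... send the shutdown request here (for example with Stop-AzVM) ...
}
catch {
    Write-Output "Another job already holds the lease; skipping the shutdown request."
}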

Azure Web Roles and Instances

I have a web role with multiple instances. I'd like to send an email to my customers with their statistics every morning at 7AM.
My problem is the following: if I use a cron job to do the work, the mails will be sent multiple times.
How can I configure my instances in order to send only one email to each of my customers?
Per my experience, I think you can use a unique instance ID to ensure that a single instance does the emailing as a cron job.
Here is some simple code.
import com.microsoft.windowsazure.serviceruntime.RoleEnvironment;

String instanceId = RoleEnvironment.getCurrentRoleInstance().getId();
if ("<instance-id-for-running-cronjob>".equals(instanceId)) {
    // Code for the cron job
    .....
}
For reference, see the description of the method RoleInstance.getId below.
Returns the ID of this instance.
The returned ID is unique to the application domain of the role's instance. If an instance is terminated and has been configured to restart automatically, the restarted instance will have the same ID as the terminated instance.
For more details on using the classes above from the Azure SDK for Java, please see the class references below.
RoleEnvironment
RoleInstance
Hope it helps.
Typically the way to solve this problem with multi-instance Cloud Services is to use the Master Election pattern. At 7:00 AM (or whenever your cron job fires), all your instances try to acquire a lease on a blob. Only one instance will succeed in acquiring the lease, while the other instances will fail with a PreconditionFailed (412) error (make sure you catch this exception!). The instance that acquires the lease is essentially the master, and that instance sends out the email.
Obviously, the problem with this approach is: what if this master instance fails to deliver the messages? But I guess that's another question :).
