We have a requirement to call a 3rd-party API from a source with a fixed, static set of IPs.
Since we are already using Logic Apps (Consumption) for other processing, my initial thought was to continue using Logic Apps. I have already checked the concurrency and message size and they are all within an acceptable range for Logic Apps.
My only question is about the IP the Logic App uses when it makes an API call. We had explored using Logic Apps Standard with VNet integration and a NAT gateway, but this adds additional cost. The other option was to use APIM to provide a static IP, but the Logic App would need to integrate with AAD for OAuth2 authentication and we would face challenges in managing the AAD tokens. We don't want to call AAD for a new token on each and every execution.
From the portal I can see a set of outbound IPs for my Logic App.
I have already tested that the IP exposed is within this list, but I wanted to know whether there is a chance this can change in the future. I was not able to find any concrete Microsoft documentation on this question.
You should always take into account that those IPs can change.
Microsoft exposes a downloadable list of Azure IP ranges here.
This file contains the IP address ranges for Public Azure as a whole, each Azure region within Public, and ranges for several Azure Services (Service Tags) such as Storage, SQL and AzureTrafficManager in Public. Service Tags are each expressed as one set of cloud-wide ranges and broken out by region within that cloud. This file is updated weekly. New ranges appearing in the file will not be used in Azure for at least one week. Please download the new json file every week and perform the necessary changes at your site to correctly identify services running in Azure.
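If you go the weekly-download route, a minimal PowerShell sketch for pulling the Logic Apps ranges out of that JSON could look like the following (the local file name is an assumption; the file is the "Azure IP Ranges and Service Tags – Public Cloud" download):
$tags = Get-Content -Path 'ServiceTags_Public.json' -Raw | ConvertFrom-Json   # file name assumed
$logicApps = $tags.values | Where-Object { $_.name -eq 'LogicApps' }
$logicApps.properties.addressPrefixes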
You can also get this data using the PowerShell cmdlet Get-AzNetworkServiceTag in case you want to automate things. Example:
$serviceTags = Get-AzNetworkServiceTag -Location westeurope
$logicApps = $serviceTags.Values | Where-Object { $_.Name -eq "LogicApps" }
$logicApps.Properties
Region :
SystemService : LogicApps
ChangeNumber : 11
AddressPrefixes : {4.232.111.16/28, 4.232.111.32/27, 13.64.231.196/32, 13.65.39.247/32...}
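If you only need the prefixes for a single region, the same Values collection also contains region-scoped tags; listing the names first avoids guessing the exact one (the "LogicApps.WestEurope" name below is an assumption):
$serviceTags.Values | Where-Object { $_.Name -like 'LogicApps*' } | Select-Object Name
$regional = $serviceTags.Values | Where-Object { $_.Name -eq 'LogicApps.WestEurope' }   # regional tag name assumed
$regional.Properties.AddressPrefixes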
Related
I have an application hosted in Azure PAAS. The connection string for the application is stored under 'Configuration' -> 'Connection strings'
My application has a PowerShell instance. I want to iterate through all the Connection strings present under 'Configuration' -> 'Connection strings'
I have seen the Azure document. As my application itself is the app, can there be a way to skip the details like 'subscriptionId', 'resourceGroupName' and 'name'?
This will help to make the code more generic.
As my application itself is the app, can there be a way to skip the
details like 'subscriptionId', 'resourceGroupName' and 'name'?
AFAIK, it's not possible to acquire the connection strings of an Azure web application using the REST API or PowerShell without providing the resource group name or subscription.
The Microsoft document you followed is the correct way to list the connection strings, but we do need to pass those details to achieve it.
If my understanding is correct, as it is your own application, even if it is publicly hosted no one else will be able to get the resource group name, application name (if you are using a custom domain) or subscription details.
Alternatively, we can use the Az CLI by providing just the resource group and app name :-
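A minimal example, assuming placeholder names for the app and resource group:
az webapp config connection-string list --resource-group MyResourceGroup --name MyWebApp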
For more information, please refer to the link below :-
SO thread: Get the list of azure web app settings to be swapped using PowerShell
If you are going to use the REST API calls for your code, then the simple answer is just: No.
I think in all cases the answer is going to be no, honestly.
You can't drop those unique IDs, because those are required parameters to retrieve the correct data.
If you want to make the code more generic, then you should write code to retrieve the values for those parameters instead of hardcoding them.
Your PowerShell code will always need to authenticate (or use a Managed Identity), and the identity used to authenticate will always carry the subscription ID in its context. As for the rest, I think you get the gist of what I'm suggesting.
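As a rough sketch of that idea, assuming the code runs inside the App Service with a managed identity that can read the app, and using the WEBSITE_SITE_NAME environment variable to identify the site:
Connect-AzAccount -Identity | Out-Null                      # managed identity; the subscription comes from the context
$siteName = $env:WEBSITE_SITE_NAME                          # set automatically inside App Service
$app = Get-AzWebApp | Where-Object { $_.Name -eq $siteName }                 # resolves the resource group for us
$app = Get-AzWebApp -ResourceGroupName $app.ResourceGroup -Name $app.Name    # re-read to populate SiteConfig
$app.SiteConfig.ConnectionStrings | ForEach-Object { "$($_.Name) = $($_.ConnectionString)" }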
We manage an Azure subscription used by several countries. Each of them is quite independent in what they can do (create/edit/remove resources). A guide of good practices has been sent to them, but we (the security team) would like to ensure that a set of NSGs is systematically applied to every new subnet/VNet created.
Looking at Azure triggers, I am not sure that subnet creation belongs to the auditable events. I was also told to look at Azure Policy, but once again I am not sure this will match our expectation, which is: for every new VNet/subnet, automatically apply a set of predefined NSGs.
Do you have any idea about a solution for our need?
I have done work like this in the past (not this exact issue) and the way I solved it was with an Azure Function that walked the subscription and looked for these kinds of issues. You could have the code run as a Managed Identity with Reader rights on the subscription to report issues, or as a Contributor to update the setting. Here's some code that shows how you could do this with PowerShell https://github.com/Azure/azure-policy/tree/master/samples/Network/enforce-nsg-on-subnet
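As a rough, hedged sketch of that "walk the subscription" idea (not the code from the linked sample), reporting subnets that have no NSG attached:
Connect-AzAccount -Identity | Out-Null               # run as the function's managed identity (Reader is enough to report)
foreach ($vnet in Get-AzVirtualNetwork) {            # all VNets in the current subscription
    foreach ($subnet in $vnet.Subnets) {
        if (-not $subnet.NetworkSecurityGroup) {
            "Subnet '$($subnet.Name)' in VNet '$($vnet.Name)' has no NSG attached"
        }
    }
}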
You could consider using a Policy with a DeployIfNotExists effect to deploy an ARM template that contains all the data for the NSG. https://learn.microsoft.com/en-us/azure/governance/policy/samples/pattern-deploy-resources
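A hedged sketch of creating and assigning such a policy with PowerShell, assuming the DeployIfNotExists definition has been saved locally as nsg-on-subnet.json (file name, policy names and scope are placeholders; older Az.Resources versions use -AssignIdentity instead of -IdentityType):
$definition = New-AzPolicyDefinition -Name 'deploy-nsg-on-subnet' -DisplayName 'Deploy NSG on new subnets' -Policy 'nsg-on-subnet.json' -Mode All
New-AzPolicyAssignment -Name 'deploy-nsg-on-subnet' -PolicyDefinition $definition -Scope '/subscriptions/<subscription-id>' -Location '<region>' -IdentityType 'SystemAssigned'   # DeployIfNotExists needs a managed identity with rights to attach the NSG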
You can get the ARM template by creating the NSG and getting the template:
[screenshot: GettingNSGTemplate]
Note also that creating a subnet is audited; you can see it in the Activity Log for the VNet. See the screenshot.
[screenshot: AddingASubnet]
I tried listing the VMs by resource group, but I want to list the VMs based on network.
Can someone help me with this?
PagedList<VirtualMachine> resourceGroupVMs =
azure.virtualMachines()
.listByResourceGroup(resourceGroupName);
As far as I know, all of the Azure SDK APIs just call the related REST APIs. So according to the REST API reference for Virtual Machines, there is no API to list VMs by network.
Note: the List API is for listing VMs by resource group, as the description says,
Lists all of the virtual machines in the specified resource group. Use the nextLink property in the response to get the next page of virtual machines.
So the workaround in Java for listing VMs by network is to use azure.virtualMachines().listAll() to list all VMs and then filter the results on the network profile of each VM to get the ones you want.
I'm trying to build a small program to change the autoscale settings for our Azure WebApps, using the Microsoft.WindowsAzure.Management.Monitoring and Microsoft.WindowsAzure.Management.WebSites NuGet packages.
I have been roughly following the guide here.
However, we are interested in scaling WebApps / App Services rather than Cloud Services, so I am trying to use the same code to read the autoscale settings but providing a resource ID for our WebApp. I have already got the credentials required for making a connection (using a browser window popup for Active Directory authentication, but I understand we can use X.509 management certificates for non-interactive programs).
This is the request I'm trying to make. Credentials already established, and an exception is thrown earlier if they're not valid.
AutoscaleClient autoscaleClient = new AutoscaleClient(credentials);
var resourceId = AutoscaleResourceIdBuilder.BuildWebSiteResourceId(webspaceName: WebSpaceNames.NorthEuropeWebSpace, serverFarmName: "Default2");
AutoscaleSettingGetResponse get = autoscaleClient.Settings.Get(resourceId); // exception here
The WebApp (let's call it "MyWebApp") is part of an App Service Plan called "Default2" (Standard: 1 small), in a Resource Group called "WebDevResources", in the North Europe region. I expect that my problem is that I am using the wrong names to build the resourceId in the code - the naming conventions in the library don't map well onto what I can see in the Azure Portal.
I'm assuming that BuildWebSiteResourceId is the correct method to call, see MSDN documentation here.
However the two parameters it takes are webspaceName and serverFarmName, neither of which match anything in the Azure portal (or Google). I found another example which seemed to be using the WebApp's geo region for webSpaceName, so I've used the predefined value for North Europe where our app is hosted.
While trying to find the correct value for serverFarmName in the Azure Portal, I found the Resource ID for the App Service Plan, which looks like this:
/subscriptions/{subscription-guid}/resourceGroups/WebDevResources/providers/Microsoft.Web/serverfarms/Default2
That resource ID isn't valid for the call I'm trying to make, but it does support the idea that a 'serverfarm' is the same as an App Service Plan.
When I run the code, regardless of whether the resourceId parameters seem to be correct or garbage, I get this error response:
<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
{"Code":"SettingNotFound","Message":"Could not find the autoscale settings."}
</string>
So, how can I construct the correct resource ID for my WebApp or App Service Plan? Or alternatively, is there a different tree I should be barking up to programmatically manage WebApp scaling?
Update:
The solution below got the info I wanted. I also found the Azure resource explorer at resources.azure.com extremely useful to browse existing resources and find the correct names. For example, the name for my autoscale settings is actually "Default2-WebDevResources", i.e. "{AppServicePlan}-{ResourceGroup}" which I wouldn't have expected.
There is a preview service, https://resources.azure.com/, where you can inspect all your resources easily. If you search for autoscale in the UI you will easily find the settings for your resource. It will also show you how to call the relevant REST API endpoint to read or update that resource.
It's a great tool for revealing a lot of details for your deployed resources and it will actually give you an ARM template stub for the resource you are looking at.
And to answer your question, you could programmatically call the REST API from a client with updated settings for autoscale. The REST API is one way of doing this, the SDK another and PowerShell a third.
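For example, with the current Az PowerShell modules a rough equivalent (Az.Monitor module; listing every autoscale setting in the question's resource group) would be:
Get-AzAutoscaleSetting -ResourceGroupName 'WebDevResources'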
The guide you're following is based on the Azure Service Management model, aka Classic mode, which is deprecated and exists mainly for backward-compatibility support.
You should use the latest Microsoft.Azure.Insights NuGet package for getting the autoscale settings.
Sample code using the nuget above is as below:
using Microsoft.Azure.Management.Insights;
using Microsoft.Rest;
// ... get an AAD token and the subscription, resource group and autoscale setting names
var client = new InsightsManagementClient(new TokenCredentials(token));
client.SubscriptionId = subscriptionId;                                  // the ARM client needs the subscription set
client.AutoscaleSettings.Get(resourceGroupName, autoScaleSettingName);   // the setting lives under microsoft.insights
Besides, autoscale settings are a resource under the "Microsoft.Insights" provider and not under the "Microsoft.Web" provider, which explains why you were not able to find them with your serverfarm resource ID.
See the REST API Reference below for getting the autoscale settings.
GET
https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/microsoft.insights/autoscaleSettings/{autoscale-setting-name}?api-version={api-version}
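Plugging in the names from the question (and the setting name mentioned in the update above), the call would look something like this; the api-version shown is only an example:
GET
https://management.azure.com/subscriptions/{subscription-guid}/resourceGroups/WebDevResources/providers/microsoft.insights/autoscaleSettings/Default2-WebDevResources?api-version=2015-04-01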
Our Windows Azure roles need to know programmatically which datacenter the roles are in.
I know there is a REST API (http://msdn.microsoft.com/en-us/library/ee460806.aspx) that returns a location property.
If I deploy to, let's say, West US, the REST API returns West US as the location property.
However, if I deploy to Anywhere US, the property is Anywhere US.
I would like to know which datacenter our roles are in even when the location property is Anywhere US.
The reason why I ask is our roles need to download some data from Windows Azure storage.
The cost of downloading data from the same datacenter is free, but downloading data from a different datacenter is not.
It is a great question and I have discussed it a couple of times with different partners. Based on my understanding, when you create your service and choose where it should be located ("South Central US" or "Anywhere US"), this information is stored on the RDFE server for your service in your Azure subscription. So when you choose a fixed location such as "South Central US" or "North Central US", the service is actually deployed to that location and the exact location is stored; however, when you choose "Anywhere US", the choice between "South Central" and "North Central" is made internally and the selected location is never written back to the RDFE record for your service.
That's why, whenever you try to get the location info for your service over the Service Management API, PowerShell or any other way (even the Azure Portal lists it as "Anywhere US", because the information comes from the same store), you will get exactly what you selected when you created the service.
@Sandrino's idea will work as long as you can match your service's IP address against the IP address ranges published by the Windows Azure team.
Two days ago Microsoft published an XML file containing a list of IP ranges per region. Based on this file and the IP address of your role you can find in which datacenter the role has been deployed.
<?xml version="1.0" encoding="UTF-8"?>
<regions>
...
<region name="USA">
<subregion name="South Central US">
<network>65.55.80.0/20</network>
<network>65.54.48.0/21</network>
...
</subregion>
<subregion name="North Central US">
<network>207.46.192.0/20</network>
<network>65.52.0.0/19</network>
...
</subregion>
</region>
</regions>
Note: It looks like the new datacenters (West and East US) are not covered by this XML file, which might make it useless in your case.
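As a rough sketch (the local file name and the role's public IP below are assumptions), you could match an address against the published CIDR ranges to find the subregion:
function Test-IpInCidr {
    param([string]$Ip, [string]$Cidr)
    $base, $bits = $Cidr.Split('/')
    $toBits = {
        param($addr)
        -join ([System.Net.IPAddress]::Parse($addr).GetAddressBytes() |
               ForEach-Object { [Convert]::ToString($_, 2).PadLeft(8, '0') })
    }
    # the address is inside the CIDR if the first $bits bits match the network base
    (& $toBits $Ip).Substring(0, [int]$bits) -eq (& $toBits $base).Substring(0, [int]$bits)
}
[xml]$ranges = Get-Content 'AzureIPRanges.xml'       # the downloaded XML file; name assumed
$roleIp = '65.55.81.10'                              # the role's public IP; value assumed
foreach ($sub in $ranges.regions.region.subregion) {
    foreach ($net in $sub.network) {
        if (Test-IpInCidr -Ip $roleIp -Cidr $net) { "Deployed in: $($sub.name)"; break }
    }
}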
I had some services deployed in Anywhere US, and in order to find out which data centre they were deployed in I had to log a support call with Microsoft. They were helpful and got me the information, but even they had to go away and look it up. As a result, I would never use the "Anywhere ..." locations. Knowing which data centre your services run in is very important for knowing where to deploy things like SQL Azure and Service Bus, which don't support affinity groups and don't have an Anywhere option.