We have a need to add or remove Web Apps from Virtual Networks.
We already have the virtual network set up, so we created an app that creates subnets and then attaches the Web App to the VNet using that subnet.
I have the following SDKs consumed in our C# project:
<PackageReference Include="Microsoft.Azure.Management.AppService.Fluent" Version="1.38.0" />
<PackageReference Include="Microsoft.Azure.Management.Fluent" Version="1.38.0" />
<PackageReference Include="Microsoft.Azure.Management.Network.Fluent" Version="1.38.0" />
<PackageReference Include="Microsoft.Azure.Services.AppAuthentication" Version="1.6.2" />
I have tried to find a way to update the network settings for my Web App, but cannot seem to find an Update method that works for it.
In the code below I have added a comment on the bit I don't know how to do.
var networks = azure.Networks.List().ToList();
var mainVN = networks.Find(n => n.Name == "MainVn");
var webApp = azure.WebApps.List().First(w => w.Name == "PrimaryApp");
var cidr = "10.0.1.0/24";
string subNetName = webApp.Name + "-subnet";
await mainVN.Update().WithSubnet(subNetName, cidr).ApplyAsync();
webApp.Update() // <----- how do I do this bit? To add the subnet/VN to the app service?
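The regional VNet integration (the "Swift" virtual network connection) doesn't appear to be exposed on the fluent Update() builder, but you can reach it through the inner WebApps client. A sketch, assuming the resource group is "rg" and the app is "webapp":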
// DELETE
await azure.WebApps.Manager.Inner.WebApps.DeleteSwiftVirtualNetworkAsync("rg", "webapp");

// ADD
await azure.WebApps.Manager.Inner.WebApps.CreateOrUpdateSwiftVirtualNetworkConnectionAsync("rg", "webapp",
    new Microsoft.Azure.Management.AppService.Fluent.Models.SwiftVirtualNetworkInner
    {
        // The resource ID of the subnet to integrate with; pulling it from the
        // fluent subnet object created above is an assumption on my part.
        SubnetResourceId = mainVN.Subnets[subNetName].Inner.Id
    });
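Note that regional VNet integration generally requires the subnet to be delegated to Microsoft.Web/serverFarms; the WithSubnet call above creates a plain subnet, so the delegation may need to be added separately.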
I have a simple ASP.NET Core 3.1 app deployed on an Azure App Service, configured with a .NET Core 3.1 runtime. One of my endpoints is expected to receive a simple JSON payload with a single "data" property, which is a base64-encoded string of a file. It can be quite long; I'm running into the following issue when the JSON payload is around 1.6 MB.
On my local workstation, when I call my API from Postman, everything works as expected: my breakpoint in the controller's action method is reached, the data is populated, all good. The problem only appears when I deploy the app (via Azure DevOps CI/CD pipelines) to the Azure App Service. Whenever I try to call the deployed API from Postman, no HTTP response is received, only this: "Error: write EPIPE".
I've tried modifying the web.config to include both the maxRequestLength and maxAllowedContentLength settings:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <location path="." inheritInChildApplications="false">
    <system.web>
      <httpRuntime maxRequestLength="204800"></httpRuntime>
    </system.web>
    <system.webServer>
      <security>
        <requestFiltering>
          <requestLimits maxAllowedContentLength="419430400" />
        </requestFiltering>
      </security>
      <handlers>
        <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
      </handlers>
      <aspNetCore processPath="dotnet" arguments=".\MyApp.API.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />
    </system.webServer>
  </location>
</configuration>
In the app's code, I've added this to Startup.cs:
services.Configure<IISServerOptions>(options => {
    options.MaxRequestBodySize = int.MaxValue;
});
In the Program.cs, I've added:
.UseKestrel(options => { options.Limits.MaxRequestBodySize = int.MaxValue; })
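For context, a sketch of where that call can sit in a .NET Core 3.1 Program.cs (Startup here stands in for the app's own startup class; the rest is the stock host builder):

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder
                .UseStartup<Startup>()
                // Kestrel's default body limit is about 28.6 MB; lift it here.
                .UseKestrel(options => { options.Limits.MaxRequestBodySize = int.MaxValue; });
        });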
In the controller, I've tried both of these attributes: [DisableRequestSizeLimit], [RequestSizeLimit(40000000)]
However, nothing is working so far. I'm pretty sure it has to be something configured on the App Service itself, not in my code, as everything works locally; yet nothing I've tried in the web.config has helped.
It was related to the fact that in my App Service, I had to allow incoming client certificates in the Configuration. It turns out client certificates and large payloads don't mix well in IIS (apparently for more than a decade now): https://learn.microsoft.com/en-us/archive/blogs/waws/posting-a-large-file-can-fail-if-you-enable-client-certificates
None of the workarounds proposed in the above blog post fixed my issue, so I had to work around it myself: I created an Azure Function (still using .NET Core 3.1 as the runtime stack) on a Consumption Plan, which is able to receive both the large payload and the incoming client certificate (I guess it doesn't use IIS under the hood?).
In my original backend, I added the original API's route to the App Service's "Certificate exclusion paths", so requests no longer get stuck waiting and eventually timing out with "Error: write EPIPE".
I've used Managed Identity to authenticate between my App Service and the new Azure Function (through a system-assigned identity in the Function).
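For reference, a minimal sketch of the calling side, assuming the Microsoft.Azure.Services.AppAuthentication package (as in the first question above); the target App ID URI is a placeholder:

using System.Net.Http;
using System.Net.Http.Headers;
using Microsoft.Azure.Services.AppAuthentication;

// Acquire a token with the caller's system-assigned identity.
// "https://target-app-id-uri" is a placeholder for the AAD application ID URI
// of the app being called.
var tokenProvider = new AzureServiceTokenProvider();
var accessToken = await tokenProvider.GetAccessTokenAsync("https://target-app-id-uri");

var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);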
The Azure Function takes the received certificate and adds it to a new "certificate" property in the JSON body, next to the original "data" property. This way my custom SSL validation can stay on the App Service; the certificate is simply taken from the received payload's "certificate" property instead of the X-ARR-ClientCert header.
The Function:
#r "Newtonsoft.Json"
using System.Net;
using System.IO;
using System.Net.Http;
using System.Text;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
using System.Security.Cryptography.X509Certificates;
private static HttpClient httpClient = new HttpClient();
public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
var requestBody = string.Empty;
using (var streamReader = new StreamReader(req.Body))
{
requestBody = await streamReader.ReadToEndAsync();
}
dynamic deserializedPayload = JsonConvert.DeserializeObject(requestBody);
var data = deserializedPayload?.data;
var originalUrl = $"https://original-backend.azurewebsites.net/api/inbound";
var certificateString = string.Empty;
StringValues cert;
if (req.Headers.TryGetValue("X-ARR-ClientCert", out cert))
{
certificateString = cert;
}
var newPayload = new {
data = data,
certificate = certificateString
};
var response = await httpClient.PostAsync(
originalUrl,
new StringContent(JsonConvert.SerializeObject(newPayload), Encoding.UTF8, "application/json"));
var responseContent = await response.Content.ReadAsStringAsync();
try
{
response.EnsureSuccessStatusCode();
return new OkObjectResult(new { message = "Forwarded request to the original backend" });
}
catch (Exception e)
{
return new ObjectResult(new { response = responseContent, exception = JsonConvert.SerializeObject(e)})
{
StatusCode = 500
};
}
}
I would like to perform a scheduled task that exports an Azure SQL database as a BACPAC to Blob Storage. I would like to know how I can do this. A WebJob? A PowerShell script?
We can also do this with a WebJob. I created a demo with the Microsoft.Azure.Management.Sql (prerelease) .NET SDK, and it works for me.
For more information about how to deploy a WebJob and create a scheduled job, please refer to the following documents:
creating-and-deploying-microsoft-azure-webjobs
create-a-scheduled-webjob-using-a-cron-expression
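For reference, the schedule for a triggered WebJob can come from a settings.job file deployed next to the executable; a minimal sketch (the CRON expression here, daily at 02:00, is just an example):

{
  "schedule": "0 0 2 * * *"
}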
The following are my detailed steps and sample code:
Prerequisites:
Register an app in Azure AD and create a service principal for it. For more detailed steps about how to register an app and get an access token, please refer to the documentation.
Steps:
1. Create a C# console application
2. Get an access token using the registered app in Azure AD
public static string GetAccessToken(string tenantId, string clientId, string secretKey)
{
    var clientCredential = new ClientCredential(clientId, secretKey);
    var context = new AuthenticationContext("https://login.windows.net/" + tenantId);
    var accessToken = context.AcquireTokenAsync("https://management.azure.com/", clientCredential).Result;
    return accessToken.AccessToken;
}
3. Create the Azure SqlManagementClient object
SqlManagementClient sqlManagementClient = new SqlManagementClient(new TokenCloudCredentials(subscriptionId, GetAccessToken(tenantId,clientId, secretKey)));
4. Use sqlManagementClient.ImportExport.Export to export a .bacpac file to Azure Storage

var export = sqlManagementClient.ImportExport.Export(resourceGroup, azureSqlServer, azureSqlDatabase,
    exportRequestParameters);
5. Go to the bin/Debug path of the application and add all the contents to a .zip file.
6. Add the WebJob from the Azure portal.
7. Check the WebJob log from the Kudu tool.
8. Check the backup file in Azure Storage.
For SDK info, please refer to the packages.config file:
<?xml version="1.0" encoding="utf-8"?>
<packages>
<package id="Hyak.Common" version="1.0.2" targetFramework="net452" />
<package id="Microsoft.Azure.Common" version="2.1.0" targetFramework="net452" />
<package id="Microsoft.Azure.Common.Dependencies" version="1.0.0" targetFramework="net452" />
<package id="Microsoft.Azure.Management.Sql" version="0.51.0-prerelease" targetFramework="net452" />
<package id="Microsoft.Bcl" version="1.1.9" targetFramework="net452" />
<package id="Microsoft.Bcl.Async" version="1.0.168" targetFramework="net452" />
<package id="Microsoft.Bcl.Build" version="1.0.14" targetFramework="net452" />
<package id="Microsoft.IdentityModel.Clients.ActiveDirectory" version="2.28.3" targetFramework="net452" />
<package id="Microsoft.Net.Http" version="2.2.22" targetFramework="net452" />
<package id="Microsoft.Web.WebJobs.Publish" version="1.0.12" targetFramework="net452" />
<package id="Newtonsoft.Json" version="6.0.4" targetFramework="net452" />
</packages>
Demo code:
static void Main(string[] args)
{
    var subscriptionId = "Your Subscription Id";
    var clientId = "Your Application Id";
    var tenantId = "tenant Id";
    var secretKey = "secretkey";
    var azureSqlDatabase = "Azure SQL Database Name";
    var resourceGroup = "Resource Group of Azure Sql ";
    var azureSqlServer = "Azure Sql Server";
    var adminLogin = "Azure SQL admin login";
    var adminPassword = "Azure SQL admin password";
    var storageKey = "Azure storage Account Key";
    var baseStorageUri = "Azure storage URi"; // with container name, ending with "/"
    var backName = azureSqlDatabase + "-" + $"{DateTime.UtcNow:yyyyMMddHHmm}" + ".bacpac"; // backup file name
    var backupUrl = baseStorageUri + backName;

    ImportExportOperationStatusResponse exportStatus = new ImportExportOperationStatusResponse();
    try
    {
        ExportRequestParameters exportRequestParameters = new ExportRequestParameters
        {
            AdministratorLogin = adminLogin,
            AdministratorLoginPassword = adminPassword,
            StorageKey = storageKey,
            StorageKeyType = "StorageAccessKey",
            StorageUri = new Uri(backupUrl)
        };

        SqlManagementClient sqlManagementClient = new SqlManagementClient(new TokenCloudCredentials(subscriptionId, GetAccessToken(tenantId, clientId, secretKey)));
        var export = sqlManagementClient.ImportExport.Export(resourceGroup, azureSqlServer, azureSqlDatabase,
            exportRequestParameters); // start the export operation

        while (exportStatus.Status != OperationStatus.Succeeded) // poll until the operation succeeds
        {
            Thread.Sleep(1000 * 60);
            exportStatus = sqlManagementClient.ImportExport.GetImportExportOperationStatus(export.OperationStatusLink);
        }
        Console.WriteLine($"Export Database {azureSqlDatabase} to Storage wxtom2 Successfully");
    }
    catch (Exception)
    {
        //todo
    }
}
Hi, have you had a look at the following documentation? It includes a PowerShell script and an Azure Automation reference with a sample script.
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-export-powershell
I have a network access control list in my cloud service similar to the one below. How do I configure this programmatically instead of from the config file?
Some of these IP addresses can change. I want to resolve the IP address from a domain name and add it to the configuration:
<NetworkConfiguration>
  <AccessControls>
    <AccessControl name="security">
      <Rule action="permit" description="Allow access from A" order="100" remoteSubnet="xxx.xxx.xxx.xxx/32" />
      <Rule action="permit" description="Allow access from B" order="200" remoteSubnet="xxx.xxx.xxx.xxx/32" />
      <Rule action="permit" description="Allow access from C" order="300" remoteSubnet="xxx.xxx.xxx.xxx/32" />
      <Rule action="deny" description="Deny access to everyone else" order="400" remoteSubnet="0.0.0.0/0" />
    </AccessControl>
  </AccessControls>
</NetworkConfiguration>
You could create a separate role or an Azure Function that generates the new configuration and updates the service through the Service Management REST API: https://msdn.microsoft.com/en-us/library/azure/ee460812.aspx
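A minimal sketch of what that could look like, assuming the classic "Change Deployment Configuration" operation authenticated with a management certificate; the subscription ID, service name, and x-ms-version value are placeholders:

using System;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;

static class CloudServiceConfig
{
    public static async Task UpdateAsync(
        string subscriptionId, string serviceName, string cscfgXml, X509Certificate2 managementCert)
    {
        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(managementCert);

        using (var client = new HttpClient(handler))
        {
            client.DefaultRequestHeaders.Add("x-ms-version", "2014-10-01");

            // The whole .cscfg goes into the request body, base64 encoded.
            var body =
                "<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
                "<Configuration>" + Convert.ToBase64String(Encoding.UTF8.GetBytes(cscfgXml)) + "</Configuration>" +
                "</ChangeConfiguration>";

            var url = $"https://management.core.windows.net/{subscriptionId}" +
                      $"/services/hostedservices/{serviceName}/deploymentslots/production/?comp=config";

            var response = await client.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/xml"));
            response.EnsureSuccessStatusCode(); // the API returns 202 Accepted and completes asynchronously
        }
    }
}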
OK. I ended up writing a console application, which gets called during the build; it gets the IP address of the remote cloud service and checks whether it corresponds to what is in the configuration file.
If not, then I update it. Pretty straightforward.
Here is the build command:
$(SolutionDir)<MyProjectName>\$(OutDir)$(ConfigurationName)\MyExeName Update-FrontEnd-IPAddress-For-Azure-MicroService "$(SolutionDir)<AzureDeploymentProjectName>\ServiceConfiguration.Cloud.cscfg"
The console application does:
private static void HandleCheckRoleEnvironment(string[] args)
{
    if (args[0] == "Check-Role-Environment")
    {
        Console.WriteLine("Found Command: Check-Role-Environment");
        if (RoleEnvironment.IsAvailable && !RoleEnvironment.IsEmulated)
        {
            Console.WriteLine("Running in Azure Cloud Environment");
            Environment.Exit(0);
            return;
        }
        else
        {
            Console.WriteLine("NOT Running in Azure Cloud Environment");
            Environment.Exit(1);
            return;
        }
    }
}
Here is the code to update the config file:
private static void ExecuteUpdateFrontEndIPAddressForAzureMicroService(string configFilePath)
{
    if (!File.Exists(configFilePath))
    {
        return;
    }

    var ipAddressList = Dns.GetHostAddresses("MyDomainName");
    Console.WriteLine($"The IP address for MyDomainName is {ipAddressList[0]}");
    var correctValue = $"{ipAddressList[0]}/32";

    var document = new XmlDocument();
    document.Load(configFilePath);

    // Rule nodes (this navigation is position-dependent on the .cscfg structure)
    var rules = document.ChildNodes[1].LastChild.FirstChild.FirstChild.ChildNodes;
    var rule = (from XmlNode p in rules
                where p.Attributes["description"].Value == "Allow access from MyDomainName"
                select p).FirstOrDefault();

    var ipAddressValue = rule.Attributes["remoteSubnet"].Value;
    Console.WriteLine($"The IP address in the config file is {ipAddressValue}");

    if (correctValue != ipAddressValue)
    {
        rule.Attributes["remoteSubnet"].Value = correctValue;
        document.Save(configFilePath);
        Console.WriteLine("The config file has been updated with the correct IP address.");
    }
    else
    {
        Console.WriteLine("The config file is up to date and will not be updated.");
    }
}
I am currently struggling to get something up and running on an NServiceBus-hosted application. I have an Azure Service Bus queue that a third party is posting messages to, and I want my application (which is hosted locally at the moment) to receive these messages.
I have googled for answers on how to configure the endpoint, but I have had no luck finding a valid config. Has anyone ever done this? I can find examples of how to connect to Azure Storage queues, but NOT Service Bus queues. (I need Azure Service Bus queues for other reasons.)
The config I have is as below:
public void Init()
{
    Configure.With()
        .DefaultBuilder()
        .XmlSerializer()
        .UnicastBus()
        .AzureServiceBusMessageQueue()
        .IsTransactional(true)
        .MessageForwardingInCaseOfFault()
        .UseInMemoryTimeoutPersister()
        .InMemorySubscriptionStorage();
}
The error I get is:
Message=Exception when starting endpoint, error has been logged. Reason: Input queue [mytimeoutmanager#sb://[*].servicebus.windows.net/] must be on the same machine as this process.
Source=NServiceBus.Host
And here is my app.config:
<configuration>
  <configSections>
    <section name="MessageForwardingInCaseOfFaultConfig" type="NServiceBus.Config.MessageForwardingInCaseOfFaultConfig, NServiceBus.Core" />
    <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
    <section name="AzureServiceBusQueueConfig" type="NServiceBus.Config.AzureServiceBusQueueConfig, NServiceBus.Azure" />
    <section name="AzureTimeoutPersisterConfig" type="NServiceBus.Timeout.Hosting.Azure.AzureTimeoutPersisterConfig, NServiceBus.Timeout.Hosting.Azure" />
  </configSections>
  <AzureServiceBusQueueConfig IssuerName="owner" QueueName="testqueue" IssuerKey="[KEY]" ServiceNamespace="[NS]" />
  <MessageForwardingInCaseOfFaultConfig ErrorQueue="error" />
  <!-- Use the following line to explicitly set the Timeout manager address -->
  <UnicastBusConfig TimeoutManagerAddress="MyTimeoutManager" />
  <!-- Use the following line to explicitly set the Timeout persister's connection string -->
  <AzureTimeoutPersisterConfig ConnectionString="UseDevelopmentStorage=true" />
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0" />
    <requiredRuntime version="v4.0.20506" />
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
  </startup>
</configuration>
Try moving UnicastBus() to the end of your call chain, so the transport is configured before the bus is built, like this:
Configure.With()
    .DefaultBuilder()
    .XmlSerializer()
    .AzureServiceBusMessageQueue()
    .IsTransactional(true)
    .MessageForwardingInCaseOfFault()
    .UseInMemoryTimeoutPersister()
    .InMemorySubscriptionStorage()
    .UnicastBus(); // <- Here
And about those third parties posting messages to the queue: keep in mind that they need to respect how NServiceBus handles serialization/deserialization. Here is how this is done in NServiceBus (the most important part is that the BrokeredMessage is initialized with a raw message, the result of serialization using the BinaryFormatter):
private void Send(Byte[] rawMessage, QueueClient sender)
{
    var numRetries = 0;
    var sent = false;
    while (!sent)
    {
        try
        {
            var brokeredMessage = new BrokeredMessage(rawMessage);
            sender.Send(brokeredMessage);
            sent = true;
        }
        // back off when we're being throttled
        catch (ServerBusyException)
        {
            numRetries++;
            if (numRetries >= MaxDeliveryCount) throw;
            Thread.Sleep(TimeSpan.FromSeconds(numRetries * DefaultBackoffTimeInSeconds));
        }
    }
}

private static byte[] SerializeMessage(TransportMessage message)
{
    if (message.Headers == null)
        message.Headers = new Dictionary<string, string>();

    if (!message.Headers.ContainsKey(Idforcorrelation))
        message.Headers.Add(Idforcorrelation, null);

    if (String.IsNullOrEmpty(message.Headers[Idforcorrelation]))
        message.Headers[Idforcorrelation] = message.IdForCorrelation;

    using (var stream = new MemoryStream())
    {
        var formatter = new BinaryFormatter();
        formatter.Serialize(stream, message);
        return stream.ToArray();
    }
}
If you want NServiceBus to correctly deserialize the message, make sure your third parties serialize it correctly.
I had exactly the same problem and spent several hours figuring out how to solve it. Basically, the Azure timeout persister is only supported for Azure-hosted endpoints that use NServiceBus.Hosting.Azure. If you use the NServiceBus.Host process to host your endpoints, it uses the NServiceBus.Timeout.Hosting.Windows namespace classes. It initializes a TransactionalTransport with MSMQ, and there you get this message.
I used two methods to avoid it:
If you must use the As_Server endpoint configuration, you can use .DisableTimeoutManager() in your initialization; it will skip the TimeoutDispatcher initialization completely (see the sketch after this list).
Use the As_Client endpoint configuration; it doesn't use transactional mode for the transport, and the timeout dispatcher is not initialized.
There could be a way to inject an Azure timeout manager somehow, but I have not found it yet, and I actually need the As_Client thingy anyway, so this works fine for me.
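For reference, a minimal sketch of the first option, assuming the same old-style fluent configuration API used above (the .DisableTimeoutManager() call is the point here; the rest of the chain mirrors the earlier example):

Configure.With()
    .DefaultBuilder()
    .XmlSerializer()
    .AzureServiceBusMessageQueue()
    .IsTransactional(true)
    .MessageForwardingInCaseOfFault()
    .DisableTimeoutManager() // skip TimeoutDispatcher initialization entirely
    .InMemorySubscriptionStorage()
    .UnicastBus();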
My 'LocalClient' app is in a corporate LAN behind an HTTP proxy server (ISA). The first Azure API call I make - CloudQueue.CreateIfNotExist() - causes an exception: (407) Proxy Authentication Required. I tried the following things:
Added the <system.net> defaultProxy element to app.config, but it doesn't seem to be working (Reference: http://geekswithblogs.net/mnf/archive/2006/03/08/71663.aspx).
I configured 'Microsoft Firewall Client for ISA Server', but that did not help either.
Used a custom proxy handler as suggested here: http://dunnry.com/blog/2010/01/25/SupportingBasicAuthProxies.aspx. I am not able to get this working - I'm getting a configuration initialization exception.
As per MSDN, an HTTP proxy server can be specified in the connection string only in the case of Development Storage (see http://msdn.microsoft.com/en-us/library/ee758697.aspx):
UseDevelopmentStorage=true;DevelopmentStorageProxyUri=http://myProxyUri
Is there any way to connect to the Azure Storage thru a proxy server?
I actually found that the custom proxy solution was not required.
Adding the following to app.config (just before the closing </configuration> tag) did the trick for me:
<system.net>
  <defaultProxy enabled="true" useDefaultCredentials="true">
    <proxy usesystemdefault="true" />
  </defaultProxy>
</system.net>
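(useDefaultCredentials="true" is what makes the client present the logged-on user's Windows credentials to the ISA proxy, which is presumably why this works without the custom handler.)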
The custom proxy solution (the third thing I tried, as mentioned in my original question) worked perfectly. The mistake I was making earlier was not putting the <configSections> element at the beginning of <configuration> in app.config, as required. On doing that, the custom proxy solution given here solved my problem.
To bypass the proxy, please use something like the code below. It works as expected, and the same has been tested.
public class AzureUpload {

    // Define the connection-string with your values
    /*public static final String storageConnectionString =
        "DefaultEndpointsProtocol=http;" +
        "AccountName=your_storage_account;" +
        "AccountKey=your_storage_account_key";*/

    public static final String storageConnectionString =
        "DefaultEndpointsProtocol=http;" +
        "AccountName=test2rdrhgf62;" +
        "AccountKey=1gy3lpE7Du1j5ljKiupjhgjghjcbfgTGhbntjnRfr9Yi6GUQqVMQqGxd7/YThisv/OVVLfIOv9kQ==";

    // Define the path to a local file.
    static final String filePath = "D:\\Project\\Supporting Files\\Jar's\\Azure\\azure-storage-1.2.0.jar";
    static final String file_Path = "D:\\Project\\Healthcare\\Azcopy_To_Azure\\data";

    public static void main(String[] args) {
        try {
            // Retrieve storage account from connection-string.
            //String storageConnectionString = RoleEnvironment.getConfigurationSettings().get("StorageConnectionString");
            //Proxy httpProxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("132.186.192.234", 8080));
            System.setProperty("http.proxyHost", "102.122.15.234");
            System.setProperty("http.proxyPort", "80");
            System.setProperty("https.proxyUser", "ad001\\empid001");
            System.setProperty("https.proxyPassword", "pass!1");

            // Retrieve storage account from connection-string.
            CloudStorageAccount storageAccount = CloudStorageAccount.parse(storageConnectionString);

            // Create the blob client.
            CloudBlobClient blobClient = storageAccount.createCloudBlobClient();

            // Get a reference to a container.
            // The container name must be lower case.
            CloudBlobContainer container = blobClient.getContainerReference("rpmsdatafromhospital");

            // Create the container if it does not exist.
            container.createIfNotExists();

            // Create a permissions object.
            BlobContainerPermissions containerPermissions = new BlobContainerPermissions();

            // Include public access in the permissions object.
            containerPermissions.setPublicAccess(BlobContainerPublicAccessType.CONTAINER);

            // Set the permissions on the container.
            container.uploadPermissions(containerPermissions);

            // Create or overwrite the new file to blob with contents from a local file.
            /*CloudBlockBlob blob = container.getBlockBlobReference("azure-storage-1.2.0.jar");
            File source = new File(filePath);
            blob.upload(new FileInputStream(source), source.length());*/

            String envFilePath = System.getenv("AZURE_FILE_PATH");

            // Upload a list of files/directory to blob storage.
            File folder = new File(envFilePath);
            File[] listOfFiles = folder.listFiles();

            for (int i = 0; i < listOfFiles.length; i++) {
                if (listOfFiles[i].isFile()) {
                    System.out.println("File " + listOfFiles[i].getName());
                    CloudBlockBlob blob = container.getBlockBlobReference(listOfFiles[i].getName());
                    File source = new File(envFilePath + "\\" + listOfFiles[i].getName());
                    blob.upload(new FileInputStream(source), source.length());
                    System.out.println("File " + listOfFiles[i].getName() + " upload successful");
                }
                // Directory upload
                /*else if (listOfFiles[i].isDirectory()) {
                    System.out.println("Directory " + listOfFiles[i].getName());
                    CloudBlockBlob blob = container.getBlockBlobReference(listOfFiles[i].getName());
                    File source = new File(file_Path + "\\" + listOfFiles[i].getName());
                    blob.upload(new FileInputStream(source), source.length());
                }*/
            }
        } catch (Exception e) {
            // Output the stack trace.
            e.printStackTrace();
        }
    }
}
For .NET or C#, please add the below code to App.config:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
</startup>
<system.net>
<defaultProxy enabled="true" useDefaultCredentials="true">
<proxy usesystemdefault="true" />
</defaultProxy>
</system.net>
</configuration>