How do I delete the DenyAllOutBound rule for an Azure VM and database?
I am trying to connect to an Azure SQL database from an Azure VM.
Going with the assumption that you meant to ask how to delete the DenyAllOutBound rule in a Network Security Group - unfortunately, you can't. DenyAllOutBound is one of the default rules that Azure adds to every Network Security Group, and default rules cannot be deleted.
What you should do instead is create a custom rule with a higher priority (the lower the number, the higher the priority) that overrides the default rule. Custom rules are only allowed priority numbers from 100 to 4096.
E.g. you'd need to create a new outbound Allow rule with a priority of 4096 or lower that matches your requirements.
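For instance, to let the VM reach Azure SQL, a sketch along these lines could work (the NSG and resource group names are placeholders; this assumes the Az.Network PowerShell module):

```powershell
# Placeholder names; allow outbound TCP 1433 to the Azure SQL service tag
# at priority 100, so it is evaluated before the default DenyAllOutBound rule.
Get-AzNetworkSecurityGroup -Name "my-nsg" -ResourceGroupName "my-rg" |
    Add-AzNetworkSecurityRuleConfig -Name "Allow-AzureSQL-Outbound" `
        -Priority 100 -Direction Outbound -Access Allow -Protocol Tcp `
        -SourceAddressPrefix "*" -SourcePortRange "*" `
        -DestinationAddressPrefix "Sql" -DestinationPortRange "1433" |
    Set-AzNetworkSecurityGroup
```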
This question may sound a little odd, but here it goes: a customer of ours would like access to certain metrics of their environment of our product, which we host on Azure for them. It's a pretty complicated deployment, but in the end it consists of an Application Gateway, some virtual machines and a dedicated Azure SQL database.
The customer would now like select metrics from this deployment forwarded to their own DataDog subscription, e.g. VM CPU metrics, database statistics and the like. DataDog obviously supports all this information (which is good), but by default it would slurp in information from all resources within our subscription (which is not OK).
Is there a way to define, at a fine granularity, which data is forwarded to DataDog, e.g. which resources and also which types of metrics to forward for each resource? What are my options here? Is it enough to create a service principal with limited read rights, or can I configure this somewhere else? I am unfortunately not familiar with DataDog.
The main thing that must be prevented is the customer gaining access, via the metrics forwarding, to other metrics in our subscription - we need to control the exact scope of the metrics.
The fairly straightforward solution is to create a service principal via the command line and then assign the monitoring role to that service principal for exactly the resources you need. This even works down to the level of specific databases, for example.
The kicker: this is not possible at such a granularity from the UI, but the az command line accepts assigning the Monitoring Reader permission at a deep resource ID level, even though the UI for it is not there. By finding the resource ID in the UI and then using that resource ID from the command line, it's possible to achieve exactly this behaviour.
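As a minimal sketch of that approach (all names, IDs, and scopes below are placeholders to fill in with your own), run from a PowerShell session with the az CLI installed:

```powershell
# Create the service principal with Monitoring Reader scoped to one database only.
az ad sp create-for-rbac --name "customer-datadog-reader" `
    --role "Monitoring Reader" `
    --scopes "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Sql/servers/<server>/databases/<db>"

# Grant further resources one by one with explicit role assignments,
# e.g. a single VM's resource ID for its CPU metrics.
az role assignment create --assignee "<appId-from-the-output-above>" `
    --role "Monitoring Reader" `
    --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm>"
```

The service principal then only ever sees the metrics of the resources it was explicitly scoped to, which is exactly the containment you're after.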
I published a rule created by the rules engine and it got deployed. But how do I remove, cancel, or delete the policy? I no longer need it, as it's causing continuous redirects and my service is not accessible. Note that I am using the Premium Verizon pricing tier.
According to the documentation:
Only a single policy per environment may be active at any given time.
You don't really remove it, you just overwrite it.
You can create a new draft, lock it, and deploy it on top of the existing one.
Also note that you can't have an empty policy.
We are using Azure to host many (100+) SQL Azure databases with an identical setup. Azure Security Center performs a weekly vulnerability scan. At present, we need to set up the baseline for each individual database. For instance, every time we add a new database, we need to classify dozens of fields to pass VA1288. This is a tedious process and it gets more complicated as we tighten the baseline.
Is it possible to create a baseline template and link it to a SQL Azure instance, and if so, how? We'd really like to get that green checkmark!
You can use PowerShell's Set-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline to set a vulnerability assessment rule baseline on all the databases under a server (see example 3 at https://learn.microsoft.com/en-us/powershell/module/az.sql/Set-azSqlDatabaseVulnerabilityAssessmentRuleBaseline?view=azps-3.6.1), and then use WebJobs to run your script every day or week (see https://github.com/projectkudu/kudu/wiki/WebJobs#user-content-scheduling-a-triggered-webjob).
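A sketch of that approach, following example 3 from the cmdlet documentation (the resource names and the BaselineResult rows below are placeholders - the expected columns depend on the rule you are baselining):

```powershell
# Apply the same baseline rule to every database on a server, skipping master.
# Resource names and baseline rows are placeholders for illustration only.
Get-AzSqlDatabase -ResourceGroupName "my-rg" -ServerName "my-server" |
    Where-Object { $_.DatabaseName -ne "master" } |
    Set-AzSqlDatabaseVulnerabilityAssessmentRuleBaseline `
        -RuleId "VA1288" `
        -BaselineResult @("Schema1", "Table1", "Column1"), @("Schema1", "Table2", "Column2")
```

To schedule it, a triggered WebJob with a settings.job CRON schedule (e.g. {"schedule": "0 0 0 * * 0"} for weekly) next to the script would re-apply the baseline automatically, including to newly added databases.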
I can presently configure the Azure CDN against a single storage account. What I'm wondering about is the event of a disaster, where that particular region becomes unavailable (outages etc.). If I need to refresh the cache at that point, I don't have any regional fallbacks. What is the correct way of supporting multiple storage accounts with the CDN?
One way I can see is Traffic Manager: it receives the request and sends it to one of X CDNs configured for X storage accounts, based on performance. That way, if one of the regions becomes unavailable, Traffic Manager should fall back to another one. This is an expensive solution, though, so ideally I'm looking for something where I can get one CDN and X storage accounts, with the CDN handling world-wide performance along with a fallback region.
You can try using Azure Front Door (AFD). It is a combination of a CDN and an L7 load balancer, so it can serve a single endpoint world-wide while failing over between storage accounts. Here are the steps to configure AFD:
Create an AFD profile from the portal.
Click on Front Door Designer. You will see three sections: Frontend (which will already be configured), Backend pools, and Routing rules.
Click on Backend pools and add a new backend pool. Select Storage as the host type, pick your primary storage blob endpoint, and set its priority to 1.
Once that is done, configure the health probes. Then add your second storage blob endpoint and set its priority to 2.
Configure the routing rules and make sure you have /* as the matching pattern. You can also enable caching in the rule, including caching based on the query string. Moreover, if you have dynamic pages, you can enable dynamic compression.
Once that is done, try accessing the AFD URL and check how it works.
Here is the Public Documentation for your reference: https://learn.microsoft.com/en-us/azure/frontdoor/front-door-routing-methods
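If you prefer to script the same setup, here is a minimal sketch using the Az.FrontDoor PowerShell module (all names, the resource group, and the hostnames are placeholders):

```powershell
# Two storage endpoints as backends: priority 1 is primary; priority 2
# only receives traffic when the primary fails its health probes.
$primary   = New-AzFrontDoorBackendObject -Address "primarystore.z13.web.core.windows.net" -Priority 1
$secondary = New-AzFrontDoorBackendObject -Address "secondarystore.z6.web.core.windows.net" -Priority 2

$probe = New-AzFrontDoorHealthProbeSettingObject -Name "probe1" -Path "/" -Protocol Https
$lb    = New-AzFrontDoorLoadBalancingSettingObject -Name "lb1"

$pool = New-AzFrontDoorBackendPoolObject -Name "storage-pool" `
    -FrontDoorName "my-afd" -ResourceGroupName "my-rg" `
    -Backend $primary, $secondary `
    -HealthProbeSettingsName "probe1" -LoadBalancingSettingsName "lb1"

$frontend = New-AzFrontDoorFrontendEndpointObject -Name "frontend1" -HostName "my-afd.azurefd.net"

# Route everything (/*) from the frontend to the backend pool.
$rule = New-AzFrontDoorRoutingRuleObject -Name "route-all" `
    -FrontDoorName "my-afd" -ResourceGroupName "my-rg" `
    -FrontendEndpointName "frontend1" -BackendPoolName "storage-pool" `
    -PatternToMatch "/*"

New-AzFrontDoor -ResourceGroupName "my-rg" -Name "my-afd" `
    -FrontendEndpoint $frontend -BackendPool $pool `
    -HealthProbeSetting $probe -LoadBalancingSetting $lb -RoutingRule $rule
```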
Let me know if you face any difficulties.
I am new to Windows Azure. I recently set up a VM to host a website; according to the SLA, I need to have two VMs in an availability set, so I set up the second VM.
My question is: what do I need to use the second VM for?
If I set up load balancing, does Azure redirect users to the second VM? This second VM has nothing on it.
I would also like to know whether it is possible to replicate the content of the first VM to the second one, so that each time the first one is down, the second VM can take over.
Thanks
First, you must understand the requirement of a minimum of two machines for the 99.95% SLA. It is not about "reserving" resources for use in case of a fault or update (fault domains and update domains in an availability set). Your application must be built to run on multiple instances, so you need to run it on both servers connected to the availability set. You can synchronize storage with GlusterFS (if you use Linux) or another distributed file system. You can also use the Azure Files service (SMB as a service) to share storage. For a shared database (for example MySQL) you need a cluster (independent, or distributed across your two machines).
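As an illustration of the Azure Files option, here is a minimal sketch (the storage account name, share name, and key are placeholders) of mounting the same share on each Windows VM so both serve identical content:

```powershell
# Placeholders: replace with your storage account name, share name, and key.
$storageAccount = "mystorageacct"
$shareName      = "webcontent"
$storageKey     = "<storage-account-key>"

# Persist the credential so the mapping survives reboots, then map the share.
cmd.exe /C "cmdkey /add:$storageAccount.file.core.windows.net /user:AZURE\$storageAccount /pass:$storageKey"
New-PSDrive -Name Z -PSProvider FileSystem -Persist `
    -Root "\\$storageAccount.file.core.windows.net\$shareName"
```

Run this on both VMs and point the web server's content root at the mapped share, so either machine can serve traffic when the other is down.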
So... you must start to think the "cloud way" instead of in terms of typical single-VM administration.