Azure AD Sync - Not updating the usageLocation from AD

I have Azure AD Connect (AD Sync) on a DC and I am trying to get it to pull usageLocation from a user's attributes, but it fails to populate. I have added the locale GB to msExchUsageLocation and also added a rule in the Synchronization Rules Editor to map that attribute to usageLocation, but it still shows blank. The only way to update usageLocation is to use the following PowerShell script:
Set-MsolUser -UserPrincipalName user@domain.com -UsageLocation GB
I have a script that updates each user's AD profile from a CSV, and I wanted to incorporate usageLocation into that; however, as it stands, the PowerShell script above is the only way to update it.
Any ideas would be greatly appreciated.

Azure AD's usageLocation syncs from on-premises AD's msExchUsageLocation by default. You can populate that attribute on-prem and it will sync up. Use two-letter ISO country codes in AD, and Azure will translate them when you look at the user's profile.
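A minimal sketch of populating the attribute on-prem, assuming the RSAT ActiveDirectory module and a CSV with SamAccountName and UsageLocation columns (the file name, headers, and user name are hypothetical):

Import-Module ActiveDirectory

# Single user: set the two-letter country code that Azure AD Connect
# maps to usageLocation in Azure AD.
Set-ADUser -Identity jdoe -Replace @{ msExchUsageLocation = 'GB' }

# Bulk version, for the CSV-driven profile script mentioned in the question.
Import-Csv .\users.csv | ForEach-Object {
    Set-ADUser -Identity $_.SamAccountName -Replace @{ msExchUsageLocation = $_.UsageLocation }
}

After the next sync cycle the value should show up on the user's Azure AD profile.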

Related

roleAssignment with current user id

I'm using Azure AD app registration principals to deploy resources via Azure Resource Manager from pipelines.
During the deployment I need to grant some permissions to the deploying user to ensure it has enough permission to, for example, upload files.
As I'm using different principals, and I'm not managing them in the code, I would like to know if there is a way to reference the current principal's object ID during the deployment.
Something like:
deployment().properties.xx
or
environment()
https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-deployment
https://learn.microsoft.com/en-us/azure/templates/microsoft.authorization/roleassignments?tabs=bicep
Otherwise, I think I would need to inject this information via a parameter. I could get it by script, or there may even be a predefined variable in Azure DevOps.
Any ideas or help appreciated. Thanks.
Currently, it's not possible to get the objectId of the user deploying the template... we do have a backlog item for it.
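A sketch of the parameter workaround: look the object ID up before the deployment and pass it in. The parameter name deployerObjectId and the resource group and template names are hypothetical, and the template must declare the parameter; for a pipeline service principal you would resolve the ID with Get-AzADServicePrincipal instead.

# Resolve the signed-in principal's object ID, then hand it to the
# template as an ordinary parameter.
$objectId = (Get-AzADUser -SignedIn).Id
New-AzResourceGroupDeployment -ResourceGroupName 'my-rg' `
    -TemplateFile .\main.bicep `
    -deployerObjectId $objectId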

Azure Automation: Run PowerShell after AD user added

I'm reviewing Azure Automation, but I couldn't find out whether it is possible to run a PowerShell script whenever a new user is added to Active Directory. The scenario I'm researching: whenever a new Office 365 account is added through admin.microsoft.com, I want to configure some email preferences for this user. My PowerShell script is already tested (so the preferences are set correctly), but now I'm trying to find out how exactly this script can be executed right after the account is added.
Thanks,
You can inspect the Azure AD audit logs for new user creation, and export them via Diagnostic Settings to Azure Monitor (see the Microsoft docs on routing Azure AD logs to a Log Analytics workspace).
The following is an idea but I never tried it myself:
In Azure Monitor -> Logs you can start, for example, from a query like this:
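The original query was lost with the screenshot; a minimal KQL sketch against the AuditLogs table (assuming Azure AD audit logs are flowing into the workspace) would be:

AuditLogs
| where OperationName == "Add user"
| where Result == "success"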
Modify it according to your needs and create an alert rule. In the alert rule, you can set up an action group that triggers your automation account with the PowerShell script.

Working with Kubernetes from two accounts in PowerShell with Azure

I have two separate Azure accounts, one for each project I am involved in. These accounts are totally independent: they do not share any resources and do not have the same domain. They belong to two totally different companies.
Both accounts respond when I log in from PowerShell, and I can access their resources.
Both run Kubernetes (kubectl), but only one of the two accounts is ever shown. Whatever I do, kubectl always shows the contents of one account's cluster and never the other's.
I have Azure CLI 2.0.76 and PowerShell 5.1.
Does anyone know how I can fix this?
EDIT with pictures: the screenshots showed the expected account set as the default in PowerShell, yet kubectl listing the services of the other, non-default account's cluster.
I just found the solution.
When we log in from PowerShell with az login and select an account, we get access to all the resources of that account (the default one).
What I did was basically run
kubectl config view
This returns all the clusters kubectl has found, each with its context. The next thing to do is tell kubectl which CONTEXT we want to work with, like this:
kubectl config use-context "CONTEXT NAME"
And that's it.
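End to end, a sketch with hypothetical cluster and resource group names: pull credentials for each cluster into the kubeconfig first, then switch between contexts as needed.

# Merge each cluster's credentials into ~/.kube/config.
az aks get-credentials --resource-group rg-project-a --name aks-project-a
az aks get-credentials --resource-group rg-project-b --name aks-project-b

# List the available contexts, then point kubectl at the one you need.
kubectl config get-contexts
kubectl config use-context aks-project-b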

Generate Azure Databricks Token using Powershell script

I need to generate an Azure Databricks token using a PowerShell script.
I am done with the creation of Azure Databricks via an ARM template; now I am looking to generate a Databricks token using a PowerShell script.
Kindly let me know how to create a Databricks token with PowerShell.
The only way to generate a new token is via the API, which requires you to have a token in the first place.
Or use the web UI manually.
There are no official PowerShell commands for Databricks; there are some unofficial ones, but they still require you to generate a token manually first.
https://github.com/DataThirstLtd/azure.databricks.cicd.tools
Disclaimer I'm the author of these.
UPDATE: these PowerShell commands can now authenticate using a service principal instead of a bearer token (or can generate a bearer token for you).
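For reference, a sketch of what that flow looks like under the hood: Azure AD can issue a token for the Databricks resource (the well-known application ID 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d), and that AAD token can call the workspace's Token API to mint a regular PAT. The workspace URL below is a placeholder, and the signed-in identity must already have access to the workspace.

# Get an AAD access token for the Azure Databricks resource.
$aadToken = az account get-access-token --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d --query accessToken -o tsv

# Exchange it for a Databricks personal access token via the Token API.
$body = @{ lifetime_seconds = 3600; comment = 'ci-cd token' } | ConvertTo-Json
$resp = Invoke-RestMethod -Method Post `
    -Uri 'https://adb-1234567890123456.7.azuredatabricks.net/api/2.0/token/create' `
    -Headers @{ Authorization = "Bearer $aadToken" } `
    -ContentType 'application/json' `
    -Body $body
$resp.token_value   # the newly minted PAT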
So right now there is no way to use the API directly after deploying an Azure Databricks workspace. I assume you want to use it as part of a CI/CD pipeline, right? The reason is that you first need to manually create an API token, which you can then use for all subsequent API requests.
But I will investigate and keep you updated here!
Another option is to create it via Terraform.
https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/resources/token
Mind you, it creates the token as whomever you ran az login as. So if you az login as yourself (when it spawns a browser asking who to log in as), that's who the token will be created as, assuming that user has permissions in the Databricks workspace and Contributor permissions (or a custom read role; the Reader role doesn't grant the right permissions) on the resource group that houses the workspace.
You can always use az login -u username@email.com -p to log in as someone else, assuming that user doesn't have MFA, then run terraform init/plan/apply. Mind you, if you have backend storage, that user also has to have permissions on that backend storage so it can create/update any tfstate files stored there.

Programmatic method to identify Azure Log Analytics Available SKUs

I am trying to create an ARM template to deploy an Azure Log Analytics workspace. The template works fine, except that it needs to know which SKUs are valid for the target subscription: PerGB2018 for new subscriptions, or one of the older SKUs for non-migrated subscriptions.
Pricing models are detailed here:
https://learn.microsoft.com/en-gb/azure/monitoring-and-diagnostics/monitoring-usage-and-estimated-costs#new-pricing-model-and-operations-management-suite-subscription-entitlements
Available SKUs for workspace creation are listed here:
https://learn.microsoft.com/en-us/rest/api/loganalytics/workspaces/createorupdate
I don't know how to identify which SKUs are valid for a specific subscription prior to deployment, and I end up with errors and failed deployments when the default I pick is not valid. I cannot assume the person or system calling the template will understand, and have access to, the correct set of pricing SKUs. PerGB2018 cannot be used on non-migrated subscriptions, so it cannot be my default.
Can anyone share a method for determining which SKUs will work BEFORE trying to deploy, and thus avoid an error? I have checked the Monitor and Billing APIs in case it is listed there, but cannot see anything, and the network calls from the portal page don't offer much insight :(
My preference is to avoid PowerShell, as the rest of the deployment uses bash to request deployment information and build out the parameter files.
Thank you.
Inevitably, after asking the question I have had a breakthrough. The bash script below uses Azure CLI 2 to get an AAD access token and store it in token, then grabs the subscription ID and stores it in subscriptionId.
Once we have the subscription ID and a valid access token, we use curl to call an API endpoint which lists the date of migration to the new pricing model.
token=$(az account get-access-token | jq -r ".accessToken")
subscriptionId=$(az account show | jq -r ".id")
optedIn=$(curl -s -X POST -H "Authorization: Bearer $token" -H "Content-Length: 0" "https://management.azure.com/subscriptions/$subscriptionId/providers/microsoft.insights/listmigrationdate?api-version=2017-10-01" | jq -r ".optedInDate")
My understanding is that a null value for optedInDate means the subscription is still on the legacy pricing SKUs.
Shout if you disagree or have a better answer!
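Since most of this page is PowerShell, here is the equivalent check via the Az module for completeness (a sketch only; the bash above remains the primary route, and PerNode is just one example of a legacy SKU):

# Call the same listmigrationdate endpoint and branch on the result.
$sub = (Get-AzContext).Subscription.Id
$resp = Invoke-AzRestMethod -Method POST -Path "/subscriptions/$sub/providers/microsoft.insights/listmigrationdate?api-version=2017-10-01"
$optedIn = ($resp.Content | ConvertFrom-Json).optedInDate
$sku = if ($optedIn) { 'PerGB2018' } else { 'PerNode' }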
