I would like to use the IBM Terraform provider to provision a KeyProtect instance containing a standard key.
Getting a KeyProtect instance is easy: Use a service instance of type kms.
Does Terraform offer a way of inserting a specified key in the KeyProtect instance?
Not tested, but should work... ;-)
The IBM Terraform provider only covers cloud resources, not "application data". However, there is a REST API provider which lets you execute calls to REST APIs.
IBM Cloud Key Protect provides such an interface and lets you either create or import a key. This toolchain deploy script shows an automated way of provisioning Key Protect and creating a new root key (read the security tutorial here). You basically need to code something similar to obtain the necessary token and other metadata:
curl -s -X POST $KP_MANAGEMENT_URL \
--header "Authorization: Bearer $KP_ACCESS_TOKEN" \
--header "Bluemix-Instance: $KP_GUID" \
--header "Content-Type: application/vnd.ibm.kms.key+json" -d #scripts/root-enckey.json
Update:
The Terraform provider now has ibm_kms_key and some other related resources. It allows you to import existing keys into either Key Protect or Hyper Protect Crypto Services.
I want to connect Superset to Databricks to query tables. Superset uses SQLAlchemy to connect to databases, which in this case requires a PAT (Personal Access Token) for access.
It is possible to connect and run queries when I use the PAT I generated on my account through the Databricks web UI, but I do not want to use my personal token in a production environment. Even so, I was not able to find out how to generate a PAT-like token for a Service Principal.
The working SQLAlchemy URI looks like this:
databricks+pyhive://token:XXXXXXXXXX@aaa-111111111111.1.azuredatabricks.net:443/default?http_path=sql%2Fprotocolv1%qqq%wwwwwwwwwww1%eeeeeeee-1111111-foobar00
After checking the Azure docs, there are two ways to run queries against Databricks from another service:
Create a PAT for a Service Principal to be associated with Superset.
Create a user AD account for Superset.
For the first and preferred method I was able to make some progress, but I could not generate the Service Principal's PAT:
I was able to register an app in Azure AD.
So I got the tenant ID and client ID, and created a secret for the registered app.
With this info, I was able to curl Azure and receive a JWT token for that app.
But all the tokens referred to in the docs are JWT OAuth2 tokens, which do not seem to work in the SQLAlchemy URI.
I know it's possible to generate a PAT for a Service Principal, since the documentation mentions how to read, update and delete a Service Principal's PAT. But it has no information on how to create a PAT for a Service Principal.
I prefer to avoid using the second method (creating an AD user for Superset) since I am not allowed to create/manage users for the AD.
In summary, I have a working SQLAlchemy URI, but I want to use a generated token, associated with a Service Principal, instead of using my PAT. But I can't find how to generate that token (I only found documentation on how to generate OAUTH2 tokens).
You can create a PAT for a service principal as follows (examples are taken from the docs; do export DATABRICKS_HOST="https://hostname" before executing):
Add the service principal to the Databricks workspace using the SCIM API (doc):
curl -X POST "$DATABRICKS_HOST/api/2.0/preview/scim/v2/ServicePrincipals" \
--header 'Content-Type: application/scim+json' \
--header 'Authorization: Bearer <personal-access-token>' \
--data-raw '{
"schemas":[
"urn:ietf:params:scim:schemas:core:2.0:ServicePrincipal"
],
"applicationId":"<application-id>",
"displayName": "test-sp",
"entitlements":[
{
"value":"allow-cluster-create"
}
]
}'
Get an AAD token for the service principal (doc; another option is to use az-cli):
export DATABRICKS_TOKEN=$(curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \
-d 'grant_type=client_credentials&client_id=<client-id>&resource=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d&client_secret=<application-secret>' \
https://login.microsoftonline.com/<tenant-id>/oauth2/token | jq -r .access_token)
Generate a Databricks token using the AAD token (doc):
curl -s -n -X POST "$DATABRICKS_HOST/api/2.0/token/create" --data-raw '{
"lifetime_seconds": 100,
"comment": "token for superset"
}' -H "Authorization: Bearer $DATABRICKS_TOKEN"
I am using IBM Cloud and sometimes when coming back from a coffee break I have to enter my credentials again. Is there a way to change the session expiration time? Could it be done programmatically?
The settings can be changed either in the IBM Cloud console (UI) or via the REST API. In the UI you have to access the Identity and Access Management (IAM) settings.
The IBM Cloud API docs have a section for the platform services, where the IAM services can be found. They offer an API to fetch the current account settings and to update them, including the configuration values session_expiration_in_seconds and session_invalidation_in_seconds which control session expiration. You could use curl to update the settings like this:
curl -X PUT 'https://iam.cloud.ibm.com/v1/accounts/ACCOUNT_ID/settings/identity' \
-H 'Authorization: Bearer TOKEN' -H 'Content-Type: application/json' \
-d '{
"session_expiration_in_seconds": 3600,
"session_invalidation_in_seconds": 1800
}'
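To inspect the current values before changing them, the same endpoint can be read with a GET, along the lines of the documented fetch call:
curl -X GET 'https://iam.cloud.ibm.com/v1/accounts/ACCOUNT_ID/settings/identity' \
-H 'Authorization: Bearer TOKEN'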
I need to add metadata to an AppRole entity because the policy path associated with the AppRole is based on entity metadata. What I am trying to achieve is basically this command: vault write identity/entity/id/<entity_id>/ metadata=stage=test, using the Vault provider for Terraform. Does anyone know how to do that?
According to the official documentation on updating entities, you should be able to do this:
curl --header "X-Vault-Token: $VAULT_TOKEN" \
--request POST \
--data "{\"name\": \"<entity name>\", \"metadata\": {\"organization\": \"hashicorp\", \"team\": \"nomad\"}}" \
http://127.0.0.1:8200/v1/identity/entity/id/:id
Make sure to substitute VAULT_TOKEN with your Vault token and replace :id with the entity id.
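If you only know the entity's name, the id can be looked up first via the name-based endpoint (a sketch; adjust the address and <entity name>), and the id field of the returned data is what goes into the URL above:
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     http://127.0.0.1:8200/v1/identity/entity/name/<entity name>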
I have created an Azure Data Factory pipeline which has multiple pipeline parameters that I need to enter every time the pipeline is triggered. Now I want to trigger this pipeline from Postman on my local system, and I need to pass the parameters to the pipeline in the POST request.
Do you really need to use Postman? I've posted examples of doing this with PowerShell and with Python.
PowerShell: How to pass arguments to ADF pipeline using powershell
Python: https://gist.github.com/Gorgoras/1fe534fd9b454412f81c8203c773c483
If your only option is to use the REST API, you can read about it and get some examples here: https://learn.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-rest-api
Hope this helped!!
You can trigger Azure Data Factory via a policy in API Management.
I've added a sample here: https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Trigger%20Azure%20Data%20Factory%20Pipeline%20With%20Parameters.policy.xml
The Azure docs don't provide examples of how to pass a parameter, which I find weird; nowhere else on the internet have I found an example of how to pass multiple parameters via the REST API either. I guess most people use the ADF shell or a Python script to trigger it.
Anyway, if someone else stumbles on the same question then here's the solution (which is quite simple).
Firstly, create an Azure App Registration and generate a client ID and a client secret value.
Then authenticate via the REST API to get a Bearer token:
curl --location --request POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/token" \
--form "grant_type=client_credentials" \
--form "client_id=${CLIENT_ID}" \
--form "client_secret=${CLIENT_SECRET_VALUE}" \
--form "resource=https://management.azure.com/"
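The token comes back in the access_token field of the JSON response; one way to capture it for the next call (a sketch assuming jq is available, not part of the original answer) is to wrap the request above:
# capture the Bearer token for the follow-up calls
export BEARER_TOKEN=$(curl -s --location --request POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/token" \
--form "grant_type=client_credentials" \
--form "client_id=${CLIENT_ID}" \
--form "client_secret=${CLIENT_SECRET_VALUE}" \
--form "resource=https://management.azure.com/" | jq -r .access_token)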
Use the Bearer token to trigger the pipeline. Replace the subscription ID, resource group name, and ADF name.
curl --location --request POST "https://management.azure.com/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP_NAME}/providers/Microsoft.DataFactory/factories/${ADF_NAME}/pipelines/trigger-pipeline-from-rest/createRun?api-version=2018-06-01" \
--header "Authorization: Bearer ${BEARER_TOKEN}" \
--header 'Content-Type: application/json' \
--data-raw '{
"date":"2022-08-22",
"param1":"param1 value",
"param2":"some-value"
}'
Note: The app should have contributor access to ADF to trigger the pipeline.
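The createRun call returns a runId; if you also want to check the outcome from Postman or curl, the run status can be fetched from the pipeline runs endpoint (a sketch using the same placeholders, with RUN_ID taken from the createRun response):
curl --location --request GET "https://management.azure.com/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP_NAME}/providers/Microsoft.DataFactory/factories/${ADF_NAME}/pipelineruns/${RUN_ID}?api-version=2018-06-01" \
--header "Authorization: Bearer ${BEARER_TOKEN}"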
I am able to deploy an Azure Machine Learning prediction service in my workspace ws using the syntax
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=8,
tags={"method" : "some method"},
description='Predict something')
and then
service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = service_name,
workspace = ws)
as described in the documentation.
However, this exposes the service publicly, which is not really optimal.
What's the easiest way to shield the ACI service? I understand that passing an auth_enabled=True parameter may do the job, but then how can I instruct a client (say, using curl or Postman) to use the service afterwards?
See here for an example (in C#). When you enable auth, you will need to send the API key in the "Authorization" header in the HTTP request:
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", authKey);
See here how to retrieve the key.
First, retrieve the primary and secondary keys with a (Python) syntax like
service.get_keys()
If you are using curl, the syntax may look like this:
curl -H "Content-Type:application/json" -H "Authorization: Bearer <authKey>" -X POST -d '{"data": [some data]}' http://<url>:<port>/<method>
where <authKey> is one of the keys retrieved above.