Trouble creating Velero storage location with storage access key - Azure

I'm trying to use Velero to back up an AKS cluster, but for some reason I'm unable to set the backup location in Velero.
I'm getting the error below.
I can confirm the credentials-velero file I have contains the correct storage access key, and the secret (cloud-credentials) reflects it as well.
Kind of at a loss as to why it's throwing this error. I've never used Velero before.
EDIT:
So I used the following commands to get the credential file.
First, obtain the Azure storage account access key:
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list --account-name storsmaxdv --query "[?keyName == 'key1'].value" -o tsv`
Then I create the credential file:
cat << EOF > ./credentials-velero
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=${AZURE_STORAGE_ACCOUNT_ACCESS_KEY}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF
Then my install command is:
./velero install \
--provider azure \
--plugins velero/velero-plugin-for-microsoft-azure:v1.3.0 \
--bucket velero \
--secret-file ./credentials-velero \
--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY[,subscriptionId=numbersandlettersandstuff] \
--use-volume-snapshots=false
I can verify Velero created a secret called cloud-credentials, and when I decode it with base64 I'm able to see what looks like the contents of my credentials-velero file, for example:
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=MYAZURESTORAGEACCOUNTKEY
AZURE_CLOUD_NAME=AzurePublicCloud
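To double-check what actually landed in the secret, you can decode it straight from the cluster; the same base64 round trip can also be simulated locally. A minimal sketch with placeholder values (the kubectl line assumes Velero's default namespace and key name):

```shell
# In-cluster check (assumes the default "velero" namespace / "cloud" data key):
#   kubectl -n velero get secret cloud-credentials -o jsonpath='{.data.cloud}' | base64 -d
# Local simulation of the same round trip, with placeholder values:
cat > credentials-velero <<'EOF'
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=MYAZURESTORAGEACCOUNTKEY
AZURE_CLOUD_NAME=AzurePublicCloud
EOF
encoded=$(base64 < credentials-velero | tr -d '\n')
echo "$encoded" | base64 -d
```

If the decoded output does not match your file byte for byte, the secret (not the backup-location flags) is the problem.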

Turns out it was the brackets in the install command that were causing the issue:
--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY[,subscriptionId=numbersandlettersandstuff] \
I removed the brackets, like this:
--backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY,subscriptionId=numbersandlettersandstuff \
and now it works.

Not sure what your credential file format is or which command you are running.
Please try the file below and update the command as needed.
Example command:
./velero install --provider azure --plugins velero/velero-plugin-for-microsoft-azure:v1.0.1 --bucket velero-cluster-backups --backup-location-config resourceGroup=STORAGE-ACCOUNT-RESOURCEGROUP,storageAccount=STORAGEACCOUNT --use-volume-snapshots=false --secret-file ./credentials-velero
Credential file:
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=MYAZURESTORAGEACCOUNTKEY
AZURE_CLOUD_NAME=AzurePublicCloud
I would suggest checking the secret that gets created in the K8s cluster and verifying the formatting of that secret and its data.
Refer more here: https://github.com/vmware-tanzu/velero/issues/2272
Check this plugin: https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure

1: Create a service principal for Velero in Azure AD.
You can create the credential file in the below format:
AZURE_CLOUD_NAME=AzurePublicCloud
AZURE_SUBSCRIPTION_ID=*************
AZURE_TENANT_ID=**************
AZURE_CLIENT_ID=********
AZURE_CLIENT_SECRET=**********
AZURE_RESOURCE_GROUP=(name of the cluster resource group where your PVCs reside)
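As a sketch, once the service principal exists you can assemble that file from shell variables. All values below are placeholders; in practice they come from your Azure subscription and from creating the service principal (e.g. via az ad sp create-for-rbac):

```shell
# Placeholder values; substitute your real subscription/tenant/client values
AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
AZURE_TENANT_ID="11111111-1111-1111-1111-111111111111"
AZURE_CLIENT_ID="22222222-2222-2222-2222-222222222222"
AZURE_CLIENT_SECRET="placeholder-secret"
AZURE_RESOURCE_GROUP="my-cluster-node-rg"   # RG where your PVC disks reside

# Write the file in the exact KEY=value format Velero expects
cat > credentials-velero <<EOF
AZURE_CLOUD_NAME=AzurePublicCloud
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
EOF
```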

Related

Adding whl files to an Azure Synapse spark pool

According to the documentation, we should be able to add custom libraries as follows:
az synapse spark pool update --name testpool \
--workspace-name testsynapseworkspace --resource-group rg \
--package-action Add --package package1.jar package2.jar
However, when I try this with my python package whl files, I get an error message that the package does not exist.
> $new_package_names = "PACKAGE1-1.0.1-py3-none-any.whl PACKAGE2-1.0.6.3-py3-none-any.whl PACKAGE3-1.0.0-py3-none-any.whl PACKAGE4-1.0.1-py3-none-any.whl"
> az synapse spark pool update --name $pool_name --workspace-name $workspace_name --resource-group $resource_group --package-action Add --package $new_package_names
I receive the following error:
(LibraryDoesNotExistInWorkspace) The LibraryArtifact PACKAGE1-1.0.1-py3-none-any.whl PACKAGE2-1.0.6.3-py3-none-any.whl PACKAGE3-1.0.0-py3-none-any.whl PACKAGE4-1.0.1-py3-none-any.whl does not exist.
Code: LibraryDoesNotExistInWorkspace
Message: The LibraryArtifact PACKAGE1-1.0.1-py3-none-any.whl PACKAGE2-1.0.6.3-py3-none-any.whl PACKAGE3-1.0.0-py3-none-any.whl PACKAGE4-1.0.1-py3-none-any.whl does not exist.
The same works if I have only one package in the variable $new_package_names.
It looks to me like Azure thinks it's all one package instead of four different ones. All four are uploaded to the Synapse workspace and available for selection when I do the same process manually. Does anyone know of a fix for this issue? Does it only work for .jar files for some reason?
Turns out that it really comes down to the format in which I pass the package names to the command. Something apparently changed internally, as the previous way did not work anymore.
As MartinJaffer from Microsoft answered in the MS Q&A forum:
"""
If you are using az in powershell, there is a better way to go about this.
$new_package_names = "PACKAGE1-1.0.1-py3-none-any.whl", "PACKAGE2-1.0.6.3-py3-none-any.whl", "PACKAGE3-1.0.0-py3-none-any.whl", "PACKAGE4-1.0.1-py3-none-any.whl"
az synapse spark pool update --name $pool_name --workspace-name $workspace_name --resource-group $resource_group --package-action Add --package @new_package_names
Here we changed new_package_names into an array type, and use the @ splatting operator to separate them.
As simpler example, it makes the following two excerpts be equivalent:
Copy-Item "test.txt" "test2.txt" -WhatIf
$ArrayArguments = "test.txt", "test2.txt"
Copy-Item @ArrayArguments -WhatIf
"""
Utilizing the splatting operator when passing the parameters worked perfectly.
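The same pitfall exists outside PowerShell: one quoted string is one argument. In bash the equivalent fix is an array expanded element-wise; printf stands in here for the hypothetical az call:

```shell
# One string = one argument (the failure mode above). An array expands to one
# argument per package, which is what the CLI needs.
new_packages=("PACKAGE1-1.0.1-py3-none-any.whl" "PACKAGE2-1.0.6.3-py3-none-any.whl")
# printf stands in for: az synapse spark pool update ... --package "${new_packages[@]}"
printf '<%s>\n' "${new_packages[@]}"
# → <PACKAGE1-1.0.1-py3-none-any.whl>
# → <PACKAGE2-1.0.6.3-py3-none-any.whl>
```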

terraform init creating a new workspace in automation

I have looked around on the Internet but did not find anything close to an answer.
I have the following main.tf :
terraform {
  cloud {
    organization = "my-organization"
    workspaces {
      tags = ["app:myapplication"]
    }
  }
}
I am using Terraform Cloud and I would like to use workspaces in automation.
In order to do so, I first need to run terraform init:
/my/path # terraform init
Initializing Terraform Cloud...
No workspaces found.
There are no workspaces with the configured tags
(app:myapplication) in your Terraform Cloud
organization. To finish initializing, Terraform needs at least one
workspace available.
Terraform can create a properly tagged workspace for you now. Please
enter a name to create a new Terraform Cloud workspace.
Enter a value:
I would like to do something of the kind :
terraform init -workspace=my-workspace
so that it is created if it does not exist. But I did not find anything. The only way to create the first workspace is manually.
How to do that in automation with ci/cd?
[edit]
terraform workspace commands are not available before init
/src/terraform # terraform workspace list
Error: Terraform Cloud initialization required: please run "terraform
init"
Reason: Initial configuration of Terraform Cloud.
Changes to the Terraform Cloud configuration block require
reinitialization, to discover any changes to the available workspaces.
To re-initialize, run: terraform init
Terraform has not yet made changes to your existing configuration or
state.
You would need to use the TF Cloud/TFE API. You are using TF Cloud, but you can modify the endpoint to target your own installation if you use TFE.
You first need to list the TF Cloud Workspaces:
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
https://app.terraform.io/api/v2/organizations/my-organization/workspaces
where my-organization is your TF Cloud organization. This will return the workspaces in a JSON format. You would then need to parse the JSON and iterate over the maps/hashes/dictionaries of existing TF Cloud workspaces. For each iteration, inside the data and then the name key would be the nested value for the name of the workspace. You would gather the names of the workspaces and check that against the name of the workspace you want to exist. If the desired workspace does not exist in the list of workspaces, then you create the TF Cloud workspace:
curl \
--header "Authorization: Bearer $TOKEN" \
--header "Content-Type: application/vnd.api+json" \
--request POST \
--data @payload.json \
https://app.terraform.io/api/v2/organizations/my-organization/workspaces
again substituting your organization and your specific payload. You can then run terraform init successfully with the backend specifying the TF Cloud workspace.
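For reference, a minimal payload.json sketch for the workspace-creation call above (the workspace name is a placeholder; the shape follows the TF Cloud workspaces API, and tags can be attached afterwards via the workspace's tags endpoint):

```json
{
  "data": {
    "type": "workspaces",
    "attributes": {
      "name": "my-workspace"
    }
  }
}
```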
Note that if you are executing this in automation as you specify in the question, then the build agent needs connectivity to TF Cloud.
I will not mark this as the answer, but I finally did this, which looks like a bad trick to me:
export TF_WORKSPACE=myWorkspace
if terraform init -input=false; then echo "already exists"; else (sleep 2; echo $TF_WORKSPACE) | terraform init; fi
terraform apply -auto-approve -var myvar=boo

Docker run can't find Google authentication "oauth2google.DefaultTokenSource: google: could not find default credentials"

Hey there, I am trying to figure out why I keep getting this error when running the docker run command. Here is what I am running:
docker run -p 127.0.0.1:2575:2575 -v ~/.config:/home/.config gcr.io/cloud-healthcare-containers/mllp-adapter /usr/mllp_adapter/mllp_adapter --hl7_v2_project_id=****** --hl7_v2_location_id=us-east1 --hl7_v2_dataset_id=***** --hl7_v2_store_id=***** --export_stats=false --receiver_ip=0.0.0.0
I have tried both Ubuntu and Windows, with an error that it failed to connect and to see Google's service authentication documentation. I have confirmed the account is active and the keys are exported to the config below:
brandon@ubuntu-VM:~/Downloads$ gcloud auth configure-docker
WARNING: Your config file at [/home/brandon/.docker/config.json] contains these credential helper entries:
{
"credHelpers": {
"gcr.io": "gcloud",
"us.gcr.io": "gcloud",
"eu.gcr.io": "gcloud",
"asia.gcr.io": "gcloud",
"staging-k8s.gcr.io": "gcloud",
"marketplace.gcr.io": "gcloud"
}
}
I am thinking it's something to do with the -v flag and how it uses the Google authentication. Any help or guidance to fix this would be appreciated. Thank you.
-v ~/.config:/root/.config is used to give the container access to gcloud credentials.
I was facing the same issue for hours, and I decided to check the source code even though I'm not a Go developer.
There I figured out that there is a credentials option to set the credentials file. It's not documented for now.
The docker command should be like:
docker run \
--network=host \
-v ~/.config:/root/.config \
gcr.io/cloud-healthcare-containers/mllp-adapter \
/usr/mllp_adapter/mllp_adapter \
--hl7_v2_project_id=$PROJECT_ID \
--hl7_v2_location_id=$LOCATION \
--hl7_v2_dataset_id=$DATASET_ID \
--hl7_v2_store_id=$HL7V2_STORE_ID \
--credentials=/root/.config/$GOOGLE_APPLICATION_CREDENTIALS \
--export_stats=false \
--receiver_ip=0.0.0.0 \
--port=2575 \
--api_addr_prefix=https://healthcare.googleapis.com:443/v1 \
--logtostderr
Don't forget to put your credentials file inside your ~/.config folder.
Here it worked fine. I hope this helps you.
Cheers

HashiCorp Vault No handler for route error despite secrets engine being enabled through the UI

Using the non-dev Vault server, I went ahead and used "Enable new engine" in the UI for kv version 1 and created a secret.
As a test, I am using a token with root permissions to attempt the following and receive the no route error:
curl -H "X-Vault-Token: " -X GET https://vaultwebsite.com/v1/secret/kvtest1/test12/test123
{"errors":["no handler for route 'secret/kvtest/anothertest/test'"]}
My understanding is that there shouldn't be a no-handler issue, as I enabled that secrets engine through the UI. Am I missing a step or policy, or is this an API path issue?
One of my references was https://www.reddit.com/r/hashicorp/comments/c429fo/simple_vault_workflow_help/, which led me to review the enabled mounts.
My guess is that you've enabled a KV engine and wrote a secret to it, but the path secret/ is wrong.
For example, if I enable an engine and then try to read an existing value, it works:
$ vault secrets enable -version=1 -path kv kv
Success! Enabled the kv secrets engine at: kv/
$ curl --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/kv/foo
{"request_id":"2db249af-10de-01c5-4603-8f89a46897b5","lease_id":"","renewable":false,"lease_duration":2764800,"data":{"v6":"1"},"wrap_info":null,"warnings":null,"auth":null}
But if I now try to read from a non-existing path, I'd get the same error as you, for example:
$ curl --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/foobar/foo
{"errors":["no handler for route 'foobar/foo'"]}
It would help if you list your existing mounts and verify the path:
$ curl --header "X-Vault-Token: $VAULT_TOKEN" $VAULT_ADDR/v1/sys/mounts
# or
$ vault secrets list
I was getting the same error when I was trying to access a wrong path. For example, I enabled the ssh engine at a specific path with -path=ssh-client-signer:
vault secrets enable -path=ssh-client-signer ssh
So the actual URL for SSH signing should be ssh-client-signer/roles, not ssh/roles:
curl \
--header "X-Vault-Token: ......" \
--request LIST \
http://vault:8200/v1/ssh-client-signer/roles
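The general rule: the request path is always the mount path (whatever you passed to -path) followed by the path inside the engine, never the engine type. A tiny illustrative helper (the function name is mine, not part of Vault):

```shell
# Build a Vault v1 API URL from mount path + path within the engine
# (illustrative only; not a Vault CLI feature)
vault_url() { printf '%s/v1/%s/%s\n' "$VAULT_ADDR" "$1" "$2"; }

VAULT_ADDR="http://vault:8200"
vault_url ssh-client-signer roles
# → http://vault:8200/v1/ssh-client-signer/roles
```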

Azure VM extension update failure

I tried to add a custom script to a VM through extensions. I have observed that when the VM is created, a Microsoft.Azure.Extensions.CustomScript extension is created with the name "cse-agent" by default. So I tried to update the extension by encoding the file into the script property:
az vm extension set \
--resource-group test_RG \
--vm-name aks-agentpool \
--name CustomScript \
--subscription ${SUBSCRIPTION_ID} \
--publisher Microsoft.Azure.Extensions \
--settings '{"script": "'"$value"'"}'
$value represents the script file encoded in base64.
Doing that gives me an error:
Deployment failed. Correlation ID: xxxx-xxxx-xxx-xxxxx.
VM has reported a failure when processing extension 'cse-agent'.
Error message: "Enable failed: failed to get configuration: invalid configuration:
'commandToExecute' and 'script' were both specified, but only one is validate at a time"
From the documentation, it is mentioned that when the script attribute is present,
there is no need for commandToExecute. As you can see above, I haven't mentioned commandToExecute; it's somehow taking it from the previous extension. Is there a way to update it without deleting it? Also, it would be interesting to know what impact deleting the cse-agent extension would have.
FYI: I have tried deleting the 'cse-agent' extension from the VM and adding my extension. It worked.
The cse-agent VM extension is crucial and manages all of the post-install steps needed to configure the nodes to be considered valid Kubernetes nodes. Removing this CSE will break the VMs and render your cluster inoperable.
If you are interested in applying changes to nodes in an existing cluster, while not officially supported, you could leverage the following project:
https://github.com/juan-lee/knode
This allows you to configure the nodes using a DaemonSet, which helps when your node pools have the auto-scaling feature enabled.
For simple node alterations of the filesystem, a privileged pod with a host path will also work:
https://dev.to/dannypsnl/privileged-pod-debug-kubernetes-node-5129
