Copy kubernetes pod's repository directly to azure storage account - azure

I'm trying to copy a pod's repository (a directory) directly to an Azure storage account using a pipe.
Instead of running these two commands:
kubectl cp my_pod:my_repository/ . -n my_namespace
azcopy cp my_repository/ "https://my-storage.blob.core.windows.net/?sp=r..." --recursive=true
I would like to do something like this, using azcopy's "--from-to" parameter:
kubectl cp my_pod:my_repository/ -n my_namespace | azcopy cp "https://my-storage.blob.core.windows.net/?sp=r..." --from-to PipeBlob --recursive=true
Not sure if it's possible. Maybe with xargs?
I hope I'm clear enough.
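One possible direction (a sketch I have not verified): kubectl cp cannot write to stdout, but kubectl exec can stream a tar archive out of the pod, and azcopy's PipeBlob mode can upload that stream as a single blob. The storage container, blob name and SAS token below are placeholders, the SAS needs write permission, and the result is one .tar blob rather than individual files.
# Stream the directory out of the pod and upload the tar stream as one blob.
kubectl exec -n my_namespace my_pod -- tar cf - my_repository \
  | azcopy cp "https://my-storage.blob.core.windows.net/my-container/my_repository.tar?<SAS with write permission>" \
      --from-to PipeBlob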

Related

Copying local directory via Terraform into Kubernetes Cluster

I am trying to copy some files from my local Terraform directory into a preexisting configuration path in my Datadog resources.
When I try the below in my datadog-values.yaml I do not see any of my configuration files copied to the location. I also cannot see any logs, even in debug mode, that tell me whether it failed or whether the path was incorrect.
See datadog helm-charts
# agents.volumes -- Specify additional volumes to mount in the dd-agent container
volumes:
  - hostPath:
      path: ./configs
    name: openmetrics_config
# agents.volumeMounts -- Specify additional volumes to mount in all containers of the agent pod
volumeMounts:
  - name: openmetrics_config
    mountPath: /etc/datadog-agent/conf.d/openmetrics.d
    readOnly: true
What I've tried
I can manually copy the configuration files into the directory with a shell script like the one below. But of course, if the Datadog pod names change on restart I have to update it manually (a scripted alternative that looks the pod names up is sketched after the commands).
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl rollout restart deployment datadog-cluster-agent -n datadog
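For what it's worth, here is a sketch of how the copy could be scripted without hard-coding the pod names. It assumes the agent pods carry the label app=datadog, which may differ in your chart.
#!/usr/bin/env bash
# Copy every config into every current Datadog agent pod.
# Assumption: the agent pods are labelled app=datadog - adjust to your chart.
for pod in $(kubectl -n datadog get pods -l app=datadog -o name); do
  for conf in ./configs/*_conf.yaml; do
    kubectl -n datadog -c trace-agent cp "$conf" \
      "${pod#pod/}:/etc/datadog-agent/conf.d/openmetrics.d"
  done
done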
The volumes you use here don't work the way you expect. The ./configs directory is not your local directory; Kubernetes has no idea about your local machine.
But fear not.
There are a few ways of doing this, and it all depends on your needs. They are:
Terraformed config
Terraformed mount
Terraformed copy config action
Terraformed config
To have a config file terraformed means:
the config is updated in k8s whenever the file changes - we want Terraform to track those changes
the config is uploaded before the service that uses it starts (this is a configuration file after all; I assume it configures something)
DISCLAIMER - the service won't restart after a config change (that's achievable, but it's another topic)
To achieve this, create a ConfigMap for every config:
resource "kubernetes_config_map" "config" {
metadata {
name = "some_name"
namespace = "some_namespace"
}
data = {
"config.conf" = file(var.path_to_config)
}
}
and then use it in your volumeMounts. I assume that you're working with the Helm provider, so this should probably be
set {
  name  = "agents.volumeMounts"
  value = [{
    "mountPath": "/where/to/mount"
    "name": kubernetes_config_map.config.metadata.0.name
  }]
}
In the example above I used a single config and a single volume for simplicity, but for_each should be enough.
Terraformed mount
Another variant is when you don't want Terraform to track the configurations. Then what you want to do is:
Create a single storage (it can be mounted storage from your kube provider, or a dynamically created volume in Terraform - choose your poison)
Mount this storage as a Kubernetes volume (kubernetes_persistent_volume_v1 in Terraform)
Set set {...} like in the previous section.
Terraformed copy config action
The last, and my least favorite, option is to call a copy action from Terraform. It's a last resort... Provisioners.
Even the Terraform docs say it's bad, yet it has one advantage: it's super easy to use. You can simply call your shell command here - it could be scp, rsync, or even (but please don't do it) kubectl cp.
To not encourage this solution any further, I'll just leave the doc of null_resource, which uses provisioner "remote-exec" (you can use "local-exec"), here.

Azure CLI - az storage blob delete-batch pattern

I have a container called container1 in my Storage Account storageaccount1, with the following files:
blobs/tt-aa-rr/data/0/2016/01/03/02/01/20.txt
blobs/tt-aa-rr/data/0/2016/01/03/02/02/12.txt
blobs/tt-aa-rr/data/0/2016/01/03/02/03/13.txt
blobs/tt-aa-rr/data/0/2016/01/03/03/01/10.txt
I would like to delete the first 3; to do that I use the following command:
az storage blob delete-batch --source container1 --account-key XXX --account-name storageaccount1 --pattern 'blobs/tt-aa-rr/data/0/2016/01/03/02/*' --debug
The files are not deleted and I see the following log:
urllib3.connectionpool : Starting new HTTPS connection (1): storageaccount1.blob.core.windows.net:443
urllib3.connectionpool : https://storageaccount1.blob.core.windows.net:443 "GET /container1?restype=container&comp=list HTTP/1.1" 200 None
What is wrong with my pattern?
If I try to delete the files one by one, it works.
As stated in the comments, you are not able to apply patterns to subfolders, only first-level folders, as documented here. But if you want, you can easily write a script that lists the blobs in your container, using a prefix to filter them (az storage blob list), and then deletes each of the resulting blobs.
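For example, here is a minimal sketch of that approach (account key abbreviated as XXX as above; the prefix matches the hour you want to clear):
# List every blob under the given prefix, then delete them one by one.
az storage blob list \
    --container-name container1 \
    --account-name storageaccount1 \
    --account-key XXX \
    --prefix 'blobs/tt-aa-rr/data/0/2016/01/03/02/' \
    --query '[].name' -o tsv |
while read -r blob; do
    az storage blob delete \
        --container-name container1 \
        --account-name storageaccount1 \
        --account-key XXX \
        --name "$blob"
done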
Here is what just worked for me — applied to the command you listed above.
az storage blob delete-batch --source container1 --account-key XXX --account-name storageaccount1 --pattern blobs/tt-aa-rr/data/0/2016/01/03/02/\* --debug
I didn't quote the pattern argument and I added an escape before the *. I'm using iTerm2 on a Mac. I didn't try --debug, but the --dryrun argument was really helpful in getting it to tell me what it had matched (or not!).
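For reference, the dry-run variant of the same command only reports what would be matched and deletes nothing:
az storage blob delete-batch --source container1 --account-key XXX --account-name storageaccount1 --pattern blobs/tt-aa-rr/data/0/2016/01/03/02/\* --dryrun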

How to get a swap config preview when swapping Azure App Service from cli?

When swapping the production slot with a staging slot for an Azure App Service through the portal, you get a little warning in case the configs differ between the slots.
I would like to get the same warning when I swap from command line (for example with az in bash), is that possible, and if so how to do it?
There does not seem to be any way to get a confirmation before the swap is completed using Azure CLI.
If you want a confirmation dialog you need to script it separately, e.g. like this:
read -r -p "Are you sure? [y/N] " response
if [[ "$response" =~ ^([yY][eE][sS]|[yY])+$ ]]
then
    az webapp deployment slot swap -g MyResourceGroup -n MyUniqueApp --slot staging --target-slot production
fi
References
See this page for more info about swapping slots using the CLI,
and this page for details on conditionally executing statements in Bash.
Managed to do that using the Azure CLI and jq (install it first). That's the same call the Azure portal makes when doing the preview. So, I've added the Azure CLI task and then:
echo Phase One changes
az rest -m post -u https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/<your_rg>/providers/Microsoft.Web/sites/<your_webapp_name>/slots/<slot_name>/slotsdiffs?api-version=2016-08-01 --body {\"targetSlot\":\"production\"} | jq -r "[.value[].properties | select(.diffRule == \"SlotSettingsMissing\") | .description ] | join(\";\")"
echo Phase Two changes
az rest -m post -u https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/<your_rg>/providers/Microsoft.Web/sites/<your_webapp_name>/slots/<slot_name>/slotsdiffs?api-version=2016-08-01 --body {\"targetSlot\":\"production\"} | jq -r "[.value[].properties | select(.diffRule != \"SlotSettingsMissing\") | .description ] | join(\";\")"
Note that {subscriptionId} will be substituted automatically, so there is no need to do it manually. The other parameters in <> should be provided anyhow.
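Putting the two answers together, here is a possible sketch (resource group, web app and slot names below are my placeholders) that prints the diff descriptions and then asks for confirmation before swapping:
#!/usr/bin/env bash
# Placeholders: adjust resource group, web app and slot names to your setup.
RG=MyResourceGroup
APP=MyUniqueApp
SLOT=staging

# Same slotsdiffs call as above; print every difference description.
az rest -m post \
  -u "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/$RG/providers/Microsoft.Web/sites/$APP/slots/$SLOT/slotsdiffs?api-version=2016-08-01" \
  --body '{"targetSlot":"production"}' |
  jq -r '.value[].properties.description'

read -r -p "Swap $SLOT into production? [y/N] " response
if [[ "$response" =~ ^[yY]([eE][sS])?$ ]]; then
  az webapp deployment slot swap -g "$RG" -n "$APP" --slot "$SLOT" --target-slot production
fi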
In the end I made an extension to the az CLI that compares and diffs the configs. It was, after all, not very difficult to do, and at the same time I could extend its functionality a little and make it possible to also diff configs between different web apps, which is useful for example when the same service is deployed in more than one region.
(This extension is at the moment not publicly available anywhere, but could be if there was interest.)

AzCopy upload file for Linux

I'm trying to upload a sample file to Azure from my Ubuntu machine using AzCopy for Linux, but I keep getting the below error no matter what permissions/ownership I change to.
$ azcopy --source ../my_pub --destination https://account-name.blob.core.windows.net/mycontainer --dest-key account-key
Incomplete operation with same command line detected at the journal directory "/home/jmis/Microsoft/Azure/AzCopy", do you want to resume the operation? Choose Yes to resume, choose No to overwrite the journal to start a new operation. (Yes/No) Yes
[2017/11/18 22:06:24][ERROR] Error parsing source location "../my_pub": Failed to enumerate directory /home/jmis/my_pub/ with file pattern *. Cannot find the path '/home/jmis/my_pub/'.
I have dug around the internet to find solutions; without any luck, I eventually ended up asking a question here.
Although AzCopy was having issues on Linux, I'm able to do the above operation seamlessly with the Azure CLI. The below code listed in the Azure docs helped me do it:
#!/bin/bash
# A simple Azure Storage example script
export AZURE_STORAGE_ACCOUNT=<storage_account_name>
export AZURE_STORAGE_ACCESS_KEY=<storage_account_key>
export container_name=<container_name>
export blob_name=<blob_name>
export file_to_upload=<file_to_upload>
export destination_file=<destination_file>
echo "Creating the container..."
az storage container create --name $container_name
echo "Uploading the file..."
az storage blob upload --container-name $container_name --file $file_to_upload --name $blob_name
echo "Listing the blobs..."
az storage blob list --container-name $container_name --output table
echo "Downloading the file..."
az storage blob download --container-name $container_name --name $blob_name --file $destination_file --output table
echo "Done"
Going forward I will be using the cool Azure CLI, which is Linux-friendly and simple too.
We can use this script to upload a single file with AzCopy (Linux):
azcopy \
    --source /mnt/myfiles \
    --destination https://myaccount.file.core.windows.net/myfileshare/ \
    --dest-key <key> \
    --include abc.txt
Use --include to specify which file you want to upload. Here is an example, please check it:
root@jasonubuntu:/jason# pwd
/jason
root@jasonubuntu:/jason# ls
test1
root@jasonubuntu:/jason# azcopy --source /jason/ --destination https://jasondisk3.blob.core.windows.net/jasonvm/ --dest-key m+kQwLuQZiI3LMoMTyAI8K40gkOD+ZaT9HUL3AgVr2KpOUdqTD/AG2j+TPHBpttq5hXRmTaQ== --recursive --include test1
Finished 1 of total 1 file(s).
[2017/11/20 07:45:57] Transfer summary:
-----------------
Total files transferred: 1
Transfer successfully: 1
Transfer skipped: 0
Transfer failed: 0
Elapsed time: 00.00:00:02
root@jasonubuntu:/jason#
For more information about AzCopy on Linux, please refer to this link.

How to delete image from Azure Container Registry

Is there a way to delete specific tags only? I only found a way to delete the whole registry using the REST API / acr CLI.
Thanks
UPDATE COPIED FROM BELOW:
As an update, today we've released a preview of several features including repository delete, Individual Azure Active Directory Logins and Webhooks.
Original answer:
We are hardening the registry for our GA release later this month. We've deferred all new features while we focus on performance, reliability, and additional Azure data centers, delivering ACR across all public data centers by GA.
We will provide deleting of images and tags in a future release.
We've started to use https://github.com/Azure/acr/ to track features and bugs.
Delete is captured here: https://github.com/Azure/acr/issues/33
Thanks for the feedback,
Steve
You can use Azure CLI 2.0 to delete images from a repository with a given tag:
az acr repository delete -n MyRegistry --repository MyRepository --tag MyTag
MyRegistry is the name of your Azure Container Registry
MyRepository is the name of the repository
MyTag denotes the tag you want to delete.
You can also choose to delete the whole repository by omitting --tag MyTag. More information about the az acr repository delete command can be found here: https://learn.microsoft.com/en-us/cli/azure/acr/repository#delete
Here is a PowerShell script that deletes all Azure Container Registry tags except for the tags MyTag1 and MyTag2:
az acr repository show-tags -n MyRegistry --repository MyRepository | ConvertFrom-String | %{$_.P2 -replace "[`",]",""} | where {$_ -notin "MyTag1","MyTag2" } | % {az acr repository delete -n MyRegistry --repository MyRepository --tag $_ --yes}
It uses Azure CLI 2.0.
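For comparison, here is a rough Bash equivalent (my own sketch, not from the answer above). It relies on -o tsv printing one tag per line, and uses the newer --image form since --tag is not recognized by recent CLI versions, as noted in a later answer:
# Delete every tag in MyRepository except MyTag1 and MyTag2 (sketch).
az acr repository show-tags -n MyRegistry --repository MyRepository -o tsv |
  grep -vE '^(MyTag1|MyTag2)$' |
  xargs -I X az acr repository delete -n MyRegistry --image MyRepository:X --yes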
I had a similar problem where I wanted to remove historical images from the repository, as our quota had reached 100%.
I was able to do this by using the following commands in Azure CLI 2.0. The process does the following: obtains a list of tags, filters it with grep, and cleans it up with sed before passing it to the delete command.
Get all the tags for the given repository
az acr repository show-tags -n [registry] --repository [repository]
Get all the tags that start with the specified input and pipe that to sed, which will remove the trailing comma
grep \"[starts with] | sed 's/,*$//g'
Using xargs, assign the output to the variable X and use that as the tag.
--manifest :
Delete the manifest referenced by a tag. This also deletes any associated layer data and all other tags referencing the manifest.
--yes -y : Do not prompt for confirmation.
xargs -I X az acr repository delete -n [registry] --repository [repository] --tag X --manifest --yes
e.g. registry = myRegistry, repository = myRepo, and I want to remove all tags that start with 'test' (this would include test123, testing, etc.)
az acr repository show-tags -n myRegistry --repository myRepo | grep \"test | sed 's/,*$//g' | xargs -I X az acr repository delete -n myRegistry --repository myRepo --tag X --manifest --yes
More information can be found here: Microsoft Azure Docs
Following the answer from @christianliebel, the Azure CLI generates the error unrecognized arguments: --tag MyTag:
➜ az acr repository delete -n MyRegistry --repository MyRepository --tag MyTag
az: error: unrecognized arguments: --tag MyTag
I was using:
➜ az --version
azure-cli 2.11.1
This works:
➜ az acr repository delete --name MyRegistry --image Myrepository:Mytag
This operation will delete the manifest 'sha256:c88ac1f98fce390f5ae6c56b1d749721d9a51a5eb4396fbc25f11561817ed1b8' and all the following images: 'Myrepository:Mytag'.
Are you sure you want to continue? (y/n): y
➜
Microsoft Azure CLI docs example:
https://learn.microsoft.com/en-us/cli/azure/acr/repository?view=azure-cli-latest#az-acr-repository-delete-examples
As an update, today we've released a preview of several features including repository delete, Individual Azure Active Directory Logins and Webhooks.
Steve
The following command helps when deleting specific images matching a name or search pattern:
az acr repository show-manifests -n myRegistryName --repository myRepositoryName --query '[].tags[0]' -o yaml | grep 'mySearchPattern' | sed 's/- /az acr repository delete --name myRegistryName --yes --image myRepositoryName:/g'
My use case was to delete all images in the repository that were created before August 2020, so I copied the output of the following command and then executed it, as my tag names had a creation date like DDMMYYYY-HHMM:
az acr repository show-manifests -n myRegistryName --repository myRepositoryName --query '[].tags[0]' -o yaml | grep '[0-7]2020-' | sed 's/- /az acr repository delete --name myRegistryName --yes --image myRepositoryName:/g'
Reference: Microsoft ACR CLI
I have used the REST API to delete untagged images from a particular repository; documentation is available here.
import os
import sys
import yaml
import json
import requests
config = yaml.safe_load(
    open(os.path.join(sys.path[0], "acr-config.yml"), 'r'))
"""
Sample yaml file
acr_url: "https://youregistryname.azurecr.io"
acr_user_name: "acr_user_name_from_portal"
acr_password: "acr_password_from_azure_portal"
# Remove the repo name so that it will clean all the repos
repo_to_cleanup: some_repo
"""
acr_url = config.get('acr_url')
acr_user_name = config.get("acr_user_name")
acr_password = config.get("acr_password")
repo_to_cleanup = config.get("repo_to_cleanup")
def iterate_images(repo1, manifests):
    for manifest in manifests:
        try:
            tag = manifest['tags'][0] if 'tags' in manifest.keys() else ''
            digest = manifest['digest']
            if tag is None or tag == '':
                delete = requests.delete(f"{acr_url}/v2/{repo1}/manifests/{digest}", auth=(acr_user_name, acr_password))
                print(f"deleted the Tag = {tag} , Digest= {digest}, Status {str(delete)} from Repo {repo1}")
        except Exception as ex:
            print(ex)


if __name__ == '__main__':
    result = requests.get(f"{acr_url}/acr/v1/_catalog", auth=(acr_user_name, acr_password))
    repositories = json.loads(result.content)
    for repo in repositories['repositories']:
        if repo_to_cleanup is None or repo == repo_to_cleanup:
            manifests_binary = requests.get(f"{acr_url}/acr/v1/{repo}/_manifests", auth=(acr_user_name, acr_password))
            manifests_json = json.loads(manifests_binary.content)
            iterate_images(repo, manifests_json['manifests'])
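To run it, something like the following should be enough (the script filename is my own placeholder; the script only needs the requests and PyYAML packages plus the acr-config.yml file shown in the sample above placed next to it):
pip install requests pyyaml
python3 acr_cleanup.py   # hypothetical filename for the script above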
I tried all the commands but none worked. I thought it could be stuck, so I went to my Azure portal and deleted my repository myself. It works.
