I have created a Windows Server container on an Azure Kubernetes Service (AKS) cluster using the Azure CLI. While trying to deploy my ASP.NET Core app to the AKS cluster, I am stuck on this step of the tutorial linked above. I have a sample.yaml file on my Windows 10 hard drive that I need to run in Azure Cloud Shell with the following command:
kubectl apply -f sample.yaml
Question: Where can I place the above sample.yaml file so I can run the above command in Azure Cloud Shell? I am assuming it probably has to be somewhere in my Azure storage account, but where exactly should it be placed so the above command can recognize its path? Currently it gives the expected error: the path "sample.yaml" does not exist
You can directly create a file named sample.yaml using vi, nano, or code sample.yaml in the Azure Cloud Shell, then paste in your YAML definition.
For example, type code sample.yaml in the Azure Bash session. It opens a sample.yaml file in the editor; paste your YAML content and save it. The file is automatically stored in your current working path, /home/user.
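For instance, a minimal sequence in the Bash session looks like this (sample.yaml is just the file created above; the manifest contents come from your tutorial):

```
# Open the built-in editor, paste your manifest, then save (Ctrl+S) and close (Ctrl+Q)
code sample.yaml

# Confirm the file now exists in your current working directory
ls -l sample.yaml

# Apply the manifest to the AKS cluster your kubectl context points at
kubectl apply -f sample.yaml
```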
Or, you can upload your sample.yaml from your local machine to Cloud Shell using the upload/download files button in the Cloud Shell toolbar; uploaded files land in your home directory.
Or, you can also store the file persistently in the mounted Azure file share. To find the Azure file share mount point, run the df command.
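For the file-share option, a quick way to see where the share is mounted and to drop the file there (the ~/clouddrive path is the usual mount point; go by what df actually reports in your shell):

```
# Show mounted file systems; the Azure file share appears under ~/clouddrive
df -h | grep clouddrive

# Copy the manifest into the persisted share so it survives Cloud Shell restarts
cp sample.yaml ~/clouddrive/
kubectl apply -f ~/clouddrive/sample.yaml
```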
Related
I have an Azure VM where I have mounted an Azure File share as a drive (V:). I am trying to create an Azure pipeline with a copy task that copies files from the mounted Azure File share (V:) to the local disk (D:).
In the release pipeline I chose the Copy task and gave the source as the Azure file share path (\\filesharepath\foldername) and the destination as (D:\Foldername), but when I run the pipeline I get the error
Unhandled: Not found sourcefolder: (\\filesharepath\foldername)
I also tried another way: I created a PowerShell script task, with a PowerShell script (inside the Azure VM) that uses the Copy-Item command to do the copy, but while running the pipeline I got the error
Copy-Item: cannot find drive. The drive does not exists.
While logged in to the Azure VM I am able to run the PowerShell script and the file copy happens; the issue occurs only when I run the script through the pipeline. How do I overcome these issues?
How to configure an Azure Blob Storage container in YAML
```
- name: scripts-file-share
  azureFile:
    secretName: dev-blobstorage-secret
    shareName: logs
    readOnly: false
```
The above configures the logs file share in the YAML.
But what if I need to mount a blob container? How do I configure it?
Instead of azureFile, do I need to use azureBlob?
And what configuration do I need under azureBlob? Please help.
After the responses I got on the above post, and after going through articles online, I see there is no option to mount Azure Blob storage on AKS for my problem; given the limitations of my environment, the workaround is azcopy or REST API integration.
So, after a little research and taking references from the articles below, I was able to create a Docker image.
1.) Created the Docker image following the reference article. But I also need support for running a bash script, since I run the azcopy command from a bash file, so I copied the azcopy tool to /usr/bin.
2.) Created SAS tokens for the Azure File Share & Azure Blob container. (Make sure you grant only the required access permissions.)
3.) Created a bash file that runs the command below (a sketch of the full script is shown after this list).
azcopy copy "<FileShareSASTokenConnectionUrl>" "<BlobSASTokenConnectionUrl>" --recursive=true
4.) Created a deployment YAML that runs on AKS and added the command to run the bash file in it.
This gave me the ability to copy the files from the Azure File Share folders to the Azure Blob container.
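For reference, a minimal sketch of the bash file from step 3 (the SAS URL values are placeholders, not values from my environment; azcopy v10 syntax with the copy subcommand is assumed):

```
#!/usr/bin/env bash
set -euo pipefail

# SAS-token URLs created in step 2 (placeholders - supply your own)
FILE_SHARE_SAS_URL="<FileShareSASTokenConnectionUrl>"
BLOB_SAS_URL="<BlobSASTokenConnectionUrl>"

# Recursively copy everything from the file share folder to the blob container
azcopy copy "$FILE_SHARE_SAS_URL" "$BLOB_SAS_URL" --recursive=true
```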
References:
1.) https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#obtain-a-static-download-link
2.) https://github.com/Azure/azure-storage-azcopy/issues/423
I don't know how to connect to an existing Azure File Share from Azure Cloud Shell.
The clouddrive command seems to move my default Cloud Shell storage account, but I don't want to do that. I just want to access my existing Azure File Share storage, which can exist in any Azure region (not just the regions available for Cloud Shell, which are currently very limited).
When I tried to use clouddrive to mount my existing Azure Files account, I got the following error message:
ERROR: The storage account is not in the valid location. Expect: eastus Actual: canadacentral
I'd prefer not to move my existing Azure File Shares from canadacentral to eastus. Is there a workaround for this?
I'd like to just connect to my existing Azure File Shares through Cloud Shell and run commands in those directories.
Thank you!
Same question asked here:
https://github.com/MicrosoftDocs/azure-docs/issues/42001
https://serverfault.com/questions/992834/connect-to-azure-file-share-from-azure-cloud-shell
Azure Cloud Shell is an interactive, authenticated, browser-accessible shell whose backend runs on Cloud Shell hosts. The Cloud Shell machines are temporary, but your files are persisted through a mounted file share named clouddrive.
By using the advanced option, you can associate existing resources. However, the associated Azure storage account must reside in the same region as the Cloud Shell machine that you're mounting it to. To find your current region, you can run env in Bash and locate the variable ACC_LOCATION.
As the documentation states, canadacentral is not an available region for Cloud Shell, so you need to mount file storage in an available region. To do that, you can run clouddrive unmount to unmount the current file share, then select an existing file share in an available region by clicking advanced settings at the initial Cloud Shell setup prompt.
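A rough sketch of that flow in the Bash session (the subscription, resource group, storage account, and share names are placeholders for a share that already lives in a supported region; the clouddrive mount flags are the ones documented for Bash):

```
# Check which region your Cloud Shell machine is running in
env | grep ACC_LOCATION

# Detach the currently associated file share (nothing in the share is deleted)
clouddrive unmount

# Re-attach an existing file share that resides in a supported region
clouddrive mount -s <subscriptionId> -g <resourceGroup> -n <storageAccountName> -f <fileShareName>
```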
I opened Azure Cloud Shell and, once the command prompt was ready, ran git clone https://github.com/Azure-Samples/python-docs-hello-world and it cloned successfully. However, I am unable to locate where the cloned files are. I need help locating them from within Azure Cloud Shell.
The Azure Cloud Shell stores your files in a file share within a storage account that you either specified or that Azure created for you.
When you use basic settings and select only a subscription, Cloud Shell creates three resources on your behalf in the supported region that's nearest to you:
Resource group: cloud-shell-storage-<region>
Storage account: cs<uniqueGuid>
File share: cs-<user>-<domain>-com-<uniqueGuid>
Source.
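To locate the clone from the prompt, something like the following should work (the repository name comes from the question; everything else is plain Bash):

```
# git clone puts the repo under the directory you ran it from - usually your home directory
pwd
ls ~ | grep python-docs-hello-world

# Files under ~/clouddrive live directly in the mounted file share; clone there
# if you also want to browse the repo from the storage account
ls ~/clouddrive
```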
I'm trying to create an Azure VM, copy an install file to the VM, and then silently install it. I have created a basic Azure Resource Group project and can create and deploy the VM, but I can't figure out how to do everything from the PowerShell script.
It sounds like you could use a custom script extension to do what you want. In your ARM template, you can specify the URL of a file and the command to run; Azure will handle getting the file onto your VM and running it based on your command. Here is an example from the Azure Quickstart Templates: https://github.com/Azure/azure-quickstart-templates/tree/master/windows-vm-custom-script
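If you prefer driving it from a script rather than editing the template, the same extension can also be attached with the Azure CLI; a rough sketch (resource names, file URL, and script name are placeholders, and this mirrors rather than replaces the ARM-template approach in the linked quickstart):

```
# Attach the Custom Script Extension to an existing Windows VM:
# it downloads the listed file(s) and runs the given command on the VM
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --publisher Microsoft.Compute \
  --name CustomScriptExtension \
  --settings '{
    "fileUris": ["https://mystorageaccount.blob.core.windows.net/scripts/install.ps1"],
    "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install.ps1"
  }'
```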
Hope this helps! :)