We have an Azure Windows VM with a couple of disks, backed by Azure Storage, attached to it.
As part of archiving older data, we want to copy the files to local (on-premises) storage, delete them from those VHDs, and thus reduce the space used.
As you may already know, one way to do this is to copy the files from the VM to local storage through an RDP session, but this can be slow and may incur bandwidth usage costs.
I explored Azure Storage Explorer, which uses AzCopy in the background, but I was only able to see the VHD files. I tried to download or copy the VHD files, but the transfer fails after reaching 100%.
Please note that we're talking about copying almost 500-800 GB of files to local storage.
Can someone please suggest any better methods of copying these files?
Related
I have a VM and I want to be able to copy a large amount of files to a backup drive. I have Remote Desktop and tried to copy that way, but the copy halts and gives no error. The size is about 120 GB of data.
Copying files larger than 2 GB with RDP isn't supported. Check this article for details.
As an alternate option, you could try using the Azure Backup service if you intend to back up your Azure VM. Backups are stored in a Recovery Services vault with built-in management of recovery points. Configuration and scaling are simple, backups are optimized, and you can easily restore as needed.
As part of the backup process, a snapshot is taken, and the data is transferred to the Recovery Services vault with no impact on production workloads. You can also schedule your backups for a suitable time so that resources are used optimally.
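If you do go the Azure Backup route, enabling protection can also be scripted rather than done in the portal. Below is a minimal sketch with the current az CLI, run from a PowerShell prompt, assuming a Recovery Services vault named MyVault already exists in MyResourceGroup and the VM is called MyWindowsVm (all placeholder names):
# protect the VM with the vault's default policy; backups then run on that policy's schedule
az backup protection enable-for-vm --resource-group MyResourceGroup --vault-name MyVault --vm MyWindowsVm --policy-name DefaultPolicy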
I have an Azure virtual machine with some application-specific CSV files (retrieved via FTP from on-premises) that need to be stored in a blob (and eventually read and pushed into an Azure SQL DB via a worker role). The question is about pushing the files from the VM to the blob. Is it possible to get AzCopy without installing the SDK so the files can be copied to the blob? Is there a better solution than this? Please read the points below for further information.
Points to note:
1) Though the files could be uploaded directly to a blob rather than being brought into the VM first and copied from there, for security reasons the files have to be pulled into the VM, and this cannot be changed.
2) I also thought about having a worker role talk to a folder share on the VM (over a common virtual network) to pull the files and upload them to the blob, but after reading some blogs this does not appear to be the right solution, as it requires changes to both VMs (the worker role VM and the IaaS VM).
3) Azure File Service is still in preview (?) and hence cannot be used.
Is it possible to get AzCopy without installing the SDK to have the files copied to the blob?
Absolutely, yes. You can download the AzCopy binaries directly, without installing the SDK, using the following links:
Version 3.1.0: http://aka.ms/downloadazcopy
Version 4.1.0: http://aka.ms/downloadazcopypr
Source: http://blogs.msdn.com/b/windowsazurestorage/archive/2015/01/13/azcopy-introducing-synchronous-copy-and-customized-content-type.aspx
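For reference, once the standalone AzCopy is on the VM, pushing a folder of CSV files into a container looks roughly like this with the 3.x/4.x syntax (the account, container, local folder and key below are placeholders):
AzCopy /Source:C:\csvdrop /Dest:https://myaccount.blob.core.windows.net/csvfiles /DestKey:<storage-account-key> /Pattern:*.csv /S
The /S switch recurses into subfolders; drop it if the CSV files all sit in one directory.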
Could someone please help me understand this? I created a Virtual Machine in Azure running Windows Server 2012 and noticed that Azure created a storage account automatically. When I go inside that storage account, click the Containers tab, and look under vhds, it shows a name-name2-2014-12-05.vhd which is 127 GB and always has a recent Last Modified date. What is that for? Is that my live backup image of my entire server deployment? If so, where can I see how often it backs up?
When I go inside that storage account, click the Containers tab, and look under vhds, it shows a name-name2-2014-12-05.vhd which is 127 GB and always has a recent Last Modified date. What is that for?
Virtual Machines in Azure are stateful in nature: any changes you make to the virtual machine, such as installing software or creating files, are persisted. Azure achieves this by storing the virtual machine's VHD as a page blob in Azure Storage. What you see as name-name2-2014-12-05.vhd is the VHD from which Azure runs your VM.
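If you want to see this for yourself, you can also list the blobs in the vhds container from the command line. A quick sketch with the Azure cross-platform CLI of that era (the account name and key are placeholders):
# lists every blob in the vhds container, including the VM's OS disk
azure storage blob list vhds -a mystorageaccount -k <storage-account-key>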
Is that my live backup image of my entire server deployment?
It is your VM, not a backup image. If you delete it by mistake (Azure makes it quite hard to delete, but it is possible), your VM is gone. If you want, you can take a backup of it and store it somewhere else. Search for "Create Azure Virtual Machine Images" and you will find ample resources.
If so where can I see how often it backs up?
By default, Azure keeps two extra copies (three in total, including the primary) of it in the data center, and if you have enabled geo-redundancy, Azure keeps three additional copies in a separate data center. However, please keep in mind that this is replication, not backup: any changes you make to your VM are replicated to all the copies. You would need to come up with your own backup approach.
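One simple approach, for example, is to copy the VHD blob to a container in a different storage account on a schedule. Here is a rough sketch with AzCopy (the account names, container and keys are placeholders; AzCopy can perform the copy server side between the two accounts). For a consistent copy you would normally want the VM stopped, or work from a blob snapshot:
AzCopy /Source:https://prodaccount.blob.core.windows.net/vhds /Dest:https://backupaccount.blob.core.windows.net/vhd-backups /SourceKey:<prod-account-key> /DestKey:<backup-account-key> /Pattern:name-name2-2014-12-05.vhd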
My recommendation would be to read more about Azure Virtual Machines. I'm sure if you search for it, you will find plenty of resources.
I currently have a Rackspace Cloud Server that I'd like to migrate to an Azure Virtual Machine. I recently got an MSDN subscription which gives me a certain level of Azure hosting at no cost, whereas I'm currently paying Rackspace for that level of service.
However, one of the nice things about Rackspace is that I can schedule nightly/weekly backups of the VM image. Is there any mechanism for doing this on Azure? I'm worried about protecting against corruption of the database (i.e. what if someone were to run an UPDATE statement and forget the WHERE clause). Is there a mechanism for this with Azure?
I know the VMs are stored as .VHD files in my Azure storage account, but the VM image is 127 GB. Downloading that nightly, even over FiOS, isn't really going to fly as a solution.
You can perform an asynchronous blob copy to make a physical copy of a vhd. See here for REST API details. This operation is very fast within the same data center (maybe a few seconds?). You don't need to make raw REST calls though: There's a method already implemented in the Azure cross-platform command line interface, available here. The command is:
azure vm disk upload
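As a rough usage sketch only (the URLs and key below are placeholders, and the exact parameter order should be checked against azure vm disk upload --help):
azure vm disk upload https://prodaccount.blob.core.windows.net/vhds/myvm.vhd https://backupaccount.blob.core.windows.net/backups/myvm-copy.vhd <backup-account-key>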
You can also take blob snapshots and return to a previous snapshot later. A snapshot is read-only (you can copy from it later) and initially takes up no space; however, as storage pages change, the snapshot grows.
One question, though: why such a large VM image? Are you storing the OS and data on the same VHD? If so, it may make more sense to mount a separate Azure Drive (also stored as a VHD in blob storage) to hold the data, and make independent copies/snapshots of it.
How would I write to a tmp/temp directory in a Windows Azure Web Site? I can write to a blob, but I'm using an NPM package that requires me to give it file names so that it can write directly to those files.
Are you using Cloud Services (PaaS) or Virtual Machines (IaaS)?
If PaaS, look at Windows Azure Local Storage. This option gives you up to 250 GB of disk space per core. It's a great location for temporary storage of information in a way that traditional apps are familiar with. However, it's not persistent, so if you put anything there that needs to survive the VM instance being repaved, copy it to blob storage. Also, this storage is specific to a given role instance, so if you have two instances of the same role, each has its own local storage bucket.
Alternatively, you can use Azure Drive, which allows you to keep the information persisted, but still doesn't allow multiple parallel writes.
If IaaS, then you can just mount a data disk to the VM and write to it directly. Data disks are already persisted to blob storage so there's little risk of data loss.
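If the VM doesn't have a data disk yet, attaching a fresh one can also be done from the command line. A small sketch with the Azure cross-platform CLI (the VM name and size are placeholders):
# attach a new, empty 100 GB data disk; it appears in Windows as an uninitialized disk you then format
azure vm disk attach-new MyIaasVm 100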
Just from my understanding, and please correct me if anything is wrong.
In Windows Azure Web Sites, the content of your website is stored in blob storage and mounted as a drive, which is shared by all the instances your web site is using. Since it's in blob storage, it's persistent. So if you need the local file system, I think you can use the folders under your web site root path, but I don't think you can use the system tmp or temp folder.