We plan to migrate our Jenkins CI to Azure CI. Our Jenkins CI produces a lot of artifacts, and we initially thought of creating Azure Artifacts for them.
However, I was informed that Azure Artifacts is expensive, and that there is another storage service called Azure Blob Storage, which is cheaper.
My questions:
This Blob Storage is not a service provided by Azure DevOps Services; we need to set up this storage on the Azure side by creating a subscription. Is my understanding correct?
What exactly is the performance difference between the two? I would expect Artifacts to be faster than Blob Storage. Is this right?
Please let me know.
To answer your questions,
Yes, your understanding is correct.
They are actually two very different things.
As per Microsoft's documentation:
Azure Artifacts enables developers to share their code efficiently and manage all their packages from one place. With Azure Artifacts, developers can publish packages to their feeds and share it within the same team, across organizations, and even publicly. Developers can also consume packages from different feeds and public registries such as NuGet.org or npmjs.com. Azure Artifacts supports multiple package types such as NuGet, npm, Python, Maven, and Universal Packages.
Azure Blob Storage, on the other hand, is Microsoft's object storage solution for the cloud. Blob storage is optimized for storing massive amounts of unstructured data like text & binary data (photos, videos, etc.)
To conclude, if your "artifacts" are something which might be used or consumed by other people in your team/organization (like a NuGet package or npm package), then go with Azure Artifacts. If that's not the case and you just want to store them somewhere, go with Azure Blob Storage.
My opinion: you should go with Azure Blob Storage.
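For context, here is a minimal sketch of what the Blob Storage route might look like from a build script, using the azure-storage-blob Python SDK; the connection string, container name, and artifact path are placeholders, not anything from your setup:

```python
# Sketch: publish a build artifact to Azure Blob Storage.
# Assumes `pip install azure-storage-blob`; the connection string,
# container name, and artifact path below are placeholders.
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "<storage-account-connection-string>"
CONTAINER = "build-artifacts"
ARTIFACT = "dist/myapp-1.0.0.zip"

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client(CONTAINER)

# Upload the artifact; the blob name mirrors the local relative path.
with open(ARTIFACT, "rb") as data:
    container.upload_blob(name=ARTIFACT, data=data, overwrite=True)
```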
Related
Could someone please give some examples of where Azure File Share should be used instead of Azure Blobs? Whenever I search the internet, I only find that it can be mounted or that it follows the SMB protocol, but I still don't understand a single concrete case where Azure File Share is the right choice.
I tried looking at "When to use Azure blob storage versus Azure file share?" - it is a similar question, but it doesn't answer mine.
Azure provides a variety of storage tools and services, including Azure Storage. To determine which Azure technology is best suited for your scenario, see Review your storage options in the Azure Cloud Adoption Framework.
For detailed information and examples refer to this article: https://learn.microsoft.com/en-us/azure/storage/common/storage-introduction
It depends mostly on your use case and how you plan to access the data. If you simply want to mount and access your files, Azure Files will be your best fit. If you are looking for the lowest cost and want to access your data programmatically through your application, Azure Blob would be a better fit. Both are accessible through the portal or Azure Storage Explorer.
I also recommend this Learn module which covers the difference in data types and solutions.
Additional information: Azure Blob Storage vs Azure File Storage
Cost details of Azure Blob Storage pricing & Azure Files pricing
In short: if you ...
have an application that needs to store or access files in the cloud, use Blob Storage
need a file share that can be used by, for instance, a server, use File Shares
Azure Files shares can be mounted concurrently by cloud or on-premises deployments of Windows, Linux, and macOS. Azure Files shares can also be cached on Windows Servers with Azure File Sync for fast access near where the data is being used.
This means a File Share is, somewhat simplified, similar to a network share you would have in a local environment.
Azure Blob Storage helps you create data lakes for your analytics needs, and provides storage to build powerful cloud-native and mobile apps. Optimize costs with tiered storage for your long-term data, and flexibly scale up for high-performance computing and machine learning workloads.
This means Blob Storage is what you need when your application itself reads and writes the data through an API or SDK, rather than users or servers accessing it as a mounted file share.
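To make the access-pattern difference concrete, here is a hedged sketch using the Python SDKs (azure-storage-blob and azure-storage-file-share); the connection string, container, share, and file names are placeholders:

```python
# Sketch: the same document accessed as a blob vs. as a file on a share.
# Placeholders throughout; assumes
# `pip install azure-storage-blob azure-storage-file-share`.
from azure.storage.blob import BlobClient
from azure.storage.fileshare import ShareFileClient

CONN = "<storage-account-connection-string>"

# Blob Storage: a flat object store, addressed by container + blob name,
# read and written programmatically by your application.
blob = BlobClient.from_connection_string(
    CONN, container_name="docs", blob_name="reports/q1.pdf")
blob_bytes = blob.download_blob().readall()

# Azure Files: a real SMB share with directories; the same share can also
# be mounted as a network drive by Windows, Linux, or macOS machines.
share_file = ShareFileClient.from_connection_string(
    CONN, share_name="docs", file_path="reports/q1.pdf")
file_bytes = share_file.download_file().readall()
```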
I want to spike whether Azure and the cloud are a good fit for us.
We have a website where users upload documents to our currently hosted website.
Every document has an equivalent record in a database.
I am using terraform to create the azure infrastructure.
What is the best way of migrating the documents from the local file path on the server to Azure?
Should I be using File Storage or Blob Storage? I am confused about the difference.
Is there anything in terraform that can help with this?
Based on your comments, I would recommend storing them in Blob Storage. This service is suited for storing and serving unstructured data like files and images. There are many other features like redundancy, archiving etc. that you may find useful in your scenario.
File Storage is more suitable in Lift-and-Shift kind of scenarios where you're moving an on-prem application to the cloud and the application writes data to either local or network attached disk.
You may also find this article useful: https://learn.microsoft.com/en-us/azure/storage/common/storage-decide-blobs-files-disks
UPDATE
Regarding uploading files from local computer to Azure Storage, there are actually many options available:
Use a GUI tool like Microsoft's Azure Storage Explorer.
Use the AzCopy command-line tool.
Use Azure PowerShell Cmdlets.
Use Azure CLI.
Write your own code using any of the available storage client libraries, or by directly consuming the REST API.
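As a rough illustration of that last option, the sketch below walks a local folder and uploads every file to a blob container using the azure-storage-blob Python SDK; the connection string, container name, and source path are placeholders:

```python
# Sketch: migrate documents from a local folder to Blob Storage.
# Placeholders throughout; assumes `pip install azure-storage-blob`.
from pathlib import Path
from azure.storage.blob import BlobServiceClient

CONN = "<storage-account-connection-string>"
CONTAINER = "documents"
SOURCE = Path("/var/www/uploads")   # hypothetical document folder on the server

service = BlobServiceClient.from_connection_string(CONN)
container = service.get_container_client(CONTAINER)

for path in SOURCE.rglob("*"):
    if path.is_file():
        # Use the path relative to the source folder as the blob name,
        # so the folder structure is preserved as a virtual hierarchy.
        blob_name = path.relative_to(SOURCE).as_posix()
        with path.open("rb") as data:
            container.upload_blob(name=blob_name, data=data, overwrite=True)
```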
There are a bunch of ways to manually sync blobs inside an Azure Storage account to a local file-system folder.
One way might be to use AzCopy to download all blobs of a container, and repeat that for every container in the account. Of course this doesn't scale and is only good for a one-time operation or an ad-hoc snapshot.
Another option is to use Blob events and sync each blob to the local file-system folder as it changes. This feature is not available in all regions yet, and it can't be trusted for long-term operation, since if the two sides get out of sync for any reason, they stay out of sync.
Is there a way to mirror an entire Azure Storage account, to a local folder?
Here are several approaches you could follow:
This approach is especially fast if only a few files have been added, updated, or deleted. If many have changed it is still fast, but uploading/downloading the files to/from blob storage becomes the main time factor. The algorithm I'm going to describe was developed when I was implementing a non-live-editing Windows Azure deployment model for Composite C1.
Refer to fast recursive local folder to/from azure blob.
Also, you could install GoodSync to sync files between your local file system and the cloud.
Another option is to install Gladinet Cloud Desktop 3.0 and mount your Azure Blob Storage account; its "Add New Cloud Sync Folder" option is where you set up the sync between a local folder and your blob storage.
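If you'd rather script it yourself, a minimal sketch of mirroring every container to a local folder with the azure-storage-blob Python SDK might look like the following (the connection string and target folder are placeholders, and there is no change detection, so every blob is re-downloaded on each run):

```python
# Sketch: mirror all containers of a storage account to a local folder.
# Placeholders throughout; assumes `pip install azure-storage-blob`.
# Note: this re-downloads every blob on each run (no delta detection).
from pathlib import Path
from azure.storage.blob import BlobServiceClient

CONN = "<storage-account-connection-string>"
TARGET = Path("./storage-mirror")

service = BlobServiceClient.from_connection_string(CONN)

for container in service.list_containers():
    client = service.get_container_client(container.name)
    for blob in client.list_blobs():
        local_path = TARGET / container.name / blob.name
        local_path.parent.mkdir(parents=True, exist_ok=True)
        with local_path.open("wb") as f:
            client.download_blob(blob.name).readinto(f)
```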
We are new to Windows Azure and are developing a web application. At the beginning of the project we deployed the complete code to different environments, which published the code and uploaded the blob objects to Azure Storage, as we linked Sitefinity to hold its blob objects in Azure Storage. Now that we are in the middle of development, we only need to upload newly created blob files, which can be quite few in number (1 or 2, or maybe a handful). I would like to know the best process to sync these blob files across the different Azure Storage environments, one for each cloud service. Ideally we would like to update the staging cloud service and staging storage first, test there, and once no bugs are found, update the UAT and production storage accounts with the changed or new blob objects.
Please help.
You can use the Azure Storage Explorer to manually upload/download blobs from storage accounts very easily. For one or two blobs, this would be an easy solution, otherwise you will need to write a tool that connects to the blob storage via an API and does the copying for you.
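If you do end up scripting it, a hedged sketch of promoting blobs from a staging account to a UAT or production account with the azure-storage-blob Python SDK could look like this (connection strings, container, and blob names are placeholders; for simplicity it downloads from staging and re-uploads to the target rather than doing a server-side copy):

```python
# Sketch: promote selected blobs from a staging storage account to another
# environment's storage account. Placeholders throughout;
# assumes `pip install azure-storage-blob`.
from azure.storage.blob import BlobServiceClient

STAGING_CONN = "<staging-storage-connection-string>"
TARGET_CONN = "<uat-or-prod-storage-connection-string>"
CONTAINER = "sitefinity-blobs"
CHANGED_BLOBS = ["images/logo-v2.png", "docs/terms.pdf"]  # the few new/changed files

staging = BlobServiceClient.from_connection_string(STAGING_CONN)
target = BlobServiceClient.from_connection_string(TARGET_CONN)

for name in CHANGED_BLOBS:
    source = staging.get_blob_client(CONTAINER, name)
    dest = target.get_blob_client(CONTAINER, name)
    # Simple and reliable: download from staging, re-upload to the target.
    data = source.download_blob().readall()
    dest.upload_blob(data, overwrite=True)
```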
I have a build script that it would be very useful to configure to dump some files into Azure blob storage so they can be picked up by my Azure web role.
My preferred plan was to find some way of mounting the blob storage on my build server as a mapped drive and simply using Robocopy to copy the files over. This will involve the least amount of friction, as I am already deploying some files like this to other web servers using WebDrive.
I found a piece of software that will allow me to do that: http://www.gladinet.com/
However on further investigation I found that it needs port 80 to run without some hairy looking hacking about on the server.
So is there another piece of software I could use or perhaps another way I haven't considered, such as deploying the files to a local folder that is automagically synced with blob storage?
Update in response to David Makogon
I am using http://waacceleratorumbraco.codeplex.com/ which performs two-way synchronisation between the blob storage and the web roles. I have tested this with http://cloudberrylab.com/ and I can deploy files manually to the blob and they are deployed correctly to the web roles. I have also done the reverse: updated files in the web roles, which were then synced back to the blob, and subsequently edited/downloaded them from blob storage.
What I'm really looking for is a way to automate the CloudBerry side of things, so I don't have a manual step to copy a few files over. I will investigate the PowerShell solutions in the meantime.
I know this is an old post - but in case someone else comes here... the answer is now "yes". I've been working on a CodePlex project to do exactly that. (All source code is available).
http://azuredrive.codeplex.com/
If you're comfortable using PowerShell in your build process, then you could use the Cerebrata Cmdlets to upload the files. If that doesn't work for you, you could write a custom activity (but this sounds quite a bit more involved).
Mounting a cloud drive from a non-Windows Azure compute instance (e.g. your local build machine) is not supported.
Having said that: Even if you could mount a Cloud Drive from your build machine, your compute instances would need access to it too, and there can only be one writer. If your compute instances only needed read-only access, they'd need to create a snapshot after you upload new files.
This really doesn't sound like a good idea though. As knightpfhor suggested, the Cerebrata cmdlets provide this capability (look at Import-File). This allows you to push individual files into their own blobs. You can optimize further by pushing a single ZIP file into a blob. You can then use a technique similar to the one described by Nate Totten in his multi-tenant web role sample, to detect new zip files and expand them to your local storage. Nate's blog post is here.
Oh, and if you don't want to use the Cerebrata cmdlets, you can upload blobs directly with the Windows Azure Storage REST API (though the cmdlets are very simple to use and integrate seamlessly with PowerShell).
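For what it's worth, the "push a single ZIP file into a blob" step mentioned above could be scripted from a build with something like the following hedged sketch using the azure-storage-blob Python SDK; the connection string, container, and paths are placeholders, and the web-role side that detects and expands the ZIP is not shown:

```python
# Sketch: zip the build output and push it to Blob Storage as one blob,
# for a web role to detect and expand (not shown here).
# Placeholders throughout; assumes `pip install azure-storage-blob`.
import shutil
from azure.storage.blob import BlobServiceClient

CONN = "<storage-account-connection-string>"
CONTAINER = "deployments"
BUILD_OUTPUT = "build/output"   # hypothetical folder produced by the build script

# Create build/output.zip from the build output folder.
archive = shutil.make_archive(BUILD_OUTPUT, "zip", root_dir=BUILD_OUTPUT)

service = BlobServiceClient.from_connection_string(CONN)
container = service.get_container_client(CONTAINER)

with open(archive, "rb") as data:
    # A timestamped blob name would let the role detect "new" zips; kept simple here.
    container.upload_blob(name="site-package.zip", data=data, overwrite=True)
```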