How to store files in Azure's filesystem

I am writing a simple API that's hosted on Azure, and I need a place to store a config file that the code can modify at runtime. I want to place this in the webroot.
Before you say this is terrible practice, I know. This API is very small (the free tier of Azure is actually more than enough for me), and the file is less than 1 MB in size. I don't want to pay for Blob storage or other services designed for large projects. I don't really care about scalability; I just want the data to persist.
The question is where and how can I store this file. Can I use the path "D:/home/site/" or something similar? What do I need to do to make this work? And if this is impossible, are there other options for me that hopefully aren't overkill?

The question is where and how can I store this file. Can I use the path "D:/home/site/" or something similar? What do I need to do to make this work?
It seems that you are hosting your API application on Azure App Service and you'd like to find a place to persist a config file. As you mentioned, you can store your file in d:\home, and the file will be persistent and shared between all instances of your site. The article on understanding the Azure App Service file system can help; please read it.
You can upload this config file when you deploy your API application, or upload it via the Kudu console.
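As a minimal sketch of that idea, assuming a Node.js API and a JSON config file (the file name and the key below are made up for illustration), reading and writing under the persistent home directory could look like this:

import * as fs from "fs";
import * as path from "path";

// On Azure App Service (Windows) the HOME environment variable points to d:\home,
// which is persisted and shared between all instances of the site.
// The file name "config.json" is an assumption for this example.
const configPath = path.join(process.env.HOME ?? ".", "site", "config.json");

// Read the config, falling back to an empty object if the file does not exist yet.
function readConfig(): Record<string, unknown> {
  try {
    return JSON.parse(fs.readFileSync(configPath, "utf8"));
  } catch {
    return {};
  }
}

// Overwrite the config file with updated values.
function writeConfig(config: Record<string, unknown>): void {
  fs.writeFileSync(configPath, JSON.stringify(config, null, 2), "utf8");
}

const config = readConfig();
config["lastUpdated"] = new Date().toISOString();
writeConfig(config);

On Windows App Service plans the HOME environment variable resolves to d:\home, so the file lands under d:\home\site and survives restarts; keeping it outside wwwroot also means most deployment methods won't overwrite it.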

Related

How to upload video/audio files from web, to NodeJS, to Google Cloud Storage?

This is unfamiliar ground for me and I've been poking around with resources trying to find a suitable solution. But essentially, I have a website set up where I want to allow users to upload MP3/FLAC files. Then I want to take those files and send them to a Google Cloud bucket. The second part seems easy enough; there are plenty of NodeJS tutorials regarding that.
Since I'm pretty in the dark with how this is done, would I need to "upload" a file on my frontend and then hit my node-express API backend with some sort of fs-based solution that looks up the file on my machine? If so, how could that be consistent between users? What if their directory structure is different on their machines?
Anyway, kinda shooting in the dark here. Would love to have some advice regarding this.
It's not really feasible for a backend to "reach into" a frontend machine to pull files from it. The client needs to provide the data directly to the backend.
Most commonly, Firebase client libraries are used to directly upload contents from a client machine to a storage bucket. If you don't do that, you'll need to create your own backend API that clients can invoke to send data.
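As a rough sketch of that second option, here is an Express endpoint that accepts a multipart upload from the browser and forwards it to a bucket, assuming multer and @google-cloud/storage; the bucket name, route, and form-field name are made up:

import express from "express";
import multer from "multer";
import { Storage } from "@google-cloud/storage";

const app = express();
// Keep the uploaded file in memory; the browser sends it as multipart/form-data.
const upload = multer({ storage: multer.memoryStorage() });
// Credentials come from GOOGLE_APPLICATION_CREDENTIALS; the bucket name is an assumption.
const bucket = new Storage().bucket("my-audio-uploads");

// The frontend posts the file under the form field "track" (assumed name).
app.post("/api/upload", upload.single("track"), async (req, res) => {
  if (!req.file) {
    res.status(400).send("No file received");
    return;
  }
  const blob = bucket.file(req.file.originalname);
  // Write the in-memory buffer to the bucket, preserving the MIME type.
  await blob.save(req.file.buffer, { contentType: req.file.mimetype });
  res.json({ stored: `gs://${bucket.name}/${blob.name}` });
});

app.listen(3000);

The frontend submits the file with a normal form or fetch with FormData, so the user's local directory structure never matters: the browser sends the file contents, not a path.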

How to simply edit a local config file through an API

On all of our servers we have .env files, which set the configuration for the server (Node.js) on start.
Now I want to edit these files from an admin panel (another web service, working with the main server through an API).
Are there any best practices, or just good ideas, for how I can do this?
First idea: create another web server on the instance, which will have only two API endpoints (read, write) and which will restart the server after editing the configs. This idea looks too heavy.
The second idea is to create a bash script that requests the current configs from the admin server and rewrites the local .env file if it finds changes, but this produces a lot of unnecessary requests (a request every minute, while the configs change about once a month).
What do you think? Any ideas?
You have a couple of options, and it depends primarily on your deployment strategy.
If you have a distributed environment and/or your configuration changes often (e.g., running multiple Docker containers, rotating keys, etc.), I'd highly recommend using a K/V store and reading configuration dynamically during application start. Check out HashiCorp Vault, etcd, or even MongoDB.
If your configuration contains sensitive data definitely use something like HashiCorp Vault. If you use a configuration tool like ansible, it has ansible-vault which will encrypt your secret(s) at rest and decrypt them during deployment.
I would highly advise against storing (even potentially) sensitive data such as api keys, tokens, etc. in version control. This is a pretty big attack vector and will lead you down a dark road.
Worst case, use environment variables. Almost all CI/CD tooling supports these and you can maintain separation of concerns.
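As a small illustration of the environment-variable route (the variable names and defaults below are made up), the server reads its configuration once at startup, so the admin service only has to change the variables and trigger a restart:

// config.ts - read configuration from environment variables at startup.
// Variable names and defaults are illustrative, not part of any real project.
export interface AppConfig {
  port: number;
  dbUrl: string;
  featureFlagX: boolean;
}

export function loadConfig(env: NodeJS.ProcessEnv = process.env): AppConfig {
  return {
    port: Number(env.PORT ?? 3000),
    dbUrl: env.DATABASE_URL ?? "postgres://localhost:5432/app",
    featureFlagX: env.FEATURE_FLAG_X === "true",
  };
}

const config = loadConfig();
console.log(`Starting on port ${config.port}`);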

Suggestion for storing and accessing resource files on Azure website and web role

I've got a website that will need to access a file on the file system (or somewhere) containing some template text used to send an email. I'd like a suggestion for where to store the file and how to access / find the file at runtime for both azure web roles and azure web sites.
So far, I've read about Azure Local Storage, but that seems to only be an option for web roles, and not available for Azure websites (I think?). Plus, I'm not sure how the file would make its way into the storage.
The other option I was thinking about was adding the file to the VS solution and marking it as content, in which case I believe it would be deployed with the other files. But in this case, I don't know how to get the path to access the file from the .NET code. Also, with this, I believe that I would need to redeploy the entire solution in order to update the file.
I would appreciate any thoughts on this. Thanks...
Using non-local storage is your best approach; it is highly unlikely that your speed requirements are so intense that you need the performance of local disk.
I would recommend blob storage in the same region as your website/cloud service.
If you have extreme loads and need that file loaded rapidly, then keep an in-memory cache with a short expiry (5 minutes or so) to store the template. Each time the template is needed, check the cache; if it's not there, load it into the cache from storage and then serve it.
You may look at using a cache if you are getting a constant 1 request per second or higher. Anything lower than that, just stick to reading on demand directly from blob storage.
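A minimal sketch of that blob-plus-cache pattern, assuming a Node.js app and the @azure/storage-blob package (the same idea carries over to .NET); the connection string setting, container, and blob names are assumptions:

import { BlobServiceClient } from "@azure/storage-blob";

// Container and blob names are assumptions; the connection string comes from app settings.
const containerClient = BlobServiceClient
  .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING!)
  .getContainerClient("templates");

let cachedTemplate: { text: string; fetchedAt: number } | undefined;
const CACHE_TTL_MS = 5 * 60 * 1000; // refresh at most every 5 minutes

// Return the email template, hitting Blob storage only when the cache is stale.
export async function getEmailTemplate(): Promise<string> {
  if (cachedTemplate && Date.now() - cachedTemplate.fetchedAt < CACHE_TTL_MS) {
    return cachedTemplate.text;
  }
  const buffer = await containerClient
    .getBlobClient("email-template.html")
    .downloadToBuffer();
  cachedTemplate = { text: buffer.toString("utf8"), fetchedAt: Date.now() };
  return cachedTemplate.text;
}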
If you really want to get something locally off the disk, then do
Server.MapPath("~/YourFolder/YourFile.ext")

Trying to figure out if / how Cloud would be an advantage

Compared to plain vanilla PhP/MySQL, what's the upside of Cloud?
A typical block of contents would be approximately 30,000 snippets of text, each 300 characters or less in length.
I'm looking at some good documents on buckets and objects and wondering if there's any reason for me to dive into all that.
Just a rough idea would be appreciated. Am I barking up the wrong tree even thinking of Cloud for this?
P.S. Just guessing: is the way to go here to run MySQL in the cloud?
It will depend on the cloud service you choose. In the cloud you can choose between IaaS, PaaS, or SaaS.
On an IaaS you get the infrastructure as a service, and you need to install MySQL, the web server, and so on yourself.
On a PaaS, all these services can be enabled with just a click of the mouse, and you just use the service without taking care of the configuration or the installation process.
This blog article will give you an idea about how to use a MySQL database on a PaaS.
Regarding the web server, for PHP it can be something really easy, like zipping your project and using a single command to deploy your application without any configuration. See here for an example.

Use SQL Server FILESTREAM or a traditional file server?

I am designing a system that's going to have about 10 million+ users, each with a photo of about 1-2 MB.
We are going to deploy both the database and the web app using Microsoft Azure.
I am wondering how I should store the photos; there are currently two options:
1. Store all photos using SQL Server FILESTREAM
2. Use a file server
I haven't worked with BLOB data at such a large scale using FILESTREAM.
Can anybody give me any suggestions? The pros and cons?
And input from anyone with Microsoft Azure experience storing large numbers of photos would be really appreciated!
Thx
Ryan.
I vote for neither. Use Windows Azure Blob storage. Simple REST API, $0.15/GB/month. You can even serve the images directly from there, if you make them public (like <img src="http://myaccount.blob.core.windows.net/container/image.jpg" />), meaning you don't have to funnel them through your web app.
A database is almost always a horrible choice for large-scale binary storage needs. A database is best for relational data only; instead, provide references in your database to the actual storage location. There are a few factors you should consider:
Cost - SQL Azure costs quite a lot per GB of storage and has small storage limits (50 GB per database), both of which make it a poor choice for binary data. Windows Azure Blob storage is vastly cheaper for serving up binary objects (it has a somewhat more complicated pricing system, but is still vastly cheaper per GB).
Throughput - SQL Azure has pretty good throughput, as it can scale well; however, Windows Azure Blob storage has even greater throughput, as it can scale to any number of nodes.
Content Delivery Network - A feature not available to SQL Azure (though a complex, custom wrapper could be created), but it can be set up within minutes to piggy-back on your Windows Azure Blob storage and provide limitless bandwidth to your end users, so you never have to worry about your binary objects being a bottleneck in your system. CDN costs are similar to those of Blob storage; you can find all that here: http://www.microsoft.com/windowsazure/pricing/#windows
In other words, there's no reason not to go with Blob storage. It is simple to use, cost effective, and will scale to any needs.
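For illustration, a hedged sketch of the Blob storage route using the @azure/storage-blob package (the container name and file-naming scheme are assumptions); the returned URL is what you would store in the database and serve directly to browsers:

import { BlobServiceClient } from "@azure/storage-blob";

// Connection string and container name are assumptions; the container would be created
// with public "blob" access so the images can be served directly by their URL.
const containerClient = BlobServiceClient
  .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING!)
  .getContainerClient("photos");

// Upload one user photo and return the public URL to store in the database.
export async function uploadPhoto(userId: string, jpeg: Buffer): Promise<string> {
  const blockBlob = containerClient.getBlockBlobClient(`${userId}.jpg`);
  await blockBlob.uploadData(jpeg, {
    blobHTTPHeaders: { blobContentType: "image/jpeg" },
  });
  return blockBlob.url; // e.g. https://<account>.blob.core.windows.net/photos/<userId>.jpg
}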
I can't speak to anything Azure related, but for my money the biggest advantage of using FILESTREAM is that the data gets backed up inside the normal SQL Server backup process. The size of the data that you are talking about also suggests that FILESTREAM may be a good choice.
I've worked on an SCM system with an RDBMS back end, and one of our big decisions was whether to store the file deltas on the file system or inside the DB itself. Because it was cross-RDBMS we had to cook up a generic non-FILESTREAM way of doing it, but the ability to do a single-shot backup sold us.
FILESTREAM is a horrible option for storing images. I'm surprised MS ever promoted it.
We're currently using it for the images on our website, mainly user-generated images and any CMS-related stuff that admins create. The decision to use FILESTREAM was made before I started. The biggest issue is serving the images up. You'd better have a CDN sitting in front; if not, plan on your system coming to a screeching halt. Of course, most sites have a CDN, but you don't want your system getting overloaded if that service goes down. The amount of stress put on your SQL Server is the main problem here.
In terms of ease of backup, your tradeoff is that your DB is much, much larger and, therefore, the backup takes longer. Potentially much longer, and the system runs slower during the backup. Not to mention, moving backups around takes longer (e.g., restoring prod data in a dev environment or on local machines for dev purposes). Don't use this as a deciding factor.
Most cloud services have automatic redundancy for any files that you store on their system (e.g., AWS's S3 and Azure's Blob storage). If you're on premise, just make sure you use a shared location for the images and make sure that location is backed up.
I think the best option is to set it up so each image (and other UGC file types) has an entry in your DB with a path to that file. Going one step further, separate the root path into a config setting and only store the remaining path with the entry. For example, the root path in config might be a base URL, a shared drive or virtual directory, or a blank entry. Then your entry might have "/files/images/image.jpg". This way, if you move your file store, you can just update the root config.
I would also suggest creating a FileStoreProvider interface (singleton) that can be used for managing (saving, deleting, updating) these files. This way, if you switch between AWS, Azure, or on premise, you can just create a new provider.
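A minimal sketch of that provider idea (the interface, its methods, and the local implementation below are all hypothetical):

import * as fs from "fs/promises";
import * as path from "path";

// Hypothetical provider abstraction: the database stores only the relative path,
// and the root (base URL, shared drive, container) comes from configuration.
export interface FileStoreProvider {
  save(relativePath: string, data: Buffer): Promise<void>;
  read(relativePath: string): Promise<Buffer>;
  delete(relativePath: string): Promise<void>;
  urlFor(relativePath: string): string;
}

// On-premise implementation backed by a shared folder; an S3 or Azure Blob
// implementation would expose the same interface.
export class LocalFileStoreProvider implements FileStoreProvider {
  constructor(private rootPath: string, private rootUrl: string) {}

  async save(relativePath: string, data: Buffer): Promise<void> {
    const target = path.join(this.rootPath, relativePath);
    await fs.mkdir(path.dirname(target), { recursive: true });
    await fs.writeFile(target, data);
  }

  read(relativePath: string): Promise<Buffer> {
    return fs.readFile(path.join(this.rootPath, relativePath));
  }

  delete(relativePath: string): Promise<void> {
    return fs.unlink(path.join(this.rootPath, relativePath));
  }

  urlFor(relativePath: string): string {
    return `${this.rootUrl}/${relativePath}`;
  }
}

Swapping the on-premise store for S3 or Azure Blob storage then only means writing another class that implements the same interface.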
I have a client-server DB; I manage many files (doc, txt, pdf, ...) and all of them go in a FILESTREAM BLOB. Customers have 50+ MB DBs. If you can do the same in Azure, go for it. Having everything in the DB is a wonderful thing. It is considered good policy for Postgres and MySQL as well.
