Automatically sync two Amazon S3 buckets, besides s3cmd? - linux

Is there another automated way of syncing two Amazon S3 buckets besides using s3cmd? Maybe Amazon offers this as an option? The environment is Linux, and every day I would like to sync new and deleted files to another bucket. I hate the thought of keeping all my eggs in one basket.

You could use the standard AWS CLI to do the sync.
You just have to do something like:
aws s3 sync s3://bucket1/folder1 s3://bucket2/folder2
http://aws.amazon.com/cli/
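Since the question asks for a daily sync, the usual approach is to schedule that command with cron. A minimal sketch (the 2 a.m. schedule and bucket/folder names are placeholders; the --delete flag makes the target mirror deletions from the source, which matches the "new & deleted files" requirement):
0 2 * * * aws s3 sync --delete s3://bucket1/folder1 s3://bucket2/folder2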

S3 buckets != baskets
From their site:
Data Durability and Reliability
Amazon S3 provides a highly durable storage infrastructure designed for mission-critical and primary data storage. Objects are redundantly stored on multiple devices across multiple facilities in an Amazon S3 Region. To help ensure durability, Amazon S3 PUT and COPY operations synchronously store your data across multiple facilities before returning SUCCESS. Once stored, Amazon S3 maintains the durability of your objects by quickly detecting and repairing any lost redundancy. Amazon S3 also regularly verifies the integrity of data stored using checksums. If corruption is detected, it is repaired using redundant data. In addition, Amazon S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
Amazon S3’s standard storage is:
Backed with the Amazon S3 Service Level Agreement.
Designed to provide 99.999999999% durability and 99.99% availability of objects over a given year.
Designed to sustain the concurrent loss of data in two facilities.
Amazon S3 provides further protection via Versioning. You can use Versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. This allows you to easily recover from both unintended user actions and application failures. By default, requests will retrieve the most recently written version. Older versions of an object can be retrieved by specifying a version in the request. Storage rates apply for every version stored.
That's very reliable.

I'm looking for something similar and there are a few options:
Commercial applications like s3RSync. Additionally, CloudBerry for S3 provides PowerShell extensions for Windows that you can use for scripting, but I know you're using *nix.
AWS API + (your favorite language) + cron (hear me out). It would take a decently savvy person with no prior experience of AWS's libraries only a short time to build something to copy and compare files (using the ETag feature of the S3 keys): just provide a source/target bucket and credentials, iterate through the keys, and issue the native "Copy" command in AWS. I used Java. If you use Python and cron, you could make short work of a useful tool (a rough sketch follows below).
I'm still looking for something already built that's open source or free. But #2 is really not a terribly difficult task.
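If you do go the roll-your-own route with Python, a minimal sketch of that approach might look like the following (bucket names are placeholders, and note that an ETag is only a reliable change marker for objects that were not uploaded via multipart upload):

import boto3

s3 = boto3.client("s3")

def sync_buckets(source_bucket, target_bucket):
    """Copy any object whose key is missing from the target or whose ETag differs."""
    paginator = s3.get_paginator("list_objects_v2")

    # Build a key -> ETag map of the target so unchanged objects can be skipped.
    target_etags = {}
    for page in paginator.paginate(Bucket=target_bucket):
        for obj in page.get("Contents", []):
            target_etags[obj["Key"]] = obj["ETag"]

    for page in paginator.paginate(Bucket=source_bucket):
        for obj in page.get("Contents", []):
            if target_etags.get(obj["Key"]) != obj["ETag"]:
                # Server-side copy; the object data never leaves AWS.
                s3.copy_object(
                    Bucket=target_bucket,
                    Key=obj["Key"],
                    CopySource={"Bucket": source_bucket, "Key": obj["Key"]},
                )

# Run from a daily cron job, e.g.:
# sync_buckets("my-source-bucket", "my-backup-bucket")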
EDIT: I came back to this post and realized that nowadays Attunity CloudBeam is also a commercial solution used by many folks.

It is now possible to replicate objects between buckets in two different regions via the AWS Console.
The official announcement on the AWS blog explains the feature.

Related

Rename/move files in aws s3 to a different folder after X days automatically

Is there any s3 job/functionality to move all files that have been there for more than 10 days to another folder automatically (using files and folders for simplicity instead of objects)?
Or that have not been modified for more than 10 days?
My purpose is to make a request using an SDK and retrieve the files that have been created in the last 10 days, without deleting the others, just moving them to a different folder.
You can use Amazon S3 Lifecycle rules (a small configuration sketch follows the quoted documentation below).
Please note that in this scenario you don't really move objects to a different folder; Lifecycle rules transition them to another storage class or expire them.
Managing object lifecycle
Define S3 Lifecycle configuration rules for objects that have a well-defined lifecycle. For example:
If you upload periodic logs to a bucket, your application might need them for a week or a month. After that, you might want to delete them.
Some documents are frequently accessed for a limited period of time. After that, they are infrequently accessed. At some point, you might not need real-time access to them, but your organization or regulations might require you to archive them for a specific period. After that, you can delete them.
You might upload some types of data to Amazon S3 primarily for archival purposes. For example, you might archive digital media, financial and healthcare records, raw genomics sequence data, long-term database backups, and data that must be retained for regulatory compliance.
With S3 Lifecycle configuration rules, you can tell Amazon S3 to transition objects to less-expensive storage classes, or archive or delete them.
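As a concrete illustration, here is a rough boto3 sketch of such a rule (the bucket name, the "incoming/" prefix, and the 10-day/365-day thresholds are just assumptions to match the question; adjust them to your actual retention needs):

import boto3

s3 = boto3.client("s3")

# After 10 days, transition objects under the "incoming/" prefix to Glacier;
# after a year, expire (delete) them.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-10-days",
                "Filter": {"Prefix": "incoming/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 10, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)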
I would suggest you consider using AWS Step Functions for this. You can implement the following workflow:
Use the S3 event to trigger the Step Functions workflow. Information on that is available here.
Use the Wait state within Step Functions to pause for 10 days (the maximum is one year). Information is available here.
After this, you can trigger a Lambda function that will move the object to a new folder in S3.
I would suggest that you move the object to a different S3 bucket rather than a different folder. This is because you want to avoid a loop where the movement of your object triggers another Step Functions workflow. You can limit the object prefix on the event rule, but it is safer not to have to worry about this.
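A rough sketch of the Lambda at the end of that workflow, assuming the Step Functions execution passes the original S3 event record through unchanged and that "my-archive-bucket" is the separate destination bucket suggested above (S3 has no native move, so a move is a copy followed by a delete):

import boto3

s3 = boto3.client("s3")
DEST_BUCKET = "my-archive-bucket"  # assumption: a separate bucket, per the advice above

def handler(event, context):
    # Assumes the original S3 notification event is the workflow input.
    record = event["Records"][0]["s3"]
    source_bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # "Move" = server-side copy, then delete the original.
    s3.copy_object(
        Bucket=DEST_BUCKET,
        Key=key,
        CopySource={"Bucket": source_bucket, "Key": key},
    )
    s3.delete_object(Bucket=source_bucket, Key=key)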

Do I need AWS S3 instead of the "uploads" folder located in my Node.js container which is hosted by AWS ECS?

I have a REST API Express.js server running in a Docker container hosted by AWS ECS. This server accepts media uploads (photos and videos) from different clients, and currently the file system of this server contains an "uploads" folder that stores, obviously, the media uploaded by the clients. Accepting and filtering the uploaded media is done using Multer and so on. I'm about to implement a simple video-on-demand streaming service for the clients (I think this will be easy after watching a YouTube tutorial, link at the bottom).
My question is should I use AWS S3 (and Lambda I think) instead of the current approach?
My concerns for the long term are:
costs
scalability
and for the short term: I'm more satisfied with the current approach because I don't need to learn new technologies like Lambda functions and the other AWS services needed.
By the way, I'm planning to implement this approach; do you think it is efficient and scalable enough compared to S3?
Decoupling storage from your business logic is not only good practice, it also has many benefits, such as:
not running out of space in your container
highly available and redundant data storage in S3
ability to horizontally scale your ECS service and load balance it
availability of your files to be processed by other applications
easy backup of your media files
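If you do move the uploads to S3, a common pattern is to have your API hand the client a pre-signed URL so the file goes straight to S3 instead of through your container. A minimal sketch in Python (the bucket and key are placeholders; the equivalent call exists in the AWS SDK for JavaScript if you stay on Node):

import boto3

s3 = boto3.client("s3")

# Generate a URL the client can PUT the file to directly; valid for one hour.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-media-bucket", "Key": "uploads/photo-123.jpg"},
    ExpiresIn=3600,
)
print(upload_url)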

Where to store configuration of an elastic beanstalk application?

I created a small Node.js application which runs on AWS Elastic Beanstalk. At the moment the application configuration is stored in a JSON file. I want to create a frontend to manipulate some parts of this configuration, and I read about the MEAN stack, but Amazon has no MongoDB support. So what is the best practice in AWS Elastic Beanstalk for handling an application's configuration? Storing it in an S3 bucket is very easy, but I think the performance is not very good.
Best regards
How much configuration data are you talking about? If it is a typical small amount, and it only changes once in a while, but you need it available each time the application restarts, S3 is probably the easiest and cheapest option. Spinning up a MongoDB instance just to store a small amount of mostly read-only data is probably overkill. What makes you think the performance is not very good?
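For a typical small, read-mostly configuration file, reading it from S3 at startup is only a few lines. A sketch with boto3 (bucket and key names are placeholders; the question's Node.js app would use the equivalent AWS SDK for JavaScript call):

import json
import boto3

s3 = boto3.client("s3")

# Fetch the configuration once at application start and keep it in memory;
# re-read it periodically or on demand if it can change while the app runs.
response = s3.get_object(Bucket="my-app-config", Key="config.json")
config = json.loads(response["Body"].read())
print(config)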
AWS usually recommends DynamoDB for such cases, but then you are getting vendor lock-in. Also, the choice of configuration storage depends on your requirements: how fast do new changes need to be applied to the instances?
A good option is to use MySQL as the configuration DB, because you avoid vendor lock-in, you can deliver configuration changes as soon as they are applied, and in the app you can use MySQL's memcached interface.

Difference between Object Storage And File Storage [closed]

Could someone please explain the difference between object storage and file storage?
I read about object storage on Wikipedia, I also read http://www.dell.com/downloads/global/products/pvaul/en/object-storage-overview.pdf, and I also read Amazon's docs (S3), OpenStack Swift, etc. But could someone give me an example to understand it better?
Is the only difference that for 'object storage' we add more metadata to objects?
For example, how would I store an image as an object using some programming language (for example, Python)?
Thanks.
IMO, Object storage has nothing to do with scale because someone could build a FS which is capable of storing a huge number of files, even in a single directory.
It is also not about the access methods. HTTP access to data in filesystems has been available in many well known NAS systems.
Storage/Access by OID is a way to handle data without bothering about naming it. It could be done on files too. I believe there is an NFS protocol extension that allows this.
I would put it this way: object storage is a (new/different) "object-centric" way of thinking about data, its access and its management.
Think about these points:
What are snapshots today? They are point in time copies of a volume. When a snapshot is taken, all files in the volume are snapped too. Whether all of them like it or not, whether all of them need it or not. A lot of space can get used(wasted?) for a complete volume snapshot while only a few files needed to be snapped.
In an object storage system, you will rarely see snapshots of volumes, objects will be snapshot-ed, perhaps automatically. This is object versioning. All objects need not be versioned, each individual object can tell if it is versioned.
How are files/volumes protected from a disaster? Typically, in a Disaster Recovery(DR) setup, entire volumes/volume-sets are setup for replication to a DR site. Again, this does not bother whether individual files want to be replicated or not. The unit of disaster protection is the volume. Files are small fry.
In an object storage system, DR is not volume centric. Object metadata can decide how many copies should exist and where(geo locations/fault domains).
Similarly for other features:
Tiering - Objects are placed in storage tiers/classes based on their metadata, independent of other unrelated objects.
Life - Objects move between tiers, change the number of copies, etc., individually, instead of as a group.
Authentication - Individual objects can get authenticated from different authentication domains if required.
As you can see, the change in thinking is that in an object store, everything is about an object.
Contrast this with the traditional way of thinking, where management and access revolve around larger containers like volumes (containing files); that is not object storage.
The features above and their object-centric-ness fit well with the requirements of unstructured data, and hence the interest.
If a storage system is object(or file) centric instead of volume centric in its thinking, (irrespective of the access protocol or the scale,) it is an object storage system.
Disclosure - I work for a vendor (NetApp) that develops and sells both large filesystem and object storage platforms. I'll try to keep this as implementation-neutral as I can, but my cognitive biases may unconsciously influence my answer.
There are many differences from access, programmability, and implementation points of view; however, given this is likely to be read primarily by programmers rather than infrastructure or storage people, I'll focus on that aspect here.
The main difference from an external/programming point of view is that an object in an object store is created, deleted, or updated as a complete unit; you can't append data to an object and you can't update a portion of an object "in place", though you can replace it while still keeping the same object ID. Creating, reading, updating, and deleting objects is typically done via relatively straightforward APIs, which are almost always RESTful or REST based, and this encourages a mindset that the store is a programmable resource or perhaps a multi-tenant remote service. While most of the object stores I'm aware of support byte-range reads within an object, in general object stores were initially designed to work with whole objects. Good examples of object storage APIs are those used by Amazon S3 (the de facto standard for object storage access), OpenStack Swift, and the Azure Blob Service REST API. Describing the back end implementations behind these APIs would be a book all by itself.
On the other hand, files in a filesystem have a broader set of functions that can be applied to them, including appending data and updating data in place. The programming model is more complex than an object store; it is now almost always accessed programmatically via a "POSIX" style of interface, generally tries to make the most efficient use of CPU and memory, and encourages a mindset that the filesystem is a private local resource. NFS and SMB do allow a filesystem to be made available as a multi-tenanted resource, however these are often treated with suspicion by programmers as they sometimes have subtle differences in how they react compared to "local" filesystems despite their full support for POSIX semantics. To update files in a local filesystem, you will probably use APIs such as https://www.classes.cs.uchicago.edu/archive/2017/winter/51081-1/LabFAQ/lab2/fileio.html or https://msdn.microsoft.com/en-us/library/mt794711(v=vs.85).aspx. Talking about the relative merits of filesystem implementations, e.g. NTFS vs BTRFS vs XFS vs WAFL vs ZFS, has a tendency to result in a religious war that is rarely worth anyone's time, though if you buy me a beer I'll happily share my opinions with you.
From a use-case point of view, if you wanted to keep a large number of photos, videos, or binary build artefacts, then an object store is often a good choice. If, on the other hand, you wanted to persistently store data in a binary tree and update that data in place on the storage media, then an object store simply wouldn't work, and you'd be much better off with a filesystem (you could also use raw block devices for that, but I haven't seen anybody do that since the early 90s).
The other big differences are that filesystems are designed to be strongly consistent and are usually accessed over low- to moderate-latency (50 microseconds - 50 milliseconds) networks, whereas object stores are often eventually consistent and distributed over a shared-nothing infrastructure connected together over low-bandwidth, high-latency wide area networks, and their time to first byte can sometimes be measured in multiples of whole seconds. Performing lots of small (4K - 16K) random reads from an object store is likely to cause frustration and performance problems.
The other main benefit of an object store vs a filesystem is that you can be reasonably sure that anything you put in an object store will remain there until you ask for it again, and that it will never run out of space so long as you keep paying the monthly charges. These resources are generally run at large scale with built-in replication, version control, automated recovery, etc., and nothing short of a Hurricane Harvey style disaster will make the data disappear (even then, you have easy options to make another copy in another location). With a filesystem, especially one that you are expecting you or your local operations people to manage, you have to hope that everything is getting backed up and that it doesn't fill up accidentally and cause everything to melt down when you can't update your data anymore.
I've tried to be concise, but to add to the confusion the words "filesystem" and "object store" get applied to things which are nothing like the descriptions I've used above. For example, NFS, the Network File System, isn't actually a filesystem; it's a way of implementing the POSIX storage APIs via remote procedure calls. And VMware's VSAN stores its data in something they refer to as an "object store" which allows high-speed in-place updates of the virtual machine images.
There are some very fundamental differences between File Storage and Object Storage.
File storage presents itself as a file system hierarchy with directories, sub-directories and files. It is great and works beautifully when the number of files is not very large. It also works well when you know exactly where your files are stored.
Object storage, on the other hand, typically presents itself via a RESTful API. There is no concept of a file system. Instead, an application would save an object (file + additional metadata) to the object store via the PUT API, and the object storage would save the object somewhere in the system. The object storage platform would give the application a unique key (analogous to a valet ticket) for that object, which the application would store in the application database. If an application wanted to fetch that object, all it would need to do is give the key as part of the GET API, and the object would be fetched by the object storage.
Hope this is now clear.
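To make the valet-ticket analogy concrete, here is a minimal sketch against S3 (the bucket name and key are placeholders; in a real application you would store the key in your database next to the record it belongs to):

import boto3

s3 = boto3.client("s3")

# PUT: hand the object over and keep the key (the "valet ticket") in your database.
key = "photos/2024/cat.jpg"
with open("cat.jpg", "rb") as f:
    s3.put_object(Bucket="my-object-store", Key=key, Body=f)

# GET: later, present the key and get the object back.
obj = s3.get_object(Bucket="my-object-store", Key=key)
data = obj["Body"].read()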
The simple answer is that object accessed storage systems or services utilize APIs and other object access methods for storing, retrieving and looking up data as opposed to traditional file or NAS. For example with file or NAS, you access storage using NFS (Network File System) or CIFS (e.g. windows file share) aka SMB aka SAMBA where the file has a name/handle with associated meta data determined by the file system.
The meta data includes info about create, access, modified and other dates, permissions, security, application or file type, or other attributes. Files are limited by the file system in terms of their size, as well as the number of files per file system. Likewise, file systems are limited by their total or aggregate size in terms of space capacity and the number of files in the filesystem.
Object access is different in that while file or NAS front-ends or gateways or plugins are available for many solutions or services, primary access is via an API where an object can be of arbitrary size (up to the maximum of the object system) along with variable-sized metadata (depending on the object system/service implementation). With most object storage systems/services you can specify anywhere from a few Kbytes to GBytes of user-defined metadata. What would you use GBytes of metadata for? How about, in addition to normal info, adding more data for policies, management, where other copies are located, thumbnails or small previews of videos, audio, etc.
Some examples of object access APIs or interfaces include Amazon Web Services (AWS) Simple Storage Service (S3) and other HTTP- and REST-based ones, and SNIA CDMI. Different solutions will also support iOS (e.g. iPhone/iPad) access, SOAP, Torrent, WebDAV, JSON, XAM among others, plus NFS/CIFS. In addition, many of the object storage systems or services support programmatic bindings for Python among others. The APIs allow you to essentially open a stream and then get or put, list, and use other functions supported by the API/system to determine how you will use it.
For example, I use both Rackspace Cloud files and Amazon S3 (in addition to EBS and Glacier) for backing up, storing, and archiving data. I can access the objects stored via a web browser or tools including Jungle disk (JD) which is what I backup and synchronize files with. JD handles the object management and moves data to both Rackspace as well as Amazon for me. If I were inclined, I could also do some programming using the APIs and then directly access either of those sites supplying my security credentials to do things with my stored objects.
Here is a link to object and cloud storage primer from a session I did in Holland last year that has some simple examples of objects and access.
http://storageio.com/DownloadItems/Nijkerk_Nov2012/SIO_IndustryTrends_CloudObjectStorage.pdf
Using the programmatic binding, you would define your data structures or objects in your program and then use the APIs or calls for storing, retrieving, listing of data, meta data access etc. If there is a particular object storage system, software or service that you are looking to work with or need to know how to program to, go to their site and you should find their SDK or API info with examples. With objects, once you create your initial bucket or container on a service or with a product/system, you then simply create and store additional objects as you go.
Here is a link as an example to AWS S3 API/programming:
http://docs.aws.amazon.com/AmazonS3/latest/API/IntroductionAPI.html
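As a small illustration of the user-defined metadata discussed above, here is a boto3 sketch (the bucket, key and metadata fields are made up; note that S3 itself caps user-defined metadata at around 2 KB per object, so the multi-gigabyte metadata mentioned above applies to other object systems):

import boto3

s3 = boto3.client("s3")

# Attach application-level context to the object itself, not just a file name.
with open("interview-042.mp4", "rb") as f:
    s3.put_object(
        Bucket="my-media-archive",
        Key="videos/interview-042.mp4",
        Body=f,
        Metadata={"owner": "marketing", "retention": "7y", "camera": "drone-2"},
    )

# The metadata comes back with the object, or with a HEAD request.
head = s3.head_object(Bucket="my-media-archive", Key="videos/interview-042.mp4")
print(head["Metadata"])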
In theory, object storage systems are talked about as having unlimited numbers of objects, or unlimited object size; in reality, most systems, solutions, software or services are limited by what they have either tested or currently support, which can be billions of objects, with object sizes of 5 GBytes or larger. Pay attention to the limits on specific services or products as to what is actually tested and supported vs. what is architecturally possible or what is implemented on a WebEx or PowerPoint slide.
Again, it's very product/service/software dependent as to the number of objects, size of the objects, size of metadata, and amount of data that can be moved in/out via their APIs. However, it is generally safe to assume that object storage can be much more scalable (depending on implementation) than file systems (without using global namespaces, federation, file virtualization or other techniques).
Also in my book Cloud and Virtual Data Storage Networking (CRC Press) that is Intel Recommended Reading, you will find more information about cloud and object storage.
I will be adding more related material to www.objectstorage.us soon.
Cheers gs
Object Storage = Block Storage
+ Rich Metadata
- File hierarchy
Block storage uses a filesystem to point to where content is stored.
Object storage uses an identifier to point to content and its context.
This is my understanding from reading about content-addressed vs. location-addressed storage.
Block storage needs a filesystem and structuring, so with bigger file systems comes more overhead.
Object storage has a lot of context about the file and doesn't need the file hierarchy.
The explanation on page 7 of the Dell paper clearly shows this. What troubled me, though, was that it isn't explained at the scale of the hard disk itself.
I found that a hard disk itself always uses a block storage mechanism (though that seems to be changing too).
Some other insights can be found here.
This answer doesn't even explain anything about the differences.
There are some very fundamental differences between File Storage and Object Storage.
File storage presents itself as a file system hierarchy with directories, sub-directories and files. It is great and works beautifully when the number of files is not very large. It also works well when you know exactly where your files are stored.
Object storage, on the other hand, typically presents itself via a RESTful API. There is no concept of a file system. Instead, an application would save an object (file + additional metadata) to the object store via the PUT API, and the object storage would save the object somewhere in the system. The object storage platform would give the application a unique key (analogous to a valet ticket) for that object, which the application would store in the application database. If an application wanted to fetch that object, all it would need to do is give the key as part of the GET API, and the object would be fetched by the object storage.
This explains a large portion of it, but you asked about the metadata.
Object storage has no sense of folders, or any kind of organizational structure that makes it easy for a human to organize. File storage, of course, does have all those folders that make it so easy for a human to organize and shuffle through... In a server environment where the number of files reaches an astronomical scale, folders are just a waste of space and time.
Databases you say? Well they're not talking about the Object storage itself, they are saying your http service (php, webmail, etc) has the unique ID in its database to reference a file that may have a human recognizable name.
Metadata, well, where is this file stored, you ask? That's what the metadata is for. Your single file is split up into a bunch of small pieces and spread out over geographic locations, servers, and hard drives. These small pieces also contain more data: they contain parity information for the other pieces of data, or maybe even outright duplication.
The metadata is used to locate every piece of data for that file over different geographic locations, data centres, servers and hard drives as well as being used to restore any destroyed pieces from hardware failure. It does this automatically. It will even fluidly move these pieces around to have a better spread. It will even recreate a piece that is gone and store it on a new good hard drive.
This may be a simple explanation, but I think it might help you better understand. I believe file storage can do the same thing with the metadata; but file storage is storage that you can organize as a human (folders, hierarchy and such) whereas object storage has no hierarchy, no folders, just a flat storage container.
Actually, you can mount a bucket/container and access the objects or subfolders (and their objects) from Linux. For example, I have s3fs installed on Ubuntu with a mount point set up to one of my S3 buckets, and I am able to do regular cp, ls and other functions just as though it were another filesystem. The key is getting the software tool, of which there are plenty, that allows you to map a bucket/container and present it as a mount point. There are also software tools that allow you to access S3 and other buckets/containers via iSCSI in addition to as NAS.
Most companies with object based solutions have a mix of block/file/object storage chosen based on performance/cost reqs.
From a use case perspective:
Ultimately object storage was created to address unstructured data which is growing explosively, far quicker than structured data.
For example, if a database is structured data, unstructured would be a word doc or PDF.
How do you search 1 billion PDFs in a file system? (if it could even store that many in the first place).
How quickly could you search just the metadata of 1 billion files?
Object storage is currently used more for long term or archival, cheap and deep storage, that keeps track of more detail of what that data is. This metadata becomes very powerful when searching or mining very large data sets. Sometimes you can get what you need from the metadata without even accessing the data itself. Object storage solutions can typically replicate automatically with geographic failover built-in.
The problem is that applications would have to be re-written to use object access methods rather than a file hierarchy (which is simpler from an app-dev perspective). It's really a change in the philosophy of data storage, and in storing more actionable information about that data from a management standpoint as well as a usage one.
A quick example might be an MRI scan image. On a filesystem you have owner/creation date, but not much else. If it were an object, all of the information surrounding the MRI could be stored along with it in metadata, like patient name, MRI center location, the requesting Dr., insurance carrier, etc.
Block/file are better suited for local access or OLTP where performance is more important than retention and cost.
For example, you would not want to wait minutes for a Word doc to open, but you could wait a few minutes for a data mining/business intelligence process to complete.
Another example would be a legal search where you have to search everything from 5 years ago to present. With retention policies in place to decrease the active data set and cost, how would you even do that without restoring from tape?
Object storage is a great solution for replacing long term archival methods like tape.
Setting up replication and failover for block and file can get very expensive in the enterprise and usually requires very expensive software and services.
Note: At the lower level, object storage access happens via the RESTful API which is more like a web request than accessing a file at the end of a path.
Here is a good article worth reading:
https://cloudian.com/blog/object-storage-vs-file-storage/
Cited from the article:
To start, object storage overcomes many of the limitations that file storage faces. Think of file storage as a warehouse. When you first put a box of files in there, it seems like you have plenty of space. But as your data needs grow, you’ll fill up the warehouse to capacity before you know it. Object storage, on the other hand, is like the warehouse, except with no roof. You can keep adding data infinitely – the sky’s the limit.
If you’re primarily retrieving smaller or individual files, then file storage shines with performance, especially with relatively low amounts of data. Once you start scaling, though, you may start wondering, “How am I going to find the file I need?”
In this case, you can think of object storage as valet parking while file storage is more like self-parking (yes, another analogy, but bear with me!). When you pull your car into a small lot, you know exactly where your car is. However, imagine that lot was a thousand times larger – it’d be harder to find your car, right?
Because object storage has customizable metadata and all the objects live on a flat address space, it’s similar to handing your keys over to a valet. Your car will be stored somewhere, and when you need it, the valet will get the car for you. It might take a little longer to retrieve your car, but you don’t have to worry about wandering around looking for it.
I think the white paper explains the idea of object storage quite well. I am not aware of any standard way to use object storage devices (in the sense of a SCSI OSD) from a user application.
Object storage is in use in some large-scale storage products like the storage appliances of Panasas. However, these appliances then export a file system to the end user. It is IMHO fair to say that the T10 OSD idea never really gained momentum.
Related ideas to the OSD standard can be found in cloud storage systems like S3 and RADOS.

Use Sql Server FileStream or traditional File Server?

I am designing a system that's going to have about 10 million+ users, each with a photo of about 1-2 MB.
We are going to deploy both database and web app using Microsoft Azure
I am wondering how I should store the photos; there are currently two options:
1. Store all photos using SQL Server FILESTREAM
2. Use a file server
I haven't experienced such large scale BLOB data using FileStream.
Can anybody give me any suggestions? The pros and cons?
Any input from anyone with Microsoft Azure experience concerning large photo storage is really appreciated!
Thx
Ryan.
I vote for neither. Use Windows Azure Blob storage. Simple REST API, $0.15/GB/month. You can even serve the images directly from there, if you make them public (like <img src="http://myaccount.blob.core.windows.net/container/image.jpg" />), meaning you don't have to funnel them through your web app.
Database is almost always a horrible choice for any large-scale binary storage needs. Database is best for relational-only systems, and instead, provide references in your database to the actual storage location. There's a few factors you should consider:
Cost - SQL Azure costs quite a lot per GB of storage, and has small storage limitations (50GB per database), both of which make it a poor choice for binary data. Windows Azure Blob storage is vastly cheaper for serving up binary objects (though has a bit more complicated pricing system, still vastly cheaper per GB).
Throughput - SQL Azure has pretty good throughput, as it can scale well; however, Windows Azure Blob storage has even greater throughput, as it can scale to any number of nodes.
Content Delivery Network - A feature not available to SQL Azure (though a complex, custom wrapper could be created), but can easily be setup within minutes to piggy-back off your Windows Azure Blob storage to provide limitless bandwidth to your end-users, so you never have to worry about your binary objects being a bottleneck in your system. CDN costs are similar to that of Blob storage, but you can find all that stuff here: http://www.microsoft.com/windowsazure/pricing/#windows
In other words, no reason not to go with Blob storage. It is simple to use, cost effective, and will scale to any needs.
I can't speak on anything Azure related but for my money the biggest advantage of using FILESTREAM is that that data can get backed up inside the normal SQL Server backup process. The size of the data that you are talking about also suggests that FILESTREAM may be a good choice as well.
I've worked on a SCM system with a RDBMS back end and one of our big decisions was whether to store the file deltas on the file system or inside the DB itself. Because it was cross-RDBMS we had to cook up a generic non-FILESTREAM way of doing it but the ability to do a single shot backup sold us.
FILESTREAM is a horrible option for storing images. I'm surprised MS ever promoted it.
We're currently using it for our images on our website. Mainly the user generated images and any CMS related stuff that admins create. The decision to use FILESTREAM was made before I started. The biggest issue is related to serving the images up. You better have a CDN sitting in front. If not, plan on your system coming to a screeching halt. Of course, most sites have a CDN, but you don't want to be at the mercy of that service going down meaning your system will get overloaded. The amount of stress put on your sql server is the main problem here.
In terms of ease of backup. Your tradeoff there is that your db is MUCH MUCH LARGER and, therefore, the backup takes longer. Potentially, much longer and the system runs slower during the backup. Not to mention, moving backups around takes longer (i.e., restoring prod data in a dev environment or on local machines for dev purposes). Don't use this as a deciding factor.
Most cloud services have automatic redundancy of any files that you store on their system (i.e., aws's S3 and azure's blob). If you're on premise, just make sure you use a shared location for the images and make sure that location is backed up. I think the best option is to set it up so each image (other UGC file types too) has an entry in your db with a path to that file. Going one step further, separate the root path into a config setting and only store the remaining path with the entry. For example, root path in config might be a base url, a shared drive or virtual dir, or a blank entry. Then your entry might have "/files/images/image.jpg". This way, if you move your filestore, you can just update the root config. I would also suggest creating a FileStoreProvider interface (Singleton) that can be used for managing (saving, deleting, updating) these files. This way, if you switch between AWS, Azure, or on premise, you can just create a new Provider.
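A rough sketch of that provider idea, shown in Python for brevity (the names are made up, and the same shape translates directly to whatever language the application is written in):

import shutil
from abc import ABC, abstractmethod

class FileStoreProvider(ABC):
    """Abstracts where the binary files live so the backend can be swapped."""

    @abstractmethod
    def save(self, relative_path: str, local_file: str) -> None: ...

    @abstractmethod
    def url_for(self, relative_path: str) -> str: ...

class LocalDiskProvider(FileStoreProvider):
    def __init__(self, root: str):
        self.root = root  # the configurable "root path" described above

    def save(self, relative_path: str, local_file: str) -> None:
        shutil.copy(local_file, f"{self.root}/{relative_path}")

    def url_for(self, relative_path: str) -> str:
        return f"{self.root}/{relative_path}"

# An S3Provider or AzureBlobProvider would implement the same two methods,
# so switching stores only means constructing a different provider.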
I have a client-server DB; I manage many files (doc, txt, pdf, ...) and all of them go in a FILESTREAM BLOB. Customers have 50+ MB DBs. If you can do the same in Azure, go for it. Having everything in the DB is a wonderful thing. It is considered good policy also for Postgres and MySQL.
