Storing data persistently on IPFS

Recently I developed an alternative to Google Drive using IPFS (the decentralized storage technology). The app served its purpose but suffered from two major problems:
1. The app was super cool for small files, but on large files the download was very slow and eventually stopped.
2. Data was not persistent, meaning I lost some files a few hours after uploading them.
My questions:
Is IPFS a persistent storage system? If not, what measures can be taken to make it persistent?

I understand your question, so let me take the points in turn.
Is IPFS a persistent storage system?
IPFS is a distributed system that can (among other things) resolve a content hash to the content it represents. This content can never truly be guaranteed to be available (maybe you're offline, maybe all of the peers with it are offline, maybe you're behind a powerful NAT, maybe the network split and the peers with the content are on the other partition).
In a simple decentralized system like IPFS, an object is online only while the nodes holding it spend resources to keep serving it.
As for your second part: IPFS is mainly about permanence, and permanence != persistence. IPFS itself currently handles this by means of "pinning", which excludes an object and its children from garbage collection within one IPFS node.
Work is ongoing to make it more persistent. One effort is Filecoin (see the paper), and there are a couple of concrete ideas for an ipfs-cluster tool.
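For illustration, here is a minimal sketch of the single-node pinning mentioned above, shelling out to the IPFS CLI from Python. It assumes a local ipfs binary is installed and initialized; example.pdf is a hypothetical file.

```python
import subprocess

def add_and_get_cid(path: str) -> str:
    """Add a file to the local IPFS node and return its content hash (CID).
    Note: `ipfs add` also pins the content on this node by default."""
    out = subprocess.run(
        ["ipfs", "add", "--quieter", path],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def pin(cid: str) -> None:
    """Pin a CID so this node's garbage collector will not reclaim it.
    Pinning only protects the data on THIS node; other peers may still
    drop their copies, so network-wide persistence is not guaranteed."""
    subprocess.run(["ipfs", "pin", "add", cid], check=True)

if __name__ == "__main__":
    cid = add_and_get_cid("example.pdf")  # hypothetical file
    pin(cid)  # redundant after add (which pins by default); shown explicitly
    print("pinned:", cid)
```

In practice this means that to keep your users' files alive you would run (or pay for) at least one always-on node that pins every uploaded CID.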

Related

How do I store images in a distributed system the right way?

I'm trying to create a distributed system consisting of a mobile app, a web user panel, and an API that communicates with the DB. I want the user to be able to upload a profile image both from the mobile app and the web user panel, but what is the best and "right" way to store images across a distributed system? I can't really find anything describing best practices on this topic.
I know that the file path should be in the database and the image in a file system. But should that file system be on the API server, or where?
Here is a diagram of what I think the distributed system should look like.
The "right" way to do something complex like image hosting depends on factors like expected traffic and performance expectations. Designing large systems involves a lot of tradeoffs, so it's best to nail down what requirements are for your system are in order to make decisions that serve those requirements.
As for your question, this diagram is roughly correct - you want to store the location of the uploaded image separate from the image itself. If you wanted your solution to be more scalable, an approach would be turning your file system into its own service with its own API. You would store a hash of the file in your database to reference it rather than its path, then request that image (or a URL to that image) from the new storage service by asking the storage service's API for the file that has the stored hash.
The reason this is more scalable is that the storage service is free to become its own distributed system when we don't require that every file has an associated file system path within a single namespace. A hash is a good candidate for a replacement of the filesystem path, but you could come up with your own storage ID scheme depending on your needs.
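A minimal sketch of that idea, assuming a single-machine storage service with a hypothetical on-disk layout; the essential point is that the database keeps only the content hash, and the storage service is free to map hashes to bytes however it likes:

```python
import hashlib
from pathlib import Path

STORE = Path("storage-root")  # hypothetical storage-service data directory

def put(data: bytes) -> str:
    """Store bytes under their SHA-256 hash; the hash is what the DB keeps."""
    digest = hashlib.sha256(data).hexdigest()
    path = STORE / digest[:2] / digest  # shard by prefix to keep dirs small
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)
    return digest

def get(digest: str) -> bytes:
    """Fetch bytes by hash; callers never see a filesystem path."""
    return (STORE / digest[:2] / digest).read_bytes()
```

Because the key is derived from the content rather than a location, the storage layer can later relocate, shard, or replicate files across machines without invalidating anything stored in the database.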
However, this may be wildly out of scope for what you are trying to design. If you only expect to have a few thousand users, storing your images and database on your API server in the file system isn't necessarily wrong, but you might experience growing pains if the requirements of your system grow.
Google's site reliability engineer classroom has a lesson on building a distributed image server, which is an adjacent problem to what you're looking to do: https://sre.google/classroom/imageserver/

Best persistent data storage system for an alternative to global variables?

I am building a Node.js application which uses a few global variables to track data such as online users and statuses, information about other servers, and ongoing events, but having this information be lost in the event of a server restart/crash is not ideal.
As these things are frequently read and modified, I figure it would not be a good idea to put that extra strain on my existing MySQL database. I have looked into Redis, but unfortunately my application is hosted on a Windows server, so I would have to use an old unsupported version of it, which isn't ideal.
I'm currently considering setting up a NoSQL database such as MongoDB, but I'm not sure whether this is an efficient solution and whether it would be too much for my relatively weak server to have an application and two different databases running.
What would be the best solution for persistent storage of data that needs to be frequently accessed and updated by an application?
Making my comments into an answer...
If it's a reasonable amount of data, you can just write JSON to a single data file. No database required. Just overwrite the file with a new block of JSON to save the new state. This is very fast, efficient and simple. I've used this before as a quick and easy way to regularly save snapshots of state that you want to be able to reload if your server restarts. Read the state into memory upon server start, then use it from memory, then regularly save a new snapshot to disk however often your application desires.
If some data changes a lot and some data doesn't change very much, you can break the data into multiple files so you're writing less data on the more frequent interval. Obviously, there is a threshold of amount of data or frequency of writes or complexity of data access where a database would be warranted, but you should at least consider the simpler option first and only add a new database when you think you really need it.
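A minimal sketch of that snapshot pattern (in Python for brevity, though the question is about Node.js; the pattern is language-agnostic). The write is atomic: write to a temp file, then rename, so a crash mid-write cannot corrupt the previous snapshot:

```python
import json
import os
import tempfile

STATE_FILE = "state.json"  # hypothetical snapshot location

def load_state() -> dict:
    """Read the last snapshot on server start; fall back to empty state."""
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def save_state(state: dict) -> None:
    """Overwrite the snapshot atomically: temp file first, then rename."""
    fd, tmp = tempfile.mkstemp(dir=".", suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, STATE_FILE)  # atomic on both POSIX and Windows

state = load_state()            # on startup
state["online_users"] = 42      # mutate in memory as the app runs
save_state(state)               # call periodically, e.g. on a timer
```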
If you cluster your servers in the future, that would argue for a multi-user database (one with appropriate concurrency-management features) as your master keeper of state. But you're going to have other design issues to work through if you're trying to share multi-user state (like online status) across all clustered servers: you can no longer keep that state in memory on any one server unless all state changes are broadcast to all servers so they can update their in-memory copies, or unless you make users sticky to a particular server (which complicates load balancing in clustering). That does somewhat call for a Redis-like central store that all clustered servers can access.

If a node of a DHT fails, will the values become unavailable?

I'm reading up on DHTs, but I'm struggling to find information on what the consequences are for DHT values when a node fails.
As far as I understand, without redundancy of data (hash table values) the failure of a single node would simply make the values stored in that node unavailable. But if I wanted to use DHTs as storage for any system, I would like that system to be able to rely on the availability of all storage at any time, right? Maybe data redundancy is outsourced as an independent problem here, but then the decentralization of a DHT would introduce additional points of failure, which seems like a huge downside of DHTs.
So how are values kept accessible, if the node responsible for those values fails?
As far as I understand, without redundancy of data (hash table values) the failure of a single node would simply make the values stored in that node unavailable.
That is tautological. Yes, if you choose no redundancy then there is no redundancy.
But if I wanted to use DHTs as storage for any system, I would like that system to be able to rely on the availability of all storage at any time, right?
That depends on how much availability you actually need. No system is 100% reliable.
And DHTs usually are not used as a storage system. Not for long-lived bulk data anyway. It should be considered a dynamic value lookup system, similar to DNS, but distributed and peer-to-peer.
So how are values kept accessible, if the node responsible for those values fails?
The simplest approach is to publish the data with redundancy, i.e. write it to multiple nodes: either to the N nodes closest to the target ID, or with some other deterministic key derivation that can choose multiple addresses (see the sketch below).
The responsibility of republishing the data to compensate for churn of storage nodes can also lie with the originator of the data. This keeps the implementation complexity and the security/game-theoretic aspects simple.
Storage nodes themselves could also perform redundancy republishing to ensure that data remains available in the absence of the originating node. The problem with this approach is that it is difficult to secure and incentivize correctly on public networks, especially when there are multiple implementations.
In closed environments this is more feasible.
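As a sketch of the first approach, the publisher can derive N deterministic keys by salting the hash of the original key and write the value under each, so a lookup succeeds as long as any one replica node is reachable. dht_put/dht_get are hypothetical stand-ins for your DHT's API (backed here by a dict so the sketch runs):

```python
import hashlib

# Stand-in for a real DHT: in a live network each key would land on a
# different peer; a dict keeps this sketch self-contained and runnable.
_fake_dht: dict[bytes, bytes] = {}

def dht_put(k: bytes, v: bytes) -> None:      # hypothetical network call
    _fake_dht[k] = v

def dht_get(k: bytes) -> bytes | None:        # hypothetical network call
    return _fake_dht.get(k)

N_REPLICAS = 3  # degree of redundancy, chosen by the publisher

def replica_keys(key: bytes, n: int = N_REPLICAS) -> list[bytes]:
    """Derive n deterministic keys by salting the hash of `key`; any party
    can recompute these, so readers and republishers agree on replica
    locations without coordination."""
    return [hashlib.sha1(key + bytes([i])).digest() for i in range(n)]

def redundant_put(key: bytes, value: bytes) -> None:
    for k in replica_keys(key):
        dht_put(k, value)    # each replica lands on a different node

def redundant_get(key: bytes) -> bytes | None:
    for k in replica_keys(key):
        v = dht_get(k)       # try replicas until one node answers
        if v is not None:
            return v
    return None              # every replica node is unreachable
```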

Hyperledger Fabric private data collection to distribute large files

We are currently researching Hyperledger Fabric, and from the documentation we know that a private data collection can be set up among some subset of organizations. There would be a private state DB (a.k.a. side DB) on each of these organizations, and per my understanding the side DB is just like a normal state DB, which typically uses CouchDB.
One of our main requirements is that we have to distribute files (e.g. PDFs) among some subset of the peers. Each file has to be disseminated and stored at the related peers, so centralized storage like AWS S3 or other cloud/server storage is not acceptable. As the files may be large, the physical copies must be stored and disseminated off-chain; the transaction block may only store the hash of these documents.
My idea is that we may make use of the private data collection and the side DB. The physical files can be stored in the side DB (maybe in the form of a base64 string?) and can be distributed via the gossip protocol (a P2P protocol), which is a feature of Hyperledger Fabric. The hash of the document, along with other transaction details, can be stored in a block as usual. As these are all native features of Hyperledger Fabric, I expect the transfer of the files via the gossip protocol and the creation of the corresponding block to be in sync.
My questions are:
Is this way feasible to achieve the requirement? (Distribution of the files to different peers while creating a new block) I kinda feel like it is hacky.
Is this a good way / practice to achieve what we want? I have been doing research but I cannot find any implementation similar to this.
Most of the tutorials I found online presuppose that the files can be stored in a single centralized store like the cloud or some kind of server, while our requirement demands distribution of the files as well. Is my idea described above acceptable and feasible? We are very new to blockchain and any advice is appreciated!
Is this way feasible to achieve the requirement? (Distribution of the files to different peers while creating a new block) I kinda feel like it is hacky.
The workflow of private data distribution is that the orderer bundles the private data transaction, containing only a hash to verify the data, into a new block. So you don't have to build a workaround for this, since private data provides it by default. The data itself gets distributed between authorized peers via the gossip data dissemination protocol.
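To make the hash-on-chain side concrete, here is a minimal sketch of the digest computation; invoice.pdf is a hypothetical file, and the Fabric submission itself is omitted. The digest is what ends up in the block, while the file bytes travel off-chain through the private data collection:

```python
import hashlib

def document_digest(path: str) -> str:
    """SHA-256 over the raw file bytes, streamed to handle large PDFs."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, on_chain_digest: str) -> bool:
    """A receiving peer recomputes the digest and compares with the block."""
    return document_digest(path) == on_chain_digest

digest = document_digest("invoice.pdf")  # this string goes into the block
```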
Is this a good way / practice to achieve what we want? I have been doing research but I cannot find any implementation similar to this.
Yes and no, sorry to say. It depends on your file sizes and volume. Fabric is capable of providing really high throughput. I would test things out and see whether it meets your requirements.
The other approach would be a workaround using IPFS (a P2P file system). You can read more about that approach here.
And here is an article discussing storing "larger files" on chain. Maybe this gives some constructive insights as well. But keep in mind it is an older article.
Check out IBM Blockchain Document Store; it is an implementation of storing any document (PDF or otherwise) both on and off chain. It has been done.
And while the implementation isn't publicly available, there is vast documentation on its usage; you can probably glean some useful information from it.

Difference between Object Storage And File Storage

Could someone explain the difference between object storage and file storage, please?
I read about object storage on Wikipedia, I read http://www.dell.com/downloads/global/products/pvaul/en/object-storage-overview.pdf, and I also read Amazon's docs (S3), OpenStack Swift, etc. But could someone give me an example to understand it better?
Is the only difference that for "object storage" we add more metadata to the objects?
For example, how would one store an image as an object using some programming language (for example Python)?
Thanks.
IMO, object storage has nothing to do with scale, because someone could build a FS capable of storing a huge number of files, even in a single directory.
It is also not about the access methods. HTTP access to data in filesystems has been available in many well-known NAS systems.
Storage/access by OID is a way to handle data without bothering about naming it. It could be done on files too. I believe there is an NFS protocol extension that allows this.
I would put it this way: object storage is a (new/different) "object centric" way of thinking about data, its access, and its management.
Think about these points:
What are snapshots today? They are point-in-time copies of a volume. When a snapshot is taken, all files in the volume are snapped too, whether they like it or not, whether they need it or not. A lot of space can get used (wasted?) for a complete volume snapshot when only a few files needed to be snapped.
In an object storage system, you will rarely see snapshots of volumes; objects will be snapshotted, perhaps automatically. This is object versioning. Not all objects need to be versioned; each individual object can tell whether it is versioned.
How are files/volumes protected from a disaster? Typically, in a Disaster Recovery (DR) setup, entire volumes/volume-sets are set up for replication to a DR site. Again, this does not consider whether individual files want to be replicated or not. The unit of disaster protection is the volume. Files are small fry.
In an object storage system, DR is not volume-centric. Object metadata can decide how many copies should exist and where (geo locations/fault domains).
Similarly for other features:
Tiering - Objects are placed in storage tiers/classes based on their metadata, independent of other unrelated objects.
Life - Objects move between tiers, change their number of copies, etc., individually instead of as a group.
Authentication - Individual objects can get authenticated from different authentication domains if required.
As you can see, the change in thinking is that in an object store, everything is about an object.
Contrast this with the traditional way of thinking, where management and access revolve around larger containers like volumes (containing files); that is not object storage.
The features above and their object-centricity fit well with the requirements of unstructured data, hence the interest.
If a storage system is object (or file) centric instead of volume-centric in its thinking (irrespective of the access protocol or the scale), it is an object storage system.
Disclosure - I work for a vendor (NetApp) that develops and sells both large filesystem and object storage platforms; I'll try to keep this as implementation-neutral as I can, but my cognitive biases may unconsciously influence my answer.
There are many differences from access, programmability, and implementation points of view; however, given this is likely to be read primarily by programmers rather than infrastructure or storage people, I'll focus on that aspect here.
The main difference from an external/programming point of view is that an object in an object store is created, deleted, or updated as a complete unit: you can't append data to an object and you can't update a portion of an object "in place"; you can, however, replace it while keeping the same object ID. Creating, reading, updating, and deleting objects is typically done via relatively straightforward APIs, which are almost always REST-ful or REST-based and encourage a mindset that the store is a programmable resource, or perhaps a multi-tenant remote service. While most of the object stores I'm aware of support byte-range reads within an object, in general object stores were initially designed to work with whole objects. Good examples of object storage APIs are those used by Amazon S3 (the de facto standard for object storage access), OpenStack Swift, and the Azure Blob Service REST API. Describing the back-end implementations behind these APIs would be a book all by itself.
On the other hand, files in a filesystem have a broader set of functions that can be applied to them, including appending data and updating data in place. The programming model is more complex than an object store's; it is now almost always accessed programmatically via a "POSIX" style of interface, generally tries to make the most efficient use of CPU and memory, and encourages a mindset that the filesystem is a private local resource. NFS and SMB do allow a filesystem to be made available as a multi-tenanted resource; however, these are often treated with suspicion by programmers, as they sometimes have subtle differences in how they react compared to "local" filesystems despite their full support for POSIX semantics. To update files in a local filesystem, you will probably use APIs such as https://www.classes.cs.uchicago.edu/archive/2017/winter/51081-1/LabFAQ/lab2/fileio.html or https://msdn.microsoft.com/en-us/library/mt794711(v=vs.85).aspx. Talking about the relative merits of filesystem implementations, e.g. NTFS vs BTRFS vs XFS vs WAFL vs ZFS, has a tendency to turn into a religious war that is rarely worth anyone's time, though if you buy me a beer I'll happily share my opinions with you.
From a use-case point of view, if you wanted to keep a large number of photos, videos, or binary build artefacts, then an object store is often a good choice. If, on the other hand, you wanted to persistently store data in a binary tree and update that data in place on the storage media, then an object store simply wouldn't work, and you'd be much better off with a filesystem (you could also use raw block devices for that, but I haven't seen anybody do that since the early 90s).
The other big differences are that filesystems are designed to be strongly consistent and are usually accessed over low-to-moderate-latency (50 microseconds to 50 milliseconds) networks, whereas object stores are often eventually consistent and distributed over a shared-nothing infrastructure connected by low-bandwidth, high-latency wide area networks, and their time to first byte can sometimes be measured in multiples of whole seconds. Performing lots of small (4K - 16K) random reads from an object store is likely to cause frustration and performance problems.
The other main benefit of an object store vs a filesystem is that you can be reasonably sure that anything you put in an object store will remain there until you ask for it again, and that it will never run out of space so long as you keep paying the monthly charges. These resources are generally run at large scale with built-in replication, version control, automated recovery, etc., and nothing short of a Hurricane Harvey style disaster will make the data disappear (even then, you have easy options to make another copy in another location). With a filesystem, especially one that you are expecting you or your local operations people to manage, you have to hope that everything is getting backed up and that it doesn't fill up accidentally and cause everything to melt down when you can't update your data anymore.
I've tried to be concise, but to add to the confusion, the words "filesystem" and "object store" get applied to things which are nothing like the descriptions I've used above. For example, NFS, the Network File System, isn't actually a filesystem; it's a way of implementing the POSIX storage APIs via remote procedure calls. And VMware's VSAN stores its data in something it refers to as an "object store" which allows high-speed in-place updates of virtual machine images.
There are some very fundamental differences between File Storage and Object Storage.
File storage presents itself as a file system hierarchy with directories, sub-directories and files. It is great and works beautifully when the number of files is not very large. It also works well when you know exactly where your files are stored.
Object storage, on the other hand, typically presents itself via a RESTful API. There is no concept of a file system. Instead, an application saves an object (file + additional metadata) to the object store via the PUT API, and the object storage saves the object somewhere in the system. The object storage platform gives the application a unique key (analogous to a valet ticket) for that object, which the application stores in its application database. If an application wants to fetch that object, all it needs to do is give the key as part of the GET API, and the object is fetched by the object storage.
Hope this is now clear.
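A minimal sketch of that PUT/GET flow against S3 using boto3; the bucket name and key here are hypothetical, and credentials are assumed to be configured in the environment:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-app-objects"  # hypothetical bucket

# PUT: store the whole object under a key; the key is the "valet ticket"
# your application saves in its own database.
with open("photo.jpg", "rb") as f:
    s3.put_object(Bucket=BUCKET, Key="users/42/avatar", Body=f.read())

# GET: later, hand the key back and receive the whole object.
obj = s3.get_object(Bucket=BUCKET, Key="users/42/avatar")
data = obj["Body"].read()

# Note there is no append or in-place update: changing the object means
# uploading a complete replacement under the same key.
```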
The simple answer is that object-accessed storage systems or services use APIs and other object access methods for storing, retrieving, and looking up data, as opposed to traditional file or NAS access. For example, with file or NAS, you access storage using NFS (Network File System) or CIFS (e.g. a Windows file share), a.k.a. SMB, a.k.a. Samba, where the file has a name/handle with associated metadata determined by the file system.
That metadata includes info about creation, access, modification, and other dates, permissions, security, application or file type, and other attributes. Files are limited by the file system in terms of their size, as well as the number of files per file system. Likewise, file systems are limited by their total or aggregate size in terms of space capacity and the number of files in the filesystem.
Object access is different in that, while file or NAS front-ends, gateways, or plugins are available for many solutions or services, primary access is via an API where an object can be of arbitrary size (up to the maximum of the object system) along with variable-sized metadata (depending on the object system/service implementation). With most object storage systems/services you can specify anywhere from a few kilobytes of user-defined metadata up to gigabytes. What would you use gigabytes of metadata for? How about, in addition to the normal info, adding more data for policies, management, where other copies are located, thumbnails or small previews of videos, audio, etc.
Some examples of object access APIs or interfaces include Amazon Web Services (AWS) Simple Storage Service (S3) and other HTTP- and REST-based ones, and SNIA CDMI. Different solutions will also support iOS (e.g. iPhone/iPad) access, SOAP, Torrent, WebDAV, JSON, and XAM among others, plus NFS/CIFS. In addition, many of the object storage systems or services support programmatic bindings for Python among other languages. The APIs allow you to essentially open a stream and then get or put, list, and use other functions supported by the API/system to determine how you will use it.
For example, I use both Rackspace Cloud files and Amazon S3 (in addition to EBS and Glacier) for backing up, storing, and archiving data. I can access the objects stored via a web browser or tools including Jungle disk (JD) which is what I backup and synchronize files with. JD handles the object management and moves data to both Rackspace as well as Amazon for me. If I were inclined, I could also do some programming using the APIs and then directly access either of those sites supplying my security credentials to do things with my stored objects.
Here is a link to an object and cloud storage primer from a session I did in Holland last year that has some simple examples of objects and access.
http://storageio.com/DownloadItems/Nijkerk_Nov2012/SIO_IndustryTrends_CloudObjectStorage.pdf
Using the programmatic binding, you would define your data structures or objects in your program and then use the APIs or calls for storing, retrieving, and listing data, metadata access, etc. If there is a particular object storage system, software, or service that you are looking to work with or need to know how to program against, go to their site and you should find their SDK or API info with examples. With objects, once you create your initial bucket or container on a service or with a product/system, you then simply create and store additional objects as you go.
Here is a link as an example to AWS S3 API/programming:
http://docs.aws.amazon.com/AmazonS3/latest/API/IntroductionAPI.html
In theory, object storage systems are talked about as having unlimited numbers of objects or unlimited object size; in reality, most systems, solutions, software, or services are limited by what they have either tested or currently support, which can be billions of objects with object sizes of 5 GB or larger. Pay attention to the limits on specific services or products as to what is actually tested and supported vs. what is architecturally possible or what is implemented in a WebEx or PowerPoint.
Again, it's very service- and product/software-dependent as to the number of objects, the size of the objects, the size of metadata, and the amount of data that can be moved in/out via their APIs. However, it is generally safe to assume that object storage can be much more scalable (depending on implementation) than file systems (without using global namespaces, federation, file virtualization, or other techniques).
Also, in my book Cloud and Virtual Data Storage Networking (CRC Press), which is on the Intel Recommended Reading list, you will find more information about cloud and object storage.
I will be adding more related material to www.objectstorage.us soon.
Cheers gs
Object Storage = Block Storage
+ Rich Metadata
- File hierarchy
Block storage uses a filesystem to point to where content is stored.
Object storage uses an identifier to point to content and its context.
This is my understanding from reading Content-addressed vs. location-addressed.
Block storage needs a filesystem and structuring, so with bigger file systems comes more overhead.
Object storage has a lot of context about the file and doesn't need the file hierarchy.
The explanation on page 7 of the Dell paper shows this clearly. What also troubled me was that, at the scale of the hard disk itself, it isn't explained.
I found that a hard disk itself always uses a block storage mechanism (though that seems to be changing too).
Some other insights can be found here.
The answer above, describing object storage as a RESTful PUT/GET service with a valet-ticket key, explains a large portion of it; but more should be said about the metadata.
Object storage has no sense of folders, or any kind of organizational structure that makes it easy for a human to organize. File storage, of course, does have all those folders that make it so easy for a human to organize and shuffle through. In a server environment with a number of files at an astronomical scale, folders are just a waste of space and time.
Databases, you say? Well, they're not talking about the object storage itself; they are saying your HTTP service (PHP, webmail, etc.) has the unique ID in its database to reference a file that may have a human-recognizable name.
Metadata? Well, where is this file stored, you say? That's what the metadata is for. Your single file is split up into a bunch of small pieces and spread out over geographic locations, servers, and hard drives. These small pieces also contain more data: parity information for the other pieces of data, or maybe even outright duplication.
The metadata is used to locate every piece of data for that file across the different geographic locations, data centres, servers, and hard drives, and it is also used to restore any destroyed pieces after a hardware failure. It does this automatically. It will even fluidly move these pieces around for a better spread, and it can recreate a lost piece and store it on a new, good hard drive.
This may be a simple explanation, but I think it might help you understand better. I believe file storage can do the same thing with its metadata, but file storage is storage that you can organize as a human (folders, hierarchy, and such), whereas object storage has no hierarchy and no folders, just a flat storage container.
Actually, you can mount a bucket/container and access the objects or subfolders (and their objects) from Linux. For example, I have s3fs installed on Ubuntu, with a mount point set up to one of my S3 buckets, and I am able to do regular cp, ls, and other operations just as though it were another filesystem. The key is getting one of the many software tools that allow you to map a bucket/container and present it as a mount point. There are also software tools that allow you to access S3 and other buckets/containers via iSCSI as well as NAS.
Most companies with object-based solutions have a mix of block/file/object storage chosen based on performance/cost requirements.
From a use case perspective:
Ultimately, object storage was created to address unstructured data, which is growing explosively, far quicker than structured data.
For example, if a database is structured data, unstructured would be a word doc or PDF.
How do you search 1 billion PDFs in a file system? (if it could even store that many in the first place).
How quickly could you search just the metadata of 1 billion files?
Object storage is currently used more for long-term or archival, cheap-and-deep storage that keeps track of more detail about what that data is. This metadata becomes very powerful when searching or mining very large data sets. Sometimes you can get what you need from the metadata without even accessing the data itself. Object storage solutions can typically replicate automatically, with geographic failover built in.
The problem is that applications would have to be rewritten to use object access methods rather than a file hierarchy (which is simpler from an app-dev perspective). It's really a change in the philosophy of data storage: storing more actionable information about that data, from a management standpoint as well as usage.
A quick example might be an MRI scan image. On a filesystem you have the owner/creation date, but not much else. If it were an object, all of the information surrounding the MRI could be stored along with it in metadata, like the patient name, the MRI center location, the requesting doctor, the insurance carrier, etc.
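To illustrate with one concrete system (S3-style user-defined metadata; the bucket, key, and field names below are invented for the example), that context can ride along with the object and be read back without downloading the image itself:

```python
import boto3

s3 = boto3.client("s3")

# Attach domain metadata at upload time; S3 stores these entries as
# x-amz-meta-* headers alongside the object data.
with open("scan_0001.dcm", "rb") as f:  # hypothetical MRI file
    s3.put_object(
        Bucket="hospital-imaging",      # hypothetical bucket
        Key="mri/2024/scan_0001",
        Body=f.read(),
        Metadata={
            "patient-id": "P-1138",
            "center": "Downtown Imaging",
            "requesting-dr": "Dr. Kildare",
            "insurance": "Acme Health",
        },
    )

# head_object returns the metadata without transferring the image bytes.
meta = s3.head_object(Bucket="hospital-imaging", Key="mri/2024/scan_0001")
print(meta["Metadata"]["patient-id"])
```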
Block/file are better suited for local access or OLTP, where performance is more important than retention and cost.
For example, you would not want to wait minutes for a Word doc to open, but you could wait a few minutes for a data mining/business intelligence process to complete.
Another example would be a legal search where you have to search everything from 5 years ago to present. With retention policies in place to decrease the active data set and cost, how would you even do that without restoring from tape?
Object storage is a great solution for replacing long term archival methods like tape.
Setting up replication and failover for block and file can get very expensive in the enterprise and usually requires very expensive software and services.
Note: at the lower level, object storage access happens via a RESTful API, which is more like a web request than accessing a file at the end of a path.
Here is a good article worth reading:
https://cloudian.com/blog/object-storage-vs-file-storage/
Cited from the article:
To start, object storage overcomes many of the limitations that file storage faces. Think of file storage as a warehouse. When you first put a box of files in there, it seems like you have plenty of space. But as your data needs grow, you’ll fill up the warehouse to capacity before you know it. Object storage, on the other hand, is like the warehouse, except with no roof. You can keep adding data infinitely – the sky’s the limit.
If you’re primarily retrieving smaller or individual files, then file storage shines with performance, especially with relatively low amounts of data. Once you start scaling, though, you may start wondering, “How am I going to find the file I need?”
In this case, you can think of object storage as valet parking while file storage is more like self-parking (yes, another analogy, but bear with me!). When you pull your car into a small lot, you know exactly where your car is. However, imagine that lot was a thousand times larger – it’d be harder to find your car, right?
Because object storage has customizable metadata and all the objects live on a flat address space, it’s similar to handing your keys over to a valet. Your car will be stored somewhere, and when you need it, the valet will get the car for you. It might take a little longer to retrieve your car, but you don’t have to worry about wandering around looking for it.
I think the white paper explains the idea of object storage quite well. I am not aware of any standard way to use object storage devices (in the sense of a SCSI OSD) from a user application.
Object storage is in use in some large-scale storage products, like the storage appliances of Panasas. However, these appliances then export a file system to the end user. It is, IMHO, fair to say that the T10 OSD idea never really gained momentum.
Related ideas to the OSD standard can be found in cloud storage systems like S3 and RADOS.
