I am not a developer, so this is not a technical question. We're looking at adding Ceph storage to our current application, but I can't seem to get an answer on how Ceph stores files if we plan to use Ceph Object Storage. If I send a 1GB file to the Ceph object store, does Ceph split the file into "chunks" and store it across multiple OSDs? Or does Ceph store that single 1GB file on multiple OSDs?
Thank you for answering my question.
Yes, Ceph stripes data (similar to RAID 0). You can refer to HOW CEPH CLIENTS STRIPE DATA in the documentation for the details.
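To make that concrete, here is a tiny illustrative sketch (TypeScript, not actual Ceph code) of the arithmetic involved: a large upload is divided into fixed-size chunks, and each chunk becomes its own RADOS object that CRUSH maps to a placement group and then to OSDs. The 4 MiB chunk size is only a common default; your pool and striping settings may differ.

// Illustrative only: how a 1 GiB upload is divided into RADOS objects.
// Actual chunk size and placement depend on pool/striping settings and the CRUSH map.
const FILE_SIZE = 1024 * 1024 * 1024;  // 1 GiB file sent to the object store
const CHUNK_SIZE = 4 * 1024 * 1024;    // 4 MiB, a common default RADOS object size

const chunkCount = Math.ceil(FILE_SIZE / CHUNK_SIZE);
console.log(`1 GiB file -> ${chunkCount} RADOS objects of up to 4 MiB each`);
// Prints: 1 GiB file -> 256 RADOS objects of up to 4 MiB each
// Each of those objects is mapped independently to OSDs, so the single
// file ends up spread across many disks rather than living on one OSD.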
I programmed an API with Node.js and Express, like a million others out there, and it will go live in a few weeks. The API currently runs on a single Docker host with one volume for persistent data, containing images uploaded by users.
I'm now thinking about scalability and a high-availability setup, which is where the question about network volumes comes in. I've read a lot about NFS volumes and potentially the S3 driver for a Docker swarm.
From the information I gathered, I sorted out two possible solutions for the swarm setup:
Docker Volume Driver
I could connect each Docker host either to an S3 bucket or to EFS storage via the compose file (a rough compose sketch follows this list)
Connection should work even if I move VPS providers
Better security if I put NFS storage on the private part of the network (no S3 or EFS)
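For illustration, an NFS-backed named volume in a compose/stack file might look roughly like this; the NFS server address, export path, and image name are placeholders, and the exact driver options depend on your environment:

version: "3.8"
services:
  api:
    image: my-api:latest        # placeholder image
    volumes:
      - uploads:/app/uploads    # container path the API writes user images to
volumes:
  uploads:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.10,rw      # placeholder NFS server on the private network
      device: ":/exports/uploads"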
API Multer S3 Middleware
No attached volume required since the S3 connection is done from within the container
Easier swarm and docker management
Things have to be re-programmed and a few files need to be migrated
On a GET request, the files will be provided by AWS directly instead of the API
Please tell me your opinion on this. Am I getting this right, or am I missing something? Which route should I take? Is there something to consider with latency or permissions when mounting from different hosts?
Tips on S3 and EFS are definitely welcome, since I have no experience with them yet.
I would not recommend saving to disk; instead, use the S3 API directly - create buckets and write objects from your app code.
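As a rough sketch of what that can look like in an Express app using the multer-s3 middleware with AWS SDK v3 (bucket name, region, and route are placeholders, not a drop-in implementation):

import express from "express";
import multer from "multer";
import multerS3 from "multer-s3";
import { S3Client } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "eu-central-1" });   // placeholder region
const app = express();

// Uploads stream from the container straight to the bucket,
// so no named volume on the Docker host is required.
const upload = multer({
  storage: multerS3({
    s3,
    bucket: "my-user-images",                          // placeholder bucket
    key: (_req, file, cb) => cb(null, `${Date.now()}-${file.originalname}`),
  }),
});

app.post("/images", upload.single("image"), (req, res) => {
  // multer-s3 attaches the object's URL to req.file.location
  res.json({ url: (req.file as any)?.location });
});

app.listen(3000);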
If you're thinking of mounting a single S3 bucket as your drive, there are severe limitations with that. There's the 5 GB limit (a single S3 PUT maxes out at 5 GB). Any time you modify a file's contents in any way, the driver will re-upload the entire file. If there's any contention, it'll have to retry. Years ago, when I tried this, the FUSE drivers weren't stable enough to use as part of a production system; they'd crash and you'd have to remount. It was a nice idea, but it could only be used as an ad hoc kind of thing on the command line.
As far as NFS goes: for the love of god, don't do this to yourself - you'd be taking on the responsibility of running and maintaining it yourself.
I can't really comment on EFS; by the time it became available, most people had already learned to just use S3, and S3 is cheaper.
I have a virtual machine running Linux. I created it via the new Resource Manager, then added a data disk to it.
Then I created a new virtual machine, and I want it to use the same data disk attached to the first one (at least in read-only mode).
When I try to "attach existing disk" to this new machine I get this error:
Failed to attach existing disk 'DISK-NAME.vhd' to the virtual machine 'MACHINE-NAME'. Error: Failed to acquire lease while creating disk 'DISK-NAME.vhd' using blob with URI https://BLOB-URI-disk1.vhd. Blob is already in use.
How do I attach an existing data disk, which is in use by another machine, to my current machine?
Simply, you can't.
A disk in Azure can only be attached to a single VM at a time. In order to attach it to another VM, you need to detach it from the first.
If you need to have data shared amongst many machines, you could use Azure File shares which provides SMB 2.1 and SMB 3.0. Most modern Linux versions can connect to this quite seamlessly.
If you need block storage, i.e. sharing an actual disk, you would need to spin up a separate VM and use a protocol like iSCSI (or NFS) to share that disk amongst multiple machines.
Maybe the "StorSimple" Solution from Microsoft Azure could be a way to go? I would describe it as SAN on Azure.
I have not tested it today, but it should be possible to connect several virtual machines to it, and share the files.
You can find more information in the documentation:
Azure StorSimple
Extending Michael's answer a bit, based on your comments under his answer:
If the goal is to provide data access when a VM goes down for some reason, and the data is on an attached disk, then you can detach the disk from the downed VM and reattach it to another VM (the detach breaks the lease, and the reattach creates a new lease). But be aware: This is a time-consuming process - it might take a minute or two for each operation. But you can certainly do it, and you can do it programmatically.
Regarding disk replication: Yes, Azure disks are triple-replicated (or replicated 6 times, if you enable georeplication). But logically, it's a single disk; it's replicated for durability, not for you to attach to different replicas.
Michael mentioned Azure File Service. Maybe it wasn't clear what that was but... there's no Virtual Machine involved with File Service - it's a durable-storage SMB service, with its own SLA unrelated to your VMs. You may attach to it from multiple VMs and read/write files as you would a locally-attached disk (which seems to be the problem you're trying to tackle).
Regarding replication of data across VMs: If you choose to go this route and make physical copies yourself, it's strictly up to you how you do it - there is no "best way." But this is the type of thing database engines are built for (and you can imagine how complex they are, dealing with replication, journaling, errors, etc.).
We're using Managed VMs, and can currently serve files from the local disk in the VM (which is a standard magnetic HD), as well as serving from Google Cloud Storage (which is also backed by magnetic HDs).
https://cloud.google.com/appengine/docs/managed-vms/
As we're working with large files (high-res geo images) in a latency-sensitive context, we'd like to be able to use Local SSDs with our Managed VMs app (it's okay that the data is not persistent, it just needs to be fast and work with large files). At some point, we may want to use other services that are fast and designed for working with large files (e.g. Blobstore?), but we have a workflow already set up to work with files so it should be easiest to set up a faster file system now. Is it possible to use Local SSD storage with Managed VMs?
Here's info on Local SSDs. They need to be created at instance creation time (for Google Compute Engine instances, which Managed VMs creates behind the scenes). It looks like Local SSDs can be created via the command line (gcloud compute) or an API, but it's not clear where we'd configure any of this, since Managed VMs does the instance creation for us. Presumably we'd do this in app.yaml, the Dockerfile, or a gcloud command, but it's not obvious how this would work.
https://cloud.google.com/compute/docs/disks/local-ssd
Apologies, this isn't currently available. We don't really let you customize the VM specs, aside from the disk size. If you max out the disk size, you will get the highest IOPS we support. We kicked around the idea of tmpfs, but it's not available just yet.
I'm looking for the best way to switch between using the local filesystem and the Amazon S3 filesystem.
I think, ideally, I would like a wrapper around both filesystems that I can code against. A configuration change would tell the wrapper which filesystem to use. This is useful to me because a developer can use their local filesystem, while our hosted environments can use Amazon S3 just by changing a configuration option.
Are there any existing wrappers that do this? Should I write my own wrapper?
Is there another approach I am not aware of that would be better?
There's a project named s3fs that offers a subset of POSIX file system functionality on top of S3. There's no native Amazon-provided way to do this.
However, you should think long and hard about whether or not this is a sensible option. S3 is an object store, not a regular file system, and it has quite different performance and latency characteristics.
If you're looking for high-IOPS, NAS-style storage, then Amazon EFS (in preview) would be more appropriate. Or roll your own NFS/CIFS solution using EBS volumes, SoftNAS, or Gluster.
I like your idea to build a wrapper that can use either the local file system or S3. I'm not aware of anything existing that would provide that for you, but would certainly be interested to hear if you find anything.
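If it helps, a minimal sketch of such a wrapper might look like this (assuming Node with TypeScript and AWS SDK v3; the FileStore interface, class names, bucket, and paths are made up for illustration):

import { promises as fs } from "fs";
import * as path from "path";
import { S3Client, PutObjectCommand, GetObjectCommand } from "@aws-sdk/client-s3";

// Hypothetical interface both backends implement.
interface FileStore {
  put(key: string, data: Buffer): Promise<void>;
  get(key: string): Promise<Buffer>;
}

class LocalStore implements FileStore {
  constructor(private root: string) {}
  async put(key: string, data: Buffer) {
    const full = path.join(this.root, key);
    await fs.mkdir(path.dirname(full), { recursive: true });
    await fs.writeFile(full, data);
  }
  async get(key: string) {
    return fs.readFile(path.join(this.root, key));
  }
}

class S3Store implements FileStore {
  private s3 = new S3Client({});
  constructor(private bucket: string) {}
  async put(key: string, data: Buffer) {
    await this.s3.send(new PutObjectCommand({ Bucket: this.bucket, Key: key, Body: data }));
  }
  async get(key: string) {
    const res = await this.s3.send(new GetObjectCommand({ Bucket: this.bucket, Key: key }));
    // Recent SDK v3 versions expose transformToByteArray() on the response body stream.
    return Buffer.from(await res.Body!.transformToByteArray());
  }
}

// A configuration switch picks the backend, e.g. via an environment variable.
const store: FileStore =
  process.env.STORAGE === "s3" ? new S3Store("my-bucket") : new LocalStore("./data");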
An alternative would be to use some sort of S3 file system mount, so that your application can always use standard file system I/O but the data might be written to S3 if your system has that location configured as an S3 mount. I don't recommend this approach because I've never heard of an S3 mounting solution that didn't have issues.
Another alternative is to only design your application to use S3, and then use some sort of S3 compatible local object storage in your development environment. There are several answers to this question that could provide an S3 compatible service during development.
There's a service called JuiceFS that can do what you want.
According to their documentation:
JuiceFS is a POSIX-compatible shared filesystem specifically designed to work in the cloud.
It is designed to run in the cloud so you can utilize the cheap price of object storage service to store your data economically. It is a POSIX-compatible filesystem so you can access your data seamlessly as accessing local files. It is a shared filesystem so you can share your files across multiple machines.
S3 is one of the supported backends; you can even configure it to replicate files to a different object storage system on another cloud.
I know that we can use the VM Depot to get started with Neo4j in Azure, but one thing that is not clear is where we should physically store the DB files. I tried to look around on the net for any recommendations on where the physical files should be stored so that, when a VM crashes or restarts, the data is not lost.
Can someone share their thoughts, or point me to an address where some more details can be found on the dos and don'ts of Neo4j on Azure for a production environment?
Regards
Kiran
When you set up a Neo4j VM via VM Depot, that image, by default, configures the database files to reside within the same VM as the server itself. The location is specified in neo4j-server.properties. This lets you simply spin up the VM and start using Neo4j immediately.
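For reference, in Neo4j 2.x-era installs the relevant entry in conf/neo4j-server.properties typically looks something like the lines below; repointing it at a path on an attached data disk is how you'd move the database off the OS disk (the paths shown are just examples):

# conf/neo4j-server.properties (Neo4j 2.x era)
org.neo4j.server.database.location=data/graph.db
# e.g. change to a path on an attached data disk, such as /datadrive/neo4j/graph.db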
However: You'll soon discover that your storage space is limited (I believe the VM instances are set up with a 127GB disk). To work with larger databases, you'll need to attach an additional disk (or disks), each disk up to 1TB in size. These disks, as well as the main VM disk, are backed by blob storage, meaning they're durable - persistent disks.
How you ultimately configure this is up to you, depending on the size of the database and its purpose. The only storage to avoid, if you need persistence, is the scratch disk provided (which is a locally-attached drive with no durability).
The documentation announcing that VM doesn't say. But when you install Neo4j as a package onto other similar Linux systems (the VM in question is a Linux VM), the data usually goes into /var/lib/neo4j/data. Here's an example:
user@host:/var/lib/neo4j/data$ pwd
/var/lib/neo4j/data
user@host:/var/lib/neo4j/data$ ls
graph.db keystore log neo4j-service.pid README.txt rrd
user@host:/var/lib/neo4j/data$ cat README.txt
Neo4j Data
=======================================
This directory contains all live data managed by this server, including
database files, logs, and other "live" files.
The main directory you really have to have is the "graph.db" directory. That's going to contain the bulk of the data. May as well back up the entirety of this directory. Some of the files (like the .pid file and the README.txt) of course aren't needed.
Now, there's no guarantee that in the VM it's going to be /var/lib/neo4j/data, but it will be something very similar. What you're looking for is a directory whose name ends in .db, since that's the default for new Neo4j databases.
To narrow it down further, once you get that VM running, just run updatedb and then locate '*.db' | grep neo4j, and that's almost certain to find it quickly.