I have two VMs running CentOS 7. One is the active server and the other is the passive server.
I created a 200GB LUN in the SAN as a common share path for both VMs. If I upload files to one server, the same files can be seen on the other. It also helps in the failover case of a single VM.
Can someone please show me how to set up this method?
You might want to use an NFS server so that you can freely share directories; when one of your NFS clients goes down, you can easily start another one and share your files again.
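A minimal sketch on CentOS 7, assuming an export path of /srv/share and clients on 10.0.0.0/24 (both are placeholders, adjust to your environment):

# On the NFS server
sudo yum install -y nfs-utils
sudo mkdir -p /srv/share
echo '/srv/share 10.0.0.0/24(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo systemctl enable nfs-server
sudo systemctl start nfs-server
sudo exportfs -ra

# On each client
sudo yum install -y nfs-utils
sudo mkdir -p /mnt/share
sudo mount -t nfs <server-ip>:/srv/share /mnt/share

For active/passive failover you would still need something to move the NFS service (or a floating IP in front of it) between nodes, e.g. with Pacemaker.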
I'm trying to set up a situation where I drop files into a folder on one Azure VM, and they're automatically copied to another Azure VM. I was thinking about mapping a drive from the receiver to the sender and using a file watch/copy program to send the files over the mapped drive.
What's a good recommendation for a file watch/copy program that's simple and efficient, and what security setups do I need to get the two Azure boxes to "talk" to each other? They're in the same account/resource group/etc, so I'm not going outside of a virtual network or anything like that.
By default, VMs in the same virtual network can talk to each other (this is true even if default NSGs are applied). So you wouldn't have to do anything special to get that type of communication working.
To answer the second part, you might want to consider just using the built-in File Classification Infrastructure (FCI) rules in File Server Resource Manager to execute a short script that does the copy.
Alternatively, you could use a service such as Azure Files to share files between those servers over CIFS. It really depends on why you are trying to have a copy of the file on two servers.
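If either machine were Linux, mounting an Azure file share over CIFS looks roughly like this; the storage account name, share name, and key below are placeholders (on Windows the equivalent is a net use mapping with the storage account name and key as credentials):

sudo yum install -y cifs-utils
sudo mkdir -p /mnt/azfiles
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/azfiles \
    -o vers=3.0,username=mystorageacct,password=<storage-account-key>,dir_mode=0777,file_mode=0777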
Hope that helps!
I can see that some Web and DB servers are implemented to run as containers, so it occurred to me: why not implement a file server as a container with centralized storage (e.g. a SAN)?
Has anyone tried this before, or does anyone have a recommendation for me?
My basic idea is to use 2-3 Docker images to create the file servers (mostly Windows servers), all mounting the same storage. For the front end, I may go for DFS Namespaces to normalize the UNC path.
Windows-based images have the Server service disabled out of the box. It's impossible to start it, since the drivers have been removed as well. This will not be possible with Windows containers.
I have a virtual machine running Linux, which I created via the new "Resource Manager". Then I added a data disk to it.
Then I created a new virtual machine, and I want it to use the same data disk that is attached to the first one (at least in read-only mode).
When I try to "attach existing disk" to this new machine, I get this error:
Failed to attach existing disk 'DISK-NAME.vhd' to the virtual machine 'MACHINE-NAME'. Error: Failed to acquire lease while creating disk 'DISK-NAME.vhd' using blob with URI https://BLOB-URI-disk1.vhd. Blob is already in use.
How do I attach an existing data disk that is in use by another machine to my current machine?
Simply, you can't.
A disk in Azure can only be attached to a single VM at a time. In order to attach it to another VM, you need to detach it from the first.
If you need to share data among many machines, you could use Azure File shares, which provide SMB 2.1 and SMB 3.0. Most modern Linux versions can connect to this quite seamlessly.
If you need block storage, i.e. sharing an actual disk, you would need to spin up a separate VM and use a protocol like iSCSI (or NFS) to share that disk among multiple machines, as in the sketch below.
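A rough sketch of that approach using targetcli on a CentOS/RHEL-style VM; the device path, IQNs, and target IP are placeholders. One caveat: an ordinary filesystem such as ext4 must not be mounted read/write by more than one initiator at a time; true concurrent access needs a cluster filesystem like GFS2 or OCFS2.

# On the VM that owns the disk (the iSCSI target)
sudo yum install -y targetcli
sudo targetcli /backstores/block create name=shareddisk dev=/dev/sdc
sudo targetcli /iscsi create iqn.2016-01.com.example:shared
sudo targetcli /iscsi/iqn.2016-01.com.example:shared/tpg1/luns create /backstores/block/shareddisk
sudo targetcli /iscsi/iqn.2016-01.com.example:shared/tpg1/acls create iqn.2016-01.com.example:client1
sudo targetcli saveconfig

# On each client (the iSCSI initiator)
sudo yum install -y iscsi-initiator-utils
sudo iscsiadm -m discovery -t sendtargets -p 10.0.0.4
sudo iscsiadm -m node --login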
Maybe the "StorSimple" solution from Microsoft Azure could be a way to go? I would describe it as a SAN on Azure.
I have not tested it myself, but it should be possible to connect several virtual machines to it and share the files.
You can find more information in the documentation:
Azure StorSimple
Extending Michael's answer a bit, based on your comments under his answer:
If the goal is to provide data access when a VM goes down for some reason, and the data is on an attached disk, then you can detach the disk from the downed VM and reattach it to another VM (the detach breaks the lease, and the reattach creates a new lease). But be aware: This is a time-consuming process - it might take a minute or two for each operation. But you can certainly do it, and you can do it programmatically.
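With the current Azure CLI and managed disks, the detach/reattach looks roughly like this (resource group, VM, and disk names are placeholders; classic unmanaged VHDs like the one in the question are handled through the portal or the older CLI, but the idea is the same):

az vm disk detach --resource-group myRG --vm-name vm-primary --name shared-data-disk
az vm disk attach --resource-group myRG --vm-name vm-standby --name shared-data-disk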
Regarding disk replication: yes, Azure disks are triple-replicated (or replicated six times if you enable geo-replication). But logically it's a single disk; it's replicated for durability, not so that you can attach different replicas to different VMs.
Michael mentioned the Azure File service. Maybe it wasn't clear what that was, but there's no virtual machine involved with the File service: it's a durable-storage SMB service with its own SLA, unrelated to your VMs. You may attach to it from multiple VMs and read/write files as you would with a locally-attached disk (which seems to be the problem you're trying to tackle).
Regarding replication of data across VMs: if you choose to go this route and make physical copies yourself, it's strictly up to you how you do it; there is no "best way." But this is the type of thing database engines are built for (and you can imagine how complex they are, dealing with replication, journaling, errors, etc.).
I know that we can use VM Depot to get started with Neo4j on Azure, but one thing that is not clear is where we should physically store the DB files. I looked around the net for recommendations on where the physical files should be stored so that when a VM crashes or restarts, the data is not lost.
Can someone share their thoughts, or point me to an address where more details can be found on the dos and don'ts of Neo4j on Azure for a production environment?
Regards
Kiran
When you set up a Neo4j VM via VM Depot, that image, by default, configures the database files to reside within the same VM as the server itself. The location is specified in neo4j-server.properties. This lets you simply spin up the VM and start using Neo4j immediately.
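For example, on a Neo4j 2.x-era install the property looks something like this (the exact conf path varies by install, so treat it as an assumption):

$ grep database.location /etc/neo4j/neo4j-server.properties
org.neo4j.server.database.location=data/graph.db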
However: You'll soon discover that your storage space is limited (I believe the VM instances are set up with a 127GB disk). To work with larger databases, you'll need to attach an additional disk (or disks), each disk up to 1TB in size. These disks, as well as the main VM disk, are backed by blob storage, meaning they're durable - persistent disks.
How you ultimately configure this is up to you, depending on the size of the database and its purpose. The only storage to avoid, if you need persistence, is the scratch disk provided (which is a locally-attached drive with no durability).
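A rough sketch of moving the data onto an attached disk (the device name /dev/sdc, the mount point, and the data path are assumptions; check dmesg to find the actual device after attaching):

# Format and mount the newly attached data disk
sudo mkfs.ext4 /dev/sdc
sudo mkdir -p /datadrive
sudo mount /dev/sdc /datadrive
echo '/dev/sdc /datadrive ext4 defaults 0 2' | sudo tee -a /etc/fstab

# Move the Neo4j data directory onto the durable disk and symlink it back
sudo service neo4j-service stop
sudo mv /var/lib/neo4j/data /datadrive/neo4j-data
sudo ln -s /datadrive/neo4j-data /var/lib/neo4j/data
sudo service neo4j-service start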
The documentation announcing that VM doesn't say. But when you install Neo4j as a package onto other similar Linux systems (the VM in question is a Linux VM), the data usually goes into /var/lib/neo4j/data. Here's an example:
user@host:/var/lib/neo4j/data$ pwd
/var/lib/neo4j/data
user@host:/var/lib/neo4j/data$ ls
graph.db keystore log neo4j-service.pid README.txt rrd
user@host:/var/lib/neo4j/data$ cat README.txt
Neo4j Data
=======================================
This directory contains all live data managed by this server, including
database files, logs, and other "live" files.
The main directory you really have to have is the "graph.db" directory; that's going to contain the bulk of the data. You may as well back up this entire directory. Some of the files (like the .pid file and README.txt) of course aren't needed.
Now, there's no guarantee that in the VM it's going to be /var/lib/neo4j/data, but it will be something very similar. What you're looking for is a directory whose name ends in .db, since that's the default for new Neo4j databases.
To narrow it down further, once you get that VM running, just run updatedb and then locate '*.db' | grep neo4j (quote the glob so the shell doesn't expand it), and that's almost certain to find it quickly.
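Something like this (the output path here is illustrative, not guaranteed):

$ sudo updatedb
$ locate '*.db' | grep neo4j
/var/lib/neo4j/data/graph.db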
I am an application developer and don't know much about virtual machines (VMs).
However, our application resides on a VM, and frequent patches need to be applied to fix/update the application. For disaster recovery, it was suggested to back up everything on the server so that, once the server is restored, no application needs to be re-installed or configured.
Our network administrator thinks this can be done by cloning the VM, but if we want to back up the clone to tape, it would expose the VM to the backup drive. Anyone who can access it could erase the VM, and everything would be gone. It seems very risky.
I would appreciate it if you could tell me what you think of this, or share any suggestions.
Cloning is perfectly acceptable.
You don't have to back up to tape: it can be done to a NAS, for example, and with the proper security and setup, backups cannot be deleted by unauthorized people.
You can use a NAS together with VM backup/replication software like Veeam, Acronis, or Nakivo; that should solve your problem. All of this software has various permission settings, so you can control who can and cannot delete your data.