How does the Jiva volume folder structure look? What are the files inside the folder? - openebs

I have deployed an OpenEBS volume and see some default files like volume-head.img, volume-snap.img, etc. I need to understand the folder structure of a Jiva volume. I assume volume-snap.img is for snapshots. Is it a safe approach to keep snapshot-related files inside the original volume? How can we recover data if that same volume goes wrong?

Related

Recover deleted folder from Google VPS

We have a VPS running on Google Cloud which had a very important folder in a user directory. An employee of ours deleted that folder and we can't seem to figure out how to recover it. I came across extundelete, but it seems the partition needs to be unmounted for it to work, and I don't understand how I would do that on Google. This project took more than a year, and that was the latest copy after a fire took out the last copy from our local servers.
Could anyone please help or guide me in the right direction?
Getting any files back from your VM's disk may be tricky at best, or (more probably) impossible if the files have been overwritten.
The easiest way would be to get them back from a copy or snapshot of your VM's disk. If you have a snapshot of the disk (taken manually or automatically) from before the folder in question was deleted, you can get your files back.
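If such a snapshot exists, a minimal sketch with the gcloud CLI might look like the following (the snapshot, disk and VM names and the zone are placeholders, and the device path inside the VM will vary):
# List available snapshots, then create a new disk from the relevant one
gcloud compute snapshots list
gcloud compute disks create recovered-disk --source-snapshot=my-snapshot --zone=europe-west1-b
# Attach the new disk to the VM as a secondary disk
gcloud compute instances attach-disk my-vm --disk=recovered-disk --zone=europe-west1-b
# Inside the VM, mount it read-only and copy the folder back out
sudo mount -o ro /dev/sdb1 /mnt/recovered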
If you don't have any backups then you may try to recover the files - I've found many guides and tutorials, let me just link the ones I believe would help you the most:
Unix/Linux undelete/recover deleted files
Recovering accidentally deleted files
Get list of files deleted by rm -rf
------------- UPDATE -----------
Your last chance in this battle is to make two clones of the disk, then detach the original disk from the VM and attach one of the clones to keep your VM running. Use the second clone for any experiments, and keep the original untouched in case you mess up the second clone.
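With the gcloud CLI that disk shuffle might look roughly like this (the VM and disk names and the zone are placeholders, and the VM typically has to be stopped before its boot disk can be swapped):
gcloud compute instances stop my-vm --zone=europe-west1-b
# Make two clones of the original disk
gcloud compute disks create disk-clone-1 --source-disk=original-disk --zone=europe-west1-b
gcloud compute disks create disk-clone-2 --source-disk=original-disk --zone=europe-west1-b
# Swap the original disk out for the first clone and start the VM again
gcloud compute instances detach-disk my-vm --disk=original-disk --zone=europe-west1-b
gcloud compute instances attach-disk my-vm --disk=disk-clone-1 --boot --zone=europe-west1-b
gcloud compute instances start my-vm --zone=europe-west1-b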
Now create a new Windows VM and attach your second clone as an additional disk. At this point you're ready to try various data recovery software:
UFS Explorer
Virtual Machine Data Recovery
There are plenty of others to try from too.
Another approach would be to create an image from the original disk and export it as a VMDK image (saving it to a storage bucket). Then download it to your local computer and use, for example, VMware VMDK Recovery or other specialized software for extracting data from virtual machine disk images.
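A sketch of that export path with gcloud and gsutil (the image and bucket names are placeholders; older gcloud releases may need the beta command group for the export):
# Create an image from the original disk, then export it to a bucket as VMDK
gcloud compute images create recovery-image --source-disk=original-disk --source-disk-zone=europe-west1-b
gcloud compute images export --image=recovery-image --destination-uri=gs://my-recovery-bucket/recovery.vmdk --export-format=vmdk
# Download the VMDK locally for the recovery tools mentioned above
gsutil cp gs://my-recovery-bucket/recovery.vmdk .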

Azure IoT Storage bind to device local storage

I would like to achieve "Give modules access to a device's local storage". I have tried almost every scenario and still couldn't get the data from the module to the host.
In the module, all data is stored under /app. I tried binding /app to the host location /etc/iotedge, and tried lots of other binding scenarios; each time it creates an 'edgeHub' folder and stores .sst files and logs, so I'm sure the bind is being initiated. But why doesn't the data appear on the host machine? The data is .jpg image files.
I recommend not binding to a folder under /etc/iotedge; bind under the home folder, for example.
I also recommend not binding /app inside the container, as some application runtime or executable is likely in that folder. It is better to use another folder.
You need to use a Docker bind mount instead of a Docker volume.
Example: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-store-data-blob
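In IoT Edge the bind is declared in the module's createOptions, but the distinction is easiest to see with plain Docker commands; a minimal sketch, where the host path /srv/module-images and the image name my-module are placeholders:
# Bind mount: the host directory itself is mounted into the container,
# so files written to /app/images land directly in /srv/module-images on the host
docker run --mount type=bind,source=/srv/module-images,target=/app/images my-module
# Named volume: data lives under Docker's own storage area
# (/var/lib/docker/volumes/...), not at a host path you chose
docker run --mount type=volume,source=module-data,target=/app/images my-module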

Copy local files to Azure Blob - long file names

I need to copy/sync a folder, containing numerous subfolders and files, from a local machine (Windows Server 2012) to our Azure Blob container. Some paths exceed 260 characters.
I attempted to use AzCopy (https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy), but got an exception with a long file name.
What are the options for copying files with pretty long folder/file names from a local PC to an Azure Blob container? Something like RoboCopy would work, but then I'd need to map a folder to my blob storage, and I'm not sure that's possible.
Azure Blob Storage doesn't have the concept of folders. There's just: container name + blob name (though a blob's name can contain separator characters like /, which makes it appear like a path).
And a container's name cannot exceed 63 characters (and must be lowercase). There's no getting around that. If you're trying to store your local server's path as the container name, and that path exceeds 63 characters, it's not going to work.
Azure File Shares (which are backed by Azure Storage) don't have this limitation, as they support standard file I/O operations and directory structures. If you take this route, you should be able to copy your folder structure as-is. There are a few differences:
File shares may be mounted (as an SMB share), allowing you to just copy your content over (e.g. xcopy or robocopy; see the sketch after this list)
You may make SDK/API calls to copy files (slightly different API)
A file share is limited to 5TB, with total 1000 IOPS across the share
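A rough sketch of that mount-and-copy route from the Windows server, where the storage account name, share name, key and local path are all placeholders:
REM Mount the file share over SMB
net use Z: \\mystorageacct.file.core.windows.net\myshare /u:AZURE\mystorageacct <storage-account-key>
REM Mirror the local folder into the share; robocopy handles long paths better than xcopy
robocopy D:\data Z:\data /MIR /Z /R:2 /W:5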

Neo4j Azure hosting and Database location

I know that we can use the VM Depot to get started with Neo4j on Azure, but one thing that is not clear is where we should physically store the DB files. I tried to look around on the net for any recommendations on where the physical files should be stored so that when a VM crashes or restarts, the data is not lost.
Can someone share their thoughts or point me to an address where some more details can be found on the dos and don'ts of Neo4j on Azure for a production environment?
Regards
Kiran
When you set up a Neo4j VM via VM Depot, that image, by default, configures the database files to reside within the same VM as the server itself. The location is specified in neo4j-server.properties. This lets you simply spin up the VM and start using Neo4j immediately.
However: you'll soon discover that your storage space is limited (I believe the VM instances are set up with a 127 GB disk). To work with larger databases, you'll need to attach an additional disk (or disks), each disk up to 1 TB in size. These disks, as well as the main VM disk, are backed by blob storage, meaning they're durable, persistent disks.
How you ultimately configure this is up to you, depending on the size of the database and its purpose. The only storage to avoid, if you need persistence, is the scratch disk provided (which is a locally-attached drive with no durability).
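For what it's worth, with today's Azure CLI attaching an empty data disk looks roughly like the following (the resource group, VM and disk names, and the size, are placeholders):
# Attach a new, empty 1023 GB data disk to the VM
az vm disk attach --resource-group my-rg --vm-name neo4j-vm --name neo4j-data --new --size-gb 1023
# Inside the VM: partition, format and mount the disk, then point the Neo4j
# data directory setting in neo4j-server.properties at the new mount point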
The documentation announcing that VM doesn't say. But when you install neo4j as a package onto other similar Linux systems (the VM in question is a Linux VM), the data usually goes into /var/lib/neo4j/data. Here's an example:
user@host:/var/lib/neo4j/data$ pwd
/var/lib/neo4j/data
user@host:/var/lib/neo4j/data$ ls
graph.db keystore log neo4j-service.pid README.txt rrd
user@host:/var/lib/neo4j/data$ cat README.txt
Neo4j Data
=======================================
This directory contains all live data managed by this server, including
database files, logs, and other "live" files.
The main directory you really have to have is the "graph.db" directory. That's going to contain the bulk of the data. May as well back up the entirety of this directory. Some of the files (like the .pid file and the README.txt) of course aren't needed.
Now, there's no guarantee that in the VM it's going to be /var/lib/neo4j/data, but it will be something very similar. What you're going to want is a directory whose name ends in .db, since that's the default for new Neo4j databases.
To narrow it down further, once you get that VM running, just run updatedb and then locate *.db | grep neo4j, and that's almost certain to find it quickly.
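Putting that together with the backup suggestion above, a minimal sketch (the data path is the one from the example listing; the service name is inferred from the neo4j-service.pid file and may differ on your image):
# Refresh the locate database and find the Neo4j data directory
sudo updatedb
locate '*.db' | grep neo4j
# Stop the server for a consistent copy, archive graph.db, then start it again
sudo service neo4j-service stop
sudo tar czf /tmp/neo4j-graph-db-backup.tar.gz /var/lib/neo4j/data/graph.db
sudo service neo4j-service start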

Fstab cache seems funky, filling up

We have an Amazon server with an S3 bucket mounted as a partition. I like to think the mount is working, but the directory specified by the use_cache directive is filling up very rapidly and is not shrinking back down. Is this normal?
The config in fstab is s3fs#filemanager /home/user/mounts/FileManager fuse user,use_cache=/home/user/tmp,allow_other,uid=NN,gid=NNN 0 0
Both the mounted directory and the cache are growing at the same rate. Am I doing it wrong?
From the documentation:
If enabled via "use_cache" option, s3fs automatically maintains a local cache of files in the folder specified by use_cache. Whenever s3fs needs to read or write a file on s3 it first downloads the entire file locally to the folder specified by use_cache and operates on it. When fuse release() is called, s3fs will re-upload the file to s3 if it has been changed.
The folder specified by use_cache is just a local cache. It can be deleted at any time. s3fs re-builds it on demand. Note: this directory grows unbounded and can fill up a file system dependent upon the bucket and reads to that bucket. Take precaution by using a quota system or routinely clearing the cache (or some other method).
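Since the documentation says the cache can be deleted at any time and is rebuilt on demand, one low-tech way to keep it bounded is a nightly cron job; a sketch assuming the cache path from your fstab entry:
# Crontab entry: every night at 03:00, delete cached files not accessed for a day
0 3 * * * find /home/user/tmp -type f -atime +1 -delete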
