I have a disk image (from dd).
Is it possible to save it to NFS (AWS EFS)?
Of course I can mount it as a loop device, but it contains over 1.5 TB of small files and cp or rsync is very slow.
I also tried AWS File Sync, but unfortunately I get an error: Input/output error.
Hosts:
HOST A: mounted dd image + NFS server
HOST B: host with AWS File Sync
Yes you can. Use EFS File Sync for the best performance:
https://aws.amazon.com/blogs/aws/efs-file-sync-faster-file-transfer-to-amazon-efs-file-systems/
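If you keep the setup you describe, HOST A only needs the image exposed read-only over NFS so the File Sync agent on HOST B can read it. A rough sketch, assuming the image holds a single filesystem and that the paths and the 10.0.0.0/24 subnet are placeholders for your real values:
sudo mkdir -p /srv/dd-image
sudo mount -o loop,ro /path/to/disk.img /srv/dd-image
# export the mounted image read-only to the File Sync agent's network
echo "/srv/dd-image 10.0.0.0/24(ro,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra
If the image is a whole-disk dump with a partition table, you would additionally need a loop offset (or kpartx) to reach the filesystem inside it.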
Related
I am using an Azure file share to sync large amounts of data between multiple machines. I followed the mounting docs to mount the file share on an Azure VM (running Ubuntu 20.04):
https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-linux?tabs=smb311
The Azure VM currently has a /data location backed by a 64 GB data drive that I selected as an additional option when the VM was created. I then mounted the file share under /data, mostly as a convenience so the file share directory sits in the same place as the local data. The file share mount is now located at /data/storage-account-name/fileshare-name.
When I run df, I can see the /data mount (filesystem /dev/sda1) and the /data/storage-account-name/fileshare-name mount (filesystem //storage-account-name.file.core.windows.net/fileshare-name). The two locations seem to be totally separate mounts, and everything is working as expected with the file share.
However, is it bad practice to mount the file share "on top" of the /data disk like this? Is it preferred to mount at /mnt or /media for any reason? Or is the mount location somewhat arbitrary?
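For context, the layout corresponds roughly to /etc/fstab entries like these (the UUID, credentials path, and mount options are placeholders, not my exact configuration):
UUID=aaaa-bbbb  /data  ext4  defaults  0 2
//storage-account-name.file.core.windows.net/fileshare-name  /data/storage-account-name/fileshare-name  cifs  credentials=/etc/smbcredentials/storage-account-name.cred,serverino,nosharesock  0 0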
I have a storage account with a file share mounted that contains nearly 300+ files. Now if I unmount it with the command below,
sudo umount /xyx/files
Then what is the command to mount it back? Is it
sudo mount /xyx/files ???
I initially mounted it from a Windows share to the Linux OS. Do I need to use the same command or the above mount command?
If I use the same command, will there be any loss of my files?
I tried to reproduce the same in my environment and mounted a file share from a storage account.
First, make sure your storage account is accessible from the public network.
I mounted a sample file share in the Azure storage account, and the sample files were mounted successfully.
To unmount the Azure file share, use the command below:
sudo umount /mnt/<yourfileshare>
If any files are open or any processes are running in the working directory, the file system cannot be unmounted.
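To mount it back, you can rerun the same CIFS mount you used originally; a minimal sketch (the storage account name, share name, and credentials file below are placeholders):
sudo mount -t cifs //<storage-account>.file.core.windows.net/<yourfileshare> /mnt/<yourfileshare> -o credentials=/etc/smbcredentials/<storage-account>.cred,serverino,nosharesock
Unmounting and remounting only affects the mount point on the VM, not the data stored in the share.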
To unmount a file share drive on Windows, you can use the command below:
net use <drive-letter> /delete
Once the unmount completes, the mount point is removed and you can no longer access the data on the storage account through that mount point. If files in the file share were deleted during that time and you had already enabled soft delete, you can turn on the 'Show deleted shares' option on the file share and use undelete to restore them.
Reference: Mount Azure Blob storage as a file system on Linux with BlobFuse | Microsoft Learn
I am uploading almost 7 TB of files and folders from my remote server to an S3 bucket, but I cannot see the files in the bucket. Only a few files that were copied successfully are visible on S3.
I have an EC2 server on which I have mounted an S3 bucket using this link.
On the remote server, I am using the following script. I have also tested this script and it works fine for small files:
rsync -uvPz --recursive -e "ssh -i /tmp/key.pem" /eb_bkup/OMCS_USB/* appadmin@10.118.33.124:/tmp/tmp/s3fs-demo/source/backups/eb/ >> /tmp/log.txt &
The log file I am generating shows the files being copied and all the relevant information like transfer speed, file name, etc. But in the S3 bucket I cannot see any file after the first one is copied.
Each file size is from 500MB to 25GB.
Why can't I see these files on S3?
Amazon S3 is an object storage service, not a filesystem. I recommend you use the AWS Command-Line Interface (CLI) to copy files rather than mounting S3 as a disk.
The AWS CLI includes an aws s3 sync command that is ideal for your purpose: it synchronizes files between two locations, so if something fails you can re-run it and it will not copy files that have already been copied.
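A minimal sketch, assuming a hypothetical bucket name and the source path from your script:
aws s3 sync /eb_bkup/OMCS_USB s3://my-bucket/backups/eb/
Because it skips objects that already exist with the same size and timestamp, re-running the same command after a failure simply resumes the transfer.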
The issue I was facing was that rsync, when copying files to the target EC2 instance, first creates a temporary file and then writes it out to the S3 bucket. Multiple rsync jobs were running and the local EBS volume on the EC2 server filled up, so rsync could not create its temporary files and just kept writing to the socket.
I have a question regarding file transfer from Amazon EFS to my local machine with a simple shell script. The manual procedure I follow is:
Copy the file from EFS to my Amazon EC2 instance using sudo cp
Copy from EC2 to my local machine using scp or FileZilla (drag and drop); see the sketch after these steps
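Roughly, those two steps look like this (the host name, key path, and file names are placeholders, not my real values):
# on the EC2 instance: copy from the EFS mount to local disk
sudo cp /mnt/efs/project/output.csv /home/ec2-user/output.csv
# on my local machine: pull the copy down over SSH
scp -i ~/keys/my-key.pem ec2-user@ec2-host.example.com:/home/ec2-user/output.csv ~/Downloads/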
Is there a way it can be done running a shell script in which I give two inputs: source file address and save destination directory?
Can the two steps be reduced to one, i.e. directly copying from EFS to the local machine?
You should be able to mount EFS on your local machine and access the remote file system locally.
http://docs.aws.amazon.com/efs/latest/ug/mounting-fs.html
With a mount in place, you can access and edit the remote files locally using your machine's resources.
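For example, the standard NFS mount from that guide looks roughly like this (the file system ID and region are placeholders, and your local machine needs network access to the EFS endpoint, e.g. over a VPN or Direct Connect):
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
Once mounted, a single cp from /mnt/efs to your destination directory replaces the two-step copy.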
While scp can work, you would need to keep the local and remote copies in sync all the time.
Hope it helps.
I want to mount a folder that is on another machine onto my Linux server. To do that I am using the following command:
mount -t nfs 192.xxx.x.xx:/opt/oracle /
which fails with the following error:
mount.nfs: access denied by server while mounting 192.xxx.x.xx:/opt/oracle
Does anyone know what's going on? I am new to Linux.
Depending on what distro you're using, you simply edit the /etc/exports file on the remote machine to export the directories you want, then start your NFS daemon.
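For example, a minimal export entry on the remote machine might look like this (the client subnet and options are assumptions):
/opt/oracle 192.xxx.x.0/24(rw,sync,no_subtree_check)
# re-read /etc/exports after editing
sudo exportfs -ra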
Then on the local PC, you mount it using the following command:
mount -t nfs {remote_pc_address}:/remote/dir /some/local/dir
Also, please try mounting onto a directory under your home directory (or another empty directory); as far as I know you can't mount directly onto the root (/) like that.
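For instance (the target directory is just an example):
sudo mkdir -p /mnt/oracle
sudo mount -t nfs 192.xxx.x.xx:/opt/oracle /mnt/oracle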
For more reference, find full configuration steps here.