Unable to mount disk to directory [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 years ago.
I'm getting this error while mounting a disk to a directory. Please let me know what I should do:
[root@ip-172-31-39-36 ec2-user]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 100G 0 disk
[root@ip-172-31-39-36 ec2-user]# mkdir filesystem
[root@ip-172-31-39-36 ec2-user]# mount /dev/xvdf /filesystem
mount: mount point /filesystem does not exist
[root@ip-172-31-39-36 ec2-user]# ls
filesystem
[root@ip-172-31-39-36 ec2-user]# mount /dev/xvdf /filesystem
mount: mount point /filesystem does not exist

You are creating a directory called filesystem under the current directory, not under the root directory (/). Either of the following fixes should work:
A. Make the filesystem directory under root
[root@ip-172-31-39-36 ec2-user]# mkdir /filesystem
[root@ip-172-31-39-36 ec2-user]# mount /dev/xvdf /filesystem
B. Use the filesystem directory created under the current directory as mount point
[root@ip-172-31-39-36 ec2-user]# mkdir filesystem
[root@ip-172-31-39-36 ec2-user]# mount /dev/xvdf filesystem
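The mix-up can be reproduced without any disk at all; this sketch (using only a scratch directory, no root needed) shows that a path without a leading slash is resolved relative to the current directory:

```shell
# mkdir without a leading slash creates the directory under $PWD, not under /:
cd "$(mktemp -d)"                 # scratch working directory
mkdir filesystem                  # creates $PWD/filesystem, NOT /filesystem
[ -d "$PWD/filesystem" ] && echo "relative: $PWD/filesystem exists"
[ -e /filesystem ] || echo "absolute: /filesystem was never created"
```

mount resolves its mount-point argument the same way, which is why `mount /dev/xvdf filesystem` works from inside the directory where `mkdir filesystem` was run, while `mount /dev/xvdf /filesystem` does not.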

Related

PLEASE HELP! I mounted the new HDD to existing folder (where another HDD has been mounted), and all files (in other HDD) have disappeared [closed]

Closed 2 days ago.
I recently bought a new 8 TB HDD (Barracuda) and tried to connect and mount it on my Ubuntu system.
However, I made a mistake and accidentally mounted the HDD onto an existing folder where another HDD was already mounted.
Specifically,
(Before buying new HDD) I had 1 SSD & 1 HDD (will denote as HDD1 from now on), where
SSD: /dev/sda ==> Mounted on /home/{username}/SSD via
mount /dev/sda /home/{username}/SSD
HDD1: /dev/sdc1 ==> Mounted on /home/{username}/HDD1 via
mount /dev/sdc1 /home/{username}/HDD1
After buying the new HDD, I connected it and intended to do:
HDD2: /dev/sdb1 ==> Mount on /home/{username}/HDD2
But what I actually did was:
mount /dev/sdb1 /home/{username}
--> Mounted /dev/sdb1 to /home/{username}
After running this command, all the files on HDD1 and the SSD appear to have been removed and overwritten by HDD2's files.
I have read a post saying that files are only shadowed when another filesystem is mounted over them, but I keep having trouble restoring these files, and can't even find them via
du -sh *
or
df -h
at root.
Is there any chance that this procedure overwrote the files on HDD1 and the SSD? Is there any way to restore the files? PLEASE HELP!!!!!
WHAT I HAVE TRIED
1.
sudo -i
sudo umount /home/{username}
but
error: target is busy
came up.
So I have killed all the processes running on /home/{username} via
fuser -ck /home/{username}
Now I am completely locked in a state where I can't open /home/{username}/HDD and only terminal is available.
2.
I tried df -h (as the root user, via sudo -i) and the following messages came up:
df: /home/{username}/SSD: Input/output error
df: /home/{username}/HDD1: Input/output error
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 13G 0% /dev
...
/dev/loop14 56M 56M 0 100% /snap/core18/2697
/dev/sdb1 7.3T 3.5T 3.5T 51% /home/{username}
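For what it's worth, mounting over a non-empty directory does not delete its contents; it only hides (shadows) them until the covering mount is removed. A minimal sketch, assuming a Linux system with unprivileged user namespaces enabled (the paths here are illustrative scratch directories, not the asker's):

```shell
# Demonstrate shadowing in a private mount namespace (no real root needed):
unshare --mount --map-root-user sh -eu -c '
  data=$(mktemp -d); other=$(mktemp -d)
  echo "original data" > "$data/file"   # file on the "old" filesystem
  mount --bind "$other" "$data"         # mount something else on top of it
  ls "$data"                            # "file" is now hidden (shadowed)
  umount "$data"                        # remove the covering mount
  cat "$data/file"                      # the original file is intact
'
```

By the same logic, unmounting /dev/sdb1 from /home/{username} should make the shadowed contents visible again, provided nothing wrote over the underlying filesystem in the meantime.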

write error disk full in EC2 Machine [closed]

Closed 5 years ago.
I have an EC2 Linux instance with some software installed.
I downloaded a new zip and was trying to unzip it.
I got this error: write error (disk full?). Continue? (y/n/^C) and answered n.
The zip is not corrupted and I can unzip it from other instances.
I changed the instance type from small to medium and then large. Nothing worked.
I ran df -h .
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 16G 56K 16G 1% /dev
tmpfs 16G 0 16G 0% /dev/shm
/dev/xvda1 9.8G 9.7G 0 100% /
I think /dev/xvda1 is the culprit. How can I increase its size?
What is /dev/xvda1?
It is not a matter of instance type. You must change the volume (EBS) size:
1. Go to the console and select that instance's EBS volume, click the Actions dropdown menu, then click Modify Volume (a form will appear with the current volume size; increase it).
2. Remove a few kilobytes so that step (3) can run, e.g. rm -rf /tmp/*.
3. Grow/expand your filesystem:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
NOTES:
Check step (1) with the lsblk command and step (3) with df -h.
Scale your instance back down before receiving a huge bill at the end of the month 😅 (leave it small, as it was).
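One assumption worth checking before step (3): resize2fs only grows ext2/3/4 filesystems. On Amazon Linux 2, for example, the root filesystem is typically XFS, which is grown with xfs_growfs instead. A quick check (findmnt is part of util-linux):

```shell
# Print the root filesystem type to decide which grow tool applies:
findmnt -n -o FSTYPE /        # e.g. "ext4" or "xfs"
# ext4: sudo resize2fs /dev/xvda1
# xfs:  sudo xfs_growfs /
```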

Linux and Hadoop : Mounting a disk and increasing the cluster capacity [closed]

Closed 7 years ago.
First of all, I'm a total noob at Hadoop and Linux. I have a cluster of five nodes which, on startup, shows each node's capacity as only 46.6 GB, while each machine has around 500 GB of space that I don't know how to allocate to these nodes.
(1) Do I have to change the datanode and namenode file size (I checked these and they show the same space remaining as in the Datanode information tab)? If so, how should I do that?
(2) Also, this 500 GB disk is only shown when I run lsblk and not when I run df -H. Does that mean it's not mounted? These are the results of the commands. Can someone explain what this means?
[hadoop@hdp1 hadoop]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 50G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 49.5G 0 part
  ├─VolGroup-lv_root (dm-0) 253:0 0 47.6G 0 lvm /
  └─VolGroup-lv_swap (dm-1) 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 512G 0 disk
[hadoop@hdp1 hadoop]$ sudo df -H
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
51G 6.7G 41G 15% /
tmpfs 17G 14M 17G 1% /dev/shm
/dev/sda1 500M 163M 311M 35% /boot
Please help. Thanks in advance.
First, can someone help me understand why it's showing different disks, what this means, and where they reside? I can't seem to figure it out.
You are right. Your second disk (sdb) is not mounted anywhere. If you are going to dedicate the whole disk to hadoop data, here is how you should format and mount it:
Format your disk:
mkfs.ext4 -m1 -O dir_index,extent,sparse_super /dev/sdb
For mounting edit the file /etc/fstab. Add this line:
/dev/sdb /hadoop/disk0 ext4 noatime 1 2
After that, create the directory /hadoop/disk0 (it doesn't have to be named like that; you could use any directory of your choice):
mkdir -p /hadoop/disk0
Now you are ready to mount the disk:
mount -a
Finally, you should let Hadoop know that you want to use this disk as Hadoop storage. Your /etc/hadoop/conf/hdfs-site.xml should contain these config parameters:
<property><name>dfs.name.dir</name><value>/hadoop/disk0/nn</value></property>
<property><name>dfs.data.dir</name><value>/hadoop/disk0/dn</value></property>
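Before formatting, it can be worth double-checking that sdb really carries no filesystem and is not mounted anywhere (both commands are from util-linux; /dev/sdb is the disk from the question):

```shell
# The FSTYPE and MOUNTPOINT columns show whether each disk is formatted
# and where (if anywhere) it is mounted:
lsblk -f
findmnt -n /dev/sdb || echo "/dev/sdb is not mounted"
```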

File last access time. How to mount root filesystem with atime,norelatime [closed]

Closed 8 years ago.
I've installed a simple LAMP system based on Debian 7.2.0 (32-bit). On my server I want to know when each of my PHP files was last used (accessed) by the web server. When I check the last access times of the PHP files (with ls -alu), they are wrong.
I've found that this is because of the relatime option used when mounting the root filesystem. I've tried editing my /etc/fstab to put the norelatime,atime options there, but it does not work. My current /etc/fstab is:
UUID=d4bb10f1-1428-4ee4-916c-55e800263c3f / ext4 atime,norelatime,errors=remount-ro 0 1
UUID=6db7a3c7-6ff9-43ac-b959-5175039bb84b none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto 0 0
After a reboot, when I type mount, I get:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=127786,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=103240k,mode=755)
/dev/disk/by-uuid/d4bb10f1-1428-4ee4-916c-55e800263c3f on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=352700k)
All the partitions have relatime option. Any help?
http://www.linux-archive.org/fedora-development/120241-why-relatime-immune-remount.html and https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/582799 indicate that this does not work on Fedora or Ubuntu, and presumably the same is true for Debian. To quote from the first linked article:
You have to:
echo 0 > /proc/sys/fs/default_relatime
and then mount/remount with 'atime' and it should work.
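Whichever route you take, it's worth verifying the behaviour directly instead of trusting the mount flags. A small sketch using a scratch file (note that even under relatime, a read updates atime when atime is not newer than mtime, so this check mainly distinguishes noatime from the rest):

```shell
# Write a scratch file, then read it and see whether its atime moved:
f=$(mktemp)
echo "some data" > "$f"            # writing sets mtime (and atime)
before=$(stat -c %X "$f")          # atime as a Unix timestamp
sleep 1
cat "$f" > /dev/null               # read the file's contents
after=$(stat -c %X "$f")
if [ "$after" -gt "$before" ]; then
  echo "atime updated"
else
  echo "atime NOT updated (noatime?)"
fi
rm -f "$f"
```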

Cannot write to mounted (external) HDD [closed]

Closed 9 years ago.
I have tried mounting my external (USB) HDD, but even though the permissions match (between the user and the mounted disk), I cannot write, even as root. I tried mounting using pmount and "normal" mount.
System info:
Linux b2 2.6.39.4-4 #1 Fri Aug 19 14:41:59 CEST 2011 ppc GNU/Linux
User info:
zero@b2:~$ id -a
uid=1001(zero) gid=100(users) groups=100(users),46(plugdev)
pmount test:
zero@b2:~$ pmount /dev/sdb1 HDD
zero@b2:~$ mount
...
/dev/sdb1 on /media/HDD type ntfs (rw,noexec,nosuid,nodev,uid=1001,gid=100,umask=077,nls=utf8)
zero@b2:~$ stat /media/HDD/
File: `/media/HDD/'
Size: 4096 Blocks: 8 IO Block: 512 directory
Device: 811h/2065d Inode: 5 Links: 1
Access: (0700/drwx------) Uid: ( 1001/ zero) Gid: ( 100/ users)
zero@b2:~$ touch /media/HDD/testtouch
touch: cannot touch `/media/HDD/testtouch': Permission denied
I also cannot add any new directories.
Interestingly enough, I CAN edit and save existing files (but not copy them, etc.).
test writing to existing file:
root@b2:/home/zero# mount -t ntfs /dev/sdb1 -o umask=022,gid=100,uid=1001 TEST/
root@b2:/home/zero# mount -l
...
/dev/sdb1 on /home/zero/TEST type ntfs (rw,umask=022,gid=100,uid=1001)
zero@b2:~$ cat TEST/test
zero@b2:~$ echo "writing text" > TEST/test
zero@b2:~$ cat TEST/test
writing text
Any ideas?
The Linux kernel's built-in NTFS driver has historically offered only limited (or read-only) write support. Use ntfs-3g (FUSE) if you need reliable read/write access:
sudo apt-get install ntfs-3g
sudo mount -t ntfs-3g /dev/sdb1 /media/HDD
sudo touch /media/HDD/I_can_write,_my_friends
NTFS-3G homepage:
http://www.tuxera.com/community/ntfs-3g-download/
More on NTFS support in Debian:
https://wiki.debian.org/NTFS
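After remounting with ntfs-3g, you can confirm which driver actually handled the mount; findmnt is part of util-linux, and /media/HDD is the mount point from the question:

```shell
# "fuseblk" as the filesystem type means ntfs-3g (FUSE) is in use;
# plain "ntfs" means the in-kernel driver handled the mount.
findmnt -n -o FSTYPE /media/HDD
```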
