I have been trying to follow instructions on how to increase the size of the /tmp directory on our VPS from 512 MB to 3 GB. I successfully changed the tmpdsksize variable in /scripts/securetmp to 3072000 (the value is in KB), saved it with the vi editor, and then entered these lines at the command line:
/etc/init.d/cpanel stop
/etc/init.d/httpd stop
/etc/init.d/lsws stop
/etc/init.d/mysql stop
umount -l /tmp
umount -l /var/tmp
mv /usr/tmpDSK /usr/tmpDSK_back
/scripts/securetmp
/etc/init.d/cpanel start
/etc/init.d/httpd start
/etc/init.d/lsws start
/etc/init.d/mysql start
This is meant to recreate the /tmp directory on the VPS.
However, this did not work and I now have no /tmp directory. The VPS is working, and the problem that led me to try to increase the /tmp size has since been fixed (it was caused by running a large SELECT query on the database). But I am concerned about the missing /tmp directory, as that was not my intention. Is it OK to run without one?
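For reference, before running the script again I could have double-checked the edit with something like this (a minimal sketch; the exact variable line may differ between cPanel versions):
grep -n tmpdsksize /scripts/securetmp   # should show the new 3072000 (KB) value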
The failure to recreate it seems to come down to running /scripts/securetmp.
When I run it I get errors, so my /tmp directory is not recreated. The errors are:
root [~]# /scripts/securetmp
/scripts/securetmp: line 1: !/usr/bin/perl: No such file or directory
/scripts/securetmp: line 7: syntax error near unexpected token `}'
/scripts/securetmp: line 7: `BEGIN { unshift #INC, '/usr/local/cpanel'; }'
Any ideas where I am going wrong? I don't have a ton of Linux experience; it's a case of Google and learn. I am accessing the VPS remotely using PuTTY. I have Googled around a lot but can't find much information on /scripts/securetmp errors; everywhere that talks about increasing the /tmp size just assumes that running that script will work. I did not modify lines 1 and 7 when changing the size.
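For what it's worth, those are bash errors, which suggests the shell, not perl, is interpreting the script; that happens when the shebang on line 1 is damaged. A quick way to inspect the two lines the errors point at (a sketch):
sed -n '1p;7p' /scripts/securetmp
On an intact script, line 1 should be a perl shebang (#!/usr/bin/perl, per the error text) and line 7 should reference @INC rather than #INC; if they differ, restore the script from a backup copy rather than re-editing it.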
The VPS is running CentOS 6.3.
Running /scripts/securetmp to increase my tmpDSK size didn't work for me either: the script simply deleted the partition, so I was left with no tmpDSK!
This is on a Xen VPS with WHM/cPanel.
After many hours of persistence, I found this post:
How to increase the size of disk space /tmp (/usr/tmpDSK) partition in linux server
The only thing I had to change was:
1.) Stop MySql service and process kill the tailwatchd process.
[root@server ~]# /etc/init.d/mysqld stop
[root@server ~]# kill -9 2522
To:
1.) Stop MySql service and process kill the tailwatchd process.
[root@server ~]# /etc/init.d/cpanel stop
[root@server ~]# /etc/init.d/mysql stop
(To start these services again when you've finished, change the stop to start.)
Also, at step No. 11:
11.) Edit the fstab and replace the /tmp entry line with:
/usr/tmpDSK /tmp ext3 loop,noexec,nosuid,rw 0 0
Here is how to access and edit that pesky /etc/fstab over SSH:
To make sure this partition is mounted automatically after every reboot, edit /etc/fstab and replace the /tmp entry line with the following one.
/usr/temp-disk /tmp ext3 rw,noexec,nosuid,loop 0 0
[root@server ~]# pico -w /etc/fstab
You should see something like this:
code:
/dev/hda3 / ext3 defaults,usrquota 1 1
/dev/hda1 /boot ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
none /proc proc defaults 0 0
none /dev/shm tmpfs defaults 0 0
/dev/hda2 swap swap defaults 0 0
At the bottom add
code:
/usr/temp-disk /tmp ext3 rw,noexec,nosuid,loop 0 0
While we are at it, we are going to secure /dev/shm. Look for the mount line for /dev/shm and change it to the following:
none /dev/shm tmpfs noexec,nosuid 0 0
Unmount and remount /dev/shm for the changes to take effect.
[root@server ~]# umount /dev/shm
[root@server ~]# mount /dev/shm
Hit Ctrl+X to exit, then y to save.
Well, I didn't quite do that either.
Here is my /etc/fstab:
/dev/sda1 / ext3 defaults,usrquota,grpquota 1 1
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs noexec,nosuid 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
/dev/sda2 swap swap defaults 0 0
/usr/tmpDSK /tmp ext3 loop,noexec,nosuid,rw 0 0
/tmp /var/tmp ext3 defaults,bind,noauto 0 0
I already had the /usr/tmpDSK line, so I just replaced that line with the one recommended, leaving the bottom /tmp line intact.
Everything now works great.
My 1 GB tmpDSK, which was 85% full, has now been increased to 2 GB and is only 7% full.
I also didn't restore the contents of my /tmp backup (it was over-full of crud).
Best to check first, though, that everything is still working OK - you might have something in that previous /tmp backup that's needed.
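If you want to confirm the new size and mount options after following the steps above, a quick check (a minimal sketch):
df -h /tmp             # should report the new size
mount | grep ' /tmp '  # should list loop,noexec,nosuid,rw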
Related
I had a fully working Amazon Linux 2 instance running on the t2.small instance type. I wanted to try changing the instance to t2.medium to test. As I have done in the past, I simply shut down the instance, changed the type, and then restarted the instance.
After the restart, Apache was down and my sites were unreachable. I was able to log in to the instance, and when trying to start Apache I discovered that the root drive was now read-only, which prevented it from starting, among other things. Through some troubleshooting I was able to get the drive remounted and things running as normal, but every time I restart the instance it goes back to read-only, and I have to perform the same fix each time to get it back to normal. I believe it's an issue with the root device UUID in my /etc/fstab not matching the current root device UUID. I never changed any of the attached EBS volumes, so I'm not sure how the change occurred.
Some relevant info:
$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
To discover the UUID mismatch/fix, I performed the following:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 50G 0 disk
└─xvda1 202:1 0 50G 0 part /
xvdb 202:16 0 50G 0 disk
xvdf 202:80 0 50G 0 disk
└─xvdf1 202:81 0 50G 0 part
$ sudo blkid
/dev/xvda1: LABEL="/" UUID="2a7884f1-a23b-49a0-8693-ae82c155e5af" TYPE="xfs" PARTLABEL="Linux" PARTUUID="4d1e3134-c9e4-456d-a253-374c91394e99"
/dev/xvdf1: LABEL="/" UUID="a8346192-0f62-444c-9cd0-655ed0d49a8b" TYPE="ext4" PARTLABEL="Linux" PARTUUID="2688b30d-29ef-424f-9196-05ec7e4a0d80"
I had read that a possible fix would be to perform the following:
$ sudo mount -o remount,rw /
mount: /: can't find UUID=-1a7884f1-a23b-49a0-8693-ae82c155e5af.
Obviously, that didn't work. So I looked at my /etc/fstab:
#
UUID=-1a7884f1-a23b-49a0-8693-ae82c155e5af / xfs defaults,noatime 1 1
/swapfile swap swap defaults 0 0
Seeing this mismatch, I tried:
sudo mount -o remount,nouuid /
which worked: it made the root writable and I was able to get services back up and running.
So this is how I've come to believe it has to do with the UUID mismatch in fstab.
My Questions:
Should I change the entry in /etc/fstab to match the current UUID, 2a7884f1-a23b-49a0-8693-ae82c155e5af? (See the sketch after these questions.)
Any idea why this happened and how I can prevent it from happening in the future?
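On the first question, a minimal sketch of updating the entry, assuming the blkid output above is authoritative (back up fstab first):
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i 's|UUID=-1a7884f1-a23b-49a0-8693-ae82c155e5af|UUID=2a7884f1-a23b-49a0-8693-ae82c155e5af|' /etc/fstab
sudo mount -a   # basic sanity check of the edited fstab before risking a reboot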
This is all done as the root user.
The backup script at /usr/share/perl5/PVE/VZDump/LXC.pm sets a default mount point:
my $default_mount_point = "/mnt/vzsnap0";
But regardless of whether I use the GUI or the command line I get the following error:
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0:
Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
And lines 160-161 in that script are:
my $rootdir = $default_mount_point;
mkpath $rootdir;
After the installation, before I created any images or did any backups, I set up two things:
(1) SSHFS mount for /mnt/backups
(2) Added all other drives as Linux LVM
What I did for the drive addition was as simple as:
pvcreate /dev/sdb1
pvcreate /dev/sdc1
pvcreate /dev/sdd1
pvcreate /dev/sde1
vgextend pve /dev/sdb1
vgextend pve /dev/sdc1
vgextend pve /dev/sdd1
vgextend pve /dev/sde1
lvextend pve/data /dev/sdb1
lvextend pve/data /dev/sdc1
lvextend pve/data /dev/sdd1
lvextend pve/data /dev/sde1
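To sanity-check the expansion afterwards (a sketch; output abbreviated):
pvs            # the four new PVs should be listed under VG pve
vgs pve        # free space should reflect the added disks
lvs pve/data   # the data LV should show its extended size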
For the SSHFS instructions see my blog post on it: https://6ftdan.com/allyourdev/2018/02/04/proxmox-a-vm-server-for-your-home/
Here are the relevant filesystem and directory-permission files and details.
cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 9.0M 1.6G 1% /run
/dev/mapper/pve-root 37G 8.0G 27G 24% /
tmpfs 7.9G 43M 7.8G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/fuse 30M 20K 30M 1% /etc/pve
sshfs#10.0.0.10:/mnt/raid/proxmox_backup 1.4T 725G 672G 52% /mnt/backups
tmpfs 1.6G 0 1.6G 0% /run/user/0
ls -dla /mnt
drwxr-xr-x 3 root root 0 Aug 12 20:10 /mnt
ls /mnt
backups
ls -dla /mnt/backups
drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups
The command that I want to succeed is:
vzdump 103 --compress lzo --node ProxMox --storage backup --remove 0 --mode snapshot
For the record the container image is only 8GB in size.
Cloning containers works, and snapshots work too.
Q & A
Q) How are you running the perl script?
A) Through the GUI you click on Backup now, then select your storage (I have 'backups' and 'local', and both produce this error), then select the mode for the container (Snapshot, Suspend, and Stop each produce the same error), then the compression type (none, LZO, and gzip each produce the same error). Once all that is set, you click Backup and get the following output.
INFO: starting new backup job: vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0
INFO: Starting Backup of VM 103 (lxc)
INFO: Backup started at 2019-08-18 16:21:11
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: Passport
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0: Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
INFO: Failed at 2019-08-18 16:21:11
INFO: Backup job finished with errors
TASK ERROR: job errors
From this you can see that the command is vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0. I've also tried logging in via an SSH shell and running this command, and I get the same error.
Q) It could be that the directory's "immutable" attribute is set. Try lsattr / and see if /mnt has the lower-case "i" attribute set on it.
A) root@ProxMox:~# lsattr /
--------------e---- /tmp
--------------e---- /opt
--------------e---- /boot
lsattr: Inappropriate ioctl for device While reading flags on /sys
--------------e---- /lost+found
lsattr: Operation not supported While reading flags on /sbin
--------------e---- /media
--------------e---- /etc
--------------e---- /srv
--------------e---- /usr
lsattr: Operation not supported While reading flags on /libx32
lsattr: Operation not supported While reading flags on /bin
lsattr: Operation not supported While reading flags on /lib
lsattr: Inappropriate ioctl for device While reading flags on /proc
--------------e---- /root
--------------e---- /var
--------------e---- /home
lsattr: Inappropriate ioctl for device While reading flags on /dev
lsattr: Inappropriate ioctl for device While reading flags on /mnt
lsattr: Operation not supported While reading flags on /lib32
lsattr: Operation not supported While reading flags on /lib64
lsattr: Inappropriate ioctl for device While reading flags on /run
Q) Can you manually create /mnt/vzsnap0 without any issues?
A) root@ProxMox:~# mkdir /mnt/vzsnap0
mkdir: cannot create directory ‘/mnt/vzsnap0’: Permission denied
Q) Can you replicate it in a clean VM?
A) I don't know. I don't have an extra system to try it on, and I need the containers I have on it. Trying it within a VM in ProxMox… I'm not sure. I suppose I could try, but I'd really rather not just yet. Maybe if all else fails.
Q) If you look at drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups, it looks like there is a user with ID 1001 which has access to the backups, so not even root will be able to write. You need to check why it is 1001 and which group is represented by 1002. Then you can add root, as well as the user under which the GUI runs, to the group with ID 1002.
A) I have no problem writing to the /mnt/backups directory. I just did cd /mnt/backups; mkdir test and it was successful.
From the message
mkdir /mnt/vzsnap0: Permission denied
it is obvious the problem is the permissions on the /mnt directory.
It could be that the directory's "immutable" attribute is set.
Try lsattr / and see if /mnt has the lower-case "i" attribute set on it.
As a reference:
The lower-case i in lsattr output indicates that the file or directory is set as immutable: even root must clear this attribute first before making any changes to it. With root access, you should be able to remove this with chattr -i /mnt, but there is probably a reason why this was done in the first place; you should find out what the reason was and whether or not it's still applicable before removing it. There may be security implications.
So, if this is the case, try:
chattr -i /mnt
to remove it.
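Note that lsattr / lists the attributes of the entries inside /, as the output above shows; to query the /mnt directory inode itself, the -d flag is more direct:
lsattr -d /mnt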
References
lsattr output
According to the inode flags (attributes) manual page:
FS_IMMUTABLE_FL 'i':
The file is immutable: no changes are permitted to the file contents or metadata (permissions, timestamps, ownership, link count and so on). (This restriction applies even to the superuser.) Only a privileged process (CAP_LINUX_IMMUTABLE) can set or clear this attribute.
As long as the bounty is still up, I'll give it to a legitimate answer that fixes the problem described here.
What I'm writing here for you all is a workaround I've thought of, which works. Note: it is very slow.
Since I am able to write to the /mnt/backups directory, which lives on another system on the network, I went ahead and changed the Perl script to point to /mnt/backups/vzsnap0 instead of /mnt/vzsnap0.
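For reference, the change amounts to one line in the module; a sketch of applying it from the shell (back the file up first, and note that a PVE package update will overwrite it):
cp /usr/share/perl5/PVE/VZDump/LXC.pm /usr/share/perl5/PVE/VZDump/LXC.pm.bak
sed -i 's|/mnt/vzsnap0|/mnt/backups/vzsnap0|' /usr/share/perl5/PVE/VZDump/LXC.pm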
The bounty remains for anyone who can get the /mnt directory to work as the mount path, so that vzsnap0 is successfully mounted for the backup script.
1)
Perhaps your "/mnt/vzsnap0" is mounted read-only?
There may be a hint in your:
/dev/pve/root / ext4 errors=remount-ro 0 1
'errors=remount-ro' means that on an error the partition is remounted read-only. Perhaps this applies to your mounted filesystem as well.
Can you try remounting the drive as in the following link? https://askubuntu.com/questions/175739/how-do-i-remount-a-filesystem-as-read-write
And if that succeeds, manually create the directory afterwards?
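Something along these lines (a sketch):
mount -o remount,rw /
mkdir /mnt/vzsnap0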
2) If that didn't help:
https://www.linuxquestions.org/questions/linux-security-4/mkdir-throws-permission-denied-error-in-a-directoy-even-with-root-ownership-and-777-permission-4175424944/
There, someone remarked:
What is the filesystem for the partition that contains the directory.[?]
Double check the permissions of the directory, or whether it's a
symbolic link to another directory. If the directory is an NFS mount,
rootsquash can prevent writing by root.
Check for attributes (lsattr). Check for ACLs (getfacl). Check for
selinux restrictions. (ls -Z)
If the filesystem is corrupt, it might be initially mounted RW but
when you try to write to a bad area, change to RO.
Great, it turns out this is a pretty long-standing issue with Ubuntu Make that many people have faced.
I saw a workaround mentioned by an Ubuntu developer in the above link.
Just follow these steps:
sudo -s
unset SUDO_UID
unset SUDO_GID
Then run umake to install your application as normal.
You should now be able to install to any directory you want. It works flawlessly for me.
Try ls -laZ /mnt to review the security context, in case SELinux is enabled; relabeling might be required then. errors=remount-ro should also be investigated (however, it is rather unlikely lsattr would fail unless the /mnt inode itself is corrupted). Creating a new directory inode for these mount points might be worth a try; if it works, one can swap them.
Just change /mnt/backups to /mnt/sshfs/backups,
and vzdump will work.
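A sketch of what that could look like, assuming the SSHFS share from the df output in the question:
mkdir -p /mnt/sshfs/backups
fusermount -u /mnt/backups   # detach the old mount point
sshfs 10.0.0.10:/mnt/raid/proxmox_backup /mnt/sshfs/backups
# then change the path of the 'backups' storage in /etc/pve/storage.cfg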
I have an AWS instance.
Suddenly I can see this new mount option, stripe=32736:
/dev/xvdb on /var/lib/elasticsearch0 type ext4 (rw,relatime,stripe=32736,data=ordered)
/dev/xvdc on /var/lib/elasticsearch1 type ext4 (rw,relatime,stripe=32736,data=ordered)
But this option does not appear in fstab:
root@thorin:~# cat /etc/fstab
# HEADER: This file was autogenerated at 2017-09-12 16:38:10 +0200
# HEADER: by puppet. While it can still be managed manually, it
# HEADER: is definitely not recommended.
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /var/lib/elasticsearch0 ext4 defaults 0 0
/dev/xvdc /var/lib/elasticsearch1 ext4 defaults 0 0
11.0.0.228://el_backup /srv/backup/el nfs4 tcp,nolock,rsize=32768,wsize=32768,intr,noatime,actimeo=3 0 0
So, I have two questions.
Why has this happened?
How can I fix it?
Some time ago I fixed this on another machine; it was something related to RAID headers. There was some tool to set the stripe to 0, but I can't find it now.
I can answer the second question:
tune2fs -f -E stride=0 /dev/xvdb
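To confirm it took effect (a sketch; remount so the live mount options refresh, stopping whatever uses the mount first):
tune2fs -l /dev/xvdb | grep -iE 'stride|stripe'   # RAID stride/stripe width in the superblock
umount /var/lib/elasticsearch0 && mount /var/lib/elasticsearch0
mount | grep elasticsearch0                       # stripe= should no longer appear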
I created a d2.xlarge EC2 instance on AWS, which returns the following output:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 1.8T 0 disk
xvdc 202:32 0 1.8T 0 disk
xvdd 202:48 0 1.8T 0 disk
The default /etc/fstab looks like this:
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
Now I make an ext4 filesystem on /dev/xvdc:
$ sudo mkfs -t ext4 /dev/xvdc
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 488375808 4k blocks and 122101760 inodes
Filesystem UUID: 2391499d-c66a-442f-b9ff-a994be3111f8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done
blkid returns a UUID for the filesystem:
$ sudo blkid /dev/xvdc
/dev/xvdc: UUID="2391499d-c66a-442f-b9ff-a994be3111f8" TYPE="ext4"
Then, I mount it on /mnt5
$ sudo mkdir -p /mnt5
$ sudo mount /dev/xvdc /mnt5
It gets successfully mounted, and up to that point things work fine.
Now I reboot the machine (first stop it, then start it) and SSH back into it.
I do
$ sudo blkid /dev/xvdc
It returns nothing. Where did the filesystem I created before the reboot go? I assumed that a filesystem, once created, would persist across a reboot cycle.
Am I missing something to mount a partition on an AWS EC2 instance?
I followed http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html and it does not seem to work as described above.
You need to read up on EC2 ephemeral instance store volumes. When you stop an instance with this type of volume, the data on the volume is lost. The data survives a reboot/restart operation, but if you do a stop followed later by a start, it is gone; a stop followed by a start is not considered a "reboot" on EC2. When you stop an instance it is completely shut down, and when you start it later it is essentially recreated on different backing hardware.
In other words, what you describe isn't an issue; it is expected behavior. You need to be very aware of how these volumes work before depending on them.
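If you still want the ephemeral disk mounted whenever it happens to exist, an entry like the following avoids boot failures after a stop/start (a sketch reusing /dev/xvdc and /mnt5 from the question; the filesystem itself must be recreated after every stop/start):
/dev/xvdc /mnt5 ext4 defaults,nofail 0 2
sudo mkfs -t ext4 /dev/xvdc   # required again after each stop/start
sudo mount /mnt5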
I use the following command to mount /dev/sdb1 on the /storage directory:
mount -t ext3 /dev/sdb1 /storage
After running the above command, I can see it with df -h:
/dev/sdb1 147G 188M 140G 1% /storage
But after I restart the server it disappears, and I have to run the mount command again.
Is there a command that can make the mount persist across server restarts?
Add the following line to your /etc/fstab file:
# device name mount point fs-type options dump-freq pass-num
/dev/sdb1 /storage ext3 defaults 0 0
You can run (as root):
echo "/dev/sdb1 /storage ext3 defaults 0 0" >> /etc/fstab
In short, you need to add the relevant line to /etc/fstab.
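After editing, you can verify the entry without rebooting:
mount -a        # mounts everything in fstab that isn't already mounted
df -h /storage  # should show /dev/sdb1 mounted again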