How can I move the SVN root repository to a different file system? - linux

I'm trying to move my SVN root repository to a different file system because the file system it currently lives on is running low on space. I've been searching for the last two days but I'm still not clear on how to do it. My environment configuration is as follows:
OS : Centos 6
svn version: svn, version 1.6.11 (r934486)
root directory: /var/www/svn/
File System Details:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 8.2G 1.2G 88% /
tmpfs 5.7G 72K 5.7G 1% /dev/shm
/dev/sda2 9.9G 8.2G 1.3G 87% /usr
/dev/sda3 9.9G 8.8G 557M 95% /var
/dev/sda6 422G 61G 339G 16% /data
I want to move the SVN root repository from the "/var" file system to "/data".
Please help me: what is the command to change the SVN root repository?
Thanks in advance.

From the filesystem's point of view, a repository on the server is just a directory (a subdirectory of /var/www/svn/, or /var/www/svn/ itself - it isn't clear from the description, which uses the terminology loosely). To change its physical location, just mv SRC DEST and afterwards change the configuration of whichever Subversion server you use (if any): the -r option for svnserve, or SVNPath|SVNParentPath for Apache.
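A minimal sketch of that procedure, assuming the repository tree is /var/www/svn and it is served by svnserve (the service name and target paths here are assumptions; for Apache, update SVNPath/SVNParentPath in the vhost config instead):

```shell
# Stop anything that writes to the repository
service svnserve stop

# Copy the tree, preserving permissions, ownership and timestamps
cp -a /var/www/svn /data/svn

# Restart the server rooted at the new location
svnserve -d -r /data/svn

# Once clients have verified the new location, reclaim the old space
rm -rf /var/www/svn
```

Alternatively, after moving, a symlink (ln -s /data/svn /var/www/svn) keeps existing URLs and configurations working unchanged.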

Related

How should I Merge mounted drive(Volume) with root drive in AWS EC2

I have mounted 600 GB of extra space on my AWS EC2 instance at /data. But as I started using Jenkins, I realized that Jenkins is not using any of that extra space, and now I am left with only 1.5 GB of storage.
Is there any way to merge the extra storage with the root storage?
Result of df -h command
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.9G 68K 7.9G 1% /dev
tmpfs 7.9G 4.0K 7.9G 1% /dev/shm
/dev/xvda1 7.9G 6.3G 1.5G 82% /
/dev/xvdf 600G 1.8G 598G 1% /data
I want to merge /dev/xvda1 and /dev/xvdf.
Is it even possible?
Edit: Someone suggested moving my Jenkins to the new drive. If it will not hamper my current work, then I think it will be a good solution. Any opinions on this?
Quick way:
You can stop your Jenkins instance and create an AMI image from it.
Then launch a new EC2 instance based on this image, with larger root storage, directly.
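Alternatively, the suggestion from the question's edit - moving Jenkins onto the big volume - can be sketched like this (assuming the default JENKINS_HOME of /var/lib/jenkins; the paths and service name are assumptions, adjust to your install):

```shell
# Stop Jenkins so files aren't changing underneath the copy
sudo service jenkins stop

# Copy the Jenkins home to the large volume, preserving metadata
sudo cp -a /var/lib/jenkins /data/jenkins

# Replace the old location with a symlink to the new one
sudo mv /var/lib/jenkins /var/lib/jenkins.bak
sudo ln -s /data/jenkins /var/lib/jenkins

sudo service jenkins start
# After verifying builds and job history, remove the backup:
#   sudo rm -rf /var/lib/jenkins.bak
```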

Mounting a directory containing multiple hard drive mount points using NFS

I have 4 hard drives mounted in a directory:
/dev/sda1 11T 62M 11T 1% /all-hdds/hdd1
/dev/sdb1 11T 62M 11T 1% /all-hdds/hdd2
/dev/sdc1 11T 62M 11T 1% /all-hdds/hdd3
/dev/sdd1 11T 62M 11T 1% /all-hdds/hdd4
Is it possible to export all-hdds as a single NFS export and mount it on other clients? I tried it, and I can see the hdd1, hdd2, etc. directories on the client side, but when I create files inside them they don't show up on the host, so I think maybe I'm hitting some sort of limitation?
Let's assume that /all-hdds itself is mounted from /dev/sde1.
When /all-hdds/hdd1 is not mounted, /all-hdds (on sde1) still has a directory hdd1, which is empty. When you mount sda1, you mount the root of the filesystem on sda1 onto /all-hdds/hdd1.
But when you export /all-hdds over NFS, the export is confined to the filesystem on /dev/sde1. So if on the client you mount /all-hdds onto e.g. /client-mountpoint and then create a file /client-mountpoint/hdd1/test, what actually happens is that the file /hdd1/test is created on the filesystem mounted at /all-hdds, i.e. stored on /dev/sde1.
Of course, you don't see that file on the server, because it is hidden by the filesystem on sda1, which is mounted on top of /all-hdds/hdd1.
What this means is that you need to export all the filesystems, and explicitly tell the server that you want to export them as a tree.
That entails setting fsid=0 mount option on the root of the exported tree, and setting the nohide option on the sub-exports.
The full guide is here (the installation part is Ubuntu-specific, the export part isn't).
Do note that this will mean the client mounts yourserver:/ rather than yourserver:/all-hdds - NFSv4 only has one root.
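A minimal /etc/exports sketch of that layout (the client subnet and the rw/sync/no_subtree_check options are assumptions; fsid=0 and nohide are the parts the answer calls for):

```
# Root of the exported tree (the NFSv4 pseudo-root)
/all-hdds       192.168.1.0/24(rw,sync,fsid=0,no_subtree_check)
# Sub-mounts, exported with nohide so clients see through them
/all-hdds/hdd1  192.168.1.0/24(rw,sync,nohide,no_subtree_check)
/all-hdds/hdd2  192.168.1.0/24(rw,sync,nohide,no_subtree_check)
/all-hdds/hdd3  192.168.1.0/24(rw,sync,nohide,no_subtree_check)
/all-hdds/hdd4  192.168.1.0/24(rw,sync,nohide,no_subtree_check)
```

After exportfs -ra on the server, the client would then mount the pseudo-root, e.g. mount -t nfs4 yourserver:/ /client-mountpoint.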

Fails to `mkdir /mnt/vzsnap0` for Container Backups with Permission Denied

This is all done as the root user.
The script for backups at /usr/share/perl5/PVE/VZDump/LXC.pm sets a default mount point
my $default_mount_point = "/mnt/vzsnap0";
But regardless of whether I use the GUI or the command line I get the following error:
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0:
Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
And lines 160-161 in that script are:
my $rootdir = $default_mount_point;
mkpath $rootdir;
After the installation, before I created any images or did any backups, I set up two things.
(1) SSHFS mount for /mnt/backups
(2) Added all other drives as Linux LVM
What I did for the drive addition was as simple as:
pvcreate /dev/sdb1
pvcreate /dev/sdc1
pvcreate /dev/sdd1
pvcreate /dev/sde1
vgextend pve /dev/sdb1
vgextend pve /dev/sdc1
vgextend pve /dev/sdd1
vgextend pve /dev/sde1
lvextend pve/data /dev/sdb1
lvextend pve/data /dev/sdc1
lvextend pve/data /dev/sdd1
lvextend pve/data /dev/sde1
For the SSHFS instructions see my blog post on it: https://6ftdan.com/allyourdev/2018/02/04/proxmox-a-vm-server-for-your-home/
Here are the relevant filesystem and directory permission details.
cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 9.0M 1.6G 1% /run
/dev/mapper/pve-root 37G 8.0G 27G 24% /
tmpfs 7.9G 43M 7.8G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/fuse 30M 20K 30M 1% /etc/pve
sshfs#10.0.0.10:/mnt/raid/proxmox_backup 1.4T 725G 672G 52% /mnt/backups
tmpfs 1.6G 0 1.6G 0% /run/user/0
ls -dla /mnt
drwxr-xr-x 3 root root 0 Aug 12 20:10 /mnt
ls /mnt
backups
ls -dla /mnt/backups
drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups
The command that I desire to succeed is:
vzdump 103 --compress lzo --node ProxMox --storage backup --remove 0 --mode snapshot
For the record the container image is only 8GB in size.
Cloning containers does work and snapshots work.
Q & A
Q) How are you running the perl script?
A) Through the GUI you click on Backup now, then select your storage (I have 'backups' and 'local', and both produce this error), then select the state of the container (Snapshot, Suspend, and Stop each produce the same error), then the compression type (none, LZO, and gzip each produce the same error). Once all that is set you click Backup and get the following output.
INFO: starting new backup job: vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0
INFO: Starting Backup of VM 103 (lxc)
INFO: Backup started at 2019-08-18 16:21:11
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: Passport
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0: Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
INFO: Failed at 2019-08-18 16:21:11
INFO: Backup job finished with errors
TASK ERROR: job errors
From this you can see that the command is vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0. I've also tried logging in via an SSH shell and running this command, and I get the same error.
Q) It could be that the directory's "immutable" attribute is set. Try lsattr / and see if /mnt has the lower-case "i" attribute set to it.
A) root@ProxMox:~# lsattr /
--------------e---- /tmp
--------------e---- /opt
--------------e---- /boot
lsattr: Inappropriate ioctl for device While reading flags on /sys
--------------e---- /lost+found
lsattr: Operation not supported While reading flags on /sbin
--------------e---- /media
--------------e---- /etc
--------------e---- /srv
--------------e---- /usr
lsattr: Operation not supported While reading flags on /libx32
lsattr: Operation not supported While reading flags on /bin
lsattr: Operation not supported While reading flags on /lib
lsattr: Inappropriate ioctl for device While reading flags on /proc
--------------e---- /root
--------------e---- /var
--------------e---- /home
lsattr: Inappropriate ioctl for device While reading flags on /dev
lsattr: Inappropriate ioctl for device While reading flags on /mnt
lsattr: Operation not supported While reading flags on /lib32
lsattr: Operation not supported While reading flags on /lib64
lsattr: Inappropriate ioctl for device While reading flags on /run
Q) Can you manually create /mnt/vzsnap0 without any issues?
A) root@ProxMox:~# mkdir /mnt/vzsnap0
mkdir: cannot create directory ‘/mnt/vzsnap0’: Permission denied
Q) Can you replicate it in a clean VM ?
A) I don't know. I don't have an extra system to try it on and I need the container's I have on it. Trying it within a VM in ProxMox… I'm not sure. I suppose I could try but I'd really rather not have to just yet. Maybe if all else fails.
Q) If you look at drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups, it looks like there is a user with id 1001 which has access to the backups, so not even root will be able to write. You need to check why it is 1001 and which group is represented by 1002. Then you can add your root user, as well as the user under which the GUI runs, to the group with id 1002.
A) I have no problem writing to the /mnt/backups directory. Just now did a cd /mnt/backups; mkdir test and that was successful.
From the message
mkdir /mnt/vzsnap0: Permission denied
it is obvious the problem is the permissions for /mnt directory.
It could be that the directory's "immutable" attribute is set.
Try lsattr / and see if /mnt has the lower-case "i" attribute set to it.
As a reference:
The lower-case i in lsattr output indicates that the file or directory is set as immutable: even root must clear this attribute first before making any changes to it. With root access, you should be able to remove this with chattr -i /mnt, but there is probably a reason why this was done in the first place; you should find out what the reason was and whether or not it's still applicable before removing it. There may be security implications.
So, if this is the case, try:
chattr -i /mnt
to remove it.
References
lsattr output
According to inode flags—attributes manual page:
FS_IMMUTABLE_FL 'i':
The file is immutable: no changes are permitted to the file
contents or metadata (permissions, timestamps, ownership, link
count and so on). (This restriction applies even to the supe‐
ruser.) Only a privileged process (CAP_LINUX_IMMUTABLE) can
set or clear this attribute.
As long as the bounty is still up, I'll give it to a legitimate answer that fixes the problem described here.
What I'm writing here for you all is a workaround I've thought of, which works. Note, it is very slow.
Since I am able to write to the /mnt/backups directory, which exists on another system on the network, I went ahead and changed the Perl script to point to /mnt/backups/vzsnap0 instead of /mnt/vzsnap0.
The bounty remains for anyone who can get the /mnt directory to work as the mount path, so that the backup script can successfully mount vzsnap0.
1)
Perhaps the filesystem holding "/mnt/vzsnap0" is mounted read-only?
A hint may be in your fstab entry:
/dev/pve/root / ext4 errors=remount-ro 0 1
'errors=remount-ro' means that, in case of an error, the partition is remounted read-only. Perhaps this has happened to your mounted filesystem.
Can you try remounting the drive as described in the following link? https://askubuntu.com/questions/175739/how-do-i-remount-a-filesystem-as-read-write
And if that succeeds, manually create the directory afterwards?
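A minimal sketch of that check (the device and mount point are taken from the question's df output; adjust if yours differ):

```shell
# See whether the root filesystem is currently mounted read-only
grep ' / ' /proc/mounts

# If the options show 'ro', remount it read-write
mount -o remount,rw /

# Then try creating the snapshot mount point again
mkdir /mnt/vzsnap0
```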
2) If that didn't help:
https://www.linuxquestions.org/questions/linux-security-4/mkdir-throws-permission-denied-error-in-a-directoy-even-with-root-ownership-and-777-permission-4175424944/
There, someone remarked:
What is the filesystem for the partition that contains the directory.[?]
Double check the permissions of the directory, or whether it's a
symbolic link to another directory. If the directory is an NFS mount,
rootsquash can prevent writing by root.
Check for attributes (lsattr). Check for ACLs (getfacl). Check for
selinux restrictions. (ls -Z)
If the filesystem is corrupt, it might be initially mounted RW but
when you try to write to a bad area, change to RO.
Great, it turns out this is a pretty long-standing issue with Ubuntu Make which many people face.
I saw a workaround mentioned by an Ubuntu developer in the above link.
Just follow the steps below:
sudo -s
unset SUDO_UID
unset SUDO_GID
Then run umake to install your application as normal.
You should now be able to install to any directory you want. It works flawlessly for me.
Try ls -laZ /mnt to review the security context, in case SELinux is enabled; relabeling might be required then. errors=remount-ro should also be investigated (however, it is rather unlikely that lsattr would fail unless the /mnt inode itself is corrupted). Creating a new directory inode for these mount points might be worth a try; if it works, you can swap them.
Just change /mnt/backups to /mnt/sshfs/backups,
and vzdump will work.
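A sketch of that rearrangement (the sshfs source path is taken from the question's df output; /mnt/sshfs is the intermediate directory this answer proposes):

```shell
# Unmount the sshfs share from directly under /mnt
fusermount -u /mnt/backups

# Remount it one level deeper, under /mnt/sshfs
mkdir -p /mnt/sshfs/backups
sshfs 10.0.0.10:/mnt/raid/proxmox_backup /mnt/sshfs/backups

# Finally, update the 'backups' storage entry in Proxmox
# (Datacenter -> Storage in the GUI, or /etc/pve/storage.cfg)
# to point at the new path
```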

Hudson: returned status code 141: fatal: write error: No space left on device

I copied one of the existing projects to create a new project in Hudson. While running a build it says "returned status code 141: fatal: write error: No space left on device".
As suggested in other forums, I checked the free space and inode usage in the file system, and nothing seems problematic. Hudson is running as a service, and the Hudson user has been given sudo privileges. Older jobs still run, so nothing is different in the newly cloned job.
Disk Space
bash-4.1$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_dev-lv_root
20G 19G 28K 100% /
tmpfs 1.9G 192K 1.9G 1% /dev/shm
/dev/sda1 485M 83M 377M 19% /boot
/dev/mapper/vg_dev-lv_home
73G 26G 44G 38% /home
i-nodes used
bash-4.1$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg_dev-lv_root
1310720 309294 1001426 24% /
tmpfs 490645 4 490641 1% /dev/shm
/dev/sda1 128016 46 127970 1% /boot
/dev/mapper/vg_dev-lv_home
4833280 117851 4715429 3% /home
Hudson build log
bash-4.1$ cat log
Started by user anonymous
Checkout:workspace / /var/lib/hudson/jobs/Demo/workspace - hudson.remoting.LocalChannel@1d4ab266
Using strategy: Default
Checkout:workspace / /var/lib/hudson/jobs/Demo/workspace - hudson.remoting.LocalChannel@1d4ab266
Fetching changes from the remote Git repository
Fetching upstream changes from ssh://demouser@10.10.10.10:20/home/git-repos/proj.git
ERROR: Problem fetching from origin / origin - could be unavailable. Continuing anyway
ERROR: (Underlying report): Error performing command: git fetch -t ssh://demouser@10.10.10.10:20/home/git-repos/proj.git +refs/heads/*:refs/remotes/origin/*
Command "git fetch -t ssh://demouser@10.10.10.10:20/home/git-repos/proj.git +refs/heads/*:refs/remotes/origin/*" returned status code 141: fatal: write error: No space left on device
ERROR: Could not fetch from any repository
FATAL: Could not fetch from any repository
hudson.plugins.git.GitException: Could not fetch from any repository
at hudson.plugins.git.GitSCM$3.invoke(GitSCM.java:887)
at hudson.plugins.git.GitSCM$3.invoke(GitSCM.java:845)
at hudson.FilePath.act(FilePath.java:758)
at hudson.FilePath.act(FilePath.java:740)
at hudson.plugins.git.GitSCM.gerRevisionToBuild(GitSCM.java:845)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:622)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1483)
at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:507)
at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:424)
at hudson.model.Run.run(Run.java:1366)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:145)
Your error message is quite clear: there is no space left on the device.
This is verified by your df output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_dev-lv_root 20G 19G 28K 100% /
This tells you that you have a root partition / with a total size of 20 GB which is 100% used.
20 GB is probably a bit small in your case. As this "partition" is managed by LVM (/dev/mapper/vg...), it is possible to extend it to create more space for your data.
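If the volume group has free extents, extending the root LV can be sketched like this (the LV name is taken from the df output above; the 10 GB figure and the availability of free space are assumptions):

```shell
# See whether the volume group has free extents
vgs

# Grow the root logical volume by, say, 10 GB
lvextend -L +10G /dev/mapper/vg_dev-lv_root

# Grow the ext4 filesystem to fill the enlarged volume (online resize)
resize2fs /dev/mapper/vg_dev-lv_root
```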
Otherwise you have to check whether there is some "garbage" lying around which can be removed.
You can use something like xdiskusage to find out what is occupying your precious disk space.
But if you are not familiar with filesystem concepts, it may be easier to find someone else to do it for you.
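As a command-line alternative to xdiskusage for hunting the garbage down (the -x flag keeps du on the root filesystem, so /home and /boot are not counted twice):

```shell
# Show per-directory usage on the root filesystem, biggest last
du -xh --max-depth=1 / 2>/dev/null | sort -h
```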
I had a very similar issue; it turned out to be a 40 GB log file from a "never-ending" build which had been running for 8 hours.

linux disk usage report inconsistency after removing file. cpanel inaccurate disk usage report [closed]

Closed. This question is off-topic and is not currently accepting answers.
Closed 10 years ago.
relevant software:
Red Hat Enterprise Linux Server release 6.3 (Santiago)
cpanel installed 11.34.0 (build 7)
background and problem:
I was getting a disk usage warning (via cpanel) because /var seemed to be filling up on my server. The natural assumption was that a log file was growing too large and filling up the partition. I recently removed a large log file and changed my syslog config to rotate the log files more regularly: I removed something like /var/log/somefile and edited /etc/rsyslog.conf. This is why I was suspicious of the disk usage warning issued by cpanel; it didn't seem right.
This is what df was reporting for the partitions:
$ [/var]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.9G 511M 8.9G 6% /
tmpfs 5.9G 0 5.9G 0% /dev/shm
/dev/sda1 99M 53M 42M 56% /boot
/dev/sda8 883G 384G 455G 46% /home
/dev/sdb1 9.9G 151M 9.3G 2% /tmp
/dev/sda3 9.9G 7.8G 1.6G 84% /usr
/dev/sda5 9.9G 9.3G 108M 99% /var
This is what du was reporting for /var mount point:
$ [/var]# du -sh
528M .
Clearly something funky was going on. I had a similar kind of reporting inconsistency in the past, and after I restarted the server the df reporting seemed to be correct. I decided to reboot the server to see if the same thing would happen.
This is what df reports now:
$ [~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 9.9G 511M 8.9G 6% /
tmpfs 5.9G 0 5.9G 0% /dev/shm
/dev/sda1 99M 53M 42M 56% /boot
/dev/sda8 883G 384G 455G 46% /home
/dev/sdb1 9.9G 151M 9.3G 2% /tmp
/dev/sda3 9.9G 7.8G 1.6G 84% /usr
/dev/sda5 9.9G 697M 8.7G 8% /var
This looks more like what I'd expect to get.
For consistency this is what du reports for /var:
$ [/var]# du -sh
638M .
question:
This is a nuisance. I'm not sure where the disk usage reports issued by cpanel get their info, but it clearly isn't correct. How can I avoid this inaccurate reporting in the future? df reporting the wrong disk usage seems like a strong indicator of the underlying problem, but I'm not sure. Is there a way to 'refresh' the filesystem somehow so that the df report is accurate without restarting the server? Any other ideas for resolving this issue?
If you remove a file but it is still open by some process, the disk space is not recovered: the process continues to access that file. This is a common problem with log files, because syslogd keeps them all open.
The disk space reported by du doesn't include this file because it works by walking down the directory hierarchy adding up the sizes of all the files it finds. But this file can't be found in any directory, so it's not counted. df reports the actual space used in the filesystem.
The logfile rotation script sends a signal to syslogd telling it to close and reopen all its log files. You can accomplish this with:
killall -HUP syslogd
You also need to do this to get syslogd to use your modified syslog.conf.
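To confirm this is what's happening, and find the culprit process without rebooting, lsof can list files that have been deleted but are still held open (this assumes lsof is installed; +L1 selects open files with zero remaining directory links):

```shell
# Deleted-but-open files anywhere on the system
lsof +L1

# Or restricted to the filesystem in question
lsof +L1 /var
```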
