Why can't .exes see files in /tmp in Windows Subsystem for Linux when the current working directory is a subdirectory of /mnt/c?

When I run a .exe file from within WSL while the current working directory is /mnt/c or a subdirectory thereof, it seems to be unable to see files in /tmp. For example, cd /mnt/c; notepad.exe $(mktemp) throws "The system cannot find the file specified", even though non-exe executables such as cat $(mktemp) work fine. Note that mktemp itself works fine, giving the correct output and actually creating the file.
Interestingly, I've noticed that this does not happen when the current working directory is /, among others. There, notepad.exe $(mktemp) works fine. However, when the current working directory is /mnt/c or a subdirectory of it, this strange behavior occurs.
Why can .exes in WSL see /tmp when the current working directory is outside /mnt/c, but not when it is inside? What causes this to happen?
If it matters, I'm on WSL1 Ubuntu 18.04 LTS.
The output of ps awx with notepad.exe $(mktemp) running is
1 ? Ssl 0:58 /init
5 tty1 Ss 0:00 /init
6 tty1 S 0:09 -bash
3227 tty1 S 0:00 /init /mnt/c/Windows/system32/notepad.exe /tmp/tmp.KQwVgByK8u
3228 tty2 Ss 0:00 /init
3229 tty2 S 0:00 -bash
3255 tty2 R 0:00 ps awx
Process 3227's mount info is
=== /proc/3227/mountinfo ===
2 2 0:2 / / rw,noatime - lxfs rootfs rw
3 2 0:3 / /dev rw,noatime - tmpfs none rw,mode=755
4 2 0:4 / /sys rw,nosuid,nodev,noexec,noatime - sysfs sysfs rw
5 2 0:5 / /proc rw,nosuid,nodev,noexec,noatime - proc proc rw
6 3 0:6 / /dev/pts rw,nosuid,noexec,noatime - devpts devpts rw,gid=5,mode=620
7 2 0:7 / /run rw,nosuid,noexec,noatime - tmpfs none rw,mode=755
8 7 0:8 / /run/lock rw,nosuid,nodev,noexec,noatime - tmpfs none rw
9 7 0:9 / /run/shm rw,nosuid,nodev,noatime - tmpfs none rw
10 7 0:10 / /run/user rw,nosuid,nodev,noexec,noatime - tmpfs none rw,mode=755
11 5 0:11 / /proc/sys/fs/binfmt_misc rw,relatime - binfmt_misc binfmt_misc rw
12 4 0:12 / /sys/fs/cgroup rw,relatime - tmpfs cgroup rw,mode=755
13 12 0:13 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices
14 2 0:14 / /mnt/c rw,noatime - drvfs C:\134 rw,uid=1000,gid=1000,case=off

Old question, but I had the same problem just now. Found the solution to this on superuser:
https://superuser.com/a/1728907/144431
You need to update the WSL utilities (wslu) to the latest version (apparently this years-old bug was finally fixed). On Ubuntu, it's just a matter of running:
sudo add-apt-repository ppa:wslutilities/wslu
sudo apt update
sudo apt upgrade
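After the upgrade, the repro from the question should work (a quick check, assuming the same setup as in the question):
cd /mnt/c && notepad.exe $(mktemp)
Notepad should now open the temporary file instead of reporting that the system cannot find it.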

Related

How does OCI/runc system path constraining work to prevent remounting such paths?

The background of my question is a set of test cases for my Linux-kernel namespaces discovery Go package lxkns, where I create a new child user namespace as well as a new child PID namespace inside a test container. I then need to remount /proc, as otherwise I would see the wrong process information and could not look up the correct process-related information, such as the namespaces of the test process inside the new child user+PID namespaces (without resorting to guerilla tactics).
The test harness/test setup is essentially this, and it fails without --privileged (I'm simplifying by granting all capabilities and switching off seccomp and AppArmor in order to cut through to the real meat):
docker run -it --rm --name closedboxx --cap-add ALL --security-opt seccomp=unconfined --security-opt apparmor=unconfined busybox unshare -Umpfr mount -t proc /proc proc
mount: permission denied (are you root?)
Of course, the path of least resistance, and also of least beauty, is to use --privileged, which will get the job done; and as this is a throw-away test container, maybe there is beauty in the sheer lack of it.
Recently, I became aware of Docker's --security-opt systempaths=unconfined, which (afaik) translates into empty maskedPaths and readonlyPaths lists in the resulting OCI/runc container spec. The following docker run command succeeds as needed; it just returns silently in this example, which means the mount was carried out correctly:
docker run -it --rm --name closedboxx --cap-add ALL --security-opt seccomp=unconfined --security-opt apparmor=unconfined --security-opt systempaths=unconfined busybox unshare -Umpfr mount -t proc /proc proc
In the case of the failing setup, running without --privileged and without --security-opt systempaths=unconfined, the mounts inside the child user and PID namespaces inside the container look as follows:
docker run -it --rm --name closedboxx --cap-add ALL --security-opt seccomp=unconfined --security-opt apparmor=unconfined busybox unshare -Umpfr cat /proc/1/mountinfo
693 678 0:46 / / rw,relatime - overlay overlay rw,lowerdir=/var/lib/docker/overlay2/l/AOY3ZSL2FQEO77CCDBKDOPEK7M:/var/lib/docker/overlay2/l/VNX7PING7ZLTIPXRDFSBMIOKKU,upperdir=/var/lib/docker/overlay2/60e8ad10362e49b621d2f3d603845ee24bda62d6d77de96a37ea0001c8454546/diff,workdir=/var/lib/docker/overlay2/60e8ad10362e49b621d2f3d603845ee24bda62d6d77de96a37ea0001c8454546/work,xino=off
694 693 0:50 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
695 694 0:50 /bus /proc/bus ro,relatime - proc proc rw
696 694 0:50 /fs /proc/fs ro,relatime - proc proc rw
697 694 0:50 /irq /proc/irq ro,relatime - proc proc rw
698 694 0:50 /sys /proc/sys ro,relatime - proc proc rw
699 694 0:50 /sysrq-trigger /proc/sysrq-trigger ro,relatime - proc proc rw
700 694 0:51 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755
701 694 0:51 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755
702 694 0:51 /null /proc/latency_stats rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755
703 694 0:51 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755
704 694 0:51 /null /proc/sched_debug rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755
705 694 0:56 / /proc/scsi ro,relatime - tmpfs tmpfs ro
706 693 0:51 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755
707 706 0:52 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666
708 706 0:49 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw
709 706 0:55 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k
710 706 0:52 /0 /dev/console rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666
711 693 0:53 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro
712 711 0:54 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755
713 712 0:28 /docker/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0 /sys/fs/cgroup/systemd ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,xattr,name=systemd
714 712 0:31 /docker/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0 /sys/fs/cgroup/cpuset ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,cpuset
715 712 0:32 /docker/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0 /sys/fs/cgroup/net_cls,net_prio ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,net_cls,net_prio
716 712 0:33 /docker/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0 /sys/fs/cgroup/memory ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,memory
717 712 0:34 /docker/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0 /sys/fs/cgroup/perf_event ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,perf_event
718 712 0:35 /docker/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0 /sys/fs/cgroup/devices ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,devices
719 712 0:36 /docker/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0 /sys/fs/cgroup/blkio ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,blkio
720 712 0:37 /docker/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0 /sys/fs/cgroup/pids ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,pids
721 712 0:38 / /sys/fs/cgroup/rdma ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,rdma
722 712 0:39 /docker/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0 /sys/fs/cgroup/freezer ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,freezer
723 712 0:40 /docker/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0 /sys/fs/cgroup/cpu,cpuacct ro,nosuid,nodev,noexec,relatime - cgroup cgroup rw,cpu,cpuacct
724 711 0:57 / /sys/firmware ro,relatime - tmpfs tmpfs ro
725 693 8:2 /var/lib/docker/containers/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/sda2 rw,stripe=256
944 693 8:2 /var/lib/docker/containers/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0/hostname /etc/hostname rw,relatime - ext4 /dev/sda2 rw,stripe=256
1352 693 8:2 /var/lib/docker/containers/eebfacfdc6e0e34c4e62d9f162bdd7c04b232ba2d1f5327eaf7e00011d0235c0/hosts /etc/hosts rw,relatime - ext4 /dev/sda2 rw,stripe=256
What mechanism exactly is blocking the fresh mount of procfs on /proc?
What is preventing me from unmounting /proc/kcore, etc.?
Quite some more digging turned up this answer to "About mounting and unmounting inherited mounts inside a newly-created mount namespace", which points in the correct direction but needs additional explanation (not least because it builds on a misleading paragraph in the man pages about mount namespaces being hierarchical, which Michael Kerrisk fixed some time ago).
Our starting point is when runc sets up the (test) container: to mask system paths, especially in the container's future /proc tree, it creates a set of new mounts that either mask individual files using /dev/null or mask subdirectories using tmpfs. This results in procfs being mounted on /proc, together with the further masking sub-mounts.
Now the test container starts and at some point a process unshares into a new user namespace. Please keep in mind that this new user namespace (again) belongs to the (real) root user with UID 0, as a default Docker installation won't enable running containers in new user namespaces.
Next, the test process also unshares into a new mount namespace, so this new mount namespace belongs to the newly created user namespace, but not to the initial user namespace. According to section "restrictions on mount namespaces" in mount_namespaces(7):
If the new namespace and the namespace from which the mount point list was copied are owned by different user namespaces, then the new mount namespace is considered less privileged.
Please note that the criterion here is: the "donor" mount namespace and the new mount namespace have different user namespaces; it doesn't matter whether they have the same owner user (UID), or not.
The important clue now is:
Mounts that come as a single unit from a more privileged mount namespace are locked together and may not be separated in a less privileged mount namespace. (The unshare(2) CLONE_NEWNS operation brings across all of the mounts from the original mount namespace as a single unit, and recursive mounts that propagate between mount namespaces propagate as a single unit.)
As it is now no longer possible to separate the /proc mount point from its masking submounts, it is not possible to (re)mount /proc (question 1). In the same sense, it is impossible to unmount /proc/kcore, because that would allow unmasking (question 2).
Now, deploying the test container with --security-opt systempaths=unconfined results in a single /proc mount only, without any of the masking submounts. Consequently, per the man-page rules cited above, there is only a single mount, which we are allowed to (re)mount, provided we hold the CAP_SYS_ADMIN capability (which covers mounting, besides tons of other interesting functionality).
Please note that it is possible to unmount the masked /proc/ paths inside the container while still in the original (=initial) user namespace, when possessing (not surprisingly) CAP_SYS_ADMIN. The (b)lock only kicks in with a separate user namespace, hence some projects strive to deploy containers in their own new user namespaces (which unfortunately has side effects, not least on container networking).
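To see this locking in action, it can be reproduced in the same busybox container as above (a minimal sketch; exact error messages vary between mount/umount implementations):
docker run -it --rm --cap-add ALL --security-opt seccomp=unconfined --security-opt apparmor=unconfined busybox unshare -Umpfr sh -c 'umount /proc/kcore; mount -t proc proc /proc'
Both commands are expected to fail: the masking submount /proc/kcore cannot be separated from the locked unit, and the locked /proc cannot be mounted over.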

Fails to `mkdir /mnt/vzsnap0` for Container Backups with Permission Denied

This is all done as the root user.
The script for backups at /usr/share/perl5/PVE/VZDump/LXC.pm sets a default mount point
my $default_mount_point = "/mnt/vzsnap0";
But regardless of whether I use the GUI or the command line, I get the following error:
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0:
Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
And lines 160-161 in that script are:
my $rootdir = $default_mount_point;
mkpath $rootdir;
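To take the GUI out of the picture, the failing call can be reproduced from a root shell (a sketch; File::Path's mkpath, which LXC.pm uses here, dies with the same message format on failure):
perl -MFile::Path -e 'File::Path::mkpath("/mnt/vzsnap0")'
If the permissions problem is present, this prints the same "mkdir /mnt/vzsnap0: Permission denied" error, independent of vzdump.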
After the installation, before I created any images or did any backups, I set up two things:
(1) SSHFS mount for /mnt/backups
(2) Added all other drives as Linux LVM
What I did for the drive addition is as simple as:
pvcreate /dev/sdb1
pvcreate /dev/sdc1
pvcreate /dev/sdd1
pvcreate /dev/sde1
vgextend pve /dev/sdb1
vgextend pve /dev/sdc1
vgextend pve /dev/sdd1
vgextend pve /dev/sde1
lvextend pve/data /dev/sdb1
lvextend pve/data /dev/sdc1
lvextend pve/data /dev/sdd1
lvextend pve/data /dev/sde1
For the SSHFS instructions see my blog post on it: https://6ftdan.com/allyourdev/2018/02/04/proxmox-a-vm-server-for-your-home/
Here are the relevant filesystem and directory-permission details.
cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 9.0M 1.6G 1% /run
/dev/mapper/pve-root 37G 8.0G 27G 24% /
tmpfs 7.9G 43M 7.8G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/fuse 30M 20K 30M 1% /etc/pve
sshfs#10.0.0.10:/mnt/raid/proxmox_backup 1.4T 725G 672G 52% /mnt/backups
tmpfs 1.6G 0 1.6G 0% /run/user/0
ls -dla /mnt
drwxr-xr-x 3 root root 0 Aug 12 20:10 /mnt
ls /mnt
backups
ls -dla /mnt/backups
drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups
The command that I desire to succeed is:
vzdump 103 --compress lzo --node ProxMox --storage backup --remove 0 --mode snapshot
For the record, the container image is only 8 GB in size.
Cloning containers does work and snapshots work.
Q & A
Q) How are you running the perl script?
A) Through the GUI you click on "Backup now", then select your storage (I have "backups" and "local", and both produce this error), then select the state of the container (Snapshot, Suspend, and Stop each produce the same error), then the compression type (none, LZO, and gzip each produce the same error). Once all that is set, you click Backup and get the following output:
INFO: starting new backup job: vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0
INFO: Starting Backup of VM 103 (lxc)
INFO: Backup started at 2019-08-18 16:21:11
INFO: status = stopped
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: CT Name: Passport
ERROR: Backup of VM 103 failed - mkdir /mnt/vzsnap0: Permission denied at /usr/share/perl5/PVE/VZDump/LXC.pm line 161.
INFO: Failed at 2019-08-18 16:21:11
INFO: Backup job finished with errors
TASK ERROR: job errors
From this you can see that the command is vzdump 103 --node ProxMox --mode snapshot --compress lzo --storage backups --remove 0. I've also tried logging in with an SSH shell and running this command, and I get the same error.
Q) It could be that the directory's "immutable" attribute is set. Try lsattr / and see if /mnt has the lower-case "i" attribute set to it.
A) root@ProxMox:~# lsattr /
--------------e---- /tmp
--------------e---- /opt
--------------e---- /boot
lsattr: Inappropriate ioctl for device While reading flags on /sys
--------------e---- /lost+found
lsattr: Operation not supported While reading flags on /sbin
--------------e---- /media
--------------e---- /etc
--------------e---- /srv
--------------e---- /usr
lsattr: Operation not supported While reading flags on /libx32
lsattr: Operation not supported While reading flags on /bin
lsattr: Operation not supported While reading flags on /lib
lsattr: Inappropriate ioctl for device While reading flags on /proc
--------------e---- /root
--------------e---- /var
--------------e---- /home
lsattr: Inappropriate ioctl for device While reading flags on /dev
lsattr: Inappropriate ioctl for device While reading flags on /mnt
lsattr: Operation not supported While reading flags on /lib32
lsattr: Operation not supported While reading flags on /lib64
lsattr: Inappropriate ioctl for device While reading flags on /run
Q) Can you manually create /mnt/vzsnap0 without any issues?
A) root@ProxMox:~# mkdir /mnt/vzsnap0
mkdir: cannot create directory ‘/mnt/vzsnap0’: Permission denied
Q) Can you replicate it in a clean VM?
A) I don't know. I don't have an extra system to try it on and I need the container's I have on it. Trying it within a VM in ProxMox… I'm not sure. I suppose I could try but I'd really rather not have to just yet. Maybe if all else fails.
Q) If you look at drwxr-xr-x 1 1001 1002 80 Aug 12 20:40 /mnt/backups, it looks like there is a user with id 1001 who has access to the backups, so not even root will be able to write. You need to check why it is 1001 and which group is represented by 1002. Then you can add your root as well as the user under which the GUI runs to the group with id 1002.
A) I have no problem writing to the /mnt/backups directory. Just now did a cd /mnt/backups; mkdir test and that was successful.
From the message
mkdir /mnt/vzsnap0: Permission denied
it is obvious the problem is the permissions of the /mnt directory.
It could be that the directory's "immutable" attribute is set.
Try lsattr / and see if /mnt has the lower-case "i" attribute set to it.
As a reference:
The lower-case i in lsattr output indicates that the file or directory is set as immutable: even root must clear this attribute first before making any changes to it. With root access, you should be able to remove this with chattr -i /mnt, but there is probably a reason why this was done in the first place; you should find out what the reason was and whether or not it's still applicable before removing it. There may be security implications.
So, if this is the case, try:
chattr -i /mnt
to remove it.
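To check just the /mnt directory entry itself rather than its contents (a small sketch; -d tells lsattr to list the directory instead of descending into it):
lsattr -d /mnt
Look for the lower-case "i" flag in the output before running chattr -i.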
References
lsattr output
According to the inode flags (attributes) manual page:
FS_IMMUTABLE_FL 'i':
The file is immutable: no changes are permitted to the file contents or metadata (permissions, timestamps, ownership, link count and so on). (This restriction applies even to the superuser.) Only a privileged process (CAP_LINUX_IMMUTABLE) can set or clear this attribute.
As long as the bounty is still up, I'll give it to a legitimate answer that fixes the problem described here.
What I'm writing here for you all is a workaround I've thought of, which works. Note: it is very slow.
Since I am able to write to the /mnt/backups directory, which exists on another system on the network, I went ahead and changed the Perl script to point to /mnt/backups/vzsnap0 instead of /mnt/vzsnap0.
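Concretely, the change amounts to the default shown earlier in /usr/share/perl5/PVE/VZDump/LXC.pm becoming (a sketch of the edit; note that package updates will overwrite it):
my $default_mount_point = "/mnt/backups/vzsnap0";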
The bounty remains for anyone who can get the /mnt directory to work as the mount path, so that vzsnap0 is successfully mounted for the backup script.
1)
Perhaps your "/mnt/vzsnap0" is mounted read-only?
You may be able to tell from your fstab entry:
/dev/pve/root / ext4 errors=remount-ro 0 1
'errors=remount-ro' means that, in case of errors, the partition is remounted read-only. Perhaps this setting applies to your mounted filesystem as well.
Can you try remounting the drive as in the following link? https://askubuntu.com/questions/175739/how-do-i-remount-a-filesystem-as-read-write
And if that succeeds, manually create the directory afterwards?
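Spelled out, the suggestion is (a sketch; per the df -h output above, /mnt lives on the root filesystem):
mount | grep 'on / '
mount -o remount,rw /
mkdir /mnt/vzsnap0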
2) If that didn't help:
https://www.linuxquestions.org/questions/linux-security-4/mkdir-throws-permission-denied-error-in-a-directoy-even-with-root-ownership-and-777-permission-4175424944/
There, someone remarked:
What is the filesystem for the partition that contains the directory.[?]
Double check the permissions of the directory, or whether it's a
symbolic link to another directory. If the directory is an NFS mount,
rootsquash can prevent writing by root.
Check for attributes (lsattr). Check for ACLs (getfacl). Check for
selinux restrictions. (ls -Z)
If the filesystem is corrupt, it might be initially mounted RW but
when you try to write to a bad area, change to RO.
Great: it turns out this is a pretty long-standing issue with Ubuntu Make that many people face.
I saw a workaround mentioned by an Ubuntu developer in the link above.
Just follow the steps below:
sudo -s
unset SUDO_UID
unset SUDO_GID
Then run umake to install your application as normal.
You should now be able to install to any directory you want. It works flawlessly for me.
Try ls -laZ /mnt to review the security context, in case SELinux is enabled; relabeling might be required then. errors=remount-ro should also be investigated (however, it is rather unlikely that lsattr would fail unless the /mnt inode itself is corrupted). Creating a new directory inode for these mount points might be worth a try; if it works, one can swap them.
Just change /mnt/backups to /mnt/sshfs/backups, and vzdump will work.

/media directory not working anymore

I can't automount USB sticks on my Linux system because I have several problems with the /media directory.
Here is my ls -al result on / (I kept just the media and mnt directories for you):
total 116
drwxr-xr-x 25 root root 4096 Jun 13 09:39 .
drwxr-xr-x 25 root root 4096 Jun 13 09:39 ..
drwx------ 8 acarbonaro acarbonaro 8192 Jan 1 1970 media
drwxr-xr-x 2 root root 4096 Apr 11 2014 mnt
This already seems strange, as for other users it is usually owned by root.
When I try to sudo chown root:root media, it says permission denied.
When I try to sudo chown 755 media, it doesn't say anything, but when I ls -l afterwards, nothing has changed.
The other problem: I don't know why, but the media directory is empty; I can't find the user directory that used to be in it.
When I plug in a USB flash drive, it cannot automount. I have to mount it manually in another directory, which is workable but clearly not handy.
Thank you for your help.
EDIT:
Here is my df -T result:
Filesystem Type 1K-blocks Used Available Use% Mounted on
udev devtmpfs 4015584 8 4015576 1% /dev
tmpfs tmpfs 805680 1212 804468 1% /run
/dev/sda1 ext4 115214888 9815468 99523708 9% /
none tmpfs 4 0 4 0% /sys/fs/cgroup
none tmpfs 5120 0 5120 0% /run/lock
none tmpfs 4028392 522580 3505812 13% /run/shm
none tmpfs 102400 600 101800 1% /run/user
/dev/sda2 ext4 130654772 18532260 105462572 15% /home
/dev/sdb2 vfat 14938864 218480 14720384 2% /media
EDIT:
I don't know the answer to my problem, but rebooting reset the /media directory as it was before, and it works again.
I assume the problem was that you yanked the USB stick out of the port without unmounting it; indeed, your df -T output shows /dev/sdb2 mounted directly on /media. UNIX is not very keen on parts of its filesystem disappearing. Next time, umount it first, then remove it.

Remounting as read-write a destination directory on device

How do I remount a destination directory on a device as read-write (a single folder)? I need to replace a file, but it is on a "Read-only file system", which does not allow changing permissions. The path to the folder is /etc/foo/bar; I need to remount the bar folder. This is embedded Linux (busybox), Linux version 2.6.18_pro500.
mount -o rw,remount [destination folder]
I tried the following, with no success:
<root@elocal:/etc/foo/bar> ls -la
total 6
drwxr-xr-x 2 root 0 98 Jan 18 2011 .
drwxrwxr-x 7 root 0 105 Feb 10 2011 ..
-rw-r--r-- 1 root 0 1052 Jan 18 2011 file1
-rw-r--r-- 1 root 0 270 Jan 18 2011 file2
-rw-r--r-- 1 root 0 1088 Jan 18 2011 file3
-rw-r--r-- 1 root 0 270 Jan 18 2011 file4
mount -o rw,remount /etc/foo/bar
mount: can't find /etc/foo/bar in /proc/mounts
Output of the mount command:
mount
rootfs on / type rootfs (rw)
/dev/root on / type squashfs (ro)
proc on /proc type proc (rw)
ramfs on /var type ramfs (rw)
sysfs on /sys type sysfs (rw)
tmpfs on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw)
/dev/mtdblock4 on /nvram type jffs2 (rw)
Output of cat /proc/mounts:
cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / squashfs ro 0 0
proc /proc proc rw 0 0
ramfs /var ramfs rw 0 0
sysfs /sys sysfs rw 0 0
tmpfs /dev tmpfs rw 0 0
devpts /dev/pts devpts rw 0 0
/dev/mtdblock4 /nvram jffs2 rw 0 0
Normally, you would use mount -o remount,rw / (/ is the mount point, not /etc/foo/bar).
However, this will not work in your case: per your /proc/mounts output,
rootfs / rootfs rw 0 0
/dev/root / squashfs ro 0 0
your rootfs is using squashfs, which is a read-only filesystem (see the Wikipedia article). Basically, the filesystem image is created and compressed on the build system; once it is on the target system, it cannot be changed.
You will need to go back to the build system and change the contents and re-build the filesystem image.
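If rebuilding the image is not immediately possible, a common stopgap is to shadow the directory with a bind mount from writable storage (a sketch, not part of the original answer; it assumes /var is writable, as the ramfs in your mount output suggests, and the change does not survive a reboot):
mkdir -p /var/bar_rw
cp /etc/foo/bar/* /var/bar_rw/
mount -o bind /var/bar_rw /etc/foo/bar
After the bind mount, the files visible under /etc/foo/bar come from the writable copy and can be replaced.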

Hudson server always stopped every morning

I've got this regular problem: my build server (Hudson) is always stopped in the morning, so I have to start it manually. Is there any reason why, or any location where I can start looking for the error message?
Here's the diagnostic work I did:
ascari:~# ps -ef | grep -i hud
root 5959 5944 0 09:00 pts/0 00:00:00 grep -i hud
ascari:~# cd /etc/init.d
ascari:/etc/init.d# ./hudson start
ascari:/etc/init.d# ps -ef | grep -i hud
hudson 6004 1 0 09:00 ? 00:00:00 /usr/bin/daemon --name=hudson --inherit --env=HUDSON_HOME=/var/lib/hudson --output=/var/log/hudson/hudson.log --user=hudson --pidfile=/var/run/hudson/hudson.pid -- /usr/bin/java -Xms512m -Xmx1024m -Dhttp.proxyHost=proxy.domain.com -Dhttp.proxyPort=3128 -Dhttp.nonProxyHosts="localhost|ascari|*.domain.com" -jar /usr/share/hudson/hudson.war --webroot=/var/run/hudson/war
hudson 6005 6004 48 09:00 ? 00:00:01 /usr/bin/java -Xms512m -Xmx1024m -Dhttp.proxyHost=proxy.domain.com -Dhttp.proxyPort=3128 -Dhttp.nonProxyHosts="localhost|ascari|*.domain.com" -jar /usr/share/hudson/hudson.war --webroot=/var/run/hudson/war
root 6008 5944 14 09:01 pts/0 00:00:00 grep -i hud
ascari:/etc/init.d# df -k -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 327M 125M 185M 41% /
tmpfs 1.5G 0 1.5G 0% /lib/init/rw
udev 10M 96K 10M 1% /dev
tmpfs 1.5G 0 1.5G 0% /dev/shm
/dev/sda9 4.7G 295M 4.1G 7% /home
/dev/sda8 4.2G 155M 3.8G 4% /tmp
/dev/sda5 4.6G 3.0G 1.4G 69% /usr
/dev/sda6 65G 32G 30G 52% /var
ascari:/etc/init.d# uname -a
Linux ascari 2.6.26-2-686 #1 SMP Sun Jun 21 04:57:38 UTC 2009 i686 GNU/Linux
ascari:/etc/init.d#
Have you checked the logfile (referenced above) and set the --logfile argument (as documented here)?
Rescheduling the project build solved the problem.
The Hudson process was being killed by the Linux kernel (the OOM killer) due to memory overconsumption.
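One way to confirm such a kill is to search the kernel log for OOM-killer entries mentioning the Java process (a sketch; log file names vary by distribution):
grep -Ei 'out of memory|killed process' /var/log/kern.log /var/log/messages 2>/dev/null | grep -i java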
