I have a shared hosting provider that does not mount /proc for security reasons.
I want to execute a binary written in Go which needs the path it was started from. Go determines this by calling readlink on the virtual link /proc/self/exe
(see the source at https://github.com/golang/go/blob/master/src/os/executable_procfs.go).
But this link can't be found, because /proc is not mounted.
os.Args[0] is not enough on its own, because the file can be called via "./app".
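A shell wrapper along these lines would work (APP_DIR and ./app are just placeholder names here), but I would prefer to solve this inside the Go binary itself:
#!/bin/sh
# Hypothetical wrapper sketch: resolve the directory this script lives in and
# hand it to the binary through an environment variable, so the binary does
# not need /proc/self/exe. APP_DIR and ./app are placeholder names.
APP_DIR=$(cd "$(dirname "$0")" && pwd)
export APP_DIR
exec "$APP_DIR/app" "$@"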
Is there another option to get the execution path? Thanks for any help!
I have the following problem and I am not sure what is happening. I'll explain briefly.
I work on a cluster with several nodes which are managed via Slurm. All these nodes share the same disk storage (I think it uses NFSv4). My problem is that since this storage is shared by a lot of users, there is a limited amount of disk space per user.
I use Slurm to launch Python scripts that run some code and save the output to a CSV file and a folder.
Since I need more space than I am assigned, what I do is mount a remote folder via sshfs from a machine where I have plenty of disk. Then I configure the Python script to write to that folder via an environment variable named EXPERIMENT_PATH. An example script is the following:
Python script:
import os

root_experiment_dir = os.getenv('EXPERIMENT_PATH')
if root_experiment_dir is None:
    root_experiment_dir = os.path.expanduser("./")
print(root_experiment_dir)

experiment_dir = os.path.join(root_experiment_dir, 'exp_dir')

## create experiment directory
try:
    os.makedirs(experiment_dir)
except OSError:
    # directory already exists
    pass

file_results_dir = os.path.join(root_experiment_dir, 'exp_dir', 'results.csv')
if os.path.isfile(file_results_dir):
    f_results = open(file_results_dir, 'a')  # append to an existing results file
else:
    f_results = open(file_results_dir, 'w')  # create a new results file
If I launch this Python script directly, I can see the created folder and file on the remote machine whose folder I mounted via sshfs. However, if I use sbatch to launch this script via the following bash file:
export EXPERIMENT_PATH="/tmp/remote_mount_point/"
sbatch -A server -p queue2 --ntasks=1 --cpus-per-task=1 --time=5-0:0:0 --job-name="HOLA" --output='./prueba.txt' ./run_argv.sh "python foo.py"
where run_argv.sh is a simple bash script that runs whatever command is passed to it as arguments, i.e. the file contains:
#!/bin/bash
$*
then I observe that nothing has been written on the remote machine. I can check the mounted folder in /tmp/remote_mount_point/ and nothing appears there either. Only when I unmount the remote folder with fusermount -u /tmp/remote_mount_point/ do I see that a folder named /tmp/remote_mount_point/ has been created on the running machine, with the file inside it, but obviously nothing appears on the remote machine.
In other words, it seems that launching through Slurm bypasses the sshfs-mounted folder and creates a new one on the host machine, which only becomes visible once the remote folder is unmounted.
Does anyone know why this happens and how to fix it? I emphasize that this only happens if I launch everything through the Slurm manager; otherwise everything works.
I shall emphasize that all the nodes in the cluster share the same disk space, so I guess the mounted folder should be visible from all machines.
Thanks in advance.
"I shall emphasize that all the nodes in the cluster share the same disk space, so I guess the mounted folder should be visible from all machines."
This is not how it works, unfortunately. To put it simply: you could say that mount points inside mount points (here SSHFS inside NFS) are "stored" in memory and not in the "parent" filesystem (here NFS), so the compute nodes have no idea there is an SSHFS mount on the login node.
For your setup to work, you would have to create the SSHFS mount inside your submission script (which can create a whole lot of new problems, for instance regarding authentication, etc.); a sketch follows below.
But before you dive into that, you should probably enquire whether the cluster has another filesystem ("scratch", "work", etc.) where you could temporarily store more data than the quota allows in your home filesystem.
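A minimal sketch of what mounting inside the job could look like, assuming passwordless (key-based) SSH access from the compute nodes to the remote host and that sshfs/fuse is available on the nodes; the remote host and path are placeholders:
#!/bin/bash
# Hypothetical variant of run_argv.sh: mount the SSHFS target on the compute
# node itself so the job writes to the real remote folder. Assumes key-based
# SSH authentication from the compute node; REMOTE is a placeholder.

MOUNT_POINT="/tmp/remote_mount_point"
REMOTE="user@remote-host:/path/with/plenty/of/space"

mkdir -p "$MOUNT_POINT"
sshfs "$REMOTE" "$MOUNT_POINT"

export EXPERIMENT_PATH="$MOUNT_POINT"
$*                                # run the command passed in, as in the original script

fusermount -u "$MOUNT_POINT"      # clean up the mount when the job finishes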
I am running a VM on my machine and have mounted a host folder inside the VM using sshfs (auto-mounted via fstab).
abc#xyz:/home/machine/test on /home/vm/test type fuse.sshfs (rw,relatime,user_id=0,group_id=0,allow_other)
That folder contains an executable which I want to run inside the VM, but I also need to set some capabilities before running it. So my script looks like:
#!/bin/bash
# Some preprocessing.
sudo setcap CAP_DAC_OVERRIDE+ep /home/vm/test/my_exec
/home/vm/test/my_exec
But I am getting the error below:
Failed to set capabilities on file `/home/vm/test/my_exec' (Operation not supported)
The value of the capability argument is not permitted for a file. Or the file is not a regular (non-symlink) file
But if I copy the executable into the VM's local filesystem (say to /tmp/), then it works perfectly fine. Is this a known limitation of sshfs, or am I missing something here?
File capabilities are implemented on Linux with extended attributes (specifically the security.capability attribute), and not all filesystems implement extended attributes.
sshfs in particular does not.
sshfs can only perform operations which the remote user is authorized to perform. You're logged into the remote host as abc, so you can only perform actions over sshfs which abc can perform -- which doesn't include setcap, since that operation can only be performed by root. Using sudo on your local machine doesn't change that.
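You can see the difference for yourself with something like the following sketch (paths taken from the question; exact output varies):
# Fails on the sshfs mount because it has no extended-attribute support:
sudo setcap cap_dac_override+ep /home/vm/test/my_exec

# Works once the file sits on a local filesystem such as /tmp:
cp /home/vm/test/my_exec /tmp/my_exec
sudo setcap cap_dac_override+ep /tmp/my_exec
getcap /tmp/my_exec    # prints the capability that was just set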
I want to mount a folder which is on some other machine onto my Linux server. To do that I am using the following command
mount -t nfs 192.xxx.x.xx:/opt/oracle /
which fails with the following error
mount.nfs: access denied by server while mounting 192.xxx.x.xx:/opt/oracle
Does anyone know what's going on? I am new to Linux.
Depending on what distro you're using, you simply edit the /etc/exports file on the remote machine to export the directories you want, then start your NFS daemon.
Then on the local PC, you mount it using the following command:
mount -t nfs {remote_pc_address}:/remote/dir /some/local/dir
Please try a directory under your home directory instead; as far as I know you can't mount something directly onto / like that.
For more reference, find full configuration steps here.
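For example (a sketch only; the client subnet, NFS service name, and local mount point are placeholders):
# On the remote machine (the NFS server), a hypothetical /etc/exports entry
# exporting /opt/oracle read-write to a client subnet:
/opt/oracle   192.168.1.0/24(rw,sync,no_subtree_check)

# Re-export and make sure the NFS service is running (name varies by distro):
sudo exportfs -ra
sudo systemctl restart nfs-server

# On the local machine, mount onto an existing directory (not /):
sudo mkdir -p /mnt/oracle
sudo mount -t nfs 192.xxx.x.xx:/opt/oracle /mnt/oracle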
I have some questions related to the Linux boot process. The initramfs is the first-stage root filesystem that gets loaded.
The init process inside the initramfs is responsible for mounting the actual root filesystem from the hard disk onto the / directory.
Now my question is: where is the / directory that init (the init process of the initramfs) mounts the actual root partition onto? Is it in RAM or on the hard disk?
Also, once the actual root partition is mounted, what happens to the initramfs?
If the initramfs is deleted from RAM, then what happens to the / folder created by the initramfs?
Can someone please explain how this magic works?
//Allan
What /sbin/init (of the initramfs) does is load the necessary filesystems and kernel modules. It then mounts the targeted real "rootfs" and switches from the initramfs to it, so "/" ends up on the hard disk. That "/" was created when you installed the system and formatted the hard drive. Note that reading a filesystem's contents requires loading the matching module first: if "/" is an ext3 partition, then ext3.ko will be loaded, and so on.
Answer to the second question: after loading the required filesystem modules, it switches from the initramfs's init to the real rootfs's init, the usual boot process starts off, and the initramfs is removed from memory. This switching is done with switch_root (or pivot_root() in the older initrd scheme).
Answer to the third: the initramfs doesn't create any directory on disk; the kernel just loads the existing initramfs.img image into RAM.
So, in short, loading the initramfs or the rootfs isn't about creating any directory, it's about loading existing filesystem images. Right after boot, the initramfs is used to load the filesystem modules that are needed so that the real filesystem can be read. Hope it helps!
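A minimal sketch of what such an /init script inside an initramfs might look like (BusyBox-style; the device name and filesystem module are placeholders):
#!/bin/sh
# Hypothetical minimal /init inside an initramfs (BusyBox-style sketch).
# /dev/sda1 and ext3 are placeholders for the real root device and module.

mount -t proc none /proc
mount -t sysfs none /sys

modprobe ext3                 # load the module needed to read the real rootfs

mkdir -p /newroot
mount /dev/sda1 /newroot      # mount the real root filesystem

# Hand control to the real init; the initramfs contents are freed from RAM.
exec switch_root /newroot /sbin/init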
With initrd there are two options:
Using pivot_root to rotate the final filesystem into position, or
Emptying the root and mounting the final filesystem over it.
More info can be found here.
I'm asking in both contexts: technically and stylistically.
Can my application/daemon keep a pidfile in /opt/my_app/run/?
Is it very bad to do so?
My need is this: my daemon runs under a specific user, and the implementor must mkdir a new directory in /var/run, chown, and chgrp it to make my daemon run. Seems easier to just keep the pidfile local (to the daemon).
I wouldn't put a pidfile under an application installation directory such as /opt/my_app/whatever. This directory could be mounted read-only, could be shared between machines, could be watched by a daemon that treats any change there as a possible break-in attempt…
The normal location for pidfiles is /var/run. Most unices will clean this directory on boot; under Ubuntu this is achieved by /var/run being an in-memory filesystem (tmpfs).
If you start your daemon from a script that's running as root, have it create a subdirectory /var/run/gmooredaemon and chown it to the daemon-running user before switching to that user (e.g. with su) and starting the daemon (see the sketch at the end of this answer).
On many modern Linux systems, if you start the daemon from a script or launcher that isn't running as root, you can put the pidfile in /run/user/$UID, which is a per-user equivalent of the traditional /var/run. Note that the root part of the launcher, or a boot script running as root, needs to create the directory (for a human user, the directory is created when the user logs in).
Otherwise, pick a location under /tmp or /var/tmp, but this introduces additional complexity because the pidfile's name can't be uniquely determined if it's in a world-writable directory.
In any case, make it easy (command-line option, plus perhaps a compile-time option) for the distributor or administrator to change the pidfile location.
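A minimal sketch of such a root launcher, where "mydaemon", "daemonuser" and the --pidfile option are placeholder names for your own daemon:
#!/bin/sh
# Hypothetical launcher sketch, run as root: prepare a /var/run subdirectory
# for the unprivileged daemon user, then drop privileges and start the daemon.
# "mydaemon", "daemonuser" and --pidfile are placeholders.

PIDDIR=/var/run/mydaemon

mkdir -p "$PIDDIR"
chown daemonuser:daemonuser "$PIDDIR"

# Start the daemon as the unprivileged user, telling it where its pidfile goes.
su -s /bin/sh -c "/opt/my_app/bin/mydaemon --pidfile $PIDDIR/mydaemon.pid" daemonuser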
The location of the pid file should be configurable. /var/run is standard for pid files, the same as /var/log is standard for logs. But your daemon should allow you to override this setting in some config file.
/opt is used to install 'self-contained' applications, so nothing wrong here. Using /opt/my_app/etc/ for config files, /opt/my_app/log/ for logs and so on - common practice for this kind of application.
This way you can distribute your application as a TGZ file instead of maintaining a package for every package manager (at least DEB, since you tagged ubuntu). I would recommend this for in-house applications or situations where you have great control over the environment. The reasoning is that it makes no sense for the safe to cost more than what you put inside it (the work required to package the application should not eclipse the effort required to write it).
Another convention, if you're not running the script as root, is to put the pidfile in ~/.my_app/my_app.pid. It's simpler this way while still being secure as the home directory is not world-writeable.