Running gvfs after building - gnome

I am trying to run a local build of gvfs. I have followed the Newcomers document to set up a working build environment, built gvfs from sources and am now trying to figure out how to run it.
The docs have instructions on running applications or the GNOME shell, which say I need to kill the current instance, then launch the newly-built binary with jhbuild run, as in:
$ killall gnome-weather
$ jhbuild run gnome-weather
or, in the case of the shell,
$ jhbuild run gnome-shell --replace
For gvfs, I see that it spawns a bunch of processes (all children of PID 1, running under my account), the first of them (lowest PID) being gvfsd. So I tried the following:
$ killall gvfsd
$ jhbuild run gvfs
Which gives me the error message:
jhbuild run: Unable to execute the command 'gvfs': [Errno 2] No such file or directory
If instead I try
$ jhbuild run gvfsd
I get the same message, and likewise when I try either of the above two with --replace.
Since gvfs is a daemon rather than an application, I searched around a bit and came across this post, which suggests launching daemons with
jhbuild run dbus-launch --exit-with-session name-of-daemon
No joy either... no matter whether I use gvfs or gvfsd for the name, I get the error message
Couldn't exec gvfs: No such file or directory
(reporting the name I specified in the command).
Is this the correct way to launch gvfs at all? If not, what is? If it is, how can I find out what's going wrong?
EDIT: Apparently, the code I intend to modify is part of the gvfs-mtp-volume-monitor binary – but essentially the same question applies here. How do I launch my own version of the binary rather than the one that came with my OS distro?

jhbuild run can be used for gvfs in the same manner.
For gvfsd do the following:
jhbuild run ~/jhbuild/install/libexec/gvfsd -r
The -r switch tells gvfsd to replace any running version. gvfsd will also start gvfsd-fuse if it was built and you didn't disable it via a command-line switch.
You will also need to replace any volume monitors (and other processes you need), such as:
killall gvfs-mtp-volume-monitor
jhbuild run ~/jhbuild/install/libexec/gvfs-mtp-volume-monitor
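To double-check that the monitor now running really is your build rather than the distro one, something like this works (plain ps; the -C argument must match the binary name exactly):
ps -o pid,args -C gvfs-mtp-volume-monitor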
Care must be taken with anything that is invoked over dbus:
Namespaces may change between versions. If that happened between the version shipped with your OS and the current one, the latter will not work unless you tweak your dbus config to reflect that.
If dbus is used to spawn processes, it will fall back to the binaries shipped with your OS. Again you would need to modify your dbus config (specifically .service entries) to point to your binaries.
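As a sketch of the second point: gvfsd is normally D-Bus activated under the name org.gtk.vfs.Daemon, so a session service file along these lines would point activation at your build. The location and install prefix below are assumptions; check the .service files your distro ships under /usr/share/dbus-1/services/ for the real names.
# ~/.local/share/dbus-1/services/org.gtk.vfs.Daemon.service (assumed location)
[D-BUS Service]
Name=org.gtk.vfs.Daemon
Exec=/home/you/jhbuild/install/libexec/gvfsd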

Related

WSL2 distro shell can't launch a file copied from outside

The situation in short
I can't launch an executable (binary or a script) in a WSL2 distro if it wasn't created inside this distro
I can launch scripts and binaries that were created inside the distro shell (not using /mnt/c or /mnt/d in any way)
But I can't launch anything that was created outside and copied inside from Windows (using /mnt/c or /mnt/d)
I can see the copied files in the file system, can "cat" them, can look them up with "which", but I cannot launch them by entering the path into the command line
The questions I have in regards to all this
How come the shell can't see the files while utilities you run from the shell can?
How do I make the shell see files that were copied from outside?
If I can't make the shell launch the files, then how do I launch them?
The situation in detail
I have Windows 10 with WSL2 and two distros
Ubuntu-20.04
Alpine
In Ubuntu I have a "Hello, World!" project written in C
It compiles and runs in Ubuntu just fine
But, when I copy it from Ubuntu to Windows
cp hello /mnt/d/
and then go to Alpine and copy it inside from Windows
cp /mnt/d/hello .
I then have trouble launching it inside Alpine
Here is the output of the file hello command in Ubuntu, with some extra formatting (just in case)
$ file hello
hello:
ELF 64-bit LSB shared object,
x86-64,
version 1 (SYSV),
dynamically linked,
interpreter /lib64/ld-linux-x86-64.so.2,
BuildID[sha1]=021352ab7bf244e340c3c42ce34225b74baa6618,
for GNU/Linux 3.2.0,
not stripped
Here's what I have in Alpine
$ cp /mnt/d/hello .
$ ls -l
-rwxr-xr-x 1 pavel pavel 16760 Apr 19 19:07 hello
$ ./hello
-ash: ./hello: not found
Now same with a script copied from Windows
Copy the script inside Alpine from Windows
$ cp /mnt/d/hello.sh .
Checking the contents
$ cat hello.sh
#!/bin/ash
echo Hello!
Setting the execute permission just in case
$ chmod agu+x hello.sh
Trying to run it
$ ./hello.sh
-ash: ./hello.sh: not found
But, I can launch the hello.sh by explicitly calling the ash tool and passing the script path as the argument
$ ash ./hello.sh
Hello!
At the same time, a script created inside Alpine runs just by entering its path on the command line
$ cat << EOF > hello-local.sh
> #!/bin/ash
> echo Local hello!
> EOF
$ chmod agu+x hello-local.sh
$ ./hello-local.sh
Local hello!
Also, I couldn't make a file that would run out of one that wouldn't, either by copying it with cp
cp hello.sh hello2.sh
or by copying it with cat
cat hello.sh > hello3.sh
chmod agu+x hello3.sh
Why do I need to copy things from outside
It all started when I wanted to explore how Docker for Windows uses Linux namespaces to separate containers
The distro that Docker for Windows uses is called docker-desktop
The docker-desktop distro neither has utilities that I need for my experiments, nor a package manager to get those utilities
So I tried to copy them from outside
But now my Docker for Windows studies are not the only concern
I want to understand this magic that is happening just as badly
To be fair, there really are three separate questions here, but not necessarily the questions you listed in your post:
Secondary question -- Why does your script that you copied to Alpine fail?
As @MarkPlotnick covered in the comments (and you confirmed), it was due to the script having DOS/Windows line endings (CRLF). In general, try to avoid creating or editing Linux text files using Windows tools unless you are sure they are using Linux line endings.
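If you suspect line endings, a quick check and fix looks like this (assuming the file utility is installed; busybox sed on Alpine supports -i as well). The "not found" error is often a mangled "#!/bin/ash" interpreter path with a trailing carriage return:
file hello.sh               # reports "CRLF line terminators" if affected
sed -i 's/\r$//' hello.sh   # strip the carriage returns
./hello.sh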
Secondary question -- Why does your C program fail when you compile on Ubuntu and copy the binary to Alpine?
Also as @MarkPlotnick mentioned in the comments, this is because Ubuntu uses glibc as the standard library implementation by default, but Alpine uses musl. See a number of questions here for more information. The first one in the list sorted by "relevance" is actually a pretty good one to start with.
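If you just need the one binary to run on both, a common workaround (a sketch, not specific to this question; the source file name is assumed) is to link it statically on Ubuntu so it carries no glibc dependency at runtime:
gcc -static -o hello hello.c
file hello    # should now report "statically linked"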
Main question -- How to explore the docker-desktop distro
Really, your main goal seems to be how to gain access to certain tools inside the docker-desktop distro in order to learn more about it.
I was going to say, "don't" (with more explanation), but the reality is that I think it's a potentially good learning experience. I've done it, to some degree, so who am I to say it's "too dangerous" or recommend against it? ;-)
I will give fair warning, though -- The docker-desktop distro isn't intended to be run by users. Docker Desktop "injects" links and sockets into your other WSL2 distros (which you can enable/disable per-distro in Docker Desktop) so that its tools, processes, etc., are available to all your WSL2 (and PowerShell/CMD) instances.
I'd personally try to avoid making any changes to the docker-desktop distro itself. They'll likely be overwritten anyway by Docker Desktop when it extracts a new rootfs.
However, we can still gain access to the tools we need by accessing them from another distribution, but without copying them into docker-desktop.
First, a note -- As I think you have probably already figured out, docker-desktop is also musl-based. So you'll want to use tools from another musl-based distro like Alpine.
This can be easily accomplished by running the following line once in your Alpine instance (as root):
echo "/ /mnt/wsl/instances/Alpine none defaults,bind,X-mount.mkdir 0 0" >> /etc/fstab
That will bind-mount the Alpine instance's root filesystem into the tmpfs /mnt/wsl mount. You can see my Super User answer here for more details on that.
Once you wsl --terminate Alpine and restart it, you'll have access to the Alpine files from any other WSL2 distribution.
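For example (the distro name must match exactly what wsl -l shows; the first command runs from PowerShell/CMD, the second from inside any other WSL2 distro):
wsl --terminate Alpine
ls /mnt/wsl/instances/Alpine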
As a useful (for your intent) example, install the util-linux package in Alpine to get access to the lsns command.
Then, in the docker-desktop distro (which I assume you already know to access with wsl -u root -d docker-desktop, but I'll include that command here for other future readers), to list the namespaces:
/mnt/host/wsl/instances/Alpine/usr/bin/lsns
The docker-desktop instance automounts at a slightly different directory than default (see cat /etc/wsl.conf), so you need to adjust the path to /mnt/host/wsl instead of /mnt/wsl.
But with that in place, you can run all (most?) of your Alpine binaries directly in docker-desktop without having to modify it directly. If you have a script in your home directory that you want to run in docker-desktop, for instance:
/mnt/host/wsl/instances/Alpine/home/users/<yourusername>/hello.sh
Note that if you have a binary that requires a dynamically-linked library on Alpine, I'm assuming you'll need to adjust your LD_LIBRARY_PATH accordingly, although I haven't tested. For instance:
LD_LIBRARY_PATH=/mnt/host/wsl/instances/Alpine/usr/lib /mnt/host/wsl/instances/Alpine/usr/bin/<whatever>

Make chosen version of Elasticsearch run as a service in Linux

I have an issue with later versions of ES, so have to use 7.10.2 currently.
This means that the previous method I used to install ES as a service, i.e. apt-get, doesn't work: you can't choose an older version this way, and it currently installs 7.16.3.
So I followed the procedure on this page for 7.10, and everything worked: I was able to run ES as an app and also as a "daemon". Clearly I could simply put the "daemon" startup line in a script which runs on boot.
But what's the optimum way of turning this "daemon arrangement" into a service which you can control with systemctl, and which starts automatically when the machine boots?
PS I don't want to get involved with Docker. I'm sure that's a useful thing but I'm convinced there is a simpler way of doing it, using available Linux sys tools.
I found a workaround... this doesn't in fact create a service of the "systemd" type which can be controlled by systemctl. There seem to be one or two problems which make this non-trivial.
1) You can't start ES as root! I assume (though I'm not sure) that most services are run by root. Anyway, this was something I couldn't find a solution to.
2) I am not sure whether a shell script file called by a service is allowed to end, or should continue endlessly: initially I thought this would be sufficient. This is a shell script (run_es_daemon.sh) which does indeed start up ES (as a daemon process) when run manually in a terminal. There is no issue with the fact that the script ends and you then close the terminal: the daemon process continues to run:
#!/bin/bash
# start ES as a daemon...
cd /home/mike/Elasticsearch/elasticsearch-7.10.2
./bin/elasticsearch -d -p pid
... but it never worked using a xxx.service file in /etc/systemd/system/ (maybe because of 1) above). So I also tried adding these lines under the above ones:
while true
do
echo "bubbles"
sleep 60
done
... didn't work either.
In the end I found a simple workaround solution was to start up the daemon process by using crontab:
@reboot /home/mike/sysadmin/run_es_daemon.sh
... but I'd still like to know how to set it up as a true service, which starts at boot...
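For the record, a minimal unit along these lines should turn the same setup into a real systemd service. This is a sketch based purely on the paths and user from the question, so adjust as needed: the User= line addresses problem 1 (ES refuses to run as root), and Type=forking with -d plus a PID file addresses problem 2 (systemd tracks the daemon, not the launching script).
# /etc/systemd/system/elasticsearch.service (sketch)
[Unit]
Description=Elasticsearch 7.10.2 (manual install)
After=network.target

[Service]
Type=forking
User=mike
WorkingDirectory=/home/mike/Elasticsearch/elasticsearch-7.10.2
ExecStart=/home/mike/Elasticsearch/elasticsearch-7.10.2/bin/elasticsearch -d -p /home/mike/Elasticsearch/elasticsearch-7.10.2/pid
PIDFile=/home/mike/Elasticsearch/elasticsearch-7.10.2/pid
LimitNOFILE=65535
TimeoutStartSec=180

[Install]
WantedBy=multi-user.target
After saving it, run systemctl daemon-reload and systemctl enable elasticsearch.service as root, and it should start at every boot (systemctl start elasticsearch.service starts it immediately).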

Why is my script to start UWSGI not functioning at bootup?

I wonder if you can help.
I am running the following versions:
OS: SMP Debian 3.2.81-1 x86_64
uWSGI: uWSGI 2.0.11.2
I installed uWSGI manually, as I want to use a specific version, using the following commands:
apt-get install build-essential psmisc python-dev libxml2 libxml2-dev python-setuptools
cd /opt/
wget http://projects.unbit.it/downloads/uwsgi-2.0.11.2.tar.gz
tar -zxvf uwsgi-2.0.11.2.tar.gz
mv uwsgi-2.0.11.2/ uwsgi/
cd uwsgi/
python setup.py install
I am trying to replicate the setup on another server that the project is already working on in a live environment (I am essentially setting up a test server environment).
The original server has uWSGI running on boot. To figure out how this is happening, I used
htop
I've been able to identify that uWSGI is running on the existing server with a set of command line switches. I've managed to track down the script that initialises uWSGI with these switches in the init.d folder.
I copied this script to my test server, and ran it using
service script.sh start
After various troubleshooting, mainly involving permissions on socket folders etc, now when I run this script it starts, and if I run htop I can see uWSGI is running and it has the exact same command switches I need.
I thought simply putting the script in init.d and giving it execute permission
chmod +x script.sh
Would be enough so that it starts when the server is switched on... but this appears not to be the case, because when I issue
reboot
At the terminal, the terminal reboots, but when I go into htop and check for the uWSGI process, it is not running.
If however directly after reboot I issue the following command
service script.sh start
The service starts just fine, and I can once again see it in htop.
Research online led me to the suggestion that I should try to set the script to run automatically using chkconfig. I installed chkconfig using
apt-get install chkconfig
and then ran the following command
chkconfig --list
I noticed that all the runlevels were set to off for the script I am trying to get to execute on boot.
I ran the following command
chkconfig /etc/init.d/script.sh on
And now when I check the script runtime switches with chkconfig, it shows me the following output for my script:
script.sh 0:off 1:off 2:on 3:on 4:on 5:on 6:off
However when I reboot the uWSGI process is still not starting.
Yet if I simply type
service script.sh start
At the terminal the service runs ok, and uWSGI runs fine.
How can I set the script to run when the server restarts?
Edit:
Further research on the live server that is working fine has determined that it does not appear to be using systemd to launch uWSGI on startup. I logged into the live server and while there is a
/etc/systemd
folder, it has just one folder in it, system, and no files. The system folder has the following files in it:
multi-user.target.wants sockets.target.wants syslog.service
So there does not appear to be anything uWSGI related in here.
Also what is making me think this is likely something to do with the
/etc/init.d
folder is that when I run htop and examine the running services (or daemons; not quite sure of the correct terminology in Linux), uWSGI shows up as running with a signature of command-line switches, and the script I found in /etc/init.d has this exact uWSGI command and the same signature of switches. So I'm fairly convinced this is the part of the system that is starting the uWSGI daemon; I just can't figure out what I need to do to get it to run, apart from copying the same file to /etc/init.d on the new server and giving it execute permission.
The OS of the live server is :
SMP Debian 3.2.73-2+deb7u1 x86_64
and the OS I am running on the new server is
SMP Debian 3.2.81-1 x86_64
So they seem fairly similar? Although I'm not sure how significant the 8 increments in the least significant digits of the version number are.
On the new server there is no /etc/systemd folder, and on the live server there is a /etc/systemd as explained above. So it does appear to have been installed separately from the main OS install (as I have a later version of Debian and it wasn't installed on my system by default) - so perhaps there is something related to systemd that is causing the script to start on the live server, but I'm not too sure.
Jessie
In recent Debian (Jessie) the SysV init scripts do not work the way they used to. And given your kernel version, you are not running a Debian that uses SysV init scripts. The current Debian uses systemd, and scripts in /etc/init.d are run by compatibility features of systemd (the service command is now a systemd command that tries to behave like the old SysV one).
You have two options:
Add a line calling the script from /etc/rc.local:
/etc/init.d/script.sh
This is a rather dirty fix, since it depends on another compatibility feature of systemd. Also, the location of the script does not matter anymore.
Write a full systemd service for uwsgi (this is what I do, and what is recommended by the uwsgi documentation). You would need to create a file called /etc/systemd/system/uwsgi.service with content similar to:
[Unit]
Description=uwsgi emperor
After=rsyslog.service
[Service]
PIDFile=/run/uwsgi-emperor.pid
ExecStart=/bin/uwsgi --ini /etc/uwsgi/emperor.ini
ExecReload=/bin/uwsgi --reload /run/uwsgi-emperor.pid
ExecStop=/bin/uwsgi --stop /run/uwsgi-emperor.pid
Restart=always
KillSignal=SIGQUIT
Type=notify
StandardError=syslog
NotifyAccess=all
[Install]
WantedBy=multi-user.target
I use the emperor mode (which is also the mode recommended by uwsgi for use with systemd), although it is possible to hack it to run a single process uwsgi (see further reading below).
You will also need to enable the service to be used by the multi-user.target, which will run at boot. You need to perform this as root:
systemctl enable uwsgi.service
And uwsgi will start with the next boot (it will not start straight away; to start it immediately you need systemctl start uwsgi.service).
Further reading:
The Arch Linux wiki about systemd is very thorough
The Debian wiki on systemd is good, but outdated in some places (notably, it tells you that you need to install it which is not the case in Jessie)
Wheezy
You're mixing things up a little there: chkconfig is a tool from the RedHat family of OSes. Making it work on Debian was not easy in the past, and I do not believe it is easy to do now.
Wheezy still uses the SysV init rc folders, one per runlevel:
/etc/rc0.d/
/etc/rc1.d/
/etc/rc2.d/
/etc/rc3.d/
/etc/rc4.d/
/etc/rc5.d/
/etc/rc6.d/
You can check the runlevel you are in with the (appropriately named) runlevel command. Then you need to check whether there is a symlink to the script in the correct /etc/rc*.d folder. If there is no symlink to the script, you need to add it with something along the lines of:
ln -s /etc/init.d/script.sh /etc/rc$(runlevel | cut -d ' ' -f 2).d/S99script.sh
And that is almost all there is to how SysV init scripts work. If you are going into runlevel 2 when the machine boots (I believe that's the default on Debian), init simply runs <script> start for every S* link in /etc/rc2.d.
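Rather than creating those links by hand, Debian's native counterpart to chkconfig is update-rc.d; assuming the script already lives in /etc/init.d, this creates the standard start/stop links in one go:
update-rc.d script.sh defaults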

Starting a program in a chroot environment returns immediately

I am working in a virtual environment, trying to start open-vm-tools in a chroot environment.
I tested with bash and it seems to work fine.
I used ./configure --options --prefix=/home/chroot_env to install the program; then, using ldd on vmtoolsd, I copied the corresponding libraries to the /lib directory.
Now when I start chroot /home/chroot_env /bin/vmtoolsd, nothing happens; the chroot returns immediately. Launching the same binary in the normal environment does work.
Does someone have an idea why it isn't working? The correct libraries are there, and it works with bash.
EDIT: strace showed that vmtoolsd is trying to access /dev/console, so I added mount --bind /dev/ /home/chroot_env/dev/ but it is still failing.
EDIT2: another strace showed it was looking for another dynamically loaded plugin; I added it and it worked. Conclusion: strace is great for debugging such issues!
When you run a program and nothing happens, you can always run it with strace in order to see which syscalls are made. This is an easy way to obtain the list of the files (regular or not) that are opened. In your case, check that your program doesn't try to access a file that is not in the chroot.
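A minimal invocation looks like this (-f follows child processes, -e limits the output to the interesting syscalls):
strace -f -e trace=open,openat,execve chroot /home/chroot_env /bin/vmtoolsd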

gdb appears to ignore executable capabilities

I am debugging a program that makes use of libnetfilter_queue. The documentation states that a userspace queue-handling application needs the CAP_NET_ADMIN capability to function. I have done this using the setcap utility as follows:
$ sudo setcap cap_net_raw,cap_net_admin=eip ./a.out
I have verified that the capabilities are applied correctly as a) the program works and b) getcap returns the following output:
$ getcap ./a.out
./a.out = cap_net_admin,cap_net_raw+eip
However, when I attempt to debug this program using gdb (e.g. $ gdb ./a.out) from the command line, it fails on account of not having the correct permissions set. The debugging functionality of gdb works perfectly otherwise and debugs as per normal.
I have even attempted to apply these capabilities to the gdb binary itself, to no avail. I did this as it seemed (as documented by the manpages) that the "i" flag might allow the debuggee to inherit the capability from the debugger.
Is there something trivial I am missing or can this really not be done?
I ran into the same problem, and at the beginning I thought the same as above: that maybe gdb ignores the executable's capabilities for security reasons. However, after reading the source code, and even using Eclipse to debug gdb itself while it was debugging my ext2fs-prog (which opens /dev/sda1), I realized that:
gdb is not special; like any other program, it obeys the same rules. (Just like in The Matrix: even the agents obey the same physical laws, gravity etc., except that they are all door-keepers.)
gdb is not the parent process of the debugged executable; it is the grandparent.
The true parent process of the debugged executable is the shell, i.e. /bin/bash in my case.
So the solution is very simple: apart from adding cap_net_admin,cap_net_raw+eip to gdb, you also have to apply it to your shell, i.e. setcap cap_net_admin,cap_net_raw+eip /bin/bash
The reason you also have to do this to gdb is that gdb is the parent process of /bin/bash, which in turn creates the debugged process.
The true command line inside gdb is like the following:
/bin/bash exec /my/executable/program/path
And this is the parameter to vfork inside gdb.
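Putting it together, the fix is two setcap calls (a sketch; adjust the gdb path for your system, and note the /bin/bash one affects every shell on the system, which can have side effects, as a later answer notes):
sudo setcap cap_net_admin,cap_net_raw+eip /usr/bin/gdb
sudo setcap cap_net_admin,cap_net_raw+eip /bin/bash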
For those who have the same problem, you can bypass this by executing gdb with sudo.
A while ago I did run into the same problem. My guess is that running the debugged program with the additional capabilities is a security issue.
Your program has more privileges than the user that runs it. With a debugger, a user can manipulate the execution of the program. So if the program runs under the debugger with the extra privileges, the user could use those privileges for purposes other than those the program intended. This would be a serious security hole, because the user does not have the privileges in the first place.
For those running GDB through an IDE, sudo-ing GDB (as in @Stéphane J.'s answer) may not be possible. In this case, you can run:
sudo gdbserver localhost:12345 /path/to/application
and then attach your IDE's GDB instance to that (local) GDBServer.
In the case of Eclipse CDT, this means making a new 'C/C++ Remote Application' debug configuration, then under the Debugger > Connection tab, entering TCP / localhost / 12345 (or whatever port you chose above). This lets you debug within Eclipse, whilst your application has privileged access.
I used @NickHuang's solution until, with one of the system updates, it broke systemd services (too many capabilities on bash for systemd to start it, or some such). I switched to leaving bash alone and instead passing a command to gdb to invoke the executable directly. The command is
set startup-with-shell off
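You can put that in ~/.gdbinit, or pass it on the command line so gdb execs the target directly and the kernel honors its file capabilities:
gdb -iex "set startup-with-shell off" ./a.out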
OK, so I struggled a bit with this so I thought I'd combine answers and summarise.
The easy solution is just to sudo gdb as suggested but just be a bit careful. What you're doing here is running the debugged program as root. This may well cause it to operate differently than when you run it from the command line as a normal user. Could be a bit confusing. Not that I would EVER fall into this trap... Oopsies.
This will be fine if you're running the debugged program as root with sudo OR if the debugged program has the setuid bit set. But if the debugged program is running with POSIX capabilities (setcap / getcap) then you need to mirror these more granular permissions in bash and gdb as Nick Huang suggested rather than just brute forcing permissions with 'sudo'.
Doing anything else may lead you to a bad place of extreme learning.
