Gio Mount Returns Different Outputs - linux

I'm trying to write a Python script that mounts drives using the Gio module; however, when I add the script to crontab or run it as a service, I only get the filesystem root.
In a shell,
gio mount -l
returns every mountable drive and volume. However, when I run
sudo gio mount -l
or
sudo -u myuser gio mount -l
I only get the filesystem root and the floppy drive.
The difference I noticed is that sudo (or my script run automatically by the system) returns the volumes with type GUnixVolume, while a plain gio mount -l returns type GProxyDrive.
So what is the difference, and how can I detect external drives when my script is run by the system?
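The split is most likely a D-Bus issue: GUnixVolume entries come from GLib's native volume monitor, while GProxyDrive entries are provided by the GVfs volume monitors, which live on the user's D-Bus session bus. A cron job, a system service, or a sudo shell typically has no DBUS_SESSION_BUS_ADDRESS in its environment, so gio silently falls back to the native monitor. A minimal sketch, assuming a systemd-based distribution where myuser has an active login session (the wrapper is illustrative, not taken from the question):
#!/bin/bash
# Hypothetical wrapper for the cron job or service: point gio at myuser's
# existing session bus so the GVfs volume monitors (the source of the
# GProxyDrive entries) are reachable.
uid=$(id -u myuser)
export XDG_RUNTIME_DIR="/run/user/$uid"
export DBUS_SESSION_BUS_ADDRESS="unix:path=$XDG_RUNTIME_DIR/bus"
gio mount -l   # should now list GProxyDrive volumes like an interactive shell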

Related

Blob Storage permanent mounting in Red Hat Linux

I have a Linux server where I have mounted blob storage, but it is a temporary mount; every time I restart the machine I have to run the command below manually:
sudo blobfuse /sfp/publicstorage134/blobstorage123 --tmp-path=/mnt/rec/mountpath --config-file=/user1/connection_sf.cfg -o attr_timeout=180 -o entry_timeout=120 -o negative_timeout=180 -o allow_other
How can I make this storage mount permanent instead of running this command after every restart? Is it possible to put this in /etc/fstab?
The recommendation is to create a script, such as mount.sh (a sketch follows below), or to add blobfuse directly to /etc/fstab.
Add the following line to use mount.sh:
/<path_to_blobfuse>/mount.sh </path/to/desired/mountpoint> fuse _netdev
OR
Add the following line to run without mount.sh:
blobfuse /home/azureuser/mntblobfuse fuse delay_connect,defaults,_netdev,--tmp-path=/home/azureuser/tmppath,--config-file=/home/azureuser/connection.cfg,--log-level=LOG_DEBUG,allow_other 0 0
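A minimal sketch of such a mount.sh wrapper, assuming (as in the fstab wrapper approach above) that mount passes the desired mount point as the first argument; the options are copied from the manual command in the question, and the paths are placeholders:
#!/bin/bash
# Hypothetical mount.sh: $1 is the mount point handed over by mount(8).
blobfuse "$1" --tmp-path=/mnt/rec/mountpath \
    --config-file=/user1/connection_sf.cfg \
    -o attr_timeout=180 -o entry_timeout=120 -o negative_timeout=180 \
    -o allow_other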

How to mount /proc in a Docker container

Why can't I mount the /proc device from the container during the build process?
If I run docker build -t test . with this Dockerfile:
FROM debian:stable-slim
RUN bash -c 'ls {/proc,/dev,/sys}'
I can see that all special devices are populated. But if I try this Dockerfile:
FROM debian:stable-slim
RUN bash -c 'ls {/proc,/dev,/sys}'
RUN mount --bind /proc /mnt
I get the following error:
mount: /mnt: permission denied.
The command '/bin/sh -c mount --bind /proc /mnt' returned a non-zero code: 32
I know it's possible to use --privileged mode with docker run, but my goal is not to access the host's /proc. I just want to bind-mount the container's /proc into a file system that I'm generating inside the container with debootstrap, so that I can install some packages, specifically default-jre.
My Docker Version: 20.10.8
EDIT
My goal is to create a custom live-cd like here, so I can't use the container's base OS.
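One possibility, offered as an assumption rather than something from the question: newer BuildKit versions support privileged RUN steps as an opt-in. With a buildx builder created with the security.insecure entitlement allowed, a sketch could look like this:
# syntax=docker/dockerfile:1-labs
FROM debian:stable-slim
# This step runs without the usual build sandbox restrictions, so the bind
# mount is permitted. Build with: docker buildx build --allow security.insecure .
RUN --security=insecure mount --bind /proc /mnt
Note that the default builder refuses this; the builder must be created with the insecure entitlement enabled (e.g. via --buildkitd-flags '--allow-insecure-entitlement security.insecure').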

Why won't mount.cifs use my credential file?

I have a script that needs to mount a Windows share to a Linux box, run a script, then unmount it. Despite following the man page for mount.cifs, the command fails to recognize the credential file.
I made sure file sharing packages were present:
sudo yum install samba-client samba-common cifs-utils
Created the directory that the network share will mount to
sudo mkdir /share/
Created the credential file
sudo vim /root/.cifs
.cifs file contents
username=uname
password=pword
Created my .sh file
sudo vim /usr/bin/script.sh
script.sh contents
#!/bin/bash
mount.cifs //ipaddress/share /share/ -o credentials=/root/.cifs
<script which makes use of the share>
umount /share/
Made the script executable
sudo chmod u+x /usr/bin/script.sh
Tested script
cd /usr/bin
sudo ./script.sh
Despite having the credential file specified, I am still prompted for a password for the root user (connecting to a Windows share that has no "root" user).
Output from running the script:
Password for root@//ipaddress/share:
Can anyone figure out what I have done wrong? It seems consistent with all documentation I have read.
For some reason, modifying the script to the following worked:
mount -t cifs -o credentials=/root/.cifs //ipaddress/share /share/
cd /share/
./script.sh
umount /share/
Not sure why, since mount -t cifs just invokes mount.cifs, but if you are experiencing the same issue, that's how I finally got around it.
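If anyone hits the same prompt, two quick checks on the credential file are worth a try first (an assumption on my part, since stray whitespace or Windows line endings in the file are a classic way for the parsed username/password to come out wrong):
# Show invisible characters; a trailing ^M means CRLF (Windows) line endings.
sudo cat -A /root/.cifs
# The file holds a plaintext password, so keep it readable by root only.
sudo chmod 600 /root/.cifs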

mount.cifs takes too long due to a chown for each file

I need to run an application on a VM, where I do my setup in a script that is run as root when the machine is built.
In this script I would like to mount a Windows file system, so I am using CIFS.
I put the following in fstab:
//win/dir /my/dir cifs noserverino,ro,uid=1002,gid=1002,credentials=/root/.secret 0 0
After this, still in the same script, I try to mount it:
mount /my/dir
That results in two lines of output for each file:
chown: changing ownership of `/my/dir/afile': Read-only file system
Because I have a lot of files, this takes forever...
With the same fstab, I asked an admin to mount the same directory manually:
sudo mount /my/dir
This is very quick, with NO extra output.
I assume the difference in behavior is due to the fact that the script is run as root.
Any idea how to avoid the issue while keeping the script run as root? (This is not under my control.)
Cheers.
Renaud
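Since the mount itself never changes ownership on the server (the uid=/gid= options only change what ownership the client reports), the per-file chown messages presumably come from whatever the build tooling runs around the mount, not from mount.cifs. A hedged check, assuming GNU stat is available (afile is the example file from the output above):
# Mount exactly as fstab specifies, then inspect a file's numeric owner;
# the uid=/gid= options should already report 1002:1002 without any chown.
mount /my/dir
stat -c '%u:%g %n' /my/dir/afile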

Not able to run shell script in busybox without mounting procfs

I am trying to run a shell script in a BusyBox rootfs with Linux kernel version 4.4.4. The test script tries to mount procfs:
#!/bin/sh
mount -t proc none /proc
I can run this script with sh test.sh, but if I try to run it with ./test.sh, it says /bin/sh test.sh not found. The strange thing is that after mounting procfs manually,
mount -t proc none /proc
I can run ./test.sh. For BusyBox I am using the default config with static linking enabled.
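One possible explanation, offered as an assumption rather than a confirmed diagnosis: if the BusyBox build has CONFIG_FEATURE_SH_STANDALONE enabled, applets are re-executed through /proc/self/exe, so starting a script via its #!/bin/sh shebang can fail until procfs is mounted. If that is the cause, mounting proc as the very first init step sidesteps the chicken-and-egg problem:
#!/bin/sh
# Early-init sketch (e.g. /etc/init.d/rcS): mount procfs before any other
# script is launched via its shebang.
mount -t proc proc /proc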
