Plex Media Server and encFS - Linux

I'm trying to spin up Plex Media Server in Docker, and I want to pass my media in as a volume, but encrypted. Flow:
1. Mount a volume from external storage on the underlying host.
2. Mount the volume into the Docker container as a volume.
3. Mount the encrypted data with encFS inside the container.
4. Access the data in Plex.
5. Enjoy your media.
The issue is that:
mount shows:
encfs on /media type fuse.encfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions)
The data is readable at the Unix level, BUT it is not readable by Plex (it shows up as an empty folder only).
I suspect encFS or Plex itself does not support FUSE mounts...
Any ideas? Any flags for mounting? Any way to change the mount type (it can be another "proxy" container)?

I haven't used encFS; however, in case it helps or you're not aware, the default Plex user (usually 'plex') must be an owner of, or in a group that is assigned to, the media files. In addition, if Plex is showing an empty folder, it may simply be that the folder does not have read, write AND execute set, i.e. chmod 775 (the folder needs the execute bit in order to list its contents, which is why 664 won't work). I wrote a guide over on Tech-KnowHow for this yesterday, which outlines a few ways to get this done; if you need any help with it, just leave me a comment and I'll see what I can do while it's all still fresh in my mind.
Good luck!
Direct link: https://www.tech-knowhow.com/2016/03/how-to-plex-permissions-linux/
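In practice, that permission check plus the FUSE allow_other option covers the usual causes of the "empty folder" symptom: by default a FUSE mount made by root is only visible to the mounting user, so Plex (running as plex) may see nothing even though root can read the files. A sketch, assuming the Plex user is plex, the mount point is /media as in the mount output above, and an illustrative encrypted source path:

# remount encFS so users other than root can see into it (allow_other is passed through to FUSE)
fusermount -u /media
encfs -o allow_other /path/to/encrypted /media

# make sure the plex user owns the media and directories are listable
sudo chown -R plex:plex /media
sudo chmod -R 775 /media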

Related

Docker - Access host /proc

This is a duplicate of a post I have created in the Docker forum, so I am going to close this one or the other once the problem is solved. But since no one answers in the Docker forum and my problem persists, I'll post it again here, looking forward to getting an answer.
I would like to expose a server monitoring app as a Docker container. The app I have written relies on /proc to read system information like CPU utilization or disk stats. Thus I have to forward the information provided in the host's /proc virtual file system to my Docker container.
So I made a simple image (using the first or second intro on the Docker website: Link) and started it:
docker run -v=/proc:/host/proc:ro -d hostfiletest
Assuming the running container could read from /host/proc to obtain information about the host system, I fired up a console inside the container to check:
docker exec -it {one of the funny names the container gets} bash
and checked the content of /host/proc.
The easiest way to check was to read /host/proc/sys/kernel/hostname - that should yield the hostname of the VM I am working on.
But I get the hostname of the container, while /host/proc/uptime gives me the correct uptime of the VM.
Do I miss something here? Maybe something conceptual?
Docker version 17.05.0-ce, build 89658be running on Linux 4.4.0-97-generic (VM)
Update:
I found several articles describing how to run a specific monitoring app inside a container using the same approach I mentioned above.
Update:
Just tried using an existing Ubuntu image - same behavior. Running the image privileged and with pid=host doesn't help.
Greetings
Peepe
The reason for this problem is that /proc is not a normal filesystem. According to procfs, it is an interface for accessing kernel data and system information. This interface provides a file-like structure, which can mislead people into thinking it is a normal directory.
Files in /proc are also not normal files. They are empty (size = 0). You can check for yourself:
$ stat /proc/sys/kernel/hostname
File: /proc/sys/kernel/hostname
Size: 0 Blocks: 0 IO Block: 1024 regular empty file
So the file doesn't hold any data, but when you read it, the kernel dynamically returns the corresponding system information.
To answer your question, /proc/sys/kernel/hostname is just an interface for accessing the hostname. Depending on where you access that interface - on the host or in the container - you will get the corresponding hostname. This also applies when you use the bind mount -v /proc:/host/proc:ro, since a bind mount simply provides an alternative view of /proc. If you call the interface /host/proc/sys/kernel/hostname, the kernel will return the hostname of the box you are in (the container).
In short, think of /proc/sys/kernel/hostname as a mirror: if your host stands in front of it, it reflects the host; if the container does, it reflects the container.
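A quick way to see this from inside the container (paths as in the question, with the host's /proc bind-mounted at /host/proc):

cat /proc/sys/kernel/hostname        # the container's hostname
cat /host/proc/sys/kernel/hostname   # still the container's hostname - the value is resolved in the reader's namespace
cat /host/proc/uptime                # the host's uptime, since uptime is not namespaced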
I know it's a few months later now, but I came across the same problem today.
In my case I was using psutil in Python to read disk stats of the host from inside a Docker container.
The solution was to mount the whole host file system read-only into the Docker container with -v /:/rootfs:ro and to specify the path to proc as psutil.PROCFS_PATH = '/rootfs/proc'.
Now psutil.disk_partitions() lists all partitions from the host file system. As the hostname is also contained within the proc hierarchy, I guess this also works for other host system information, as long as the retrieving command points to /rootfs/proc.
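A minimal terminal sketch of that approach (the python:3 image and the /rootfs mount point are only illustrative; psutil is installed ad hoc for the demo):

docker run --rm -it -v /:/rootfs:ro python:3 bash
# inside the container:
pip install psutil
python -c "import psutil; psutil.PROCFS_PATH = '/rootfs/proc'; print(psutil.disk_partitions())"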

How do you manage permissions when developing in a Docker container?

When developing in a Docker container on Linux, there is a problem with permissions: how to manage file ownership and permissions between the host and the container.
Imagine that I have a Docker image that runs Ubuntu and an Apache server. Using the default settings for (recent versions of) Apache, the document root will be /var/www/html and Apache will be run as the www-data user.
In order to do some development, I expose the document root via Docker with -v /path/to/my/files:/var/www/html. And this is where the problem arises:
The files in /path/to/my/files are owned by the container's www-data user. If I'm lucky and my host has a www-data user, it will be that user; otherwise, it will be a distinct user local to the container. The permissions on those files will (probably) be 0755.
So, when I'm working away as myself (a user called jsmith), those files cannot be edited by me because of incorrect file permissions & ownership.
I could change the ownership of the files to jsmith, but that will cause problems with Apache - it will have difficulty accessing files in the document root.
I could change the permissions to 0777, but any new files I create in the course of my work will be owned by jsmith.
The end result is that it is necessary to constantly adjust ownership & permissions on the development files. Other people must have this problem, but every post I've seen on the topic of using Docker in a development workflow just kind of overlooks this problem.
I do have a solution, but I'm not entirely happy with it:
I set up a folder at /src/myproject. This holds my development files and is owned by www-data:www-data.
Using BindFS, I mount /src/myproject at ~/myproject, mapping www-data:www-data to jsmith:jsmith. This allows me to edit files in ~/myproject without messing around with permissions.
The Apache Docker container mounts the /src/myproject directory with -v /src/myproject:/var/www/html. Apache sees the www-data ownership of the files and has no problems.
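The bindfs mount described above can look roughly like this (a sketch with the paths and users from above; the --map option needs a reasonably recent bindfs):

sudo bindfs --map=www-data/jsmith:@www-data/@jsmith /src/myproject ~/myproject

Files owned by www-data under /src/myproject then show up as owned by jsmith in ~/myproject, and files created there as jsmith are written back as www-data.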
This works well, but seems overly complicated. How do other people solve this problem?
I realize I'm very likely too late but this might be of help to someone.
In your Dockerfile, you could do this:
# map www-data to uid/gid 1000, which is typically the first regular user on the host
RUN usermod -u 1000 www-data
RUN groupmod -g 1000 www-data
This may work in some setups.
I can think of two solutions:
Use a common group id among all developers and images. The uid may end up being numeric in the container, but the gid would give at least read access, and optionally write access, without giving it out globally. Use the setgid bit on the containing directories to have files automatically created with this gid. This isn't the cleanest approach, and may lead to giving out access to other group members, but it may be much easier to manage depending on your organization's workflow.
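A sketch of that first option (the group name developers and the gid 2000 are illustrative):

sudo groupadd -g 2000 developers
sudo chgrp -R developers /src/myproject
sudo chmod -R g+rwX /src/myproject
# setgid on directories so new files inherit the group
sudo find /src/myproject -type d -exec chmod g+s {} +

Each developer, and the user inside the container, is then added to a group with gid 2000.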
The second option is named volumes, which I believe were added after you asked this question. They let you have the data exist with the uid/gid's known to the containers. This has the downside of moving the data into the internal docker directories where it's less easy to manage outside of a container. However, there are microservices approaches that keep the volume synchronized with an outside source (git pull, rsync, etc) using a dedicated container that mounts the same volume. You essentially move all of the reads and writes of the data into containers, including any backups, update routines, and testing scripts.
Update: A third option I often use for development environments is to run an entrypoint script as root that compares the mounted volume's uid/gid to the uid/gid of a user inside the container. If they do not match, the uid/gid of the user inside the container is updated to match the host. This allows developers to reuse the same image across multiple hosts where the uid/gid of each developer may be different on their local machine. The code to do this is included in my bin/fix-perms script that is part of my example base image. The last step of my entrypoint script is to then use gosu to drop from root back to the user, now with the changed uid/gid, and all files written will now match those of the user on the host.
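The idea looks roughly like this (a sketch, not the fix-perms script itself; it assumes gosu is installed in the image and uses an illustrative user and volume path):

#!/bin/sh
# entrypoint.sh - align the container user's uid/gid with the bind-mounted volume, then drop privileges
set -e

APP_USER=www-data      # illustrative
VOLUME=/var/www/html   # illustrative

HOST_UID=$(stat -c %u "$VOLUME")
HOST_GID=$(stat -c %g "$VOLUME")

[ "$(id -u "$APP_USER")" = "$HOST_UID" ] || usermod -u "$HOST_UID" "$APP_USER"
[ "$(id -g "$APP_USER")" = "$HOST_GID" ] || groupmod -g "$HOST_GID" "$APP_USER"

# drop from root to the (possibly re-numbered) app user
exec gosu "$APP_USER" "$@"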
If you happen to be running on MacOS, a recent feature called osxfs automatically corrects for uid/gid mismatches with host volumes.

Subversion (svn) repository on NTFS partition in Linux?

Can I create and use an svn repository on an NTFS partition when working with svn in Linux? That is, repository on the NTFS partition and checkouts and commits to and from an EXT4 partition.
I realize that NTFS support in Linux is limited and that, for example, permissions and symbolic links are not supported. Would that, or any other limitation, cause any issues?
The reason I am asking is because I am thinking about either 1) moving my repository to my Dropbox folder (which resides on an NTFS partition) or 2) moving my repository to a memory stick (which could potentially be NTFS partitioned).
My use case is very simple. I am the only person using the repository. Currently my repository resides on EXT4, and I either access it from the same machine the repository is located on, or from a second machine through svn+ssh://. However, if I went with one of the options above, the access strategy would obviously change.
I would be hesitant to do this because, as you stated, NTFS partitions don't support Unix style permissions.
The Subversion repository directory is usually owned by, and can only be written to by, the user who runs the Subversion server process. For example, if you're using Apache httpd and your Apache user is called httpd, the user who owns the repository is httpd, and this is the only user with write permissions on the files and directories.
An NTFS partition on a Windows box does have permissions set correctly, because the Subversion server process would use Windows permission settings. A Linux server will have problems.
Also, NTFS partitions are case preserving but not case sensitive; I don't know how this would affect a Subversion server process running on a Linux box. Again, a Windows Subversion server process would be fine with this. A Linux server may have problems.
Unfortunately, I can't say for certain one way or another. I've never tried it, nor seen it done. However, there is a post on the Wandisco Forum that covers this very scenario. The user was able to get around his problems, but I would be hesitant to say that all is beer and candy from then on.
Please say you're not doing this so that you can share a file:// protocol Subversion repository among multiple users. This is a big, fat no-no. Instead, you should at least run the svnserve process and have users access your repository via the svn:// protocol. It's very simple to set up svnserve -- even as a Windows service. The only problem may be that port 3690 (the Subversion server port) is being blocked by your firewall or router.
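Setting that up only takes a couple of commands (a sketch; the /srv/svn parent directory and the repository name are illustrative):

# create a repository and serve everything under /srv/svn as a daemon (listens on port 3690)
svnadmin create /srv/svn/myrepo
svnserve -d -r /srv/svn

# clients then access it with:
svn checkout svn://yourhost/myrepo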

Dropbox multiboot NTFS folder sync

In an earlier, closed thread by vanadium, people were wanting a solution for syncing Dropbox on a multi-boot system from one NTFS directory. Vanadium had a good suggestion, which I tweaked a little bit to solve this.
You must install it in Windows (or the other system) and set up the Dropbox folder from Dropbox.
Reboot into the Linux system (I used Ubuntu 18).
Install Dropbox on the ext4 partition.
Open a file manager in your home folder and delete the Dropbox directory. Leave this file manager open.
Open a new file manager in the NTFS (or other) directory that the other OS's Dropbox folder is in.
Hit Ctrl + H, then drag the Dropbox folder to the directory you deleted it from. (This creates a symbolic link shortcut to the Dropbox folder you want.)
Now sync Dropbox in Linux.
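The drag-and-drop step just creates a symbolic link; the terminal equivalent is something like this (a sketch, assuming the NTFS partition is mounted at /media/$USER/windows and the other OS's Dropbox folder sits at its top level):

ln -s /media/$USER/windows/Dropbox ~/Dropbox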
If you want Dropbox to load at startup, you must set the partition to auto-mount at startup. From a terminal:
1 - Write down the UUID of the drive that you want to mount by executing the following command:
sudo blkid
2 - Then edit the fstab:
sudo gedit /etc/fstab
3 - Add at the end of the file fstab:
UUID=D638F77338F7514B /media/baraldi/win_www ntfs defaults 0 0
Be sure the UUID matches what you recorded in the first step
4 - Restart.
Or use the "Disks" app.
Load the Disks app (in System) and select the disk with the filesystem you want to mount on startup.
Then select the filesystem on that disk and click on the gears (for configuration).
Select "Edit Mount Options" from the popup menu.
On the setup options, click to check the "Mount on Startup" box. (This will add the entry to fstab when you click on "OK").
Reboot, and your filesystem should be available.
I agree with the other comments here regarding manually adding lines to fstab via the CLI/a text editor. If you take the time to look at your fstab file, it will help you understand what changes have been made, and ultimately the CLI method will become faster for you.

Remote filesystem with an overlay for changed bytes

I'm developing software that uses a set of big files.
I cannot download them all.
I need to reproduce a timeout error that cannot be reproduced otherwise.
There is a staging host. I mounted its remote folder with sshfs, but I cannot launch a local server instance because it would change these files. It requires write permissions.
With "sshfs -o ro" it fails to start.
I want to know: is it possible to save changes locally so that they overlay the actual bytes of the remote files?
You should be able to use UnionFS or AUFS (or any other union mount filesystem) to use these two folders together. You would have the read-only mount via sshfs and merge it with a local folder that takes precedence. Reads come from the remote filesystem until a write has been made.
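A sketch of that idea using overlayfs, which is built into the mainline kernel (directory names are illustrative; the sshfs mount is the read-only lower layer and local writes land in the upper directory):

# read-only view of the remote data
sshfs -o ro user@stagehost:/path/to/bigfiles /mnt/stage

# writable overlay on top of it
mkdir -p /tmp/upper /tmp/work /mnt/merged
sudo mount -t overlay overlay \
    -o lowerdir=/mnt/stage,upperdir=/tmp/upper,workdir=/tmp/work \
    /mnt/merged

# point the local server instance at /mnt/merged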

Linux disk partitions and Nginx

In the Linux Bible book, I've read that it is useful to install Linux across different partitions. For example, separating /var is beneficial because it prevents an attacker from filling the hard drive and stopping the OS (since the pages will be in /var/www/), while letting the application that lives in /usr (nginx, for example) keep running. How can we do this?
I'm sorry for the question, because I'm new to Linux. The first time I tried to load another partition (the D: drive in Windows), it asked me to mount it first (I had made a shortcut to a document on D:, and the shortcut didn't work until I mounted the partition). So does it make sense to make 5 partitions (/boot, /usr, /var, /home, /tmp) to load the OS?
Do web hosts use the same strategy?
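As for the mechanics, giving /var (or any of those directories) its own partition just means creating the partition, during installation or later, and pointing an fstab entry at it, for example (device names are illustrative):

# /etc/fstab
/dev/sda2   /       ext4   defaults   0 1
/dev/sda3   /var    ext4   defaults   0 2
/dev/sda4   /home   ext4   defaults   0 2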
Even if you divide the partitions:
An attacker can fill the logs and make the web service unstable. These are mostly, or by default, located in the /var/log folder. Some distros even put the log folder in /etc/webserver/log.
There are some upload-related flaws that can make PHP upload features fill up the file limit in the /tmp folder.
This will not protect you at all. You must look at security from another perspective.
