How to run ffmpeg as apache user - linux

I installed ffmpeg on CentOS as the root user. How can I update the permissions so that the apache (httpd) user can run the ffmpeg command?
-rwxr-xr-x. 1 root root 24M Mar 4 03:43 /root/bin/ffmpeg
I tried to link it into /usr/bin:
cd /usr/bin
ln -s /root/bin/ffmpeg
But it still does not work. I guess because apache does not have a shell available?
su apache -c whoami
This account is currently not available

As @pbu's comment states, if you follow this installation guide http://trac.ffmpeg.org/wiki/CompilationGuide/Generic but replace '$HOME' with '/usr/local', the apache (httpd) user will be able to execute it. The build then lands under a world-readable prefix instead of under /root, which the apache user cannot read.
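Alternatively, if you just need the existing binary to be reachable, a minimal sketch (assuming the static build in /root/bin has no other dependencies under /root) is to copy it to a world-readable prefix and test it as the apache user. Note that su apache fails because apache's login shell is /sbin/nologin (hence "This account is currently not available"), so supply a shell explicitly for the test:
sudo cp /root/bin/ffmpeg /usr/local/bin/ffmpeg
sudo chmod 755 /usr/local/bin/ffmpeg
su -s /bin/bash apache -c '/usr/local/bin/ffmpeg -version'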

Related

laravel centOS 7 chmod 755/775 permission denied "could not be opened: failed to open stream: Permission denied", only allows if I set to 777

I have this Linux version:
Linux version 3.10.0-693.21.1.el7.x86_64
(builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat
4.8.5-16) (GCC) ) #1 SMP Wed Mar 7 19:03:37 UTC 2018
If I set the permission to 777 on storage, laravel works, but if I set it to 755 or 775 it says:
"The stream or file
"/home/admin/domains/linkshift.eu/public_html/storage/logs/laravel-2018-11-08.log"
could not be opened: failed to open stream: Permission denied"
I have tried searching for an answer, but nothing worked. I have tried the steps from
Permissions Issue with Laravel on CentOS
but it still doesn't work.
Edit: I also have DirectAdmin installed.
It looks like the log file is generated by the root user while you are running Laravel as a different user. Make sure the log file is written by the same user, or give permission to your user:
sudo chown -R laravel-user:laravel-user /path/to/your/laravel/root/directory
Run these commands after every deploy
chmod -R 775 storage/framework
chmod -R 775 storage/logs
chmod -R 775 bootstrap/cache
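To confirm which user actually needs the access, you can check who owns the log file and which user the web server workers run as (a quick diagnostic sketch; on a DirectAdmin box the web user may be apache or nobody rather than a dedicated laravel-user):
ls -l storage/logs
ps aux | egrep '(httpd|php-fpm)' | head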
If it is still not working, it may also be because of SELinux.
Check the SELinux status in a terminal:
sestatus
If the status is enabled, you can put SELinux into permissive mode (not recommended):
setenforce Permissive
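Note that setenforce only changes the mode until the next reboot; making the change permanent requires editing the SELINUX= line in /etc/selinux/config.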
Or, preferably, you can do as below:
yum install policycoreutils-python -y # might not be necessary, try the below first
semanage fcontext -a -t httpd_sys_rw_content_t "/path/to/your/laravel/root/directory/storage(/.*)?" # add a new httpd read-write content rule to SELinux for the specific folder; use -m to modify
semanage fcontext -a -t httpd_sys_rw_content_t "/path/to/your/laravel/root/directory/bootstrap/cache(/.*)?" # same as the above for bootstrap/cache
restorecon -Rv /path/to/your/laravel/root/directory # this command is very important too; it applies the new rules, like a restart
SELinux is intended to restrict access even for root users, so that only the necessary things can be accessed; at a general level it is extra security, and disabling it is not good practice. There are many resources for learning SELinux, but for this case they are not even required.
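To verify that the new context was applied, you can inspect the directory labels (a quick check; the path is the same placeholder as above):
ls -Zd /path/to/your/laravel/root/directory/storage
The context field should now show httpd_sys_rw_content_t.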
Could you show the result of
ls -l /home/admin/domains/linkshift.eu/public_html/storage/logs
and have you tried
php artisan config:cache
php artisan config:clear
composer dump-autoload -o

"basename: missing operand" on su command

I've added a superuser sroot with the following command:
useradd -o -r -c "service root" -g 0 -u 0 -m -d /root -s /bin/bash sroot
When I try to switch to that user I get the following:
[admin@machine ~]$ su - sroot
Password:
TERM=[xterm-r6]?
basename: missing operand
Try `basename --help' for more information.
whoami shows that I'm root now, but commands that require root access still cannot be executed.
When I login under usual root everything works fine.
[admin@machine ~]$ uname -a
Linux <myhostname> 2.6.18-194.el5PAE #1 SMP Fri Apr 2 15:37:44 EDT 2010 i686 i686 i386 GNU/Linux
Thanks in advance!
It turns out I was testing not with standard Linux commands but with scripts added by an installed RPM. Those scripts check the $LOGNAME variable and require it to be root, not sroot.
Thanks @thatotherguy for your comment, which pointed me down the right path.
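For illustration, such a script might contain a guard like this (a hypothetical reconstruction of the failure mode; after su - sroot, $LOGNAME is sroot even though the uid is 0):
if [ "$LOGNAME" != "root" ]; then
    echo "This script must be run as root" >&2
    exit 1
fi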

sudo must be setuid root error already tried everything

I am getting the following error while trying to switch to root.
[~]# sudo su -
sudo: must be setuid root
and I have confirmed that the permissions on the sudo binary are set correctly:
[~]# ls -l /usr/bin/sudo
---s--x--x 2 root root 190904 Mar 10 2014 /usr/bin/sudo*
Also, the user is already in the wheel group. Please help.
Please make sure that the user has normal shell access rather than a jailed shell.

sudo must be setuid root error

I am getting the following error while switching to the root user:
[~]# sudo su -
sudo: must be setuid root
The current permissions on sudo are:
[~]# ls -l /usr/bin/sudo
---s--x--x 2 root root 190904 Mar 10 2014 /usr/bin/sudo*
It's my CloudLinux 5.11 x86_64 cPanel live server. Any suggestions on how to fix this?
Try to enter the system in recovery mode (hold Esc or Shift at boot).
Then choose the entry containing 'root' in the recovery menu.
Then:
#mount -o remount,rw /
#chown root:root /usr/bin/sudo
#chmod 4755 /usr/bin/sudo
Now restart and try:
sudo ls
But if the following errors are raised:
#sudo: /usr/lib/sudo/sudoers.so must be owned by uid 0
#sudo: fatal error, unable to load plugins
Then you need to enter recovery mode again and try:
#chown root /usr/lib/sudo/sudoers.so
Then restart.
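After the reboot you can verify that ownership and the setuid bit are back to what sudo expects (a quick sanity check; mode 4755 shows up as -rwsr-xr-x):
ls -l /usr/bin/sudo /usr/lib/sudo/sudoers.so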
I have fixed it myself. The user was set to a jailed shell; I changed it to a normal shell and could then switch to root. – Techiescorner
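On a cPanel server the jailed shell is typically /usr/local/cpanel/bin/jailshell. Besides toggling it in WHM (Manage Shell Access), the switch can be made from the command line (a sketch; 'username' is a placeholder):
grep username /etc/passwd # check the current shell
chsh -s /bin/bash username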

Running app inside Docker as non-root user

After yesterday's news of Shocker, it seems like apps inside a Docker container should not be run as root. I tried to update my Dockerfile to create an app user; however, changing permissions on app files (while still root) doesn't seem to work. I'm guessing this is because some LXC permission is not being granted to the root user, maybe?
Here's my Dockerfile:
# Node.js app Docker file
FROM dockerfile/nodejs
MAINTAINER Thom Nichols "thom@thomnichols.org"
RUN useradd -ms /bin/bash node
ADD . /data
# This next line doesn't seem to have any effect:
RUN chown -R node /data
ENV HOME /home/node
USER node
RUN cd /data && npm install
EXPOSE 8888
WORKDIR /data
CMD ["npm", "start"]
Pretty straightforward, but when I ls -l everything is still owned by root:
[ node@ed7ae33e76e1:/data {docker-nonroot-user} ]$ ls -l /data
total 64K
-rw-r--r-- 1 root root 383 Jun 18 20:32 Dockerfile
-rw-r--r-- 1 root root 862 Jun 18 16:23 Gruntfile.js
-rw-r--r-- 1 root root 1.2K Jun 18 15:48 README.md
drwxr-xr-x 4 root root 4.0K May 30 14:24 assets/
-rw-r--r-- 1 root root 416 Jun 3 14:22 bower.json
-rw-r--r-- 1 root root 930 May 30 01:50 config.js
drwxr-xr-x 4 root root 4.0K Jun 18 16:08 lib/
drwxr-xr-x 42 root root 4.0K Jun 18 16:04 node_modules/
-rw-r--r-- 1 root root 2.0K Jun 18 16:04 package.json
-rw-r--r-- 1 root root 118 May 30 18:35 server.js
drwxr-xr-x 3 root root 4.0K May 30 02:17 static/
drwxr-xr-x 3 root root 4.0K Jun 18 20:13 test/
drwxr-xr-x 3 root root 4.0K Jun 3 17:38 views/
My updated Dockerfile works great thanks to @creak's clarification of how volumes work. Once the initial files are chowned, npm install is run as the non-root user. And thanks to a postinstall hook, npm runs bower install && grunt assets, which takes care of the remaining install steps and avoids any need to npm install -g node CLI tools like bower, grunt or coffeescript.
Check this post: http://www.yegor256.com/2014/08/29/docker-non-root.html At rultor.com we run all builds in their own Docker containers, and every time before running the scripts inside the container we switch to a non-root user. This is how:
adduser --disabled-password --gecos '' r
adduser r sudo
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
su -m r -c /home/r/script.sh
r is the user we're using.
Update 2015-09-28
I have noticed this post getting a bit of attention. A word of advice for anyone who is potentially interested in doing something like this: I would try to use Python or another language as a wrapper for your script executions. With native bash scripts I had problems when trying to pass a variety of arguments through to my containers. Specifically, there were issues with the interpretation/escaping of " and ' characters by the shell.
I needed to change the user for a slightly different reason.
I created a Docker image housing a full-featured install of ImageMagick and FFmpeg, with the goal of doing transformations on images/videos within my host OS. My problem was that these are command-line tools, so it is slightly trickier to execute them via Docker and then get the results back into the host OS. I managed to allow for this by mounting a Docker volume. This seemed to work okay, except that the image/video output came out owned by root (i.e. the user the Docker container was running as) rather than the user who executed the command.
I looked at the approach that @François Zaninotto mentioned in his answer (you can see the full make script here). It was really cool, but I preferred the option of creating a bash shell script that I would then register on my path. I took some of the concepts from the Makefile approach (specifically the user/group creation) and then created the shell script.
Here is an example of my dockermagick shell script:
#!/bin/bash

### VARIABLES
DOCKER_IMAGE='acleancoder/imagemagick-full:latest'
CONTAINER_USERNAME='dummy'
CONTAINER_GROUPNAME='dummy'
HOMEDIR='/home/'$CONTAINER_USERNAME
GROUP_ID=$(id -g)
USER_ID=$(id -u)

### FUNCTIONS
create_user_cmd()
{
  echo \
    groupadd -f -g $GROUP_ID $CONTAINER_GROUPNAME '&&' \
    useradd -u $USER_ID -g $CONTAINER_GROUPNAME $CONTAINER_USERNAME '&&' \
    mkdir --parent $HOMEDIR '&&' \
    chown -R $CONTAINER_USERNAME:$CONTAINER_GROUPNAME $HOMEDIR
}

execute_as_cmd()
{
  echo \
    sudo -u $CONTAINER_USERNAME HOME=$HOMEDIR
}

full_container_cmd()
{
  echo "'$(create_user_cmd) && $(execute_as_cmd) $@'"
}

### MAIN
eval docker run \
  --rm=true \
  -a stdout \
  -v $(pwd):$HOMEDIR \
  -w $HOMEDIR \
  $DOCKER_IMAGE \
  /bin/bash -ci $(full_container_cmd $@)
This script is bound to the 'acleancoder/imagemagick-full' image, but that can be changed by editing the variable at the top of the script.
What it basically does is:
Creates a user and group within the container to match the user who executes the script on the host OS.
Mounts the current working directory of the host OS (using Docker volumes) into that user's home directory inside the executing Docker container.
Sets that home directory as the working directory for the container.
Passes along any arguments given to the script, which are then executed by /bin/bash inside the executing Docker container.
Now I am able to run ImageMagick/FFmpeg commands against files on my host OS. For example, to convert an image MyImage.jpeg into a PNG file, I can now do the following:
$ cd ~/MyImages
$ ls
MyImage.jpeg
$ dockermagick convert MyImage.jpeg Foo.png
$ ls
Foo.png MyImage.jpeg
I have also attached to stdout, so I can run the ImageMagick identify command to get info on an image on my host, e.g.:
$ dockermagick identify MyImage.jpeg
MyImage.jpeg JPEG 640x426 640x426+0+0 8-bit DirectClass 78.6KB 0.000u 0:00.000
There are obvious dangers in mounting the current directory and allowing any arbitrary command definition to be passed along for execution. But there are also many ways to make the script more safe/secure. I am executing this in my own non-production personal environment, so these are not of the highest concern for me; but I would highly recommend you take the dangers into consideration should you choose to expand upon this script. It's also worth mentioning that this script doesn't take an OS X host into consideration. The makefile that I stole ideas/concepts from does take this into account, so you could extend this script to do so.
Another limitation to note is that I can only refer to files in the directory from which I am executing the script. This is because of the way I am mounting the volumes, so the following would not work:
$ cd ~/MyImages
$ ls
MyImage.jpeg
$ dockermagick convert ~/DifferentDirectory/AnotherImage.jpeg Foo.png
$ ls
MyImage.jpeg
It's best just to go to the directory containing the image and execute against it directly. Of course I am sure there are ways to get around this limitation too, but for me and my current needs, this will do.
This one is a bit tricky; it is actually due to the image you start from.
If you look at the source, you notice that /data/ is declared as a volume. So everything you do to /data in the Dockerfile is discarded and overridden at runtime by the volume that gets mounted then.
You can chown at runtime by changing your CMD to something like CMD chown -R node /data && npm start.
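Here is a minimal sketch of that workaround, assuming the same dockerfile/nodejs base image as in the question (the container must still start as root so the chown can succeed, then it drops to the node user):
# Runtime-chown variant of the Dockerfile above
FROM dockerfile/nodejs
RUN useradd -ms /bin/bash node
ADD . /data
WORKDIR /data
# chown the mounted volume at runtime, then drop privileges:
CMD chown -R node /data && su node -c 'npm start'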
Note: I answer here because, given the generic title, this question pops up in Google when you look for a solution to "Running app inside Docker as non-root user". Hope it helps those who are stranded here.
With Alpine Linux you can create a system user like this:
RUN adduser -D -H -S -s /bin/false -u 1000 myuser
USER myuser
Everything in the Dockerfile after the USER line is executed as myuser.
The myuser user has:
no password assigned
no home dir
no login shell
no root access.
This is from adduser --help:
-h DIR Home directory
-g GECOS GECOS field
-s SHELL Login shell
-G GRP Add user to existing group
-S Create a system user
-D Don't assign a password
-H Don't create home directory
-u UID User id
-k SKEL Skeleton directory (/etc/skel)
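Putting it together, a minimal Alpine Dockerfile might look like this (a sketch; the alpine:3.19 tag and the id command are only placeholders to demonstrate the effective user):
FROM alpine:3.19
RUN adduser -D -H -S -s /bin/false -u 1000 myuser
USER myuser
CMD ["id"]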
Note: This answer is given because many people looking for non-root usage will end up here. Beware: it does not address the issue that caused the problem, but addresses the title and clarifies the answer given by @yegor256, which uses a non-root user inside the container. It explains how to accomplish this for the non-Debian/non-Ubuntu use case. It does not address the issue with volumes.
On Red Hat-based systems, such as Fedora and CentOS, this can be done in the following way:
RUN adduser user && \
echo "user ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/user && \
chmod 0440 /etc/sudoers.d/user
In your Dockerfile you can run commands as this user by doing:
RUN su - user -c "echo Hello $HOME"
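One caveat worth noting: with double quotes, $HOME is expanded by the build shell (i.e. as root) before su runs; if you want it expanded inside the user's login session, use single quotes, as in RUN su - user -c 'echo Hello $HOME'.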
And the command can be run as:
CMD ["su","-","user","-c","/bin/bash"]
An example of this can be found here:
https://github.com/gbraad/docker-dev/commit/644c51002f4b8e6fe5bb745638542a4c3d908b16
