Intel Pin Tool permission error - ubuntu-14.04

The following works fine and I get the edge-count output for the 'ls' program on my Ubuntu 14.04 system:
$ ../../../pin.sh -t obj-intel64/edgcnt.so -- /bin/ls
but when I use it on my Node.js application I get a permission denied error:
$ ../../../pin.sh -t obj-intel64/edgcnt.so -- /home/samira/Documents/benchmarks/lets-chat/
/home/samira/Documents/benchmarks/lets-chat/ : Permission denied
I have searched the web but haven't found a solution. I tried running both the Node application and the Pin tool as root, but that didn't solve the problem. I also tried attaching by pid:
$ ../../../pin -pid 14191 -t obj-intel64/edgcnt.so -o myout.log
E: Could not attach to process 14191: need execute and read access to /proc/14191/exe
I tried to change the permissions on the /proc/ folder, but the operation was not permitted even as root.
Any ideas?

When you get permission denied while using Pin on your Node application, have you checked the user and group? Running the following as root may help with your question:
root@server:~# echo 0 > /proc/sys/kernel/yama/ptrace_scope
As for:
$ ../../../pin -pid 14191 -t obj-intel64/edgcnt.so -o myout.log
E: Could not attach to process 14191: need execute and read access to /proc/14191/exe
I think this means the pid you want to trace doesn't exist.
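If lowering ptrace_scope is what fixes the attach error, note that the echo above only lasts until reboot. A minimal sketch for checking the value and making the change persistent (assuming a Yama-enabled kernel; on Ubuntu the setting typically lives in /etc/sysctl.d/10-ptrace.conf):
$ cat /proc/sys/kernel/yama/ptrace_scope          # 1 means attaching to non-child processes is blocked
$ sudo sysctl -w kernel.yama.ptrace_scope=0       # apply immediately
$ echo "kernel.yama.ptrace_scope = 0" | sudo tee /etc/sysctl.d/10-ptrace.conf   # persist across reboots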

Related

Laravel Permission denied error on Linux Server

I am facing a very strange error with my Laravel application on the production server (Linux). Whenever the users of my application log in for the first time in the morning, they get a permission denied error which reads something like
file_put_contents(/var/www/html/PROJECT/storage/framework/cache/data/0c/e5/0ce52dca12715a327eb4c1b4bff36293ea67c719): Failed to open stream: No such file or directory
To overcome this, the first thing I have to do in the morning is to give permission to the entire project by
sudo chmod -R 777 PROJECT
And then it runs just fine.
This is slowly getting very annoying as it is happening every morning. Why are the permissions getting revoked automatically and is there a permanent solution for this?
Please help me and thank you all in advance
I think your application also runs some command via cron. That's why your storage permissions get overridden by the system user (the user the cron job executes as).
The thing is, the storage directory should be writable by both the system user and the webserver user (apache/nginx).
By the way, the Symfony framework has a nice solution for this kind of situation which can also be applied in a Laravel application.
Please look at this:
https://symfony.com/doc/current/setup/file_permissions.html
In your case, this command would be like:
HTTPDUSER=$(ps axo user,comm | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1)
sudo setfacl -dR -m u:"$HTTPDUSER":rwX -m u:$(whoami):rwX /var/www/html/PROJECT/storage
sudo setfacl -R -m u:"$HTTPDUSER":rwX -m u:$(whoami):rwX /var/www/html/PROJECT/storage
You can also achieve the same result if you know what your webserver user is:
sudo chown -R "$local_user":"$webserver_user" "/var/www/html/PROJECT/storage"
sudo chmod -R 0777 "/var/www/html/PROJECT/storage"
In my opinion, setfacl is the better solution if you have it installed on your server.
You should check whether your webserver has enough permission on the directory, instead of giving 777 to the whole project: this is your production environment, where that is not at all recommendable.
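A quick hedged way to test that, assuming the webserver runs as www-data (substitute whatever the $HTTPDUSER command above resolves to):
sudo -u www-data touch /var/www/html/PROJECT/storage/framework/cache/.write-test && echo "webserver can write"
rm -f /var/www/html/PROJECT/storage/framework/cache/.write-test
If the touch fails, fix ownership or ACLs on the storage directory rather than opening up the whole project with 777.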
Also try php artisan cache:clear

Failed to open /dev/mem: Permission denied

Today, I tried to use this command on my Raspberry Pi:
sox -t mp3 /home/pi/test.mp3 -t wav - | /home/pi/PiFmRds/src/pi_fm_rds -audio -
But I got this error message :
Failed to open /dev/mem: Permission denied.
Terminating: cleanly deactivated the DMA engine and killed the carrier.
sudo: ./sox : command not found
I've tried to place "sudo" before the command but I got the same error.
How can I resolve this, please? (And sorry if I made a mistake; I started playing with my Raspberry Pi today and this is also my first question on this website.)
Thanks in advance !
Putting sudo in front of sox will not help you since I am pretty sure the error message "Failed to open /dev/mem" comes from pi_fm_rds. And that is still started without sudo.
You are actually executing two commands. sox is the first, and pi_fm_rds the second. You are sending the output of the first command to the second (via the pipe |).
To call pi_fm_rds with root access you can choose one of these three options:
Call pi_fm_rds with sudo
sox -t mp3 /home/pi/test.mp3 -t wav - | sudo /home/pi/PiFmRds/src/pi_fm_rds -audio -
Or add your user to the kmem group (which allows access to /dev/mem) - requires logout/reboot.
sudo usermod -a -G kmem userName
or make the program setuid root - or setgid kmem
chown root:root /home/pi/PiFmRds/src/pi_fm_rds
chmod u+s /home/pi/PiFmRds/src/pi_fm_rds
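If you go the setuid route, you can check that the bit took effect; the s in place of the user execute bit is what you want to see:
ls -l /home/pi/PiFmRds/src/pi_fm_rds
# expected output looks roughly like: -rwsr-xr-x 1 root root ... pi_fm_rds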

access permission in linux

I am running Linux on my Android phone using GNURoot Debian and have installed the GCC compiler. I
wrote a C program on Linux and compiled it with the
command
g++ helloworld.c -o helloworld
This produced a helloworld file in the same directory, and I executed it with the command
./helloworld
I got the message "bash: ./helloworld: Permission denied".
Then I used chmod u+x helloworld
and executed it again with the same command, but got the same permission denied message.
Then I used this command to change the permissions:
sudo chmod u+x helloworld
Again I got the same permission denied message.
When I list the file after using chmod,
I see there is no change in the permissions.
Please help; I will be very grateful to you.
Android mounts /storage/emulated with the noexec parameter, which means that files there can't be executed regardless of permissions. You need to put your binary somewhere not under there (and not under somewhere that's just a symlink to there).
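As a rough illustration (exact mount points depend on the GNURoot setup), you can confirm the noexec flag and then run the binary from your home directory, which is normally on an executable filesystem:
mount | grep noexec              # check whether the directory you compiled in is on a noexec mount
cp helloworld ~/ && chmod u+x ~/helloworld
~/helloworld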

rsync - mkstemp failed: Permission denied (13) [closed]

I have the following setup to periodically rsync files from server A to server B. Server B has the rsync daemon running with the following configuration:
read only = false
use chroot = false
max connections = 4
syslog facility = local5
log file = /var/adm/rsyncd.log
munge symlinks = false
secrets file = /etc/rsyncd.secrets
numeric ids = false
transfer logging = true
log format = %h %o %f %l %b
[BACKUP]
path = /path/to/archive
auth users = someuser
From server A I am issuing the following command:
rsync -adzPvO --delete --password-file=/path/to/pwd/file/pwd.dat /dir/to/be/backedup/ someuser@192.168.100.100::BACKUP
The BACKUP directory is fully readable/writable/executable by everyone. When I run the rsync command from server A, I see:
afile.txt
989 100% 2.60kB/s 0:00:00 (xfer#78, to-check=0/79)
for each and every file in the directory I wish to back up. It fails when it gets to writing tmp files:
rsync: mkstemp "/.afile.txt.PZQvTe" (in BACKUP) failed: Permission denied (13)
Hours of googling later and I still can't resolve what seems to be a very simple permission issue. Advice? Thanks in advance.
Additional Information
I just noticed the following occurs at the beginning of the process:
rsync: failed to set permissions on "/." (in BACKUP): Permission denied (13)
Is it trying to set permission on "/"?
Edit
I am logged in as the user someuser. My destination directory has full read/write/execute permissions for everyone, including its contents. In addition, the destination directory is owned by someuser and is in someuser's group.
Follow up
I've found that using SSH solves this.
Make sure the user you're rsync'd into on the remote machine has write access to the contents of the folder AND the folder itself, as rsync tries to update the modification time on the folder itself.
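A quick way to test this on server B, assuming the daemon is running with its default nobody user (see the uid/gid answer below):
ls -ld /path/to/archive
sudo -u nobody touch /path/to/archive/.rsync-test && echo "daemon user can write" && rm /path/to/archive/.rsync-test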
Even though you got this working, I recently had a similar encounter and no SO or Google searching was of any help, as the results all dealt with basic permission issues, whereas the solution below is an obscure setting you wouldn't even think to check in most situations.
In my case the permissions were exactly the same on both servers, including the owner and group, yet rsync transfers worked in one direction but not the other.
It turned out that the server I was getting permission denied from had SELinux enabled, which overrides POSIX permissions on files/folders. So even though the folder in question could have been 777 and the command run as root, SELinux would still override those permissions, which produced the "permission denied" error from rsync.
You can run the command getenforce to see if SELinux is enabled on the machine.
In my situation I ended up just disabling SELinux completely because it wasn't needed, was already disabled on the server that was working fine, and just caused problems being enabled. To disable it, open /etc/selinux/config and set SELINUX=disabled. To temporarily disable it you can run the command setenforce 0, which puts SELinux into a permissive state rather than the enforcing state, causing it to print warnings instead of enforcing policy.
The rsync daemon by default uses nobody/nogroup for all modules if it is running as root. So you either need to set the uid and gid parameters to the user you want, or set them to root/root.
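A minimal sketch of that change for the module in the question (uid and gid are standard rsyncd.conf parameters; adjust the user to your setup):
[BACKUP]
path = /path/to/archive
auth users = someuser
uid = someuser
gid = someuser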
I encountered the same problem and solved it by chown'ing the destination folder to the right user. The current user did not have permission to read, write and execute the destination folder's files. Try adding the permissions with chmod a+rwx <folder/file name>.
This might not suit everyone since it does not preserve the original file permissions, but in my case that was not important, and it solved the problem for me. rsync has the option --chmod:
--chmod  This option tells rsync to apply one or more comma-separated "chmod" strings to the permission of the files in the transfer. The resulting value is treated as though it was the permissions that the sending side supplied for the file, which means that this option can seem to have no effect on existing files if --perms is not enabled.
This forces the permissions to be what you want on all files/directories. For example:
rsync -av --chmod=Du+rwx SRC DST
would add Read, Write and Execute for the user to all transferred directories.
I had a similar issue, but in my case it was because the storage only offered SFTP, with no ssh or rsync daemon on it. I could not change anything, because this server was provided by my customer.
rsync could not change the date and time of the files, and some other utilities (like csync) showed me other errors: "Unable to create temporary file Clock skew detected".
If you have access to the storage server, just install openssh-server or launch rsync as a daemon there.
In my case I could not do this, and the solution was lftp.
lftp usage for synchronization is shown below:
lftp -c "open -u login,password sftp://sft.domain.tld/; mirror -c --verbose=9 -e -R -L /src/folder /rem/folder"
/src/folder is the folder on my PC, and /rem/folder is sftp://sft.domain.tld/rem/folder.
You can find the man page at lftp.yar.ru/lftp-man.html
Windows: check the permissions of the destination folders. Take ownership if you must, to give rights to the account running the rsync service.
I had the same issue on CentOS 7. I went through a lot of articles and forums but couldn't find the solution.
The problem was with SELinux. Disabling SELinux at the server end worked.
Check the SELinux status at the server end (from where you are pulling data using rsync).
Commands to check SELinux status and disable it:
$ getenforce
Enforcing    # this means SELinux is enabled
$ setenforce 0
$ getenforce
Permissive
Now try running the rsync command at the client end; it worked for me.
All the best!
I have a CentOS 7 server with rsyncd on board:
/etc/rsyncd.conf
[files]
path = /files
By default, SELinux blocks rsyncd's access to the /files folder.
# this sets needed context to my /files folder
sudo semanage fcontext -a -t rsync_data_t '/files(/.*)?'
sudo restorecon -Rv '/files'
# sets needed booleans
sudo setsebool -P rsync_client 1
Disabling SELinux is an easy, but not a good, solution.
I had the same issue, so I first SSH'd into the server to confirm that I was able to log in, using the command:
ssh -i /Users/Desktop/mypemfile.pem user@ec2.compute-1.amazonaws.com
Then, in a new terminal,
I copied a small file to the server using SCP, to make sure I was able to make a connection:
scp -i /Users/Desktop/mypemfile.pem /Users/Desktop/test.file user@ec2.compute-1.amazonaws.com:/home/user/test/
Then, in the same new terminal, I tried running rsync:
rsync -avz -e "ssh -i /Users/Desktop/mypemfile.pem" /Users/Desktop/backup/image.img.gz user@ec2.compute-1.amazonaws.com:
If you're on a Raspberry Pi or another Unix system with sudo, you need to tell the remote machine where the rsync and sudo programs are located.
I put in the full path to be safe.
Here's my example:
rsync --stats -paogtrh --progress --omit-dir-times --delete --rsync-path='/usr/bin/sudo /usr/bin/rsync' /mnt/drive0/ pi@192.168.10.238:/mnt/drive0/
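Note that this only works if sudo on the remote machine can run rsync without prompting for a password over the non-interactive SSH session. A hedged sudoers entry (added via visudo) for that might look like:
pi ALL=(ALL) NOPASSWD: /usr/bin/rsync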
I imagine a common error not currently mentioned above is trying to write to a mount point (e.g., /media/drivename) when the partition isn't mounted. That will produce this error as well.
If it's an encrypted drive set to auto-mount but it doesn't, the issue might be that the encrypted partition needs to be unlocked before you attempt to write to the location where it is supposed to be mounted.
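A simple guard for this case, assuming the destination is under /media/drivename (SRC and DST are placeholders):
if mountpoint -q /media/drivename; then
    rsync -av SRC/ /media/drivename/DST/
else
    echo "/media/drivename is not mounted" >&2
fi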
I had the same error while syncing files inside a Docker container where the destination was a mounted volume (Docker for Mac); I ran rsync via su-exec <user>. I was able to resolve it by running rsync as root with the -o and -g flags (preserve owner and group on the destination files).
I'm still not sure what caused that issue; the destination permissions were OK (I ran chown -R <user> on the destination dir before rsync). Perhaps it is somehow related to Docker for Mac's slow filesystem.
Pay attention to -e ssh and jenkins@localhost: in the next example:
rsync -r -e ssh --chown=jenkins:admin --exclude .git --exclude Jenkinsfile --delete ./ jenkins@localhost:/home/admin/web/xxx/public
That helped me.
P.S. Today I realized that when you add the jenkins user to some group, the permissions only apply after a slave (agent) restart. My solution (-e ssh and jenkins@localhost:) is only needed when you can't restart the agent/server.
Yet another way to get this symptom: I was rsync'ing from a remote machine over ssh to a Linux box with an NTFS-3G (FUSE) filesystem. Originally the filesystem was mounted at boot time and thus owned by root, and I was getting this error message when I did an rsync push from the remote machine. Then, as the user to which the rsync is pushed, I did:
$ sudo umount /shared
$ mount /shared
and the error messages went away.
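An alternative to remounting by hand is to give the mount to your user up front. A hedged /etc/fstab line for an NTFS-3G filesystem (the UUID and uid/gid values are placeholders for your own):
UUID=XXXX-XXXX  /shared  ntfs-3g  uid=1000,gid=1000,umask=022  0  0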
The user and group of the destination directory and its subdirectories should match the user doing the transfer.
If the user is 'abc', then the destination directory should look like:
lrwxrwxrwx 1 abc abc 34 Jul 18 14:05 Destination_directory
Command: chown abc:abc Destination_directory
Surprisingly, nobody has mentioned the all-powerful sudo.
I had the same problem and sudo fixed it.
Running ssh with root access should solve this problem,
or chmod 0777 /dir/to/be/backedup/
or chown username:user /dir/to/be/backedup/

Running Remote Root Scripts on Fedora

I'd like to automate root scripting actions on my remote Fedora server via SSH without having to install the scripts on the server. To do this, I'm trying to use Bash's inline script notation. This works fine in Ubuntu, but I'm getting strange errors on Fedora.
e.g.
#!/bin/bash
ssh -t myuser@myserver <<EOI
su -
ls /root
exit
exit
EOI
This gives me the output:
standard in must be a tty
ls: cannot open directory /root: Permission denied
I've also tried:
#!/bin/bash
ssh -t myuser@myserver <<EOI
sudo ls /root
exit
EOI
but this gives me:
sudo: no tty present and no askpass program specified
If I manually ssh in and run these commands, they run fine since myuser is in the sudoers file. I've Googled these errors and have tried some fixes, but nothing's worked so far. How do I resolve this?
Looks like you're being prompted for the password but have no way to enter it. Here are a few things that should help.
Try an extra -t option: ssh -tt myuser@myserver <<EOI
Also this is a handy trick to log on as root without the root password being enabled: sudo su -
As a last resort, you can set up your user to sudo without a password using visudo. You might see some comments like these to help you out:
# Uncomment to allow members of group sudo to not need a password
# (Note that later entries override this, so you might need to move
# it further down)
# %sudo ALL=NOPASSWD: ALL
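Combining the -tt suggestion with the sudo version of the script from the question, a minimal sketch would be:
#!/bin/bash
# -tt forces pseudo-tty allocation even though stdin is a heredoc,
# so sudo can prompt for a password (or run silently if NOPASSWD is set)
ssh -tt myuser@myserver <<EOI
sudo ls /root
exit
EOI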
