I installed barnyard2 for Snort, but when I run the command below, this error appears:
[root@localhost snort]# barnyard2 -c /etc/snort/barnyard2.conf -d /var/log/snort/ -f snort.log -w /etc/snort/bylog.waldo /etc/snort/gen-msg.map /etc/snort/sid-msg.map -C /etc/snort/classification.config
Running in Continuous mode
--== Initializing Barnyard2 ==--
Initializing Input Plugins!
Initializing Output Plugins!
Parsing config file "/etc/snort/barnyard2.conf"
+[ Signature Suppress list ]+
----------------------------
+[No entry in Signature Suppress List]+
----------------------------
+[ Signature Suppress list ]+
Barnyard2 spooler: Event cache size set to [2048]
ERROR: Can not get write access to logging directory "/var/log/barnyard2". (directory doesn't exist or permissions are set incorrectly or it is not a directory at all)
Fatal Error, Quitting..
Barnyard2 exiting
and the permissions are:
[root@localhost snort]# ls -l /var/log/barnyard2
-rwxrwxrwx. 1 root root 0 Aug 14 16:35 /var/log/barnyard2
In this link the problem was solved, but I don't understand how:
https://forums.freebsd.org/threads/barnyard2-start-service-error.51378/
It looks like the directory flag is missing there. The error message says
ERROR: Can not get write access to logging directory "/var/log/barnyard2". (directory doesn't exist or permissions are set incorrectly or it is not a directory at all)
The last case probably applies: /var/log/barnyard2 is not a directory at all.
Back up the file and try creating a directory /var/log/barnyard2 with permissions 640 and the corresponding ownership.
EDIT: Since you do not know the contents of /var/log/barnyard2, rename or move the file somewhere safe (as root: 'mv /var/log/barnyard2 /var/log/barnyard2.old'). Restarting barnyard2 now may already help, as it might create the directory with appropriate permissions by itself. Otherwise, as root, type 'mkdir /var/log/barnyard2' and then set permissions with 'chmod 640 /var/log/barnyard2'. Additionally, check the user under which barnyard2 is running with 'ps aux | grep barnyard2'. Then find the group belonging to that user with 'groups <user>' and set the ownership of the directory accordingly with 'chown <user>:<group> /var/log/barnyard2'.
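Putting those steps together (a minimal sketch; the snort:snort ownership is an assumption, so first check which account barnyard2 actually runs under). Note that a directory needs the execute bit in order to be entered, so 750 is used here rather than 640:
mv /var/log/barnyard2 /var/log/barnyard2.old   # preserve the stray file
mkdir /var/log/barnyard2
chmod 750 /var/log/barnyard2                   # owner rwx, group rx
chown snort:snort /var/log/barnyard2           # assumed user and group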
'/var/log/barnyard2' should be the log directory. In your case it is a file. So delete the file and create a directory instead. Here are the steps; enter the commands as the root user.
rm /var/log/barnyard2
mkdir /var/log/barnyard2
Hi, I'm new to Linux and I have been trying to synchronize two folders with the rsync command. I'm using CentOS, and when I execute the command (# rsync -zvr /tmp/f1/ /tmp/f2/) from the command line it works fine, but when run from rc.local at boot it does not. The following message is shown:
sending incremental file list
rsync: change_dir "/tmp/f1" failed: Permission denied (13)
rsync: ERROR: cannot stat destination "/tmp/f2/": Permission denied (13)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(554) [receiver=3.0.6]
rsync: connection unexpectedly closed (9 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]
Can someone please help?
You are having trouble with SELinux. SELinux is a module that allows for much more fine-grained access control than file system permissions and ACLs do. Among other things, it will by default disallow access to files for rsync if it is not run by a user from a terminal. Now how can you let it access the files you want?
There are two options. If you are only dealing with directories no other service (including httpd or such) needs access to, you can do the following:
semanage fcontext -a -t public_content_t "/tmp/f1(/.*)?"
semanage fcontext -a -t public_content_t "/tmp/f2(/.*)?"
This persistently changes the SELinux rules to make the directories /tmp/f1 and /tmp/f2 accessible by rsync. Specifically, it sets the public_content_t type on the directories and the files; nodes with that type are accessible by rsync. However, there is a catch, as mentioned: a node (directory or file) can only have one type. Many services have other requirements for files they access (e.g. sshd requires ssh_t), so you cannot do this in /etc, for example.
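Note that semanage fcontext only records the labeling rule; to relabel files that already exist, run restorecon afterwards:
restorecon -Rv /tmp/f1 /tmp/f2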
Another solution is to persistently allow rsync access to all files. This is fine if you do not run the rsync daemon:
setsebool -P rsync_full_access 1
Afterwards, rsync will be able to access all files, even if run from init and not from a user's terminal.
Why does it make a difference if rsync is started by a daemon or by a user?
(this is only true for the most common, targeted policy)
SELinux knows users, and normal users use the SELinux user unconfined_u. unconfined_u is allowed to do pretty much everything the file system ACLs allow it to do. However, init and the like run as system_u, and system_u is far more constrained. This helps prevent attacks on httpd and other exposed daemons.
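You can inspect these contexts yourself:
id -Z                  # your shell's context, typically unconfined_u:...
ps -eZ | grep rsync    # the context of a running rsync process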
If you have just rebooted, /tmp will have been cleared, so /tmp/f1 and /tmp/f2 will not exist.
rc.local usually runs quite late in the boot sequence, so I'd guess that /tmp is mounted rw by then, but it's possible that it is still only mounted ro.
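If that is the case, a possible workaround (a sketch, using the paths from the question) is to recreate the directories in rc.local before syncing:
mkdir -p /tmp/f1 /tmp/f2
rsync -zvr /tmp/f1/ /tmp/f2/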
On one of our remote systems, mkdir -p $directory fails even though the directory exists. That is, it shows
mkdir: cannot create directory '$directory': File exists
This is really puzzling, as I believed the contract of -p was that it always succeeds when the directory already exists. And it works on the other systems I tried.
There is a user test on all of these systems, and directory=/home/test/tmp.
This can happen if there is already a file with the same name in that location.
Note that a directory cannot contain both a file and a folder with the same name on Linux machines.
Check whether there is a file (not a directory) with the same name as $directory.
mkdir -p won't create the directory if a file with the same name already exists there. Otherwise it works as expected.
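A minimal reproduction of that case, using the directory=/home/test/tmp path from the question:
$ touch /home/test/tmp
$ mkdir -p /home/test/tmp
mkdir: cannot create directory '/home/test/tmp': File exists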
Was your directory a FUSE-based network mount by any chance?
In addition to a file with that name already existing (see the other answer), this can happen when a FUSE process that once mounted something at this directory crashed (or was killed, e.g. with kill -9 or via the Linux OOM killer).
Check the output of mount to see if the FUSE mount is still listed. If yes, you should be able to unmount it and fix the situation using fusermount -uz.
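In commands (a sketch; replace /path/to/mount with the affected directory):
mount | grep fuse              # is the dead FUSE mount still listed?
fusermount -uz /path/to/mount  # -z: lazy unmount, works on a dead mount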
To see what is happening in detail, run strace -fy mkdir -p $directory, which shows all syscalls involved and their return values.
I consider the error messages emitted in this case a bug in mkdir -p (in particular the gnulib library):
When you run it on a dir that had a FUSE process mounted but that process crashed, it says
mkdir: cannot create directory ‘/mymount’: File exists
which is rather inaccurate, because the underlying stat() call returns ENOTCONN (Transport endpoint is not connected); but mkdir propagates up the less specific error from the previous mkdir() syscall.
It's extra confusing because the man page says:
-p, --parents
no error if existing, make parent directories as needed
so it shouldn't error if the dir exists, yet ls -l / shows:
d????????? ? ? ? ? ? files
so according to this (d), it is a directory, but it isn't according to test -d.
I believe a better error message (which mkdir -p should emit in this case) would be:
mkdir: cannot create directory ‘/mymount’: Transport endpoint is not connected
I have the following setup to periodically rsync files from server A to server B. Server B has the rsync daemon running with the following configuration:
read only = false
use chroot = false
max connections = 4
syslog facility = local5
log file = /var/adm/rsyncd.log
munge symlinks = false
secrets file = /etc/rsyncd.secrets
numeric ids = false
transfer logging = true
log format = %h %o %f %l %b
[BACKUP]
path = /path/to/archive
auth users = someuser
From server A I am issuing the following command:
rsync -adzPvO --delete --password-file=/path/to/pwd/file/pwd.dat /dir/to/be/backedup/ someuser@192.168.100.100::BACKUP
The BACKUP directory is fully read/write/execute for everyone. When I run the rsync command from server A, I see:
afile.txt
989 100% 2.60kB/s 0:00:00 (xfer#78, to-check=0/79)
for each and every file in the directory I wish to back up. It fails when it gets to writing tmp files:
rsync: mkstemp "/.afile.txt.PZQvTe" (in BACKUP) failed: Permission denied (13)
Hours of googling later and I still can't resolve what seems to be a very simple permission issue. Advice? Thanks in advance.
Additional Information
I just noticed the following occurs at the beginning of the process:
rsync: failed to set permissions on "/." (in BACKUP): Permission denied (13)
Is it trying to set permission on "/"?
Edit
I am logged in as the user someuser. My destination directory has full read/write/execute permission for everyone, including its contents. In addition, the destination directory is owned by someuser and is in someuser's group.
Follow up
I've found that using SSH solves this.
Make sure the user you rsync as on the remote machine has write access to the contents of the folder AND the folder itself, as rsync tries to update the modification time on the folder itself.
Even though you got this working, I recently had a similar encounter, and no SO or Google searching was of any help, as they all dealt with basic permission issues, whereas the solution below is an obscure setting you wouldn't even think to check in most situations.
One thing to check for with permission denied: I recently had issues with rsync myself where the permissions were exactly the same on both servers, including the owner and group, yet transfers worked one way on one server but not the other way.
It turned out the problem server I was getting permission denied from had SELinux enabled, which overrides POSIX permissions on files/folders. So even though the folder in question could have been 777, with SELinux enabled those permissions were overridden, which produced the "permission denied" error from rsync.
You can run the command getenforce to see if SELinux is enabled on the machine.
In my situation I ended up just disabling SELinux completely, because it wasn't needed, was already disabled on the server that was working fine, and was only causing problems. To disable it, open /etc/selinux/config and set SELINUX=disabled. To disable it temporarily, run setenforce 0, which puts SELinux into the permissive state rather than the enforcing state, so it prints warnings instead of enforcing.
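In commands (the sed line is just one way to make the permanent edit described above):
setenforce 0    # temporary: permissive until reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # permanent, takes effect after reboot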
The rsync daemon by default uses nobody/nogroup for all modules if it is running as root. So you either need to set the uid and gid parameters to the user you want, or set them to root/root.
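For example, applied to the [BACKUP] module from the question (the someuser values are an assumption; use whichever account should own the destination files):
[BACKUP]
path = /path/to/archive
auth users = someuser
uid = someuser
gid = someuser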
I encountered the same problem and solved it by chowning the destination folder to the right user. The current user did not have permission to read, write and execute the destination folder's files. Try adding the permissions with chmod a+rwx <folder/file name>.
This might not suit everyone since it does not preserve the original file permissions but in my case it was not important and it solved the problem for me. rsync has an option --chmod:
--chmod   This option tells rsync to apply one or more comma-separated "chmod" strings to the permission of the files in the transfer. The resulting value is treated as though it was the permissions that the sending side supplied for the file, which means that this option can seem to have no effect on existing files if --perms is not enabled.
This forces the permissions to be what you want on all files/directories. For example:
rsync -av --chmod=Du+rwx SRC DST
would add Read, Write and Execute for the user to all transferred directories.
I had a similar issue, but in my case the storage only offered SFTP, with no ssh or rsync daemons on it. I could not change anything, because this server was provided by my customer.
rsync could not change the date and time of the files, and some other utilities (like csync) showed me other errors: "Unable to create temporary file Clock skew detected".
If you have access to the storage server, just install openssh-server or launch rsync as a daemon there.
In my case I could not do this, and the solution was lftp.
lftp usage for synchronization is below:
lftp -c "open -u login,password sftp://sft.domain.tld/; mirror -c --verbose=9 -e -R -L /src/folder /rem/folder"
/src/folder is the folder on my PC; /rem/folder is sftp://sft.domain.tld/rem/folder.
You can find the man pages at lftp.yar.ru/lftp-man.html
Windows: Check the permissions of the destination folders. Take ownership if you must, to give rights to the account running the rsync service.
I had the same issue in the case of CentOS 7. I went through a lot of articles and forums but couldn't find the solution.
The problem was with SELinux. Disabling SELinux at the server end worked.
Check the SELinux status at the server end (the one you are pulling data from using rsync).
Commands to check SELinux status and disable it:
$ getenforce
Enforcing   ## this means SELinux is enabled
$ setenforce 0
$ getenforce
Permissive
Now try running the rsync command at the client end; it worked for me.
All the best!
I have a CentOS 7 server with rsyncd on board:
/etc/rsyncd.conf
[files]
path = /files
By default SELinux blocks rsyncd's access to the /files folder.
# this sets needed context to my /files folder
sudo semanage fcontext -a -t rsync_data_t '/files(/.*)?'
sudo restorecon -Rv '/files'
# set the needed boolean
sudo setsebool -P rsync_client 1
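You can verify the resulting label with:
ls -Zd /files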
Disabling SELinux is easy, but it is not a good solution.
I had the same issue, so I first SSHed into the server to confirm that I was able to log in, using the command:
ssh -i /Users/Desktop/mypemfile.pem user@ec2.compute-1.amazonaws.com
Then, in a new terminal, I copied a small file to the server using SCP, to make sure I was able to make a connection:
scp -i /Users/Desktop/mypemfile.pem /Users/Desktop/test.file user@ec2.compute-1.amazonaws.com:/home/user/test/
Then, in the same new terminal, I tried running rsync:
rsync -avz -e "ssh -i /Users/Desktop/mypemfile.pem" /Users/Desktop/backup/image.img.gz user@ec2.compute-1.amazonaws.com:
If you're on a Raspberry Pi or another Unix system with sudo, you need to tell the remote machine where the rsync and sudo programs are located.
I put in the full path to be safe.
Here's my example:
rsync --stats -paogtrh --progress --omit-dir-times --delete --rsync-path='/usr/bin/sudo /usr/bin/rsync' /mnt/drive0/ pi@192.168.10.238:/mnt/drive0/
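Note that the remote sudo call only works non-interactively if the remote user can run rsync through sudo without a password prompt; a hypothetical sudoers entry for that would be:
# /etc/sudoers.d/rsync (hypothetical)
pi ALL=(ALL) NOPASSWD: /usr/bin/rsync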
I imagine a common error not yet mentioned above is trying to write to a mount point (e.g., /media/drivename) when the partition isn't mounted. That will produce this error as well.
If it's an encrypted drive set to auto-mount but it doesn't, the fix may be to unlock the encrypted partition before attempting to write to the space where it is supposed to be mounted.
I had the same error while syncing files inside a Docker container where the destination was a mounted volume (Docker for Mac), and I ran rsync via su-exec <user>. I was able to resolve it by running rsync as root with the -og flags (keep owner and group for destination files).
I'm still not sure what caused the issue; the destination permissions were OK (I ran chown -R <user> on the destination dir before rsync). Perhaps it is somehow related to Docker for Mac's slow filesystem.
Pay attention to -e ssh and jenkins@localhost: in the next example:
rsync -r -e ssh --chown=jenkins:admin --exclude .git --exclude Jenkinsfile --delete ./ jenkins@localhost:/home/admin/web/xxx/public
That helped me.
P.S. Today I realized that when you add the jenkins user to some group, the permissions only take effect after a slave (agent) restart. So my solution (-e ssh and jenkins@localhost:) is only needed when you can't restart the agent/server.
Yet another way to get this symptom: I was rsyncing from a remote machine over ssh to a Linux box with an NTFS-3G (FUSE) filesystem. Originally the filesystem was mounted at boot time and thus owned by root, and I was getting this error message when I did an rsync push from the remote machine. Then, as the user the rsync is pushed to, I did:
$ sudo umount /shared
$ mount /shared
and the error messages went away.
The user and group of the destination directory and subdirectories should match the user doing the transfer.
If the user is 'abc', then the destination directory should look like:
lrwxrwxrwx 1 abc abc 34 Jul 18 14:05 Destination_directory
Command: chown abc:abc Destination_directory
Surprisingly, nobody has mentioned the all-powerful sudo.
I had the same problem, and sudo fixed it.
Running ssh with root access should solve this problem,
or chmod 0777 /dir/to/be/backedup/
or chown username:user /dir/to/be/backedup/
A command I executed in cygwin hosed up a bunch of files. Now I cannot delete them. Omitting most of the 'ls' output, here is what I'm dealing with:
% ls -l
ls: cannot access WSERV001.txt: No such file or directory
-rw-r--r-- 1 mccppk mkgroup-l-d 50 Sep 17 16:57 WSERV001.text
??????????? ? ? ? ? ? WSERV001.txt
% rm WSERV001.txt
rm: cannot remove `WSERV001.txt': No such file or directory
% touch WSERV001.txt
touch: cannot touch `WSERV001.txt': Permission denied
The .text file is normal. The .txt file (directory entry anyway) is obviously hosed. Any ideas on how to get the .txt file deleted?
I had the same problem and fixed it as follows (under Win7):
Open a cmd windows (run as Administrator)
takeown /r /f DRIVE:\PATH
icacls DRIVE:\PATH /grant USERNAME:F /T
where USERNAME is the Win7 username you are running this under.
Also make sure cron.exe is NOT running for user USERNAME or SYSTEM (this can be checked from the Task Manager) and that no Cygwin programs are running.
Once all has been checked and done, you should be able to delete your files.
Hope this helps,
Jean
I have a reproducible case, and none of what is suggested here helps because of permission restrictions.
Under cygwin:
[Sakis@t0000000000]$ ll
total 0
drwxr-x--- 1 ???????? ???????? 0 Jul 4 02:51 t0000000000_1.db/
[Sakis@t0000000000]$
Trying to take ownership from an admin cmd console:
c:\t0000000000> takeown /r /f t0000000000_1.db
ERROR: Access is denied.
Trying to delete from an admin cmd console:
c:\t0000000000> rmdir /S t0000000000_1.db
t0000000000_1.db, Are you sure (Y/N)? Y
Access is denied.
I also cannot change the owner from the Windows GUI. It complains that I need read permissions.
--- RESOLVED ---
Finally, I managed to delete it by opening a cmd console with administrative privileges and executing:
rm -r <dir>
TIP: You have to make sure that the directory is not in use at all. You can use procmon to find out who is locking that directory.
Attempts to use chown and chmod, even as root, failed (I don't recall the error).
I'm pretty sure my disk is fine. I run DiskCheckup daily for a strong history of SMART settings, and checked it this morning. No worries there.
Since the original problem and post, I got busy, and just now got back to that same local shell window. Those files were gone. This was a local cygwin shell on my laptop, so I know that no one else "helped". Strange. Those .txt files were just not there anymore.
I'm still curious what would cause ls to output all question-marks like that for all of the file metadata, except for the filename. But the main issue is resolved.
As admin, this should fix it:
chown <yourusername> WSERV001.txt
chmod 666 WSERV001.txt
rm -f WSERV001.txt
If not, you might have disk errors.