I have a path where log files are generated dynamically every day with a timestamp and 400 (-r--------) permission, so only the owner of these files can view the logs.
Logs path: /dir_01/abc_01/logpath
Log files:
-r-------- LogFile_20141001
-r-------- LogFile_20141002
-r-------- LogFile_20141003
I want others to be able to view the logs, but I can't give read permission on the logs to others, and copying the logs every time to another location (e.g. /dir_02/logs) and granting permission there so that others can see them is really difficult, as the logs are created dynamically. Is there any way that, whenever the logs are created in the actual log path, i.e. /dir_01/abc_01/logpath, they also appear in some other path like /dir_02/logs with read permission for others? Would mounting be helpful in this scenario, and if so, how?
It is possible to use the umask option (for some filesystems, e.g. vfat) during mounting, and then all files created in that directory will have the required permissions, but a definitely better option is to use extended ACLs: with a default ACL, all files created in the directory (or directories) will get permissions set according to your requirements.
The umask syscall (as opposed to the umask mount option) sets permissions only for the calling process. That means that if another process with a different umask creates a file or directory, the permissions will not match your requirements.
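For example, a default ACL could grant read access to a chosen group (a minimal sketch; logreaders is an assumed group name, and since the application creates the files with mode 400 the ACL mask may still limit the effective rights, so test this against your log writer):

# let the group enter the directory and read the existing files
setfacl -m g:logreaders:rx /dir_01/abc_01/logpath
setfacl -m g:logreaders:r /dir_01/abc_01/logpath/LogFile_*
# default ACL: files created in the directory from now on inherit a group read entry
setfacl -d -m g:logreaders:r /dir_01/abc_01/logpath
# verify the entries
getfacl /dir_01/abc_01/logpath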
I can't tell whether these are the same files or not:
/dir_01/abc_01/logpath
/dir_02/logs
But if you want to do something exactly at the moment a file is created, then you need inotify to monitor the directory (to catch the creation event) and execute another action when the file appears, as in the sketch below.
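A minimal sketch with inotifywait (from the inotify-tools package); the paths are the ones from the question, /dir_02/logs is assumed to already exist, and the script has to run as the owner of the logs or as root because the files are created with mode 400:

#!/bin/sh
# copy every newly created log file to /dir_02/logs and make it world-readable
inotifywait -m -e create --format '%f' /dir_01/abc_01/logpath |
while read -r file; do
    cp "/dir_01/abc_01/logpath/$file" /dir_02/logs/
    chmod o+r "/dir_02/logs/$file"
done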
Related
There is a (Linux) directory like below:
/a/b/c/d
which is accessible from multiple machines (NFS)
Now, the owner of everything in that directory is dir_owner.
However, someone or something that can sudo as dir_owner is changing the permissions of directory d to 777 and then writing to that location as a different user. This different user is always the same user, say, unauthorised_user.
I was hoping to track what is causing this. Could someone please help me out on this? Let me know if any more information is required.
You can use the stat command, which is used for viewing file or file system status.
More information and parameters for stat can be found on the following webpage:
https://ss64.com/bash/stat.html
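For example, a quick way to see the current state of the directory (just a sketch using the path from the question):

# owner, group, octal permissions and time of last status change
stat -c '%U %G %a %z' /a/b/c/d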
Another option is auditd, which can be configured to write audit records to disk; more information is at the following webpage:
https://man7.org/linux/man-pages/man8/auditd.8.html
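A minimal sketch of an audit rule for this case (assuming auditd is installed and running; the key name perm_watch is arbitrary, and since the directory is on NFS the rule has to be added on the machine where the chmod is actually executed):

# watch the directory for attribute changes (chmod/chown) and writes
auditctl -w /a/b/c/d -p wa -k perm_watch
# later, search the audit log for matching events (shows uid, auid and the syscall)
ausearch -k perm_watch -i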
While importing an unmanaged cluster, the job errors with "Permission denied"?
OpsCenter developer here. It's hard to give a meaningful answer with so little information.
If you click through on the error, you'll get a more in-depth description.
If you look at the opscenterd.log file, you'll find additional context. You can log the LCM job-events to that file by setting the 'lcm' logger to debug in opscenterd's logback.xml.
But if I had to guess: you don't have filesystem permissions to write in the home directory of the user you're logging in as. Log in as the user you specified in your LCM ssh credentials, try to touch ./test-file, and see if you get a permissions error. If you do, you'll need to resolve that outside of LCM before you can proceed; LCM needs to write a temporary file to your home directory.
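For example (a sketch; the host and user names are placeholders for whatever is configured in your LCM ssh credentials):

ssh lcm-user@target-node
touch ./test-file     # should succeed without a permission error
ls -ld "$HOME"        # check who owns the home directory and what its mode is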
This is driving me up the wall. I am still a little new to Linux, but I do understand how to do most of the day-to-day stuff I need to do. What I am trying to do is mount an Amazon S3 bucket to a mount point on my server.
I am running Ubuntu Server 12.04, fully up to date. I followed this guide,
http://www.craiglotter.co.za/2012/04/20/how-to-install-s3fs-on-an-ubuntu-server/
on how to install FUSE and s3fs on my server, but it just says it cannot 'establish security credentials'. I have used a passwd-s3fs file within /etc and have also tried a .passwd-s3fs file within the home folder (/home/USERNAME is where I put it). These files do contain the access key ID and the secret access key (accessKeyId:secretAccessKey is the format used).
If I change the chmod on either file from 600 to, say, 777, it reports back that this is wrong and that the file must not have 'other' permissions. So I know it is using the files.
So what am I doing wrong?
I also made a new user; the access details I have been using are for the default user login, but it would not take those either. I am not sure how to assign that user to a selected bucket, or do I have to do it some other way?
Please help?
UPDATE:
What I wanted to know is whether the details I got from Amazon are right, so I downloaded TntDrive to test them on Windows and there were no problems. It mounted my drive without any issues.
Try this link: http://code.google.com/p/s3fs/wiki/FuseOverAmazon
Also remember that the credentials files may not have lax permissions, as this creates a security hole.
i.e.
~/.passwd-s3fs may not have group/others permissions and /etc/passwd-s3fs may not have others permissions. Set permissions on these files accordingly:
% chmod 600 ~/.passwd-s3fs
% sudo chmod 640 /etc/passwd-s3fs
It should work; it's working for me.
Please make sure that you:
1) Use this format if you have only one set of credentials:
accessKeyId:secretAccessKey
2) If you have more than one set of credentials, this syntax is also recognized:
bucketName:accessKeyId:secretAccessKey
3) Password files can be stored in two locations:
/etc/passwd-s3fs [0640]
$HOME/.passwd-s3fs [0600]
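Putting that together, a minimal sketch (the bucket name mybucket and the mount point ~/s3bucket are placeholders):

echo 'accessKeyId:secretAccessKey' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
mkdir -p ~/s3bucket
s3fs mybucket ~/s3bucket -o passwd_file=${HOME}/.passwd-s3fs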
OK,
I do not know why I had this problem, as I did make the file within Linux, but basically my password file was not in a Linux-readable format.
I used dos2unix (just Google it and you will find out about it). That fixed my problem, and then I could see the Amazon S3 server.
The next issue I had was that Samba would not share the drive out; I had to use the '-o allow_other' option when mounting the drive.
Note that you will or might have to enable 'user_allow_other' in fuse.conf (/etc/fuse.conf). You cannot miss the option; it just has a # in front of it, so remove that and save the file.
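For example (a sketch; mybucket and the mount point are the same placeholders as above):

# in /etc/fuse.conf, remove the leading '#' from this line and save the file:
user_allow_other
# then mount with allow_other so that other users (e.g. the samba daemon) can read the files:
s3fs mybucket ~/s3bucket -o allow_other -o passwd_file=${HOME}/.passwd-s3fs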
Hope that helps someone else.
I have multiple developers trying to rsync files (Symfony PHP projects) to the same remote location. This has been set up as follows:
Each user has their own login on the remote server
Each user is a member of the same group on the server, say "mygroup"
Files locally and at the rsync destination are owned by a user and the group. E.g. someuser:mygroup
As far as I am aware, you must own a directory in order to set its access and modification times to an arbitrary value; being a member of the owning group is not enough. For this reason, if user A tries to rsync directories owned by user B, rsync outputs the following errors:
rsync: failed to set times on "/some/path": Operation not permitted (1)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1058) [sender=3.0.5]
So, what is the correct way to set up users and groups when multiple users rsync to the same remote location?
What Let_Me_Be said: deploy from Git (or Mercurial) to testing or staging, then rsync from there to live. Better still, use something like Hudson/Jenkins to manage the whole shooting match for you.
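A sketch of that last step, assuming a single dedicated deploy account owns the live directory (host names and paths are placeholders):

# run on the staging/CI machine as the deploy user, so ownership at the destination stays consistent
rsync -az --delete /srv/staging/myproject/ deploy@live-server:/var/www/myproject/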
I wish to delete files from a networked PC. The user has full control over the shared folder on the PC from which the files are to be deleted.
I have this code:
if (status)
{
    if (File::Exists(selectedfile))
        System::IO::File::Delete(selectedfile);
    else
        MessageBox::Show("File does not exist.");
}
else
{
    if (!System::IO::Directory::Exists(selectedfile))
        MessageBox::Show("The directory does not exist.");
    try
    {
        System::IO::Directory::Delete(selectedfile, true);
        if (System::IO::Directory::Exists(selectedfile))
        {
            deleted = false;
            // second attempt: the first call removes the files but not the folder itself
            System::IO::Directory::Delete(selectedfile, true);
        }
        else
            deleted = true;
    }
    catch (Exception^ ex)
    {
        MessageBox::Show(ex->Message);
    }
}
I included the second delete in the directory branch because the folder is not deleted on the first attempt; only the files inside the folder are deleted. However, I get an 'Access denied' error whenever I try to delete the now-empty folder.
How can I make sure that the directory and all its contents are deleted?
This is quite normal, one of the things that a multi-tasking operating system needs to deal with. The directory is in fact marked for deletion, but it cannot be removed yet because one or more processes have a handle open on the directory. In the case of Windows, that is commonly a process that uses the directory as its current working directory. Or maybe you've got an Explorer window open, looking at how your program is doing its job. Explorer uses ReadDirectoryChangesW() to get notified about changes in the directory so it knows when to refresh the view.
The directory will be physically removed from the drive as soon as the last handle is closed. While it exists in this zombified state, any attempt to do anything with the directory will produce an access error (Windows error code 5).
You'll need to account for this behavior in your program. Definitely remove the second Directory::Exists() test; when you don't get an exception from the Delete call, you'll need to assume that the directory got deleted. That will be accurate, eventually.
You need file server functionality on computers A and B, and a client written on computer C.
The server could be a kind of FTP server, where you have to explicitly configure which directories are handled on both sides.
Alternatively, the server can be a Windows share. You can use UNC file names to address these files through the Windows API on computer C. Once you have mapped network drives on computer C, you can work with the network files as you would with local files.
The computers A and B must be configured so that there are shares with sufficient rights.