There is a (linux) directory like below:
/a/b/c/d
which is accessible from multiple machines (NFS)
Now, the owner of everything in that directory is dir_owner.
However, someone or something that can sudo as dir_owner is changing the permissions of directory d to 777 and then writing to that location as a different user. This different user is always the same user, say, unauthorised_user.
I was hoping to track what is causing this. Could someone please help me out on this? Let me know if any more information is required.
You can use the stat command, which shows file or file system status.
More information and parameters for stat are on the following webpage:
https://ss64.com/bash/stat.html
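For example, with GNU stat the format string below (just one possible choice) prints the name, owner, octal mode, and the ctime, i.e. the last time the permissions or ownership were changed:
stat -c '%n owner=%U mode=%a changed=%z' /a/b/c/d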
Another option is auditd, which can be configured to write audit records to disk; more information is at the following webpage:
https://man7.org/linux/man-pages/man8/auditd.8.html
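For example, a minimal sketch with auditctl; the key name dir_d_watch is just a label I chose. Note that audit rules only record accesses made on the machine where they are loaded, so with an NFS share you may need the rule on each machine that can touch the path:
# watch directory d for writes and attribute (permission/ownership) changes
auditctl -w /a/b/c/d -p wa -k dir_d_watch
# later, search the audit log for matching events (interpreted output shows uid, comm, etc.)
ausearch -k dir_d_watch -i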
I need to access log files from my network folder through a Groovy script. I'm very new to Groovy scripting, please help me.
Here I'm using Ready API.
The expected result is to access my network log file and print the error logs.
Well,
if you are running the project with a credential (Windows user) that has access to the required file, you can access it directly:
String fileContents = new File('<>').text
Assert the content as you like.
If you don't have direct access, first you need to find a user that has access.
Then you have one of two options:
Add a network drive and use the network drive as a local drive.
Import libraries that use the SMB1 or SMB2 protocols, like jcifs or similar (avoid this).
PS: It's not good practice to use a Groovy script to access a network folder, whatever your need may be.
I have a path where some log files are generated dynamically every day with a timestamp and 400 (-r--------) permission, so only the owner of these files can view the logs.
Logs path : /dir_01/abc_01/logpath
Log files :
-r-------- LogFile_20141001
-r-------- LogFile_20141002
-r-------- LogFile_20141003
I want others to view the logs, but I can't give read permission on the logs to others, and copying the logs every time to another location (e.g. /dir_02/logs) and giving permission there so that others can see them is really difficult, as the logs are created dynamically. Is there any way that, whenever the logs are created in the actual logs path, i.e. /dir_01/abc_01/logpath, the same is updated in some other path like /dir_02/logs with read permission for others? Would mounting be helpful for this scenario, and if so, how?
It is possible to use the umask option (for some filesystems, e.g. vfat) during mounting, and then all files created in that directory will have the required permissions, but a definitely better option is to use extended ACLs; then all files created in the directory (or directories) will have their permissions set up according to your requirements.
The umask syscall (not the umask mount option) sets up permissions only for the calling process. It means that if another process, which has another umask, creates a file/dir, the permissions will not match your requirements.
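For example, a minimal sketch with POSIX ACLs; the group name "viewers" is hypothetical. Note that if the application creates files with mode 400, the ACL mask is derived from that mode and may limit the group entry, so check the effective permissions with getfacl:
# give the extra group read access on what already exists
setfacl -R -m g:viewers:rX /dir_01/abc_01/logpath
# default ACL so newly created files inherit the entry
setfacl -d -m g:viewers:rX /dir_01/abc_01/logpath
# verify what is actually effective
getfacl /dir_01/abc_01/logpath/LogFile_20141001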
I can't tell whether these are the same files or not:
/dir_01/abc_01/logpath
/dir_02/logs
But if you want to do something at exactly the moment a file is created, then you need inotify to monitor the directory (to catch the event) and execute another action when the file is created.
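For instance, a rough sketch using inotify-tools; the destination /dir_02/logs comes from the question, and the loop must run as a user that can read the 400-mode files (the owner or root):
# publish a world-readable copy of each log once the writer has finished with it
inotifywait -m -e close_write --format '%f' /dir_01/abc_01/logpath |
while read -r file; do
    install -m 644 "/dir_01/abc_01/logpath/$file" "/dir_02/logs/$file"
done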
This is driving me up the wall. I am still a little new to Linux, but I do understand how to do most of the day-to-day stuff I need to do. What I am trying to do is mount an Amazon S3 bucket to a mount point on my server.
I am running Ubuntu Server 12.04 and it is fully up to date. I followed this guide,
http://www.craiglotter.co.za/2012/04/20/how-to-install-s3fs-on-an-ubuntu-server/
on how to install FUSE & S3FS on my server. But it just says it cannot 'establish security credentials'. I have used a passwd-s3fs file within /etc and have tried a .passwd-s3fs file within the home folder (/home/USERNAME - that is where I put it). These files do have the access key ID and the secret access key (ID:ACCESSKEY <- format used).
If I change the chmod on either file from 600 to, say, 777, it reports back that this is wrong and that the file must have no other permissions. So I know it's using the files.
So what am I doing wrong?
I also made a new user; the access details I have been using are for the default user login, but it would not take them either. I'm not sure how to assign that user to a selected bucket, or do I have to do it some other way?
Please help?
UPDATE:
What I wanted to know is whether the details I got from Amazon are right, so I downloaded TntDrive to test them in Windows and there were no problems. It mounted my drive without any issues.
Try this link: http://code.google.com/p/s3fs/wiki/FuseOverAmazon
and also remember that the credentials files may not have lax permissions, as this creates a security hole,
i.e. ~/.passwd-s3fs may not have others/group permissions and /etc/passwd-s3fs may not have others permissions. Set permissions on these files accordingly:
% chmod 600 ~/.passwd-s3fs
% sudo chmod 640 /etc/passwd-s3fs
It should work; it's working for me.
Please make sure that you:
1) Use this format if you have only one set of credentials:
accessKeyId:secretAccessKey
2) If you have more than one set of credentials, this syntax is also recognized:
bucketName:accessKeyId:secretAccessKey
3) Password files can be stored in two locations:
/etc/passwd-s3fs [0640]
$HOME/.passwd-s3fs [0600]
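For reference, a minimal sketch of creating the password file and mounting; the bucket name mybucket, the mount point /mnt/s3, and the credentials are all placeholders to substitute:
echo 'accessKeyId:secretAccessKey' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
sudo mkdir -p /mnt/s3
s3fs mybucket /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs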
OK,
I do not know why I had this problem, as I did make the file within Linux, but basically my password file was not in a Linux-readable format.
I used dos2unix (just Google it and you will find out about it). That fixed my problem and then I could see the Amazon S3 server.
The next issue I had was that Samba would not share the drive out; I had to use the '-o allow_other' option when mounting the drive.
Note that you will/might have to enable 'user_allow_other' in fuse.conf (/etc/fuse.conf). You can not miss the option; it just has a # in front of it, so remove that and save the file.
Hope that helps someone else.
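A sketch of those two fixes together (bucket name and mount point are again hypothetical):
dos2unix ~/.passwd-s3fs
sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf
s3fs mybucket /mnt/s3 -o allow_other -o passwd_file=${HOME}/.passwd-s3fs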
I have multiple developers trying to rsync files (symfony php projects) to the same remote location. This has been setup as follows:
Each user has their own login on the remote server
Each user is a member of the same group on the server, say "mygroup"
Files locally and at the rsync destination are owned by a user and the group. E.g. someuser:mygroup
As far as I am aware, you must own a directory in order to set its access and modification times to an arbitrary value; being a member of the owning group is not enough. For this reason, if user A tries to rsync directories owned by user B, rsync outputs the following errors:
rsync: failed to set times on "/some/path": Operation not permitted (1)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1058) [sender=3.0.5]
So, what is the correct way to setup users & groups when multiple users rsync to the same remote location?
Do what Let_Me_Be said, then deploy from Git (or Mercurial) to testing or staging, then rsync from there to live. Better still, use something like Hudson/Jenkins to manage the whole shooting match for you.
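If you do stick with per-developer rsync, one hedged sketch (the source directory is a placeholder; mygroup and /some/path come from the question): give the destination the setgid bit and group-write so everyone's files stay in the shared group, and pass -O so rsync skips setting times on directories it doesn't own:
sudo chgrp -R mygroup /some/path
sudo chmod -R g+rwX /some/path
sudo find /some/path -type d -exec chmod g+s {} +
rsync -a -O /local/project/ userA@server:/some/path/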
I am working on a CherryPy web service to work with my web app. In this service it needs to be able to access the file system. For example, I want to be able to list all files under a certain directory. I am using os.walk('/public/') but can't seem to get it to work, even though the same code works outside of CherryPy.
Is there a way to make it work so I can use CherryPy to manage files?
What user is the webapp running as, and does it have access to read the folder?
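A quick way to check (the process id and user name are placeholders):
ps -o user= -p <cherrypy_pid>
sudo -u <that_user> ls -l /public/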
According to the documentation, os.walk() will ignore errors from the underlying calls to os.listdir():
http://docs.python.org/release/2.4.4/lib/os-file-dir.html
You could try setting the onerror argument, like
def print_error(error):
    print error  # the OSError raised by os.listdir

for root, dirs, files in os.walk('/public/', onerror=print_error):
    pass
which might give you a hint as to what's going on.
Also, you could try calling os.listdir() directly and see if you get any errors from it.