I am attempting to integrate a standalone product into an LDAP environment.
I have a RHEL 6.7 system configured for LDAP authentication (via SSSD) to which I need to programmatically add local users and groups.
The input XML file has a list of users and groups with the group membership, login shell, user ID and group ID that should be used.
Now comes the problem. I have a Perl script that uses the XML file to configure the users and groups; it calls getgrnam and getpwnam to query for users and groups, then makes a system call to groupmod/usermod if the entry exists or groupadd/useradd if it doesn't. I found that if LDAP has a group with the same name as the group I am trying to create, my script sees the group as existing and jumps to groupmod instead of groupadd. The group binaries only perform operations on local groups, so they fail because the group doesn't exist locally. NSS is set up to check files and then sss, which explains why getgrnam returns the LDAP group.
Is there a way to have getgrnam and getpwnam only query the local system without having to reconfigure nsswitch.conf and possibly stop/start SSSD when I run the script?
Is there another Perl function I can use to query only local users/groups?
Short answer is no - the purpose of those function calls is to make the authentication mechanisms transparent. There is a variety of backends you could be using, and no one wants to hand-roll their own local files/LDAP/YP/NIS+/arbitrary PAM authentication mechanism.
If you're specifically interested in the contents of the local passwd and group files, I'd suggest the answer is - read those directly.
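If it helps, here is a minimal sketch of that idea in Perl (the helper name local_group_gid is just illustrative, not anything your script has to adopt); it parses /etc/group directly, so groups served by sss over LDAP are never seen:

# Look up a group in the local /etc/group file only, bypassing NSS/SSSD.
# Returns the GID if the group exists locally, or undef if it does not.
sub local_group_gid {
    my ($name) = @_;
    open my $fh, '<', '/etc/group' or die "Cannot open /etc/group: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($gname, undef, $gid) = split /:/, $line;
        next unless defined $gname && defined $gid;
        return $gid if $gname eq $name;
    }
    return;
}

# Example: call groupmod if this returns a defined value, groupadd otherwise.
# The same approach works for users: parse /etc/passwd and compare the first
# field against the user name instead of calling getpwnam().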
I created a user account on an Amazon Linux instance as the root user. I found that if I create a user account (for example, ec2-user), that account will not have execute and write permissions on the Hadoop File System (HDFS), Hive, Pig and the other tools installed on Amazon EMR. To give them explicit permissions, I have to create a group with permissions equivalent to the superuser (root) account and add users to that group. Is there any other way I can set up access to HDFS, Hive, Pig, etc. for those accounts?
Also, while logging in as the user, the Linux command prompt is not asking for a password even though I set one for the user account while creating it. Are there any configuration changes I need to make in the /etc/ssh/sshd_config file?
Your question is not entirely clear to me, but let me attempt an answer based on what I understood.
When security is enabled, Hadoop enforces permissions per user. It seems your user needs a separate space for writes and executions, i.e. a home directory in HDFS.
First log in as the 'hdfs' user in a terminal and then create a home directory for your user in HDFS. Check whether you have a directory called /user/{yourUser}; if not, create it. Then make sure you make {yourUser} the owner of /user/{yourUser}.
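Assuming your account is the ec2-user from your question (substitute your own name), the commands would look something like this:

su - hdfs
hadoop fs -mkdir /user/ec2-user
hadoop fs -chown ec2-user:ec2-user /user/ec2-user

On newer releases the equivalent hdfs dfs -mkdir / -chown form works as well.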
On many varieties of Linux PostgreSQL runs under a separate user account, so you have to do:
sudo su - postgres
to get any work done. That's all well and good if you just want to type SQL in manually, but what if you have a migration written in a programming language (in my case, Node/Knex)?
Is the common practice to somehow make the code aware of the user situation (i.e. write something equivalent to sudo su - postgres into my code)?
Or, is it to run all of my code as the DB user (even though that would mean giving my DB user permissions on my non-DB user's home folder)?
Or, is it to make my normal user have Postgres access (in which case why does Linux even bother setting Postgres up on a separate user)?
Or, is there some other approach I'm missing?
P.S. I realize this is somewhat a systems administration question, but I posted here rather than Super User because it's specifically about running programmer-written code (which just happens to alter a database).
You are conflating three separate user accounts.
First there is the OS account under which the PostgreSQL daemon runs. As you say, in most Linux distros this is a separate user used only for this purpose, often named postgres. This prevents other users on the system from accessing the PostgreSQL data files and other resources, and also limits the damage that could be done by someone who hacked their way into the database.
Then there is the user account which the client program, such as psql or your migration tool, runs under.
Finally there is the PostgreSQL user account. PostgreSQL has its own user account system to manage the permissions of users within the databases it administers, unconnected to the OS user account system.
The one area of overlap between the OS accounts and the PostgreSQL database accounts is that the psql command-line tool will connect to the database using a user name matching the OS user running the tool if you do not specify a user on the command line. For example, if I connect with this:
psql mydatabase
then it will attempt to connect with the user harmic, my Linux user account, but if I use this:
psql -U postgres mydatabase
then it will connect with the user postgres, which is the default administrator account.
Another related aspect is the authentication method. Most likely, if you try the above command on your machine, you will get an error. This is due to the allowed authentication methods, which are configured in the file pg_hba.conf. This file specifies which authentication methods specific users may use when connecting to specific databases from specific hosts. The postgres user is normally only allowed to connect from within the same host, using ident as the authentication method, which means identifying the user based on the OS user running the command.
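For reference, on many installations the relevant default entry looks something like the line below (the exact columns and method name vary by distribution and PostgreSQL version; newer releases call this method peer for local socket connections):

local   all   postgres   ident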
This explains why you have been using sudo su - postgres to switch to the postgres user: most likely in your current configuration that is the only way to access this account.
OK, this probably all sounds rather complex. To simplify things, here are my recommendations for best practices in this area (a worked example follows after them):
Do not mess with the OS account used to run the database backend. It is not needed and would weaken security.
Create a separate database account for administrating the application's database(s). Use this account rather than the postgres account for migration scripts and the like. The reason for this is that the postgres account has full permissions over all databases on the server, while you can grant your admin user only the permissions it needs, and only to the database(s) the application controls (not any other databases that might be there). See: CREATE USER SQL command.
Update the pg_hba.conf file to specify the authentication mode that will be used to authenticate this user. See Client Authentication in the manual. md5 with a suitably strong password might be a good choice.
Update your migration tool to use this new user. The user (and password if using passwords) would be supplied via the connection string or connection parameters supplied when connecting to the database. Likewise when connecting with psql specify the user name with the -U option.
Note that there is no need to use sudo su - or even to have an OS account with the same name as the admin user.
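To illustrate the last three recommendations with one hedged example (myapp_admin, mydatabase and the password are placeholders, not names PostgreSQL requires), first create the admin role while connected as postgres:

CREATE USER myapp_admin WITH PASSWORD 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON DATABASE mydatabase TO myapp_admin;

Then add a pg_hba.conf line allowing that user to connect locally over TCP with an md5 password (and reload the server afterwards):

host    mydatabase    myapp_admin    127.0.0.1/32    md5

After that you can connect without switching OS users:

psql -U myapp_admin -h 127.0.0.1 mydatabase

and your migration tool would use the equivalent connection parameters, e.g. a connection string such as postgres://myapp_admin:choose-a-strong-password@127.0.0.1/mydatabase.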
How can I create users in Subversion, so that when people access the repository they only see their own projects or files and not those of other people?
The answer depends on how the Subversion server is set up.
If you're using httpd, then it depends on how authentication is set up in httpd. See SVNBook | Authentication options.
If you're using svnserve, then you can use the built-in authentication setup (see the sketch after these options).
And if you're using svn+ssh://, then either users log into SSH as their own user (in which case adding users is a matter of adding users to SSH) or they log in as a shared user and the --tunnel-user argument gets set.
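For the svnserve option, a minimal hedged sketch (user names, passwords and paths are placeholders): in the repository's conf/svnserve.conf enable the password and authorization files,

[general]
anon-access = none
auth-access = write
password-db = passwd
authz-db = authz

list the users in conf/passwd,

[users]
alice = alicespassword
bob = bobspassword

and, since the question also asks that each person only see their own projects, put per-path rules in conf/authz, for example a section like [/projects/alice] containing alice = rw.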
I have a dedicated Linux web server with many user accounts on it. The user accounts are all located in /home/[userid] directories. I am able to create Perl scripts that run within each of my users’ accounts that can access files only within their own account, but now I need to create a script that can run “above” the users’ accounts and be able to access a file within any specified user’s account.
Currently, I have a script that uses Net::FTP to retrieve the needed file from each account so I can extract the necessary data from it, but of course, it’s slow to FTP into every account. Since the accounts are merely directories on the server, I’m looking for a way to run a Perl script in a way that it can access each account directory and simply open the required file and return the requested data for the specified account.
How can I accomplish this?
You should log in as a user that has access to all the user directories (e.g. root). For security reasons, it might be safer to use SFTP or some other encrypted connection.
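If the script runs locally as such a user, a minimal Perl sketch looks like this (data.txt is only a placeholder for whatever file you currently fetch over FTP):

#!/usr/bin/perl
use strict;
use warnings;

# Walk each account directory under /home and read one file from it directly.
foreach my $dir (glob '/home/*') {
    my $file = "$dir/data.txt";    # placeholder file name
    next unless -f $file;
    open my $fh, '<', $file or do { warn "Cannot open $file: $!"; next };
    while (my $line = <$fh>) {
        # extract the data you need from $line here
    }
    close $fh;
}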
So I have one part of my build that requires domain rights and does file copying.
Another part of my build runs a program that requires the user to interact with the desktop, which seems to be achievable only with the system account.
What is the best way to work around these two items? At the moment it seems like I can only do one or the other...
One way is to do net use s: \\<share path> <password> /user:<domain user> /savecred, which will allow your system account to impersonate the domain user for the share connection.
Another way is to use runas.exe /user:<your domain user> /savecred <program>. Note that this would require someone running runas to enter the domain user's password from the context of your CC.Net user. You can do this by opening a console as the system account (there are numerous articles on this topic, because of CC.Net SVN integration) and manually running runas /savecred and providing the password.
An alternative is to create a separate COM service that runs as the domain account and have a command-line tool that CC.Net invokes that calls that service.
Yet another alternative would be to have CC.Net schedule an immediate task running as the domain user. You can use schtasks.exe to do that. You'll need an XML file with the task definition, which will need to contain the domain user name and password.
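A hedged sketch of that last option (the task name, domain and account are placeholders; the credentials can also be supplied on the command line with /RU and /RP instead of living in the XML):

schtasks /Create /TN "CCNetDomainCopy" /XML copytask.xml /RU MYDOMAIN\builduser /RP <password>
schtasks /Run /TN "CCNetDomainCopy"

CC.Net would then invoke the /Run command whenever the build reaches that step.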