I have a dedicated Linux web server with many user accounts on it. The user accounts are all located in /home/[userid] directories. I am able to create Perl scripts that run within each of my users’ accounts that can access files only within their own account, but now I need to create a script that can run “above” the users’ accounts and be able to access a file within any specified user’s account.
Currently, I have a script that uses Net::FTP to retrieve the needed file from each account so I can extract the necessary data from it, but of course, it’s slow to FTP into every account. Since the accounts are merely directories on the server, I’m looking for a way to run a Perl script in a way that it can access each account directory and simply open the required file and return the requested data for the specified account.
How can I accomplish this?
You should log in as a user that has access to all the user directories (e.g. root). For security reasons, it may be safer to use SFTP or some other encrypted connection.
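Since the answer boils down to "run one script with enough privilege and read each file directly", here is a minimal sketch in shell; the `data.txt` file name, the base path argument, and the one-line-per-account output are placeholder assumptions:

```shell
#!/bin/sh
# Sketch: read a given file from every account directory under a base path.
# "data.txt" stands in for whatever per-account file is actually needed.
collect_data() {
    base="$1"
    for dir in "$base"/*/; do
        [ -d "$dir" ] || continue
        user=$(basename "$dir")
        file="${dir}data.txt"
        if [ -f "$file" ]; then
            # Print "user: first line of the file"; real extraction goes here.
            printf '%s: %s\n' "$user" "$(head -n 1 "$file")"
        fi
    done
}

# Run as root (or any user that can read every home directory):
# collect_data /home
```

Accounts without the file are simply skipped, so the script is safe to run across a mixed set of home directories.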
I have an EC2 instance, where the backend for my Mobile App is hosted.
My developer needs access to my server in order to upload new code and, I guess, test it as well.
Now, I do not want to give him FTP details, so here is what I did:
Created a new Linux user
Created a new key pair from the EC2 console
Created a .ssh directory in the new user's home
Changed its permissions to 700 (so only the directory's owner can read/write/open it)
Created authorized_keys with the touch command in the .ssh directory
Changed its permissions to 600 (so only the file's owner can read/write it)
Retrieved the public key for the key pair
Added the public key to authorized_keys
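The steps above amount to something like the following sketch; the user name is a placeholder, the root-only commands are left commented, and the per-directory part is a function so the permission bits are easy to see:

```shell
#!/bin/sh
# Sketch of the setup described above. The commented lines need root.
# sudo adduser devuser                      # 1. create the Linux user

setup_ssh_dir() {
    home_dir="$1"   # e.g. /home/devuser
    pubkey="$2"     # the public key retrieved from the EC2 key pair

    mkdir -p "$home_dir/.ssh"
    chmod 700 "$home_dir/.ssh"                  # only the owner may enter it
    touch "$home_dir/.ssh/authorized_keys"
    chmod 600 "$home_dir/.ssh/authorized_keys"  # only the owner may read/write
    echo "$pubkey" >> "$home_dir/.ssh/authorized_keys"
}

# sudo chown -R devuser:devuser /home/devuser/.ssh   # hand ownership to the new user
```

Note the file sshd actually reads is `authorized_keys` (with a z) by default.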
Now I can share the newly generated PEM file with my developer, along with the username and my EC2 host IP address.
But I don't understand why I can't do this directly by creating an IAM user from the AWS Console and setting his permissions accordingly?
I am really confused because I first wanted to do it the IAM way, but everyone suggested I go with a Linux user - isn't it the same thing?
Also, I shall delete this user entirely once he is done with the work - right?
Furthermore, I don't understand something... after doing all this and setting up the new Linux user, I am able to connect to my server using just the Linux username and Unix password - without using the PEM file that I created - how is that?
Also, technically that new Linux user can simply delete my main Linux user... I mean, I could simply right-click on the user and press delete via FileZilla, for example. How can I prevent this from happening? Though even that wouldn't matter, as he could also simply delete my entire backend?
I have the following on my server now:
Home Folder
Home Folder > appBackend
Home Folder > mainLinuxUser
Home Folder > newLinuxUser
And last but not least, why does everyone always say never to share the private PEM file with anyone? At the end of the day, if I only allow specific IP addresses to connect to my EC2 instance, then I should never have to worry about anything. It's the same thing I did for my MongoDB - only if I add someone's IP address can that person connect and view my database. So even though I shared my DB configuration with all the previous developers, it won't matter, since their IPs are no longer in my security group - am I right?
Sorry, I am new to all this and I am trying to get my head around it all. I appreciate any help!
Creating a Linux user vs. an IAM user:
IAM users are for people who access AWS resources based on the permissions you grant. That means if you create an IAM user with full access to EC2 and give the details to your developer, he/she can log in to AWS and have full access to EC2: create, start/stop, reboot, and terminate your EC2 servers, etc.
IAM users/groups restrict access to AWS resources such as EC2, S3, VPC, etc. - not to the OS that runs inside an EC2 instance.
Yes, delete the user and keys once the work is done.
For logging in to a Linux server, you can use either username/password or username/key. Check your user settings to see whether you have enabled password login. Use "--disabled-password" or a similar option when creating the new user. See: https://aws.amazon.com/premiumsupport/knowledge-center/new-user-accounts-linux-instance/
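Whether password logins are accepted at all is also controlled by the PasswordAuthentication directive in sshd_config. A quick check could look like this sketch; it assumes the plain "PasswordAuthentication yes|no" form and ignores Match blocks and Include directives:

```shell
#!/bin/sh
# Sketch: report whether an sshd_config allows password logins.
password_auth_enabled() {
    config="$1"   # usually /etc/ssh/sshd_config
    # sshd uses the FIRST occurrence of a keyword; skip commented lines.
    val=$(awk '$1 !~ /^#/ && tolower($1) == "passwordauthentication" \
               {print tolower($2); exit}' "$config")
    # Upstream sshd defaults to "yes" when the directive is absent.
    [ "${val:-yes}" = "yes" ]
}

# Usage:
# if password_auth_enabled /etc/ssh/sshd_config; then
#     echo "password logins are allowed"
# fi
```

If it reports yes and you only want key-based logins, set "PasswordAuthentication no" and reload sshd.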
Check whether your new user has root or sudo access. If you have granted sudo or root access, the new Linux user can perform such actions.
Note that it is a best practice to remove the default user (ec2-user or ubuntu) once the server is up and running. The reason is that if the PEM key you used when creating the EC2 instance is compromised, your server is compromised. These default usernames are known to everyone, unlike the specific users you create, so it is better to delete them.
Also follow the least-privilege principle and grant the new user only the access it needs. That means you should create each user for a specific need or activity and restrict access to that activity alone. For example, if your user only needs to copy files to/from specific folders, put the user in a group and assign that group permissions on the required folders. Do not give the user any sudo access.
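That folder-scoped setup could be sketched like this; the group and user names are placeholders, and the commented lines need root:

```shell
#!/bin/sh
# Sketch: a group that can read/write one shared folder and nothing else.
# sudo groupadd uploaders            # create the group (root only)
# sudo usermod -aG uploaders devuser # add the restricted user to it

setup_shared_dir() {
    dir="$1"
    mkdir -p "$dir"
    # rwx for owner and group, nothing for others; the setgid bit (the
    # leading 2) makes new files inherit the directory's group.
    chmod 2770 "$dir"
    # sudo chgrp uploaders "$dir"    # needs root and an existing group
}

# setup_shared_dir /srv/uploads
```

With this in place the user can work inside the shared folder but has no access to anything owned by other accounts.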
Similarly, you can create an admin user with sudo access; however, make sure this user is not shared with other developers.
I am trying to connect every user account to our shared storage at login.
This works fine with the command in CMD:
net use \\NAS /USER:email@domain.com passwordhere
As we are using Azure Active Directory, I need an email address to connect to our shared storage. How can I read the current user's email address to replace the example address? This should all happen in a batch file, as there are plenty of other lines already written. (Also without admin permissions, if possible.)
Thanks,
Max
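For the question above: on an Azure AD-joined machine, `whoami /upn` usually prints the signed-in user's UserPrincipalName, which in many tenants is the e-mail address (that equivalence is an assumption about the directory setup). It can be captured in the batch file without admin rights:

```bat
@echo off
rem Sketch: read the current user's UPN and reuse it for the share connection.
rem Assumes the UPN matches the e-mail address used for the NAS login.
for /f "delims=" %%u in ('whoami /upn') do set "UPN=%%u"
net use \\NAS /USER:%UPN% passwordhere
```

When testing the `for /f` line interactively at a plain prompt, use `%u` instead of `%%u`.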
I created a user account on an Amazon Linux instance as the root user. I found that if I create a user account (for example, ec2-user), that account will not have execute and write permissions on the Hadoop file system, Hive, Pig, and the other tools installed on Amazon EMR. To give them explicit permissions, I have to create a group with permissions equivalent to the superuser (root) account and add the users to that group. Is there any other way I can set up access for those accounts to HDFS, Hive, Pig, etc.?
Also, when logging in as the user, the Linux command prompt does not ask for a password, even though I set one when creating the account. Are there any configuration changes I need to make in /etc/ssh/sshd_config?
Your question is not that clear to me.
But let me attempt an answer based on what I think I understood.
When security is enabled, Hadoop enforces permissions per user. It sounds like your user needs its own space for writes and executions, i.e. a home directory in HDFS.
First, log in as the 'hdfs' user in a terminal, then create a home directory for your user in HDFS. Check whether you have a directory called /user/{yourUser}; if not, create it. Then make sure you make {yourUser} the owner of /user/{yourUser}.
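The steps above, as commands run from a shell on the master node (this assumes `hdfs` is the HDFS superuser account, which is the usual default; replace yourUser with the account you created):

```shell
# Create the user's HDFS home directory and hand over ownership.
sudo -u hdfs hdfs dfs -mkdir -p /user/yourUser
sudo -u hdfs hdfs dfs -chown yourUser:yourUser /user/yourUser
```

After this, tools like Hive and Pig run as that user can write to its own HDFS home without superuser rights.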
I am attempting to integrate a standalone product into an LDAP environment.
I have a RHEL 6.7 system that is configured for ldap authentication (via sss) that I need to programmatically add local users and groups to.
The input xml file has a list of users and groups with their group membership, login shell, user id and group id that should be used.
Now comes the problem. I have a Perl script that uses the XML file to configure the users and groups: it calls getgrnam and getpwnam to query for users and groups, then makes a system call to groupmod/groupadd and usermod/useradd depending on whether the user or group exists. I found that if LDAP has a group with the same name as the group I am trying to create, my script sees the group as existing and jumps to groupmod instead of groupadd. The group* binaries only operate on local groups, so they fail because the group doesn't exist locally. NSS is set up to check files and then sss, which explains why getgrnam returns the LDAP group.
Is there a way to have getgrnam and getpwnam only query the local system without having to reconfigure nsswitch.conf and possibly stop/start SSSD when I run the script?
Is there another perl function I can use to query only local users/groups?
Short answer: no - the purpose of those function calls is to make the authentication mechanisms transparent. There is a variety of back ends you could be using, and no one wants to hand-roll their own local-files/LDAP/YP/NIS+/arbitrary-PAM lookup mechanism.
If you're specifically interested in the contents of the local passwd and group files, I'd suggest the answer is: read those files directly.
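A sketch of that direct read, shown here as shell (in the Perl script, the equivalent is splitting /etc/passwd and /etc/group lines on ':'); it only ever sees local entries, no matter how nsswitch.conf is configured:

```shell
#!/bin/sh
# Sketch: test for a user/group in the LOCAL files only, bypassing NSS/SSSD.
local_user_exists() {
    cut -d: -f1 /etc/passwd | grep -qx "$1"
}

local_group_exists() {
    cut -d: -f1 /etc/group | grep -qx "$1"
}

# Usage in the provisioning flow:
# if local_group_exists "$name"; then groupmod ...; else groupadd ...; fi
```

Because the add/mod binaries themselves only touch the local files, keying the existence check off the same files keeps the two in agreement.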
So I have one part of my build that requires domain rights and does file copying.
Another part of my build runs a program that requires the user to interact with the desktop, which seems to be accomplishable only by the SYSTEM account.
What is the best way to work around these two items? At the moment it seems like I can only do one or the other...
One way is to do net use s: \\<share path> <password> /user:<domain user> /savecred, which will allow your system account to impersonate the domain user for the share connection.
Another way is to use runas.exe /user:<your domain user> /savecred <program>. Note that this requires someone to enter the domain user's password from the context of your CC.Net user. You can do this by opening a console as the SYSTEM account (there are numerous articles on this topic, because of CC.Net SVN integration) and manually running runas /savecred and providing the password.
An alternative is to create a separate COM service that runs as the domain account and have a command line tool that CC.Net invokes that call that service.
Yet another alternative is to have CC.Net schedule an immediate task running as the domain user, using schtasks.exe. You'll need an XML file with the task definition, which must contain the domain user name and password.
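That last variant could look like the following sketch; the task name, account, password, and XML file name are all placeholders:

```bat
rem Sketch: import a one-shot task definition and run it as the domain user.
schtasks /Create /TN CopyTask /XML copytask.xml /RU MYDOMAIN\builduser /RP Secret123
schtasks /Run /TN CopyTask
```

The copied files then appear under the domain user's credentials while the rest of the build keeps running as SYSTEM.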