At my workplace we have several computer labs. Each computer in these labs runs both Windows and Linux, and both systems are joined to our AD domain.
I set up a test lab and have a working Puppet server. I enrolled some nodes/agents as root, and everything works perfectly when I run puppet agent -t.
The problem:
When I log in on the Linux machines with a domain user (e.g. xiru) and run puppet agent -t, a new certificate is generated, but an error occurs warning that it does not match the server's certificate.
For domain users, Puppet creates the new certificate under /home/<user>/.puppetlabs/etc/puppet/ssl.
Linux machine names in the test:
mint-client.mycompany.intra
ubuntu-client.mycompany.intra
I tried setting the certname variable in puppet.conf, but the error remains.
[main]
certname = mint-client.mycompany.intra
[agent]
server = puppet.mycompany.intra
How can I get around this and make Puppet always use the same certificate that I configured as root?
I think you need to set up your environment to accept non-root users.
When you run it, do you use sudo, or are the users present in sudoers?
If not, the Puppet docs have some tips for running it as a non-root user...
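If the domain users are allowed to use sudo, the simplest workaround is to run the agent through sudo so it uses root's configuration and certificate instead of generating a per-user one. A minimal sketch, assuming the standard AIO install path:
# Runs the agent as root, so /etc/puppetlabs/puppet/ssl is used
# instead of ~/.puppetlabs/etc/puppet/ssl
sudo /opt/puppetlabs/bin/puppet agent -t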
Installation and configuration
To properly configure non-root agent access, you need to:
Install a monolithic PE master
Install and configure PE agents, disable the puppet service on all nodes, and create non-root users
Verify the non-root configuration
Install and configure a monolithic master
As a root user, install and configure a monolithic PE master. Use the web-based installer or the text-mode installer.
Use the PE console to make sure no new agents can get added to the MCollective group.
a. In the console, click Nodes > Classification, and in the PE Infrastructure group, select the PE MCollective group.
b. On the Rules tab, under Fact, locate aio_agent_version and click Remove.
c. Commit changes.
Install and configure PE agents and create non-root users
1. On each agent node, install a PE agent while logged in as a root user. Refer to the instructions for installing agents.
2. As a root user, log in to an agent node, and add the non-root user with puppet resource user <UNIQUE NON-ROOT USERNAME> ensure=present managehome=true.
Note: Each non-root user must have a unique name.
3. As a root user, still on the agent node, set the non-root user's password. For example, on most *nix systems, run passwd.
4. By default, the puppet service runs automatically as a root user, so it needs to be disabled. As a root user on the agent node, stop the service by running puppet resource service puppet ensure=stopped enable=false.
5. Disable the MCollective service on the agent node. As a root user, run puppet resource service mcollective ensure=stopped enable=false.
6. Disable the PXP agent.
a. In the console, click Nodes > Classification, and in the PE Infrastructure group, select the PE Agent group.
b. On the Classes tab, select the puppet_enterprise::profile::agent class, and set the parameter pxp_enabled to false.
7. Change to the non-root user.
Tip: If you wish to use su - <NON-ROOT USERNAME> to switch between accounts, make sure to use the - (-l in some unix variants) argument so that full login privileges are correctly granted. Otherwise you may see “permission denied” errors when trying to apply a catalog.
8. As the non-root user, generate and submit the cert for the agent node. From the agent node, execute the following command:
puppet agent -t --certname "<UNIQUE NON-ROOT USERNAME.HOSTNAME>" --server "<PUPPET MASTER HOSTNAME>"
This Puppet run submits a cert request to the master and creates a ~/.puppet directory structure in the non-root user’s home directory.
9. As an admin user, log into the console, navigate to the pending node requests, and accept the requests from non-root user agents.
Note: It is possible to also sign the root user's certificate so that the root user can also manage the node. However, do so only with great caution, as this introduces the possibility of unwanted behavior and potential security issues. For example, if your site.pp has no default node configuration, running the agent as non-admin could lead to unwanted node definitions being generated using alternate hostnames, which is a potential security issue. In general, if you deploy this scenario, ensure that the root and non-root users never try to manage the same resources, ensure that they have clear-cut node definitions, and ensure that classes scope correctly.
10. As the non-root user, run puppet config set certname <UNIQUE NON-ROOT USERNAME.HOSTNAME> --section agent, then run puppet config set server <PUPPET MASTER HOSTNAME> --section agent. These commands create and set the configuration in the non-root agent's puppet.conf, located under .puppetlabs/etc/puppet/ in the non-root user's home directory:
[main]
certname = <UNIQUE NON-ROOT USERNAME.HOSTNAME>
server = <PUPPET MASTER HOSTNAME>
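For the setup in the question, assuming the domain user xiru on the mint-client machine (names taken from the question, purely illustrative), those two commands would look like this:
puppet config set certname xiru.mint-client.mycompany.intra --section agent
puppet config set server puppet.mycompany.intra --section agent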
11. You can now connect the non-root agent node to the master and get PE to configure it. Log into the agent node as the non-root user and run puppet agent -t.
Source: https://puppet.com/docs/pe/2017.1/deploy_nonroot-agent.html
Check the permissions. To make it work, you can grant the relevant permissions on the folder where the certificates are stored, so that the domain user can read the certificates.
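A minimal sketch of what that could look like, assuming the default agent SSL directory /etc/puppetlabs/puppet/ssl and a group created for the domain users (the group name and user are placeholders; exposing root's keys this way has security implications):
# Create a group for the domain users and add the user from the question
sudo groupadd puppetusers
sudo usermod -aG puppetusers xiru
# Give the group read access to root's SSL directory
sudo chgrp -R puppetusers /etc/puppetlabs/puppet/ssl
sudo chmod -R g+rX /etc/puppetlabs/puppet/ssl
# The non-root run must also be pointed at that directory,
# otherwise it still defaults to ~/.puppetlabs/etc/puppet/ssl
puppet agent -t --ssldir /etc/puppetlabs/puppet/ssl
Whether this is acceptable depends on your security requirements; the non-root setup described above is the documented approach.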
Related
I want to know how I can add the local users of my server to a Docker container. I don't need to import their files; I just need a username/password/privileges with a new home directory in the Docker container for every user on my system. For example, suppose my host system contains the following users:
Host System:
admin: who has root access and rw access to all
bob: a regular non-sudo user
joe: another regular non-sudo user
Then the Docker Container must have users:
admin: who has root access and rw access to all
bob: a regular non-sudo user
joe: another regular non-sudo user
The Docker container and the host are both running Linux, though the host is Red Hat and the container is Ubuntu.
EDIT: I don't want to mount /etc/ files if possible, as this can create a two-way security vulnerability, as pointed out by @caveman.
You would have to mount all relevant Linux files using -v, like /etc/passwd, /etc/shadow, /etc/group, and /etc/sudoers. I can't recommend this due to the security risks: if anyone gets root access in the container they can add users on the host or change passwords, since the mount works both ways.
The list of files is not exhaustive; for example, you also have to make sure the shell executables exist within the container. When testing this I had to make a symbolic link from /usr/bin/zsh to /bin/bash, for example, since my user has the zsh shell configured, which was not present in the Docker image.
If you want to use these users to interact with mounted files, you also have to make sure that user namespace remapping is disabled, or specify that you want to use the same user namespace as the host with the --userns=host flag. Again, not recommended since it is a security feature, so use with care.
Note: Once you have done all this you can use su - {username} to switch to any of your existing users. The -u option doesn't work, since Docker checks the /etc/passwd file before mounting and will give an error.
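A minimal sketch of the kind of invocation described above (image and user names are illustrative; the same security caveats apply):
# Bind-mount the host account databases into the container
# and keep the host user namespace so UIDs/GIDs line up
docker run -it --userns=host \
  -v /etc/passwd:/etc/passwd \
  -v /etc/shadow:/etc/shadow \
  -v /etc/group:/etc/group \
  -v /etc/sudoers:/etc/sudoers \
  ubuntu /bin/bash
# Then, inside the container:
su - bob
Appending :ro to each -v mount makes the files read-only inside the container, which reduces (but does not remove) the risk mentioned above.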
I am learning to use Jenkins to deploy a .NET 5.0 application on an AWS EC2 server. This is the first time I am using a Linux server and Jenkins for .NET (I'm a lifelong Windows guy), and I am facing an error while trying to publish my artifacts over SSH to the web server.
My setup:
Jenkins server is an AWS EC2 Linux AMI server.
Web Server is also an AWS EC2 Linux AMI server.
My Jenkins is correctly installed and working. I am able to build and run unit test cases without any issues.
For Deploy, I am using 'Publish Over SSH' plugin, and I have followed all steps to configure this plugin as mentioned here https://plugins.jenkins.io/publish-over-ssh/.
However, when I try 'Test Configuration', I get the error below:
Failed to connect or change directory
jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [WebServer]. Message [Auth fail]]
I did a ping test from Jenkins server to Web Server, and it is a success.
I'm using the .pem key in the 'Key' section of 'Publish over SSH'. This key is the same key I use to SSH into the web server.
The link below suggests many different solutions, but none works in my case.
Jenkins Publish over ssh authentification failed with private key
I was looking at the below link which describes the same problem,
Jenkins publish over SSH failed to change to remote directory
However in my case I have kept 'Remote Directory' as empty. I don't know if I have to specify any directory here. Anyways, I tried creating a new directory under the home directory of user ec2-user as '/home/ec2-user/publish' and then used this path as Remote Directory, but it still didn't work.
Screenshot of my settings in Jenkins:
I would appreciate if anyone can point me to the right direction or highlight any mistake I'm doing with my configuration.
In my case the following steps solved the problem.
The solution is based on Ubuntu 22.04.
Add these two lines in /etc/ssh/sshd_config:
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes +ssh-rsa
Then restart the sshd service:
sudo service sshd restart
You might consider the following:
a. From the screenshot you've provided, it seems that you have checked the Use password authentication, or use different key option, which will require you to add your key and password (inputs from these fields will be used when connecting to your server via SSH). If you use the same SSH key and passphrase/password on all of your servers, you can uncheck/untick that box and just use the config you have specified above.
b. You might also check whether port 22 of your web server allows inbound traffic from the security group where your Jenkins server/EC2 instance is running (see the example command after this list). See reference here.
c. Also, make sure that the remote directory you have specified exists, otherwise the connection may fail.
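Regarding point b, a sketch of how to open port 22 to the Jenkins instance's security group with the AWS CLI (the group IDs are placeholders for your web server's and Jenkins server's security groups):
# Allow SSH from the Jenkins server's security group into the web server's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa \
  --protocol tcp --port 22 \
  --source-group sg-0bbbbbbbbbbbbbbb
The same rule can also be added from the EC2 console under the web server security group's inbound rules.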
Here's the sample config
In a simplified version:
There are two users on our shared Server System:
user_1 (me)
user_2
Docker is installed system-wide across users.
I, user_1, created a Docker container using a standard docker run to run my process. But user_2 has access to this container, so they can not only view it but also stop and remove it.
How can I prevent user_2 or other users from accessing this container?
Note: No users have root access through sudo. Thanks!
Note: No users have root access through sudo
If users have access to the docker socket, they all have root access on the host. You've lost all security at that point. If you don't believe this, see what access you have in /host with:
docker run --privileged --net=host --pid=host -v /:/host debian /bin/bash
There are projects to limit access to the docker socket with authz plugins, including Twistlock and Open Policy Agent. There's quite a bit of setup needed for these, including revoking access to the socket from the filesystem and using TLS keys to access an encrypted and authenticated port. You could also go the commercial route and use Docker EE with UCP to manage users and their access rights.
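For the authz plugin route, this is roughly where such a plugin gets wired in; the plugin name below is a placeholder and the plugin itself must be installed separately.
/etc/docker/daemon.json:
{
  "authorization-plugins": ["my-authz-plugin"]
}
Then restart the daemon:
sudo systemctl restart docker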
When running SaltStack, for security reasons I don't want it to run as root, although I would not mind creating a new 'salt' user with NOPASSWD sudo access to run the salt minion/master as.
My question is that, even though the documentation here says we can configure Salt to run as a non-root user (https://docs.saltstack.com/en/latest/ref/configuration/nonroot.html), does it prepend sudo to normal commands, or does it lose that functionality entirely?
Additional research: Both the master and the minion config files have an option for setting the user to something other than root, but the minion config file also has an option to set up a sudo_user, which defaults to saltdev but which I changed to root. I'm not sure if this implies that the minion should sudo and use the root account or not. If so, why is this option not present in the master config file?
The direct answer to the title question is NO. As stated in the docs:
[...] running the minion as an unprivileged user will keep it from making changes to things like users, installed packages, etc. unless access controls (sudo, etc.) are setup on the minion to permit the non-root user to make the needed changes.
In order to set up sudo on the minion you should use the sudo_user config option. After setting a user in this variable, Salt will invoke salt.modules.sudo every time a command is issued to this minion.
This sudo option is only available on the minion because the execution of commands on hosts is done only by the minion. Even if you are managing your master with Salt, it is the minion running on the master that executes the commands.
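A minimal sketch of the relevant minion-side configuration, assuming a service account named saltuser (the account name and sudoers file path are illustrative):
# /etc/salt/minion
user: saltuser
sudo_user: root
# /etc/sudoers.d/saltuser (edit with visudo -f)
saltuser ALL=(ALL) NOPASSWD: ALL
With this in place the minion process runs as saltuser and elevates via sudo when executing modules, matching the NOPASSWD setup described in the question.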
I'm attempting to test a single-node dev cluster for OpenShift which I've created. I cannot run any basic commands on the cluster, because I haven't set up properly privileged accounts.
In particular I need to:
run pods which make containers which query service endpoints
query the apiserver through an insecure endpoint
run commands like kubectl get pods
Is there a default account somewhere I can use which can do all of these things? I'd prefer not to manually set up a bunch of complicated user accounts for a low-level development task such as this.
Below are a few somewhat silly attempts I've made to do this, just as examples.
First, I created an "admin" account like this:
sudo -u vagrant $oc login https://localhost:8443 -u=admin -p=admin --config=/data/src/github.com/openshift/origin/openshift.local.config/master/openshift-registry.kubeconfig
Then, I went ahead and hacked around in a few attempts to log in as an admin:
[vagrant@localhost ~]$ sudo chmod 777 /openshift.local.config/master/admin.kubeconfig
[vagrant@localhost ~]$ oc login
Server [https://localhost:8443]:
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Authentication required for https://localhost:8443 (openshift)
Username: admin
Password: admin
Login successful.
Using project "project1".
[vagrant@localhost ~]$ oc get nodes --config=/openshift.local.config/master/admin.kubeconfig
This leads to the following error:
Error from server: User "admin" cannot list all nodes in the cluster
I also get this error leaving the config out:
[vagrant@localhost ~]$ oc get nodes
Error from server: User "admin" cannot list all nodes in the cluster
Is there any easy way to list nodes and do basic kube operations in a standalone development cluster for openshift?
You don't log in when you are using administrative credentials. You simply set KUBECONFIG=admin.kubeconfig. Login takes you through a different flow - there is no magic "admin" user.
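A minimal sketch using the path from the question (the kubeconfig location is the one shown above):
export KUBECONFIG=/openshift.local.config/master/admin.kubeconfig
oc get nodes
oc get pods --all-namespaces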