OpenShift: How to test Kubernetes features without manually managing administrative accounts/permissions?

I'm attempting to test a single-node dev cluster for OpenShift which I've created. I cannot run any basic commands on the cluster because I haven't set up properly privileged accounts.
In particular I need to:
run pods whose containers query service endpoints
query the apiserver through an insecure endpoint
run commands like kubectl get pods
Is there a default account somewhere I can use which can do all of these things? I'd prefer not to manually set up a bunch of complicated user accounts for a low-level development task such as this.
Below are a few somewhat silly attempts I've made to do this, just as examples.
First, I created an "admin" account like this:
sudo -u vagrant $oc login https://localhost:8443 -u=admin -p=admin --config=/data/src/github.com/openshift/origin/openshift.local.config/master/openshift-registry.kubeconfig
Then, I went ahead and hacked around in a few attempts to log in as an admin:
[vagrant@localhost ~]$ sudo chmod 777 /openshift.local.config/master/admin.kubeconfig
[vagrant@localhost ~]$ oc login
Server [https://localhost:8443]:
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Authentication required for https://localhost:8443 (openshift)
Username: admin
Password: admin
Login successful.
Using project "project1".
[vagrant@localhost ~]$ oc get nodes --config=/openshift.local.config/master/admin.kubeconfig
This leads to the following error:
Error from server: User "admin" cannot list all nodes in the cluster
I also get this error when leaving the config out:
[vagrant@localhost ~]$ oc get nodes
Error from server: User "admin" cannot list all nodes in the cluster
Is there any easy way to list nodes and do basic kube operations in a standalone development cluster for openshift?

You don't log in when you are using administrative credentials. You simply set KUBECONFIG=admin.kubeconfig. Login takes you through a different flow - there is no magic "admin" user.
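For example, using the admin kubeconfig path from the question (a minimal sketch; adjust the path to wherever your dev cluster wrote it):
export KUBECONFIG=/openshift.local.config/master/admin.kubeconfig
oc whoami    # should report system:admin in an origin dev cluster
oc get nodes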

Related

Puppet - Linux domain machines cert error

At my workplace we have some computer labs. In these labs each computer dual-boots Windows and Linux, and both systems are joined to our AD domain.
I set up a test lab and have a functional Puppet server. I enrolled some nodes/agents as root, and everything works perfectly when I run puppet agent -t.
The problem:
When I log in with a domain user (e.g. xiru) on the Linux machines and run puppet agent -t, a new certificate is generated, but an error occurs warning that it does not match the server's certificate.
For domain users, Puppet creates the new certificate under /home/<user>/.puppetlabs/etc/puppet/ssl
Linux machine names in the test:
mint-client.mycompany.intra
ubuntu-client.mycompany.intra
I tried setting the certname variable in the Puppet conf, but the error remains:
[main]
certname = mint-client.mycompany.intra
[agent]
server = puppet.mycompany.intra
How can I get around this and make it always with the same certificate that I configure via root user?
I think you must set up your environment to accept non-root users.
When you run it, do you use sudo, or are the users present in sudoers?
If not, the Puppet docs have some tips for running the agent as a non-root user...
Installation and configuration
To properly configure non-root agent access, you need to:
Install a monolithic PE master
Install and configure PE agents, disable the puppet service on all nodes, and create non-root users
Verify the non-root configuration
Install and configure a monolithic master
As a root user, install and configure a monolithic PE master. Use the web-based installer or the text-mode installer.
Use the PE console to make sure no new agents can get added to the MCollective group.
a. In the console, click Nodes > Classification, and in the PE Infrastructure group, select the PE MCollective group.
b. On the Rules tab, under Fact, locate aio_agent_version and click Remove.
c. Commit changes.
Install and configure PE agents and create non-root users
1. On each agent node, install a PE agent while logged in as a root user. Refer to the instructions for installing agents.
2. As a root user, log in to an agent node, and add the non-root user with puppet resource user <UNIQUE NON-ROOT USERNAME> ensure=present managehome=true.
Note: Each and every non-root user must have a unique name.
3. As a root user, still on the agent node, set the non-root user's password. For example, on most *nix systems, run passwd.
4. By default, the puppet service runs automatically as a root user, so it needs to be disabled. As a root user on the agent node, stop the service by running puppet resource service puppet ensure=stopped enable=false.
5. Disable the MCollective service on the agent node. As a root user, run puppet resource service mcollective ensure=stopped enable=false.
6. Disable the PXP agent.
a. In the console, click Nodes > Classification, and in the PE Infrastructure group, select the PE Agent group.
b. On the Classes tab, select the puppet_enterprise::profile::agent class, and set the parameter pxp_enabled to false.
7. Change to the non-root user.
Tip: If you wish to use su - <NON-ROOT USERNAME> to switch between accounts, make sure to use the - (-l in some unix variants) argument so that full login privileges are correctly granted. Otherwise you may see “permission denied” errors when trying to apply a catalog.
8. As the non-root user, generate and submit the cert for the agent node. From the agent node, execute the following command:
puppet agent -t --certname "<UNIQUE NON-ROOT USERNAME.HOSTNAME>" --server "<PUPPET MASTER HOSTNAME>"
This Puppet run submits a cert request to the master and creates a ~/.puppetlabs directory structure in the non-root user's home directory.
9. As an admin user, log into the console, navigate to the pending node requests, and accept the requests from non-root user agents.
Note: It is possible to also sign the root user's certificate so that root can also manage the node. However, you should do so only with great caution, as this introduces the possibility of unwanted behavior and potential security issues. For example, if your site.pp has no default node configuration, running the agent as a non-admin user could lead to unwanted node definitions being generated under alternate hostnames, which is a potential security issue. In general, if you deploy this scenario, ensure that the root and non-root users never try to manage the same resources, ensure that they have clear-cut node definitions, and ensure that classes scope correctly.
10. As the non-root user, run puppet config set certname <UNIQUE NON-ROOT USERNAME.HOSTNAME> --section agent.
11. As the non-root user, run puppet config set server <PUPPET MASTER HOSTNAME> --section agent. These two steps create and set the configuration in the non-root agent's puppet.conf, created in ~/.puppetlabs/etc/puppet/ in the non-root user's home directory:
[main]
certname = <UNIQUE NON-ROOT USERNAME.HOSTNAME>
server = <PUPPET MASTER HOSTNAME>
12. You can now connect the non-root agent node to the master and have PE configure it. Log into the agent node as the non-root user and run puppet agent -t.
Source: https://puppet.com/docs/pe/2017.1/deploy_nonroot-agent.html
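For reference, the agent-side commands above condense to roughly the following (a sketch using a hypothetical non-root user alice and the hostnames from the question):
sudo puppet resource user alice ensure=present managehome=true
sudo passwd alice
sudo puppet resource service puppet ensure=stopped enable=false
sudo puppet resource service mcollective ensure=stopped enable=false
su - alice
puppet agent -t --certname "alice.mint-client.mycompany.intra" --server "puppet.mycompany.intra"
# after the cert request has been accepted in the console:
puppet config set certname alice.mint-client.mycompany.intra --section agent
puppet config set server puppet.mycompany.intra --section agent
puppet agent -t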
Check the permissions. To make it work, you can grant the relevant permissions on the folder where the certificates are stored, so that the domain user has access to the certificates.
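A minimal sketch of that approach, assuming the root-run agent's default certificate directory (the path and group name will vary by setup):
sudo chgrp -R domain-users /etc/puppetlabs/puppet/ssl
sudo chmod -R g+rX /etc/puppetlabs/puppet/ssl
Note that sharing the root agent's certificate directory this way is less isolated than the per-user certificates described above, since every user in the group can read the agent's private key.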

Postgres connection failure when running node app using supervisor

I have a Node.js web app with PostgreSQL. I am running it using supervisord on the server. The problem is that the PostgreSQL login from Node.js is failing. The error message is:
no PostgreSQL user name specified in startup packet
which basically means no user name is being passed from the web app while connecting to the db.
Note that I am using unix socket for connecting to postgres from my webapp.
My webapp1.conf looks like:
[program:webapp1]
user=webapp1
command = node /home/webapp1/projects/webapp1/app.js
directory = /home/webapp1/projects/webapp1
autostart = true
autorestart = true
stdout_logfile = /var/log/supervisor/webapp1.log
stderr_logfile = /var/log/supervisor/webapp1_err.log
I have confirmed that supervisor is running the web app under user webapp1.
One more thing - if I start my webapp by logging in as user webapp1, it works.
It sounds like you've got your server set up to use password-less logins to PostgreSQL, i.e. local logins in your pg_hba.conf are set to peer or trust. As long as there's a PostgreSQL user with the same name as your Linux user, you don't have to do any further configuration to get Postgres working in your apps; it effectively grants database access based on your Linux user account.
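For reference, peer authentication for local socket connections typically looks like this in pg_hba.conf (illustrative; your file may differ):
# TYPE  DATABASE  USER  METHOD
local   all       all   peer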
I had the same problem when running a simple nodejs script via cron. It worked fine from the shell, but complained of missing username when running via cron. Setting the username explicitly in code wasn't an option because I'd built my config to be as automatic as possible-- I needed it to figure out privileges by which user the script was running as.
It turns out that either the connector library or Postgres itself infers the username from an environment variable. I was able to fix it by setting USER=<cron user name> at the top of my crontab (USER is already set in the environment of an interactive shell, which is why it works there at all).
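In crontab form, that fix looks like this (the user and script names are hypothetical):
USER=appuser
*/5 * * * * /usr/bin/node /home/appuser/scripts/report.js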
It looks like the proper syntax to add to your webapp1.conf would be:
environment=USER="<user name here>"
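Applied to the webapp1.conf from the question, that gives:
[program:webapp1]
user=webapp1
environment=USER="webapp1"
command = node /home/webapp1/projects/webapp1/app.js
directory = /home/webapp1/projects/webapp1
autostart = true
autorestart = true
stdout_logfile = /var/log/supervisor/webapp1.log
stderr_logfile = /var/log/supervisor/webapp1_err.log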

How to create database and user in influxdb programmatically?

In my use case I am using a single EC2 instance [not a cluster]. I want to create a database and a user with all privileges programmatically. Is there a config file which I can edit and copy to the right location after InfluxDB is installed?
Could someone help me with this?
There isn't any config option that you can use to do that with InfluxDB itself. After starting up an instance you can use the InfluxDB HTTP API to create the users. The curl command to do so would be the following:
curl "http://localhost:8086/query" --data-urlencode "q=CREATE USER myuser WITH PASSWORD 'mypass' WITH ALL PRIVILEGES"
Just run this command for each of the users you'd like to create. After that, you'll need to enable the auth setting in the [http] section of the config.
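The same query endpoint also accepts CREATE DATABASE, so the database itself can be scripted the same way (mydb is a placeholder):
curl "http://localhost:8086/query" --data-urlencode "q=CREATE DATABASE mydb"
Then enable authentication in the [http] section of the config (this applies to InfluxDB 1.x):
[http]
  auth-enabled = true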
You can use Ansible to set up InfluxDB with your own recipe.
Here's the Ansible module documentation that you can use:
http://docs.ansible.com/ansible/influxdb_database_module.html
Or any config/deploy manager that you prefer. I'd do this any day instead of some SSH script or who knows what.
https://forge.puppet.com/tags/influxdb
Chef:
https://github.com/bdangit/chef-influxdb
You can also use any of the above config managers to provision/manipulate your EC2 instance(s).
Use the admin token and this command (InfluxDB 2.3 CLI):
.\influx.exe user create -n yourusername -p yourpassword -o "your org name" --token admintokengoeshere

Passwordless SSH error while installing BigInsights

I am getting the below error while installing BigInsights on my Linux machine (RedHat 6.6). Kindly help me resolve this.
[ERROR] Prerequisite check - Failed to use given credentials to access nodes.Either provide root password during add node or make sure BI admin user exists on new nodes and passwordless ssh is setup from management node to new nodes that are being added. Please revisit Secure Shell page from installer UI or SSH section in response file to make sure all prerequisites are satisfied, then re-run the command.
Execute the following as root on the server and rerun:
ssh-keygen -t rsa (leave all prompts blank)
cat /root/.ssh/*.pub >> /root/.ssh/authorized_keys
Then try ssh root@localhost; this should not ask you for a password.
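If the installer also needs passwordless SSH from the management node to other nodes, the same key can be copied over with ssh-copy-id (node1 is a placeholder hostname):
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
ssh root@node1    # should now log in without a password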

WARNING keystoneclient.middleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required;

I have installed OpenStack following this.
I am trying to install Savanna following the tutorial from here
When I run this command
savanna-venv/bin/python savanna-venv/bin/savanna-api --config-file savanna-venv/etc/savanna.conf
I get this error:
WARNING keystoneclient.middleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint (7944) wsgi starting up on <IP>
Try connecting to the database:
mysql -u username -p
then run use mysql
and then select user,host from user and check the hosts and users assigned in the output. Reply with the output to make things clearer.
Also share the entries of /etc/hosts.
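Condensed, the check looks like this (username is a placeholder):
mysql -u username -p
mysql> USE mysql;
mysql> SELECT user, host FROM user;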
