How to create a new user in OpenShift? - openshift-enterprise

I am trying to find out how to create a new user in OpenShift enterprise.
According to the documentation (on https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/projects_and_users.html):
Regular users are created automatically in the system upon first login...
This sounds illogical. How does a user log in if they don't have a username and password?
Can someone please clarify this? I'm sure there must be some command for creating a new user, but it is not clear from the documentation.
Thanks

The OpenShift master config (/etc/openshift/master/master-config.yaml) describes the authentication configuration. By default it contains something like this for the authentication part:
identityProviders:
- challenge: true
  login: true
  name: anypassword
  provider:
    apiVersion: v1
    kind: AllowAllPasswordIdentityProvider
This means that any user with any password can authenticate. By running oc get users as system:admin you'll see all the users.
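For example, as system:admin on the master host:
oc get users
oc get identities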
This configuration is not recommended. You can configure another form of authentication instead (htpasswd, LDAP, GitHub, ...).
I'm using htpasswd. In that case you have to create a file (with the htpasswd utility) containing your usernames and encrypted passwords. After that you'll need to edit your master-config.yaml to use HTPasswdPasswordIdentityProvider and point it at that file.
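A rough sketch of those two steps (the file path and provider name here are just examples, adjust them to your installation):
htpasswd -c /etc/origin/master/htpasswd user1
And the identityProviders section of master-config.yaml would then look something like:
identityProviders:
- name: htpasswd_provider
  challenge: true
  login: true
  provider:
    apiVersion: v1
    kind: HTPasswdPasswordIdentityProvider
    file: /etc/origin/master/htpasswd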
You can find the detailed steps in the documentation. Don't forget to restart your OpenShift master after performing them: sudo service openshift-master restart (origin-master for Origin).
After creating users you can assign roles to them.
Log in as the default admin (system:admin) and assign roles.
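For example (the project and user names below are placeholders):
oc adm policy add-role-to-user admin user1 -n myproject
oc adm policy add-cluster-role-to-user cluster-admin user1
(On older 3.x installs the same commands are available through oadm policy ....)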

I created a script for simply adding a user in OpenShift when using HTPasswdPasswordIdentityProvider:
# download a standalone jq binary to query the master config as JSON
wget https://github.com/stedolan/jq/releases/download/jq-1.5/jq-linux64
mv jq-linux64 jq && chmod 755 jq
# convert the YAML master config to JSON and extract the htpasswd file path
FILE=$(cat /etc/origin/master/master-config.yaml | python -c 'import sys, yaml, json; y=yaml.load(sys.stdin.read()); print json.dumps(y, indent=4, sort_keys=True)' | ./jq '.oauthConfig.identityProviders[0].provider.file')
# strip the surrounding double quotes from the jq output
FILE=$(sed -e 's/^"//' -e 's/"$//' <<<"$FILE")
# add (or update) the user in the configured htpasswd file
htpasswd "$FILE" user1
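Note that htpasswd prompts for the password interactively; htpasswd -b "$FILE" user1 'password1' would make the script fully non-interactive, and using ./jq -r would return the path without quotes and remove the need for the sed step.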

Related

Cloning Terraform GitHub module inside private org - permission denied

I have the following module that we are trying to clone via SSH (NOTE: we prefer to not use https) in Terraform:
module "example-module" {
source = "git#github.com:private-org/example-module.git?ref=v1.0.0"
}
However, we have a GitHub Actions runner that fails when trying to run terraform init on this module:
Permission denied (publickey). Could not read Password for
'https://***@github.com': No such device or address
So to grant access, we are trying to add the following inside .gitconfig:
[url "https://{GITHUB_TOKEN}#github.com"]
insteadOf = "ssh://git#github.com"
And inside the GitHub Actions workflow we are trying to replace {GITHUB_TOKEN} with the actual value:
- name: Configure SSH
  run: |
    sed -i 's/{GITHUB_TOKEN}/${{ secrets.GITHUB_TOKEN }}/g' .gitconfig
    cat .gitconfig >> ~/.gitconfig
But we are still getting the same error. Any ideas for how we can authenticate to a private module inside our GitHub org and successfully clone via SSH?
Figured out the answer. The default GITHUB_TOKEN did not have the right access rights and is not treated as a personal access token. This is a little confusing on GitHub's part, and it is the reason we were getting the Could not read Password error.
You will need to generate a new personal access token in GitHub and add it as a GitHub Actions secret called NEW_GITHUB_TOKEN. Give it read/write access to your repositories and set it to never expire.
Your .gitconfig should look like:
[url "https://{GITHUB_TOKEN}#github.com"]
insteadOf = "ssh://git#github.com"
And a step in your GitHub Actions that uses your new personal access token:
- name: Configure SSH
  run: |
    sed -i 's/{GITHUB_TOKEN}/${{ secrets.NEW_GITHUB_TOKEN }}/g' .gitconfig
    cat .gitconfig >> ~/.gitconfig
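Depending on how Terraform hands the URL to git, the insteadOf prefix may also need to match the scp-style form used in the module source (git@github.com:org/repo.git) rather than only ssh://git@github.com. A sketch of a variant that sets the rewrite directly in the workflow step, reusing the same NEW_GITHUB_TOKEN secret:
- name: Configure git auth for Terraform modules
  run: |
    git config --global url."https://${{ secrets.NEW_GITHUB_TOKEN }}@github.com/".insteadOf "git@github.com:"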

Google Cloud Pub/Sub C++ Client: Authentication Failure When Binary Has setuid (root) [CentOS7]

The application (bin) loads the (service account) credentials fine when it has "normal" permissions. This is the run script:
#!/bin/bash
export GOOGLE_APPLICATION_CREDENTIALS=/home/user/config/gcloud/key.json
./bin
However, when bin's permissions are changed:
chown root:root bin
chmod u+s bin
I get this error:
E1003 10:02:07.563899584 60263 credentials_generic.cc:35] Could not get HOME environment variable.
E1003 10:02:10.563621247 60263 google_default_credentials.cc:461] Could not create google default credentials: UNKNOWN:creds_path unset {created_time:"2022-10-03T10:02:07.563943484+09:00"}
Any advice would be appreciated.
Thanks.
As far as I can tell, this is expected behavior for gRPC. gRPC uses secure_getenv() to read environment variables, and secure_getenv() returns nothing for setuid binaries. In your case, that means gRPC ignores the GOOGLE_APPLICATION_CREDENTIALS variable you set.
You may need to change your application to use explicit service account credentials. Something like:
#include <google/cloud/credentials.h>
#include <google/cloud/pubsub/publisher.h>
#include <fstream>
#include <iterator>
#include <string>

namespace pubsub = ::google::cloud::pubsub;

// filename, project_id, and topic_id are supplied by the application.
// Read the service account key file directly instead of relying on the
// GOOGLE_APPLICATION_CREDENTIALS environment variable.
auto is = std::ifstream(filename);
auto json_string =
    std::string(std::istreambuf_iterator<char>(is.rdbuf()), {});
auto credentials =
    google::cloud::MakeServiceAccountCredentials(json_string);
auto publisher = pubsub::Publisher(pubsub::MakePublisherConnection(
    pubsub::Topic(project_id, topic_id),
    google::cloud::Options{}.set<google::cloud::UnifiedCredentialsOption>(
        credentials)));
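Here filename would point at the same key file the wrapper script exports (/home/user/config/gcloud/key.json), just read directly by the process instead of through the environment, while project_id and topic_id are whatever your application already uses.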

Adding "libpam" to initramfs recipe removes password from "extrausers" config

I am setting the root password for my Linux image using the following in my local.conf file:
INHERIT += "extrausers"
EXTRA_USERS_PARAMS = "usermod -p '\$6\$...' root;"
This works correctly, as expected.
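The '\$6\$...' value is a SHA-512 crypt hash of the password; one way to generate it (assuming OpenSSL 1.1.1 or newer on the build host) is:
openssl passwd -6 'mypassword'
and the resulting hash goes inside the single quotes, with each $ escaped as \$ so that local.conf does not expand it.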
But now I found that, with Linux 5.10, I cannot authenticate a user by checking /etc/shadow; instead I need to use the libpam module. So I did the following to add libpam to my Linux image:
In my initramfs recipe I added libpam to PACKAGE_INSTALL.
Added pam to DISTRO_FEATURES_append (both changes are sketched below).
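Concretely, the two changes look roughly like this (same _append override syntax as above; the exact recipe name is omitted):
# in the initramfs image recipe
PACKAGE_INSTALL_append = " libpam"
# in local.conf (or the distro configuration)
DISTRO_FEATURES_append = " pam"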
Now, when I flash this new image, the root user does not have any password. Adding libpam is somehow removing the password set using extrausers...
Is there a way to set the password in the image using libpam? Or is there something I am doing wrong when using extrausers and adding libpam to my image?
Since I couldn't get an answer here or on the libpam GitHub issues, I gave up trying to make libpam work for me.
I decided to use the /etc/shadow file. I removed extrausers and instead set the password directly in the shadow file using the following:
ROOTFS_POSTINSTALL_COMMAND += "set_root_password;"

set_root_password () {
    sed -e "s/root::/root:\$6\$...:/" -i ${IMAGE_ROOTFS}/etc/shadow
}
Now I don't need libpam and can keep authenticating users to my Redfish server using the shadow file.

Ansible Azure Creds

I am trying to provision infrastructure on Azure Public and Azure Stack. To do that I have stored credentials in the $HOME/.azure/credentials file, as advised by the official Ansible documentation. The configuration looks like this:
[default]
subscription_id=xxxxxx
client=xxxxx
secret=xxxxx
tenant=xxxxx

[azurestack]
subscription_id=xxxxxx
client=xxxxx
secret=xxxxx
tenant=xxxxxx
cloud_environment=xxxxx
I tried to execute the playbook as follows:
sudo ansible-playbook -vvv foo.yml --profile=azurestack
It does not work for Azure Stack: ansible-playbook does not recognize the --profile option. Can anyone guide me on how to solve this?
In the same Ansible documentation, they explain how to do it:
It is possible to store multiple sets of credentials within the credentials file by creating multiple sections. Each section is considered a profile. The modules look for the [default] profile automatically. Define AZURE_PROFILE in the environment or pass a profile parameter to specify a specific profile.
So there are two ways. The first one is to set the environment variable:
sudo AZURE_PROFILE=azurestack ansible-playbook -vvv foo.yml
I have to agree that the second way is not as explicit; I would try using extra-vars:
sudo ansible-playbook -vvv foo.yml --extra-vars "profile=azurestack"
But I'm not at all sure that this second one is correct, and I'm not able to try it.
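Another option, based on the documentation quote above, is to pass profile directly to the Azure modules in the playbook (the resource group name and location here are just placeholders):
- name: Create a resource group on Azure Stack
  azure_rm_resourcegroup:
    name: my-rg
    location: local
    profile: azurestack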

"Expecting an auth URL via either" error thrown - openstack

ubuntu@ubuntu-14-lts:~$ export OS_USERNAME=admin
ubuntu@ubuntu-14-lts:~$ export OS_TENANT_NAME=admin
ubuntu@ubuntu-14-lts:~$ export OS_PASSWORD=admin
ubuntu@ubuntu-14-lts:~$ export OS_AUTH_URL=http://localhost:35357/v2.0/
I then executed the command to create the admin tenant:
ubuntu@ubuntu-14-lts:~$ sudo keystone tenant-create --name admin --description "Admin Tenant"
and got the below error:
Expecting an auth URL via either --os-auth-url or env[OS_AUTH_URL]
I modified the URL:
ubuntu@ubuntu-14-lts:~$ export OS_AUTH_URL="http://localhost:35357/v2.0/"
re-ran the same command, and the same error was thrown:
ubuntu@ubuntu-14-lts:~$ sudo keystone tenant-create --name admin --description "Admin Tenant"
Expecting an auth URL via either --os-auth-url or env[OS_AUTH_URL]
Is there any issue with how I am running the command?
The issue is probably with sudo: depending on its configuration, sudo may not preserve environment variables.
Why do you need sudo anyway? The keystone command does not require it. Either drop sudo, or add
--os-auth-url http://localhost:35357/v2.0/
to your command. You can also preserve the environment with:
sudo -E keystone ...
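For example, with the values you exported above:
keystone --os-username admin --os-password admin --os-tenant-name admin --os-auth-url http://localhost:35357/v2.0/ tenant-create --name admin --description "Admin Tenant"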
You have failed to create a new user or tenant because you are not authenticated to keystone, just like you need to log in to MySQL before you can create new tables. The following steps will help you through:
# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
# keystone --os-username=ADMIN_USERNAME --os-password=ADMIN_PASSWORD --os-auth-url=http://controller:35357/v2.0 token-get
# source admin_creds      (the file where you have saved the admin credentials)
# keystone token-get
# source creds            (the other file where you have backed up your admin credentials)
Now you can run your keystone commands normally. Please put a tick mark if it helped you!
