WARNING keystoneclient.middleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required; - linux

I have installed OpenStack following this.
I am trying to install Savanna following the tutorial from here.
When I run this command:
savanna-venv/bin/python savanna-venv/bin/savanna-api --config-file savanna-venv/etc/savanna.conf
I get this error:
WARNING keystoneclient.middleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint (7944) wsgi starting up on <IP>
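(For context, the warning refers to the auth_uri option of the keystoneclient auth_token middleware, which should point at Keystone's public endpoint. A rough sketch of the kind of setting it is asking for, with placeholder values and option names taken from the generic auth_token middleware rather than from Savanna's own documentation, looks like:)
[keystone_authtoken]
auth_uri = http://<keystone-host>:5000/v2.0/
admin_user = <admin-user>
admin_password = <admin-password>
admin_tenant_name = <admin-tenant>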

Try connecting to the database:
mysql -u username -p
then run use mysql
and then select user,host from user, and check which users and hosts are assigned in the output (see the example below). Please reply with a screenshot of the output to make things clearer.
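For example, assuming the default mysql system database (substitute your own MySQL user name):
mysql -u <username> -p
USE mysql;
SELECT user, host FROM user;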
Also share the entries of your /etc/hosts file.

Related

Unable to enroll Fabric client as admin - Amazon Managed Blockchain

I'm following the AWS supply chain workshop. I created an EC2 instance and set up a VPC just like the workshop said. Now I'm connected to the EC2 instance using SSH and I've already downloaded the required packages, set up Docker, and downloaded the fabric-ca-client. My problem is configuring the fabric-ca client.
When I run the command fabric-ca-client enroll with the required params/flags, it returns the following error: Error: Failed to create default configuration file: Failed to parse URL 'https://$USER:=9_phK63?#$CA_ENDPOINT': parse https://user:password#ca_endpoint: invalid port ":=9_phK63?" after host
Here's the complete command I'm trying to run: fabric-ca-client enroll -u https://$USER\:$PASSWORD#$CA_ENDPOINT --tls.certfiles ~/managedblockchain-tls-chain.pem -M admin-msp -H $HOME
I'm wondering if the ? in the password is causing the problem. If so, where can I change it?
Workshop link for reference: https://catalog.us-east-1.prod.workshops.aws/workshops/ce1e960e-a811-475f-a221-2afcf57e386a/en-US/02-set-up-a-fabric-client/05-configure-client/06-create-fabric-admin
My name is Forrest and I am a Blockchain Specialist Solutions Architect at AWS. I'd be happy to help you with this.
When using passwords with special characters, these need to be URL-encoded; for example, $ equates to %24. As the OP mentioned in the comments below, there is a JavaScript method encodeURIComponent() that can serve this function: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent
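As a rough sketch of how that encoding could be done on the client machine, assuming python3 is available and that the enroll URL takes the usual user:password@host form (adjust to match the workshop's exact command):
ENCODED_PASSWORD=$(python3 -c "import urllib.parse, os; print(urllib.parse.quote(os.environ['PASSWORD'], safe=''))")
fabric-ca-client enroll -u https://$USER:$ENCODED_PASSWORD@$CA_ENDPOINT --tls.certfiles ~/managedblockchain-tls-chain.pem -M admin-msp -H $HOME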
Please make sure your environment variables are all still correctly set as well:
echo $USER
echo $PASSWORD
echo $CA_ENDPOINT
Your CA endpoint should resolve to something like:
ca.m-XXXXXXXXXXXXX.n-XXXXXXXXXXXXXX.managedblockchain.<AWS_REGION>.amazonaws.com:30002

jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection Message [Auth fail]

I am learning to use Jenkins to deploy a .NET 5.0 application on an AWS EC2 server. This is the first time I am using a Linux server and Jenkins for .NET (I am a lifelong Windows guy), and I am facing an error while trying to publish my artifacts over SSH to the web server.
My setup:
Jenkins server is an AWS EC2 Linux AMI server.
Web Server is also an AWS EC2 Linux AMI server.
My Jenkins is correctly installed and working. I am able to build and run unit test cases without any issues.
For Deploy, I am using 'Publish Over SSH' plugin, and I have followed all steps to configure this plugin as mentioned here https://plugins.jenkins.io/publish-over-ssh/.
However, when I try 'Test Configuration', I get the below error:
Failed to connect or change directory
jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [WebServer]. Message [Auth fail]]
I did a ping test from Jenkins server to Web Server, and it is a success.
I'm using the .pem key in the 'Key' section of 'Publish over SSH'. This key is the same key I use to SSH into the web server.
The below link suggests many different solutions, but none is working in my case.
Jenkins Publish over ssh authentification failed with private key
I was looking at the below link which describes the same problem,
Jenkins publish over SSH failed to change to remote directory
However, in my case I have kept 'Remote Directory' empty. I don't know if I have to specify any directory here. Anyway, I tried creating a new directory under the home directory of user ec2-user, '/home/ec2-user/publish', and then used this path as the Remote Directory, but it still didn't work.
Screenshot of my settings in Jenkins:
I would appreciate it if anyone could point me in the right direction or highlight any mistake I'm making with my configuration.
In my case the following steps solved the problem.
The solution is based on Ubuntu 22.04.
Add these two lines to /etc/ssh/sshd_config:
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes +ssh-rsa
Then restart the sshd service:
sudo service sshd restart
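If you want to confirm that this is the cause, check the type of the key Jenkins is using; an RSA key together with a newer OpenSSH release that disables ssh-rsa signatures by default is the usual trigger (the key path below is just a placeholder):
ssh-keygen -lf /path/to/jenkins-key.pem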
You might consider the following:
a. From the screenshot you've provided, it seems that you have checked the Use password authentication, or use different key option, which will require you to add your key and password (inputs from these fields will be used when connecting to your server via SSH). If you use the same SSH key and passphrase/password on all of your servers, you can uncheck/untick that box and just use the config you have specified above.
b. You might also check if port 22 of your web server allows inbound traffic from the security group where your Jenkins server/EC2 instance is running. See reference here.
c. Also, make sure that the remote directory you have specified exists, otherwise the connection may fail.
Here's a sample config:
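(The original answer included a screenshot; as a rough text sketch, the SSH Server entry in the Publish over SSH plugin configuration typically carries values like the following, assuming an Amazon Linux web server with the default ec2-user account:)
Name: WebServer
Hostname: <web server private IP or DNS name>
Username: ec2-user
Remote Directory: /home/ec2-user/publish
Key: contents of the same .pem key used to SSH into the web server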

Connecting docker-machine to Azure using the generic driver

I have a Docker-based deployment on Azure. I know that docker-machine has an Azure driver, which can create VMs and generate the certs, etc. But I'd rather use the Azure tools (CLI and portal).
So I created a VM, and installed my public SSH key on it. And now I'd like to connect to it using docker-machine. I add the server, so that I can see it when I do docker-machine ls:
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default - virtualbox Stopped Unknown
serv - generic Running tcp://XX.XX.XX.XX:2376 Unknown Unable to query docker version: Unable to read TLS config: open /Users/user/.docker/machine/machines/serv/server.pem: no such file or directory
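(For reference, a machine like serv is registered with the generic driver roughly as follows; the exact command isn't shown in the question, so the SSH user and key path here are placeholders:)
docker-machine create --driver generic --generic-ip-address=XX.XX.XX.XX --generic-ssh-user=<vm-user> --generic-ssh-key ~/.ssh/id_rsa serv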
When I try to set the environment variables, I see this:
$ docker-machine env serv
Error checking TLS connection: Error checking and/or regenerating the certs:
There was an error validating certificates for host "XX.XX.XX.XX:2376":
open /Users/user/.docker/machine/machines/serv/server.pem: no such file or directory
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.
When I try to regenerate-certs, I get:
$ docker-machine regenerate-certs serv
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Something went wrong running an SSH command!
command : sudo hostname serv && echo "serv" | sudo tee /etc/hostname
err : exit status 1
output : sudo: no tty present and no askpass program specified
I can SSH to the server fine.
What's the issue here? How can I make it work?

Obtaining Docker public key.json file

I see a /etc/docker/key.json on a Fedora 23 machine. This file seems to be a private key for authentication:
https://github.com/docker/docker/issues/7667
At what time is it generated (it's not present in the output of rpm -ql docker), and how do I obtain a corresponding public key?
My use case is to enable a non-root user to run the docker ps command without sudo, i.e. by the use of public/private keys.
What should I do?
You don't care about the key.json file, at least as far as I understand your question.
If you want to enable unprivileged users to connect to your Docker daemon using certificates for authentication, you will first need to enable a listening HTTP socket (either binding to localhost, or to a public address if you want to provide access to the daemon from somewhere other than the Docker host), and then you will need to configure appropriate SSL certificates as described in the documentation.
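A minimal sketch of what that can look like, assuming the TLS material (ca.pem, server-cert.pem, server-key.pem) has already been generated as described in Docker's documentation on protecting the daemon socket; the paths and listen address below are placeholders:
dockerd --tlsverify --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server-cert.pem --tlskey=/etc/docker/server-key.pem -H tcp://127.0.0.1:2376 -H unix:///var/run/docker.sock
A client holding a certificate signed by the same CA can then run commands such as docker --tlsverify -H tcp://127.0.0.1:2376 ps without being root.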
You can also provide access to Docker by managing the permissions on the Docker socket (typically /var/run/docker.sock).
Note that giving someone access to docker is equivalent to giving them root access (because they can always run docker run -v /etc:/hostetc ... and then edit your sudoers configuration or passwd and shadow files, etc.).
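For example, the usual way to grant socket access is membership in the docker group that owns the socket (a sketch; the user name is a placeholder, and the same root-equivalence caveat applies):
sudo groupadd docker          # the group usually already exists
sudo usermod -aG docker <username>
# the user must log out and back in, then:
docker ps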

Openshift: How to test kubernetes features without manually managing administrative accounts/permissions?

I'm attempting to test a single-node dev cluster for OpenShift which I've created. I cannot run any basic commands on the cluster, because I haven't set up properly privileged accounts.
In particular I need to:
run pods which make containers which query service endpoints
query the apiserver through an insecure endpoint
run commands like kubectl get pods
Is there a default account somewhere I can use which can do all of these things? I'd prefer not to manually set up a bunch of complicated user accounts for a low-level development task such as this.
Below are a few, somewhat silly attempts I've made to do this, just as examples
First, I created an "admin" account like this:
sudo -u vagrant $oc login https://localhost:8443 -u=admin -p=admin --config=/data/src/github.com/openshift/origin/openshift.local.config/master/openshift-registry.kubeconfig
Then, I went ahead and hacked around in a few attempts to log in as an admin:
[vagrant@localhost ~]$ sudo chmod 777 /openshift.local.config/master/admin.kubeconfig
[vagrant@localhost ~]$ oc login
Server [https://localhost:8443]:
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Authentication required for https://localhost:8443 (openshift)
Username: admin
Password: admin
Login successful.
Using project "project1".
[vagrant@localhost ~]$ oc get nodes --config=/openshift.local.config/master/admin.kubeconfig
This leads to the following error:
Error from server: User "admin" cannot list all nodes in the cluster
I also get this error leaving the config out:
[vagrant@localhost ~]$ oc get nodes
Error from server: User "admin" cannot list all nodes in the cluster
Is there any easy way to list nodes and do basic kube operations in a standalone development cluster for openshift?
You don't log in when you are using administrative credentials. You simply set KUBECONFIG=admin.kubeconfig. Login takes you through a different flow; there is no magic "admin" user.
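For example (the path below matches the admin.kubeconfig used in the question; adjust it to your own cluster layout):
export KUBECONFIG=/openshift.local.config/master/admin.kubeconfig
oc get nodes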
