XWiki Docker to LDAP - Linux

Trying to get XWiki to use LDAP for authentication.
The main warnings in the Docker log of XWiki are about Solr and "XWiki Failed to get the configured AuthService".
Tests where I try to enter user logins and passwords give me WARN nyicationFailureLoggerListener in the Docker logs, and I can't find any info on what it is.
If you have any suggestions, or Docker logs of successful LDAP connections that might be helpful, please fill me in.
I've created my .cfg file based on the example at https://extensions.xwiki.org/xwiki/bin/view/Extension/LDAP/Authenticator/#H8.3.x
Installed all related extensions.
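For reference, a minimal sketch of the relevant xwiki.cfg entries, assuming the contrib LDAP authenticator from the extension page above; all host and DN values below are placeholders for your environment, not values from the original post:

```properties
# Use the contrib LDAP authenticator as the auth service
xwiki.authentication.authclass=org.xwiki.contrib.ldap.XWikiLDAPAuthServiceImpl
# Enable LDAP authentication
xwiki.authentication.ldap=1
# Placeholder host/port -- point these at your LDAP server
xwiki.authentication.ldap.server=ldap.example.com
xwiki.authentication.ldap.port=389
xwiki.authentication.ldap.base_DN=ou=people,dc=example,dc=com
# {0} and {1} are replaced at login time by the user's login and password
xwiki.authentication.ldap.bind_DN=uid={0},ou=people,dc=example,dc=com
xwiki.authentication.ldap.bind_pass={1}
xwiki.authentication.ldap.UID_attr=uid
```

If the file is wrong or ignored, XWiki silently falls back to its default auth service, which matches the "Failed to get the configured AuthService" warning in the question.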

Related

Guacamole LDAP 1.3.0 Linux native installation

I have done a native installation of guacamole-1.3.0 on my Linux CentOS machine.
I need to integrate Guacamole with LDAP, but I am facing some difficulty with the configuration.
I created /etc/guacamole/extensions and added the ldap-auth 1.3.0 jar inside it. Tomcat is installed and running on the server, and the guacd service is started. The installation is done, but the LDAP integration is where I'm stuck.
I need help on how to check and log in using LDAP auth.
Kindly help me with the steps for LDAP-Guacamole integration for version 1.3.0.
Here are the Guacamole LDAP properties (vim /etc/guacamole/guacamole.properties):
# Auth provider class
auth-provider: net.sourceforge.guacamole.net.auth.ldap.LDAPAuthenticationProvider
# LDAP properties
ldap-hostname: stackoverflow.local
# 636 for LDAPS; use 389 for an unencrypted LDAP connection (3268 for the AD global catalog)
ldap-port: 636
# Use ONE of the following base DNs:
ldap-user-base-dn: CN=Users,DC=stackoverflow,DC=local
# ldap-user-base-dn: dc=stackoverflow,dc=local
ldap-search-bind-dn: cn=ldap1,CN=Users,dc=stackoverflow,dc=local
# Password for the ldap1 user
ldap-search-bind-password: XXXXXXXXXXX
ldap-username-attribute: sAMAccountName
ldap-user-search-filter: (&(objectClass=user)(!(objectCategory=computer)))
ldap-max-search-results: 4000
mysql-auto-create-accounts: true
ldap-follow-referrals: true
Then restart the tomcat, mysqld, and guacd services:
systemctl restart tomcat mysqld guacd
Log in with a domain user simply by entering the domain username and password on the Guacamole dashboard.
If you need more help, reply back here with logs and I can help out.

jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection Message [Auth fail]

I am learning to use Jenkins to deploy a .Net 5.0 application on an AWS EC2 server. This is the first time I am using a Linux server and Jenkins for .Net (I'm a lifelong Windows guy), and I am facing an error while trying to publish my artifacts over SSH to the web server.
My setup:
Jenkins server is an AWS EC2 Linux AMI server.
Web Server is also an AWS EC2 Linux AMI server.
My Jenkins is correctly installed and working. I am able to build and run unit test cases without any issues.
For Deploy, I am using 'Publish Over SSH' plugin, and I have followed all steps to configure this plugin as mentioned here https://plugins.jenkins.io/publish-over-ssh/.
However, when I try 'Test Configuration', I get the below error:
Failed to connect or change directory
jenkins.plugins.publish_over.BapPublisherException: Failed to connect and initialize SSH connection. Message: [Failed to connect session for config [WebServer]. Message [Auth fail]]
I did a ping test from Jenkins server to Web Server, and it is a success.
I'm using the .pem key in the 'Key' section of 'Publish over SSH'. This key is the same key I use to SSH into the web server.
The below link suggests many different solutions, but none is working in my case.
Jenkins Publish over ssh authentification failed with private key
I was looking at the below link which describes the same problem,
Jenkins publish over SSH failed to change to remote directory
However, in my case I have kept 'Remote Directory' empty. I don't know if I have to specify any directory here. Anyway, I tried creating a new directory under the home directory of user ec2-user as '/home/ec2-user/publish' and then used this path as the Remote Directory, but it still didn't work.
Screenshot of my settings in Jenkins:
I would appreciate if anyone can point me to the right direction or highlight any mistake I'm doing with my configuration.
In my case the following steps solved the problem.
The solution is based on Ubuntu 22.04.
Add two lines to /etc/ssh/sshd_config:
PubkeyAuthentication yes
PubkeyAcceptedKeyTypes +ssh-rsa
Then restart the sshd service:
sudo service sshd restart
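To confirm that the key itself is accepted by the web server, you can test the same .pem key manually from the Jenkins machine; the key path and host below are placeholders:

```shell
# From the Jenkins server: try the exact key the plugin will use.
# If this prints "ok", the SSH side is fine and the problem is in the
# plugin configuration; an "Auth fail"-style error here means the key
# or sshd configuration is still wrong.
ssh -i /path/to/webserver.pem ec2-user@<web-server-ip> 'echo ok'
```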
You might consider the following:
a. From the screenshot you’ve provided, it seems that you have checked the 'Use password authentication, or use different key' option, which will require you to add your key and password (inputs from these fields will be used when connecting to your server via SSH). If you use the same SSH key and passphrase/password on all of your servers, you can uncheck that box and just use the config you have specified above.
b. You might also check if port 22 of your web server allows inbound traffic from the security group where your Jenkins server/EC2 instance is running. See reference here.
c. Also, make sure that the remote directory you have specified is existing otherwise the connection may fail.

Heroku Login cmd permission issue for Windows 10

I installed heroku cli.
$ heroku --version
heroku-cli/6.14.36-15f8a25 (windows-x64) node-v8.7.0
Started cmd.exe as admin
$ heroku login (asked me for username and password)
After I provided the username and password, I got the following error:
! EACCES: connect EACCES 34.234.38.27:443
I tried using Git Bash to log in to Heroku and get the same error message with the following command:
winpty heroku login
I have looked everywhere I could. Most places have closed the tickets, but there is no solution anywhere. I also tried deleting the heroku folder from appdata/local/, but that didn't help either. I know it's a permissions issue. I get a better error message when I try
$ heroku update
heroku-cli: Updating plugins... done
! Get https://cli-assets.heroku.com/branches/v6/manifest.json: dial tcp 52.84.64.82:443: connectex: An attempt was made to access a socket in a way forbidden by its access permissions.
I want to host a small NodeJS app for free. What are my other options? If I have to spend so much time just deploying, then maybe Heroku is not the right option.
Maybe your problem is the firewall or the antivirus. Add an exception for the address it shows you (34.234.38.27:443) and test again. Also check that your antivirus is not blocking the Heroku CLI, and always run it in administrator mode.
You can also enable debugging during login to get more information. Make sure to redact your password from the output before sending it to support! On Windows, enable it by running set HEROKU_DEBUG=true and set HEROKU_DEBUG_HEADERS=1 before heroku login.
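The debug steps above, as individual commands in a Windows cmd session:

```shell
:: Enable verbose Heroku CLI debugging for this cmd session
set HEROKU_DEBUG=true
set HEROKU_DEBUG_HEADERS=1
:: Log in again to capture the failing request in the debug output
heroku login
```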
Good luck with your app my friend!

Openshift: How to test kubernetes features without manually managing administrative accounts/permissions?

I'm attempting to test a single-node dev cluster for OpenShift which I've created. I cannot run any basic commands on the cluster, because I haven't set up properly privileged accounts.
In particular I need to:
run pods which make containers which query service endpoints
query the apiserver through an insecure endpoint
run commands like kubectl get pods
Is there a default account somewhere I can use which can do all of these things? I'd prefer not to manually set up a bunch of complicated user accounts for a low-level development task such as this.
Below are a few, somewhat silly attempts I've made to do this, just as examples
First, I created an "admin" account like this:
sudo -u vagrant $oc login https://localhost:8443 -u=admin -p=admin --config=/data/src/github.com/openshift/origin/openshift.local.config/master/openshift-registry.kubeconfig
Then, I went ahead and hacked around in a few attempts to login as an admin:
[vagrant@localhost ~]$ sudo chmod 777 /openshift.local.config/master/admin.kubeconfig
[vagrant@localhost ~]$ oc login
Server [https://localhost:8443]:
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Authentication required for https://localhost:8443 (openshift)
Username: admin
Password: admin
Login successful.
Using project "project1".
[vagrant@localhost ~]$ oc get nodes --config=/openshift.local.config/master/admin.kubeconfig
This leads to the following error:
Error from server: User "admin" cannot list all nodes in the cluster
I also get this error leaving the config out:
[vagrant@localhost ~]$ oc get nodes
Error from server: User "admin" cannot list all nodes in the cluster
Is there any easy way to list nodes and do basic kube operations in a standalone development cluster for openshift?
You don't login when you are using administrative credentials. You simply set KUBECONFIG=admin.kubeconfig. Login is taking you through a different flow - there is no magic "admin" user.
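In shell form, using the kubeconfig path from the earlier oc get nodes attempt:

```shell
# Point the client at the cluster-admin kubeconfig directly instead of logging in
export KUBECONFIG=/openshift.local.config/master/admin.kubeconfig
# Administrative operations now work without creating a separate "admin" user
oc get nodes
```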

WARNING keystoneclient.middleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required;

I have installed OpenStack following this.
I am trying to install Savanna following the tutorial from here
When I run this command
savanna-venv/bin/python savanna-venv/bin/savanna-api --config-file savanna-venv/etc/savanna.conf
I get this error:
WARNING keystoneclient.middleware.auth_token [-] Configuring auth_uri to point to the public identity endpoint is required; clients may not be able to authenticate against an admin endpoint (7944) wsgi starting up on <IP>
Try connecting to the database:
mysql -u username -p
then run use mysql;
and then select user, host from user; and check the hosts and users assigned in the output. Revert with a screenshot to make it clearer.
Also share the entries of the /etc/hosts file.
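The database check above can be run as a single non-interactive command; adjust the username to a MySQL account that can read the mysql system database:

```shell
# List the user/host pairs defined in MySQL's grant tables
mysql -u username -p -e "SELECT user, host FROM mysql.user;"
```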
