We have a problem with Apache Directory Studio (version 2.0.0.v20200411-M15 and earlier).
When we connect to an OpenLDAP instance, we can't see the naming contexts of the Root DSE.
Our OpenLDAP is configured as multi-master, and the strange thing is that we can see the naming contexts on one node but not on the other.
[Screenshot: Root DSE with the naming context entry missing]
[Screenshot: Root DSE showing the naming context entry]
When we try to get the info via ldapsearch, we can see the correct naming context on both nodes:
ldapsearch -H ldap://localhost:16389 -x -D "cn=adminLDAPI,ou=system,dc=XXXXXXX,dc=com" -W -s base -a always "(objectClass=*)" "namingcontexts"
[Screenshot: ldapsearch finds namingcontexts]
If we try to fetch the base DNs via the connection options, nothing is fetched on the "wrong" node.
[Screenshot: no base DN returned]
Entering the base DN manually doesn't work either.
I've checked cn=config and cn=database, but they are identical on both nodes.
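For reference, this is roughly how I compared the database suffixes in cn=config on the two nodes (a minimal sketch, assuming ldapi access with SASL EXTERNAL authentication is enabled; adjust the bind options to your setup):
$ ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config "(olcSuffix=*)" olcSuffix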
Any ideas?
Thanks in advance
I have downloaded the sandbox from Hortonworks (CentOS), then tried to follow the tutorial. It seems like the ambari-admin-password-reset command is not there, i.e. missing. I also tried to log in with PuTTY; the console asked me to change the password, so I did.
Now it seems like the command is there, but I have different passwords for the same user: one for the console and one for PuTTY.
I have tried to find out why the same user 'root' has two different passwords (one for the VirtualBox console and one for PuTTY) that I can log in with. I see different commands on each box. More than that, when I share a folder I can only see it from the VirtualBox console but not from the PuTTY session, which is really frustrating.
How can I ensure that what I see from PuTTY is the same as what I see from the VirtualBox console?
I think it is somehow related to TTY, but I am not sure.
EDIT:
Running these commands on the VirtualBox machine gives the following output:
$ grep "^passwd" /etc/nsswitch.conf
passwd: files sss
$ grep root /etc/passwd
root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
$ getent passwd root
root:x:0:0:root:/root:/bin/bash
EDIT:
I think this is all about Docker containers. It seems that port 2222 on the machine is the SSH port of the HDP 2.5 container, not of the hosting machine.
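One way to check this is to list the container's port mappings (assuming the container is named sandbox, as in the command below):
$ docker port sandbox
If so, the output should include the container's port 22 mapped to host port 2222.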
Now I get another problem: when running
docker exec sandbox ls
it gets stuck. Any help?
Thanks in advance to any helpers.
So now I have had the time to analyze the sandbox VM and write it up for other users.
As you correctly stated in your edit of the question, it's the Docker container setup of the sandbox that causes the confusion, with two separate root users:
Via ssh root@127.0.0.1 -p 2222 you get into the Docker container called "sandbox". This is CentOS release 6.8 (Final), containing all the HDP services, in particular the Ambari service. The configuration enforces a password change at the first login of the root user. Inside this container you can also execute ambari-admin-password-reset and set a password for the Ambari admin.
Via console access you reach the Docker host, running CentOS 7.2; here you can log in with the default root password for the VM, as found in the HDP docs.
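If you are ever unsure which of the two environments you are logged into, checking the OS release is a quick way to tell them apart, since the container is CentOS 6.8 and the host is CentOS 7.2:
$ cat /etc/redhat-release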
Coming to your sub-question about the hanging docker exec: it seems to be a bug in that specific Docker version. If you google it, you will find issues discussing this or similar problems with Docker.
So I thought it would be a good idea to just update the host via yum update. However, this turned out to be a difficult path.
yum tried to update the kernel, but complained that there is not enough space on the boot partition.
So I moved the boot partition to the root partition:
edit /etc/fstab and comment out the boot entry
umount /boot
mv /boot /boot.org
cp -a /boot.org /boot
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
reboot
After that, I found that the Docker configuration was broken and Docker would not start anymore. In the logs it complained:
"Error starting daemon: error initializing graphdriver:
\"/var/lib/docker\" contains other graphdrivers: devicemapper; Please
cleanup or explicitly choose storage driver (-s )"
So I edited /etc/systemd/system/multi-user.target.wants/docker.service and changed the ExecStart setting to:
ExecStart=/usr/bin/dockerd --storage-driver=overlay
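Note that after editing a unit file, systemd has to reload its configuration before the change takes effect:
$ systemctl daemon-reload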
After a service docker start and a docker start sandbox, the container worked again. I could log in to the container, and after an ambari-server restart everything worked again.
And now, with the new Docker version 1.12.2, docker exec sandbox ls works again.
So, to sum up: the docker exec command has a bug in that specific version of the sandbox, but you should think twice before upgrading your sandbox.
I ran into the same issue.
The HDP 2.5 sandbox runs all of its components in a Docker container, but commands like docker exec -it sandbox /bin/bash or docker attach sandbox got stuck.
When I ran a simple ps aux, I found several /usr/bin/docker-proxy processes which looked like:
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 60000 -container-ip 172.17.0.2 -container-port 60000
They probably forward the HTTP ports of the various UIs of the HDP components.
I could SSH into the container IP (here 172.17.0.2), authenticating with root/hadoop. From there, I could use all the "missing" commands like ambari-admin-password-reset.
$ ssh root@172.17.0.2
... # change password
$ ambari-admin-password-reset
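If the container IP differs on your machine, it can be looked up via docker inspect (a sketch, again assuming the container is named sandbox):
$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' sandbox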
NB: I am new to Docker, so there's probably a better way to deal with this.
I'd like to post the instructions for 3.0.1 here.
I followed the instructions for installing Hortonworks version 3.0.1 here: https://youtu.be/5TJMudSNn9c
After running the Docker container, go to your browser and enter "localhost:4200"; that will take you to the in-browser terminal of the container that hosts Ambari. Enter "root" as the login and "hadoop" as the password, change the root password, and then run "ambari-admin-password-reset" to reset the Ambari password.
In order to be able to use sandbox-hdp.hortonworks.com, you need to add the line "127.0.0.1 sandbox-hdp.hortonworks.com" at the end of the /private/etc/hosts file on your Mac.
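For example, you can append that line from a terminal like this (sudo is needed because the hosts file is owned by root):
$ echo "127.0.0.1 sandbox-hdp.hortonworks.com" | sudo tee -a /private/etc/hosts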
If you entered an incorrect password and are locked out, restart into recovery mode: click the power button in the top-right corner, choose Restart from the power-off drop-down, and when the machine boots press the Esc key to get into the recovery menu.
Select "Advanced options" and hit Enter.
Select "Recovery mode" and hit Enter.
Select "root" and hit Enter.
In the root shell, remount the filesystem read-write and list the home directories to find your username:
mount -o remount,rw /
ls /home
Then change the password with:
passwd username
(using your own username instead of "username")
As the last step, enter the new password twice, confirming each with Enter.
Hopefully you have changed your password (:
I've followed these instructions to install Cassandra: http://docs.datastax.com/en/cassandra/2.0/cassandra/install/installDeb_t.html
When I run cqlsh, the terminal replies with:
Connection error: Could not connect to localhost:9160
I read that the issue might be with the configuration file cassandra.yaml.
However, it turned out I can't access it; my /etc/cassandra folder is empty.
How to access cassandra.yaml?
Where is Cassandra stored in my project?
Is there a way to check if Cassandra is actually set up in the project?
The image you have attached shows the ~/.cassandra directory under your home dir. That's not the same as /etc/cassandra. You should be able to confirm this with the following command:
$ ls -al /etc/cassandra/cassandra.yaml
-rw-r--r-- 1 cassandra cassandra 43985 Mar 11 12:46 /etc/cassandra/cassandra.yaml
To verify if Cassandra is even running, this should work for you if you have successfully completed the packaged install:
$ sudo service cassandra status
Otherwise, simply running this should work, too:
$ ps -ef | grep cassandra
When you set up Cassandra, you'll want to set listen_address and rpc_address to the machine's hostname or IP. They're set to localhost by default, so if Cassandra is running, cqlsh should connect to it automatically.
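A quick way to check what those two settings currently are, assuming the packaged install's config location from above:
$ grep -E '^(listen_address|rpc_address):' /etc/cassandra/cassandra.yaml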
My guess is that Cassandra is not starting for you. Check the system.log file, which (for the packaged install) is stored in /var/log/cassandra:
$ cat /var/log/cassandra/system.log
Check out that file, and you might find some clues as to what is happening here.
Also, did you really install Cassandra 2.0? That version has been deprecated, so for a new install you shouldn't go any lower than Cassandra 2.1.
I want to connect to Cassandra but got this error:
$ bin/cqlsh
Connection error: ('Unable to connect to any servers', {'192.168.1.200': error(10061, "Tried connecting to [('192.168.1.200', 9042)]. Last error: No connection could be made because the target machine actively refused it")})
Pretty simple.
The machine is actively refusing the connection because your system does not have Cassandra running on it. Follow these steps to get rid of this trouble:
Install Cassandra from DataStax (DataStax-DDC; Cassandra version 3).
Go to ~\installation\path\DataStax-DDC\apache-cassandra\bin.
Open up cmd there (use Alt+F+P to open it if you are on Windows 8 or later).
Type cassandra -f. This will generate a lot of output in the window, and the last line should be something like INFO 11:32:31 Created default superuser role 'cassandra'.
Now open another cmd window in the same folder.
Type cqlsh.
This should give you a prompt, without any error.
I also discovered that this error doesn't pop up if I use Cassandra v2.x, found here: Archived version of Cassandra. I don't know why :( (if you find out, please comment).
So, if the above steps do not work, you can always go back to Cassandra v2.x.
Cheers.
Check whether you have started the Cassandra server, then provide the host and port as the arguments:
$ bin/cqlsh 127.0.0.1 9042
I ran into the same problem. This worked for me.
Go to any directory, for example E:\ (it doesn't have to be on the same disk as the Cassandra installation).
Create the following directories:
E:\cassandra\storage\commitlogs
E:\cassandra\storage\data
E:\cassandra\storage\savedcaches
Then go to the conf path of your Cassandra installation, in my case:
D:\DataStax-DDC\apache-cassandra\conf
Open cassandra.yaml and edit the lines containing data_file_directories, commitlog_directory, and saved_caches_directory to look like the code below (changing the paths according to where you created the folders):
data_file_directories:
- E:\cassandra\storage\data
commitlog_directory: E:\cassandra\storage\commitlogs
saved_caches_directory: E:\cassandra\storage\savedcaches
Then open cmd in the bin path of your Cassandra installation (I ran it as administrator, but didn't check whether that is necessary), in my case:
D:\DataStax-DDC\apache-cassandra\bin
Run cassandra -f.
Lots of stuff will be logged to your screen.
You should now be able to run cqlsh and all other stuff without problems.
Edit: The operating system was Windows 10, 64-bit.
Edit 2: If it stops working after a while, check whether the service is still running using nodetool status. If it isn't, follow these instructions.
I also faced the same problem on a 32-bit Windows 7 machine.
Check that you have Java installed correctly and the JAVA_HOME variable set.
Once you have checked the Java installation and set JAVA_HOME, uninstall Cassandra and install it again.
Hopefully this will solve the problem. Mine was solved after applying the above two steps.
You need to specify the host, user, and password for the cqlsh connection. The default Cassandra user is cassandra, with password cassandra.
$ bin/cqlsh <host> -u cassandra -p cassandra
I also had the same problem. I tried many methods found on Google and YouTube, but none of them worked in my case. Finally, I applied the following three steps and it worked for me:
Create a folder whose path contains no spaces on C: or D: (whichever is your system drive), e.g. C:\cassandra.
Install Cassandra in this folder instead of in "Program Files". After installation it will look like this: C:\cassandra\apache-cassandra-3.11.6.
Copy the installed Python 2.7 into the bin folder, i.e. C:\cassandra\apache-cassandra-3.11.6\bin.
Now your setup is ready for work.
There is no special method to connect with cqlsh; it is as simple as below:
$ bin/cqlsh 127.0.0.1 9042
or, if you are on an older version of Cassandra:
$ bin/cqlsh 127.0.0.1 9160
(where 127.0.0.1 stands for the host IP)
Don't forget to check port connectivity if you are connecting cqlsh to a remote host. You can also use a username and password if you have enabled authentication; by default it is disabled.
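For the connectivity check, something like nc will do (a sketch; 9042 is the native protocol port, substitute your own host):
$ nc -zv <host> 9042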
I have a problem with replication of hosts enrolled in FreeIPA between my IPA server and its replica (both CentOS 6.6, ipa-server-3.0.0).
If a host is enrolled via the replica, I can't see it in the master's web UI, although user replication works and the host seems to be present in the DNS records on both master and replica.
This behaviour stops me from being able to manage users and groups from one web UI, as I can't assign access to hosts that are missing from the interface.
To enrol hosts I use Puppet with the following command:
/usr/sbin/ipa-client-install --realm DOMAIN.COM --password password1 --principal admin@DOMAIN.COM --mkhomedir --domain domain.com --server master.domain.com --server replica.domain.com --enable-dns-updates --force --unattended
I tried using that command with --force-join and --fixed-primary, however the results were the same:
The command performs discovery with random results, i.e. sometimes it chooses the master and other times the replica.
A bit about how I built the master:
ipa-server-install --no-ntp --setup-dns --no-reverse --no-forwarders -n domain.com --hostname master.domain.com -p password1 -a password2 -r DOMAIN.COM
and replica:
ipa-replica-prepare replica.domain.com --ip-address 10.0.0.2
ipa-replica-install --setup-ca --setup-dns --no-forwarders /var/lib/ipa/replica-info-replica.domain.com.gpg
Any help will be appreciated. The ports are open as per the Red Hat manual, and CLI commands such as ipa-replica-manage list show a good relationship between master and replica.
It is kind of disappointing that no one was able to help me; anyway,
I found that the dirsrv logs contained the following error:
sasl_io_recv failed to decode packet for connection
Apparently this was due to a bug in IPA where not enough memory is assigned to nsslapd-sasl-max-buffer-size, which was set to the default of 64 KB;
here is the ticket for it: https://fedorahosted.org/389/ticket/47457.
One would expect IPA 3.0.0 to be patched to resolve this issue, but unfortunately it is not.
As I was not able to find the patch, I had to manually increase the buffer size as follows.
To check the buffer size:
ldapsearch -b cn=config -D "cn=Directory Manager" -W | grep nsslapd-sasl-max-buffer-size
To enter the LDAP command line:
ldapmodify -D "cn=directory manager" -w password1 -p 389 -h 127.0.0.1 -x -a
To modify the entry (from http://blog.christophersmart.com/):
dn: cn=config
changetype: modify
replace: nsslapd-sasl-max-buffer-size
nsslapd-sasl-max-buffer-size: 2097152
(finish by pressing Enter on a blank line to submit the change)
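To confirm that the new value took effect, rerun the same search as above:
$ ldapsearch -b cn=config -D "cn=Directory Manager" -W | grep nsslapd-sasl-max-buffer-size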
I am new to CouchDB and have run into an odd issue: I can create a database but can't delete it.
I am setting up a test framework which will create the test version of the DB at the start of the test run and delete it at the end, but I don't really want the framework to SSH to the server as suggested in this answer: Delete couchDB databases.
My setup is CentOS 7 (from the released minimal image) running in VirtualBox 4.3. I installed CouchDB from the EPEL repository; the version is reported as 1.6.1. I can manage the DB with Futon to create the database and add and delete documents. Deleting the DB in Futon hangs; deleting with curl returns 404 Not Found.
$ curl -X PUT http://dbserver:5984/test
{"ok":true}
$ curl -X DELETE http://dbserver:5984/test
{"error":"not_found","reason":"missing"}
Based on the CouchDB documentation, that is the correct URL to delete the DB. I disabled SELinux, but that had no effect. No CouchDB security has been enabled; all settings are left at their defaults.
Why can't I delete the DB?
The problem seems to be a missing SELinux policy.
Running SELinux in permissive mode makes the "verify installation" checks work for me (CentOS 7, CouchDB 1.6.1 installed via yum from the EPEL repository).
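For reference, this is how SELinux can be switched to permissive mode temporarily:
$ sudo setenforce 0   # lasts until the next reboot
$ getenforce          # should now report "Permissive"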
Looking in /var/log/audit/audit.log reveals:
type=AVC msg=audit(1422956916.341:371): avc: denied { create } for pid=2188 comm="beam" name=".test_suite_db_design" scontext=system_u:system_r:rabbitmq_beam_t:s0 tcontext=system_u:object_r:couchdb_var_lib_t:s0 tclass=dir
So, SELinux denied the couchdb process the right to create that directory.
Using the audit2allow tool, one may create the missing rules, which can then be enabled using the semodule command:
grep couchdb /var/log/audit/audit.log | audit2allow -M couchdbvarlib
semodule -i couchdbvarlib.pp
There's already a bug reported against CouchDB in Fedora: #1098802.