Authentication error from server: SASL(-13): user not found: unable to canonify - linux

OK, so I'm trying to install and configure svnserve on my Ubuntu server. So far so good, up to the point where I try to configure SASL (to prevent plain-text passwords).
So: I installed svnserve and made it run as a daemon (I also installed it as a startup script, with the command svnserve -d -r /var/svn).
My repository is in /var/svn and has the following configuration (found in /var/svn/myrepo/conf/svnserve.conf; I left the comments out):
[general]
anon-access = none
auth-access = write
realm = my_repo
[sasl]
use-sasl = true
min-encryption = 128
max-encryption = 256
On to SASL: I created an svn.conf file in /usr/lib/sasl2/:
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /etc/my_sasldb
mech_list: DIGEST-MD5
I created it in that folder because the SVN book suggests it: http://svnbook.red-bean.com/nightly/en/svn.serverconfig.svnserve.html#svn.serverconfig.svnserve.sasl (and also because that folder existed and was listed as a result when I executed locate sasl).
Right after that I executed this command:
saslpasswd2 -c -f /etc/my_sasldb -u my_repo USERNAME
It asked me for a password twice, which I supplied. All going great.
When issuing the following command:
sasldblistusers2 -f /etc/my_sasldb
I get the - correct, as far as I can see - result:
USERNAME#my_repo: userPassword
Restarted svnserve, also restarted the whole server, and tried to connect.
This was the result from my TortoiseSVN client:
Authentication error from server: SASL(-13): user not found: unable to canonify user and get auxprops
I have no clue at all what I'm doing wrong. I've been scouring the web for the past few hours, but the only thing I've found is that I might need to move the svn.conf file to another location - for example, the install location of Subversion itself. which svn returns /usr/bin/svn, so I moved svn.conf to /usr/bin (although that doesn't feel right to me).
Still doesn't work, even after a new reboot.
I'm running out of ideas. Anyone else?
EDIT
I tried changing this (according to what some other forums on the internet told me to do): in the file /etc/default/saslauthd, I changed
START=no
MECHANISMS="pam"
to
START=yes
MECHANISMS="sasldb"
(Actually I had already changed START=no to START=yes before, but I forgot to mention it). But still no luck (I did reboot the whole server).

It looks like svnserve uses default values for SASL...
Check that /etc/sasl2/svn.conf is readable by the svnserve process owner.
If /etc/sasl2/svn.conf is owned by user root, group root, with mode -rw------- (0600), svnserve uses the default values.
You will not be warned by any log file entry.
See section 4 of https://svn.apache.org/repos/asf/subversion/trunk/notes/sasl.txt:
This file must be named svn.conf, and must be readable by the svnserve process.
(It took me more than 3 days to understand both svnserve-SASL-LDAP and this pitfall at the same time.)
I recommend installing the package cyrus-sasl2-doc and reading the section "Cyrus SASL for System Administrators" carefully.
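A quick way to check for the permissions pitfall described above (a minimal sketch; adjust the mode or ownership to match whatever unprivileged user your svnserve daemon actually runs as):
ls -l /etc/sasl2/svn.conf
# if this shows 0600 root:root and svnserve does not run as root,
# make the file readable for the daemon, e.g.:
chmod 644 /etc/sasl2/svn.conf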
I expect this is caused by the way svnserve uses the SASL API in this call:
result = sasl_server_new(SVN_RA_SVN_SASL_NAME,
                         hostname, b->realm,
                         localaddrport, remoteaddrport,
                         NULL, SASL_SUCCESS_DATA,
                         &sasl_ctx);
if (result != SASL_OK)
  {
    svn_error_t *err = svn_error_create(SVN_ERR_RA_NOT_AUTHORIZED, NULL,
                                        sasl_errstring(result, NULL, NULL));
    SVN_ERR(write_failure(conn, pool, &err));
    return svn_ra_svn__flush(conn, pool);
  }
As you can see, svnserve does not foresee handling this access failure specially; only OK or a generic error is expected...

I looked in /var/log/messages and found
localhost svnserve: unable to open Berkeley db /etc/sasldb2: No such file or directory
When I created the sasldb at that path and got the permissions right, it worked. It looks like svnserve ignores, or does not use, the configured sasldb_path.
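For example (a sketch only, reusing the saslpasswd2 command from the question but pointed at the default /etc/sasldb2 path; the exact ownership depends on which user your svnserve daemon runs as):
saslpasswd2 -c -f /etc/sasldb2 -u my_repo USERNAME
chmod 640 /etc/sasldb2
# chown the file to the svnserve user if the daemon does not run as root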
There was another suggestion that rebooting solved the problem but that option was not available to me.

django.db.utils.DatabaseError: Error while trying to retrieve text for error ORA-01804

Q1. What versions are we using?
Ans.
Python 3.6.12
OS : CentOS 7 64-bit
DB : Oracle 18c
Django 2.2
cx_Oracle : 8.1.0
Q2. Describe the problem
Ans. When running the server with "python3 manage.py runserver",
the application is able to contact the Oracle DB and show the Django administration page, and login also works.
But when we access the application using the Apache (httpd) based URL over the secure SSL port, we do see the Django page and the admin page as well, but logging in to the admin page fails with an Internal Server Error.
In the logs, we see
"django.db.utils.DatabaseError: Error while trying to retrieve text for error ORA-01804"
cx_Oracle is otherwise able to connect to the database properly; another application also uses the same database behind the same httpd proxy and works fine.
Q3. Show the directory listing where your Oracle Client libraries are installed (e.g. the Instant Client directory). Is it 64-bit or 32-bit?
Ans. 64-bit
Q4. Show what the PATH environment variable (on Windows) or LD_LIBRARY_PATH (on Linux) is set to?
LD_LIBRARY_PATH=/srv/vol/db/oracle/product/18.0.0/dbhome_1/lib:/lib:/usr/lib
PATH=$ORACLE_HOME/bin:/srv/vol/db/oracle/product/18.0.0/dbhome_1/lib:$PATH
Q5. Show any Oracle environment variables set (e.g. ORACLE_HOME, ORACLE_BASE).
ORACLE_HOME=/srv/vol/db/oracle/product/18.0.0/dbhome_1
TNS_ADMIN=$ORACLE_HOME/network/admin
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
ORACLE_BASE=/srv/vol/db/oracle
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/lib
Any suggestions/help is highly appreciated.
Thank you
I found the problem.
First I removed all the variable declarations from /etc/sysconfig/httpd and checked: the application was still able to access the lib files, so those declarations were now redundant.
Then I undid all the variable declarations made earlier in the .localsh and .localrc files of the OS users, to start from scratch and go step by step to see where it breaks.
At that point, cx_Oracle was looking for the client libraries in the wrong directory,
$ORACLE_HOME/client_1/lib
instead of
$ORACLE_HOME/lib
DPI-1047: Cannot locate a 64-bit Oracle Client library: "$ORACLE_HOME/client_1/lib/libclntsh.so: cannot open shared object file: No such file or directory". See https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html for help
I did not have any subfolder named "client_1" inside dbhome_1,
so I just created a symlink client_1 that points to dbhome_1 (still unsure about this, but at least it works :) ), as sketched below.
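Something like this (a sketch only; it assumes ORACLE_HOME already points at the dbhome_1 directory listed above):
ln -s "$ORACLE_HOME" "$ORACLE_HOME/client_1"   # client_1 -> dbhome_1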
So now this error was gone, but ORA-01804 was back again. 😑
I had read somewhere that this error can be fixed by adding "libociei.so", but I did not have one on my instance, so I generated it using these commands:
mkdir -p $ORACLE_HOME/rdbms/install/instantclient/light
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk igenliboci
Then I just moved this libociei.so file from $ORACLE_HOME/instantclient to $ORACLE_HOME/lib.
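In shell terms, roughly (assuming the library really was generated under $ORACLE_HOME/instantclient, as described above):
mv "$ORACLE_HOME/instantclient/libociei.so" "$ORACLE_HOME/lib/"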
Now there was a new error (so.. progress 😉 ):
ORA-12546 - TNS Permission Denied.
This was easy to solve 😀
I used this command to address it:
setsebool -P httpd_can_network_connect on
And...... That was all! It worked.

puppet: Could not back up <file>: Got passed new contents for sum

I had a question I was hoping someone might have an answer to. Essentially, what I'm trying to do is ensure I'm always using a fixed, slightly older version of phpunit, which I've placed in my module's file resources.
The manifest:
file { "/usr/bin/phpunit":
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => 0755,
  source => "puppet:///modules/php/phpunit",
}
Preparation: I download the current ('wrong') version of phpunit and place it in /usr/bin.
So the first puppet run succeeds:
Notice: Compiled catalog for <hostname> in environment production in 3.06 seconds
Notice: /Stage[main]/Php/File[/usr/bin/phpunit]/content: content changed '{md5}9f61f732829f4f9e3d31e56613f1a93a' to '{md5}38789acbf53196e20e9b89e065cbed94'
Notice: /Stage[main]/Httpd/Service[httpd]: Triggered 'refresh' from 1 events
Notice: Finished catalog run in 15.86 seconds
Then I download the current (still 'wrong') version of phpunit and place it in /usr/bin again.
This time the puppet run fails.
Notice: Compiled catalog for <hostname> in environment production in 2.96 seconds
Error: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
Error: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
Error: /Stage[main]/Php/File[/usr/bin/phpunit]/content: change from {md5}9f61f732829f4f9e3d31e56613f1a93a to {md5}38789acbf53196e20e9b89e065cbed94 failed: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
What gives? If I delete the file ( /var/lib/puppet/clientbucket/9/f/6/1/f/7/3/2/9f61f732829f4f9e3d31e56613f1a93a/ ) from my filebucket it will work again... for the next run, but not the one after that.
What am I doing wrong?
I'd appreciate any input and thanks in advance.
Been having this error as well. I solved it with a combination of two previous answers.
Firstly I had to delete /var/lib/puppet/clientbucket on the client node by running:
sudo rm -r /var/lib/puppet/clientbucket
Just doing this will only let it run once more.
Then I had to set backup => false to stop it recreating the file; leaving out either step failed to solve it for me. The accepted answer is incorrect in saying there is
"no solution other than upgrading".
I was able to fix the same problem by removing /var/lib/puppet/clientbucket on the client node.
This node has been running out of disk space, so puppet has probably incorrectly stored empty files there.
As a workaround, you can set backup => false in the file resource. This is a little unsafe, of course.
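Applied to the manifest from the question, that might look like this (a sketch; backup => false simply tells Puppet not to send the replaced file's contents to the filebucket):
file { "/usr/bin/phpunit":
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => 0755,
  backup => false,
  source => "puppet:///modules/php/phpunit",
}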
This has no solution other than to upgrade, since there's a bug in certain versions of Puppet where files containing both UTF-8 and binary characters are handled incorrectly, which results in this error message.
https://tickets.puppetlabs.com/browse/PUP-1038
The ridiculously overcomplicated workaround I used is to put a .tar file in the file resource, which notifies an exec that untars it and places the actual executable in the correct directory, making sure the timestamp of the latter is newer than that of the former.
It's far from ideal but it works in cases like mine where upgrading puppet to the most current version isn't an attractive option.

Problems with Exec PeopleCode from PeopleSoft Application Engine

On a Unix server, I am running an application engine via the process scheduler.
In it, I am attempting to use the Unix "zip" command from within an "Exec" PeopleCode function.
However, I only get the error
PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I have tried it several ways. The most logical approach, I thought, was to change directory back to the root, then change to the specified directory so that I could easily use the zip command, such as the following:
Exec("cd / && cd /opt/psfin/pt850/dat/PSFIN1/PYMNT && zip INVREND INVREND.XML");
1643 12.20.34 0.000048 72: Exec("cd /opt/psfin/pt850/dat/PSFIN1/PYMNT");
1644 12.20.34 0.001343 PS_Exec(P): Error executing batch command with reason: No such file or directory (2)
I've even tried the following, just to see if anything works from within an Exec:
Exec("ls");
Sure enough, it gave the same error.
Now, some of you may be wondering: does the account associated with the process scheduler actually have authority on this particular directory path on the server? Well, I was able to create the XML file mentioned in the previous command with no problems.
I just cannot seem to be able to modify it with the Exec issuance of Unix commands.
I'm wondering if this is a rights-and-permissions issue on the Unix server with regard to the operator ID that the process scheduler runs under. However, given that it can create and write to a file there, I cannot understand why the Exec command would be met with any resistance. Just my gut shot in the dark...
Any help would be GREATLY appreciated!!!
Thanks,
Flynn
Not sure if you're still having an issue, but in your Exec code, adding the optional %FilePath_Absolute constant should help. When that constant is left off, PS automatically prefixes all commands with <PS_HOME>. You'll have to specify absolute paths with this flag on though. I've changed the command to something that should work.
Exec("zip /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND /opt/psfin/pt850/dat/PSFIN1/PYMNT/INVREND.XML", %FilePath_Absolute);
The documentation at PeopleBooks is a little confusing sometimes, but it explains it fairly well in this case.
You can always store the absolute location in a variable and prefix it to your commands so you don't have to keep typing out /opt/psfin/pt850/dat/PSFIN1/PYMNT/, as in the sketch below.
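For example (a hypothetical PeopleCode sketch building on the command above; &dir is an illustrative variable name, not from the original post):
Local string &dir = "/opt/psfin/pt850/dat/PSFIN1/PYMNT/";
Exec("zip " | &dir | "INVREND " | &dir | "INVREND.XML", %FilePath_Absolute);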

How to add Dell EqualLogic to Nagios

Note: all firmware and models are compatible; that is why nothing is posted about them.
I've been working on this now for a few hours (reading manuals and such), so I'm not just coming here right out of the blue. I am working on a PRE-EXISTING Nagios server where there are several other existing plugins and checks running and working. Now I want to add another server to check, so I made the following modifications:
First and foremost, I added a file to /usr/local/nagios/libexec named check_equallogic.sh. The permissions are 755, the same as all the others. I have chowned it to nagios:nagios, and the listing shows the owner as nagios.
I then added a command to the commands.cfg file in /usr/local/nagios/etc/objects that shows the following:
# 'check_equallogic' command definition
define command{
command_name check_equallogic
command_line $USER1$/check_equallogic -H $HOSTADDRESS$ -C $ARG1$ -t $ARG2$ $ARG3$
}
Following this, I created a file named equallogic.cfg in the objects directory and it contains (more or less):
define host{
use linux-server ; Inherit default values from a template
host_name 172.16.50.11 ; The name we're giving to this device
alias EqualLogic ; A longer name associated with the device
address 172.16.50.11 ; IP address of the device
contact_groups admins
}
# Check EqualLogic information
define service{
use generic-service
host_name 172.16.50.11
service_description General Information
check_command check_equallogic!public!info
}
After ensuring that the permissions are okay for all files, I restarted the Nagios service with no errors. When I go into the web GUI, I get the following error AFTER the check runs:
(Return code of 127 is out of bounds - plugin may be missing)
Extra, probably unrelated problem
Furthermore, when I log into the EqualLogic server, under Audit logs I get the following error:
Level: AUDIT
Time: 26/05/2014 3:59:13 PM
Member: ps4100-1
Subsystem: agent
Event ID: 22.7.1
SNMP packet validation failed, request received from 172.16.10.11
An snmpwalk receives a timeout, whereas others succeed. I will work on importing the MIBs tomorrow. The reason I mention it is that I want to make sure it is only a MIB issue on the SNMP side. If it is, then ignore this part.
I am entirely unsure of what to do here.
This doesn't look like a MIBs issue at all.
If snmpwalk fails, your device is not configured properly for snmp or the credentials in your possession are wrong.
Furthermore, on a general note, it is bad practice to create command definitions for untested plugins. First you need to make sure that your plugin works from the command line; only then do you add it to Nagios' config (see the sketch below).
Since I don't see this essential step in what you wrote, I will assume you didn't test the plugin.
If the plugin does not work and you need help with that, please open a new question.
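For example, a manual test run as the nagios user might look something like this (a sketch only; the exact flags depend on the check_equallogic script you installed, and the values mirror the service definition above):
sudo -u nagios /usr/local/nagios/libexec/check_equallogic.sh -H 172.16.50.11 -C public -t info
echo $?
# 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN;
# 127 means the shell could not find or execute the command, which is what
# the "Return code of 127 is out of bounds" message usually indicates.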

Mint - Stored SVN password was changed and now unable to commit

I have a copy of a repo on my localhost with a saved username / password for the SVN repo.
The problem is that I changed my svn password (and would like to keep it that way) but every time I try to svn commit, it is asking for my GNOME keyring password (which I enter correctly). This is odd in the first place because I never had it ask me this before.
Then, after entering my password to the keyring, I get this error message:
svn: OPTIONS of 'PATH_TO_CHANGED_FILES': authorization failed: Could not authenticate to server: rejected Basic challenge (REPO_URL)
This is happening on 2 repos that I have but a 3rd one is just fine.
When I disable authentication on the server, everything commits just fine and if I try to update / commit from another server, it also works just fine.
I tried adding the following lines to my ~/.subversion/servers:
store-passwords = no
store-plaintext-passwords = no
And I also tried adding the following lines to my ~/.subversion/config:
store-passwords = no
store-auth-creds = no
But those config file changes do nothing.
Is there a way for my localhost svn to forget the username and passwords I have entered for these repos (they were saved before) so I can get back to everything?
I was able to solve this by deleting the keyring file for MATE. It is a bit of a brute-force way of doing it, but it worked. You can delete the keyring file for MATE with the following command:
rm ~/.config/mate/keyrings/*.keyring
I don't know how to remove the stored data (the old password for the repo) from the keyring, but you can try to replace it.
From the console, run any SVN command that requires authentication, with these additional options:
Global options:
--username ARG : specify a username ARG
--password ARG : specify a password ARG
and then test communication with the repository in the usual way.
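For example (a hypothetical invocation; substitute your own working copy and credentials):
svn update --username YOUR_USER --password YOUR_NEW_PASSWORD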
About the keyring password request:
Check the settings in ~/.subversion/config, [auth] section, for password-stores =
Check the settings in ~/.subversion/servers, [global] section, for
store-passwords =
store-plaintext-passwords =
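For reference, those lines might look like this (a sketch; leaving password-stores empty disables keyring-backed password stores, which is an assumption about what you want here):
# ~/.subversion/config
[auth]
password-stores =

# ~/.subversion/servers
[global]
store-passwords = no
store-plaintext-passwords = no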
