Context: GitLab 8 with external nginx and PostgreSQL on Ubuntu 15.04. Everything worked with GitLab 7.10, and I started with a fresh install to avoid upgrade problems.
In the gitlab.rb there is:
gitlab_rails['db_adapter'] = "postgresql"
gitlab_rails['db_encoding'] = "unicode"
gitlab_rails['db_database'] = "gitlabdb"
gitlab_rails['db_pool'] = 10
gitlab_rails['db_username'] = "gitlab"
When I run a reconfigure and "gitlab-rake gitlab:setup" there is no problem, and the database gets recreated, so far looking good. Unfortunately the page doesn't load and I get a 500; the log file tells me that it cannot log in with the given password. I made the database accept all connections (without a password) and then got to this weird error:
ActiveRecord::NoDatabaseError (FATAL: database "gitlabhq_production" does not exist
Nowhere in the config files is a database called gitlabhq_production mentioned, so I'm clueless here. Can you help out?
It was an old instance of GitLab that was still hanging around and interfering. A reboot helped.
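For anyone hitting the same 500 later: one way to see which database the running instance actually uses is to inspect the generated config (the paths and commands below are the usual Omnibus ones, not taken from the original post; adjust if your layout differs):
# show the effective production database settings GitLab was generated with
sudo grep -A 8 '^production:' /var/opt/gitlab/gitlab-rails/etc/database.yml
# open a database console against exactly that configuration
sudo gitlab-rails dbconsole
If the database name shown there is gitlabhq_production while gitlab.rb says gitlabdb, the reconfigure step has not actually picked up the gitlab.rb changes.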
Q1. What versions are we using?
Ans.
Python 3.6.12
OS : CentOS 7 64-bit
DB : Oracle 18c
Django 2.2
cx_Oracle : 8.1.0
Q2. Describe the problem
Ans. When running the server with "python3 manage.py runserver", the application is able to contact the Oracle DB, the Django administration page is shown, and login works.
But when we access the application through the Apache (httpd) URL over the secure SSL port, we still see the Django page and the admin page, but logging in to the admin page fails with an internal server error.
In the logs, we see
"django.db.utils.DatabaseError: Error while trying to retrieve text for error ORA-01804"
cx_Oracle is otherwise able to connect to the database properly; another application uses the same database behind the same httpd proxy and works fine.
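As a quick sanity check (my addition, not from the original report), the library-loading path can be exercised from a shell running as the same OS user and environment as the httpd workers:
# prints the Oracle client version if the client libraries can be loaded, raises an error otherwise
python3 -c "import cx_Oracle; print(cx_Oracle.clientversion())"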
Q3. Show the directory listing where your Oracle Client libraries are installed (e.g. the Instant Client directory). Is it 64-bit or 32-bit?
Ans. 64-bit
Q4. Show what the PATH environment variable (on Windows) or LD_LIBRARY_PATH (on Linux) is set to?
LD_LIBRARY_PATH=/srv/vol/db/oracle/product/18.0.0/dbhome_1/lib:/lib:/usr/lib
PATH=$ORACLE_HOME/bin:/srv/vol/db/oracle/product/18.0.0/dbhome_1/lib:$PATH
Q5. Show any Oracle environment variables set (e.g. ORACLE_HOME, ORACLE_BASE).
ORACLE_HOME=/srv/vol/db/oracle/product/18.0.0/dbhome_1
TNS_ADMIN=$ORACLE_HOME/network/admin
NLS_LANG=AMERICAN_AMERICA.AL32UTF8
ORACLE_BASE=/srv/vol/db/oracle
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/lib
Any suggestions/help is highly appreciated.
Thank you
I found the problem
So I just removed all the variable declarations from /etc/sysconfig/httpd and checked; the application was still able to access the lib files, so those declarations were now redundant.
Then I undid all the variable declarations made earlier in the .localsh and .localrc files of the OS users, to start from scratch and go step by step to see where it breaks.
So now cx_Oracle was looking for the lib files in the wrong directory:
$ORACLE_HOME/client_1/lib
instead of
$ORACLE_HOME/lib
DPI-1047: Cannot locate a 64-bit Oracle Client library: "$ORACLE_HOME/client_1/lib/libclntsh.so: cannot open shared object file: No such file or directory". See https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html for help
I did not have any subfolder named "client_1" inside dbhome_1
so I just created a symlink client_1 that points to dbhome_1 (still unsure on this, but at least it works :) )
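In command form, that symlink is roughly the following (a sketch, assuming $ORACLE_HOME points at dbhome_1):
# make $ORACLE_HOME/client_1/lib resolve to $ORACLE_HOME/lib
ln -s "$ORACLE_HOME" "$ORACLE_HOME/client_1"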
So now this error was gone, but ORA-01804 was back.
I had read somewhere that this error can be fixed by adding "libociei.so", but I did not have one on my instance, so I generated it with these commands:
mkdir -p $ORACLE_HOME/rdbms/install/instantclient/light
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk igenliboci
Then I just moved this libociei.so file from
$ORACLE_HOME/instantclient to $ORACLE_HOME/lib
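In command form (the source path is as described just above):
mv "$ORACLE_HOME/instantclient/libociei.so" "$ORACLE_HOME/lib/"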
Now there was a new error (so... progress):
ORA-12546 - TNS Permission Denied.
This one was easy to solve.
I used this command to address it:
setsebool -P httpd_can_network_connect on
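To verify that the boolean is now enabled (getsebool ships with the same SELinux tooling as setsebool):
getsebool httpd_can_network_connect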
And... that was all! It worked.
I'm trying to install Magento 2 on localhost and get an error when I try to connect the database.
The error is:
Source class "\Magento\Framework\DB\Adapter\Pdo\Mysql" for "Magento\Framework\DB\Adapter\Pdo\MysqlFactory" generation does not exist.
OS: Linux Mint 19.1 x64.
DB: MySQL.
I created a database (magento) and a user (magento), assigning all privileges.
When I run:
mysql -u magento -p
and then enter its password, I can access it, so all is fine there.
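For reference, the database and grants were created along these lines (a sketch; 'your_password' is a placeholder):
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS magento;"
mysql -u root -p -e "CREATE USER IF NOT EXISTS 'magento'@'localhost' IDENTIFIED BY 'your_password';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON magento.* TO 'magento'@'localhost'; FLUSH PRIVILEGES;"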
The installation path is: /var/www/html/magento2
I'm following this tutorial: https://tecadmin.net/install-magento-on-ubuntu-16-04/
What should I do to solve this?
Solved.
I replaced the Magento "Full Release with Sample Data" package with the "Full Release with NO Sample Data" package, and that solved it.
I'm running an instance of Gitlab Omnibus CE, version 8.15.2, on CentOS 7.3.1611. Upgrading from the 8.14 release family didn't go quite according to plan; since doing that, I've been unable to access the Gitlab browser interface.
When I try to access the browser interface, I can access the login screen and log in, but after I'm logged in, going to any page results in an Error 500: Whoops, something went wrong on our end.
So I used gitlab-ctl tail to grab some log data on what's happening, and it looks like it's a problem with PostgreSQL's data for one of my projects:
http://pastebin.com/VDMk0eKr
But I'm not sure how I should fix this. Any ideas?
It's a known issue that has been fixed in the newest release, 8.15.3. If you don't want to upgrade GitLab, there is an existing workaround (Edit: as mentioned in the comments, the workaround does not always work, so consider upgrading as the primary option).
File:
/opt/gitlab/embedded/service/gitlab-rails/app/models/concerns/has_status.rb
Replace
builds = scope.select('count(*)').to_sql
created = scope.created.select('count(*)').to_sql
success = scope.success.select('count(*)').to_sql
pending = scope.pending.select('count(*)').to_sql
running = scope.running.select('count(*)').to_sql
skipped = scope.skipped.select('count(*)').to_sql
canceled = scope.canceled.select('count(*)').to_sql
with
builds = scope.select('count(*)').reorder(nil).to_sql
created = scope.created.select('count(*)').reorder(nil).to_sql
success = scope.success.select('count(*)').reorder(nil).to_sql
pending = scope.pending.select('count(*)').reorder(nil).to_sql
running = scope.running.select('count(*)').reorder(nil).to_sql
skipped = scope.skipped.select('count(*)').reorder(nil).to_sql
canceled = scope.canceled.select('count(*)').reorder(nil).to_sql
And restart GitLab.
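On an Omnibus install that is typically:
sudo gitlab-ctl restart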
I had the same issue and the above didn't work so I ran the following command to downgrade.
To check the current version installed:
sudo dpkg -l | grep gitlab-ce
To see which versions were available:
sudo apt-cache madison gitlab-ce | less
and the following to "downgrade", since the above command showed I was at 9.2.0-rc2.ce.0:
sudo apt-get install gitlab-ce=9.2.0-rc1.ce.0
Ok, so I'm trying to configure and install svnserve on my Ubuntu server. So far so good, up to the point where I try to configure sasl (to prevent plain-text passwords).
So: I installed svnserve and made it run as a daemon (I also installed it as a startup script with the command svnserve -d -r /var/svn).
My repository is in /var/svn and has the following configuration (found in /var/svn/myrepo/conf/svnserve.conf; I left comments out):
[general]
anon-access = none
auth-access = write
realm = my_repo
[sasl]
use-sasl = true
min-encryption = 128
max-encryption = 256
Over to sasl, I created a svn.conf file in /usr/lib/sasl2/:
pwcheck_method: auxprop
auxprop_plugin: sasldb
sasldb_path: /etc/my_sasldb
mech_list: DIGEST-MD5
I created it in that folder because the article at this link suggested it: http://svnbook.red-bean.com/nightly/en/svn.serverconfig.svnserve.html#svn.serverconfig.svnserve.sasl (and also because the folder already existed and was listed as a result when I executed locate sasl).
Right after that I executed this command:
saslpasswd2 -c -f /etc/my_sasldb -u my_repo USERNAME
Which also asked me for a password twice, which I supplied. All going great.
When issuing the following command:
sasldblistusers2 -f /etc/my_sasldb
I get the - correct, as far as I can see - result:
USERNAME#my_repo: userPassword
Restarted svnserve, also restarted the whole server, and tried to connect.
This was the result from my TortoiseSVN client:
Authentication error from server: SASL(-13): user not found: unable to canonify
user and get auxprops
I have no clue at all what I'm doing wrong. I've been scouring the web for the past few hours, but haven't found anything except the suggestion that I might need to move the svn.conf file to another location - for example, the install location of Subversion itself. which svn gives /usr/bin/svn, so I moved svn.conf to /usr/bin (although that doesn't feel right to me).
Still doesn't work, even after a new reboot.
I'm running out of ideas. Anyone else?
EDIT
I tried changing this (according to what some other forums on the internet told me to do): in the file /etc/default/saslauthd, I changed
START=no
MECHANISMS="pam"
to
START=yes
MECHANISMS="sasldb"
(Actually I had already changed START=no to START=yes before, but I forgot to mention it). But still no luck (I did reboot the whole server).
It looks like svnserve uses default values for SASL...
Check that /etc/sasl2/svn.conf is readable by the owner of the svnserve process.
If /etc/sasl2/svn.conf is owned by user root, group root, with mode -rw------- (0600), svnserve silently falls back to the default values.
You will not be warned by any log file entry.
See section 4 of https://svn.apache.org/repos/asf/subversion/trunk/notes/sasl.txt:
This file must be named svn.conf, and must be readable by the svnserve process.
(It took me more than three days to understand both the svnserve-SASL-LDAP setup and this pitfall at the same time.)
I recommend installing the package cyrus-sasl2-doc and reading the section "Cyrus SASL for System Administrators" carefully.
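A minimal check and fix, assuming svnserve runs as an unprivileged user (the mode below is only a suggestion):
ls -l /etc/sasl2/svn.conf
# make the file readable by the svnserve process owner, e.g. world-readable
chmod 644 /etc/sasl2/svn.conf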
I expect this is caused by how the SASL API is used in this call:
result = sasl_server_new(SVN_RA_SVN_SASL_NAME,
                         hostname, b->realm,
                         localaddrport, remoteaddrport,
                         NULL, SASL_SUCCESS_DATA,
                         &sasl_ctx);
if (result != SASL_OK)
  {
    svn_error_t *err = svn_error_create(SVN_ERR_RA_NOT_AUTHORIZED, NULL,
                                        sasl_errstring(result, NULL, NULL));
    SVN_ERR(write_failure(conn, pool, &err));
    return svn_ra_svn__flush(conn, pool);
  }
As you can see, svnserve does not handle this kind of access failure specifically; only OK or a generic error is expected...
I looked in /var/log/messages and found
localhost svnserve: unable to open Berkeley db /etc/sasldb2: No such file or directory
When I created the sasldb at that path and got the permissions right, it worked. It looks like svnserve ignores, or does not use, the configured sasldb_path.
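Roughly what that amounted to (the realm comes from the question above; the ownership and mode are assumptions to adjust to your setup):
# create the user in the database svnserve actually opens
saslpasswd2 -c -f /etc/sasldb2 -u my_repo USERNAME
# let the svnserve process owner read it
chown svn:svn /etc/sasldb2
chmod 640 /etc/sasldb2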
There was another suggestion that rebooting solved the problem but that option was not available to me.
Using MAMP/phpMyAdmin on Mac OS X Lion, I'm trying to install MODx on a virtual host. During the process, I encounter this:
I've been looking around and have not yet found anyone with the same problem. The file it claims is missing does in fact exist at that location. Attaching my database setup as well in case it helps. I'd greatly appreciate any help with this, as databases/virtual hosts are very much not my forte.
Is your login and password correct? MAMP by default uses root for each of these. Try that out.
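One quick way to test those defaults from a terminal (the path assumes a standard MAMP install):
/Applications/MAMP/Library/bin/mysql -u root -proot -e "SELECT 1;"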
Sounds like a configuration problem.
http://rtfm.modx.com/display/revolution20/Troubleshooting+Installation
If it's not a connection error (which will be found below), I'd suspect that PDO is not installed or active, or that caching (eAccelerator, APC, etc.) is interfering.
This is from the MODx site:
1)
Make sure eAccelerator is disabled during install. eAccelerator can cause problems when doing the heavy lifting during the install process.
2)
PDO Error Messages
If you are getting PDO-related error messages during install, before proceeding to the specific error messages below, please confirm that your PDO configuration is set up correctly. You can do so by running this code (replace user/password/database/host with your setup):
<?php
/* Test a PDO connection to MySQL (replace the credentials with your own) */
$dsn = 'mysql:dbname=testdb;host=localhost';
$user = 'dbuser';
$password = 'dbpass';
try {
    $dbh = new PDO($dsn, $user, $password);
} catch (PDOException $e) {
    echo 'Connection failed: ' . $e->getMessage();
}
?>
If this fails, then your PDO setup is not configured correctly.
As it turns out, I had forgotten to turn off Web Sharing, so somewhere along the way there was a conflict between the two Apache configurations.