"NOAUTH Authentication required" Gitlab error with Azure cache for Redis - azure

I have created an Azure Cache for Redis and I am trying to use it as the external Redis for GitLab.
My gitlab.rb is this:
#external_url "https://ci.example.com"
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = "/etc/gitlab/ssl/ci.example.com.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/ci.example.com.key"
### The duration in seconds to keep backups before they are allowed to be deleted
gitlab_rails['backup_keep_time'] = 604800
### External postgres settings
postgresql['enable'] = false
gitlab_rails['db_adapter'] = "postgresql"
gitlab_rails['db_encoding'] = "unicode"
gitlab_rails['db_database'] = "cisomething"
# username string for AWS
# gitlab_rails['db_username'] = "gitlab"
# username string for Azure
gitlab_rails['db_username'] = "gitlab#ci-something.postgres.database.azure.com"
gitlab_rails['db_password'] = "really long password"
gitlab_rails['db_host'] = "ci-something.postgres.database.azure.com"
gitlab_rails['db_port'] = 5432
gitlab_rails['auto_migrate'] = false
### External redis settings
redis['enable'] = false
gitlab_rails['redis_host'] = "ci.redis.cache.windows.net"
gitlab_rails['redis_port'] = 6379
gitlab_rails['redis_password'] = "azure-redis-primary-access-key"
### Whitelist VPC cidr for access to health checks
gitlab_rails['monitoring_whitelist'] = ['XX.XXX.X.X/24']
### Default Theme
gitlab_rails['gitlab_default_theme'] = 2
### Enable or disable automatic database migrations
gitlab_rails['auto_migrate'] = false
### GitLab email server settings
... other settings here
I can connect to Redis with redis-cli
redis-cli -h ci.redis.cache.windows.net -p 6379 -a azure-redis-primary-access-key
and execute commands.
When I execute gitlab-ctl tail I see this error:
==> /var/log/gitlab/gitlab-workhorse/current <==
{"error":"keywatcher: pubsub receive: NOAUTH Authentication required.","level":"error","msg":"unknown error","time":"2020-02-21T10:26:08Z"}
{"address":"ci.redis.cache.windows.net","level":"info","msg":"redis: dialing","scheme":"redis","time":"2020-02-21T10:26:08Z"}
(the two lines above repeat continuously)
I have searched the internet but cannot find anything that resolves this.
System information
System: Ubuntu 16.04
Current User: git
Using RVM: no
Ruby Version: 2.6.5p114
Gem Version: 2.7.10
Bundler Version: 1.17.3
Rake Version: 12.3.3
Redis Version: 5.0.7
Git Version: 2.24.1
Sidekiq Version: 5.2.7
GitLab information
Version: 12.7.6
Revision: 61654d25b20
Directory: /opt/gitlab/embedded/service/gitlab-rails
DB Adapter: PostgreSQL
DB Version: 9.5.20

So, I figured it out.
For future reference, here it is.
gitlab.rb
### External redis settings
redis['enable'] = false
gitlab_rails['redis_host'] = "ci.redis.cache.windows.net"
gitlab_rails['redis_port'] = 6380
gitlab_rails['redis_password'] = "azure-primary-access-key"
gitlab_rails['redis_ssl'] = true
Azure Cache for Redis configuration (Azure portal screenshot)
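Before reconfiguring, you can confirm that the TLS endpoint on port 6380 actually answers; a quick check with openssl, using the same host name as above:
openssl s_client -connect ci.redis.cache.windows.net:6380 </dev/null
If the handshake completes and a certificate chain is printed, GitLab should be able to reach Redis over SSL on that port.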
Final note:
When deploying the GitLab VM, check the logs with gitlab-ctl tail. If you still see the default Redis port 6379 in the logs, it means Sidekiq is running with the old configuration, which, as I observed, is not updated by gitlab-ctl reconfigure. Delete the VM and redeploy it.
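If you want to confirm whether Sidekiq is still reading the old settings before redeploying, you can inspect the Redis configuration that Omnibus generated for the Rails/Sidekiq side; the resque.yml path below is an assumption based on the Omnibus layout of that era:
# check which Redis URL the Rails/Sidekiq processes were generated with
sudo grep -n redis /var/opt/gitlab/gitlab-rails/etc/resque.yml
# after fixing gitlab.rb, reconfigure and restart Sidekiq explicitly
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart sidekiq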

Related

Connecting to postgresql from python 3, running in Cloud Shell: password authentication failed

I am trying to run, locally (from the GCP Cloud Shell terminal), a Python 3 tutorial program that connects to my PostgreSQL database.
I run the proxy, as suggested in the source:
./cloud_sql_proxy -instances=xxxxxxxx:us-central1:testpg=tcp:5432
It works; I can connect to it with:
psql "host=127.0.0.1 sslmode=disable dbname=guestbook user=postgres"
Unfortunately, when I try to connect from Python:
cnx = psycopg2.connect(dbname=db_name, user=db_user,
                       password=db_password, host=host)
(host is 127.0.0.1, as I run it locally) I get this error:
psycopg2.OperationalError: connection to server at "127.0.0.1", port 5432 failed: FATAL: password authentication failed for user "postgres"
I can't figure out what I'm missing.
Thanks in advance ...
I'd recommend using the Cloud SQL Python Connector to manage your connections; best of all, you won't need to worry about running the proxy manually. It supports the pg8000 PostgreSQL driver and can run from Cloud Shell.
Here is an example code snippet showing how to use it:
from google.cloud.sql.connector import connector
import sqlalchemy

# configure Cloud SQL Python Connector properties
def getconn():
    conn = connector.connect(
        "xxxxxxxx:us-central1:testpg",
        "pg8000",
        user="YOUR_USER",
        password="YOUR_PASSWORD",
        db="YOUR_DB"
    )
    return conn

# create connection pool to re-use connections
pool = sqlalchemy.create_engine(
    "postgresql+pg8000://",
    creator=getconn,
)

# query or insert into Cloud SQL database
with pool.connect() as db_conn:
    # query database
    result = db_conn.execute("SELECT * from my_table").fetchall()
    # Do something with the results
    for row in result:
        print(row)
For more detailed examples refer to the README of the repository.

node-ansible fails to connect to the host when multiple connections coexist

I have an API server which may trigger multiple node-ansible runs simultaneously to connect to a remote machine and do something.
Here's the node.js code:
// app.js
const Ansible = require('node-ansible')

let ansibleNum = 10
for (let i = 0; i < ansibleNum; i += 1) {
  let command = new Ansible.Playbook().playbook('test')
  command.inventory('hosts')
  command.exec()
    .then(successResult => {
      console.log(successResult)
    })
    .catch(err => {
      console.log(err)
    })
}
And the ansible playbook:
# test.yml
---
- hosts: all
  remote_user: ubuntu
  become: true
  tasks:
    - name: Test Ansible
      shell: echo hello
      register: result # store the result into a variable called "result"
    - debug: var=result.stdout_lines
As ansibleNum increases, the probability that the Ansible playbook fails also increases.
The failure message is:
fatal: [10.50.123.123]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to 10.50.123.123 closed.\r\n", "unreachable": true}
I've read another similar question here, but the solutions it provides don't work in my case.
Another way to trigger the problem is by executing
ansible-playbook -i hosts test.yml & ansible-playbook -i hosts test.yml.
This command runs ansible without node.js.
I've pushed the code to github. You can download it directly.
Anyone knows why the shared connection got closed?
I've set the ControlMaster argument to auto by following the documentation here.
It's strange that setting the connection type to paramiko solves my problem.
Here's the config file located in ~/.ansible.cfg:
[defaults]
transport = paramiko
Based on this document, it seems that paramiko doesn't support persistent connection.
I'm still confused about why this setting solves my problem.
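If switching the transport to paramiko is not an option, the usual OpenSSH-side tuning is to enable connection persistence and pipelining in ansible.cfg; these are standard Ansible settings, though I have not verified that they fix this particular case:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
pipelining = True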

Unable to connect GitLab with Mailgun

I am unable to send emails from GitLab. I am using the Mailgun service; below are my settings:
sudo vim /etc/gitlab/gitlab.rb
Settings:
gitlab_rails['gitlab_email_from'] = "username@domain.com"
gitlab_rails['gitlab_email_reply_to'] = "username@domain.com"
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.mailgun.org"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_authentication'] = "plain"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_user_name'] = "username@domain.com"
gitlab_rails['smtp_password'] = "secret"
gitlab_rails['smtp_domain'] = "domain.com"
Reconfigure and restart:
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart
Does anyone have any idea how to solve this? Thanks.
All new DigitalOcean accounts have a block on sending email. To remove the block, you need to open a ticket and request that it be lifted.
To curb a recent increase in abuse and SPAM, we have an initial SMTP block on new accounts created in certain contexts.
By DigitalOcean.
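Once the SMTP block is lifted, delivery can be verified from the GitLab Rails console (this uses GitLab's documented test helper; the address is a placeholder):
sudo gitlab-rails console
# inside the console, send a test message through the configured SMTP settings
Notify.test_email('you@example.com', 'GitLab SMTP test', 'It works!').deliver_now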

Monitor azure with nagios and odbc-freetds

I want to monitor an Azure PaaS database with Nagios. I'm using the plugin available at https://github.com/MsOpenTech/WaMo
When I try to check the database:
./check_azure_sql.py -u -p -d -k top5queries
I get this error message:
('08001', '[08001] [unixODBC][FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)')
Error connecting to database
All dependencies are installed (listed on the GitHub plugin site).
Here you can see my /etc/odbcinst.ini:
[ODBC]
Trace = Yes
TraceFile = /tmp/odbc.log
[FreeTDS]
Description = ODBC For TDS
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
UsageCount = 1
Here you can see my /etc/freetds/freetds.conf:
# $Id: freetds.conf,v 1.12 2007/12/25 06:02:36 jklowden Exp $
#
# This file is installed by FreeTDS if no file by the same
# name is found in the installation directory.
#
# For information about the layout of this file and its settings,
# see the freetds.conf manpage "man freetds.conf".
# Global settings are overridden by those in a database
# server specific section
[global]
# TDS protocol version
; tds version = 4.2
# Whether to write a TDSDUMP file for diagnostic purposes
# (setting this to /tmp is insecure on a multi-user system)
; dump file = /tmp/freetds.log
; debug flags = 0xffff
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field.
# Try setting 'text size' to a more reasonable limit
text size = 64512
# A typical Sybase server
[egServer50]
host = symachine.domain.com
port = 5000
tds version = 5.0
# A typical Microsoft server
[egServer70]
host = ntmachine.domain.com
port = 1433
tds version = 7.0
And my /etc/odbc.ini is empty.
Does anybody have any idea?
bhagdev, to keep it simple: I'm trying to monitor an Azure PaaS SQL database with Nagios.
It's not me who wrote the plugin available at github.com/MsOpenTech/WaMo. As a Nagios admin, I only need to execute the command ./check_azure_sql.py -u (username) -p (password) -d (database) -k (key) (check_azure_sql.py is written in Python) from the Debian Linux CLI.
So when I execute the command above I get the error message:
('08001', '[08001] [unixODBC][FreeTDS][SQL Server]Unable to connect to data source (0) (SQLDriverConnect)') Error connecting to database.
Thanks for your help, guys.
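For what it's worth, FreeTDS generally needs an explicit server section with a modern TDS protocol version to reach an Azure SQL endpoint; a sketch of what that could look like in /etc/freetds/freetds.conf (the server name is a placeholder, and I have not verified this against the WaMo plugin):
[azuresql]
    host = yourserver.database.windows.net
    port = 1433
    tds version = 7.2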

failure to push to heroku on git deploying node server

I'm using Heroku for testing a server. I have followed other examples about this, but I keep getting the same error, which I don't see reported anywhere. I have even tried deleting my repo and my Heroku app and starting again, but I get the same error when using
git push heroku master.
$ git push heroku master
fatal: unable to access 'https://git.heroku.com/football-app-development.git/': Could not resolve host: git.heroku.com
$ cat .git/config
[core]
    repositoryformatversion = 0
    filemode = false
    bare = false
    logallrefupdates = true
    symlinks = false
    ignorecase = true
    hideDotFiles = dotGitOnly
[remote "heroku"]
    url = https://git.heroku.com/football-app-development.git
    fetch = +refs/heads/*:refs/remotes/heroku/*
Any help would be great!
If anyone else has this issue: I got it to work by uninstalling Git and the Heroku toolbelt and installing both again.
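Since the underlying error is a DNS failure ("Could not resolve host"), it is also worth checking name resolution and the remote directly before reinstalling anything; both are standard commands:
nslookup git.heroku.com
git ls-remote heroku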
