How to use GitLab and GitLab Mattermost on one machine?

My conf-file:
external_url "http://192.168.3.23" # note the use of a dotted ip
gitlab_rails['gitlab_email_enabled'] = true
gitlab_rails['gitlab_email_from'] = 'gitlab@myhome.com'
gitlab_rails['gitlab_email_display_name'] = 'gitlab'
#gitlab_rails['gitlab_email_reply_to'] = 'gitlab@myhome.com'
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "mail.home"
gitlab_rails['smtp_port'] = 25
gitlab_rails['smtp_domain'] = "myhome.com"
mattermost_external_url 'http://192.168.3.23'
mattermost['gitlab_enable'] = true
mattermost['gitlab_secret'] = "4d1e<***>bdbfe"
mattermost['gitlab_id'] = "1c441<***>092df"
mattermost['gitlab_scope'] = ""
mattermost['gitlab_auth_endpoint'] = "http://192.168.3.23/oauth/authorize"
mattermost['gitlab_token_endpoint'] = "http://192.168.3.23/oauth/token"
mattermost['gitlab_user_api_endpoint'] = "http://192.168.3.23/api/v3/user"
# Shut down GitLab services on the Mattermost server
#gitlab_rails['enable'] = false
But now only GitLab loads at 192.168.3.23 (GitLab Community Edition 8.4.4 9c31cc6).
How do I get GitLab and Mattermost working together?

You need to use different URLs for GitLab and Mattermost:
external_url "http://192.168.3.23"
...
mattermost_external_url "http://192.168.3.23:8065"
That solved it.
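After editing /etc/gitlab/gitlab.rb (assuming the Omnibus package, which these settings suggest), apply the change with:
sudo gitlab-ctl reconfigure
Mattermost should then answer on port 8065 while GitLab keeps answering on port 80.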

Related

ambari-agent restart causes ansible crash

We have a big-data Hadoop cluster based on Hortonworks HDP 2.6.4 and Ambari 2.6.1.
All machines run RHEL 7.2.
The cluster has more than 540 machines, and every machine runs an ambari-agent that communicates with the Ambari server (the Ambari server is installed on only one machine).
Until we started using ansible, everything was fine when we did an ambari-agent upgrade and ambari-agent restart.
But recently we started using ansible (ansible-playbook) to automate the installation, and ansible runs on all machines.
Now, when a task does the ambari-agent restart, we immediately see that the ansible execution is stopped and killed.
After some investigation we saw that the ambari agent uses the following ports:
url_port = 8440
secured_url_port = 8441
ping_port = 8670
But we do not see any ansible process using the ports above, so we do not think that is related.
Still, the basic issue is clear:
when an ansible task runs ambari-agent restart on a remote machine, it causes ansible to be interrupted and killed.
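For illustration, the restart task looks roughly like this (a hypothetical sketch, not our exact playbook), together with an async variant we are considering so the restart cannot take the ansible run down with it:
- name: restart ambari-agent (current, blocking form)
  shell: ambari-agent restart

- name: restart ambari-agent detached (possible workaround)
  shell: ambari-agent restart
  async: 120   # illustrative timeout in seconds
  poll: 0      # do not wait for the restart to finish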
The ambari-agent configuration looks like this:
[server]
hostname = datanode02.gtfactory.com
url_port = 8440
secured_url_port = 8441
connect_retry_delay = 10
max_reconnect_retry_delay = 30
[agent]
logdir = /var/log/ambari-agent
piddir = /var/run/ambari-agent
prefix = /var/lib/ambari-agent/data
loglevel = INFO
data_cleanup_interval = 86400
data_cleanup_max_age = 2592000
data_cleanup_max_size_mb = 100
ping_port = 8670
cache_dir = /var/lib/ambari-agent/cache
tolerate_download_failures = true
run_as_user = root
parallel_execution = 0
alert_grace_period = 5
status_command_timeout = 5
alert_kinit_timeout = 14400000
system_resource_overrides = /etc/resource_overrides
[security]
keysdir = /var/lib/ambari-agent/keys
server_crt = ca.crt
passphrase_env_var_name = AMBARI_PASSPHRASE
ssl_verify_cert = 0
credential_lib_dir = /var/lib/ambari-agent/cred/lib
credential_conf_dir = /var/lib/ambari-agent/cred/conf
credential_shell_cmd = org.apache.hadoop.security.alias.CredentialShell
[network]
use_system_proxy_settings = true
[services]
pidlookuppath = /var/run/
[heartbeat]
state_interval_seconds = 60
dirs = /etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,/etc/sqoop,/var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,/var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
log_lines_count = 300
idle_interval_min = 1
idle_interval_max = 10
[logging]
syslog_enabled = 0
For now we are thinking about the following:
maybe ansible crashes because TLSv1 is restricted (Transport Layer Security); by default the ambari-agent connects with TLSv1.
So we are considering setting force_https_protocol=PROTOCOL_TLSv1_2 in the ambari agent configuration, but this is only an assumption.
Could this suggestion and the new conf below help?
[security]
force_https_protocol=PROTOCOL_TLSv1_2 <------ the new update
keysdir = /var/lib/ambari-agent/keys
server_crt = ca.crt
passphrase_env_var_name = AMBARI_PASSPHRASE
ssl_verify_cert = 0
credential_lib_dir = /var/lib/ambari-agent/cred/lib
credential_conf_dir = /var/lib/ambari-agent/cred/conf
credential_shell_cmd = org.apache.hadoop.security.alias.CredentialShell
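To test the TLS assumption, the protocol versions accepted on the ambari server port can be checked with openssl (hostname and port taken from the agent config above; adjust as needed):
openssl s_client -connect datanode02.gtfactory.com:8440 -tls1_2 < /dev/null
openssl s_client -connect datanode02.gtfactory.com:8440 -tls1 < /dev/null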

InfluxDB write failure with the node-influx library

I am writing data to influxdb using the node-influx library.
https://github.com/node-influx/node-influx
It writes about 500,000 records and then I start seeing this error, after which there are no more writes. It looks like a DNS issue, but I am running it inside a Docker container on an Ubuntu 18.04 host.
Error: getaddrinfo EAGAIN influxdb influxdb:8086
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:56:26)
I have the logging level set to debug but I am not seeing any other errors. Any idea what might be causing this?
Update
Tried with a different InfluxDB version.
Increased the ulimit of the host.
Used the IP address of the docker container instead of the service name: no error is thrown, but the writes still stop silently after some time.
Tried to call the write API with curl:
curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary 'snmp,hostipv4=172.16.102.82,oidname=cpu_idle,site=gotham value=1000 1574751020815819489'
This works and a record is inserted in the DB.
Update2
It seems to be a DNS issue on the Docker network. I am not able to ping the influxdb container from the worker container. The writes are not reaching InfluxDB.
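For example, name resolution can be checked from inside the worker container like this (the container name is illustrative; ping/getent must exist in the image):
docker exec -it my-worker-container getent hosts influxdb
docker exec -it my-worker-container ping -c 1 influxdb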
Update3
As a workaround for now, I am forcing a process.exit(1) when the error is caught on my worker, and using the docker-compose restart: on-failure policy to restart the service. This resumes the writes.
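For reference, the relevant docker-compose fragment looks roughly like this (image names are illustrative; the influxdb service name matches the hostname in the error above):
services:
  worker:
    image: my-worker:latest      # illustrative image
    restart: on-failure
    depends_on:
      - influxdb
  influxdb:
    image: influxdb:1.7          # illustrative tag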
The retention policy on the DB is set to 2 days.
influxdb.conf
reporting-disabled = true
[meta]
dir = "/var/lib/influxdb/meta"
retention-autocreate = false
logging-enabled = true
[logging]
format = "auto"
level = "debug"
[data]
engine = "tsm1"
dir = "/var/lib/influxdb/data"
wal-dir = "/var/lib/influxdb/wal"
wal-fsync-delay = "200ms"
index-version = "inmem"
wal-logging-enabled = true
query-log-enabled = true
cache-max-memory-size = "2g"
cache-snapshot-memory-size = "256m"
cache-snapshot-write-cold-duration = "20m"
compact-full-write-cold-duration = "24h"
max-concurrent-compactions = 0
compact-throughput = "48m"
max-points-per-block = 0
max-series-per-database = 1000000
trace-logging-enabled = false
[coordinator]
write-timeout = "10s"
max-concurrent-queries = 0
query-timeout = "0s"
log-queries-after = "0s"
max-select-point = 0
max-select-series = 0
max-select-buckets = 0
[retention]
enabled = true
check-interval = "30m0s"
[shard-precreation]
enabled = true
check-interval = "10m0s"
advance-period = "30m0s"
[monitor]
store-enabled = true
store-database = "_internal"
store-interval = "10s"
[http]
enabled = true
bind-address = ":8086"
auth-enabled = false
log-enabled = true
max-concurrent-write-limit = 0
max-enqueued-write-limit = 0
enqueued-write-timeout = 0
[continuous_queries]
enabled = false
log-enabled = true
run-interval = "10s"

Creating a VMware HA cluster using pyvmomi throws exception

I get an error while creating a VMware HA cluster using the pyvmomi library. I'm following the official pyvmomi documentation (https://vdc-download.vmware.com/vmwb-repository/dcr-public/6b586ed2-655c-49d9-9029-bc416323cb22/fa0b429a-a695-4c11-b7d2-2cbc284049dc/doc/index.html) for this.
I'm able to create a normal cluster without HA enabled if I don't set the ha_spec, i.e. if I comment out lines 2, 3, 4, and 5 in the following code.
Here is my piece of code:
cluster_spec = vim.cluster.ConfigSpecEx()
ha_spec = vim.cluster.DasConfigInfo()
ha_spec.enabled = True
ha_spec.hostMonitoring = vim.cluster.DasConfigInfo.ServiceState.enabled
cluster_spec.dasConfig = ha_spec
cluster = host_folder.CreateClusterEx(name=cluster_name, spec=cluster_spec)
The error it throws is:
InvalidArgument: (vmodl.fault.InvalidArgument) {
dynamicType = <unset>,
dynamicProperty = (vmodl.DynamicProperty) [],
msg = 'A specified parameter was not correct: ',
faultCause = <unset>,
faultMessage = (vmodl.LocalizableMessage) [],
invalidProperty = <unset>
}
I'm using Python 3.7, pyvmomi 6.7.3 and ESXi 6.5.
Does anybody know if that's the right way of doing it?
Thank you.
I had the same issue. This is my solution:
cluster_spec = vim.cluster.ConfigSpecEx()
ha_spec = vim.cluster.DasConfigInfo()
ha_spec.enabled = True
ha_spec.hostMonitoring = vim.cluster.DasConfigInfo.ServiceState.enabled
# The docs say this field is optional, but it is actually required.
ha_spec.failoverLevel = 1
cluster_spec.dasConfig = ha_spec
cluster = host_folder.CreateClusterEx(name=cluster_name, spec=cluster_spec)
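For completeness, a minimal sketch of how host_folder and cluster_name can be obtained before calling CreateClusterEx (the vCenter address, credentials and datacenter lookup are illustrative assumptions, not part of the original question):
import ssl
from pyVim.connect import SmartConnect

# Illustrative connection details; replace with your own vCenter values.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

# Assumes the target datacenter is the first child of the root folder.
datacenter = content.rootFolder.childEntity[0]
host_folder = datacenter.hostFolder
cluster_name = "ha-cluster-01"  # illustrative name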

Deprecation of modulepath in Puppet's implicit environments?

Since modulepath is now deprecated in an implicit scope (i.e. [main]), how can I keep the modulepath for each of my environments? I would like to keep my modules in a global scope without having to specify an environment.
[main]
certname = {somecert}
dns_alt_names = puppet,{otherorgs}
vardir = /var/opt/lib/pe-puppet
logdir = /var/log/pe-puppet
rundir = /var/run/pe-puppet
modulepath = $confdir/environments/$environment/modules:/opt/puppet/share/puppet/modules
manifest = /etc/puppetlabs/puppet/environments/$environment/manifests/site.pp
server = {puppetserver}
user = pe-puppet
group = pe-puppet
archive_files = true
archive_file_server = {puppetserver}
[master]
certname = {puppetserver}
ca_name = 'Puppet CA generated on {puppetserver} at 2013-11-07 13:15:40 -0800'
reports = puppetdb,cimlog,console
node_terminus = console
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
storeconfigs_backend = puppetdb
storeconfigs = true
manifest=$confdir/environments/$environment/manifests/site.pp
[agent]
report = true
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
graph = true
pluginsync = true
environment = production
http_compression = true
splaylimit=1800
configtimeout=480
splay=true
It's not perfectly clear what you are asking here, but I shall assume that you care about retaining the ability to keep a global set of modules in /opt/puppet/share/puppet/modules that is independent of the respective environment.
Puppet allows this through the basemodulepath option.
[main]
environmentpath=$confdir/environments
basemodulepath=/opt/puppet/share/puppet/modules
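With directory environments in place, per-environment modules can then be layered on top of the global set through each environment's environment.conf (the path shown is illustrative):
# /etc/puppetlabs/puppet/environments/production/environment.conf
modulepath = modules:$basemodulepath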

DEBUG: Routing failed, re-queued. Kannel

Hi, I am trying to set up Kannel 1.4.3 for sending and receiving SMS, but I'm getting a "Routing failed" error:
ERROR: AT2[Huawei-E220-00]: Couldn't connect (retrying in 10 seconds).
Message in the browser:
3: Queued for later delivery
Details:
Modem - Huawei e220
Sim - AT&T
OS - Ubuntu 14.04 LTS
Please let me know if there is anything wrong in the following smskannel.conf:
#---------------------------------------------
# CORE
#
group = core
admin-port = 13000
smsbox-port = 13001
admin-password = bar
#status-password = foo
box-deny-ip = "*.*.*.*"
box-allow-ip = "127.0.0.1"
#unified-prefix = "+358,00358,0;+,00"
#---------------------------------------------
# SMSC CONNECTIONS
#
group = smsc
smsc = at
smsc-id = Huawei-E220-00
port = 10000
modemtype = huawei_e220_00
device = /dev/ttyUSB0
sms-center = +13123149810
my-number = +1xxxxxxxxxx
connect-allow-ip = 127.0.0.1
sim-buffering = true
keepalive = 5
#---------------------------------------------
# SMSBOX SETUP
#
group = smsbox
bearerbox-host = 127.0.0.1
sendsms-port = 13013
global-sender = 13013
#---------------------------------------------
# SEND-SMS USERS
#
group = sendsms-user
username = tester
password = foobar
#---------------------------------------------
# SERVICES
group = sms-service
keyword = nop
text = "You asked nothing and I did it!"
group = sms-service
keyword = default
text = "No service specified"
group = sms-service
keyword = complex
catch-all = yes
accept-x-kannel-headers = true
max-messages = 3
concatenation = true
get-url = "http://127.0.0.1:13013/cgi-bin/sendsms?username=tester&password=foobar&to=+16782304782&text=Hello World"
#---------------------------------------------
# MODEMS
#
group = modems
id = huawei_e220_00
name = "Huawei E220"
detect-string = "huawei"
init-string = "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"
message-storage = "SM"
need-sleep = true
speed = 460800
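Once bearerbox and smsbox are up, the setup can be exercised with the sendsms HTTP interface and the admin status page (ports, passwords and the destination number are the ones from the config above):
curl "http://127.0.0.1:13013/cgi-bin/sendsms?username=tester&password=foobar&to=%2B16782304782&text=test"
curl "http://127.0.0.1:13000/status?password=bar"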

Resources