Log in to Docker registry located in GitLab - gitlab

I created a Docker registry and want to connect it to GitLab. I followed this documentation: https://docs.gitlab.com/ce/user/project/container_registry.html. After that I tried to log in to Docker, but I received a 401 or Access denied. Do you know how to fix this?
docker login <url>
Username: gitlab-ci-token
Password:
https://<url>/v2/: unauthorized: HTTP Basic: Access denied
docker login <url>
Username: knikolov
Password:
https://<url>/v2/: unauthorized: HTTP Basic: Access denied
docker login <url>
Username: knikolov
Password:
Error response from daemon: login attempt to https://<url>/v2/ failed with status: 401 Unauthorized
production.log
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:42:51 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:42:54 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:42:57 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:00 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:03 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:06 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:09 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:12 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:15 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:18 +0000
Started GET "/jwt/auth?account=knikolov&client_id=docker&offline_token=true&service=container_registry" for 172.17.0.1 at 2017-06-22 14:43:19 +0000
Processing by JwtController#auth as HTML
Parameters: {"account"=>"knikolov", "client_id"=>"docker", "offline_token"=>"true", "service"=>"container_registry"}
Completed 200 OK in 191ms (Views: 0.5ms | ActiveRecord: 5.7ms)
Started GET "/admin/logs" for 172.17.0.1 at 2017-06-22 14:43:21 +0000
Processing by Admin::LogsController#show as HTML
From the registry log I received:
registry_1 | time="2017-06-25T17:34:31Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.7.3 http.request.host=<url> http.request.id=e088c13e-aa4c-4701-af26-29e12874519b http.request.method=GET http.request.remoteaddr=37.59.24.105 http.request.uri="/v2/" http.request.useragent="docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))" instance.id=c8d463e0-cf04-48f5-8daa-d096b4e75494 version=v2.6.1
registry_1 | 172.17.0.1 - - [25/Jun/2017:17:34:31 +0000] "GET /v2/ HTTP/1.0" 401 87 "" "docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))"
registry_1 | time="2017-06-25T17:34:32Z" level=info msg="token from untrusted issuer: \"omnibus-gitlab-issuer\""
registry_1 | time="2017-06-25T17:34:32Z" level=warning msg="error authorizing context: invalid token" go.version=go1.7.3 http.request.host=<url> http.request.id=ff0d15e4-3198-4d69-910b-50bc27dd02f2 http.request.method=GET http.request.remoteaddr=37.59.24.105 http.request.uri="/v2/" http.request.useragent="docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))" instance.id=c8d463e0-cf04-48f5-8daa-d096b4e75494 version=v2.6.1
registry_1 | 172.17.0.1 - - [25/Jun/2017:17:34:32 +0000] "GET /v2/ HTTP/1.0" 401 87 "" "docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))"
This is my config for my registry:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
auth:
  token:
    realm: https://<url>/jwt/auth
    service: container_registry
    issuer: gitlab-issuer
    rootcertbundle: /certs/registry.crt
docker-compose.yml
registry:
  restart: always
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    - REGISTRY_STORAGE_DELETE_ENABLED=true
  volumes:
    - ./data:/var/lib/registry
    - ./certs:/certs
    - ./config.yml:/etc/docker/registry/config.yml
GitLab docker-compose.yml
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: '<gitlab_url>'
  container_name: gitlab
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url '<gitlab_url>'
      gitlab_rails['gitlab_shell_ssh_port'] = 2224
      registry_external_url '<docker-registry_url>'
      gitlab_rails['smtp_enable'] = true
      gitlab_rails['smtp_address'] = "172.17.0.1"
      gitlab_rails['smtp_domain'] = "<smtp_domain>"
      gitlab_rails['gitlab_email_from'] = '<gitlab_email_from>'
      gitlab_rails['smtp_enable_starttls_auto'] = false
      gitlab_rails['registry_enabled'] = true
      registry_nginx['ssl_certificate'] = '/etc/gitlab/ssl/docker.registry.crt'
      registry_nginx['ssl_certificate_key'] = '/etc/gitlab/ssl/docker.registry.key'
      registry_nginx['proxy_set_headers'] = {
        "Host" => "<docker-registry_url>"
      }
      nginx['listen_port'] = 80
      nginx['listen_https'] = false
      nginx['proxy_set_headers'] = {
        "X-Forwarded-Proto" => "https",
        "X-Forwarded-Ssl" => "on"
      }
  ports:
    - '127.0.0.1:5432:80'
    - '2224:22'
  volumes:
    - '/home/gitlab/gitlab-ce/config:/etc/gitlab'
    - '/home/gitlab/gitlab-ce/logs:/var/log/gitlab'
    - '/home/gitlab/gitlab-ce/data:/var/opt/gitlab'
    - '/home/docker-registry/data:/var/opt/gitlab/gitlab-rails/shared/registry'

Make sure the .crt and .key files exist at the paths specified in gitlab.rb; if they don't, fix the paths and restart GitLab with sudo gitlab-ctl restart:
external_url 'https://myrepo.xyz.com'
nginx['redirect_http_to_https'] = true
registry_external_url 'https://registry.xyz.com'
registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/registry.xyz.com.crt"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/registry.xyz.com.key"
More details available at - Appychip

It seems you are not using the same RSA key pair for your GitLab registry backend and your Docker setup.
Check your gitlab_rails['registry_key_path'] setting in gitlab.rb and consult this very detailed guide:
https://m42.sh/gitlab-registry.html (unfortunately offline, backup copy here: https://github.com/ipernet/gitlab-docs/blob/master/gitlab-registry.md)
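To check that quickly, you can compare the modulus of the certificate the registry validates tokens against (the rootcertbundle, /certs/registry.crt above) with that of the private key GitLab signs tokens with (registry_key_path). A sketch; /etc/gitlab/ssl/registry.key is an assumed path, substitute your own values:

```shell
# The two digests must match; if they differ, the registry rejects every
# token GitLab issues and docker login fails with 401.
# /etc/gitlab/ssl/registry.key is an assumed path -- use your
# gitlab_rails['registry_key_path'] value.
openssl x509 -noout -modulus -in ./certs/registry.crt | openssl md5
openssl rsa  -noout -modulus -in /etc/gitlab/ssl/registry.key | openssl md5
```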

Make sure that:
- the drive in Docker is shared (if it is not, open the Docker settings and mark it as shared);
- the username matches;
- any domain name is removed from the username.
Try this.


NodeJS converting Docker Redis hostname to localhost

It seems the Redis container hostname is being converted to localhost by NodeJS.
Here are my files:
.env
REDIS_HOST=redis-eventsystem
REDIS_PORT=6379
REDIS_SECRET=secret
index.ts
// there are things above this
let Redis = require('redis');
let client : any = Redis.createClient({
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT,
  legacyMode: true
});
client.on('error', (err : Error) : void => {
  console.error(`Redis connection error: ${err}`);
});
client.on('connect', () : void => {
  console.info(`Redis connection success.`);
});
client.connect();
// there are things below this
docker-compose.yml
version: '3.8'
services:
  eventsystem:
    image: eventsystem
    restart: always
    depends_on:
      - "redis-eventsystem"
    ports:
      - "80:3000"
    networks:
      - eventsystem
  redis-eventsystem:
    image: redis
    command: ["redis-server", "--bind", "redis-eventsystem", "--port", "6379", "--protected-mode", "no"]
    restart: always
    networks:
      - eventsystem
networks:
  eventsystem:
    driver: bridge
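As an aside (a sketch, not the fix the author eventually found): redis-server's --bind option takes listen addresses, and binding to the service's own hostname only works if it resolves at startup. Inside a container it is simpler to listen on all interfaces and let clients reach it through the service name:

```yaml
  redis-eventsystem:
    image: redis
    command: ["redis-server", "--bind", "0.0.0.0", "--protected-mode", "no"]
    restart: always
    networks:
      - eventsystem
```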
docker log
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:C 21 Nov 2022 20:50:41.106 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:C 21 Nov 2022 20:50:41.106 # Redis version=7.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:C 21 Nov 2022 20:50:41.106 # Configuration loaded
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:M 21 Nov 2022 20:50:41.106 * monotonic clock: POSIX clock_gettime
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:M 21 Nov 2022 20:50:41.108 * Running mode=standalone, port=6379.
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:M 21 Nov 2022 20:50:41.108 # Server initialized
2022-11-21 17:50:41 eventsystem-redis-eventsystem-1 | 1:M 21 Nov 2022 20:50:41.108 * Ready to accept connections
2022-11-21 17:50:41 eventsystem-eventsystem-1 |
2022-11-21 17:50:41 eventsystem-eventsystem-1 | > eventsystem#1.0.0 start
2022-11-21 17:50:41 eventsystem-eventsystem-1 | > node index.js serve
2022-11-21 17:50:41 eventsystem-eventsystem-1 |
2022-11-21 17:50:42 eventsystem-eventsystem-1 | Application is listening at http://localhost:3000
2022-11-21 17:50:42 eventsystem-eventsystem-1 | Mon Nov 21 2022 20:50:42 GMT+0000 (Coordinated Universal Time) - Redis connection error: Error: connect ECONNREFUSED 127.0.0.1:6379
As you can see, the connection is refused for 127.0.0.1, even though in my application Redis is configured with the hostname of the container that runs the Redis server. I can't think of anything that may be causing this problem.
So, to answer my own question: the problem was the options passed to createClient in my code.
In node-redis v4, the host and port must be passed inside a socket object in the createClient options; top-level host and port are ignored, which is why the client fell back to its default of 127.0.0.1:6379.
So, instead of passing host and port at the top level of the options object, you must do the following:
let client : any = Redis.createClient({
  socket: {
    host: process.env.REDIS_HOST,
    port: process.env.REDIS_PORT
  },
  legacyMode: true
});
Hope this helps someone else besides me.
Cheers!

Gunicorn access log format not apply

I'm using Gunicorn to run a FastAPI script. The access log file is created via the accesslog setting in gunicorn.conf.py, yet the access_log_format is not applied. I tried the example from the GitHub repo and it is still not working.
My gunicorn.conf.py
accesslog = '/home/ossbod/chunhueitest/supervisor_log/accesslog.log'
loglevel = 'info'
access_log_format = '%(h)s %(l)s %(t)s "%(r)s" %(s)s %(q)s %(b)s "%(f)s" "%(a)s" %(M)s'
The result I got
<IP>:54668 - "GET /docs HTTP/1.1" 200
<IP>:54668 - "GET /openapi.json HTTP/1.1" 200
<IP>:54668 - "POST /api/v1/add_user HTTP/1.1" 201
How can I get the format to apply to the log?

Kafka - TimeoutError: Request timed out after 30000ms

The Kafka connection times out after 30000ms. It shows this error:
{ TimeoutError: Request timed out after 30000ms
at new TimeoutError (/app/node_modules/kafka-node/lib/errors/TimeoutError.js:6:9)
at Timeout.timeoutId._createTimeout [as _onTimeout] (/app/node_modules/kafka-node/lib/kafkaClient.js:1007:14)
at listOnTimeout (internal/timers.js:535:17)
at processTimers (internal/timers.js:479:7) message: 'Request timed out after 30000ms' }
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient broker is now ready
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient kafka-node-client updated internal metadata
Kafka Producer is connected and ready.
----->data PRODUCT_REF_TOKEN { hash:
'0x964f714829cece2c5f57d5c8d677c251eff82f7fba4b5ba27b4bd650da79a954',
success: 'true' }
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient compressing messages if needed
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient kafka-node-client createBroker 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient missing apiSupport waiting until broker is ready...
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient waitUntilReady [BrokerWrapper 127.0.0.1:9092 (connected: true) (ready: false) (idle: false) (needAuthentication: false) (authenticated: false)]
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient kafka-node-client socket closed 127.0.0.1:9092 (hadError: true)
Tue, 22 Oct 2019 10:10:25 GMT kafka-node:KafkaClient kafka-node-client reconnecting to 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:25 GMT kafka-node:KafkaClient kafka-node-client createBroker 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:25 GMT kafka-node:KafkaClient kafka-node-client socket closed 127.0.0.1:9092 (hadError: true)
Tue, 22 Oct 2019 10:10:26 GMT kafka-node:KafkaClient kafka-node-client reconnecting to 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:26 GMT kafka-node:KafkaClient kafka-node-client createBroker 127.0.0.1:9092
Here is the docker-compose.yml for the Kafka setup; please let me know if any properties need to be set.
version: "3.5"
services:
  api:
    image: opschain-sapi
    restart: always
    command: ["yarn", "start"]
    ports:
      - ${API_PORT}:80
    env_file:
      - ./truffle/contracts.env
      - ./.env
    external_links:
      - ganachecli-private
      - ganachecli-public
    networks:
      - opschain_network
  graphql-api:
    build:
      context: ./graphql-api
      dockerfile: Dockerfile
    command: npm run dev
    ports:
      - 9007:80
    depends_on:
      - mongodb
      - graphql-api-watch
      - api
    volumes:
      - ./graphql-api/dist:/app/dist:delegated
      - ./graphql-api/src:/app/src:delegated
    environment:
      VIRTUAL_HOST: api.blockchain.docker
      PORT: 80
      OFFCHAIN_DB_URL: mongodb://root:password#mongodb:27017
      OFFCHAIN_DB_NAME: opschain-wallet
      OFFCHAIN_DB_USER_COLLECTION: user
      JWT_PASSWORD: 'supersecret'
      JWT_TOKEN_EXPIRE_TIME: 86400000
      BLOCKCHAIN_API: api
    networks:
      - opschain_network
  graphql-api-watch:
    build:
      context: ./graphql-api
      dockerfile: Dockerfile
    command: npm run watch
    volumes:
      - ./graphql-api/src:/app/src:delegated
      - ./graphql-api/dist:/app/dist:delegated
    networks:
      - opschain_network
  mongodb:
    image: mongo:latest
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
      MONGO_INITDB_DATABASE: opschain-wallet
    logging:
      options:
        max-size: 100m
    networks:
      - opschain_network
  ui:
    build:
      context: ./ui
      dockerfile: Dockerfile
    ports:
      - 9000:3000
    volumes:
      - ./ui/public:/app/public:delegated
      - ./ui/src:/app/src:delegated
    depends_on:
      - graphql-api
    networks:
      - opschain_network
    environment:
      VIRTUAL_HOST: tmna.csc.docker
      REACT_APP_API_BASE_URL: http://localhost:8080
    logging:
      options:
        max-size: 10m
  test:
    build: ./test
    volumes:
      - ./test/postman:/app/postman:delegated
    networks:
      - opschain_network
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./pub-sub/zk-single-kafka-single/zoo1/data:/data
      - ./pub-sub/zk-single-kafka-single/zoo1/datalog:/datalog
    networks:
      - opschain_network
  kafka1:
    image: confluentinc/cp-kafka:5.3.1
    hostname: kafka1
    ports:
      - 9092:9092
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      # KAFKA_ADVERTISED_HOST_NAME: localhost
      # KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
      KAFKA_CREATE_TOPICS: "cat:1:1"
    volumes:
      - ./pub-sub/zk-single-kafka-single/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1
      - api
    networks:
      - opschain_network
networks:
  opschain_network:
    external: true
In the above compose file I have exposed Kafka's port 9092 and ZooKeeper's port 2181. I am not sure exactly what the issue is.
const kafka = require('kafka-node');
const config = require('./configUtils');

function sendMessage({ topic, message }) {
  let Producer = kafka.Producer,
    client = new kafka.KafkaClient({ kafkaHost: config.kafka.host, autoConnect: true }),
    producer = new Producer(client);

  producer.on('ready', () => {
    console.log('Kafka Producer is connected and ready.');
    console.log('----->data', topic, message);
    producer.send(
      [
        {
          topic,
          messages: [JSON.stringify(message)],
        }
      ],
      function(_err, data) {
        console.log('--err', _err);
        console.log('------->message sent from kafka', data);
      }
    );
  });

  producer.on('error', error => {
    console.error(error);
  });
}

module.exports = sendMessage;
This is the producer file, where it connects to the Kafka client and, on ready, produces the message.
I ran into a similar issue using the landoop/fast-data-dev image with docker-compose. I was able to solve it by making sure the ADV_HOST environment variable was configured to be the name of the kafka service (e.g. kafka1). Then setting the kafkaHost option to the name of service. (e.g. kafka1:9092).
The environment variable for your kafka image appears to be "KAFKA_ADVERTISED_HOST_NAME".
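With the advertised listeners in the compose file above, which address a client must use depends on where it runs. A sketch (the listener values come from the compose file; KAFKA_HOST is just a hypothetical variable name for whatever configUtils reads):

```yaml
# From another container on opschain_network, use the INTERNAL listener:
#   kafkaHost: "kafka1:19092"
# From a process on the host machine, use the EXTERNAL listener:
#   kafkaHost: "127.0.0.1:9092"
# e.g. in the api service:
  api:
    environment:
      KAFKA_HOST: "kafka1:19092"
```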

Node Socket.io on HA Proxy with multiple end points

I am trying to deploy my Node websocket service on two boxes and front it with HAProxy, but it's not working.
frontend http-in
    mode http
    bind *:80
    acl is_websocket path_beg /prodSocket
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend prodSocket if is_websocket
    acl is_websocket path_beg /demoSocket
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend demoSocket if is_websocket

backend demoSocket
    timeout server 180s
    server 148.251.76.84 148.251.76.84:9000 weight 1 maxconn 1024 check

backend prodSocket
    timeout server 180s
    server 148.251.76.85 148.251.76.85:9000 weight 1 maxconn 1024 check
Client code -
var socket = io('http://localhost/prodSocket', {
  'force new connection': false,
  'reconnection delay': 500,
  'max reconnection attempts': 10,
});
socket.emit('client', { my: 'data' });
socket.on('news', function (data) {
  console.log(data);
});
The above code does not work, but if I make the following changes it works:
frontend http-in
    mode http
    bind *:80
    acl is_websocket path_beg /socket.io
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend socket if is_websocket

backend socket
    timeout server 180s
    server 148.251.76.84 148.251.76.84:9000 weight 1 maxconn 1024 check

Client code -
var socket = io('http://localhost:9090', {
  'force new connection': false,
  'reconnection delay': 500,
  'max reconnection attempts': 10,
});
socket.emit('client', { my: 'data' });
socket.on('news', function (data) {
  console.log(data);
});
I understood that Socket.IO calls the /socket.io endpoint to create a socket connection, but how can I then deploy my service on two different endpoints?
Versions -
Socket.io - 1.4.5
Node - v5.6.0
HAproxy - 1.4.24
Ubuntu - 14.04
HAProxy Log using /socket.io endpoint -
config -
frontend http-in
    mode http
    bind *:9090
    acl is_websocket path_beg /socket.io
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend socket if is_websocket

Client script -
var socket = io('http://localhost:9090', {
  'force new connection': false,
  'reconnection delay': 500,
  'max reconnection attempts': 10,
});
Jul 21 10:50:04 localhost haproxy[11981]: 127.0.0.1:48571 [21/Jul/2016:10:49:51.830] http-in socket/148.251.76.84 0/0/171/676/12725 101 187 - - ---- 2/2/2/2/0 0/0 "GET /socket.io/?EIO=3&transport=websocket&sid=PvK2vnQO1_IepDHOAAAJ HTTP/1.1"
Jul 21 10:50:55 localhost haproxy[11981]: 127.0.0.1:48573 [21/Jul/2016:10:49:51.832] http-in socket/148.251.76.84 0/0/123/126/63531 200 1551 - - cD-- 2/2/2/2/0 0/0 "GET /socket.io/?EIO=3&transport=polling&t=LOBvKmM&sid=PvK2vnQO1_IepDHOAAAJ HTTP/1.1"
Jul 21 10:50:55 localhost haproxy[11981]: 127.0.0.1:48569 [21/Jul/2016:10:49:51.505] http-in socket/148.251.76.84 0/0/152/159/64144 200 1199 - - cD-- 1/1/1/1/0 0/0 "GET /socket.io/?EIO=3&transport=polling&t=LOBvKhF HTTP/1.1"
HAProxy Log using /prodSocket endpoint -
Config -
frontend http-in
    mode http
    bind *:9090
    acl is_websocket path_beg /prodSocket
    acl is_websocket hdr(Upgrade) -i WebSocket
    acl is_websocket hdr_beg(Host) -i ws
    use_backend socket if is_websocket

Client script -
var socket = io('http://localhost:9090/prodSocket', {
  'force new connection': false,
  'reconnection delay': 500,
  'max reconnection attempts': 10,
});
Jul 21 10:55:11 localhost haproxy[12361]: Proxy socket started.
Jul 21 10:55:11 localhost haproxy[12361]: 127.0.0.1:48856 [21/Jul/2016:10:55:11.767] http-in http-in/<NOSRV> -1/-1/-1/-1/0 503 213 - - SC-- 0/0/0/0/0 0/0 "GET /socket.io/?EIO=3&transport=polling&t=LOBwYtK HTTP/1.1"
Jul 21 10:55:16 localhost haproxy[12362]: 127.0.0.1:48859 [21/Jul/2016:10:55:16.229] http-in http-in/<NOSRV> -1/-1/-1/-1/0 503 213 - - SC-- 0/0/0/0/0 0/0 "GET /socket.io/?EIO=3&transport=polling&t=LOBwZz3 HTTP/1.1"
Jul 21 10:55:17 localhost haproxy[12362]: 127.0.0.1:48860 [21/Jul/2016:10:55:17.364] http-in http-in/<NOSRV> -1/-1/-1/-1/0 503 213 - - SC-- 0/0/0/0/0 0/0 "GET /socket.io/?EIO=3&transport=polling&t=LOBwaEn HTTP/1.1"
Jul 21 10:55:19 localhost haproxy[12362]: 127.0.0.1:48862 [21/Jul/2016:10:55:19.075] http-in http-in/<NOSRV> -1/-1/-1/-1/0 503 213 - - SC-- 0/0/0/0/0 0/0 "GET /socket.io/?EIO=3&transport=polling&t=LOBwafV HTTP/1.1"
Jul 21 10:55:22 localhost haproxy[12362]: 127.0.0.1:48865 [21/Jul/2016:10:55:22.262] http-in http-in/<NOSRV> -1/-1/-1/-1/0 503 213 - - SC-- 0/0/0/0/0 0/0 "GET /socket.io/?EIO=3&transport=polling&t=LOBwbRI HTTP/1.1"
Jul 21 10:55:27 localhost haproxy[12362]: 127.0.0.1:48869 [21/Jul/2016:10:55:27.271] http-in http-in/<NOSRV> -1/-1/-1/-1/0 503 213 - - SC-- 0/0/0/0/0 0/0 "GET /socket.io/?EIO=3&transport=polling&t=LOBwcfa HTTP/1.1"
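The <NOSRV> 503s above show the mismatch: io('http://localhost:9090/prodSocket') only selects a Socket.IO namespace, so the HTTP requests still go to /socket.io/..., which no ACL matches. A sketch of routing by transport path instead, assuming Socket.IO's path option (present in 1.x, default /socket.io) on both client and server:

```
# haproxy: match the transport path
acl is_prod path_beg /prodSocket
use_backend prodSocket if is_prod

# client: move the transport to that path (the namespace stays the default)
var socket = io('http://localhost:9090', { path: '/prodSocket' });

# server: serve the engine on the same path
var io = require('socket.io')(httpServer, { path: '/prodSocket' });
```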

Issue sending metrics with statsd

I was using the following instructions to install and configure StatsD on a Graphite server:
https://www.digitalocean.com/community/tutorials/how-to-configure-statsd-to-collect-arbitrary-stats-for-graphite-on-ubuntu-14-04
Now that I have a server with StatsD running, I do not see the metrics being logged under /var/log/statsd/statsd.log when I test sending them from the command line. Here is what I see:
29 Oct 02:30:39 - server is up
29 Oct 02:47:49 - reading config file: /etc/statsd/localConfig.js
29 Oct 02:47:49 - server is up
29 Oct 14:16:45 - reading config file: /etc/statsd/localConfig.js
29 Oct 14:16:45 - server is up
29 Oct 15:36:47 - reading config file: /etc/statsd/localConfig.js
29 Oct 15:36:47 - DEBUG: Loading server: ./servers/udp
29 Oct 15:36:47 - server is up
29 Oct 15:36:47 - DEBUG: Loading backend: ./backends/graphite
29 Oct 15:36:47 - DEBUG: numStats: 3
The log stays at the last entry of 'numStats: 3', even though I keep entering different metrics at the command line.
Here are a sample of the metrics I entered:
echo "sample.gauge:14|g" | nc -u -w0 127.0.0.1 8125
echo "sample.gauge:10|g" | nc -u -w0 127.0.0.1 8125
echo "sample.count:1|c" | nc -u -w0 127.0.0.1 8125
echo "sample.set:50|s" | nc -u -w0 127.0.0.1 8125
Of interest, I see this under /var/log/statsd/stderr.log:
events.js:72
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE
at errnoException (net.js:901:11)
at Server._listen2 (net.js:1039:14)
at listen (net.js:1061:10)
at Server.listen (net.js:1135:5)
at /usr/share/statsd/stats.js:383:16
at null.<anonymous> (/usr/share/statsd/lib/config.js:40:5)
at EventEmitter.emit (events.js:95:17)
at /usr/share/statsd/lib/config.js:20:12
at fs.js:268:14
at Object.oncomplete (fs.js:107:15)
Here is what my localConfig.js file looks like:
{
graphitePort: 2003
, graphiteHost: "localhost"
, port: 8125
, graphite: {
legacyNamespace: false
},
debug: true,
dumpMessages: true
}
Would anybody be able to shed some light as to where the problem lies?
Thanks!
There is a management interface available by default on port 8126: https://github.com/etsy/statsd/blob/master/docs/admin_interface.md
You likely have another service listening on that port in the same system.
Try this:
# localConfig.js
{
graphitePort: 2003
, graphiteHost: "localhost"
, port: 8125
, mgmt_port: 8127
, graphite: {
legacyNamespace: false
},
debug: true,
dumpMessages: true
}
See https://github.com/etsy/statsd/blob/master/exampleConfig.js#L28
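A quick way to confirm the EADDRINUSE diagnosis before changing the config (a bash-only sketch; 8126 is statsd's default management port):

```shell
# Prints "in use" if something is already listening on 8126, which would
# explain the EADDRINUSE crash in stderr.log.
(echo > /dev/tcp/127.0.0.1/8126) 2>/dev/null \
  && echo "port 8126 in use" || echo "port 8126 free"
```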
