JupyterHub "500 : Internal Server Error Redirect loop detected." - jupyter-lab

When running JupyterHub via docker-compose up and logging in for the first time as some_user, created via the JupyterHub web GUI, I can't seem to spawn the server.
Logs:
sudo docker-compose up
[+] Running 1/0
⠿ Container jh_container Created 0.0s
Attaching to jh_container
jh_container | [D 2022-07-17 12:25:07.224 JupyterHub application:837] Looking for /etc/jupyterhub/jupyterhub_config in /srv/jupyterhub
jh_container | [D 2022-07-17 12:25:07.229 JupyterHub application:858] Loaded config file: /etc/jupyterhub/jupyterhub_config.py
jh_container | [I 2022-07-17 12:25:07.235 JupyterHub app:2771] Running JupyterHub version 2.3.1
jh_container | [I 2022-07-17 12:25:07.235 JupyterHub app:2801] Using Authenticator: nativeauthenticator.nativeauthenticator.NativeAuthenticator
jh_container | [I 2022-07-17 12:25:07.235 JupyterHub app:2801] Using Spawner: dockerspawner.dockerspawner.DockerSpawner-12.1.0
jh_container | [I 2022-07-17 12:25:07.236 JupyterHub app:2801] Using Proxy: jupyterhub.proxy.ConfigurableHTTPProxy-2.3.1
jh_container | [I 2022-07-17 12:25:07.239 JupyterHub app:1606] Loading cookie_secret from /srv/jupyterhub/jupyterhub_cookie_secret
jh_container | [D 2022-07-17 12:25:07.240 JupyterHub app:1775] Connecting to db: sqlite:///jupyterhub.sqlite
jh_container | [D 2022-07-17 12:25:07.248 JupyterHub orm:953] database schema version found: 833da8570507
jh_container | [I 2022-07-17 12:25:07.275 JupyterHub proxy:496] Generating new CONFIGPROXY_AUTH_TOKEN
jh_container | [D 2022-07-17 12:25:07.275 JupyterHub app:2024] Loading roles into database
jh_container | [I 2022-07-17 12:25:07.283 JupyterHub app:1926] Not using allowed_users. Any authenticated user will be allowed.
jh_container | [D 2022-07-17 12:25:07.285 JupyterHub app:2283] Purging expired APITokens
jh_container | [D 2022-07-17 12:25:07.287 JupyterHub app:2283] Purging expired OAuthCodes
jh_container | [D 2022-07-17 12:25:07.288 JupyterHub app:2116] Loading role assignments from config
jh_container | [D 2022-07-17 12:25:07.297 JupyterHub app:2429] Initializing spawners
jh_container | [D 2022-07-17 12:25:07.298 JupyterHub app:2560] Loaded users:
jh_container |
jh_container | [I 2022-07-17 12:25:07.298 JupyterHub app:2840] Initialized 0 spawners in 0.001 seconds
jh_container | [W 2022-07-17 12:25:07.299 JupyterHub proxy:687] Running JupyterHub without SSL. I hope there is SSL termination happening somewhere else...
jh_container | [I 2022-07-17 12:25:07.299 JupyterHub proxy:691] Starting proxy @ http://:8000
jh_container | [D 2022-07-17 12:25:07.299 JupyterHub proxy:692] Proxy cmd: ['configurable-http-proxy', '--ip', '', '--port', '8000', '--api-ip', '127.0.0.1', '--api-port', '8001', '--error-target', 'http://192.168.64.2:8081/hub/error']
jh_container | [D 2022-07-17 12:25:07.302 JupyterHub proxy:610] Writing proxy pid file: jupyterhub-proxy.pid
jh_container | 12:25:07.614 [ConfigProxy] info: Proxying http://*:8000 to (no default)
jh_container | 12:25:07.615 [ConfigProxy] info: Proxy API at http://127.0.0.1:8001/api/routes
jh_container | [D 2022-07-17 12:25:07.645 JupyterHub proxy:728] Proxy started and appears to be up
jh_container | [D 2022-07-17 12:25:07.646 JupyterHub proxy:821] Proxy: Fetching GET http://127.0.0.1:8001/api/routes
jh_container | 12:25:07.652 [ConfigProxy] info: 200 GET /api/routes
jh_container | [I 2022-07-17 12:25:07.652 JupyterHub app:3089] Hub API listening on http://0.0.0.0:8081/hub/
jh_container | [I 2022-07-17 12:25:07.652 JupyterHub app:3091] Private Hub API connect url http://192.168.64.2:8081/hub/
jh_container | [D 2022-07-17 12:25:07.653 JupyterHub proxy:343] Fetching routes to check
jh_container | [D 2022-07-17 12:25:07.653 JupyterHub proxy:821] Proxy: Fetching GET http://127.0.0.1:8001/api/routes
jh_container | 12:25:07.653 [ConfigProxy] info: 200 GET /api/routes
jh_container | [D 2022-07-17 12:25:07.653 JupyterHub proxy:346] Checking routes
jh_container | [I 2022-07-17 12:25:07.653 JupyterHub proxy:431] Adding route for Hub: / => http://192.168.64.2:8081
jh_container | [D 2022-07-17 12:25:07.653 JupyterHub proxy:821] Proxy: Fetching POST http://127.0.0.1:8001/api/routes/
jh_container | 12:25:07.654 [ConfigProxy] info: Adding route / -> http://192.168.64.2:8081
jh_container | 12:25:07.654 [ConfigProxy] info: Route added / -> http://192.168.64.2:8081
jh_container | 12:25:07.654 [ConfigProxy] info: 201 POST /api/routes/
jh_container | [I 2022-07-17 12:25:07.655 JupyterHub app:3156] JupyterHub is now running at http://:8000
jh_container | [D 2022-07-17 12:25:07.655 JupyterHub app:2764] It took 0.435 seconds for the Hub to start
jh_container | [I 2022-07-17 12:25:19.270 JupyterHub log:189] 302 GET / -> /hub/ (@192.168.64.1) 0.75ms
jh_container | [D 2022-07-17 12:25:19.279 JupyterHub base:326] Refreshing auth for myadmin
jh_container | [D 2022-07-17 12:25:19.281 JupyterHub user:399] Creating <class 'dockerspawner.dockerspawner.DockerSpawner'> for myadmin:
jh_container | [I 2022-07-17 12:25:19.283 JupyterHub log:189] 302 GET /hub/ -> /hub/spawn (myadmin@192.168.64.1) 10.58ms
jh_container | [D 2022-07-17 12:25:19.285 JupyterHub scopes:491] Checking access via scope servers
jh_container | [D 2022-07-17 12:25:19.285 JupyterHub scopes:389] Unrestricted access to /hub/spawn via servers
jh_container | [D 2022-07-17 12:25:19.285 JupyterHub pages:215] Triggering spawn with default options for myadmin
jh_container | [D 2022-07-17 12:25:19.285 JupyterHub base:934] Initiating spawn for myadmin
jh_container | [D 2022-07-17 12:25:19.285 JupyterHub base:938] 0/100 concurrent spawns
jh_container | [D 2022-07-17 12:25:19.285 JupyterHub base:943] 0 active servers
jh_container | [D 2022-07-17 12:25:19.288 JupyterHub roles:477] Checking token permissions against requested role server
jh_container | [I 2022-07-17 12:25:19.289 JupyterHub roles:482] Adding role server to token: <APIToken('3dde...', user='myadmin', client_id='jupyterhub')>
jh_container | [I 2022-07-17 12:25:19.295 JupyterHub provider:607] Creating oauth client jupyterhub-user-myadmin
jh_container | [D 2022-07-17 12:25:19.307 JupyterHub user:728] Calling Spawner.start for myadmin
jh_container | [D 2022-07-17 12:25:19.319 JupyterHub dockerspawner:982] Getting container 'jupyter-myadmin'
jh_container | [I 2022-07-17 12:25:19.320 JupyterHub dockerspawner:988] Container 'jupyter-myadmin' is gone
jh_container | [D 2022-07-17 12:25:19.323 JupyterHub dockerspawner:1148] Starting host with config: {'auto_remove': True, 'binds': {}, 'links': {}, 'mounts': [], 'mem_limit': 0, 'cpu_period': 100000, 'cpu_quota': 0, 'network_mode': 'jupyterhub-network'}
jh_container | [I 2022-07-17 12:25:19.337 JupyterHub dockerspawner:1272] Created container jupyter-myadmin (id: d5b54b4) from image jupyter/scipy-notebook:lab-3.4.3
jh_container | [I 2022-07-17 12:25:19.337 JupyterHub dockerspawner:1296] Starting container jupyter-myadmin (id: d5b54b4)
jh_container | [D 2022-07-17 12:25:19.631 JupyterHub spawner:1258] Polling subprocess every 30s
jh_container | [I 2022-07-17 12:25:20.286 JupyterHub log:189] 302 GET /hub/spawn -> /hub/spawn-pending/myadmin (myadmin@192.168.64.1) 1002.05ms
jh_container | [D 2022-07-17 12:25:20.291 JupyterHub scopes:491] Checking access via scope servers
jh_container | [D 2022-07-17 12:25:20.291 JupyterHub scopes:389] Unrestricted access to /hub/spawn-pending/myadmin via servers
jh_container | [I 2022-07-17 12:25:20.291 JupyterHub pages:401] myadmin is pending spawn
jh_container | [I 2022-07-17 12:25:20.310 JupyterHub log:189] 200 GET /hub/spawn-pending/myadmin (myadmin@192.168.64.1) 20.61ms
jh_container | [D 2022-07-17 12:25:20.468 JupyterHub scopes:491] Checking access via scope read:servers
jh_container | [D 2022-07-17 12:25:20.468 JupyterHub scopes:389] Unrestricted access to /hub/api/users/myadmin/server/progress via read:servers
jh_container | [I 2022-07-17 12:25:20.527 JupyterHub log:189] 200 GET /hub/api (@192.168.64.3) 0.53ms
jh_container | [D 2022-07-17 12:25:20.532 JupyterHub base:281] Recording first activity for <APIToken('3dde...', user='myadmin', client_id='jupyterhub')>
jh_container | [D 2022-07-17 12:25:20.539 JupyterHub scopes:301] Authenticated with token <APIToken('3dde...', user='myadmin', client_id='jupyterhub')>
jh_container | [D 2022-07-17 12:25:20.541 JupyterHub scopes:491] Checking access via scope users:activity
jh_container | [D 2022-07-17 12:25:20.541 JupyterHub scopes:402] Argument-based access to /hub/api/users/myadmin/activity via users:activity
jh_container | [D 2022-07-17 12:25:20.542 JupyterHub users:859] Activity for user myadmin: 2022-07-17T12:25:20.513412Z
jh_container | [D 2022-07-17 12:25:20.542 JupyterHub users:877] Activity on server myadmin/: 2022-07-17T12:25:20.513412Z
jh_container | [I 2022-07-17 12:25:20.545 JupyterHub log:189] 200 POST /hub/api/users/myadmin/activity (myadmin@192.168.64.3) 14.99ms
jh_container | [D 2022-07-17 12:25:20.653 JupyterHub utils:230] Server at http://192.168.64.3:8888/user/myadmin/ responded with 302
jh_container | [D 2022-07-17 12:25:20.653 JupyterHub _version:74] jupyterhub and jupyterhub-singleuser both on version 2.3.1
jh_container | [I 2022-07-17 12:25:20.653 JupyterHub base:963] User myadmin took 1.368 seconds to start
jh_container | [I 2022-07-17 12:25:20.653 JupyterHub proxy:286] Adding user myadmin to proxy /user/myadmin/ => http://192.168.64.3:8888
jh_container | [D 2022-07-17 12:25:20.653 JupyterHub proxy:821] Proxy: Fetching POST http://127.0.0.1:8001/api/routes/user/myadmin
jh_container | 12:25:20.655 [ConfigProxy] info: Adding route /user/myadmin -> http://192.168.64.3:8888
jh_container | 12:25:20.655 [ConfigProxy] info: Route added /user/myadmin -> http://192.168.64.3:8888
jh_container | 12:25:20.655 [ConfigProxy] info: 201 POST /api/routes/user/myadmin
jh_container | [I 2022-07-17 12:25:20.656 JupyterHub users:753] Server myadmin is ready
jh_container | [I 2022-07-17 12:25:20.656 JupyterHub log:189] 200 GET /hub/api/users/myadmin/server/progress (myadmin@192.168.64.1) 189.29ms
jh_container | [D 2022-07-17 12:25:20.662 JupyterHub scopes:491] Checking access via scope servers
jh_container | [D 2022-07-17 12:25:20.662 JupyterHub scopes:389] Unrestricted access to /hub/spawn-pending/myadmin via servers
jh_container | [I 2022-07-17 12:25:20.663 JupyterHub log:189] 302 GET /hub/spawn-pending/myadmin -> /user/myadmin/ (myadmin@192.168.64.1) 2.32ms
jh_container | [I 2022-07-17 12:25:20.666 JupyterHub log:189] 302 GET /user/myadmin/ -> /hub/user/myadmin/ (@192.168.64.1) 0.27ms
jh_container | [D 2022-07-17 12:25:20.668 JupyterHub scopes:491] Checking access via scope access:servers
jh_container | [D 2022-07-17 12:25:20.668 JupyterHub scopes:389] Unrestricted access to /hub/user/myadmin/ via access:servers
jh_container | [I 2022-07-17 12:25:20.669 JupyterHub log:189] 302 GET /hub/user/myadmin/ -> /user/myadmin/?redirects=1 (myadmin@192.168.64.1) 1.35ms
jh_container | [I 2022-07-17 12:25:20.671 JupyterHub log:189] 302 GET /user/myadmin/?redirects=1 -> /hub/user/myadmin/?redirects=1 (@192.168.64.1) 0.25ms
jh_container | [D 2022-07-17 12:25:20.673 JupyterHub scopes:491] Checking access via scope access:servers
jh_container | [D 2022-07-17 12:25:20.673 JupyterHub scopes:389] Unrestricted access to /hub/user/myadmin/ via access:servers
jh_container | [W 2022-07-17 12:25:20.673 JupyterHub base:1615] Redirect loop detected on /hub/user/myadmin/?redirects=1
jh_container | [I 2022-07-17 12:25:22.675 JupyterHub log:189] 302 GET /hub/user/myadmin/?redirects=1 -> /user/myadmin/?redirects=2 (myadmin@192.168.64.1) 2003.39ms
jh_container | [I 2022-07-17 12:25:22.679 JupyterHub log:189] 302 GET /user/myadmin/?redirects=2 -> /hub/user/myadmin/?redirects=2 (@192.168.64.1) 0.67ms
jh_container | [D 2022-07-17 12:25:22.683 JupyterHub scopes:491] Checking access via scope access:servers
jh_container | [D 2022-07-17 12:25:22.683 JupyterHub scopes:389] Unrestricted access to /hub/user/myadmin/ via access:servers
jh_container | [W 2022-07-17 12:25:22.684 JupyterHub base:1615] Redirect loop detected on /hub/user/myadmin/?redirects=2
jh_container | [I 2022-07-17 12:25:26.685 JupyterHub log:189] 302 GET /hub/user/myadmin/?redirects=2 -> /user/myadmin/?redirects=3 (myadmin@192.168.64.1) 4003.62ms
jh_container | [I 2022-07-17 12:25:26.689 JupyterHub log:189] 302 GET /user/myadmin/?redirects=3 -> /hub/user/myadmin/?redirects=3 (@192.168.64.1) 0.66ms
jh_container | [D 2022-07-17 12:25:26.693 JupyterHub scopes:491] Checking access via scope access:servers
jh_container | [D 2022-07-17 12:25:26.693 JupyterHub scopes:389] Unrestricted access to /hub/user/myadmin/ via access:servers
jh_container | [W 2022-07-17 12:25:26.694 JupyterHub base:1615] Redirect loop detected on /hub/user/myadmin/?redirects=3
jh_container | [I 2022-07-17 12:25:34.695 JupyterHub log:189] 302 GET /hub/user/myadmin/?redirects=3 -> /user/myadmin/?redirects=4 (myadmin@192.168.64.1) 8003.82ms
jh_container | [I 2022-07-17 12:25:34.698 JupyterHub log:189] 302 GET /user/myadmin/?redirects=4 -> /hub/user/myadmin/?redirects=4 (@192.168.64.1) 0.41ms
jh_container | [D 2022-07-17 12:25:34.701 JupyterHub scopes:491] Checking access via scope access:servers
jh_container | [D 2022-07-17 12:25:34.701 JupyterHub scopes:389] Unrestricted access to /hub/user/myadmin/ via access:servers
jh_container | [W 2022-07-17 12:25:34.701 JupyterHub web:1787] 500 GET /hub/user/myadmin/?redirects=4 (192.168.64.1): Redirect loop detected.
jh_container | [D 2022-07-17 12:25:34.702 JupyterHub base:1342] No template for 500
jh_container | [E 2022-07-17 12:25:34.719 JupyterHub log:181] {
jh_container | "Host": "192.168.64.2:8081",
jh_container | "Connection": "keep-alive",
jh_container | "Cache-Control": "max-age=0",
jh_container | "Upgrade-Insecure-Requests": "1",
jh_container | "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.53 Safari/537.36",
jh_container | "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
jh_container | "Referer": "http://192.168.64.2:8081/hub/spawn-pending/myadmin",
jh_container | "Accept-Encoding": "gzip, deflate",
jh_container | "Accept-Language": "pl-PL,pl;q=0.9,en-US;q=0.8,en;q=0.7",
jh_container | "Cookie": "jupyterhub-hub-login=[secret]; jupyterhub-session-id=[secret]"
jh_container | }
jh_container | [E 2022-07-17 12:25:34.719 JupyterHub log:189] 500 GET /hub/user/myadmin/?redirects=4 (myadmin@192.168.64.1) 19.43ms
^CGracefully stopping... (press Ctrl+C again to force)
[+] Running 1/1
⠿ Container jh_container Stopped
with the following docker-compose.yml:
version: "3.9"
services:
  jupyterhub:                          # Configuration for Hub+Proxy
    build:
      context: ./jupyterhub/           # path to the Dockerfile
      args:
        JUPYTERHUB_VERSION: 2.3.1
    image: mc_jupyterhub_img:v1        # image name
    container_name: jh_container       # container name
    restart: always
    tty: true
    stdin_open: true
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:rw"
    ports:
      - 8000:8000
      - 8081:8081
    networks:
      - ${DOCKER_NETWORK_NAME}
    environment:
      DOCKER_JUPYTER_IMAGE: jupyter/scipy-notebook:lab-3.4.3
      DOCKER_NETWORK_NAME: ${DOCKER_NETWORK_NAME}
    env_file:
      - .env
    command: >
      jupyterhub --debug -f /etc/jupyterhub/jupyterhub_config.py
networks:
  jupyterhub-network:
    name: ${DOCKER_NETWORK_NAME}
and jupyterhub_config.py:
import os
import nativeauthenticator

c = get_config()

# Authentication: NativeAuthenticator with its sign-up templates
c.JupyterHub.authenticator_class = 'nativeauthenticator.NativeAuthenticator'
c.JupyterHub.template_paths = [f"{os.path.dirname(nativeauthenticator.__file__)}/templates/"]
c.Authenticator.admin_users = {"myadmin"}

# Advertise the Hub on the container's eth0 address so spawned
# single-user containers can reach the Hub API
import netifaces
eth0 = netifaces.ifaddresses('eth0')
eth0_ipv4 = eth0[netifaces.AF_INET][0]
c.JupyterHub.hub_ip = '0.0.0.0'
c.JupyterHub.hub_connect_ip = eth0_ipv4['addr']

# Spawner: one Docker container per user, attached to the shared network
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
c.DockerSpawner.image = os.environ['DOCKER_JUPYTER_IMAGE']
network_name = os.environ['DOCKER_NETWORK_NAME']
c.DockerSpawner.use_internal_ip = True
c.DockerSpawner.network_name = network_name
c.DockerSpawner.extra_host_config = {'network_mode': network_name}
c.DockerSpawner.remove = True
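As an aside, the netifaces lookup in the config above can be done with the standard library alone. The sketch below is a hypothetical replacement (container_ipv4 is not part of JupyterHub); the UDP-connect trick makes the kernel pick whichever interface it would route through, without sending any packets:

```python
import socket

def container_ipv4() -> str:
    """Best-effort lookup of this host's primary IPv4 address (stdlib only)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        try:
            # connect() on a UDP socket sends nothing; it only selects
            # the outbound interface whose address we then read back.
            s.connect(("10.255.255.255", 1))
            return s.getsockname()[0]
        except OSError:
            return "127.0.0.1"  # no route at all; fall back to loopback

# In jupyterhub_config.py this could replace the netifaces block:
# c.JupyterHub.hub_connect_ip = container_ipv4()
```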
How can I fix this? What am I doing wrong?

Related

Why can't my socket server container reach my Redis server container? [duplicate]

This question already has answers here:
Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
(7 answers)
Closed 9 months ago.
I have this docker-compose file that connects my Redis server with my socket server, which has a Redis client:
version: "3.8"
services:
  redis-server:
    image: "redis"
    volumes:
      - express-chat-vol:/express-chat/
    networks:
      - express-chat-net
    ports:
      - 6379:6379
  socket-server:
    build:
      context: ./server/
      dockerfile: Dockerfile.socketServer
    ports:
      - 5000:5000
    networks:
      - express-chat-net
volumes:
  express-chat-vol:
networks:
  express-chat-net:
But the socket server can't reach the Redis server and throws the error below. I've been stuck for two days now; how can I solve it?
Starting token2_socket-server_1 ... done
Starting token2_redis-server_1 ... done
Attaching to token2_socket-server_1, token2_redis-server_1
redis-server_1 | 1:C 29 May 2022 01:07:22.191 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-server_1 | 1:C 29 May 2022 01:07:22.191 # Redis version=7.0.0, bits=64, commit=00000000, modified=0, pid=1, just started
redis-server_1 | 1:C 29 May 2022 01:07:22.191 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
//some logs
redis-server_1 | 1:M 29 May 2022 01:07:22.195 * Ready to accept connections
socket-server_1 | socket server is listening on port 5000
socket-server_1 | node:internal/process/promises:288
socket-server_1 | triggerUncaughtException(err, true /* fromPromise */);
socket-server_1 | ^
socket-server_1 |
socket-server_1 | Error: connect ECONNREFUSED 127.0.0.1:6379
socket-server_1 | at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1195:16)
socket-server_1 | Emitted 'error' event on Commander instance at:
socket-server_1 | at RedisSocket.<anonymous> (/socketServer/node_modules/@redis/client/dist/lib/client/index.js:339:14)
socket-server_1 | at RedisSocket.emit (node:events:527:28)
socket-server_1 | at RedisSocket._RedisSocket_connect (/socketServer/node_modules/@redis/client/dist/lib/client/socket.js:127:14)
socket-server_1 | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
socket-server_1 | at async Commander.connect (/socketServer/node_modules/@redis/client/dist/lib/client/index.js:163:9)
socket-server_1 | at async connectRedisClient (/socketServer/socketServer.js:19:3) {
socket-server_1 | errno: -111,
socket-server_1 | code: 'ECONNREFUSED',
socket-server_1 | syscall: 'connect',
socket-server_1 | address: '127.0.0.1',
socket-server_1 | port: 6379
socket-server_1 | }
socket-server_1 |
socket-server_1 | Node.js v18.0.0
token2_socket-server_1 exited with code 1
Try using redis-server as the hostname and not 127.0.0.1.
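For context on why this works: Compose puts both services on express-chat-net and registers each service name in that network's DNS, while 127.0.0.1 inside the socket-server container is the socket-server container itself, where no Redis runs. A small stdlib sketch (resolve is a hypothetical helper, not part of the project) illustrates the distinction:

```python
import socket

def resolve(host: str):
    """Return the IPv4 address `host` resolves to, or None if it doesn't resolve."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

# Run inside the socket-server container:
print(resolve("redis-server"))  # the redis container's IP on express-chat-net
print(resolve("localhost"))     # 127.0.0.1 -- this container, not the redis one
```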

I get an EADDRINUSE error when trying to start my project with pm2

I get the following error when running 'pm2 start project.json' in my project.
port: 3000 }
0|serv | Tue, 08 Sep 2020 03:14:18 GMT app LoadSettingFromRedis: loaded
0|serv | { Error: listen EADDRINUSE 127.0.0.1:3000
0|serv | at Server.setupListenHandle [as _listen2] (net.js:1360:14)
0|serv | at listenInCluster (net.js:1401:12)
0|serv | at doListen (net.js:1510:7)
0|serv | at _combinedTickCallback (internal/process/next_tick.js:142:11)
0|serv | at process._tickCallback (internal/process/next_tick.js:181:9)
0|serv | errno: 'EADDRINUSE',
0|serv | code: 'EADDRINUSE',
0|serv | syscall: 'listen',
0|serv | address: '127.0.0.1',
0|serv | port: 3000 }
0|serv | Tue, 08 Sep 2020 03:15:08 GMT app LoadSettingFromRedis: loaded
0|serv | Tue, 08 Sep 2020 03:20:43 GMT app LoadSettingFromRedis: loaded
When I check what process is listening on port 3000, I see node. I kill this process, but it still doesn't solve the issue. Does anyone know what the problem is here?
It means your port is already in use. Try killing whatever is bound to the port with the following command:
sudo kill -9 $(sudo lsof -t -i:3000)
If that doesn't work, try the following:
sudo lsof -i tcp:3000   # this will return some PIDs
sudo kill -9 [your pid to remove]
Then run the pm2 start command again.
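One caveat worth checking (an assumption about this setup, not visible in the logs): pm2's daemon restarts processes it manages, so a killed node process may simply respawn; stopping the app via pm2 itself (e.g. pm2 delete) may be needed first. To confirm the port really is free before restarting, a stdlib check like this hypothetical port_in_use helper can be used:

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already bound to host:port (EADDRINUSE)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR lets the probe ignore sockets lingering in TIME_WAIT,
        # but bind() still fails against an active listener.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True
```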

Hyperledger Fabric performance issue with couchdb

I am trying to invoke a chaincode function. CouchDB is taking almost 15 seconds to serve 3 requests from Fabric (2 queries and 1 write operation).
Here are the peer logs ==> https://hastebin.com/ezihededuq.md
Orderer logs ==> https://hastebin.com/enebuxuval.coffeescript
Chaincode function that was executed ==> https://hastebin.com/uwazokegih.cs
But when I query CouchDB directly, it only takes a few milliseconds.
If anyone knows why, please help.
Here CouchDB took 5 seconds (lines 5 & 6 of the peer log):
2018-07-25 11:50:47.300 IST [couchdb] handleRequest -> DEBU 15cd HTTP
Request: GET /assetchain_assetchaincode/_design/lastnameasc/_view/lastnameasc?stale=update_after HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2018-07-25 11:50:52.309 IST [couchdb] handleRequest -> DEBU 15ce Exiting handleRequest()
Lines 9 & 10: CouchDB took 4 seconds:
2018-07-25 11:50:52.309 IST [couchdb] handleRequest -> DEBU 15d1 HTTP Request: GET /assetchain_assetchaincode/_design/lastnamedesc/_view/lastnamedesc?stale=update_after HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2018-07-25 11:50:56.991 IST [endorser] ProcessProposal -> DEBU 15d2 Entering: Got request from 192.168.0.18:60858
Lines 68-69: CouchDB took 5 seconds again:
2018-07-25 11:50:57.322 IST [couchdb] handleRequest -> DEBU 160c HTTP Request: GET /assetchain_assetchaincode/_design/request_created_at_sort_asc/_view/request_sorting?stale=update_after HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2018-07-25 11:51:02.007 IST [couchdb] handleRequest -> DEBU 160d Exiting handleRequest()
Lines 259-260: CouchDB took 2 seconds:
2018-07-25 11:51:02.335 IST [couchdb] handleRequest -> DEBU 1699 HTTP Request: GET /assetchain_assetchaincode/_design/request_sort_name_asc/_view/request_sort_name_asc?stale=update_after HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2018-07-25 11:51:04.123 IST [policies] Evaluate -> DEBU 169a == Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/BlockValidation ==
Lines 293-294: CouchDB took 2 seconds:
2018-07-25 11:51:04.125 IST [couchdb] handleRequest -> DEBU 16bb HTTP Request: GET /assetchain_lscc/assetchaincode?attachments=true HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2018-07-25 11:51:06.071 IST [endorser] ProcessProposal -> DEBU 16bc Entering: Got request from 192.168.0.18:60858
Lines 381-382: CouchDB took 2 seconds:
2018-07-25 11:51:07.346 IST [couchdb] handleRequest -> DEBU 1713 HTTP Request: GET /assetchain_assetchaincode/_design/request_sort_name_dsc/_view/request_sort_name_dsc?stale=update_after HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2018-07-25 11:51:09.137 IST [couchdb] handleRequest -> DEBU 1714 Exiting handleRequest()
Lines 489-490: CouchDB took 3 seconds:
2018-07-25 11:51:09.338 IST [couchdb] handleRequest -> DEBU 177c HTTP Request: GET /assetchain_/statedb_savepoint?attachments=true HTTP/1.1 | Host: couchdb:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Accept-Encoding: gzip | |
2018-07-25 11:51:12.356 IST [couchdb] handleRequest -> DEBU 177d Exiting handleRequest()
The peer, CouchDB, and orderer are each hosted on a separate machine.
The configuration of the machines is described below:
Processor: i5
Ram: 8GB
OS: Ubuntu 64bit 16.04 LTS
If anyone has any idea, please let me know.
The write doesn't complete until the block is cut by the orderer. The default batch timeout is 2s...
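That 2s refers to the orderer's BatchTimeout, set in configtx.yaml when the channel is created. The fragment below shows the shape of the sample defaults (check your own channel config; actual values and OrdererType may differ per deployment):

```yaml
Orderer: &OrdererDefaults
    OrdererType: solo
    BatchTimeout: 2s          # max time to wait before cutting a block
    BatchSize:
        MaxMessageCount: 10   # max transactions per block
```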

Hyperledger Composer BNA deployment results in 'TCP write failed'

I've followed the "Build your first network" tutorial for Hyperledger Fabric and added a CA. Now, when trying to deploy a BNA with Composer using composer network deploy -a maintenance-network.bna -p maintenance -i PeerAdmin -s randomString -A admin -S, I get an error:
~/network-setup$ composer network deploy -a ~/maintenance-network/dist/maintenance-network.bna -p maintenance -i PeerAdmin -s randomString -A admin -S
Deploying business network from archive: /home/vagrant/maintenance-network/dist/maintenance-network.bna
Business network definition:
Identifier: maintenance-network#0.1.11
Description: Maintenance-network
✖ Deploying business network definition. This may take a minute...
Error: Error trying deploy. Error: Error trying install composer runtime.
Error: TCP Write failed
Command failed
Does anyone know what the problem is?
This is the output of docker ps:
IMAGE COMMAND CREATED STATUS PORTS NAMES
2a4710a6805c hyperledger/fabric-orderer "orderer" 50 seconds ago Up 48 seconds 0.0.0.0:7050->7050/tcp orderer.example.com
81b8cab17323 hyperledger/fabric-peer "peer node start" 50 seconds ago Up 47 seconds 0.0.0.0:8051->8051/tcp, 0.0.0.0:8053->8053/tcp peer1.org1.example.com
ed8f0148a402 hyperledger/fabric-peer "peer node start" 50 seconds ago Up 48 seconds 0.0.0.0:9051->9051/tcp, 0.0.0.0:9053->9053/tcp peer0.org2.example.com
9de5f3918f1d hyperledger/fabric-ca "sh -c 'fabric-ca-..." 50 seconds ago Up 47 seconds 0.0.0.0:7054->7054/tcp ca_peerOrg1
d2d95dc6f20a hyperledger/fabric-ca "sh -c 'fabric-ca-..." 50 seconds ago Up 48 seconds 7054/tcp, 0.0.0.0:8054->8054/tcp ca_peerOrg2
8396f528dc75 hyperledger/fabric-peer "peer node start" 50 seconds ago Up 48 seconds 0.0.0.0:10051->10051/tcp, 0.0.0.0:10053->10053/tcp peer1.org2.example.com
6b1185ea529a hyperledger/fabric-peer "peer node start" 50 seconds ago Up 48 seconds 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0.org1.example.com
This is the connection.json I'm using:
{
  "type": "hlfv1",
  "orderers": [
    { "url": "grpc://localhost:7050" }
  ],
  "ca": {
    "url": "http://localhost:7054",
    "name": "ca-org1"
  },
  "peers": [
    {
      "requestURL": "grpc://localhost:7051",
      "eventURL": "grpc://localhost:7053"
    },
    {
      "requestURL": "grpc://localhost:8051",
      "eventURL": "grpc://localhost:8053"
    },
    {
      "requestURL": "grpc://localhost:9051",
      "eventURL": "grpc://localhost:9053"
    },
    {
      "requestURL": "grpc://localhost:10051",
      "eventURL": "grpc://localhost:10053"
    }
  ],
  "keyValStore": "/home/vagrant/.composer-credentials",
  "channel": "mychannel",
  "mspID": "Org1MSP",
  "timeout": "300"
}
The keyValStore contains the identity imported with:
composer identity import -p maintenance -u PeerAdmin -c crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/Admin@org1.example.com-cert.pem -k crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore/*_sk
An identity was imported with name 'PeerAdmin' successfully
The docker containers are started with this docker-compose-cli.yaml:
version: '2'
networks:
  byfn:
services:
  ca0:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/CA1_PRIVATE_KEY
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/CA1_PRIVATE_KEY -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerOrg1
    networks:
      - byfn
  ca1:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org2
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/CA2_PRIVATE_KEY
    ports:
      - "8054:8054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/CA2_PRIVATE_KEY -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerOrg2
    networks:
      - byfn
  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    networks:
      - byfn
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com
    networks:
      - byfn
  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com
    networks:
      - byfn
  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com
    networks:
      - byfn
  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com
    networks:
      - byfn
  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME} ${DELAY}; sleep $TIMEOUT'
    volumes:
      - /var/run/:/host/var/run/
      - ./../chaincode/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - ca0
      - ca1
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      - peer0.org2.example.com
      - peer1.org2.example.com
    networks:
      - byfn
And this is the output when running CHANNEL_NAME=mychannel docker-compose -f docker-compose-cli.yaml up -d:
| ____ _____ _ ____ _____
| / ___| |_ _| / \ | _ \ |_ _|
| \___ \ | | / _ \ | |_) | | |
| ___) | | | / ___ \ | _ < | |
| |____/ |_| /_/ \_\ |_| \_\ |_|
|
| Starting the network
|
| Channel name : mychannel
| Creating channel...
| CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
| CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
| CORE_PEER_LOCALMSPID=Org1MSP
| CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
| CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
| CORE_PEER_TLS_ENABLED=false
| CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
| CORE_PEER_ID=cli
| CORE_LOGGING_LEVEL=DEBUG
| CORE_PEER_ADDRESS=peer0.org1.example.com:7051
| 2017-11-15 09:31:36.011 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
| 2017-11-15 09:31:36.012 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
| 2017-11-15 09:31:36.017 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
| 2017-11-15 09:31:36.018 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
| 2017-11-15 09:31:36.019 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
| 2017-11-15 09:31:36.019 UTC [msp] GetLocalMSP -> DEBU 006 Returning existing local MSP
| 2017-11-15 09:31:36.019 UTC [msp] GetDefaultSigningIdentity -> DEBU 007 Obtaining default signing identity
| 2017-11-15 09:31:36.019 UTC [msp/identity] Sign -> DEBU 008 Sign: plaintext: 0AC6060A074F7267314D535012BA062D...53616D706C65436F6E736F727469756D
| 2017-11-15 09:31:36.019 UTC [msp/identity] Sign -> DEBU 009 Sign: digest: D6E8392380793B24537309F14EA1C0D9CF3F18FF8292A65D09CF3AA92EA2094D
| 2017-11-15 09:31:36.019 UTC [msp] GetLocalMSP -> DEBU 00a Returning existing local MSP
| 2017-11-15 09:31:36.019 UTC [msp] GetDefaultSigningIdentity -> DEBU 00b Obtaining default signing identity
| 2017-11-15 09:31:36.019 UTC [msp] GetLocalMSP -> DEBU 00c Returning existing local MSP
| 2017-11-15 09:31:36.019 UTC [msp] GetDefaultSigningIdentity -> DEBU 00d Obtaining default signing identity
| 2017-11-15 09:31:36.019 UTC [msp/identity] Sign -> DEBU 00e Sign: plaintext: 0AFD060A1508021A0608F892B0D00522...628AB20AD0563C9EA2A482A301EA32D8
| 2017-11-15 09:31:36.019 UTC [msp/identity] Sign -> DEBU 00f Sign: digest: 5A58E17C75478098A108CFCFA1E909639C7830022602D225A61C4D0BE9E8C5AD
| 2017-11-15 09:31:36.060 UTC [msp] GetLocalMSP -> DEBU 010 Returning existing local MSP
| 2017-11-15 09:31:36.060 UTC [msp] GetDefaultSigningIdentity -> DEBU 011 Obtaining default signing identity
| 2017-11-15 09:31:36.060 UTC [msp] GetLocalMSP -> DEBU 012 Returning existing local MSP
| 2017-11-15 09:31:36.060 UTC [msp] GetDefaultSigningIdentity -> DEBU 013 Obtaining default signing identity
| 2017-11-15 09:31:36.060 UTC [msp/identity] Sign -> DEBU 014 Sign: plaintext: 0AFD060A1508021A0608F892B0D00522...5B18D1838ED112080A021A0012021A00
| 2017-11-15 09:31:36.060 UTC [msp/identity] Sign -> DEBU 015 Sign: digest: 4309C46AA7BBA47AD146AA77CB1ABAC79114C9C3D66D41B833F82ED7F882E326
| 2017-11-15 09:31:36.082 UTC [channelCmd] readBlock -> DEBU 016 Received block: 0
| 2017-11-15 09:31:36.083 UTC [main] main -> INFO 017 Exiting.....
| ===================== Channel "mychannel" is created successfully =====================
|
| Having all peers join the channel...
| CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
| CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
| CORE_PEER_LOCALMSPID=Org1MSP
| CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
| CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
| CORE_PEER_TLS_ENABLED=false
| CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
| CORE_PEER_ID=cli
| CORE_LOGGING_LEVEL=DEBUG
| CORE_PEER_ADDRESS=peer0.org1.example.com:7051
| 2017-11-15 09:31:36.121 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
| 2017-11-15 09:31:36.121 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
| 2017-11-15 09:31:36.123 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
| 2017-11-15 09:31:36.123 UTC [msp/identity] Sign -> DEBU 004 Sign: plaintext: 0AC3070A5B08011A0B08F892B0D00510...A1A603DD33A31A080A000A000A000A00
| 2017-11-15 09:31:36.123 UTC [msp/identity] Sign -> DEBU 005 Sign: digest: 5D870839DD3A368A48E2CED8314E3817CA48001BACBF74B36380408D851769AE
| 2017-11-15 09:31:36.158 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
| 2017-11-15 09:31:36.158 UTC [main] main -> INFO 007 Exiting.....
| ===================== PEER0 joined on the channel "mychannel" =====================
| sleep: missing operand
| Try 'sleep --help' for more information.
|
| CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
| CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
| CORE_PEER_LOCALMSPID=Org1MSP
| CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
| CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
| CORE_PEER_TLS_ENABLED=false
| CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
| CORE_PEER_ID=cli
| CORE_LOGGING_LEVEL=DEBUG
| CORE_PEER_ADDRESS=peer1.org1.example.com:7051
| 2017-11-15 09:31:36.194 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
| 2017-11-15 09:31:36.194 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
| 2017-11-15 09:31:36.196 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
| 2017-11-15 09:31:36.196 UTC [msp/identity] Sign -> DEBU 004 Sign: plaintext: 0AC3070A5B08011A0B08F892B0D00510...A1A603DD33A31A080A000A000A000A00
| 2017-11-15 09:31:36.196 UTC [msp/identity] Sign -> DEBU 005 Sign: digest: 0B0C3F308D78AC2FDD6FC89A71FED2DCB889038AE993980A6E4B540BB4D3C51A
| 2017-11-15 09:31:36.248 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
| 2017-11-15 09:31:36.248 UTC [main] main -> INFO 007 Exiting.....
| ===================== PEER1 joined on the channel "mychannel" =====================
| sleep: missing operand
| Try 'sleep --help' for more information.
|
| CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
| CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
| CORE_PEER_LOCALMSPID=Org2MSP
| CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
| CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
| CORE_PEER_TLS_ENABLED=false
| CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
| CORE_PEER_ID=cli
| CORE_LOGGING_LEVEL=DEBUG
| CORE_PEER_ADDRESS=peer0.org2.example.com:7051
| 2017-11-15 09:31:36.288 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
| 2017-11-15 09:31:36.288 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
| 2017-11-15 09:31:36.289 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
| 2017-11-15 09:31:36.290 UTC [msp/identity] Sign -> DEBU 004 Sign: plaintext: 0AC4070A5C08011A0C08F892B0D00510...A1A603DD33A31A080A000A000A000A00
| 2017-11-15 09:31:36.290 UTC [msp/identity] Sign -> DEBU 005 Sign: digest: 992F4939F777DD575F2753ADF6936A7E6FB9CC8548C188B31E25B06F9ECEA7E7
| 2017-11-15 09:31:36.335 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
| 2017-11-15 09:31:36.335 UTC [main] main -> INFO 007 Exiting.....
| ===================== PEER2 joined on the channel "mychannel" =====================
| sleep: missing operand
| Try 'sleep --help' for more information.
|
| CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
| CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
| CORE_PEER_LOCALMSPID=Org2MSP
| CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
| CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
| CORE_PEER_TLS_ENABLED=false
| CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
| CORE_PEER_ID=cli
| CORE_LOGGING_LEVEL=DEBUG
| CORE_PEER_ADDRESS=peer1.org2.example.com:7051
| 2017-11-15 09:31:36.372 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
| 2017-11-15 09:31:36.372 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
| 2017-11-15 09:31:36.373 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
| 2017-11-15 09:31:36.374 UTC [msp/identity] Sign -> DEBU 004 Sign: plaintext: 0AC4070A5C08011A0C08F892B0D00510...A1A603DD33A31A080A000A000A000A00
| 2017-11-15 09:31:36.374 UTC [msp/identity] Sign -> DEBU 005 Sign: digest: 6776D313E8DD88880918868C6BA2C93ECEB05425FE93A7E385762918FA2AF556
| 2017-11-15 09:31:36.419 UTC [channelCmd] executeJoin -> INFO 006 Peer joined the channel!
| 2017-11-15 09:31:36.419 UTC [main] main -> INFO 007 Exiting.....
| ===================== PEER3 joined on the channel "mychannel" =====================
| sleep: missing operand
| Try 'sleep --help' for more information.
|
|
| ========= All GOOD, BYFN execution completed ===========
I'm using Ubuntu 16.04 and Composer v0.14.2.
So, if anyone has the same issue: the errors were in the Docker port mappings. The container ports of the peers should stay 7051 and 7053, but each peer container has to be mapped to a different port on the host machine (e.g. 8051, 9051, etc.).
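For illustration, the per-peer overrides in the compose file could look like this (a sketch only; the service names follow the BYFN sample, and the host ports 8051/8053 are arbitrary examples):

```yaml
peer0.org1.example.com:
  ports:
    - "7051:7051"
    - "7053:7053"

peer1.org1.example.com:
  ports:
    - "8051:7051"   # different host port, same container port
    - "8053:7053"
```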
Another error I found: Error: Failed to deserialize creator identity, err expected MSP ID Org2MSP, received Org1MSP. My peers at 7051 and 8051 use Org1MSP (since they both belong to Org1); all the others use Org2MSP. So you can't list all peers at once in connection.json: you have to create one connection.json per MSP, containing only the peers that belong to that MSP.
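To sketch the idea (hypothetical, heavily abbreviated profiles; the exact field set is the Composer hlfv1 connection-profile format, so compare against your generated connection.json): a connection-org1.json would list only the Org1 peers, e.g.:

```json
{
  "type": "hlfv1",
  "name": "org1-only",
  "mspID": "Org1MSP",
  "peers": [
    { "requestURL": "grpc://localhost:7051", "eventURL": "grpc://localhost:7053" },
    { "requestURL": "grpc://localhost:8051", "eventURL": "grpc://localhost:8053" }
  ]
}
```

with a matching connection-org2.json carrying "mspID": "Org2MSP" and the Org2 peers.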

Login to docker registry located in Gitlab

I created a Docker registry and want to connect it to GitLab. I followed this documentation: https://docs.gitlab.com/ce/user/project/container_registry.html. After that I tried to log in to Docker, but I received a 401 or Access denied error. Do you know how to fix this?
docker login <url>
Username: gitlab-ci-token
Password:
https://<url>/v2/: unauthorized: HTTP Basic: Access denied
docker login <url>
Username: knikolov
Password:
https://<url>/v2/: unauthorized: HTTP Basic: Access denied
docker login <url>
Username: knikolov
Password:
Error response from daemon: login attempt to https://<url>/v2/ failed with status: 401 Unauthorized
production.log
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:42:51 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:42:54 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:42:57 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:00 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:03 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:06 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:09 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:12 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:15 +0000
Started POST "/api/v4/jobs/request" for 172.17.0.1 at 2017-06-22 14:43:18 +0000
Started GET "/jwt/auth?account=knikolov&client_id=docker&offline_token=true&service=container_registry" for 172.17.0.1 at 2017-06-22 14:43:19 +0000
Processing by JwtController#auth as HTML
Parameters: {"account"=>"knikolov", "client_id"=>"docker", "offline_token"=>"true", "service"=>"container_registry"}
Completed 200 OK in 191ms (Views: 0.5ms | ActiveRecord: 5.7ms)
Started GET "/admin/logs" for 172.17.0.1 at 2017-06-22 14:43:21 +0000
Processing by Admin::LogsController#show as HTML
From the registry log I received:
registry_1 | time="2017-06-25T17:34:31Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.7.3 http.request.host=<url> http.request.id=e088c13e-aa4c-4701-af26-29e12874519b http.request.method=GET http.request.remoteaddr=37.59.24.105 http.request.uri="/v2/" http.request.useragent="docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))" instance.id=c8d463e0-cf04-48f5-8daa-d096b4e75494 version=v2.6.1
registry_1 | 172.17.0.1 - - [25/Jun/2017:17:34:31 +0000] "GET /v2/ HTTP/1.0" 401 87 "" "docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))"
registry_1 | time="2017-06-25T17:34:32Z" level=info msg="token from untrusted issuer: \"omnibus-gitlab-issuer\""
registry_1 | time="2017-06-25T17:34:32Z" level=warning msg="error authorizing context: invalid token" go.version=go1.7.3 http.request.host=<url> http.request.id=ff0d15e4-3198-4d69-910b-50bc27dd02f2 http.request.method=GET http.request.remoteaddr=37.59.24.105 http.request.uri="/v2/" http.request.useragent="docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))" instance.id=c8d463e0-cf04-48f5-8daa-d096b4e75494 version=v2.6.1
registry_1 | 172.17.0.1 - - [25/Jun/2017:17:34:32 +0000] "GET /v2/ HTTP/1.0" 401 87 "" "docker/17.03.1-ce go/go1.7.5 git-commit/c6d412e kernel/4.4.0-81-generic os/linux arch/amd64 UpstreamClient(Docker-Client/17.03.1-ce \\(linux\\))"
this is my config for my registry:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
auth:
  token:
    realm: https://<url>/jwt/auth
    service: container_registry
    issuer: gitlab-issuer
    rootcertbundle: /certs/registry.crt
docker-compose.yml
registry:
  restart: always
  image: registry:2
  ports:
    - 127.0.0.1:5000:5000
  environment:
    - REGISTRY_STORAGE_DELETE_ENABLED=true
  volumes:
    - ./data:/var/lib/registry
    - ./certs:/certs
    - ./config.yml:/etc/docker/registry/config.yml
Gitlab docker-compose.yml
web:
  image: 'gitlab/gitlab-ce:latest'
  restart: always
  hostname: '<gitlab_url>'
  container_name: gitlab
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url '<gitlab_url>'
      gitlab_rails['gitlab_shell_ssh_port'] = 2224
      registry_external_url '<docker-registry_url>'
      gitlab_rails['smtp_enable'] = true
      gitlab_rails['smtp_address'] = "172.17.0.1"
      gitlab_rails['smtp_domain'] = "<smtp_domain>"
      gitlab_rails['gitlab_email_from'] = '<gitlab_email_from>'
      gitlab_rails['smtp_enable_starttls_auto'] = false
      gitlab_rails['registry_enabled'] = true
      registry_nginx['ssl_certificate'] = '/etc/gitlab/ssl/docker.registry.crt'
      registry_nginx['ssl_certificate_key'] = '/etc/gitlab/ssl/docker.registry.key'
      registry_nginx['proxy_set_headers'] = {
        "Host" => "<docker-registry_url>"
      }
      nginx['listen_port'] = 80
      nginx['listen_https'] = false
      nginx['proxy_set_headers'] = {
        "X-Forwarded-Proto" => "https",
        "X-Forwarded-Ssl" => "on"
      }
  ports:
    - '127.0.0.1:5432:80'
    - '2224:22'
  volumes:
    - '/home/gitlab/gitlab-ce/config:/etc/gitlab'
    - '/home/gitlab/gitlab-ce/logs:/var/log/gitlab'
    - '/home/gitlab/gitlab-ce/data:/var/opt/gitlab'
    - '/home/docker-registry/data:/var/opt/gitlab/gitlab-rails/shared/registry'
Make sure the .crt and .key files exist at the paths specified in gitlab.rb. If not, fix the paths and restart GitLab with sudo gitlab-ctl restart:
external_url 'https://myrepo.xyz.com'
nginx['redirect_http_to_https'] = true
registry_external_url 'https://registry.xyz.com'
registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/registry.xyz.com.crt"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/registry.xyz.com.key"
More details are available at Appychip.
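As a quick check before restarting, a small helper can report any referenced file that is missing (a sketch; `check_files` is a hypothetical helper, and the paths in the usage comment are the example ones from the snippet above):

```shell
# Report any certificate/key path from gitlab.rb that does not exist.
check_files() {
  missing=0
  for f in "$@"; do
    [ -f "$f" ] || { echo "MISSING: $f"; missing=1; }
  done
  return $missing
}
# Example (adjust to your gitlab.rb values), then restart if all found:
#   check_files /etc/gitlab/ssl/registry.xyz.com.crt /etc/gitlab/ssl/registry.xyz.com.key \
#     && sudo gitlab-ctl restart
```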
It seems like you are not using the same RSA keypair for your GitLab registry backend and your Docker setup.
Check your gitlab_rails['registry_key_path'] setting in gitlab.rb and consult this very detailed guide.
https://m42.sh/gitlab-registry.html (unfortunately offline, backup copy here: https://github.com/ipernet/gitlab-docs/blob/master/gitlab-registry.md)
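One way to verify this is to compare the RSA modulus of the certificate the registry trusts (`rootcertbundle`) with the key GitLab signs tokens with (`registry_key_path`). A minimal sketch; it generates a throwaway pair only for demonstration, so point the two `openssl` calls at your real files instead:

```shell
# Demo: a cert and key from the same keypair have identical moduli.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=registry" \
  -keyout "$tmp/registry.key" -out "$tmp/registry.crt" 2>/dev/null
cert_mod=$(openssl x509 -noout -modulus -in "$tmp/registry.crt" | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in "$tmp/registry.key" | openssl md5)
if [ "$cert_mod" = "$key_mod" ]; then
  echo "cert and key belong to the same keypair"
else
  echo "MISMATCH: cert and key are from different keypairs"
fi
```

If the two hashes differ for your real files, the registry rejects every token GitLab issues, which shows up as exactly the 401s above.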
Make sure that:
- The drive used by Docker is shared (if it is not, open the Docker settings and mark it as shared).
- The username matches; remove any domain name if included.
Try this

Resources