Jupyter kernel crashing in docker container - linux

I'm trying to set up a Docker container with Jupyter Notebook installed. Everything seems to run fine until I open an .ipynb file.
Here are the debug logs from running jupyter:
jupyter_1 | [D 16:32:58.134 NotebookApp] Native kernel (python3) available from /root/anaconda/lib/python3.6/site-packages/ipykernel/resources
jupyter_1 | [D 16:32:58.135 NotebookApp] Starting kernel: ['/root/anaconda/bin/python', '-m', 'ipykernel', '-f', '/root/.local/share/jupyter/runtime/kernel-d709c271-698e-4593-9e21-ee782bb057a1.json']
jupyter_1 | [D 16:32:58.140 NotebookApp] Connecting to: tcp://127.0.0.1:50259
jupyter_1 | [I 16:32:58.140 NotebookApp] Kernel started: d709c271-698e-4593-9e21-ee782bb057a1
jupyter_1 | [D 16:32:58.140 NotebookApp] Kernel args: {'kernel_name': 'python3', 'cwd': '/var/workspace'}
jupyter_1 | [D 16:32:58.142 NotebookApp] 201 POST /api/sessions (172.19.0.1) 12.44ms
jupyter_1 | [D 16:32:58.144 NotebookApp] 200 GET /api/contents/Untitled.ipynb/checkpoints?_=1495902777780 (172.19.0.1) 1.26ms
jupyter_1 | [D 16:32:58.214 NotebookApp] 304 GET /static/components/MathJax/extensions/Safe.js?rev=2.6.0 (172.19.0.1) 1.21ms
jupyter_1 | [D 16:32:58.297 NotebookApp] Initializing websocket connection /api/kernels/d709c271-698e-4593-9e21-ee782bb057a1/channels
jupyter_1 | [D 16:32:58.300 NotebookApp] Requesting kernel info from d709c271-698e-4593-9e21-ee782bb057a1
jupyter_1 | [D 16:32:58.300 NotebookApp] Connecting to: tcp://127.0.0.1:37738
jupyter_1 | [I 16:33:01.142 NotebookApp] KernelRestarter: restarting kernel (1/5)
jupyter_1 | [D 16:33:01.144 NotebookApp] Starting kernel: ['/root/anaconda/bin/python', '-m', 'ipykernel', '-f', '/root/.local/share/jupyter/runtime/kernel-d709c271-698e-4593-9e21-ee782bb057a1.json']
jupyter_1 | [D 16:33:01.149 NotebookApp] Connecting to: tcp://127.0.0.1:50259
jupyter_1 | [I 16:33:04.152 NotebookApp] KernelRestarter: restarting kernel (2/5)
jupyter_1 | [D 16:33:04.153 NotebookApp] Starting kernel: ['/root/anaconda/bin/python', '-m', 'ipykernel', '-f', '/root/.local/share/jupyter/runtime/kernel-d709c271-698e-4593-9e21-ee782bb057a1.json']
jupyter_1 | [D 16:33:04.159 NotebookApp] Connecting to: tcp://127.0.0.1:50259
jupyter_1 | [I 16:33:07.161 NotebookApp] KernelRestarter: restarting kernel (3/5)
jupyter_1 | [D 16:33:07.162 NotebookApp] Starting kernel: ['/root/anaconda/bin/python', '-m', 'ipykernel', '-f', '/root/.local/share/jupyter/runtime/kernel-d709c271-698e-4593-9e21-ee782bb057a1.json']
jupyter_1 | [D 16:33:07.168 NotebookApp] Connecting to: tcp://127.0.0.1:50259
jupyter_1 | [W 16:33:08.302 NotebookApp] Timeout waiting for kernel_info reply from d709c271-698e-4593-9e21-ee782bb057a1
jupyter_1 | [D 16:33:08.303 NotebookApp] Opening websocket /api/kernels/d709c271-698e-4593-9e21-ee782bb057a1/channels
jupyter_1 | [D 16:33:08.304 NotebookApp] Connecting to: tcp://127.0.0.1:37738
jupyter_1 | [D 16:33:08.305 NotebookApp] Connecting to: tcp://127.0.0.1:46487
jupyter_1 | [D 16:33:08.305 NotebookApp] Connecting to: tcp://127.0.0.1:48305
jupyter_1 | [I 16:33:10.169 NotebookApp] KernelRestarter: restarting kernel (4/5)
jupyter_1 | WARNING:root:kernel d709c271-698e-4593-9e21-ee782bb057a1 restarted
jupyter_1 | [D 16:33:10.171 NotebookApp] Starting kernel: ['/root/anaconda/bin/python', '-m', 'ipykernel', '-f', '/root/.local/share/jupyter/runtime/kernel-d709c271-698e-4593-9e21-ee782bb057a1.json']
jupyter_1 | [D 16:33:10.176 NotebookApp] Connecting to: tcp://127.0.0.1:50259
jupyter_1 | [W 16:33:13.177 NotebookApp] KernelRestarter: restart failed
jupyter_1 | [W 16:33:13.178 NotebookApp] Kernel d709c271-698e-4593-9e21-ee782bb057a1 died, removing from map.
jupyter_1 | ERROR:root:kernel d709c271-698e-4593-9e21-ee782bb057a1 restarted failed!
jupyter_1 | [D 16:33:13.185 NotebookApp] Websocket closed d709c271-698e-4593-9e21-ee782bb057a1:0580392FC0924FAA8D06C61958352847
jupyter_1 | [W 16:33:13.188 NotebookApp] Kernel deleted before session
jupyter_1 | [W 16:33:13.189 NotebookApp] 410 DELETE /api/sessions/892c369f-0dda-4d24-a084-52c7646c79e6 (172.19.0.1) 1.49ms referer=http://0.0.0.0:8888/notebooks/Untitled.ipynb
And for reference, here's my Dockerfile:
FROM ubuntu:16.04

## Install Python
RUN apt-get update \
    && apt-get install -y apt-utils python3 python3-pip wget \
    && pip3 install --upgrade pip setuptools

## Install Anaconda and clean up
RUN wget https://repo.continuum.io/archive/Anaconda3-4.3.1-Linux-x86_64.sh -O anaconda.sh \
    && chmod 700 anaconda.sh \
    && /bin/bash anaconda.sh -bp /root/anaconda \
    && rm -f anaconda.sh \
    && /root/anaconda/bin/conda install -y jupyter

WORKDIR /var/workspace

EXPOSE 8888

CMD ["/root/anaconda/bin/jupyter", "notebook", "--no-browser", "--ip=0.0.0.0", "--log-level=DEBUG"]
This is actually my second attempt; the first, which is on my GitHub account, tried to do the same thing on an Alpine Linux install and hit the same problem. I switched to Ubuntu just to make sure that wasn't the issue.
Ideally I'd like to get debug logs from the kernel itself, but I don't see any way to pass the debug log level on to the kernel. Perhaps I'll try hacking the site-packages files to see if I can add the debug flags to the kernel launch command.
In the meantime, has anyone experienced this and found a solution?
Oh, by the way: I have this installed locally and it works just fine. I could give up and just use that, but what fun would that be?
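If I do go down that road, a less invasive route than patching site-packages might be to add ipykernel's --debug flag to the kernelspec's launch command, since the kernel is a traitlets application and accepts --debug. A minimal sketch; the kernel.json path is my guess based on the resources path in the log above, so check jupyter kernelspec list first:
# Locate the python3 kernelspec (with no installed spec, Jupyter falls
# back to .../site-packages/ipykernel/resources)
/root/anaconda/bin/jupyter kernelspec list

# Assumed location, taken from the log above; adjust to match your output
KERNEL_JSON=/root/anaconda/lib/python3.6/site-packages/ipykernel/resources/kernel.json

# Back up, then splice "--debug" into the argv before "-f"
# (adjust the sed pattern if the JSON layout differs)
cp "$KERNEL_JSON" "$KERNEL_JSON.bak"
sed -i 's/"-f",/"--debug", "-f",/' "$KERNEL_JSON"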

Related

unable to run kibana with docker-compose

I am trying to run Kibana with opendistro elasticsearch using the following docker-compose:
version: '3'
services:
  odfe-node1:
    image: amazon/opendistro-for-elasticsearch:1.11.0
    container_name: odfe-node1
    environment:
      - cluster.name=odfe-cluster
      - node.name=odfe-node1
      - discovery.seed_hosts=odfe-node1
      - cluster.initial_master_nodes=odfe-node1
      - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536 # maximum number of open files for the Elasticsearch user, set to at least 65536 on modern systems
        hard: 65536
    volumes:
      - odfe-data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - odfe-net
  kibana:
    image: amazon/opendistro-for-elasticsearch-kibana:1.11.0
    container_name: odfe-kibana
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: https://odfe-node1:9200
      ELASTICSEARCH_HOSTS: https://odfe-node1:9200
    networks:
      - odfe-net
volumes:
  odfe-data1:
networks:
  odfe-net:
After running the above docker-compose file using
docker-compose up
I get the following error:
Starting odfe-kibana ... done
Starting odfe-node1 ... done
Attaching to odfe-kibana, odfe-node1
odfe-node1 | OpenDistro for Elasticsearch Security Demo Installer
odfe-node1 | ** Warning: Do not use on production or public reachable systems **
odfe-node1 | Basedir: /usr/share/elasticsearch
odfe-node1 | Elasticsearch install type: rpm/deb on CentOS Linux release 7.8.2003 (Core)
odfe-node1 | Elasticsearch config dir: /usr/share/elasticsearch/config
odfe-node1 | Elasticsearch config file: /usr/share/elasticsearch/config/elasticsearch.yml
odfe-node1 | Elasticsearch bin dir: /usr/share/elasticsearch/bin
odfe-node1 | Elasticsearch plugins dir: /usr/share/elasticsearch/plugins
odfe-node1 | Elasticsearch lib dir: /usr/share/elasticsearch/lib
odfe-node1 | Detected Elasticsearch Version: x-content-7.9.1
odfe-node1 | Detected Open Distro Security Version: 1.11.0.0
odfe-node1 | /usr/share/elasticsearch/config/elasticsearch.yml seems to be already configured for Security. Quit.
odfe-node1 | Unlinking stale socket /usr/share/supervisor/performance_analyzer/supervisord.sock
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:12Z","tags":["warning","plugins-discovery"],"pid":1,"message":"Expect plugin \"id\" in camelCase, but found: opendistro-notebooks-kibana"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:22Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"telemetryManagementSection\" has been disabled since the following direct or transitive dependencies are missing or disabled: [telemetry]"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:22Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"newsfeed\" is disabled."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:22Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"telemetry\" is disabled."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:22Z","tags":["info","plugins-service"],"pid":1,"message":"Plugin \"visTypeXy\" is disabled."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:26Z","tags":["warning","legacy-service"],"pid":1,"message":"Some installed third party plugin(s) [opendistro-alerting, opendistro-anomaly-detection-kibana, opendistro_index_management_kibana, opendistro-query-workbench] are using the legacy plugin format and will no longer work in a future Kibana release. Please refer to https://ela.st/kibana-breaking-changes-8-0 for a list of breaking changes and https://ela.st/kibana-platform-migration for documentation on how to migrate legacy plugins."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:27Z","tags":["info","plugins-system"],"pid":1,"message":"Setting up [38] plugins: [usageCollection,telemetryCollectionManager,kibanaUsageCollection,kibanaLegacy,mapsLegacy,timelion,share,legacyExport,esUiShared,bfetch,expressions,data,home,console,apmOss,management,indexPatternManagement,advancedSettings,savedObjects,opendistroSecurity,visualizations,visualize,visTypeVega,visTypeTimelion,visTypeTable,visTypeMarkdown,tileMap,inputControlVis,regionMap,dashboard,opendistro-notebooks-kibana,charts,visTypeVislib,visTypeTimeseries,visTypeTagcloud,visTypeMetric,discover,savedObjectsManagement]"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:28Z","tags":["info","savedobjects-service"],"pid":1,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:29Z","tags":["error","elasticsearch","data"],"pid":1,"message":"Request error, retrying\nGET https://odfe-node1:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => connect ECONNREFUSED 172.19.0.3:9200"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:31Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:32Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:32Z","tags":["error","savedobjects-service"],"pid":1,"message":"Unable to retrieve version information from Elasticsearch nodes."}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:33Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:33Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:35Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:35Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:37Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:37Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:40Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:41Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:42Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:42Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:45Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"Unable to revive connection: https://odfe-node1:9200/"}
odfe-kibana | {"type":"log","#timestamp":"2022-09-05T11:04:45Z","tags":["warning","elasticsearch","data"],"pid":1,"message":"No living connections"}
odfe-kibana exited with code 137
odfe-node1 exited with code 137
I don't know why I always get this exit status with only these two services running under docker-compose,
so if anyone has had the same issue or can help, please feel free to chime in.
Exit code 137 means the container was killed with SIGKILL (128 + 9), which in Docker usually indicates it was terminated for running out of memory.
Try adding
ulimits:
  memlock:
    soft: -1
    hard: -1
to Kibana service as well.
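You can also confirm the OOM kill directly rather than guessing; Docker records it in the container state. A quick check, using the container names from your compose file:
# "true" here means the kernel's OOM killer ended the container
docker inspect odfe-kibana --format '{{.State.OOMKilled}} (exit {{.State.ExitCode}})'
docker inspect odfe-node1 --format '{{.State.OOMKilled}} (exit {{.State.ExitCode}})'

# Watch memory usage live while the stack comes up
docker stats odfe-node1 odfe-kibana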

Can't run jupyter lab kernel

I am trying to run jupyter lab but the kernel is not connecting, and I am not sure why. Any help would be appreciated, thank you so much! This is the error I am getting:
RuntimeError: This event loop is already running
[W 2022-02-11 09:54:20.931 ServerApp] AsyncIOLoopKernelRestarter: restart failed
[W 2022-02-11 09:54:20.931 ServerApp] Kernel c9bb27e5-56fc-4daf-9631-306d5d277217 died, removing from map.
[W 2022-02-11 09:54:20.949 ServerApp] AsyncIOLoopKernelRestarter: restart failed
[W 2022-02-11 09:54:20.949 ServerApp] Kernel d2ab9d87-1ddd-4cdb-a7a7-3bfaf3e91fc0 died, removing from map.
[W 2022-02-11 09:54:20.960 ServerApp] AsyncIOLoopKernelRestarter: restart failed
[W 2022-02-11 09:54:20.961 ServerApp] Kernel b17914dd-d148-4d22-af26-f1a0a9f6149e died, removing from map.
[W 2022-02-11 09:54:20.972 ServerApp] AsyncIOLoopKernelRestarter: restart failed
[W 2022-02-11 09:54:20.972 ServerApp] Kernel b37ce6bf-f2cb-428d-8188-f6b41ab3309e died, removing from map.
[W 2022-02-11 09:55:05.901 ServerApp] Timeout waiting for kernel_info reply from b37ce6bf-f2cb-428d-8188-f6b41ab3309e
[E 2022-02-11 09:55:05.904 ServerApp] Error opening stream: HTTP 404: Not Found (Kernel does not exist: b37ce6bf-f2cb-428d-8188-f6b41ab3309e)
[W 2022-02-11 09:55:06.196 ServerApp] Timeout waiting for kernel_info reply from c9bb27e5-56fc-4daf-9631-306d5d277217
[E 2022-02-11 09:55:06.197 ServerApp] Error opening stream: HTTP 404: Not Found (Kernel does not exist: c9bb27e5-56fc-4daf-9631-306d5d277217)
If you are using Anaconda to run Python, then this post might be useful:
http://www.techanswersweb.com/kernel-error-jupyter-notebook
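Independent of that link, this particular RuntimeError: This event loop is already running during kernel start is often reported alongside mismatched tornado/ipykernel versions, so checking and upgrading the kernel-side packages is a cheap first step. A generic sketch, not something from the linked post:
# See what's installed in the environment jupyter lab runs from
pip show tornado ipykernel jupyter_client

# Upgrading the kernel-side packages is a common first step
pip install --upgrade ipykernel jupyter_client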

Failed to connect to socket /opt/local/var/run/dbus/system_bus_socket: No such file or directory

I was trying to send a message to the micro:bit using Bluezero. I am using macOS, but got an error.
Sample code:
from bluezero import microbit

ubit = microbit.Microbit(adapter_addr='x',
                         device_addr='x',
                         accelerometer_service=True,
                         button_service=True,
                         magnetometer_service=False,
                         pin_service=False,
                         temperature_service=True)

my_text = 'Hello, world'
ubit.connect()
while my_text != '':  # `is not ''` compares identity, not equality
    ubit.text = my_text
    my_text = input('Enter message: ')
ubit.disconnect()
Error
dbus.exceptions.DBusException:
org.freedesktop.DBus.Error.FileNotFound: Failed to connect to socket
/opt/local/var/run/dbus/system_bus_socket: No such file or directory
I got this error on Ubuntu 20:
Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
Normally this socket is created and listened on by the dbus daemon:
# netstat --all --program | grep system_bus_socket
unix 2 [ ACC ] STREAM LISTENING 19161 1/init /run/dbus/system_bus_socket
but on this server dbus.service is not running
# systemctl status dbus.service
● dbus.service - D-Bus System Message Bus
Loaded: loaded (/lib/systemd/system/dbus.service; static; vendor preset: enabled)
Active: inactive (dead)
TriggeredBy: ● dbus.socket
Docs: man:dbus-daemon(1)
An attempt to start dbus.service failed:
# systemctl start dbus.service
Failed to start dbus.service: Operation refused, unit dbus.service may be requested by dependency only (it is configured to refuse manual start/stop).
Maybe it can be started with systemctl start dbus.socket, but I solved this by finding a service that has a dependency on dbus.service; in my case it was firewalld:
# grep -r dbus /etc/systemd/system/*
/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service:After=dbus.service
and start it
# systemctl start firewalld
and that's it
# ls -la /var/run/dbus/system_bus_socket
srw-rw-rw- 1 root root 0 Jul 28 13:45 /var/run/dbus/system_bus_socket
The Bluezero library talks via D-Bus to the BlueZ bluetooth daemon (bluetoothd). As BlueZ is not available for macOS, this is not going to work.
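If you do move to a Linux box, a quick sanity check that the pieces Bluezero needs are present might look like this (generic commands, nothing micro:bit specific):
# The system D-Bus socket must exist...
ls -la /var/run/dbus/system_bus_socket

# ...and the BlueZ daemon must be running to answer on it
systemctl status bluetooth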

Postgresql server doesn't start

[Ubuntu 16.04]
I installed postgresql 9.5 along with dependencies:
sudo sh -c "echo 'deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main' > /etc/apt/sources.list.d/pgdg.list"
wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-common
sudo apt-get install postgresql-9.5 libpq-dev
When I try to run psql, I get:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
But /var/run/postgresql/ is empty. When I restart postgresql everything appears to be fine:
$ /etc/init.d/postgresql restart
[ ok ] Restarting postgresql (via systemctl): postgresql.service.
$ /etc/init.d/postgresql status
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since wto 2016-09-27 16:18:26 CEST; 1min 15s ago
Process: 3076 ExecReload=/bin/true (code=exited, status=0/SUCCESS)
Process: 3523 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 3523 (code=exited, status=0/SUCCESS)
but if I check ps aux there is no such PID (why?).
A complete reinstallation doesn't help at all. How can I fix it?
Edit:
I've just noticed that when I run /etc/init.d/postgresql force-reload, the postgresql systemd units stay alive for a while and are then killed automatically.
Besides, systemctl returns
postgresql.service loaded active exited PostgreSQL RDBMS
● postgresql@9.5-main.service loaded failed failed PostgreSQL Cluster 9.5-main
Why did it fail?
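Worth noting from the status output above: postgresql.service is just an umbrella unit whose ExecStart is /bin/true, which is why its "main PID" exits immediately and never shows up in ps aux. The real work happens in the per-cluster unit, so the failed postgresql@9.5-main unit is the one to interrogate; a sketch of where to look (the log path is the usual Debian/Ubuntu default):
# Status and journal for the actual cluster unit
systemctl status postgresql@9.5-main
journalctl -u postgresql@9.5-main --no-pager | tail -50

# Cluster overview from postgresql-common
pg_lsclusters

# The cluster's own log usually has the real startup error
tail -50 /var/log/postgresql/postgresql-9.5-main.log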

Jupyterhub not running on Ubuntu 14.04

I've been stuck on this issue for the past two days.
I followed the GitHub thread about it, but it didn't work:
https://github.com/jupyter/jupyterhub/issues/237
saimmehmood@saimmehmood-VirtualBox:~$ sudo jupyterhub
[sudo] password for saimmehmood:
[I 2016-03-22 02:18:54.577 JupyterHub app:558] Loading cookie_secret from /home/saimmehmood/jupyterhub_cookie_secret
[W 2016-03-22 02:18:54.865 JupyterHub app:292]
Generating CONFIGPROXY_AUTH_TOKEN. Restarting the Hub will require restarting the proxy.
Set CONFIGPROXY_AUTH_TOKEN env or JupyterHub.proxy_auth_token config to avoid this message.
[W 2016-03-22 02:18:54.893 JupyterHub app:685] No admin users, admin interface will be unavailable.
[W 2016-03-22 02:18:54.900 JupyterHub app:686] Add any administrative users to `c.Authenticator.admin_users` in config.
[I 2016-03-22 02:18:54.906 JupyterHub app:712] Not using whitelist. Any authenticated user will be allowed.
[I 2016-03-22 02:18:55.016 JupyterHub app:1113] Hub API listening on http://127.0.0.1:8081/hub/
[E 2016-03-22 02:18:55.055 JupyterHub app:855] Refusing to run JuptyterHub without SSL. If you are terminating SSL in another layer, pass --no-ssl to tell JupyterHub to allow the proxy to listen on HTTP.
saimmehmood@saimmehmood-VirtualBox:~$ sudo jupyterhub --no-ssl
[I 2016-03-22 02:19:12.896 JupyterHub app:558] Loading cookie_secret from /home/saimmehmood/jupyterhub_cookie_secret
[W 2016-03-22 02:19:13.046 JupyterHub app:292]
Generating CONFIGPROXY_AUTH_TOKEN. Restarting the Hub will require restarting the proxy.
Set CONFIGPROXY_AUTH_TOKEN env or JupyterHub.proxy_auth_token config to avoid this message.
[W 2016-03-22 02:19:13.079 JupyterHub app:685] No admin users, admin interface will be unavailable.
[W 2016-03-22 02:19:13.080 JupyterHub app:686] Add any administrative users to `c.Authenticator.admin_users` in config.
[I 2016-03-22 02:19:13.080 JupyterHub app:712] Not using whitelist. Any authenticated user will be allowed.
[I 2016-03-22 02:19:13.149 JupyterHub app:1113] Hub API listening on http://127.0.0.1:8081/hub/
[W 2016-03-22 02:19:13.174 JupyterHub app:851] Running JupyterHub without SSL. There better be SSL termination happening somewhere else...
[I 2016-03-22 02:19:13.174 JupyterHub app:860] Starting proxy @ http://*:8000/
/usr/bin/env: node: No such file or directory
[C 2016-03-22 02:19:14.297 JupyterHub app:1119] Failed to start proxy
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/jupyterhub/app.py", line 1117, in start
    yield self.start_proxy()
  File "/usr/local/lib/python3.4/dist-packages/jupyterhub/app.py", line 881, in start_proxy
    _check()
  File "/usr/local/lib/python3.4/dist-packages/jupyterhub/app.py", line 877, in _check
    raise e
RuntimeError: Proxy failed to start with exit code 127
Kindly let me know any solution.
Thank you!
It looks like the problem is this line:
/usr/bin/env: node: No such file or directory
Either you don't have nodejs installed or it's not in $PATH. Note that nodejs/npm are required to run JupyterHub. It looks like you're running a Linux distribution, so you should just be able to run:
sudo apt-get install npm nodejs-legacy
See the JupyterHub GitHub page and the docs for more info.
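To verify the fix took, both node and JupyterHub's proxy need to resolve; exit code 127 above is the shell saying node wasn't found. Something like:
# node must be on PATH (this was the missing piece)
which node && node --version

# JupyterHub's default proxy is the configurable-http-proxy node package
sudo npm install -g configurable-http-proxy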
