I am running CentOS Linux release 7.2.1511 (Core).
Running the "first network" sample of the brand new Hyperledger Fabric 1.0 out yesterday I am getting the error:
Error: Error endorsing chaincode: rpc error: code = Unknown desc = Error starting container: API error (500): {"message":"oci runtime error: container_linux.go:262: starting container process caused \"process_linux.go:339: container init caused \\"read init-p: connection reset by peer\\"\"\n"}
How do I debug further?
My complete installation procedure of prereqs was as follows:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 policycoreutils-python git dos2unix unzip gcc-c++ make
sudo yum-config-manager --enable rhel-7-server-extras-rpms
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum makecache fast
wget http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.9-4.el7.noarch.rpm
yum -y install docker-ce
sudo yum -y install docker-ce
sudo systemctl start docker
sudo docker run hello-world
sudo usermod -aG docker root
sudo usermod -aG docker vagrant
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install -y ./epel-release-latest-7.noarch.rpm
sudo yum install -y python-pip
sudo pip install docker-compose
sudo yum upgrade python*
cd
mkdir docker-compose-hello-world
cd docker-compose-hello-world
echo 'my-test:' > ./docker-compose.yml
echo ' image: hello-world' >> ./docker-compose.yml
docker-compose up
sudo docker-compose up
cd
mkdir golang
cd golang
echo downloading go1.8.3.linux-amd64.tar.gz
wget https://storage.googleapis.com/golang/go1.8.3.linux-amd64.tar.gz
sudo cp ./go1.8.3.linux-amd64.tar.gz /usr/local
cd /usr/local
sudo tar -C /usr/local -xzf go1.8.3.linux-amd64.tar.gz
sudo vi /etc/profile
cd
sudo curl -sL https://rpm.nodesource.com/setup_6.x | sudo bash -
sudo yum install -y nodejs
sudo npm install npm@latest -g
cd
git clone https://github.com/hyperledger/fabric-samples.git
sudo docker run hello-world
sudo systemctl start docker
sudo docker run hello-world
cd fabric-samples
# Stackoverflow validation asked me to replace short URL
# goo.gl/iX9dek with long one below:
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap-1.0.0.sh | sudo bash
cd first-network
yes | sudo ./byfn.sh -m generate
yes | sudo ./byfn.sh -m up
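For reference, the sudo vi /etc/profile step above presumably just puts the Go toolchain on the PATH; a minimal sketch of those lines, assuming the default /usr/local/go location and a conventional GOPATH:
export PATH=$PATH:/usr/local/go/bin
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin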
My Node.js (node) version is v6.11.1
My npm version is 5.2.0
My Golang version is go1.8.3 linux/amd64
Thanks in advance for any enlightenment!
Resolved!
Inspired by reference https://github.com/moby/moby/issues/34046, I upgraded the Linux kernel using the instructions at:
https://www.tecmint.com/install-upgrade-kernel-version-in-centos-7/
Works perfectly after a reboot and selecting the new kernel in the boot menu.
Downside: the Vagrant shared folders between the Windows 7 host OS and the CentOS virtual machine no longer work, but that should be fixed by the next Oracle VirtualBox Guest Additions update.
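For anyone who needs the same fix, the kernel upgrade in that article boils down to roughly the following (ELRepo's mainline kernel-ml package; the exact elrepo-release URL changes over time, so check the article for the current one):
sudo rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
sudo rpm -Uvh https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
sudo yum --enablerepo=elrepo-kernel install -y kernel-ml
sudo reboot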
Related
wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo apt update
sudo apt install jenkins
These commands are not working; when I run them I get:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package jenkins is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'jenkins' has no installation candidate
How can I install Jenkins in WSL Ubuntu?
Follow the steps below.
First, install CA certificates for the WSL environment:
sudo apt install ca-certificates
Then install Jenkins:
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee \
/usr/share/keyrings/jenkins-keyring.asc > /dev/null
echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
/etc/apt/sources.list.d/jenkins.list > /dev/null
sudo apt-get update
sudo apt-get install jenkins
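Jenkins also needs a Java runtime if one isn't installed already (e.g. sudo apt-get install openjdk-11-jre). And since WSL often doesn't run systemd, you may need to start Jenkins via its init script rather than systemctl:
sudo service jenkins start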
Note: If you have Docker running, it's better to start Jenkins as a Docker container.
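For example, a minimal sketch using the official jenkins/jenkins LTS image, persisting the Jenkins home in a named volume:
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts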
I have created the notebook Dockerfile below to run JupyterHub and JupyterLab.
FROM ubuntu:16.04
RUN apt-get update
RUN apt-get install sudo
RUN sudo useradd -m admin
RUN sudo echo -e "admin\nadmin\n" | passwd admin
RUN sudo echo "admin ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
USER admin
RUN sudo apt-get update && sudo apt-get install -y --no-install-recommends apt-utils
RUN sudo apt-get update \
&& sudo apt-get install -y build-essential \
&& sudo apt-get install -y libffi-dev \
&& sudo apt-get install -y libmysqlclient-dev \
&& sudo apt-get install -y libsasl2-dev \
&& sudo apt-get install -y openjdk-8-jdk \
&& sudo apt-get install -y openssh-server \
&& sudo apt-get install -y python-dev \
&& sudo apt-get install -y unzip \
&& sudo apt-get install -y wget \
&& sudo apt-get install -y mysql-client \
&& sudo apt-get install -y git
RUN sudo apt-get install -y libssl-dev libxml2-dev libxslt1-dev zlib1g-dev libkrb5-dev
RUN sudo apt-get update
RUN sudo apt-get install -y python3-pip
RUN sudo apt-get install -y python-pip
RUN sudo apt-get install -y python3-venv
RUN sudo python3 -m venv /opt/jupyterhub/
RUN sudo /opt/jupyterhub/bin/python3 -m pip install --upgrade pip
RUN sudo /opt/jupyterhub/bin/python3 -m pip install wheel
RUN sudo /opt/jupyterhub/bin/python3 -m pip install jupyterhub jupyterlab
RUN sudo /opt/jupyterhub/bin/python3 -m pip install ipywidgets
RUN sudo apt-get install -y curl
RUN sudo apt-get install -y nodejs npm
RUN sudo curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
RUN sudo bash nodesource_setup.sh
RUN sudo apt-get install -y nodejs
RUN sudo npm install -g -y configurable-http-proxy
RUN sudo mkdir -p /opt/jupyterhub/etc/jupyterhub/
RUN cd /opt/jupyterhub/etc/jupyterhub/
RUN sudo /opt/jupyterhub/bin/jupyterhub --generate-config
RUN sudo mkdir -p /opt/jupyterhub/etc/systemd
RUN sudo chown -R admin:admin /opt/jupyterhub
RUN sudo echo "c.Spawner.default_url = '/lab' " >> /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py
RUN sudo echo "c.Authenticator.admin_users = {'admin'} " >> /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py
RUN sudo echo "c.LocalAuthenticator.create_system_users=True" >> /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py
RUN sudo echo -e '[Unit]\nDescription=JupyterHub\nAfter=syslog.target network.target\n\n[Service]\nUser=root\nEnvironment="PATH=/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/jupyterhub/bin"\nExecStart=/opt/jupyterhub/bin/jupyterhub -f /opt/jupyterhub/etc/jupyterhub/jupyterhub_config.py\n\n[Install]\nWantedBy=multi-user.target' >> /opt/jupyterhub/etc/systemd/jupyterhub.service
RUN sudo cp /opt/jupyterhub/etc/systemd/jupyterhub.service /etc/systemd/system/jupyterhub.service
RUN sudo systemctl enable jupyterhub.service
#RUN sudo chown -R admin:admin /home/admin/.cache/pip
RUN sudo /opt/jupyterhub/bin/python3 -m pip install --upgrade setuptools
RUN sudo /opt/jupyterhub/bin/python3 -m pip install ipython==3.2.3
RUN sudo /opt/jupyterhub/bin/python3 -m pip install zipp==1.2.0
RUN sudo /opt/jupyterhub/bin/python3 -m pip install git+https://github.com/as-sher/sparkmagic.git#subdirectory=sparkmagic
RUN sudo /opt/jupyterhub/bin/jupyter-kernelspec install /opt/jupyterhub/lib/python3.5/site-packages/sparkmagic/kernels/sparkkernel
RUN sudo /opt/jupyterhub/bin/jupyter-kernelspec install /opt/jupyterhub/lib/python3.5/site-packages/sparkmagic/kernels/pysparkkernel
RUN sudo sed -i 's|root:x:0:0:root:/root:/bin/bash|root:x:0:0:root:/root:/sbin/nologin|g' /etc/passwd
WORKDIR /home/admin
#USER root
EXPOSE 8000 2222
CMD sudo systemctl start jupyterhub.service
When I run this container as the root user it works, but when I run it as admin (a sudo user) I get the following error:
Failed to mount tmpfs at /run/lock: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
Freezing execution.
My concern is that I have to run this container on Kubernetes, and I don't want to run it as the root user or with the privileged flag.
Is there any way to run the existing image as a non-root user, or to run the Jupyter service without systemd?
There are a couple of approaches you can try in order to run Docker as a non-root user.
Manage Docker as a non-root user:
If you don’t want to preface the docker command with sudo, create a
Unix group called docker and add users to it. When the Docker daemon
starts, it creates a Unix socket accessible by members of the docker
group.
Create a docker group if there isn’t one:
$ sudo groupadd docker
Add your user to the docker group:
$ sudo usermod -aG docker [non-root user]
Log out and log back in so that your group membership is re-evaluated.
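You can then verify that docker commands work without sudo:
$ docker run hello-world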
Run the Docker daemon as a non-root user (Rootless mode):
Rootless mode allows running the Docker daemon and containers as a
non-root user to mitigate potential vulnerabilities in the daemon and
the container runtime. Rootless mode does not require root privileges
even during the installation of the Docker daemon, as long as the
prerequisites are met.
Notice however that the second option is currently available as an experimental feature. You can find all the necessary details in the linked docs.
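As a rough sketch of what the rootless-mode setup looks like (check the linked docs for the prerequisites), you run Docker's rootless install script as the target non-root user and point the client at the per-user socket:
$ curl -fsSL https://get.docker.com/rootless | sh
$ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock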
Please let me know if that helped.
I'm trying to follow this tutorial and am having issues with the Node.js installation. I'm installing on a Debian VM and have run the installation command suggested on the Node.js site:
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
When I run sudo apt-get install nodejs-legacy, it gives me this error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
nodejs-legacy : Depends: nodejs (>= 0.6.19~dfsg1-3~) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
Any ideas about what's going on?
Found this old .txt file with some instructions whilst sorting through junk. Looks like I ended up solving the problem.
sudo apt-get install python3-pip
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo apt-get install -y build-essential
sudo npm i webpack -g
sudo npm install --global yarn@1.0.2
https://cloud.google.com/community/tutorials/setting-up-postgres
sudo -u postgres psql -c 'create database saleor'
sudo -u postgres psql -c "CREATE ROLE saleor WITH SUPERUSER CREATEDB CREATEROLE LOGIN ENCRYPTED PASSWORD 'saleor';"
sudo -u postgres psql -c 'grant all privileges on database saleor to saleor;'
sudo apt-get install python3-venv
pyvenv env1
source env1/bin/activate
deactivate
sudo apt-get install git
git clone https://github.com/mirumee/saleor.git
cd saleor
pip3 install -r requirements.txt
export SECRET_KEY='yourkey'
python3 manage.py migrate
yarn
sudo apt-get install libfontconfig
yarn run build-assets
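For what it's worth, with the NodeSource setup script the nodejs package already provides the node binary, so nodejs-legacy isn't needed. A quick check that the toolchain is in place:
node -v
npm -v
yarn --version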
I am writing a script that should install MySQL on my CentOS machine when I run it.
I downloaded the following:
mysql-apt-config_0.8.5-1_all.deb
But I can't figure out how to install it from the shell script.
I am new to Linux, and any help is appreciated!
Since CentOS uses the yum package manager, just run the following commands to install MySQL.
If you are using CentOS 7, you first need to add the repository as follows:
wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
sudo rpm -ivh mysql-community-release-el7-5.noarch.rpm
yum update
To install MySQL on CentOS, run:
sudo yum install mysql-server
sudo systemctl start mysqld
Note: .deb files are only used by Debian-based Linux distros. CentOS uses .rpm files.
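Once the server is running, you would typically also enable it at boot and tighten the default installation; both commands ship with the MySQL packages:
sudo systemctl enable mysqld
sudo mysql_secure_installation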
Try saving the following commands in a shell file (with a .sh extension) and running it with sudo.
#!/bin/bash
export http_proxy=<your proxy, if required>
export https_proxy=<your proxy, if required>
sudo -E yum -y update
sudo -E yum -y install wget
wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
sudo rpm -ivh mysql-community-release-el7-5.noarch.rpm
sudo -E yum -y update
sudo -E yum -y install mysql-server
sudo systemctl start mysqld
sleep 1s
mysql -u root <<-EOF
UPDATE mysql.user SET Password=PASSWORD('123') WHERE User='root';
DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost', '127.0.0.1', '::1');
DELETE FROM mysql.user WHERE User='';
DELETE FROM mysql.db WHERE Db='test' OR Db='test_%';
FLUSH PRIVILEGES;
EOF
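Usage sketch: save it under any name, say install_mysql.sh (the filename is just an example), then:
chmod +x install_mysql.sh
sudo ./install_mysql.sh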
I'm running a virtual machine in Windows Azure with the prebuilt image for Ubuntu 14.04 LTS.
When I want to install Docker.io like described here:
http://blog.docker.io/2014/04/docker-in-ubuntu-ubuntu-in-docker/
The installation works, but when I'm running:
sudo docker.io pull ubuntu
An error will be thrown:
Cannot connect to the Docker daemon. Is docker -d running on this host?
Can anyone help, or has anyone had a similar problem?
P.S.: Can anyone with a high reputation create a tag for Ubuntu 14.04?
Evidently the Docker daemon is not running. You'll want to check /etc/default/docker for proper configuration and issue
sudo service docker.io start
or
sudo service docker start
depending on how the service was named.
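If neither starts it, the service status and the upstart job log usually show why (on Ubuntu 14.04 upstart writes job output under /var/log/upstart/, named after the service, e.g. docker.log or docker.io.log):
sudo service docker.io status
sudo tail -n 50 /var/log/upstart/docker.io.log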
Adding myself to the docker group:
sudo usermod -a -G docker myuser
and rebooting the machine worked for me. This solution is discussed in: https://github.com/docker/docker/issues/5314
On Ubuntu 14.04, the docker.io package installs Docker 0.9.1.
According to the documentation, to install the current version use these commands:
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker
There is also a simple script available to help with this process:
$ curl -s https://get.docker.io/ubuntu/ | sudo sh
Alternatively, check the azure-docker-registry project for an example of how to automate Azure provisioning and Docker container deployment. For instance, this Ansible playbook:
- name: create docker data directory
file: path=/mnt/data/docker state=directory
- name: store docker files in data disk
file: src=/mnt/data/docker dest=/var/lib/docker state=link
- name: add repository key
command: creates=/etc/apt/sources.list.d/docker.list apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
- name: copy repository source file
copy: src=docker.list dest=/etc/apt/sources.list.d/docker.list
- name: install docker package
apt: name=lxc-docker update_cache=yes state=present
Also make sure to symlink the docker.io binary to docker to use the tutorials/documentation without rewriting every command.
ln -s /usr/bin/docker.io /usr/bin/docker
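A quick sanity check that the client can actually reach the daemon after all of this:
sudo docker version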
Run docker -d to see if it shows any error messages.
If AppArmor is missing, install it with sudo apt-get install apparmor.
Then run sudo service docker start.
Hard to say, but sometimes the official Docker installation procedure fails on Ubuntu 14.04.
You can simply install Docker using the commands below [quick and dirty]:
sudo apt-get update
sudo apt-get -y install docker.io
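The service usually starts automatically on install, but just in case, a quick smoke test afterwards:
sudo service docker.io start
sudo docker run hello-world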