Compiling OpenCV 3.2.0 on Ubuntu 16.04 fails - linux

I failed to compile OpenCV 3.2.0 on Ubuntu 16.04; this is the error:
Build output check failed:Regex: 'command line option .* is valid for .* but not for C++'
I tried adding -D CMAKE_C_COMPILER=/usr/bin/gcc-5 and -D ENABLE_CXX11=ON, but it didn't work. How can I solve this problem?
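For what it's worth, my reading of the message is that a C-only option is reaching the C++ compiler, so presumably the matching C++ compiler needs to be pinned as well, something like:
cmake -D CMAKE_C_COMPILER=/usr/bin/gcc-5 -D CMAKE_CXX_COMPILER=/usr/bin/g++-5 ..
Is that the right direction?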

Ubuntu 16.04.6 - amd64 : Build opencv-3.2.0
sudo apt build-dep opencv
sudo apt install cmake python3-dev python3-numpy libgl1-mesa-dev libgoogle-glog-dev \
libgphoto2-dev liblapack-dev libleptonica-dev libprotobuf-dev libraw1394-dev libtbb-dev \
libtesseract-dev libv4l-dev maven-repo-helper ocl-icd-opencl-dev protobuf-compiler \
python-dev libtiff-dev libswscale-dev python-vtk *gstreamer*-dev linux-libc-dev \
libavresample-dev libatlas-cpp-0.6-dev libopenblas-dev doxygen \
libinsighttoolkit4-dev liblapacke-dev
tar xvf opencv_3.2.0+dfsg.orig.tar.gz
cd opencv-3.2.0+dfsg/
tar xvf ../opencv_3.2.0+dfsg-6.debian.tar.xz
mkdir build && cd build/
cmake -DCMAKE_INSTALL_PREFIX=/usr \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DOPENCV_MATHJAX_RELPATH=/usr/share/javascript/mathjax/ \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_EXAMPLES=ON \
-DINSTALL_C_EXAMPLES=ON \
-DINSTALL_PYTHON_EXAMPLES=ON \
-DWITH_FFMPEG=ON \
-DWITH_GSTREAMER=OFF \
-DWITH_GTK=ON \
-DWITH_JASPER=OFF \
-DWITH_JPEG=ON \
-DWITH_PNG=ON \
-DWITH_TIFF=ON \
-DWITH_OPENEXR=ON \
-DWITH_PVAPI=ON \
-DWITH_UNICAP=OFF \
-DWITH_EIGEN=ON \
-DWITH_VTK=ON \
-DWITH_GDAL=ON \
-DWITH_GDCM=ON \
-DWITH_XINE=OFF \
-DWITH_IPP=OFF \
-DBUILD_TESTS=OFF \
-DCMAKE_SKIP_RPATH=ON \
-DWITH_CUDA=OFF \
-DENABLE_PRECOMPILED_HEADERS=OFF \
-DWITH_CAROTENE=OFF \
-DOPENCL_INCLUDE_DIR:PATH="/usr/include/CL/" ../
make ## no errors
.
[100%] Linking CXX executable ../../bin/opencv_version
.
[100%] Built target opencv_version
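If the build ends like this, a quick sanity check plus the install step (run from the same build directory; opencv_version is the binary linked in the log above) looks like:
./bin/opencv_version   # should print 3.2.0
sudo make install
sudo ldconfig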

Related

Docker image generating error for /usr/bin/java

I am trying to run this Docker image, but I am not sure why I am getting this error:
/usr/bin/time: cannot run /usr/bin/java: No such file or directory
Command exited with non-zero status 127
Can someone please help me debug this error?
My Dockerfile:
FROM openjdk:8-jre
LABEL maintainer="APN <xxx#xxx.edu>"
LABEL org.label-schema.schema-version="1.0"
# LABEL org.label-schema.build-date=$BUILD_DATE
LABEL org.label-schema.name="apn/addreadgroups"
LABEL org.label-schema.description="Image for adding read groups in .bam"
ENV PICARD_VERSION 2.20.8
WORKDIR /tmp
RUN apt-get update -y \
&& apt-get install --no-install-recommends -y \
make \
gcc \
g++ \
libz-dev \
libbz2-dev \
liblzma-dev \
ncurses-dev \
bc \
libnss-sss \
time \
&& cd /tmp \
&& wget -q -O /usr/bin/picard.jar https://github.com/broadinstitute/picard/releases/download/${PICARD_VERSION}/picard.jar \
&& ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime \
&& echo "America/Chicago" > /etc/timezone \
&& dpkg-reconfigure --frontend noninteractive tzdata \
&& apt-get clean all \
&& rm -rfv /var/lib/apt/lists/* /tmp/* /var/tmp/*
# This makes the image crazy large -- will find a workaround
# COPY human_g1k_v37_decoy* /usr/local/
COPY ./entrypoint.sh /usr/local/bin/
ENV PICARD /usr/bin/picard.jar
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
# CMD ["/bin/bash"]
and the entrypoint.sh:
JAVAOPTS="-Xms2g -Xmx${MEM}g -XX:+UseSerialGC -Dpicard.useLegacyParser=false"
CUR_STEP="AddOrReplaceReadGroups"
/usr/bin/java ${JAVAOPTS} -jar "${PICARD}" \
"${CUR_STEP}" \
I="${INBAM}" \
O=${BAMFILE} \
RGID=${FLOWCELL} \
RGLB=${LIBRARY} \
RGPL=${PLATFORM} \
RGPU=${FLOWCELL} \
RGSM=${SM}
Exit status 127 means the command was not found.
This is because in the openjdk:8-jre image, java is not located at /usr/bin/java; see:
$ docker run -it openjdk:8-jre which java
/usr/local/openjdk-8/bin/java
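Since which finds it, java is already on the PATH inside the image, so one minimal fix (a sketch of the call in entrypoint.sh, keeping the same arguments) is to drop the hard-coded path:
java ${JAVAOPTS} -jar "${PICARD}" \
"${CUR_STEP}" \
I="${INBAM}" \
O=${BAMFILE} \
RGID=${FLOWCELL} \
RGLB=${LIBRARY} \
RGPL=${PLATFORM} \
RGPU=${FLOWCELL} \
RGSM=${SM}
Alternatively, keep an absolute path but point it at /usr/local/openjdk-8/bin/java.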

Docker not exposing ports as expected

I have the Dockerfile below that I am trying to build. At the end of the file I am attempting to expose ports 8888 and 6006.
FROM nvidia/cuda:9.2-devel-ubuntu16.04
LABEL maintainer="nweir <nweir#iqt.org>"
ARG solaris_branch='master'
# prep apt-get and cudnn
RUN apt-get update && apt-get install -y --no-install-recommends \
apt-utils && \
rm -rf /var/lib/apt/lists/*
# install requirements
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
bc \
bzip2 \
ca-certificates \
curl \
git \
libgdal-dev \
libssl-dev \
libffi-dev \
libncurses-dev \
libgl1 \
jq \
nfs-common \
parallel \
python-dev \
python-pip \
python-wheel \
python-setuptools \
unzip \
vim \
wget \
build-essential \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
SHELL ["/bin/bash", "-c"]
ENV PATH /opt/conda/bin:$PATH
# install anaconda
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh -O ~/miniconda.sh && \
/bin/bash ~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda clean -tipsy && \
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc
# prepend pytorch and conda-forge before default channel
RUN conda update conda && \
conda config --prepend channels conda-forge && \
conda config --prepend channels pytorch
# get dev version of solaris and create conda environment based on its env file
WORKDIR /tmp/
RUN git clone https://github.com/cosmiq/solaris.git && \
cd solaris && \
git checkout ${solaris_branch} && \
conda env create -f environment.yml
ENV PATH /opt/conda/envs/solaris/bin:$PATH
RUN cd solaris && pip install .
# install various conda dependencies into the space_base environment
RUN conda install -n solaris \
jupyter \
jupyterlab \
ipykernel
# add a jupyter kernel for the conda environment in case it's wanted
RUN source activate solaris && python -m ipykernel.kernelspec \
--name solaris --display-name solaris
# open ports for jupyterlab and tensorboard
EXPOSE 8888
EXPOSE 6006
RUN ["/bin/bash"]
After building the Dockerfile into an image, I attempt to expose the ports by running the following command:
docker run -p localhost:8888:8888 -p localhost:6006:6006 1ff
When I run docker ps -a, the output (screenshot not included) shows that the ports are not exposed.
I am currently using Ubuntu 20.04.
I can't for the life of me figure out what is wrong; your help would be greatly appreciated!
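For reference, the documented publish syntax takes an IP address (or no address at all) rather than a host name, e.g.:
docker run -p 127.0.0.1:8888:8888 -p 127.0.0.1:6006:6006 1ff
docker run -p 8888:8888 -p 6006:6006 1ff
EXPOSE in the Dockerfile only documents the ports; publishing them is done with -p at run time.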

SonarQube already-scanned projects disappear after an Azure Container Instance or Azure Docker-based App Service restart

I have created a Dockerfile for SonarQube Community Edition based on the one provided on Docker Hub. I have also added volumes accordingly in the Dockerfile and set the ACI/App Service restart policy to "Never", but whenever the ACI/App Service restarts there is no history for the already-scanned projects (and signing in to the SonarQube Azure container-based instance always asks me to create a new project and a new token all over again).
Could anyone help me troubleshoot this issue? Below are the Dockerfile and a SonarQube screenshot for reference.
FROM alpine:3.11
ENV JAVA_VERSION="jdk-11.0.6+10" \
LANG='en_US.UTF-8' \
LANGUAGE='en_US:en' \
LC_ALL='en_US.UTF-8'
#
# glibc setup
#
RUN set -eux; \
apk add --no-cache --virtual .build-deps curl binutils; \
GLIBC_VER="2.31-r0"; \
ALPINE_GLIBC_REPO="https://github.com/sgerrand/alpine-pkg-glibc/releases/download"; \
GCC_LIBS_URL="https://archive.archlinux.org/packages/g/gcc-libs/gcc-libs-9.1.0-2-x86_64.pkg.tar.xz"; \
GCC_LIBS_SHA256="91dba90f3c20d32fcf7f1dbe91523653018aa0b8d2230b00f822f6722804cf08"; \
ZLIB_URL="https://archive.archlinux.org/packages/z/zlib/zlib-1%3A1.2.11-3-x86_64.pkg.tar.xz"; \
ZLIB_SHA256=17aede0b9f8baa789c5aa3f358fbf8c68a5f1228c5e6cba1a5dd34102ef4d4e5; \
curl -LfsS https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub -o /etc/apk/keys/sgerrand.rsa.pub; \
SGERRAND_RSA_SHA256="823b54589c93b02497f1ba4dc622eaef9c813e6b0f0ebbb2f771e32adf9f4ef2"; \
echo "${SGERRAND_RSA_SHA256} */etc/apk/keys/sgerrand.rsa.pub" | sha256sum -c -; \
curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-${GLIBC_VER}.apk > /tmp/glibc-${GLIBC_VER}.apk; \
apk add --no-cache /tmp/glibc-${GLIBC_VER}.apk; \
curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-bin-${GLIBC_VER}.apk > /tmp/glibc-bin-${GLIBC_VER}.apk; \
apk add --no-cache /tmp/glibc-bin-${GLIBC_VER}.apk; \
curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-i18n-${GLIBC_VER}.apk > /tmp/glibc-i18n-${GLIBC_VER}.apk; \
apk add --no-cache /tmp/glibc-i18n-${GLIBC_VER}.apk; \
/usr/glibc-compat/bin/localedef --force --inputfile POSIX --charmap UTF-8 "$LANG" || true; \
echo "export LANG=$LANG" > /etc/profile.d/locale.sh; \
curl -LfsS ${GCC_LIBS_URL} -o /tmp/gcc-libs.tar.xz; \
echo "${GCC_LIBS_SHA256} */tmp/gcc-libs.tar.xz" | sha256sum -c -; \
mkdir /tmp/gcc; \
tar -xf /tmp/gcc-libs.tar.xz -C /tmp/gcc; \
mv /tmp/gcc/usr/lib/libgcc* /tmp/gcc/usr/lib/libstdc++* /usr/glibc-compat/lib; \
strip /usr/glibc-compat/lib/libgcc_s.so.* /usr/glibc-compat/lib/libstdc++.so*; \
curl -LfsS ${ZLIB_URL} -o /tmp/libz.tar.xz; \
echo "${ZLIB_SHA256} */tmp/libz.tar.xz" | sha256sum -c -; \
mkdir /tmp/libz; \
tar -xf /tmp/libz.tar.xz -C /tmp/libz; \
mv /tmp/libz/usr/lib/libz.so* /usr/glibc-compat/lib; \
apk del --purge .build-deps glibc-i18n; \
rm -rf /tmp/*.apk /tmp/gcc /tmp/gcc-libs.tar.xz /tmp/libz /tmp/libz.tar.xz /var/cache/apk/*;
#
# AdoptOpenJDK/openjdk11 setup
#
RUN set -eux; \
apk add --no-cache --virtual .fetch-deps curl; \
ARCH="$(apk --print-arch)"; \
case "${ARCH}" in \
aarch64|arm64) \
ESUM='7ed04ed9ed7271528e7f03490f1fd7dfbbc2d391414bd6fe4dd80ec3bad76d30'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_aarch64_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
ppc64el|ppc64le) \
ESUM='49231f2c36487b53141ade3f7eb291e2855138b14b1129f9acf435ea9cc0e899'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_ppc64le_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
s390x) \
ESUM='bcb3f46cbad742b08c81e922e313549c029f436ac7d91ef3c9bed8e4049d67d2'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_s390x_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
amd64|x86_64) \
ESUM='c5a4e69e2be0e3e5f5bb7c759960b20650967d0f571baad4a7f15b2c03bda352'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_x64_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
*) \
echo "Unsupported arch: ${ARCH}"; \
exit 1; \
;; \
esac; \
curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; \
echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; \
mkdir -p /opt/java/openjdk; \
cd /opt/java/openjdk; \
tar -xf /tmp/openjdk.tar.gz --strip-components=1; \
apk del --purge .fetch-deps; \
rm -rf /var/cache/apk/*; \
rm -rf /tmp/openjdk.tar.gz;
#
# SonarQube setup
#
ARG SONARQUBE_VERSION=8.4.0.35506
ARG SONARQUBE_ZIP_URL=https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-${SONARQUBE_VERSION}.zip
ENV JAVA_HOME=/opt/java/openjdk \
PATH="/opt/java/openjdk/bin:$PATH" \
SONARQUBE_HOME=/opt/sonarqube \
SONAR_VERSION="${SONARQUBE_VERSION}" \
SQ_DATA_DIR="/opt/sonarqube/data" \
SQ_EXTENSIONS_DIR="/opt/sonarqube/extensions" \
SQ_LOGS_DIR="/opt/sonarqube/logs" \
SQ_TEMP_DIR="/opt/sonarqube/temp"
RUN set -ex \
&& addgroup -S -g 1000 sonarqube \
&& adduser -S -D -u 1000 -G sonarqube sonarqube \
&& apk add --no-cache --virtual build-dependencies gnupg unzip curl \
&& apk add --no-cache bash su-exec ttf-dejavu \
# pub 2048R/D26468DE 2015-05-25
# Key fingerprint = F118 2E81 C792 9289 21DB CAB4 CFCA 4A29 D264 68DE
# uid sonarsource_deployer (Sonarsource Deployer) <infra#sonarsource.com>
# sub 2048R/06855C1D 2015-05-25
&& sed --in-place --expression="s?securerandom.source=file:/dev/random?securerandom.source=file:/dev/urandom?g" "${JAVA_HOME}/conf/security/java.security" \
&& for server in $(shuf -e ha.pool.sks-keyservers.net \
hkp://p80.pool.sks-keyservers.net:80 \
keyserver.ubuntu.com \
hkp://keyserver.ubuntu.com:80 \
pgp.mit.edu) ; do \
gpg --batch --keyserver "${server}" --recv-keys F1182E81C792928921DBCAB4CFCA4A29D26468DE && break || : ; \
done \
&& mkdir --parents /opt \
&& cd /opt \
&& curl --fail --location --output sonarqube.zip --silent --show-error "${SONARQUBE_ZIP_URL}" \
&& curl --fail --location --output sonarqube.zip.asc --silent --show-error "${SONARQUBE_ZIP_URL}.asc" \
&& gpg --batch --verify sonarqube.zip.asc sonarqube.zip \
&& unzip -q sonarqube.zip \
&& mv "sonarqube-${SONARQUBE_VERSION}" sonarqube \
&& rm sonarqube.zip* \
&& rm -rf ${SONARQUBE_HOME}/bin/* \
&& chown -R sonarqube:sonarqube ${SONARQUBE_HOME} \
# this 777 will be replaced by 700 at runtime (allows semi-arbitrary "--user" values)
&& chmod -R 777 "${SQ_DATA_DIR}" "${SQ_EXTENSIONS_DIR}" "${SQ_LOGS_DIR}" "${SQ_TEMP_DIR}" \
&& apk del --purge build-dependencies
COPY --chown=sonarqube:sonarqube run.sh sonar.sh ${SONARQUBE_HOME}/bin/
VOLUME ["/opt/sonarqube/data","/opt/sonarqube/extensions","/opt/sonarqube/logs","/opt/sonarqube/temp"]
WORKDIR ${SONARQUBE_HOME}
EXPOSE 9000
#These steps are added to give the desired access to the *sh files to execute in the docker web app service server.
RUN chmod 755 ./bin/run.sh
RUN chmod 755 ./bin/sonar.sh
ENTRYPOINT ["bin/run.sh"]
CMD ["bin/sonar.sh"]
SonarQube server: (screenshot not included)
According to the documentation: "By default, Azure Container Instances are stateless. If the container crashes or stops, all of its state is lost. To persist state beyond the lifetime of the container, you must mount a volume from an external store."
The documentation explains how to mount an Azure file share in Azure Container Instances.
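A minimal sketch of that approach with the Azure CLI (all resource names below are placeholders) mounts an Azure file share over the SonarQube data directory; the extensions directory declared as a volume above can be handled the same way:
az container create \
  --resource-group my-rg \
  --name sonarqube \
  --image myregistry.azurecr.io/sonarqube:custom \
  --ports 9000 \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key "$STORAGE_KEY" \
  --azure-file-volume-share-name sonarqube-data \
  --azure-file-volume-mount-path /opt/sonarqube/data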

After building TensorFlow from source, seeing __longjmp_chk: symbol not found error

I am building TensorFlow 1.10.1 from source in an Alpine Linux Docker image with Python 3.6. The image builds successfully. Nevertheless, running the following code:
import tensorflow
fails with:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "/usr/local/lib/python3.6/imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "/usr/local/lib/python3.6/imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: Error relocating /usr/local/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so: __longjmp_chk: symbol not found
I assumed the problem was with the loader library and added glibc, but the error still appeared. I install Python packages with the dependency management tool Poetry.
Dockerfile:
FROM python:3.6-alpine
WORKDIR /app
COPY pyproject.toml /app/pyproject.toml
COPY pyproject.lock /app/pyproject.lock
# Install TensorFlow CPU version
ENV TENSORFLOW_VERSION 1.10.1
RUN apk update \
&& apk add bash \
ca-certificates \
libstdc++ \
libgfortran \
&& wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub \
&& wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.28-r0/glibc-2.28-r0.apk \
&& apk add glibc-2.28-r0.apk \
&& apk add --virtual build-deps \
curl \
gcc \
gfortran \
g++ \
libstdc++ \
libxml2-dev \
libxslt-dev \
linux-headers \
make \
musl-dev \
python3-dev \
py-numpy-dev \
&& apk --no-cache add --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing/ \
hdf5 \
hdf5-dev \
&& mkdir -p /tmp/build \
&& cd /tmp/build/ \
&& wget http://www.netlib.org/blas/blas-3.6.0.tgz \
&& wget http://www.netlib.org/lapack/lapack-3.6.1.tgz \
&& tar xzf blas-3.6.0.tgz \
&& tar xzf lapack-3.6.1.tgz \
&& cd /tmp/build/BLAS-3.6.0/ \
&& gfortran -O3 -std=legacy -m64 -fno-second-underscore -fPIC -c *.f \
&& ar r libfblas.a *.o \
&& ranlib libfblas.a \
&& mv libfblas.a /tmp/build/. \
&& cd /tmp/build/lapack-3.6.1/ \
&& sed -e "s/frecursive/fPIC/g" -e "s/ \.\.\// /g" -e "s/^CBLASLIB/\#CBLASLIB/g" make.inc.example > make.inc \
&& make lapacklib \
&& make clean \
&& mv liblapack.a /tmp/build/. \
&& cd / \
&& export BLAS=/tmp/build/libfblas.a \
&& export LAPACK=/tmp/build/liblapack.a \
&& python3.6 -m pip install --default-timeout=100 future \
&& curl -sSL https://raw.githubusercontent.com/sdispater/poetry/master/get-poetry.py | python3.6 \
# Download Tensorflow package
&& cd /app \
&& which ld \
&& wget -q https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-${TENSORFLOW_VERSION}-cp36-cp36m-linux_x86_64.whl \
&& python3.6 -m poetry config settings.virtualenvs.create false \
&& python3.6 -m poetry install \
&& apk del build-deps \
&& rm -rf /tmp/build \
/var/cache/apk/*
# Make sure Tensorflow built properly
RUN python3.6 -c 'import tensorflow'
COPY . /app
COPY bin/docker_entrypoint.sh /app/entrypoint.sh
COPY --from=0 /app/train app/train
EXPOSE 80
ENTRYPOINT ["/app/entrypoint.sh"]

Installing the most recent stable version of Node.js in a Dockerfile

The following portion of my Dockerfile installs Node.js, but it defaults to v4.2.6. How do I install the most recent stable version, 7.4.0?
RUN apt-get clean && apt-get update \
&& apt-get -yqq install \
apache2 \
nodejs \ ## nodejs installed here
php \
php-mcrypt \
php-curl \
php-mbstring \
php-xml \
php-zip \
libapache2-mod-php \
php-mysql \
git \
supervisor \
&& apt-get -y autoremove \
&& apt-get clean \
&& php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
&& ln -sf /dev/stdout /var/log/apache2/access.log \
&& ln -sf /dev/stderr /var/log/apache2/error.log
According to the documentation from nodejs.org, you can install it by doing this:
curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
sudo apt-get install -y nodejs
So your Dockerfile could look like this (the NodeSource setup script runs before apt-get install, and sudo is not needed inside a Dockerfile):
RUN curl -sL https://deb.nodesource.com/setup_7.x | bash - \
&& apt-get clean && apt-get update \
&& apt-get -yqq install \
apache2 \
nodejs \
php \
php-mcrypt \
php-curl \
php-mbstring \
php-xml \
php-zip \
libapache2-mod-php \
php-mysql \
git \
supervisor \
&& apt-get -y autoremove \
&& apt-get clean \
&& php -r "readfile('http://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
&& ln -sf /dev/stdout /var/log/apache2/access.log \
&& ln -sf /dev/stderr /var/log/apache2/error.log
Alternatively, nvm can install and pin 7.4.0; note that nvm.sh has to be sourced in the same RUN before nvm is called:
RUN curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash \
&& . ~/.nvm/nvm.sh \
&& nvm install 7.4.0 \
&& nvm use 7.4.0
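To confirm which version the build actually picks up, a throwaway check can be added while iterating (for the NodeSource approach above):
RUN node -v   # expect a v7.x version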
