I'm trying to build the Trilinos library on a 32-bit Ubuntu virtual machine. I wrote the following configuration script:
cmake \
-D CMAKE_INSTALL_PREFIX:FILEPATH=./ \
-D Trilinos_ENABLE_ALL_OPTIONAL_PACKAGES:BOOL=OFF \
-D Trilinos_ENABLE_Anasazi:BOOL=ON \
-D Trilinos_ENABLE_Epetra:BOOL=ON \
-D Trilinos_ENABLE_EpetraExt:BOOL=ON \
-D Trilinos_ENABLE_Triutils:BOOL=ON \
-D Trilinos_ENABLE_Belos:BOOL=ON \
-D Trilinos_ENABLE_Ifpack:BOOL=ON \
-D Trilinos_ENABLE_TESTS:BOOL=ON \
-D TPL_BLAS_LIBRARIES=/usr/lib/libblas.so.3 \
-D TPL_LAPACK_LIBRARIES=/usr/lib/liblapack.so.3 \
-D CMAKE_VERBOSE_MAKEFILE:BOOL=ON \
-D Trilinos_ENABLE_DEBUG:BOOL=ON \
-D CMAKE_BUILD_TYPE:STRING=DEBUG \
-D Trilinos_ENABLE_EXPLICIT_INSTANTIATION:BOOL=ON \
../
When I execute it with ksh in the terminal, I get the following error:
CMake Error: CMAKE_Fortran_Compiler not set, after EnableLanguage
It appears you do not have a Fortran compiler installed, which is why CMake cannot set CMAKE_Fortran_COMPILER on its own and asks you to specify one manually.
Since you are using Ubuntu, I would recommend gfortran from the GCC suite. Once it is installed from the repositories, CMake should pick it up automatically.
You can install the compiler using
sudo apt-get install gfortran
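If CMake still cannot find the compiler after installing it, the stale cache from the failed configure attempt is usually to blame. A minimal sketch, assuming gfortran lands at /usr/bin/gfortran (the default for the Ubuntu package):
# Clear the stale cache from the failed run so CMake re-detects the compilers,
# then re-run the configure script. If detection still fails, add
#   -D CMAKE_Fortran_COMPILER:FILEPATH=/usr/bin/gfortran
# to the script to point CMake at the compiler explicitly.
rm -rf CMakeCache.txt CMakeFiles/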
I am trying to run this Docker image, but I am not sure why I am getting this error:
/usr/bin/time: cannot run /usr/bin/java: No such file or directory
Command exited with non-zero status 127
Can someone please help me debug this error?
My Dockerfile:
FROM openjdk:8-jre
LABEL maintainer="APN <xxx#xxx.edu>"
LABEL org.label-schema.schema-version="1.0"
# LABEL org.label-schema.build-date=$BUILD_DATE
LABEL org.label-schema.name="apn/addreadgroups"
LABEL org.label-schema.description="Image for adding read groups in .bam"
ENV PICARD_VERSION 2.20.8
WORKDIR /tmp
RUN apt-get update -y \
&& apt-get install --no-install-recommends -y \
make \
gcc \
g++ \
libz-dev \
libbz2-dev \
liblzma-dev \
ncurses-dev \
bc \
libnss-sss \
time \
&& cd /tmp \
&& wget -q -O /usr/bin/picard.jar https://github.com/broadinstitute/picard/releases/download/${PICARD_VERSION}/picard.jar \
&& ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime \
&& echo "America/Chicago" > /etc/timezone \
&& dpkg-reconfigure --frontend noninteractive tzdata \
&& apt-get clean all \
&& rm -rfv /var/lib/apt/lists/* /tmp/* /var/tmp/*
# This makes the image crazy large -- will find a workaround
# COPY human_g1k_v37_decoy* /usr/local/
COPY ./entrypoint.sh /usr/local/bin/
ENV PICARD /usr/bin/picard.jar
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
# CMD ["/bin/bash"]
and the entrypoint.sh:
JAVAOPTS="-Xms2g -Xmx${MEM}g -XX:+UseSerialGC -Dpicard.useLegacyParser=false"
CUR_STEP="AddOrReplaceReadGroups"
/usr/bin/java ${JAVAOPTS} -jar "${PICARD}" \
"${CUR_STEP}" \
I="${INBAM}" \
O=${BAMFILE} \
RGID=${FLOWCELL} \
RGLB=${LIBRARY} \
RGPL=${PLATFORM} \
RGPU=${FLOWCELL} \
RGSM=${SM}
Exit status 127 means the command was not found.
This is because in openjdk:8-jre the java command is not located at /usr/bin/java; see:
$ docker run -it openjdk:8-jre which java
/usr/local/openjdk-8/bin/java
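Since the image already puts java on the PATH, one possible fix is to drop the hard-coded path; this is a sketch of entrypoint.sh with only the java invocation changed and everything else kept exactly as in the question:
#!/bin/bash
# Sketch: call java from PATH instead of hard-coding /usr/bin/java,
# because openjdk:8-jre ships it under /usr/local/openjdk-8/bin
JAVAOPTS="-Xms2g -Xmx${MEM}g -XX:+UseSerialGC -Dpicard.useLegacyParser=false"
CUR_STEP="AddOrReplaceReadGroups"
java ${JAVAOPTS} -jar "${PICARD}" \
    "${CUR_STEP}" \
    I="${INBAM}" \
    O=${BAMFILE} \
    RGID=${FLOWCELL} \
    RGLB=${LIBRARY} \
    RGPL=${PLATFORM} \
    RGPU=${FLOWCELL} \
    RGSM=${SM}
Alternatively, a symlink could be added in the Dockerfile (for example RUN ln -s "$(which java)" /usr/bin/java) so the original script keeps working unchanged.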
I have the Dockerfile below that I am trying to build. At the end of the file I am attempting to expose ports 8888 and 6006.
FROM nvidia/cuda:9.2-devel-ubuntu16.04
LABEL maintainer="nweir <nweir#iqt.org>"
ARG solaris_branch='master'
# prep apt-get and cudnn
RUN apt-get update && apt-get install -y --no-install-recommends \
apt-utils && \
rm -rf /var/lib/apt/lists/*
# install requirements
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
bc \
bzip2 \
ca-certificates \
curl \
git \
libgdal-dev \
libssl-dev \
libffi-dev \
libncurses-dev \
libgl1 \
jq \
nfs-common \
parallel \
python-dev \
python-pip \
python-wheel \
python-setuptools \
unzip \
vim \
wget \
build-essential \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
SHELL ["/bin/bash", "-c"]
ENV PATH /opt/conda/bin:$PATH
# install anaconda
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-4.5.4-Linux-x86_64.sh -O ~/miniconda.sh && \
/bin/bash ~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda clean -tipsy && \
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc
# prepend pytorch and conda-forge before default channel
RUN conda update conda && \
conda config --prepend channels conda-forge && \
conda config --prepend channels pytorch
# get dev version of solaris and create conda environment based on its env file
WORKDIR /tmp/
RUN git clone https://github.com/cosmiq/solaris.git && \
cd solaris && \
git checkout ${solaris_branch} && \
conda env create -f environment.yml
ENV PATH /opt/conda/envs/solaris/bin:$PATH
RUN cd solaris && pip install .
# install various conda dependencies into the space_base environment
RUN conda install -n solaris \
jupyter \
jupyterlab \
ipykernel
# add a jupyter kernel for the conda environment in case it's wanted
RUN source activate solaris && python -m ipykernel.kernelspec \
--name solaris --display-name solaris
# open ports for jupyterlab and tensorboard
EXPOSE 8888
EXPOSE 6006
RUN ["/bin/bash"]
After building the Dockerfile into an image, I attempt to publish the ports by running the following command:
docker run -p localhost:8888:8888 -p localhost:6006:6006 1ff
When I run docker ps -a I get the output shown in the image below. As you can see, the ports are not published.
I am currently using Ubuntu 20.04.
I can't for the life of me figure out what is wrong, your help would be greatly appreciated!
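For reference, docker run publishes ports with -p in the form hostPort:containerPort, optionally prefixed with an IP address to bind to; a sketch using the same image ID as above:
# Publish on all interfaces (shows up as 0.0.0.0:8888->8888/tcp in docker ps)
docker run -p 8888:8888 -p 6006:6006 1ff
# Or bind explicitly to the loopback IP rather than the name "localhost"
docker run -p 127.0.0.1:8888:8888 -p 127.0.0.1:6006:6006 1ff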
I have created a Dockerfile for SonarQube Community Edition based on the one provided on Docker Hub. I have added volumes in the Dockerfile and set the ACI/App Service restart policy to "Never", but whenever the ACI/App Service restarts there is no history for the already scanned projects (and signing in to the SonarQube Azure container-based serverless instance always asks me to create a new project and a new token all over again).
Could anyone help me troubleshoot this issue? Below are the Dockerfile and a SonarQube screenshot for reference.
FROM alpine:3.11
ENV JAVA_VERSION="jdk-11.0.6+10" \
LANG='en_US.UTF-8' \
LANGUAGE='en_US:en' \
LC_ALL='en_US.UTF-8'
#
# glibc setup
#
RUN set -eux; \
apk add --no-cache --virtual .build-deps curl binutils; \
GLIBC_VER="2.31-r0"; \
ALPINE_GLIBC_REPO="https://github.com/sgerrand/alpine-pkg-glibc/releases/download"; \
GCC_LIBS_URL="https://archive.archlinux.org/packages/g/gcc-libs/gcc-libs-9.1.0-2-x86_64.pkg.tar.xz"; \
GCC_LIBS_SHA256="91dba90f3c20d32fcf7f1dbe91523653018aa0b8d2230b00f822f6722804cf08"; \
ZLIB_URL="https://archive.archlinux.org/packages/z/zlib/zlib-1%3A1.2.11-3-x86_64.pkg.tar.xz"; \
ZLIB_SHA256=17aede0b9f8baa789c5aa3f358fbf8c68a5f1228c5e6cba1a5dd34102ef4d4e5; \
curl -LfsS https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub -o /etc/apk/keys/sgerrand.rsa.pub; \
SGERRAND_RSA_SHA256="823b54589c93b02497f1ba4dc622eaef9c813e6b0f0ebbb2f771e32adf9f4ef2"; \
echo "${SGERRAND_RSA_SHA256} */etc/apk/keys/sgerrand.rsa.pub" | sha256sum -c -; \
curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-${GLIBC_VER}.apk > /tmp/glibc-${GLIBC_VER}.apk; \
apk add --no-cache /tmp/glibc-${GLIBC_VER}.apk; \
curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-bin-${GLIBC_VER}.apk > /tmp/glibc-bin-${GLIBC_VER}.apk; \
apk add --no-cache /tmp/glibc-bin-${GLIBC_VER}.apk; \
curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-i18n-${GLIBC_VER}.apk > /tmp/glibc-i18n-${GLIBC_VER}.apk; \
apk add --no-cache /tmp/glibc-i18n-${GLIBC_VER}.apk; \
/usr/glibc-compat/bin/localedef --force --inputfile POSIX --charmap UTF-8 "$LANG" || true; \
echo "export LANG=$LANG" > /etc/profile.d/locale.sh; \
curl -LfsS ${GCC_LIBS_URL} -o /tmp/gcc-libs.tar.xz; \
echo "${GCC_LIBS_SHA256} */tmp/gcc-libs.tar.xz" | sha256sum -c -; \
mkdir /tmp/gcc; \
tar -xf /tmp/gcc-libs.tar.xz -C /tmp/gcc; \
mv /tmp/gcc/usr/lib/libgcc* /tmp/gcc/usr/lib/libstdc++* /usr/glibc-compat/lib; \
strip /usr/glibc-compat/lib/libgcc_s.so.* /usr/glibc-compat/lib/libstdc++.so*; \
curl -LfsS ${ZLIB_URL} -o /tmp/libz.tar.xz; \
echo "${ZLIB_SHA256} */tmp/libz.tar.xz" | sha256sum -c -; \
mkdir /tmp/libz; \
tar -xf /tmp/libz.tar.xz -C /tmp/libz; \
mv /tmp/libz/usr/lib/libz.so* /usr/glibc-compat/lib; \
apk del --purge .build-deps glibc-i18n; \
rm -rf /tmp/*.apk /tmp/gcc /tmp/gcc-libs.tar.xz /tmp/libz /tmp/libz.tar.xz /var/cache/apk/*;
#
# AdoptOpenJDK/openjdk11 setup
#
RUN set -eux; \
apk add --no-cache --virtual .fetch-deps curl; \
ARCH="$(apk --print-arch)"; \
case "${ARCH}" in \
aarch64|arm64) \
ESUM='7ed04ed9ed7271528e7f03490f1fd7dfbbc2d391414bd6fe4dd80ec3bad76d30'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_aarch64_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
ppc64el|ppc64le) \
ESUM='49231f2c36487b53141ade3f7eb291e2855138b14b1129f9acf435ea9cc0e899'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_ppc64le_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
s390x) \
ESUM='bcb3f46cbad742b08c81e922e313549c029f436ac7d91ef3c9bed8e4049d67d2'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_s390x_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
amd64|x86_64) \
ESUM='c5a4e69e2be0e3e5f5bb7c759960b20650967d0f571baad4a7f15b2c03bda352'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.6%2B10/OpenJDK11U-jre_x64_linux_hotspot_11.0.6_10.tar.gz'; \
;; \
*) \
echo "Unsupported arch: ${ARCH}"; \
exit 1; \
;; \
esac; \
curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; \
echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; \
mkdir -p /opt/java/openjdk; \
cd /opt/java/openjdk; \
tar -xf /tmp/openjdk.tar.gz --strip-components=1; \
apk del --purge .fetch-deps; \
rm -rf /var/cache/apk/*; \
rm -rf /tmp/openjdk.tar.gz;
#
# SonarQube setup
#
ARG SONARQUBE_VERSION=8.4.0.35506
ARG SONARQUBE_ZIP_URL=https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-${SONARQUBE_VERSION}.zip
ENV JAVA_HOME=/opt/java/openjdk \
PATH="/opt/java/openjdk/bin:$PATH" \
SONARQUBE_HOME=/opt/sonarqube \
SONAR_VERSION="${SONARQUBE_VERSION}" \
SQ_DATA_DIR="/opt/sonarqube/data" \
SQ_EXTENSIONS_DIR="/opt/sonarqube/extensions" \
SQ_LOGS_DIR="/opt/sonarqube/logs" \
SQ_TEMP_DIR="/opt/sonarqube/temp"
RUN set -ex \
&& addgroup -S -g 1000 sonarqube \
&& adduser -S -D -u 1000 -G sonarqube sonarqube \
&& apk add --no-cache --virtual build-dependencies gnupg unzip curl \
&& apk add --no-cache bash su-exec ttf-dejavu \
# pub 2048R/D26468DE 2015-05-25
# Key fingerprint = F118 2E81 C792 9289 21DB CAB4 CFCA 4A29 D264 68DE
# uid sonarsource_deployer (Sonarsource Deployer) <infra#sonarsource.com>
# sub 2048R/06855C1D 2015-05-25
&& sed --in-place --expression="s?securerandom.source=file:/dev/random?securerandom.source=file:/dev/urandom?g" "${JAVA_HOME}/conf/security/java.security" \
&& for server in $(shuf -e ha.pool.sks-keyservers.net \
hkp://p80.pool.sks-keyservers.net:80 \
keyserver.ubuntu.com \
hkp://keyserver.ubuntu.com:80 \
pgp.mit.edu) ; do \
gpg --batch --keyserver "${server}" --recv-keys F1182E81C792928921DBCAB4CFCA4A29D26468DE && break || : ; \
done \
&& mkdir --parents /opt \
&& cd /opt \
&& curl --fail --location --output sonarqube.zip --silent --show-error "${SONARQUBE_ZIP_URL}" \
&& curl --fail --location --output sonarqube.zip.asc --silent --show-error "${SONARQUBE_ZIP_URL}.asc" \
&& gpg --batch --verify sonarqube.zip.asc sonarqube.zip \
&& unzip -q sonarqube.zip \
&& mv "sonarqube-${SONARQUBE_VERSION}" sonarqube \
&& rm sonarqube.zip* \
&& rm -rf ${SONARQUBE_HOME}/bin/* \
&& chown -R sonarqube:sonarqube ${SONARQUBE_HOME} \
# this 777 will be replaced by 700 at runtime (allows semi-arbitrary "--user" values)
&& chmod -R 777 "${SQ_DATA_DIR}" "${SQ_EXTENSIONS_DIR}" "${SQ_LOGS_DIR}" "${SQ_TEMP_DIR}" \
&& apk del --purge build-dependencies
COPY --chown=sonarqube:sonarqube run.sh sonar.sh ${SONARQUBE_HOME}/bin/
VOLUME ["/opt/sonarqube/data","/opt/sonarqube/extensions","/opt/sonarqube/logs","/opt/sonarqube/temp"]
WORKDIR ${SONARQUBE_HOME}
EXPOSE 9000
#These steps are added to give the desired access to the *sh files to execute in the docker web app service server.
RUN chmod 755 ./bin/run.sh
RUN chmod 755 ./bin/sonar.sh
ENTRYPOINT ["bin/run.sh"]
CMD ["bin/sonar.sh"]
SonarQube server: (screenshot omitted)
According to the documentation: "By default, Azure Container Instances are stateless. If the container crashes or stops, all of its state is lost. To persist state beyond the lifetime of the container, you must mount a volume from an external store."
The documentation explains how to mount an Azure file share in Azure Container Instances.
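As a rough sketch of what that can look like for this image (the resource group, storage account, registry, and share names below are placeholders, not values from the question), the SonarQube data directory can be backed by an Azure file share when the container group is created:
# Hedged sketch: persist /opt/sonarqube/data on an Azure file share so
# project history survives ACI restarts; all names are placeholders
az container create \
  --resource-group my-rg \
  --name sonarqube \
  --image myregistry.azurecr.io/sonarqube:custom \
  --ports 9000 \
  --azure-file-volume-account-name mystorageaccount \
  --azure-file-volume-account-key "$STORAGE_KEY" \
  --azure-file-volume-share-name sonarqube-data \
  --azure-file-volume-mount-path /opt/sonarqube/data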
I failed to compile OpenCV 3.2.0 on Ubuntu 16.04; this is the error:
Build output check failed:Regex: 'command line option .* is valid for .* but not for C++'
I tried adding -D CMAKE_C_COMPILER=/usr/bin/gcc-5 and -D ENABLE_CXX11=ON, but it didn't work. How can I solve this problem?
Ubuntu 16.04.6 - amd64 : Build opencv-3.2.0
sudo apt build-dep opencv
sudo apt install cmake python3-dev python3-numpy libgl1-mesa-dev libgoogle-glog-dev \
libgphoto2-dev liblapack-dev libleptonica-dev libprotobuf-dev libraw1394-dev libtbb-dev \
libtesseract-dev libv4l-dev maven-repo-helper ocl-icd-opencl-dev protobuf-compiler \
python-dev libtiff-dev libswscale-dev python-vtk *gstreamer*-dev linux-libc-dev \
libavresample-dev libatlas-cpp-0.6-dev libopenblas-dev doxygen \
libinsighttoolkit4-dev liblapacke-dev
tar xvf opencv_3.2.0+dfsg.orig.tar.gz
cd opencv-3.2.0+dfsg/
tar xvf ../opencv_3.2.0+dfsg-6.debian.tar.xz
mkdir build && cd build/
cmake -DCMAKE_INSTALL_PREFIX=/usr \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DOPENCV_MATHJAX_RELPATH=/usr/share/javascript/mathjax/ \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_EXAMPLES=ON \
-DINSTALL_C_EXAMPLES=ON \
-DINSTALL_PYTHON_EXAMPLES=ON \
-DWITH_FFMPEG=ON \
-DWITH_GSTREAMER=OFF \
-DWITH_GTK=ON \
-DWITH_JASPER=OFF \
-DWITH_JPEG=ON \
-DWITH_PNG=ON \
-DWITH_TIFF=ON \
-DWITH_OPENEXR=ON \
-DWITH_PVAPI=ON \
-DWITH_UNICAP=OFF \
-DWITH_EIGEN=ON \
-DWITH_VTK=ON \
-DWITH_GDAL=ON \
-DWITH_GDCM=ON \
-DWITH_XINE=OFF \
-DWITH_IPP=OFF \
-DBUILD_TESTS=OFF \
-DCMAKE_SKIP_RPATH=ON \
-DWITH_CUDA=OFF \
-DENABLE_PRECOMPILED_HEADERS=OFF \
-DWITH_IPP=OFF \
-DWITH_CAROTENE=OFF \
-DOPENCL_INCLUDE_DIR:PATH="/usr/include/CL/" ../
make ## no errors
.
[100%] Linking CXX executable ../../bin/opencv_version
.
[100%] Built target opencv_version
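If the build finishes like this, installing into the prefix chosen above (/usr) is the usual final step:
sudo make install   # install the built libraries and headers into CMAKE_INSTALL_PREFIX
sudo ldconfig       # refresh the dynamic linker cache so the new libraries are found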
I am trying to build FFmpeg with dav1d. I've successfully built dav1d using the following commands:
git clone --depth=1 https://code.videolan.org/videolan/dav1d.git && \
cd dav1d && \
mkdir build && cd build && \
meson .. && \
ninja
After that, I run the following configure command for FFmpeg and get an error:
PKG_CONFIG_PATH="/app/ffmpeg_build/lib/pkgconfig" ./configure \
--prefix="/app/ffmpeg_build" \
--pkg-config-flags="--static" \
--extra-cflags="-I/app/ffmpeg_build/include" \
--extra-ldflags="-L/app/ffmpeg_build/lib" \
--extra-libs="-lpthread -lm" \
--bindir="/usr/local/bin" \
--enable-gpl \
--enable-libass \
--enable-libmp3lame \
--enable-libfreetype \
--enable-libopus \
--enable-libvorbis \
--enable-libx264 \
--enable-libdav1d \
--enable-nonfree
(All the other libraries are installed, and FFmpeg configures and builds correctly with them if I omit --enable-libdav1d, but with the command above I get):
ERROR: dav1d >= 0.2.1 not found using pkg-config
I think the reason could be that meson puts bin files in the wrong directory. Could anyone please help?
P.S. I am using Ubuntu 18.04.
Example of build commands for the other libs:
git -C x264 pull 2> /dev/null || git clone --depth 1 https://code.videolan.org/videolan/x264.git && \
cd x264 && \
PKG_CONFIG_PATH="/app/ffmpeg_build/lib/pkgconfig" ./configure --prefix="/app/ffmpeg_build" --bindir="/usr/local/bin" --enable-static --enable-pic && \
make && \
make install
To make the build pass, you have to add ninja install:
git clone --depth=1 https://code.videolan.org/videolan/dav1d.git && \
cd dav1d && \
mkdir build && cd build && \
meson --bindir="/usr/local/bin" .. && \
ninja && \
ninja install
But this is not enough; if you run FFmpeg afterwards, you'll get:
ffmpeg: error while loading shared libraries: libdav1d.so.4: cannot open shared object file: No such file or directory
To fix this issue, add /usr/local/lib/x86_64-linux-gnu to LD_LIBRARY_PATH:
export LD_LIBRARY_PATH+=":/usr/local/lib/x86_64-linux-gnu"
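If you would rather not export LD_LIBRARY_PATH in every shell, an alternative (a standard ldconfig-based setup; the dav1d.conf file name is an arbitrary choice) is to register the directory with the dynamic linker once:
# Register the library directory system-wide and rebuild the linker cache
echo "/usr/local/lib/x86_64-linux-gnu" | sudo tee /etc/ld.so.conf.d/dav1d.conf
sudo ldconfig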