Update NiFi controllers and processors through the API without IDs - Linux

I need to automate NiFi bootstrapping using Docker (specifically nifi-1.17.0).
I want to load a template on each new container instance, and instead of running the new instance and then logging in to the UI to change all the settings manually, I want to do it automatically from a bash script.
What I need is to update a controller service's fields and one processor's property.
I understand that my workflow is as follows:
stop processor
disable controller service
update controller service
enable controller service
start processor
and
stop processor
update property
start processor
but I also understand that each time I instantiate the template, all the processors and controllers get different IDs.
How can I automate this, or use the API, if I don't know what IDs my processors and controllers will have?
And how do I upload a template and use it through the API?
Thanks in advance to all!
I am running the container using the following Dockerfile:
FROM ubuntu:20.04
# Args
ENV NIFI_SERVER_IP "some ip"
ENV BASE_DIR "some path"
# Create our workdir
RUN mkdir -p ${BASE_DIR} && cd ${BASE_DIR}
# Set workdir
WORKDIR ${BASE_DIR}
# Copy bootstrap to workdir
COPY script.sh .
# Run the bootstrap
RUN apt-get update && apt-get install curl -y
RUN ./script.sh ${NIFI_SERVER_IP}
# Set workdir to new workdir
#WORKDIR ${BASE_DIR}/something i didnt want to share/nifi-1.17.0/bin
# Command to run when container starts
#CMD ["./nifi.sh","start"]
and my script is:
#!/bin/bash
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -y dialog
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends tzdata
DEBIAN_FRONTEND=noninteractive apt-get install wget -y
DEBIAN_FRONTEND=noninteractive apt-get install git-all -y
wget https://archive.apache.org/dist/nifi/1.17.0/nifi-1.17.0-bin.zip
DEBIAN_FRONTEND=noninteractive apt install unzip -y
unzip nifi-1.17.0-bin.zip
cd nifi-1.17.0
sed -i 's/127.0.0.1/0.0.0.0/g' conf/nifi.properties
sed -i "s/nifi.web.proxy.host=/nifi.web.proxy.host=$1:8443/g" conf/nifi.properties
# Install Java 11 (the Ubuntu package is openjdk-11-jre; Fedora-style names like java-11-openjdk will fail here)
DEBIAN_FRONTEND=noninteractive apt-get install -y openjdk-11-jre
echo "export JAVA_HOME='/usr/lib/jvm/java-11-openjdk-amd64/'" >> ~/.bashrc
cd bin
./nifi.sh start
./nifi.sh set-single-user-credentials admin nifipassword1
./nifi.sh stop

You should give each controller service or processor in your flow a unique name, and search for it by that name in a setup script. Some of these actions are possible via the NiFi CLI; however, for more complex NiFi flow configuration I would recommend automating via the community Python client NiPyAPI (I am the main author).
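A minimal shell sketch of that name-based lookup against the REST API (the host, token handling, and the naive JSON parsing are assumptions to adapt; `/flow/search-results` and the multipart template upload are standard NiFi REST endpoints):

```shell
#!/bin/sh
# Assumed deployment details -- adjust for your container.
NIFI="${NIFI:-https://localhost:8443/nifi-api}"
TOKEN="${TOKEN:-}"   # obtain via POST /access/token with your single-user credentials

# Extract the first "id" value from a JSON payload (naive, avoids a jq dependency).
first_id() {
  grep -o '"id" *: *"[^"]*"' | head -n 1 | sed 's/.*"\([^"]*\)"$/\1/'
}

# Look up a component's UUID by its (unique) name via the search endpoint.
find_id() {
  curl -sk -H "Authorization: Bearer $TOKEN" \
    "$NIFI/flow/search-results?q=$1" | first_id
}

# Upload a template XML file into a process group.
upload_template() {  # $1 = process-group id, $2 = template file
  curl -sk -X POST -H "Authorization: Bearer $TOKEN" \
    -F template=@"$2" "$NIFI/process-groups/$1/templates/upload"
}
```

With the UUID in hand, the stop/update/start cycle maps onto `PUT /processors/{id}/run-status` and `PUT /controller-services/{id}/run-status`; note that every PUT must include the component's current revision object, which you fetch with a GET first.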

Related

How to add a user and a group in a Docker container running on macOS

I have a Docker container running "FROM arm64v8/oraclelinux:8", running on a Mac mini (M1) using TightVNC.
I want to add a user called "suiteuser" (uid 42065) in a group called "cvsgroup" (gid 513) inside my Docker container, so that when I run the container it starts under that user directly.
Here is my entire Dockerfile:
FROM arm64v8/oraclelinux:8
# Setup basic environment stuff
ENV container docker
ENV LANG en_US.UTF-8
ENV TZ EST
ENV DEBIAN_FRONTEND=noninteractive
# Base image stuff
#RUN yum install -y zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel openssl-devel vim yum-utils sssd sssd-tools krb5-libs krb5-workstation.x86_64
# CCSMP dependent
RUN yum install -y wget
RUN yum install -y openssl-libs-1.1.1g-15.el8_3.aarch64
RUN yum install -y krb5-workstation krb5-libs krb5-devel
RUN yum install -y glibc-devel glibc-common
RUN yum install -y make gcc java-1.8.0-openjdk-devel tar perl maven svn openssl-devel gcc
RUN yum install -y gdb
RUN yum install -y openldap* openldap-clients nss-pam-ldapd
RUN yum install -y zlib-devel bzip2 bzip2-devel vim yum-utils sssd sssd-tools
# Minor changes to image to get ccsmp to build
RUN ln -s /usr/lib/jvm/java-1.8.0-openjdk /usr/lib/jvm/default-jvm
RUN cp /usr/include/linux/stddef.h /usr/include/stddef.h
# Install ant 1.10.12
RUN wget https://mirror.its.dal.ca/apache//ant/binaries/apache-ant-1.10.12-bin.zip
RUN unzip apache-ant-1.10.12-bin.zip && mv apache-ant-1.10.12/ /opt/ant
ENV JAVA_HOME /usr
ENV ANT_HOME="/opt/ant"
ENV PATH="/opt/ant/bin:$PATH"
CMD /bin/bash
Could anyone please suggest how to do this?
Note 1. I know it's not advisable to do this directly in the container, since every time you want to make changes you would have to rebuild it, but in this case that is what I want.
To create the group:
RUN groupadd -g 513 cvsgroup
To create the user, as a member of that group:
RUN useradd -G cvsgroup -m -u 42065 suiteuser
And toward the end of Dockerfile, you can set the user:
USER suiteuser
There may be more to do here, though, depending on your application. For example, you may need to chown some of the contents to be owned by suiteuser.
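A sketch of how that ownership fix can look (the /app path is a placeholder for wherever your content lives):

```dockerfile
# Copy new content already owned by the user created above,
# or repair ownership of content baked into earlier layers.
COPY --chown=suiteuser:cvsgroup ./app /app
RUN chown -R suiteuser:cvsgroup /app
USER suiteuser
```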

Issue with node-oracledb inside a Docker container

I have a Nest.js app which connects to an Oracle database through TypeORM, and I'm trying to containerize the application using Docker. When the app is not containerized it works fine, but when I containerize it, I get the following error.
[Nest] 19 - 06/16/2020, 4:00:52 AM [TypeOrmModule] Unable to connect to the database. Retrying (4)... +3003ms
Error: DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/odpi/doc/installation.html#linux for help
Node-oracledb installation instructions: https://oracle.github.io/node-oracledb/INSTALL.html
You must have 64-bit Oracle client libraries in LD_LIBRARY_PATH, or configured with ldconfig.
If you do not have Oracle Database on this computer, then install the Instant Client Basic or Basic Light package from
http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html
at OracleDb.createPool (/app/node_modules/oracledb/lib/oracledb.js:202:8)
at OracleDb.createPool (/app/node_modules/oracledb/lib/util.js:185:19)
I made sure to add the dependencies that are needed for the Oracle thin client in my Dockerfile as follows
FROM oraclelinux:7-slim
RUN yum -y install oracle-release-el7 oracle-nodejs-release-el7 && \
yum-config-manager --disable ol7_developer_EPEL --enable ol7_oracle_instantclient && \
yum -y install oracle-instantclient19.5-basiclite && \
rm -rf /var/cache/yum
FROM node:10
WORKDIR /app
COPY ./package.json ./
I am not sure what else to add to make this work. I got some of the instructions for the Oracle client from https://oracle.github.io/node-oracledb/INSTALL.html#docker
It looks like you are trying to use a multi-stage build. I discussed this in "Node.js Dockerfile Example 3" in my blog post Docker for Oracle Database Applications in Node.js and Python.
You could try something like:
FROM oraclelinux:7-slim as builder
ARG release=19
ARG update=5
RUN yum -y install oracle-release-el7 && \
    yum -y install oracle-instantclient${release}.${update}-basiclite
RUN rm -rf /usr/lib/oracle/${release}.${update}/client64/bin
WORKDIR /usr/lib/oracle/${release}.${update}/client64/lib/
RUN rm -rf *jdbc* *occi* *mysql* *jar
# Get a new image
FROM node:12-buster-slim
# Copy the Instant Client libraries, licenses and config file from the previous image
COPY --from=builder /usr/lib/oracle /usr/lib/oracle
COPY --from=builder /usr/share/oracle /usr/share/oracle
COPY --from=builder /etc/ld.so.conf.d/oracle-instantclient.conf /etc/ld.so.conf.d/oracle-instantclient.conf
RUN apt-get update && apt-get -y upgrade && apt-get -y dist-upgrade && apt-get install -y libaio1 && \
apt-get -y autoremove && apt-get -y clean && \
ldconfig
Overall this is probably not worth the complexity. Look at the other example in the blog post. Or use Oracle's Dockerfile: https://github.com/oracle/docker-images/tree/master/OracleLinuxDevelopers/oraclelinux7/nodejs/12-oracledb
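As a sanity check after building, you can confirm the Instant Client library is visible to the dynamic linker in the final image. A sketch, where "my-node-oracle-app" is a hypothetical tag for the image built above:

```shell
# Run ldconfig inside the finished image and look for libclntsh,
# the library that DPI-1047 complains about.
check_instantclient() {
  docker run --rm my-node-oracle-app sh -c 'ldconfig -p | grep libclntsh'
}
```

If the grep prints nothing, the DPI-1047 error will reappear at runtime.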

Azure IoT Edge Module : Error while Build and Push IoT Edge Solution

I followed this tutorial step by step : https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-c-module
But at the step "Build and Push your solution" (https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-c-module#build-and-push-your-solution) I have the following error in the terminal :
standard_init_linux.go:207: exec user process caused "no such file or directory"
I checked the three points listed in the tutorial ("If you receive an error trying to build and push your module") but I still get the error.
I don't even know which file it is referring to.
Does anyone have an idea what the problem is?
Thanks
EDIT
I have added the full terminal output:
Sending build context to Docker daemon 106kB
Step 1/14 : FROM arm32v7/ubuntu:xenial AS base
---> 8593318db04f
Step 2/14 : RUN apt-get update && apt-get install -y --no-install-recommends software-properties-common && add-apt-repository -y ppa:aziotsdklinux/ppa-azureiot && apt-get update && apt-get install -y azure-iot-sdk-c-dev && rm -rf /var/lib/apt/lists/*
---> Running in 8bed4f396527
standard_init_linux.go:207: exec user process caused "no such file or directory"
The command '/bin/sh -c apt-get update && apt-get install -y --no-install-recommends software-properties-common && add-apt-repository -y ppa:aziotsdklinux/ppa-azureiot && apt-get update && apt-get install -y azure-iot-sdk-c-dev && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 1
Looks like one of the paths in your command cannot be found in the intermediate docker image. Try running a shell directly on the intermediate image using:
docker run -it --entrypoint sh 8593318db04f
to check /var/lib/apt/lists/ and /bin/sh are actually present on the image. You should be able to manually run the command specified in your docker file.
I have found that quite helpful in debugging failing docker builds.
It seems you are building an arm32v7 image; what OS and architecture is your host machine? Can you try building the amd64 image instead of arm32v7?
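If the host is amd64, one common workaround is to register QEMU's binfmt handlers so the arm32v7 base image can be emulated during the build. A sketch, assuming Docker and the `multiarch/qemu-user-static` helper image:

```shell
# Print the architecture the Docker daemon reports, to confirm the mismatch.
host_arch() {
  docker info --format '{{.Architecture}}'
}

# Register QEMU user-mode emulation handlers (needs --privileged; run once per boot).
register_qemu() {
  docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
}
```

After registration, the arm32v7 build should get past the `exec user process` error, albeit running slowly under emulation.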

Can't launch Chrome in a Docker Linux container

I have an ASP.NET Core application that uses the jsreport NuGet packages to run reports. I am attempting to deploy it in a Linux Docker container, but I can't seem to get Chrome to launch when I run a report. I am getting the error:
Failed to launch chrome! Running as root without --no-sandbox is not supported.
I have followed the directions on the .net local reporting page (https://jsreport.net/learn/dotnet-local) regarding docker, but I am still getting the error.
Here is my full docker file:
#use the .net core 2.1 runtime default image
FROM microsoft/dotnet:2.1-aspnetcore-runtime
#set the working directory to the server
WORKDIR /server
#copy all contents in the current directory to the container server directory
COPY . /server
#install node
RUN apt-get update -yq \
&& apt-get install curl gnupg -yq \
&& curl -sL https://deb.nodesource.com/setup_8.x | bash \
&& apt-get install nodejs -yq
#install jsreport-cli
RUN npm install jsreport-cli -g
#install chrome for jsreport linux
RUN apt-get update && \
apt-get install -y gnupg libgconf-2-4 wget && \
wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' && \
apt-get update && \
apt-get install -y google-chrome-unstable --no-install-recommends
ENV chrome:launchOptions:executablePath google-chrome-unstable
ENV chrome:launchOptions:args --no-sandbox
#expose port 80
EXPOSE 80
CMD dotnet Server.dll
Is there another step that I am missing somewhere?
It's a little late, but maybe this can help someone else.
For me, the only thing needed to fix this issue in the Docker container was to run Chrome in headless mode (so the cause was in the tests, not in the Dockerfile).
ChromeOptions options = new ChromeOptions().setHeadless(true);
WebDriver driver = new ChromeDriver(options);
Result: the tests now run successfully, without any errors.
Expanding on Pramod's answer, my own issues were only solved by running with both the --headless and --no-sandbox flags.

Docker - Node.js + MongoDB - "Error: failed to connect to [localhost:27017]"

I am trying to create a container for my Node app. This app uses MongoDB to ensure some data persistence.
So I created this Dockerfile:
FROM ubuntu:latest
# --- Installing MongoDB
# Add 10gen official apt source to the sources list
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
# Hack for initctl not being available in Ubuntu
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN ln -s /bin/true /sbin/initctl
# Install MongoDB
RUN apt-get update
RUN apt-get install mongodb-10gen
# Create the MongoDB data directory
RUN mkdir -p /data/db
CMD ["/usr/bin/mongod", "--smallfiles"]
# --- Installing Node.js
RUN apt-get update
RUN apt-get install -y python-software-properties python python-setuptools ruby rubygems
RUN add-apt-repository ppa:chris-lea/node.js
# Fixing broken dependencies ("nodejs : Depends: rlwrap but it is not installable"):
RUN echo "deb http://archive.ubuntu.com/ubuntu precise universe" >> /etc/apt/sources.list
RUN echo "deb http://us.archive.ubuntu.com/ubuntu/ precise universe" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y nodejs
# Removed unnecessary packages
RUN apt-get purge -y python-software-properties python python-setuptools ruby rubygems
RUN apt-get autoremove -y
# Clear package repository cache
RUN apt-get clean all
# --- Bundle app source
ADD . /src
# Install app dependencies
RUN cd /src; npm install
EXPOSE 8080
CMD ["node", "/src/start.js"]
Then I build and launch the whole thing through:
$ sudo docker build -t aldream/myApp .
$ sudo docker run aldream/myApp
But the machine displays the following error:
[error] Error: failed to connect to [localhost:27017]
Any idea what I am doing wrong? Thanks!
Do you actually run docker run aldream/myApp? In that case, with the Dockerfile you provided, it should run MongoDB, but not your app. Is there another CMD command, or another Dockerfile, or are you running docker run aldream/myApp <somethingelse>? In the latter case, it will override the CMD directive and MongoDB will not be started.
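The key Dockerfile rule here: when several CMD directives appear, only the last one takes effect. In the file above that means:

```dockerfile
# Earlier in the Dockerfile -- silently discarded:
CMD ["/usr/bin/mongod", "--smallfiles"]
# Last CMD wins -- only node is started, so nothing listens on 27017:
CMD ["node", "/src/start.js"]
```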
If you want to run multiple processes in a single container, you need a process manager (like e.g. Supervisor, god, monit) or start the processes in the background from a script; e.g.:
#!/bin/sh
mongod &
node myapp.js &
wait
Redefine your Dockerfile as follows:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# ENTRYPOINT should not be used as it wont allow commands from run to be executed
# Define mountable directories.
VOLUME ["/data/db"]
# Expose ports.
# - 27017: process
# - 28017: http
# - 9191: web app
EXPOSE 27017 28017 9191
ENTRYPOINT ["/usr/bin/supervisord"]
supervisord.conf will contain the following:
[supervisord]
nodaemon=true
[program:mongod]
command=/usr/bin/mongod --smallfiles
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
[program:nodejs]
command=nodejs /opt/app/server/server.js
