Docker returns error DPI-1047: cannot locate a 64-bit Oracle Client library: libclntsh.so (Node.js, Windows 10)

I'm running a Node.js app in a Docker container on Windows 10. When I try to get data from an Oracle database via a GET request (the database connection is made in the Node.js code), I get this message:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/node-oracledb/INSTALL.html for help
When I make the same GET request without the container (running the server directly), the data is returned correctly.
Dockerfile:
FROM node:latest
WORKDIR /app
COPY package*.json app.js ./
RUN npm install
COPY . .
EXPOSE 9000
CMD ["npm", "start"]
Connection code to Oracle:
async function send2db(sql_command, res) {
  console.log("IN");
  console.log(sql_command);
  try {
    await oracledb.createPool({
      user: dbConfig.user,
      password: dbConfig.password,
      connectString: dbConfig.connectString,
    });
    console.log("Connection pool started");
    const result = await executeSQLCommand(sql_command
      // { outFormat: oracledb.OUT_FORMAT_OBJECT }
    );
    return result;
  } catch (err) {
    // console.log("init() error: " + err.message);
    throw err;
  }
}

From Docker for Oracle Database Applications in Node.js and Python, here is one solution:
FROM node:12-buster-slim
WORKDIR /opt/oracle
RUN apt-get update && \
    apt-get install -y libaio1 unzip wget
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
    unzip instantclient-basiclite-linuxx64.zip && \
    rm -f instantclient-basiclite-linuxx64.zip && \
    cd instantclient* && \
    rm -f *jdbc* *occi* *mysql* *jar uidrvci genezi adrci && \
    echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
    ldconfig
You would want to use a later Node.js version now. The referenced link shows installs on other platforms too.
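For reference, here is a sketch that combines the question's original Dockerfile with the Instant Client install above. The Node.js base image tag is an assumption; pick whichever current version suits you.
FROM node:18-bullseye-slim
# Install the Oracle Instant Client so node-oracledb can load libclntsh.so
# (base image tag above is an assumption, not from the original answer)
WORKDIR /opt/oracle
RUN apt-get update && \
    apt-get install -y libaio1 unzip wget && \
    wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
    unzip instantclient-basiclite-linuxx64.zip && \
    rm -f instantclient-basiclite-linuxx64.zip && \
    echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
    ldconfig
# Build and run the app exactly as in the question's Dockerfile
WORKDIR /app
COPY package*.json app.js ./
RUN npm install
COPY . .
EXPOSE 9000
CMD ["npm", "start"]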

Related

Rails doesn't send email in Docker

I need some help.
I'm trying to send email using Rails and the default mail service. In development everything works, but after dockerizing the project I get the error: "wrong authentication type 'plain'".
------------------------ My Dockerfile ------------------------
FROM ruby:3.1.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile .
COPY Gemfile.lock .
RUN gem update bundler
RUN bundle install
COPY . .
ENV RAILS_ENV production
EXPOSE 3000
CMD rails server -b 0.0.0.0 -p 3000
------------------------ My .env file ------------------------
SMTP_ADDRESS='smtp.gmail.com'
SMTP_PORT=587
SMTP_AUTHENTICATION='plain'
SMTP_USER_NAME='login'
SMTP_PASSWORD='password'
DATABASE_NAME='dbname'
DATABASE_USERNAME='dbuser'
DATABASE_PASSWORD='dbpassword'
DATABASE_PORT=5432
DATABASE_HOST='host.docker.internal'
------------------------ My production.rb file ------------------------
config.action_mailer.delivery_method = :smtp
host = 'example.com' # replace with your own URL
config.action_mailer.default_url_options = { host: host }
config.action_mailer.perform_caching = false
config.action_mailer.raise_delivery_errors = true
config.action_mailer.smtp_settings = {
  :address => ENV['SMTP_ADDRESS'],
  :port => ENV['SMTP_PORT'],
  :authentication => ENV['SMTP_AUTHENTICATION'],
  :user_name => ENV['SMTP_USER_NAME'],
  :password => ENV['SMTP_PASSWORD'],
  :enable_starttls_auto => true,
  :openssl_verify_mode => 'none' # SSL is active but no certificate is installed, so clients must accept the untrusted host.
}
I think you need to pass the ENV variables into the container at runtime, either on the docker run command line or, if you have a docker-compose file, there.
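For example, with plain docker run you can load the file directly (a sketch; the image name is an assumption):
# image name "my-rails-app" is an assumption
docker run --env-file .env -p 3000:3000 my-rails-app
or, in a docker-compose file (the service name "web" is likewise an assumption):
services:
  web:            # service name is an assumption
    build: .
    env_file:
      - .env
    ports:
      - "3000:3000"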

How can I connect to a Memgraph database and execute queries using Rust?

I'm starting to learn Rust and want to try connecting to the Memgraph database and executing a query. I'm running a local instance of Memgraph Platform in Docker with default settings.
Since you are using Docker: right after you create a new Rust project with cargo new memgraph_rust --bin, add the following line to the Cargo.toml file under [dependencies]:
rsmgclient = "1.0.0"
Then, add the following code to the src/main.rs file:
use rsmgclient::{ConnectParams, Connection, SSLMode};

fn main() {
    // Parameters for connecting to the database.
    let connect_params = ConnectParams {
        host: Some(String::from("172.17.0.2")),
        sslmode: SSLMode::Disable,
        ..Default::default()
    };

    // Make a connection to the database.
    let mut connection = match Connection::connect(&connect_params) {
        Ok(c) => c,
        Err(err) => panic!("{}", err),
    };

    // Execute a query.
    let query = "CREATE (u:User {name: 'Alice'})-[:Likes]->(m:Software {name: 'Memgraph'}) RETURN u, m";
    match connection.execute(query, None) {
        Ok(columns) => println!("Columns: {}", columns.join(", ")),
        Err(err) => panic!("{}", err),
    };

    // Fetch all query results.
    match connection.fetchall() {
        Ok(records) => {
            for value in &records[0].values {
                println!("{}", value);
            }
        }
        Err(err) => panic!("{}", err),
    };

    // Commit any pending transaction to the database.
    match connection.commit() {
        Ok(()) => {}
        Err(err) => panic!("{}", err),
    };
}
Now, create a new file in the project root directory /memgraph_rust and name it Dockerfile:
# Set base image (host OS)
FROM rust:1.56
# Install CMake
RUN apt-get update && \
    apt-get --yes install cmake
# Install mgclient
RUN apt-get install -y git cmake make gcc g++ libssl-dev clang && \
    git clone https://github.com/memgraph/mgclient.git /mgclient && \
    cd mgclient && \
    git checkout 5ae69ea4774e9b525a2be0c9fc25fb83490f13bb && \
    mkdir build && \
    cd build && \
    cmake .. && \
    make && \
    make install
# Set the working directory in the container
WORKDIR /code
# Copy the dependencies file to the working directory
COPY Cargo.toml .
# Copy the content of the local src directory to the working directory
RUN mkdir src
COPY src/ ./src
# Generate binary using the Rust compiler
RUN cargo build
# Command to run on container start
CMD [ "cargo", "run" ]
All that is now left is to get the database address, build the image, and start the application; the commands are collected below.
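Assuming CONTAINER_ID is the ID of the running Memgraph container, the first command prints the IP address to use as host in ConnectParams:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' CONTAINER_ID
docker build -t memgraph_rust .
docker run memgraph_rust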
If you ever move your Rust program to an environment without Docker, you may need to install the rsmgclient driver manually.
The complete documentation for connecting using Rust can be found in the Rust quick start guide on the Memgraph site.

How to add flags to golang build in dockerfile

I am currently running a node server with a golang submodule in docker.
To run the golang module, I run the command
go run cmd/downloader/main.go -build 1621568 -outdir /src/results
I have been unable to figure out how to add these flags to the golang build in my Dockerfile. Here is my current Dockerfile:
FROM golang:1.17 AS downloader
WORKDIR /app
COPY component-review-handler/ ./
RUN go build -o downloader ./cmd/downloader
FROM node:14
# vvv add this line
COPY --from=downloader /app/downloader /usr/local/bin/
# same as before
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
ENV NODE_TLS_REJECT_UNAUTHORIZED='0'
EXPOSE 3000
CMD ["node", "server.js"]
In the node service, I execute the golang binary by running
exec(
  `downloader`,
  (error, stdout, stderr) => {
    if (error) {
      logger.error(`error: ${error.message}`)
      return
    }
    if (stderr) {
      logger.log(`stderr: ${stderr}`)
      return
    }
    logger.log(`stdout: ${stdout}`)
  }
)
The issue is that I need to add flags to my downloader command. Does anyone know how I can add these flags when I dynamically run the binary in the Node server?
-build 1621568 -outdir /src/results
Try execFile instead:
const { execFile } = require('child_process')

execFile('downloader', ['-build', '1621568', '-outdir', '/src/results'], (error, stdout, stderr) => {
  if (error) {
    logger.error(`error: ${error.message}`)
    return
  }
  if (stderr) {
    logger.log(`stderr: ${stderr}`)
    return
  }
  logger.log(`stdout: ${stdout}`)
})
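Because execFile passes the array entries to the binary directly, no shell quoting or escaping is involved. If the build number or output directory vary at runtime, the same call works with variables in the arguments array (a sketch; the environment variable names BUILD_ID and OUT_DIR are assumptions):
const { execFile } = require('child_process')

// Hypothetical runtime configuration; the defaults are the values from the question.
const build = process.env.BUILD_ID || '1621568'
const outdir = process.env.OUT_DIR || '/src/results'

execFile('downloader', ['-build', build, '-outdir', outdir], (error, stdout, stderr) => {
  if (error) {
    logger.error(`error: ${error.message}`)
    return
  }
  logger.log(`stdout: ${stdout}`)
})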

AWS lambda function throwing error "newPromise is not defined"

I am using an AWS Lambda function with the code below:
'use strict';
var newPromise = require('es6-promise').Promise;
const childProcess = require("child_process");
const path = require("path");

const backupDatabase = () => {
  const scriptFilePath = path.resolve(__dirname, "./backup.sh");
  return newPromise((resolve, reject) => {
    childProcess.execFile(scriptFilePath, (error) => {
      if (error) {
        console.error(error);
        resolve(false);
      }
      resolve(true);
    });
  });
};

module.exports.handler = async (event) => {
  const isBackupSuccessful = await backupDatabase();
  if (isBackupSuccessful) {
    return {
      status: "success",
      message: "Database backup completed successfully!"
    };
  }
  return {
    status: "failed",
    message: "Failed to backup the database! Check out the logs for more details"
  };
};
The code above runs within the Docker container and tries to run the backup script below:
#!/bin/bash
#
# Author: Bruno Coimbra <bbcoimbra@gmail.com>
#
# Backups database located in DB_HOST, DB_PORT, DB_NAME
# and can be accessed using DB_USER. Password should be
# located in $HOME/.pgpass and this file should be
# chmod 0600[1].
#
# Target bucket should be set in BACKUP_BUCKET variable.
#
# AWS credentials should be available as needed by aws-cli[2].
#
# Dependencies:
#
# * pg_dump executable (can be found in postgresql-client-<version> package)
# * aws-cli (with python environment configured execute 'pip install awscli')
#
#
# References
# [1] - http://www.postgresql.org/docs/9.3/static/libpq-pgpass.html
# [2] - http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
#
#
###############
### Variables
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
DB_HOST=
DB_PORT="5432"
DB_USER="postgres"
BACKUP_BUCKET=
###############
#
# **RISK ZONE** DON'T TOUCH below this line unless you know
# exactly what you are doing.
#
###############
set -e
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
### Variables
S3_BACKUP_BUCKET=${BACKUP_BUCKET:-test-db-backup-bucket}
TEMPFILE_PREFIX="db-$DB_NAME-backup"
TEMPFILE="$(mktemp -t $TEMPFILE_PREFIX-XXXXXXXX)"
DATE="$(date +%Y-%m-%d)"
TIMESTAMP="$(date +%s)"
BACKUPFILE="backup-$DB_NAME-$TIMESTAMP.sql.gz"
LOGTAG="DB $DB_NAME Backup"
### Validations
if [[ ! -r "$HOME/.pgpass" ]]; then
  logger -t "$LOGTAG" "$0: Can't find database credentials. $HOME/.pgpass file isn't readable. Aborted."
  exit 1
fi
if ! which pg_dump > /dev/null; then
  logger -t "$LOGTAG" "$0: Can't find 'pg_dump' executable. Aborted."
  exit 1
fi
if ! which aws > /dev/null; then
  logger -t "$LOGTAG" "$0: Can't find 'aws cli' executable. Aborted."
  exit 1
fi
logger -t "$LOGTAG" "$0: remove any previous dirty backup file"
rm -f /tmp/$TEMPFILE_PREFIX*
### Generate dump and compress it
logger -t "$LOGTAG" "Dumping Database..."
pg_dump -O -x -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -w "$DB_NAME" > "$TEMPFILE"
logger -t "$LOGTAG" "Dumped."
logger -t "$LOGTAG" "Compressing file..."
nice gzip -9 "$TEMPFILE"
logger -t "$LOGTAG" "Compressed."
mv "$TEMPFILE.gz" "$BACKUPFILE"
### Upload it to S3 Bucket and cleanup
logger -t "$LOGTAG" "Uploading '$BACKUPFILE' to S3..."
aws s3 cp "$BACKUPFILE" "s3://$S3_BACKUP_BUCKET/$DATE/$BACKUPFILE"
logger -t "$LOGTAG" "Uploaded."
logger -t "$LOGTAG" "Clean-up..."
rm -f $TEMPFILE
rm -f $BACKUPFILE
rm -f /tmp/$TEMPFILE_PREFIX*
logger -t "$LOGTAG" "Finished."
if [ $? -eq 0 ]; then
  echo "script passed"
  exit 0
else
  echo "script failed"
  exit 1
fi
I created a Docker image with the above app.js content and backup.sh using the Dockerfile below:
ARG FUNCTION_DIR="/function"
FROM node:14-buster
RUN apt-get update && \
    apt install -y \
    g++ \
    make \
    cmake \
    autoconf \
    libtool \
    wget \
    openssh-client \
    gnupg2
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
    echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list && \
    apt-get update && apt-get -y install postgresql-client-12
ARG FUNCTION_DIR
RUN mkdir -p ${FUNCTION_DIR} && chmod -R 755 ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY package.json .
RUN npm install
COPY backup.sh .
RUN chmod +x backup.sh
COPY app.js .
ENTRYPOINT ["/usr/local/bin/npx", "aws-lambda-ric"]
CMD ["app.handler"]
I am running the Docker container created from the image built with the above Dockerfile:
docker run -v ~/aws:/aws -it --rm -p 9000:8080 --entrypoint /aws/aws-lambda-rie backup-db:v1 /usr/local/bin/npx aws-lambda-ric app.handler
and try to hit the container with the curl command below:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
When I run the curl command I see this error:
"newPromise is not defined","trace":["ReferenceError: newPromise is not defined"," at backupDatabase (/function/app.js:9:3)","
I tried adding the variable var newPromise = require('es6-promise').Promise;, but that gave a new error: "Cannot set property 'scqfkjngu7o' of undefined","trace"
Could someone help me fix the error? My expected output is the message described in the function, but instead I'm seeing the errors.
Thank you
Node 14 supports promises natively. You should do:
return new Promise((resolve, reject) => {
  childProcess.execFile(scriptFilePath, (error) => {
    if (error) {
      console.error(error);
      resolve(false);
    }
    resolve(true);
  });
});
Note the space between new and Promise: Promise is a built-in constructor, and new invokes it. There is no need to import any module.
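As a side note, the same function can also be written with util.promisify, which is built into Node. Here is a sketch reusing the names from the question's app.js:
const { promisify } = require("util");
const path = require("path");
const childProcess = require("child_process");
const execFile = promisify(childProcess.execFile);

const backupDatabase = async () => {
  const scriptFilePath = path.resolve(__dirname, "./backup.sh");
  try {
    // The promisified execFile rejects if the script cannot run or exits non-zero.
    await execFile(scriptFilePath);
    return true;
  } catch (error) {
    console.error(error);
    return false;
  }
};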

net::ERR_ADDRESS_UNREACHABLE at {URL}

I am using Puppeteer v1.19.0 in Node.js and get an address-unreachable error after building and running in Docker.
This is the JS file:
await puppeteer.launch({
  executablePath: '/usr/bin/chromium-browser',
  args: ['--no-sandbox', '--disable-setuid-sandbox', '--headless'],
}).then(async (browser) => {
  const url = `${thisUrl}analisa-jabatan/pdf/${_id}`
  const page = await browser.newPage()
  await page.goto(url, { waitUntil: 'networkidle0' })
  // await page.evaluate(() => { window.scrollBy(0, window.innerHeight) })
  await page.setViewport({
    width: 1123,
    height: 794,
  })
  setTimeout(async () => {
    const buffer = await page.pdf({
      path: `uploads/analisa-jabatan.pdf`,
      displayHeaderFooter: true,
      headerTemplate: '',
      footerTemplate: '',
      printBackground: true,
      format: 'A4',
      landscape: true,
      margin: {
        top: 20,
        bottom: 20,
        left: 20,
        right: 20,
      },
    })
    let base64data = buffer.toString('base64')
    await res.status(200).send(base64data)
    // await res.download(process.cwd() + '/uploads/analisa-jabatan.pdf')
    await browser.close()
  }, 2000)
})
}
And this is the Dockerfile:
FROM aria/alpine-nodejs:3.10
#FROM node:12-alpine
LABEL maintainer="Aria <aryamuktadir22@gmail.com>"
# ENVIRONMENT VARIABLES
# NODE_ENV
ENV NODE_ENV=production
# SERVER Configuration
ENV HOST=0.0.0.0
ENV PORT=3001
ENV SESSION_SECRET=thisissecret
# CORS Configuration
ENV CORS_ORIGIN=http://117.54.250.109:8081
ENV CORS_METHOD=GET,POST,PUT,DELETE,PATCH,OPTIONS,HEAD
ENV CORS_ALLOWED_HEADERS=Authorization,Content-Type,Access-Control-Request-Method,X-Requested-With
ENV CORS_MAX_AGE=600
ENV CORS_CREDENTIALS=false
# DATABASE Configuration
ENV DB_HOST=anjabdb
ENV DB_PORT=27017
ENV DB_NAME=anjab
# Tell Puppeteer to skip installing Chrome. We'll be using the installed package.
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
# SET WORKDIR
WORKDIR /usr/local/app
# INSTALL REQUIRED DEPENDENCIES
RUN apk update && apk upgrade && \
    apk add --update --no-cache \
    gcc g++ make autoconf automake pngquant \
    python2 \
    chromium \
    udev \
    nss \
    freetype \
    freetype-dev \
    harfbuzz \
    ca-certificates \
    ttf-freefont ca-certificates \
    nodejs \
    yarn \
    libpng libpng-dev lcms2 lcms2-dev
# COPY SOURCE TO CONTAINER
ADD deploy/etc/ /etc
ADD package.json app.js server.js process.yml ./
ADD lib ./lib
ADD middlewares ./middlewares
ADD models ./models
ADD modules ./modules
ADD uploads ./uploads
ADD assets ./assets
ADD views ./views
COPY keycloak.js.prod ./keycloak.js
# INSTALL NODE DEPENDENCIES
RUN npm cache clean --force
RUN npm config set unsafe-perm true
RUN npm -g install pm2 phantomjs html-pdf
RUN yarn && yarn install --production=true && sleep 3 && \
    yarn cache clean
RUN set -ex \
    && apk add --no-cache --virtual .build-deps ca-certificates openssl \
    && wget -qO- "https://github.com/dustinblackman/phantomized/releases/download/2.1.1/dockerized-phantomjs.tar.gz" | tar xz -C / \
    && npm install -g phantomjs \
    && apk del .build-deps
EXPOSE 3001
And this is the result:
Error: net::ERR_ADDRESS_UNREACHABLE at http://117.54.250.109:8089/analisa-jabatan/pdf/5ee9e6a15ff81d00c7c3a614
    at navigate (/usr/local/app/node_modules/puppeteer/lib/FrameManager.js:120:37)
    at process._tickCallback (internal/process/next_tick.js:68:7)
  -- ASYNC --
    at Frame.<anonymous> (/usr/local/app/node_modules/puppeteer/lib/helper.js:111:15)
    at Page.goto (/usr/local/app/node_modules/puppeteer/lib/Page.js:674:49)
    at Page.<anonymous> (/usr/local/app/node_modules/puppeteer/lib/helper.js:112:23)
    at puppeteer.launch.then (/usr/local/app/modules/analisajabatan/methods/pdfpuppeteer.js:60:20)
    at process._tickCallback (internal/process/next_tick.js:68:7)
From the puppeteer docs:
page.goto will not throw an error when any valid HTTP status code is returned by the remote server, including 404 "Not Found" and 500 "Internal Server Error"
So, provided the URL is valid, the server doesn't seem to be sending a response.
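If you want an explicit failure in the HTTP-error case instead of a blank PDF, here is a small sketch based on the question's code that checks the response object returned by page.goto (Response.ok() and Response.status() are part of the Puppeteer API):
// url is the same variable as in the question's code
const response = await page.goto(url, { waitUntil: 'networkidle0' })
if (!response || !response.ok()) {
  throw new Error(`Failed to load ${url}: ${response ? response.status() : 'no response received'}`)
}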
