How to add flags to golang build in dockerfile - node.js

I am currently running a node server with a golang submodule in docker.
To run the golang module, I run the command
go run cmd/downloader/main.go -build 1621568 -outdir /src/results
I have been unable to figure out how to add these flags to the golang build in my Dockerfile. Here is my current Dockerfile:
FROM golang:1.17 AS downloader
WORKDIR /app
COPY component-review-handler/ ./
RUN go build -o downloader ./cmd/downloader
FROM node:14
# vvv add this line
COPY --from=downloader /app/downloader /usr/local/bin/
# same as before
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
ENV NODE_TLS_REJECT_UNAUTHORIZED='0'
EXPOSE 3000
CMD ["node", "server.js"]
In the node service, I execute the golang binary by running
exec(
  `downloader`,
  (error, stdout, stderr) => {
    if (error) {
      logger.error(`error: ${error.message}`)
      return
    }
    if (stderr) {
      logger.log(`stderr: ${stderr}`)
      return
    }
    logger.log(`stdout: ${stdout}`)
  }
)
The issue is that I need to add flags to my downloader command. Does anyone know how I can add these flags when I dynamically run the binary in the node server?
-build 1621568 -outdir /src/results

Try execFile instead:
const { execFile } = require('child_process')

execFile('downloader', ['-build', '1621568', '-outdir', '/src/results'], (error, stdout, stderr) => {
  if (error) {
    logger.error(`error: ${error.message}`)
    return
  }
  if (stderr) {
    logger.log(`stderr: ${stderr}`)
    return
  }
  logger.log(`stdout: ${stdout}`)
})
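Note that -build and -outdir are flags parsed at run time by the Go program itself, so they belong on the invocation of the binary, not on the go build step in the Dockerfile. execFile also passes each argument straight to the binary without going through a shell, so values containing spaces need no extra quoting. If you prefer promises over callbacks, here is a minimal sketch using Node's util.promisify (assuming, as above, that the downloader binary is on the PATH):

const { promisify } = require('util')
const execFile = promisify(require('child_process').execFile)

async function runDownloader() {
  // Resolves with the collected output once the process exits successfully,
  // and rejects if the binary exits with a non-zero code.
  const { stdout } = await execFile('downloader', [
    '-build', '1621568',
    '-outdir', '/src/results',
  ])
  logger.log(`stdout: ${stdout}`)
}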

Related

Rails doesn't send email in Docker

I need some help.
I am trying to send email using Rails and the default mail service. In development everything works, but after dockerizing the project I get the error: "wrong authentication type 'plain'".
------------------------ My Dockerfile ------------------------
FROM ruby:3.1.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile .
COPY Gemfile.lock .
RUN gem update bundler
RUN bundle install
COPY . .
ENV RAILS_ENV production
EXPOSE 3000
CMD rails server -b 0.0.0.0 -p 3000
------------------------ My .env file ------------------------
SMTP_ADDRESS='smtp.gmail.com'
SMTP_PORT=587
SMTP_AUTHENTICATION='plain'
SMTP_USER_NAME='login'
SMTP_PASSWORD='password'
DATABASE_NAME='dbname'
DATABASE_USERNAME='dbuser'
DATABASE_PASSWORD='dbpassword'
DATABASE_PORT=5432
DATABASE_HOST='host.docker.internal'
------------------------ My production.rb file ------------------------
config.action_mailer.delivery_method = :smtp
host = 'example.com' # replace with your own url
config.action_mailer.default_url_options = { host: host }
config.action_mailer.perform_caching = false
config.action_mailer.raise_delivery_errors = true
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = {
  :address => ENV['SMTP_ADDRESS'],
  :port => ENV['SMTP_PORT'],
  :authentication => ENV['SMTP_AUTHENTICATION'],
  :user_name => ENV['SMTP_USER_NAME'],
  :password => ENV['SMTP_PASSWORD'],
  :enable_starttls_auto => true,
  # SSL is enabled but no certificate is installed, so clients must accept the untrusted host.
  :openssl_verify_mode => 'none'
}
I think you may need to pass the ENV variables into the container, either via the Dockerfile or, if you have a docker-compose file, there.
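For example, a sketch assuming the .env file sits next to the Dockerfile and the image is tagged my-rails-app:

docker run --env-file .env -p 3000:3000 my-rails-app

One gotcha worth checking: docker run --env-file passes values verbatim, including any surrounding quotes, so SMTP_AUTHENTICATION='plain' arrives in the container as the literal string 'plain' (quotes included), which would match the "wrong authentication type 'plain'" error. Dropping the quotes in the .env file may be enough. With docker-compose you can instead list the file under the env_file: key.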

Docker returns error DPI-1047: cannot locate a 64-bit Oracle library: libclntsh.so - Node.js, Windows 10

I'm using a Docker container on Windows 10 with Node.js. When I try to get data from an Oracle database via a GET request (the connection to the database is made in the Node.js code), I get the message:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/node-oracledb/INSTALL.html for help
When I make the GET request without the container (running the server directly), the data is returned fine.
Dockerfile:
FROM node:latest
WORKDIR /app
COPY package*.json app.js ./
RUN npm install
COPY . .
EXPOSE 9000
CMD ["npm", "start"]
Connection to Oracle:
async function send2db(sql_command, res) {
  console.log("IN");
  console.log(sql_command);
  try {
    await oracledb.createPool({
      user: dbConfig.user,
      password: dbConfig.password,
      connectString: dbConfig.connectString,
    });
    console.log("Connection pool started");
    const result = await executeSQLCommand(sql_command
      // { outFormat: oracledb.OUT_FORMAT_OBJECT }
    );
    return result;
  } catch (err) {
    // console.log("init() error: " + err.message);
    throw err;
  }
}
From Docker for Oracle Database Applications in Node.js and Python, here is one solution:
FROM node:12-buster-slim
WORKDIR /opt/oracle
RUN apt-get update && \
    apt-get install -y libaio1 unzip wget
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
    unzip instantclient-basiclite-linuxx64.zip && \
    rm -f instantclient-basiclite-linuxx64.zip && \
    cd instantclient* && \
    rm -f *jdbc* *occi* *mysql* *jar uidrvci genezi adrci && \
    echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
    ldconfig
You would want to use a later Node.js version now. The referenced link shows installs on other platforms too.
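As a side note, send2db in the question creates a new connection pool on every call. Typically you would create the pool once at startup and borrow connections from it per request. A minimal sketch using node-oracledb (dbConfig is from the question; the query execution is inlined here instead of going through the question's executeSQLCommand helper):

const oracledb = require("oracledb");

// Create the pool once, at application startup.
async function initPool() {
  await oracledb.createPool({
    user: dbConfig.user,
    password: dbConfig.password,
    connectString: dbConfig.connectString,
  });
}

// Per request: borrow a connection, run the query, release it.
async function send2db(sql_command) {
  let connection;
  try {
    connection = await oracledb.getPool().getConnection();
    const result = await connection.execute(sql_command, [], {
      outFormat: oracledb.OUT_FORMAT_OBJECT,
    });
    return result.rows;
  } finally {
    if (connection) {
      await connection.close(); // returns the connection to the pool
    }
  }
}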

How can I connect to a Memgraph database and execute queries using Rust?

I'm starting to learn Rust. I want to try out connecting to the Memgraph database and executing a query. I'm running a local instance of Memgraph Platform in Docker, with default settings.
Since you are using Docker, right after you create a new Rust project using cargo new memgraph_rust --bin, add the following line to the Cargo.toml file under the [dependencies] line:
rsmgclient = "1.0.0"
Then, add the following code to the src/main.rs file:
use rsmgclient::{ConnectParams, Connection, SSLMode};

fn main() {
    // Parameters for connecting to the database.
    let connect_params = ConnectParams {
        host: Some(String::from("172.17.0.2")),
        sslmode: SSLMode::Disable,
        ..Default::default()
    };

    // Make a connection to the database.
    let mut connection = match Connection::connect(&connect_params) {
        Ok(c) => c,
        Err(err) => panic!("{}", err),
    };

    // Execute a query.
    let query = "CREATE (u:User {name: 'Alice'})-[:Likes]->(m:Software {name: 'Memgraph'}) RETURN u, m";
    match connection.execute(query, None) {
        Ok(columns) => println!("Columns: {}", columns.join(", ")),
        Err(err) => panic!("{}", err),
    };

    // Fetch all query results.
    match connection.fetchall() {
        Ok(records) => {
            for value in &records[0].values {
                println!("{}", value);
            }
        }
        Err(err) => panic!("{}", err),
    };

    // Commit any pending transaction to the database.
    match connection.commit() {
        Ok(()) => {}
        Err(err) => panic!("{}", err),
    };
}
Now, create a new file in the project root directory /memgraph_rust and name it Dockerfile:
# Set base image (host OS)
FROM rust:1.56

# Install CMake
RUN apt-get update && \
    apt-get --yes install cmake

# Install mgclient
RUN apt-get install -y git cmake make gcc g++ libssl-dev clang && \
    git clone https://github.com/memgraph/mgclient.git /mgclient && \
    cd mgclient && \
    git checkout 5ae69ea4774e9b525a2be0c9fc25fb83490f13bb && \
    mkdir build && \
    cd build && \
    cmake .. && \
    make && \
    make install

# Set the working directory in the container
WORKDIR /code

# Copy the dependencies file to the working directory
COPY Cargo.toml .

# Copy the content of the local src directory to the working directory
RUN mkdir src
COPY src/ ./src

# Generate binary using the Rust compiler
RUN cargo build

# Command to run on container start
CMD [ "cargo", "run" ]
All that is left now is to get the database address with docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' CONTAINER_ID, create an image with docker build -t memgraph_rust ., and start the application with docker run memgraph_rust.
If you ever decide to take your Rust program to an environment that doesn't have Docker, you may need to install the rsmgclient driver.
The complete documentation for connecting with Rust can be found in the Rust quick start guide on the Memgraph site.
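One caveat: 172.17.0.2 is simply the address Docker happened to assign on the default bridge network, and it can change between runs. A more stable sketch (the network name and the memgraph/memgraph-platform image here are illustrative) is to put both containers on a user-defined network and use the Memgraph container's name as the host:

docker network create memgraph-net
docker run -d --name memgraph --network memgraph-net memgraph/memgraph-platform
docker build -t memgraph_rust .
docker run --network memgraph-net memgraph_rust

With this setup, host in ConnectParams would be Some(String::from("memgraph")) instead of the hard-coded IP.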

Vite: Could not resolve entry module (index.html)

I am new to OpenShift 3.11 deployment. I created a multistage Dockerfile for a React application; the build went correctly on my local machine, but when I run it on the OpenShift cluster I get the error below:
> kncare-ui#0.1.0 build
> tsc && vite build
vite v2.9.9 building for production...
✓ 0 modules transformed.
Could not resolve entry module (index.html).
error during build:
Error: Could not resolve entry module (index.html).
at error (/app/node_modules/rollup/dist/shared/rollup.js:198:30)
at ModuleLoader.loadEntryModule (/app/node_modules/rollup/dist/shared/rollup.js:22680:20)
at async Promise.all (index 0)
error: build error: running 'npm run build' failed with exit code 1
and this is my Dockerfile:
FROM node:16.14.2-alpine as build-stage
RUN mkdir -p /app/
WORKDIR /app/
RUN chmod -R 777 /app/
COPY package*.json /app/
COPY tsconfig.json /app/
COPY tsconfig.node.json /app/
RUN npm ci
COPY ./ /app/
RUN npm run build
FROM nginxinc/nginx-unprivileged
#FROM bitnami/nginx:latest
COPY --from=build-stage /app/dist/ /usr/share/nginx/html
#CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["nginx", "-g", "daemon off;"]
EXPOSE 80
Vite uses an HTML page as an entry point by default. So you either need to create one, or, if you don't have an HTML page, you can use Vite in "Library Mode".
https://vitejs.dev/guide/build.html#library-mode
From the docs:
// vite.config.js
const path = require('path')
const { defineConfig } = require('vite')

module.exports = defineConfig({
  build: {
    lib: {
      entry: path.resolve(__dirname, 'lib/main.js'),
      name: 'MyLib',
      fileName: (format) => `my-lib.${format}.js`
    },
    rollupOptions: {
      // make sure to externalize deps that shouldn't be bundled
      // into your library
      external: ['vue'],
      output: {
        // Provide global variables to use in the UMD build
        // for externalized deps
        globals: {
          vue: 'Vue'
        }
      }
    }
  }
})
If you're using ES Modules (i.e., import syntax), look in your package.json to confirm the type field is set to "module", then:
// vite.config.js
import * as path from 'path';
import { defineConfig } from "vite";

const config = defineConfig({
  build: {
    lib: {
      entry: path.resolve(__dirname, 'lib/main.js'),
      name: 'MyLib',
      fileName: (format) => `my-lib.${format}.js`
    },
    rollupOptions: {
      // make sure to externalize deps that shouldn't be bundled
      // into your library
      external: ['vue'],
      output: {
        // Provide global variables to use in the UMD build
        // for externalized deps
        globals: {
          vue: 'Vue'
        }
      }
    }
  }
})

export default config;
I had the same issue because of .dockerignore. Make sure your index.html is not ignored.
In case you are ignoring everything (**), you can add !index.html on the next line and try again.
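For example, a whitelist-style .dockerignore for this project might look like the sketch below (the entries assume the files the Dockerfile above copies: index.html at the root, the src directory, and the package and tsconfig files):

**
!index.html
!package*.json
!tsconfig.json
!tsconfig.node.json
!src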

Run another yarn/npm task within a package.json, without specifying yarn or npm

I have a task in my package.json "deploy", which needs to first call "build". I have specified it like this:
"deploy": "yarn run build; ./deploy.sh",
The problem is that this hard codes yarn as the package manager. So if someone doesn't use yarn, it doesn't work. Switching to npm causes a similar issue.
What's a good way to achieve this while remaining agnostic to the choice of npm or yarn?
One simple approach is to use the npm-run-all package, whose documentation states:
Yarn Compatibility
If a script is invoked with Yarn, npm-run-all will correctly use Yarn to execute the plan's child scripts.
So you can do this:
"predeploy": "run-s build",
"deploy": "./deploy.sh",
And the predeploy step will use either npm or yarn depending on how you invoked the deploy task.
I think it is good for the scripts in package.json to remain package-manager agnostic, but within a project it is probably prudent to agree on a single package manager so that you're not dealing with conflicting lockfiles.
It's probably not ideal, but you could run a .js file at your project root to make these checks.
You could create a file at your project root called yarnpm.js (or whatever), and call that file in your package.json deploy command:
// package.json (trimmed)
"scripts": {
  "deploy": "node yarnpm",
  "build": "whatever build command you use"
},
// yarnpm.js
const fs = require('fs');
const FILE_NAME = process.argv[1].replace(/^.*[\\\/]/, '');

// Command you wish to run, with `{{}}` in place of `npm` or `yarn`.
// This allows you to easily run multiple `npm`/`yarn` commands without much work,
// for example: `{{}} run one && {{}} run two`
const COMMAND_TO_RUN = '{{}} run build; ./deploy.sh';

try {
  if (fs.existsSync('./package-lock.json')) { // Check for `npm`
    execute(COMMAND_TO_RUN.replace(/\{\{\}\}/g, 'npm')); // global replace so every `{{}}` is swapped
  } else if (fs.existsSync('./yarn.lock')) { // Check for `yarn`
    execute(COMMAND_TO_RUN.replace(/\{\{\}\}/g, 'yarn'));
  } else {
    console.log('\x1b[33m', `[${FILE_NAME}] Unable to locate either npm or yarn!`, '\x1b[0m');
  }
} catch (err) {
  console.log('\x1b[31m', `[${FILE_NAME}] Unable to deploy!`, '\x1b[0m');
}

function execute(command) { // Helper function, to make running `exec` easier
  require('child_process').exec(command, (error, stdout, stderr) => {
    if (error) {
      console.log(`error: ${error.message}`);
      return;
    }
    if (stderr) {
      console.log(`stderr: ${stderr}`);
      return;
    }
    console.log(stdout);
  });
}
Hope this helps in some way! Cheers.
EDIT:
...or if you wanted to parameterize the yarnpm.js script, to make it easily reusable, and to keep all "commands" inside the package.json file, you could do something like this:
// package.json (trimmed, parameterized)
"scripts": {
  "deploy": "node yarnpm '{{}} run build; ./deploy.sh'",
  "build": "node build.js"
},
// yarnpm.js (parameterized)
const COMMAND_TO_RUN = process.argv[2]; // Technically, the first 'parameter' is the third index
const FILE_NAME = process.argv[1].replace(/^.*[\\\/]/, '');

if (COMMAND_TO_RUN) {
  const fs = require('fs');
  try {
    if (fs.existsSync('./package-lock.json')) { // Check for `npm`
      execute(COMMAND_TO_RUN.replace(/\{\{\}\}/g, 'npm')); // global replace so every `{{}}` is swapped
    } else if (fs.existsSync('./yarn.lock')) { // Check for `yarn`
      execute(COMMAND_TO_RUN.replace(/\{\{\}\}/g, 'yarn'));
    } else {
      console.log('\x1b[33m', `[${FILE_NAME}] Unable to locate either npm or yarn!`, '\x1b[0m');
    }
  } catch (err) {
    console.log('\x1b[31m', `[${FILE_NAME}] Unable to deploy!`, '\x1b[0m');
  }

  function execute(command) { // Helper function, to make running `exec` easier
    require('child_process').exec(command, (error, stdout, stderr) => {
      if (error) {
        console.log(`error: ${error.message}`);
        return;
      }
      if (stderr) {
        console.log(`stderr: ${stderr}`);
        return;
      }
      console.log(stdout);
    });
  }
} else {
  console.log('\x1b[31m', `[${FILE_NAME}] Requires a single argument!`, '\x1b[0m')
}
What if you check before running?
You can create a new file called build.sh with the content below:
#!/bin/sh
# Check whether the current user has a node environment installed; if not, install it automatically.
if command -v node >/dev/null 2>&1; then
  echo "version of node: $(node -v)"
  echo "version of npm: $(npm -v)"
else
  # Auto-install the node environment. This assumes the platform is CentOS;
  # change this part to suit other platforms.
  curl --silent --location https://rpm.nodesource.com/setup_12.x | sudo bash -
  yum -y install nodejs
fi

npm run build
Then your script will be:
{
  "deploy": "./build.sh && ./deploy.sh"
}
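Remember to make both scripts executable, or the deploy run will fail with a permission error:

chmod +x build.sh deploy.sh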
So I think I have a much simpler solution:
"deploy": "yarn run build || npm run build; ./deploy.sh",
Its only real downside is that if yarn exists but the build fails, npm run build will also run.
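If that double-build case matters, one hedge (a sketch, assuming a POSIX shell and that a lockfile in the project root identifies the package manager) is to branch on the lockfile instead:

"deploy": "if [ -f yarn.lock ]; then yarn run build; else npm run build; fi && ./deploy.sh"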
