I need some help.
I am trying to send email using Rails and the default mail setup (Action Mailer over SMTP). In development everything works, but after dockerizing the project I get the error: "wrong authentication type 'plain'".
------------------------ My Dockerfile ------------------------
FROM ruby:3.1.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile .
COPY Gemfile.lock .
RUN gem update bundler
RUN bundle install
COPY . .
ENV RAILS_ENV production
EXPOSE 3000
CMD rails server -b 0.0.0.0 -p 3000
------------------------ My .env file ------------------------
SMTP_ADDRESS='smtp.gmail.com'
SMTP_PORT=587
SMTP_AUTHENTICATION='plain'
SMTP_USER_NAME='login'
SMTP_PASSWORD='password'
DATABASE_NAME='dbname'
DATABASE_USERNAME='dbuser'
DATABASE_PASSWORD='dbpassword'
DATABASE_PORT=5432
DATABASE_HOST='host.docker.internal'
------------------------ My production.rb file ------------------------
config.action_mailer.delivery_method = :smtp
host = 'example.com' # replace with your own URL
config.action_mailer.default_url_options = { host: host }
config.action_mailer.perform_caching = false
config.action_mailer.raise_delivery_errors = true
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = {
  :address => ENV['SMTP_ADDRESS'],
  :port => ENV['SMTP_PORT'],
  :authentication => ENV['SMTP_AUTHENTICATION'],
  :user_name => ENV['SMTP_USER_NAME'],
  :password => ENV['SMTP_PASSWORD'],
  :enable_starttls_auto => true,
  :openssl_verify_mode => 'none' # SSL is enabled but no certificate is installed, so clients have to accept the untrusted host
}
I think you may need to pass the ENV variables into the container, either through the Dockerfile or, if you have a docker-compose file, there; see the sketch below.
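A minimal sketch, assuming the .env file shown above sits in the project root and the image is called my-rails-app (both names are illustrative):
docker run --env-file .env -p 3000:3000 my-rails-app
Or, with docker-compose:
services:
  web:
    build: .
    env_file:
      - .env
    ports:
      - "3000:3000"
It is also worth confirming that the SMTP_* variables actually reach the container, e.g. with docker exec <container> env.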
I'm using a Docker container on Windows 10 with Node.js. When I try to get data from an Oracle database via a GET request (the connection to the database is in the Node.js code), I get the message:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/node-oracledb/INSTALL.html for help
When I make the GET request without the container (running the server directly), the data is returned fine.
Dockerfile:
FROM node:latest
WORKDIR /app
COPY package*.json app.js ./
RUN npm install
COPY . .
EXPOSE 9000
CMD ["npm", "start"]
Connection to Oracle:
// oracledb is required for createPool; dbConfig and executeSQLCommand are defined elsewhere in the project.
const oracledb = require('oracledb');

async function send2db(sql_command, res) {
  console.log("IN");
  console.log(sql_command);
  try {
    await oracledb.createPool({
      user: dbConfig.user,
      password: dbConfig.password,
      connectString: dbConfig.connectString,
    });
    console.log("Connection pool started");
    const result = await executeSQLCommand(sql_command
      // { outFormat: oracledb.OUT_FORMAT_OBJECT }
    );
    return result;
  } catch (err) {
    // console.log("init() error: " + err.message);
    throw err;
  }
}
From Docker for Oracle Database Applications in Node.js and Python, here is one solution:
FROM node:12-buster-slim
WORKDIR /opt/oracle
RUN apt-get update && \
apt-get install -y libaio1 unzip wget
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
unzip instantclient-basiclite-linuxx64.zip && \
rm -f instantclient-basiclite-linuxx64.zip && \
cd instantclient* && \
rm -f *jdbc* *occi* *mysql* *jar uidrvci genezi adrci && \
echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
ldconfig
You would want to use a later Node.js version now. The referenced link shows installs on other platforms too.
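On top of that base image you would still need to add the application layer itself. A rough sketch, mirroring the Dockerfile from the question (it assumes node-oracledb is listed in package.json):
# Application layer, appended to the Instant Client base above
WORKDIR /app
COPY package*.json app.js ./
RUN npm install
COPY . .
EXPOSE 9000
CMD ["npm", "start"]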
I'm starting to learn Rust. I want to try out connecting to the Memgraph database and executing a query. I'm running a local instance of Memgraph Platform in Docker. I'm running it with default settings.
Since you are using Docker, right after you create a new Rust project using cargo new memgraph_rust --bin, add the following line to the Cargo.toml file under the [dependencies] line:
rsmgclient = "1.0.0"
Then, add the following code to the src/main.rs file:
use rsmgclient::{ConnectParams, Connection, SSLMode};

fn main() {
    // Parameters for connecting to database.
    let connect_params = ConnectParams {
        host: Some(String::from("172.17.0.2")),
        sslmode: SSLMode::Disable,
        ..Default::default()
    };

    // Make a connection to the database.
    let mut connection = match Connection::connect(&connect_params) {
        Ok(c) => c,
        Err(err) => panic!("{}", err)
    };

    // Execute a query.
    let query = "CREATE (u:User {name: 'Alice'})-[:Likes]->(m:Software {name: 'Memgraph'}) RETURN u, m";
    match connection.execute(query, None) {
        Ok(columns) => println!("Columns: {}", columns.join(", ")),
        Err(err) => panic!("{}", err)
    };

    // Fetch all query results.
    match connection.fetchall() {
        Ok(records) => {
            for value in &records[0].values {
                println!("{}", value);
            }
        },
        Err(err) => panic!("{}", err)
    };

    // Commit any pending transaction to the database.
    match connection.commit() {
        Ok(()) => {},
        Err(err) => panic!("{}", err)
    };
}
Now, create a new file in the project root directory /memgraph_rust and name it Dockerfile:
# Set base image (host OS)
FROM rust:1.56
# Install CMake
RUN apt-get update && \
apt-get --yes install cmake
# Install mgclient
RUN apt-get install -y git cmake make gcc g++ libssl-dev clang && \
git clone https://github.com/memgraph/mgclient.git /mgclient && \
cd mgclient && \
git checkout 5ae69ea4774e9b525a2be0c9fc25fb83490f13bb && \
mkdir build && \
cd build && \
cmake .. && \
make && \
make install
# Set the working directory in the container
WORKDIR /code
# Copy the dependencies file to the working directory
COPY Cargo.toml .
# Copy the content of the local src directory to the working directory
RUN mkdir src
COPY src/ ./src
# Generate binary using the Rust compiler
RUN cargo build
# Command to run on container start
CMD [ "cargo", "run" ]
All that is left now is to get the address of the Memgraph container, build the image, and start the application:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' CONTAINER_ID
docker build -t memgraph_rust .
docker run memgraph_rust
If you ever decide to take your Rust program to an environment that doesn't have Docker, you may need to install the rsmgclient driver yourself.
The complete documentation for connecting using Rust can be found at Rust quick start guide on the Memgraph site.
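As a side note, 172.17.0.2 is simply the address the Memgraph container received on Docker's default bridge network. An alternative, not covered by the quick start guide (the network and image names below are assumptions), is to put both containers on a user-defined network and connect by container name instead of by IP:
docker network create memgraph-net
docker run -d --name memgraph --network memgraph-net memgraph/memgraph-platform
docker build -t memgraph_rust .
docker run --network memgraph-net memgraph_rust
With this setup, the host field in ConnectParams would become Some(String::from("memgraph")) instead of the hard-coded IP address.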
I wrote a Python Flask API that takes an image as input, uploads it to an S3 bucket, and then processes it in a function. When I run it locally, it works just fine, but when I run it through the Docker image, it gives botocore.exceptions.NoCredentialsError: Unable to locate credentials.
My Python code:
import boto3

s3BucketName = "textract_bucket"
region_name = 'ap-southeast-1'
aws_access_key_id = 'advdsav'
aws_secret_access_key = 'sfvsdvsdvsdvvfbdf'

session = boto3.Session(
    aws_access_key_id = aws_access_key_id,
    aws_secret_access_key = aws_secret_access_key,
)
s3 = session.resource('s3')

# Amazon Textract client
textractmodule = boto3.client('textract', region_name = region_name)

def extract_text(doc_name):
    response = textractmodule.detect_document_text(
        Document={
            'S3Object': {
                'Bucket': s3BucketName,
                'Name': doc_name,
            }
        })
    extracted_items = []
    for item in response["Blocks"]:
        if item["BlockType"] == "LINE":
            extracted_items.append(item["Text"])
    return extracted_items
The Flask API:
@app.route('/text_extract', methods = ['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        img = request.files['file']
        file = secure_filename(img.filename)
        bucket.Object(file).put(Body=img)
        output = extract_text(file)
        return {'results': output}

app.run(host="0.0.0.0")
Dockerfile:
FROM python:3.7
RUN apt update
RUN apt install -y libgl1-mesa-glx
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 5000
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
The docker commands that I ran are:
docker build -t text_extract .
and then: docker run -p 5000:5000 text_extract
When I run the API and make a POST request, I get the botocore.exceptions.NoCredentialsError error.
How can I fix this? Thanks
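One thing to check, offered as an assumption rather than a confirmed diagnosis: the Textract client above is created with boto3.client('textract', region_name=...) and therefore relies on the default credential chain, which is empty inside a plain container. A minimal sketch of passing credentials in at run time (placeholder values):
docker run -p 5000:5000 \
  -e AWS_ACCESS_KEY_ID=your_access_key \
  -e AWS_SECRET_ACCESS_KEY=your_secret_key \
  -e AWS_DEFAULT_REGION=ap-southeast-1 \
  text_extract
boto3 picks these environment variables up automatically, so no code change is needed for this sketch.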
I am using Puppeteer v1.19.0 in Node.js. I get an "unreachable" error after building and running in Docker.
Here is the JS file:
await puppeteer.launch({
  executablePath: '/usr/bin/chromium-browser',
  args: ['--no-sandbox', '--disable-setuid-sandbox', '--headless'],
}).then(async (browser) => {
  const url = `${thisUrl}analisa-jabatan/pdf/${_id}`
  const page = await browser.newPage()
  await page.goto(url, { waitUntil: 'networkidle0' })
  // await page.evaluate(() => { window.scrollBy(0, window.innerHeight) })
  await page.setViewport({
    width: 1123,
    height: 794,
  })
  setTimeout(async () => {
    const buffer = await page.pdf({
      path: `uploads/analisa-jabatan.pdf`,
      displayHeaderFooter: true,
      headerTemplate: '',
      footerTemplate: '',
      printBackground: true,
      format: 'A4',
      landscape: true,
      margin: {
        top: 20,
        bottom: 20,
        left: 20,
        right: 20,
      },
    })
    let base64data = buffer.toString('base64')
    await res.status(200).send(base64data)
    // await res.download(process.cwd() + '/uploads/analisa-jabatan.pdf')
    await browser.close()
  }, 2000)
})
}
And here is the Dockerfile:
FROM aria/alpine-nodejs:3.10
#FROM node:12-alpine
LABEL maintainer="Aria <aryamuktadir22@gmail.com>"
# ENVIRONMENT VARIABLES
# NODE_ENV
ENV NODE_ENV=production
# SERVER Configuration
ENV HOST=0.0.0.0
ENV PORT=3001
ENV SESSION_SECRET=thisissecret
# CORS Configuration
ENV CORS_ORIGIN=http://117.54.250.109:8081
ENV CORS_METHOD=GET,POST,PUT,DELETE,PATCH,OPTIONS,HEAD
ENV CORS_ALLOWED_HEADERS=Authorization,Content-Type,Access-Control-Request-Method,X-Requested-With
ENV CORS_MAX_AGE=600
ENV CORS_CREDENTIALS=false
# DATABASE Configuration
ENV DB_HOST=anjabdb
ENV DB_PORT=27017
ENV DB_NAME=anjab
# Tell Puppeteer to skip installing Chrome. We'll be using the installed package.
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser
# SET WORKDIR
WORKDIR /usr/local/app
# INSTALL REQUIRED DEPENDENCIES
RUN apk update && apk upgrade && \
apk add --update --no-cache \
gcc g++ make autoconf automake pngquant \
python2 \
chromium \
udev \
nss \
freetype \
freetype-dev \
harfbuzz \
ca-certificates \
ttf-freefont ca-certificates \
nodejs \
yarn \
libpng libpng-dev lcms2 lcms2-dev
# COPY SOURCE TO CONTAINER
ADD deploy/etc/ /etc
ADD package.json app.js server.js process.yml ./
ADD lib ./lib
ADD middlewares ./middlewares
ADD models ./models
ADD modules ./modules
ADD uploads ./uploads
ADD assets ./assets
ADD views ./views
COPY keycloak.js.prod ./keycloak.js
# INSTALL NODE DEPENDENCIES
RUN npm cache clean --force
RUN npm config set unsafe-perm true
RUN npm -g install pm2 phantomjs html-pdf
RUN yarn && yarn install --production=true && sleep 3 &&\
yarn cache clean
RUN set -ex \
&& apk add --no-cache --virtual .build-deps ca-certificates openssl \
&& wget -qO- "https://github.com/dustinblackman/phantomized/releases/download/2.1.1/dockerized-phantomjs.tar.gz" | tar xz -C / \
&& npm install -g phantomjs \
&& apk del .build-deps
EXPOSE 3001
And here is the result:
Error: net::ERR_ADDRESS_UNREACHABLE at http://117.54.250.109:8089/analisa-jabatan/pdf/5ee9e6a15ff81d00c7c3a614
at navigate (/usr/local/app/node_modules/puppeteer/lib/FrameManager.js:120:37)
at process._tickCallback (internal/process/next_tick.js:68:7)
-- ASYNC --
at Frame. (/usr/local/app/node_modules/puppeteer/lib/helper.js:111:15)
at Page.goto (/usr/local/app/node_modules/puppeteer/lib/Page.js:674:49)
at Page. (/usr/local/app/node_modules/puppeteer/lib/helper.js:112:23)
at puppeteer.launch.then (/usr/local/app/modules/analisajabatan/methods/pdfpuppeteer.js:60:20)
at process._tickCallback (internal/process/next_tick.js:68:7)
From the puppeteer docs:
page.goto will not throw an error when any valid HTTP status code is returned by the remote server, including 404 "Not Found" and 500 "Internal Server Error"
So, provided the URL is valid, the server doesn't seem to be sending a response.
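One way to narrow this down, offered as a suggestion rather than part of the original answer, is to check whether the URL is reachable from inside the running container at all (the container name is a placeholder):
docker exec -it <container_name> wget -qO- http://117.54.250.109:8089/
If that also fails, the problem is at the network level (firewall, wrong port, or the target service not listening) rather than anything Puppeteer-specific.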
Apologies if this is a duplicate:
I have a Docker container that is a Node.js service. I want to test the service's endpoint from the same Linux machine. When I test the endpoint using a curl command I get curl: (56) Recv failure: Connection reset by peer.
Here is my Dockerfile
FROM ubuntu
ARG ENVIRONMENT
ARG PORT
RUN apt-get update -qq
RUN apt-get install -y build-essential nodejs npm nodejs-legacy vim
RUN mkdir /database_service
ADD . /database_service
WORKDIR /database_service
RUN npm install -g express
RUN npm install -g path
RUN npm cache clean
EXPOSE $PORT
ENTRYPOINT [ "node", "server.js" ]
CMD [ $PORT, $ENVIRONMENT ]
Here is my configuration file:
module.exports = {
  database: {
    username: 'someusername',
    password: 'somepassword',
    host: '13.68.86.237',
    port: 27017,
    name: 'admin'
  },
  "sandbox_config": {
    "commerce.api.endpoint": "sandbox_ep",
    "eurekaInstance": {
      "instanceId": '10.71.9.40:database-service:' + process.env.PORT || 9200,
      "hostName": 'database-service',
      "app": 'database-service',
      "ipAddr": '10.71.9.40',
      "port": { '$': process.env.PORT || 9200, '#enabled': 'true' },
      "securePort": { '$': 443, '#enabled': 'false' },
      "dataCenterInfo": {
        '#class': 'com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo',
        "name": 'MyOwn'
      },
      "homePageUrl": 'http://database-service:' + process.env.PORT || 9200 + '/',
      "statusPageUrl": 'http://database-service:' + process.env.PORT || 9200 + '/info',
      "healthCheckUrl": 'http://database-service:' + process.env.PORT || 9200 + '/health',
      "vipAddress": 'database-service',
      "secureVipAddress": 'database-service',
      "isCoordinatingDiscoveryServer": 'false',
      "leaseInfo": {
        "renewalIntervalInSecs": 60000,
        "durationInSecs": 60000,
      }
    },
    "eurekaConfig": {
      "host": 'eureka-server',
      "port": 8761,
      "servicePath": '/eureka/apps/'
    }
  }
};
Please suggest whether something is missing here or a command is wrong.
If you run docker image inspect image_tag, you'll see that the variables you believe are being interpolated in your CMD instruction won't be resolved until container run-time.
Add this after your ARG instructions:
ENV PORT $PORT
ENV ENVIRONMENT $ENVIRONMENT
This ensures the values passed as build arguments are available as environment variables at run-time.
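With those ENV lines in place, a minimal sketch of building and running the image (the values and image name are examples only; 9200 is the fallback port from the configuration file):
docker build --build-arg PORT=9200 --build-arg ENVIRONMENT=sandbox -t database_service .
docker run -p 9200:9200 database_service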