How can I connect to a Memgraph database and execute queries using Rust? - rust

I'm starting to learn Rust and want to try connecting to a Memgraph database and executing a query. I'm running a local instance of Memgraph Platform in Docker with default settings.

Since you are using Docker: right after you create a new Rust project with cargo new memgraph_rust --bin, add the following line to the Cargo.toml file under [dependencies]:
rsmgclient = "1.0.0"
Then, add the following code to the src/main.rs file:
use rsmgclient::{ConnectParams, Connection, SSLMode};

fn main() {
    // Parameters for connecting to database.
    let connect_params = ConnectParams {
        host: Some(String::from("172.17.0.2")),
        sslmode: SSLMode::Disable,
        ..Default::default()
    };

    // Make a connection to the database.
    let mut connection = match Connection::connect(&connect_params) {
        Ok(c) => c,
        Err(err) => panic!("{}", err),
    };

    // Execute a query.
    let query = "CREATE (u:User {name: 'Alice'})-[:Likes]->(m:Software {name: 'Memgraph'}) RETURN u, m";
    match connection.execute(query, None) {
        Ok(columns) => println!("Columns: {}", columns.join(", ")),
        Err(err) => panic!("{}", err),
    };

    // Fetch all query results.
    match connection.fetchall() {
        Ok(records) => {
            for value in &records[0].values {
                println!("{}", value);
            }
        },
        Err(err) => panic!("{}", err),
    };

    // Commit any pending transaction to the database.
    match connection.commit() {
        Ok(()) => {},
        Err(err) => panic!("{}", err),
    };
}
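If the connection and both operations succeed, the program prints Columns: u, m followed by the values of the returned record.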
Now, create a new file in the project root directory /memgraph_rust and name it Dockerfile:
# Set base image (host OS)
FROM rust:1.56
# Install CMake
RUN apt-get update && \
    apt-get --yes install cmake
# Install mgclient
RUN apt-get install -y git cmake make gcc g++ libssl-dev clang && \
    git clone https://github.com/memgraph/mgclient.git /mgclient && \
    cd mgclient && \
    git checkout 5ae69ea4774e9b525a2be0c9fc25fb83490f13bb && \
    mkdir build && \
    cd build && \
    cmake .. && \
    make && \
    make install
# Set the working directory in the container
WORKDIR /code
# Copy the dependencies file to the working directory
COPY Cargo.toml .
# Copy the content of the local src directory to the working directory
RUN mkdir src
COPY src/ ./src
# Generate binary using the Rust compiler
RUN cargo build
# Command to run on container start
CMD [ "cargo", "run" ]
All that is left now is to get the address of the Memgraph container, build the image, and start the application:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' CONTAINER_ID
docker build -t memgraph_rust .
docker run memgraph_rust
If you ever decide to take your Rust program to an environment that doesn't have Docker, you may need to install the rsmgclient driver manually.
The complete documentation for connecting using Rust can be found in the Rust quick start guide on the Memgraph site.

Related

Rails doesn't send email in Docker

I need some help.
I'm trying to send email using Rails and the default mail service. In development everything is OK, but after dockerizing the project I get the error: "wrong authentication type 'plain'".
------------------------ My Dockerfile ------------------------
FROM ruby:3.1.2
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN mkdir /app
WORKDIR /app
COPY Gemfile .
COPY Gemfile.lock .
RUN gem update bundler
RUN bundle install
COPY . .
ENV RAILS_ENV production
EXPOSE 3000
CMD rails server -b 0.0.0.0 -p 3000
------------------------ My .env file ------------------------
SMTP_ADDRESS='smtp.gmail.com'
SMTP_PORT=587
SMTP_AUTHENTICATION='plain'
SMTP_USER_NAME='login'
SMTP_PASSWORD='password'
DATABASE_NAME='dbname'
DATABASE_USERNAME='dbuser'
DATABASE_PASSWORD='dbpassword'
DATABASE_PORT=5432
DATABASE_HOST='host.docker.internal'
------------------------ My production.rb file ------------------------
config.action_mailer.delivery_method = :smtp
host = 'example.com' #replace with your own url
config.action_mailer.default_url_options = { host: host }
config.action_mailer.perform_caching = false
config.action_mailer.raise_delivery_errors = true
config.action_mailer.delivery_method = :smtp
config.action_mailer.smtp_settings = {
  :address => ENV['SMTP_ADDRESS'],
  :port => ENV['SMTP_PORT'],
  :authentication => ENV['SMTP_AUTHENTICATION'],
  :user_name => ENV['SMTP_USER_NAME'],
  :password => ENV['SMTP_PASSWORD'],
  :enable_starttls_auto => true,
  :openssl_verify_mode => 'none' # Use this because SSL is activated but no certificate is installed, so clients have to accept the untrusted URL.
}
I think you may need to pass the ENV variables to the container at runtime. Or, if you have a docker-compose file, pass them there.
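For example, with plain docker run you could forward the whole .env file at startup (a sketch; the image name and port mapping are assumptions):
# Pass every variable from .env into the container's environment at runtime
docker run --env-file .env -p 3000:3000 my-rails-app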

Docker returns error DPI-1047: cannot locate a 64-bit Oracle library: libclntsh.so. Node.js, Windows 10

I'm using Node.js in a Docker container on Windows 10. When I try to get data from an Oracle database via a GET request (the database connection is in the Node.js code), I get the message:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/node-oracledb/INSTALL.html for help
When I make a GET request without the container (running the server directly), the data is returned fine.
Dockerfile:
FROM node:latest
WORKDIR /app
COPY package*.json app.js ./
RUN npm install
COPY . .
EXPOSE 9000
CMD ["npm", "start"]
Connection to Oracle:
async function send2db(sql_command, res) {
  console.log("IN");
  console.log(sql_command);
  try {
    await oracledb.createPool({
      user: dbConfig.user,
      password: dbConfig.password,
      connectString: dbConfig.connectString,
    });
    console.log("Connection pool started");
    const result = await executeSQLCommand(sql_command
      // { outFormat: oracledb.OUT_FORMAT_OBJECT }
    );
    return result;
  } catch (err) {
    // console.log("init() error: " + err.message);
    throw err;
  }
}
From Docker for Oracle Database Applications in Node.js and Python, here is one solution:
FROM node:12-buster-slim
WORKDIR /opt/oracle
RUN apt-get update && \
    apt-get install -y libaio1 unzip wget
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
    unzip instantclient-basiclite-linuxx64.zip && \
    rm -f instantclient-basiclite-linuxx64.zip && \
    cd instantclient* && \
    rm -f *jdbc* *occi* *mysql* *jar uidrvci genezi adrci && \
    echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
    ldconfig
You would want to use a later Node.js version now. The referenced link shows installs on other platforms too.
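As a quick sanity check (my addition, not from the referenced article), you can verify inside the container that the dynamic linker now finds the library:
# Should print a line for libclntsh.so once ldconfig has registered the Instant Client directory
ldconfig -p | grep libclntsh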

How to add flags to golang build in dockerfile

I am currently running a node server with a golang submodule in docker.
To run the golang module, I run the command
go run cmd/downloader/main.go -build 1621568 -outdir /src/results
I have been unable to figure out how to add these flags to the golang build in my dockerfile. Here is my current dockerfile.
FROM golang:1.17 AS downloader
WORKDIR /app
COPY component-review-handler/ ./
RUN go build -o downloader ./cmd/downloader
FROM node:14
# vvv add this line
COPY --from=downloader /app/downloader /usr/local/bin/
# same as before
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
ENV NODE_TLS_REJECT_UNAUTHORIZED='0'
EXPOSE 3000
CMD ["node", "server.js"]
In the node service, I execute the golang binary by running
exec(
  `downloader`,
  (error, stdout, stderr) => {
    if (error) {
      logger.error(`error: ${error.message}`)
      return
    }
    if (stderr) {
      logger.log(`stderr: ${stderr}`)
      return
    }
    logger.log(`stdout: ${stdout}`)
  }
)
The issue is I need to add flags to my downloader command. Does anyone know how I can add these flags when I dynamically run the binary in the node server?
-build 1621568 -outdir /src/results
Try execFile instead, which takes the flags as an array of arguments:
const { execFile } = require('child_process')

execFile('downloader', ['-build', '1621568', '-outdir', '/src/results'], (error, stdout, stderr) => {
  if (error) {
    logger.error(`error: ${error.message}`)
    return
  }
  if (stderr) {
    logger.log(`stderr: ${stderr}`)
    return
  }
  logger.log(`stdout: ${stdout}`)
})
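Unlike exec, execFile does not spawn a shell: the flags are handed to the downloader binary directly as an argument array, so no shell quoting is involved.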

Yarn install production dependencies of a single package in workspace

I'm trying to install the production dependencies only for a single package in my workspace. Is that possible?
I've already tried this:
yarn workspace my-package-in-workspace install -- --prod
But it is installing all production dependencies of all my packages.
yarn 1 doesn't support it as far as I know.
If you are trying to install a specific package in a dockerfile, then there is a workaround:
copy the yarn.lock file and the root package.json
copy only the package.json files that you need: your package's and those of the other packages in the monorepo that your package depends on
in the Dockerfile, manually remove all the devDependencies from every package.json that you copied
run yarn install on the root package.json
Note:
Deterministic installation - it is recommended in monorepos to force a deterministic install - https://stackoverflow.com/a/64503207/806963
Full Dockerfile example:
FROM node:12
WORKDIR /usr/project
COPY yarn.lock package.json remove-all-dev-deps-from-all-package-jsons.js change-version.js ./
ARG package_path=packages/dancer-placing-manager
COPY ${package_path}/package.json ./${package_path}/package.json
RUN node remove-all-dev-deps-from-all-package-jsons.js && rm remove-all-dev-deps-from-all-package-jsons.js
RUN yarn install --frozen-lockfile --production
COPY ${package_path}/dist/src ./${package_path}/dist/src
COPY ${package_path}/src ./${package_path}/src
CMD node --unhandled-rejections=strict ./packages/dancer-placing-manager/dist/src/index.js
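Assuming the two helper scripts above sit in the repository root, building and running the image is the usual sequence (the image tag is just an example):
docker build -t dancer-placing-manager .
docker run dancer-placing-manager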
remove-all-dev-deps-from-all-package-jsons.js:
const fs = require('fs')
const path = require('path')
const { execSync } = require('child_process')

async function deleteDevDeps(packageJsonPath) {
  const packageJson = require(packageJsonPath)
  delete packageJson.devDependencies
  await new Promise((res, rej) =>
    fs.writeFile(packageJsonPath, JSON.stringify(packageJson, null, 2), 'utf-8', error => (error ? rej(error) : res())),
  )
}

function getSubPackagesPaths(repoPath) {
  const result = execSync(`yarn workspaces --json info`).toString()
  const workspacesInfo = JSON.parse(JSON.parse(result).data)
  return Object.values(workspacesInfo)
    .map(workspaceInfo => workspaceInfo.location)
    .map(packagePath => path.join(repoPath, packagePath, 'package.json'))
}

async function main() {
  const repoPath = __dirname
  const packageJsonPath = path.join(repoPath, 'package.json')
  await deleteDevDeps(packageJsonPath)
  await Promise.all(getSubPackagesPaths(repoPath).map(packageJsonPath => deleteDevDeps(packageJsonPath)))
}

if (require.main === module) {
  main()
}
It looks like this is easily possible now with Yarn 2: https://yarnpkg.com/cli/workspaces/focus
But I haven't tried myself.
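If you do try it, the invocation should look roughly like this (an untested sketch, assuming Yarn 2+ with the workspace-tools plugin):
# Make the focus command available, then install only that workspace's production deps
yarn plugin import workspace-tools
yarn workspaces focus --production my-package-in-workspace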
Here is my solution for Yarn 1:
# Install dependencies for the whole monorepo because
# 1. The --ignore-workspaces flag is not implemented https://github.com/yarnpkg/yarn/issues/4099
# 2. The --focus flag is broken https://github.com/yarnpkg/yarn/issues/6715
# Avoid the target workspace dependencies to land in the root node_modules.
sed -i 's|"dependencies":|"workspaces": { "nohoist": ["**"] }, "dependencies":|g' apps/target-app/package.json
# Run `yarn install` twice to workaround https://github.com/yarnpkg/yarn/issues/6988
yarn || yarn
# Find all linked node_modules and dereference them so that there are no broken
# symlinks if the target-app is copied somewhere. (Don't use
# `cp -rL apps/target-app some/destination` because then it also dereferences
# node_modules/.bin/* and thus breaks them.)
cd apps/target-app/node_modules
for f in $(find . -maxdepth 1 -type l)
do
  l=$(readlink -f $f) && rm $f && cp -rf $l $f
done
Now apps/target-app can be copied and used as a standalone app.
I would not recommend it for production. It is slow (because it installs dependencies for the whole monorepo) and not really reliable (because there may be additional issues with symlinks).
You may try:
yarn workspace @my-monorepo/my-package-in-workspace install -- --prod

Puppet agent considers old version of package installed using exec

I am trying to install autoconf version 2.69 by building it from source. After autoconf is installed, my intention is to build another package called crmsh from its source. I want to do this using Puppet.
I have written a few classes that enable me to do this using Puppet. The class contents are below.
Download autoconf from source
class custom-autoconf {
  require custom-packages-1
  exec { "download_autoconf" :
    command => "wget http://ftp.gnu.org/gnu/autoconf/autoconf-2.69.tar.gz ; \
      tar xvfvz autoconf-2.69.tar.gz; ",
    path => ["/bin","/usr/bin","/sbin","/usr/sbin"],
    cwd => '/root',
    unless => "test -e /root/autoconf-2.69.tar.gz",
    provider => shell,
  }
  notify { 'autoconf_download' :
    withpath => true,
    name => "download_autoconf",
    message => "Execution of autoconf download completed. "
  }
}
Build autoconf
class custom-autoconf::custom-autoconf-2 {
  require custom-autoconf
  exec { "install_autoconf" :
    command => "sh configure ; \
      make && make install ; \
      sleep 5 ; \
      autoconf --version",
    path => ["/bin","/usr/bin","/sbin","/usr/sbin"],
    timeout => 1800,
    logoutput => true,
    cwd => '/root/autoconf-2.69',
    onlyif => "test -d /root/autoconf-2.69",
    provider => shell,
  }
  notify { 'autoconf_install' :
    withpath => true,
    name => "install_autoconf",
    message => "Execution of autoconf install completed. Requires custom-autoconf class completion "
  }
}
Download crmsh source
class custom-autoconf::custom-crmsh {
  require custom-autoconf::custom-autoconf-2
  exec { "clone_crmsh" :
    command => "git clone https://github.com/crmsh/crmsh.git ; ",
    path => ["/bin","/usr/bin","/sbin","/usr/sbin"],
    cwd => '/root',
    unless => "test -d /root/crmsh",
    provider => shell,
  }
  notify { 'crmsh_clone' :
    withpath => true,
    name => "clone_crmsh",
    message => "Execution of git clone https://github.com/crmsh/crmsh.git completed. Requires custom-autoconf-2 "
  }
}
Build crmsh
class custom-autoconf::custom-crmsh-1 {
  require custom-autoconf::custom-crmsh
  exec { "build_crmsh" :
    command => "pwd ; \
      autoconf --version ; \
      sleep 5 ; \
      autoconf --version ; \
      sh autogen.sh ; \
      sh configure ; \
      make && make install ; ",
    path => ["/bin","/usr/bin","/sbin","/usr/sbin"],
    require => Class['custom-autoconf::custom-crmsh'],
    cwd => '/root/crmsh',
    onlyif => "test -d /root/crmsh",
    provider => shell,
  }
  notify { 'crmsh_build' :
    withpath => true,
    name => "build_crmsh",
    message => "Execution of crmsh build is complete. Depends on custom-crmsh"
  }
}
The problem is that the crmsh build fails, saying the autoconf version is 2.63:
Notice: /Stage[main]/Custom-autoconf::Custom-crmsh-1/Exec[build_crmsh]/returns: configure.ac:11: error: Autoconf version 2.69 or higher is required
When the Puppet run completes with this failure, I can see that the autoconf version is 2.69 (meaning the initial build of autoconf was successful).
Could someone please tell me why Puppet considers the autoconf version to be 2.63 when on the system it is 2.69? Or am I missing something here?
It was actually my mistake. It turns out that an autoconf binary was present in both /usr/bin and /usr/local/bin. The custom autoconf build creates its binary in /usr/local/bin, and that directory was not listed in the path => section, so Puppet was executing the autoconf in /usr/bin. Adding /usr/local/bin to path fixed the issue.
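A quick way to spot this kind of shadowing (a diagnostic sketch, not part of the original manifests) is to list every autoconf on the PATH and compare versions:
# Shows all matches in PATH order; a bare `autoconf` runs the first one
which -a autoconf
/usr/bin/autoconf --version
/usr/local/bin/autoconf --version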
Thanks for the help.
