Run ImageMagick command with Node.js - node.js

I am trying to improve the quality of a scanned PDF before running OCR. I found the textcleaner script, which uses ImageMagick. How can I run this command from my Node.js code?
textcleaner -g -e normalize -f 30 -o 12 -s 2 original.jpg output.jpg

You can use exec:
const { exec } = require('child_process');
exec(`textcleaner -g -e normalize -f 30 -o 12 -s 2 ${inputName} ${outputName}`);
or use spawn, which streams the output. Note that each flag and its value must be a separate array element:
const { spawn } = require('child_process');
const textcleaner = spawn('textcleaner', ['-g', '-e', 'normalize', '-f', '30', '-o', '12', '-s', '2', inputName, outputName]);
textcleaner.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});
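Since the cleaned image feeds straight into OCR, you will usually want to wait for the command to finish and check for errors before proceeding. A minimal sketch using execFile, which skips the shell entirely (so file names with spaces are safe), assuming the same inputName and outputName variables as above:
const { execFile } = require('child_process');

// the callback fires once textcleaner has exited
execFile('textcleaner',
  ['-g', '-e', 'normalize', '-f', '30', '-o', '12', '-s', '2', inputName, outputName],
  (error, stdout, stderr) => {
    if (error) {
      console.error(`textcleaner failed: ${stderr}`);
      return;
    }
    // safe to hand outputName to the OCR step here
    console.log(`textcleaner finished: ${stdout}`);
  });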

Related

execSync throws an error trying to run node

I am running some processes inside of an EC2 instance.
To start them I use an SSM command:
cd / && cd home/ec2-user && . .nvm/nvm.sh && cd ufo && npm run start
Inside of it, I have a method in app.ts, which is started with ts-node app.ts:
import { execSync } from 'node:child_process';
import { takeNextScheduledAudit } from './sqs-scheduler';
import { uploadResultsToBucket } from './s3-uploader';
import { AuditRunParams } from "./types";
import { sendAuditResults } from "./sendResults";

(async function conductor(): Promise<void> {
  const nextAuditRunParams = await takeNextScheduledAudit();
  if (!nextAuditRunParams) {
    execSync("sudo shutdown -h now");
  }
  const { targetUrl, requesterId, endpoint } = nextAuditRunParams as AuditRunParams;
  try {
    execSync(`npx user-flow --url=${targetUrl} --open=false`);
    const resultsUrl = await uploadResultsToBucket(targetUrl);
    await sendAuditResults(requesterId, endpoint, resultsUrl);
  } catch (error) {
    console.log(error);
  }
  await conductor();
})();
If I log in manually and run npm run start, the script works as intended, but if I run it via the SSM command I get this output:
> start
> ts-node app.ts
Error: Command failed: npx user-flow --url=https://deep-blue.io/ --open=false
at checkExecSyncError (node:child_process:841:11)
at execSync (node:child_process:912:15)
at conductor (/home/ec2-user/ufo/app.ts:15:17)
at processTicksAndRejections (node:internal/process/task_queues:96:5) {
status: 243,
signal: null,
output: [ null, <Buffer >, <Buffer 0a> ],
pid: 2691,
stdout: <Buffer >,
stderr: <Buffer 0a>
}
and this error:
Error: Command failed: sudo shutdown -h now
at checkExecSyncError (node:child_process:841:11)
at execSync (node:child_process:912:15)
at conductor (/home/ec2-user/ufo/app.ts:10:17)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async conductor (/home/ec2-user/ufo/app.ts:21:5) {
status: null,
signal: 'SIGTERM',
output: [ null, Buffer(0) [Uint8Array] [], Buffer(0) [Uint8Array] [] ],
pid: 2705,
stdout: Buffer(0) [Uint8Array] [],
stderr: Buffer(0) [Uint8Array] []
}
failed to run commands: exit status 1
Moreover, if I run execSync("node -v && npx -v") it also throws an error.
Why can I run this script when I am logged in, but when I run it via an SSM command it does not recognize node inside of node?
--- Edit - Added Info ---
When running execSync("node -v && npx -v", { shell: '/bin/bash' }) I get an error:
Error: Command failed: node -v && npx -v
When running execSync("ps -p $$ && echo $SHELL", { shell: '/bin/bash' }):
PID TTY TIME CMD
7817 ? 00:00:00 bash
/bin/bash
And when I log in and run ps -p $$ && echo $SHELL I get:
PID TTY TIME CMD
6873 pts/0 00:00:00 bash
/bin/bash
By default, all of the child_process functions execute in the same environment as the process that launched them. I don't have an account handy to test with, but it's quite likely that SSM skips over a traditional shell and just executes certain runtimes directly.
You can use the exec options like this to set a particular shell in which to launch the process:
const output = execSync('echo "doing stuff"', {
  shell: '/bin/bash',
})
console.log('***** output:', output.toString())
This is assuming the OS you're using for the EC2 instance has bash available. Most flavors of Linux should, but if not, /bin/sh is sufficient for what you're doing there. To get a list of the available shells, you can run:
cat /etc/shells
## or possibly
sudo cat /etc/shells
EDIT: Since you say it works fine in a shell already, you have presumably already handled this, but user-flow would also have to be available. It's not a module from npmjs, so would need to already be present on the box as either a local dependency or a private repo to which the EC2 instance has access.
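If the shell still cannot find node, a likely cause is that SSM commands run as root without the ec2-user login environment, so the nvm-managed node is not on PATH. A minimal sketch that sources nvm first; the install path is an assumption based on the SSM command in the question:
const { execSync } = require('node:child_process');

// load nvm so node/npx resolve inside the child shell
const output = execSync(
  '. /home/ec2-user/.nvm/nvm.sh && npx user-flow --url=https://deep-blue.io/ --open=false',
  { shell: '/bin/bash' }
);
console.log(output.toString());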

AWS lambda function throwing error "constchildProcess is not defined"

I am using an AWS Lambda function with the below code:
'use strict';
constchildProcess= require("child_process");
constpath= require("path");

const backupDatabase = () => {
  const scriptFilePath =path.resolve(__dirname, "./backup.sh");
  return newPromise((resolve, reject) => {
    childProcess.execFile(scriptFilePath, (error) => {
      if (error) {
        console.error(error);
        resolve(false);
      }
      resolve(true);
    });
  });
};

module.exports.handler = async (event) => {
  const isBackupSuccessful = await backupDatabase();
  if (isBackupSuccessful) {
    return {
      status: "success",
      message: "Database backup completed successfully!"
    };
  }
  return {
    status: "failed",
    message: "Failed to backup the database! Check out the logs for more details"
  };
};
The code above runs within the Docker container and tries to run the below backup script:
#!/bin/bash
#
# Author: Bruno Coimbra <bbcoimbra#gmail.com>
#
# Backups database located in DB_HOST, DB_PORT, DB_NAME
# and can be accessed using DB_USER. Password should be
# located in $HOME/.pgpass and this file should be
# chmod 0600[1].
#
# Target bucket should be set in BACKUP_BUCKET variable.
#
# AWS credentials should be available as needed by aws-cli[2].
#
# Dependencies:
#
# * pg_dump executable (can be found in postgresql-client-<version> package)
# * aws-cli (with python environment configured execute 'pip install awscli')
#
#
# References
# [1] - http://www.postgresql.org/docs/9.3/static/libpq-pgpass.html
# [2] - http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
#
#
###############
### Variables
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
DB_HOST=
DB_PORT="5432"
DB_USER="postgres"
BACKUP_BUCKET=
###############
#
# **RISK ZONE** DON'T TOUCH below this line unless you know
# exactly what you are doing.
#
###############
set -e
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
### Variables
S3_BACKUP_BUCKET=${BACKUP_BUCKET:-test-db-backup-bucket}
TEMPFILE_PREFIX="db-$DB_NAME-backup"
TEMPFILE="$(mktemp -t $TEMPFILE_PREFIX-XXXXXXXX)"
DATE="$(date +%Y-%m-%d)"
TIMESTAMP="$(date +%s)"
BACKUPFILE="backup-$DB_NAME-$TIMESTAMP.sql.gz"
LOGTAG="DB $DB_NAME Backup"
### Validations
if [[ ! -r "$HOME/.pgpass" ]]; then
  logger -t "$LOGTAG" "$0: Can't find database credentials. $HOME/.pgpass file isn't readable. Aborted."
  exit 1
fi
if ! which pg_dump > /dev/null; then
  logger -t "$LOGTAG" "$0: Can't find 'pg_dump' executable. Aborted."
  exit 1
fi
if ! which aws > /dev/null; then
  logger -t "$LOGTAG" "$0: Can't find 'aws cli' executable. Aborted."
  exit 1
fi
logger -t "$LOGTAG" "$0: remove any previous dirty backup file"
rm -f /tmp/$TEMPFILE_PREFIX*
### Generate dump and compress it
logger -t "$LOGTAG" "Dumping Database..."
pg_dump -O -x -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -w "$DB_NAME" > "$TEMPFILE"
logger -t "$LOGTAG" "Dumped."
logger -t "$LOGTAG" "Compressing file..."
nice gzip -9 "$TEMPFILE"
logger -t "$LOGTAG" "Compressed."
mv "$TEMPFILE.gz" "$BACKUPFILE"
### Upload it to S3 Bucket and cleanup
logger -t "$LOGTAG" "Uploading '$BACKUPFILE' to S3..."
aws s3 cp "$BACKUPFILE" "s3://$S3_BACKUP_BUCKET/$DATE/$BACKUPFILE"
logger -t "$LOGTAG" "Uploaded."
logger -t "$LOGTAG" "Clean-up..."
rm -f $TEMPFILE
rm -f $BACKUPFILE
rm -f /tmp/$TEMPFILE_PREFIX*
logger -t "$LOGTAG" "Finished."
if [ $? -eq 0 ]; then
  echo "script passed"
  exit 0
else
  echo "script failed"
  exit 1
fi
I created a Docker image with the above app.js content and backup.sh, using the below Dockerfile:
ARG FUNCTION_DIR="/function"
FROM node:14-buster
RUN apt-get update && \
    apt install -y \
    g++ \
    make \
    cmake \
    autoconf \
    libtool \
    wget \
    openssh-client \
    gnupg2
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
    echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list && \
    apt-get update && apt-get -y install postgresql-client-12
ARG FUNCTION_DIR
RUN mkdir -p ${FUNCTION_DIR} && chmod -R 755 ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY package.json .
RUN npm install
COPY backup.sh .
RUN chmod +x backup.sh
COPY app.js .
ENTRYPOINT ["/usr/local/bin/npx", "aws-lambda-ric"]
CMD ["app.handler"]
I am running the container created from the image built with the above Dockerfile:
docker run -v ~/aws:/aws -it --rm -p 9000:8080 --entrypoint /aws/aws-lambda-rie backup-db:v1 /usr/local/bin/npx aws-lambda-ric app.handler
And I am trying to hit that container with the below curl command:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
When I run the curl command I see the below error:
29 Nov 2021 10:57:30,838 [INFO] (rapid) extensionsDisabledByLayer(/opt/disable-extensions-jwigqn8j) -> stat /opt/disable-extensions-jwigqn8j: no such file or directory
29 Nov 2021 10:57:30,838 [WARNING] (rapid) Cannot list external agents error=open /opt/extensions: no such file or directory
START RequestId: 053246ef-4687-438d-aade-a6794b917b79 Version: $LATEST
2021-11-29T10:57:30.912Z undefined INFO Executing 'app.handler' in function directory '/function'
2021-11-29T10:57:30.919Z undefined ERROR constchildProcess is not defined
29 Nov 2021 10:57:30,926 [WARNING] (rapid) First fatal error stored in appctx: Runtime.ExitError
29 Nov 2021 10:57:30,927 [WARNING] (rapid) Process 53(npx) exited: Runtime exited with error: exit status 1
29 Nov 2021 10:57:30,927 [ERROR] (rapid) Init failed error=Runtime exited with error: exit status 1 InvokeID=
29 Nov 2021 10:57:30,927 [WARNING] (rapid) Reset initiated: ReserveFail
29 Nov 2021 10:57:30,927 [WARNING] (rapid) Cannot list external agents error=open /opt/extensions: no such file or directory
Could someone help me with fixing the error? My expected output is the message as described in the function, but I am seeing the errors instead.
Thank you
Because they both do not exist. There is a typo on your first two lines:
constchildProcess= require("child_process");
constpath= require("path");
should be:
const childProcess = require("child_process");
const path = require("path");

AWS lambda function throwing error "newPromise is not defined"

I am using an AWS Lambda function with the below code:
'use strict';
var newPromise = require('es6-promise').Promise;
const childProcess= require("child_process");
const path= require("path");

const backupDatabase = () => {
  const scriptFilePath =path.resolve(__dirname, "./backup.sh");
  return newPromise((resolve, reject) => {
    childProcess.execFile(scriptFilePath, (error) => {
      if (error) {
        console.error(error);
        resolve(false);
      }
      resolve(true);
    });
  });
};

module.exports.handler = async (event) => {
  const isBackupSuccessful = await backupDatabase();
  if (isBackupSuccessful) {
    return {
      status: "success",
      message: "Database backup completed successfully!"
    };
  }
  return {
    status: "failed",
    message: "Failed to backup the database! Check out the logs for more details"
  };
};
The code above runs within the Docker container and tries to run the below backup script:
#!/bin/bash
#
# Author: Bruno Coimbra <bbcoimbra#gmail.com>
#
# Backups database located in DB_HOST, DB_PORT, DB_NAME
# and can be accessed using DB_USER. Password should be
# located in $HOME/.pgpass and this file should be
# chmod 0600[1].
#
# Target bucket should be set in BACKUP_BUCKET variable.
#
# AWS credentials should be available as needed by aws-cli[2].
#
# Dependencies:
#
# * pg_dump executable (can be found in postgresql-client-<version> package)
# * aws-cli (with python environment configured execute 'pip install awscli')
#
#
# References
# [1] - http://www.postgresql.org/docs/9.3/static/libpq-pgpass.html
# [2] - http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
#
#
###############
### Variables
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
DB_HOST=
DB_PORT="5432"
DB_USER="postgres"
BACKUP_BUCKET=
###############
#
# **RISK ZONE** DON'T TOUCH below this line unless you know
# exactly what you are doing.
#
###############
set -e
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
### Variables
S3_BACKUP_BUCKET=${BACKUP_BUCKET:-test-db-backup-bucket}
TEMPFILE_PREFIX="db-$DB_NAME-backup"
TEMPFILE="$(mktemp -t $TEMPFILE_PREFIX-XXXXXXXX)"
DATE="$(date +%Y-%m-%d)"
TIMESTAMP="$(date +%s)"
BACKUPFILE="backup-$DB_NAME-$TIMESTAMP.sql.gz"
LOGTAG="DB $DB_NAME Backup"
### Validations
if [[ ! -r "$HOME/.pgpass" ]]; then
  logger -t "$LOGTAG" "$0: Can't find database credentials. $HOME/.pgpass file isn't readable. Aborted."
  exit 1
fi
if ! which pg_dump > /dev/null; then
  logger -t "$LOGTAG" "$0: Can't find 'pg_dump' executable. Aborted."
  exit 1
fi
if ! which aws > /dev/null; then
  logger -t "$LOGTAG" "$0: Can't find 'aws cli' executable. Aborted."
  exit 1
fi
logger -t "$LOGTAG" "$0: remove any previous dirty backup file"
rm -f /tmp/$TEMPFILE_PREFIX*
### Generate dump and compress it
logger -t "$LOGTAG" "Dumping Database..."
pg_dump -O -x -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER" -w "$DB_NAME" > "$TEMPFILE"
logger -t "$LOGTAG" "Dumped."
logger -t "$LOGTAG" "Compressing file..."
nice gzip -9 "$TEMPFILE"
logger -t "$LOGTAG" "Compressed."
mv "$TEMPFILE.gz" "$BACKUPFILE"
### Upload it to S3 Bucket and cleanup
logger -t "$LOGTAG" "Uploading '$BACKUPFILE' to S3..."
aws s3 cp "$BACKUPFILE" "s3://$S3_BACKUP_BUCKET/$DATE/$BACKUPFILE"
logger -t "$LOGTAG" "Uploaded."
logger -t "$LOGTAG" "Clean-up..."
rm -f $TEMPFILE
rm -f $BACKUPFILE
rm -f /tmp/$TEMPFILE_PREFIX*
logger -t "$LOGTAG" "Finished."
if [ $? -eq 0 ]; then
  echo "script passed"
  exit 0
else
  echo "script failed"
  exit 1
fi
I created a Docker image with the above app.js content and backup.sh, using the below Dockerfile:
ARG FUNCTION_DIR="/function"
FROM node:14-buster
RUN apt-get update && \
    apt install -y \
    g++ \
    make \
    cmake \
    autoconf \
    libtool \
    wget \
    openssh-client \
    gnupg2
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - && \
    echo "deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main" | tee /etc/apt/sources.list.d/pgdg.list && \
    apt-get update && apt-get -y install postgresql-client-12
ARG FUNCTION_DIR
RUN mkdir -p ${FUNCTION_DIR} && chmod -R 755 ${FUNCTION_DIR}
WORKDIR ${FUNCTION_DIR}
COPY package.json .
RUN npm install
COPY backup.sh .
RUN chmod +x backup.sh
COPY app.js .
ENTRYPOINT ["/usr/local/bin/npx", "aws-lambda-ric"]
CMD ["app.handler"]
I am running the container created from the image built with the above Dockerfile:
docker run -v ~/aws:/aws -it --rm -p 9000:8080 --entrypoint /aws/aws-lambda-rie backup-db:v1 /usr/local/bin/npx aws-lambda-ric app.handler
And I am trying to hit that container with the below curl command:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
When I run the curl command I see the below error:
An error I see is: "newPromise is not defined","trace":["ReferenceError: newPromise is not defined"," at backupDatabase (/function/app.js:9:3)","
I tried adding the variable var newPromise = require('es6-promise').Promise;, but that gave a new error: "Cannot set property 'scqfkjngu7o' of undefined","trace"
Could someone help me with fixing the error? My expected output is the message as described in the function, but I am seeing the errors instead.
Thank you
Node 14 supports promises natively. You should do:
return new Promise((resolve, reject) => {
  childProcess.execFile(scriptFilePath, (error) => {
    if (error) {
      console.error(error);
      resolve(false);
    }
    resolve(true);
  });
});
Note the space between new and Promise: Promise is the constructor, and new is the operator that invokes it. There is no need to import any module.
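For reference, a minimal sketch of the same function using util.promisify from the Node standard library, which turns the callback-style execFile into a promise-returning one:
const util = require("util");
const childProcess = require("child_process");
const path = require("path");

// the promisified execFile rejects when the script exits non-zero
const execFile = util.promisify(childProcess.execFile);

const backupDatabase = async () => {
  const scriptFilePath = path.resolve(__dirname, "./backup.sh");
  try {
    await execFile(scriptFilePath);
    return true;
  } catch (error) {
    console.error(error);
    return false;
  }
};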

onlyif not working in exec resource

exec { 'add text to file':
  cwd     => '/usrdata/apps/java',
  command => 'command which writes a line in file',
  onlyif  => "grep -c -w line-in-file /path/to/file"
}
Even though the grep command returns 1, the exec resource is getting executed. Where am I going wrong?
Puppet runs the command only when the onlyif command exits 0, and grep exits 0 when it finds a match (1 when it does not), so check which exit status your grep actually produces. Alternatively, drop onlyif and guard inside the command itself:
exec { 'add text to file':
  cwd     => '/usrdata/apps/java',
  command => "grep -q -F 'text to check if present or not' location_of_file.txt || echo 'text to add' >> location_of_file.txt",
}
This will not add the text if it is already present in the file.
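Equivalently, a sketch using Puppet's unless parameter, which runs the exec only when the test command fails (the file name is a placeholder):
exec { 'add text to file':
  cwd     => '/usrdata/apps/java',
  command => "echo 'text to add' >> location_of_file.txt",
  unless  => "grep -q -F 'text to add' location_of_file.txt",
  path    => ['/bin', '/usr/bin'],
}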

Unable to execute "hive -e 'select * from table" using .exec() method of simple-ssh npm module

I am unable to execute a hive -e command using the simple-ssh module's .exec() in Node.js.
I think there is a problem with the single/double quotes ' or ". I don't know which quotes to put in what sequence; I tried a lot of combinations, but none of them worked.
Here is the code below:
function runSSH(obj) {
  var ssh = new SSH({
    host: remote1,
    user: 'root',
    timeout: 1500000,
    key: require('fs').readFileSync("C:/Users/Aiman/Desktop/hRep_prv"),
    agent: process.env.SSH_AUTH_SOCK,
    agentForward: true
  });
  ssh.exec('timeout 900 ssh -i /root/rsaPrvtKeyPath/to/remoteHost2 '+remoteHost2+' \'for i in '+remote3+' '+remote4+'; do clush -w ${i} "hive -e \'select * from table_name limit 3;\'" done\' ',{
    out: function(stdout) {
      devHive_check += stdout;
      obj.devHive_check = devHive_check;
      console.log(stdout);
    }
  }) //--> not executing
  .exec('timeout 300 ssh -i /root/rsaPrvtKeyPath/to/remoteHost2 '+remoteHost2+' \'for i in '+remote3+' '+remote4+'; do clush -w ${i} "ps -ef | grep HiveServer2"; done;\' ',{
    out: function(stdout) {
      devHS2_check += stdout;
      obj.devHS2_check = devHS2_check;
      console.log(stdout);
    }
  }) //--> running fine
  .exec('echo "parse and save"',{
    out: function(){
      parseData(obj);
      ssh.end();
    }
  }).start(); //--> running fine
}
I am logging into remoteHost1 and running a couple of shell scripts (which are running fine), then doing an ssh to remoteHost2 to check Hive (Hive is running on remote3 and remote4).
The HiveServer2 check runs fine, but the Hive query doesn't.
Please help me.
It's solved now.
Instead of the single quotes inside the hive -e command, it requires escaped double quotes. That is, instead of the line
ssh.exec('timeout 900 ssh -i /root/rsaPrvtKeyPath/to/remoteHost2 '+remoteHost2+' \'for i in '+remote3+' '+remote4+'; do clush -w ${i} "hive -e \'select * from table_name limit 3;\'" done\' ',{
I did
ssh.exec('timeout 900 ssh -i /root/rsaPrvtKeyPath/to/remoteHost2 '+remoteHost2+' \'for i in '+remote3+' '+remote4+'; do clush -w ${i} "hive -e \\\"select * from table_name limit 3;\\\" " done\' ',{
and it worked.
Thanks @mscdex for your support.
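When a command has to survive this many quoting layers (local shell, remote shell, clush, hive), it can be easier to build each layer programmatically than to escape by hand. A sketch using the shell-quote npm package (not used in the question, so treat it as an assumption) to compose the command from the inside out:
const { quote } = require('shell-quote');

// innermost layer: the query hive should run
const hiveCmd = quote(['hive', '-e', 'select * from table_name limit 3;']);
// middle layer: the loop the remote host should run; ${i} is expanded
// by the remote shell, so it stays unescaped on purpose
const loop = `for i in ${remote3} ${remote4}; do clush -w \${i} ${quote([hiveCmd])}; done`;
// outermost layer: the whole loop becomes one argument on the ssh command line
ssh.exec(`timeout 900 ssh -i /root/rsaPrvtKeyPath/to/remoteHost2 ${remoteHost2} ${quote([loop])}`, {
  out: (stdout) => console.log(stdout),
});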
