Using ImageMagick or GraphicsMagick on Azure Functions - node.js

I am trying to see if my company can use Azure Functions to automate conversions of TIFF files to a number of JPG and PNG formats and sizes. I am using Functions with Node.js, but other languages could be used.
My problem is that I can't get GraphicsMagick or ImageMagick to work on Functions. I used the normal installation procedure with npm install.
It seems to install OK, and the module also seems to load, but nothing happens when I try to process a file. Nothing, as in no errors either.
var fs = require('fs');
var gm = require('gm');

module.exports = function (context, req) {
    context.log('Start...');
    try {
        context.log('Looking for GM...');
        context.log(require.resolve("gm"));
    } catch (e) {
        console.log("GM is not found");
        process.exit(e.code);
    }

    gm('D:/home/site/wwwroot/HttpTriggerJS1/input/870003-02070-main-nfh.jpg')
        .resize(240, 240)
        .noProfile()
        .write('D:/home/site/wwwroot/HttpTriggerJS1/output/resize.jpg',
            function (err) {
                context.log('TEST');
                if (!err) {
                    context.log('done');
                }
            }
        );

    context.done(null, res);
};
I'm not sure that it's even possible, but I haven't found any information that states that it can't.
So, can I use ImageMagick, GraphicsMagick or a third-party image converter in Functions? If yes, is there something special that I need to be aware of when installing?
Is there also a C# solution to this?

Web Apps in Azure is a PaaS (Platform as a Service) offering. You deploy your bits to Azure's IIS containers, and Azure does the rest; we don't get much control.
So you will not have the privilege to install any 3rd-party executable (e.g. ImageMagick or GraphicsMagick) on an Azure Functions app. If you need to do that, look at Virtual Machines. Another option is using a Cloud Services Web or Worker Role.
Alternatively, there is a good image processing library for Node written entirely in JavaScript, with zero external or native dependencies: Jimp. https://github.com/oliver-moran/jimp
Example usage:
var Jimp = require("jimp");

Jimp.read("lenna.png").then(function (lenna) {
    lenna.resize(256, 256)          // resize
        .quality(60)                // set JPEG quality
        .greyscale()                // set greyscale
        .write("lena-small-bw.jpg"); // save
}).catch(function (err) {
    console.error(err);
});
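For the scenario in the question, here is a minimal sketch (my own, not part of the answer above) of how Jimp could be wired into the HTTP-triggered function, assuming the classic Node.js programming model where context.done signals completion; the paths and file names are illustrative:
var Jimp = require("jimp");

module.exports = function (context, req) {
    // Hypothetical input/output paths, mirroring the layout used in the question
    Jimp.read("D:/home/site/wwwroot/HttpTriggerJS1/input/source.jpg")
        .then(function (image) {
            image.resize(240, 240)
                .quality(60)
                .write("D:/home/site/wwwroot/HttpTriggerJS1/output/resize.jpg", function (err) {
                    // Only signal completion once the output file has actually been written
                    context.done(err);
                });
        })
        .catch(function (err) {
            context.done(err);
        });
};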
There is another Node.js library called sharp that can achieve your requirement. You may try it this way:
First, install sharp in your local environment, then deploy your application to Azure together with the node_modules folder, which contains the compiled module. Finally, upgrade the Node executable on Azure App Service to 64-bit.
A similar thread is referenced here.
Example usage:
var sharp = require("sharp");

sharp(inputBuffer)
    .resize(320, 240)
    .toFile('output.webp', (err, info) => {
        // ...
    });
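Since the question is about turning one TIFF into several JPG and PNG sizes, here is a rough sketch of how that could look with sharp; the file names and sizes below are made up for illustration:
const sharp = require("sharp");

// Illustrative widths and file names, not from the question
const widths = [240, 800, 1600];

Promise.all(widths.map(function (width) {
    return Promise.all([
        sharp("input.tif").resize(width).jpeg({ quality: 80 }).toFile("out-" + width + ".jpg"),
        sharp("input.tif").resize(width).png().toFile("out-" + width + ".png")
    ]);
})).then(function () {
    console.log("All renditions written");
}).catch(console.error);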

Azure Functions can also run custom Docker images:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image
Not sure which language you are interested in, but you can use a Python image with a Dockerfile along these lines:
FROM mcr.microsoft.com/azure-functions/python:2.0

RUN apt-get update && \
    apt-get install -y --no-install-recommends apt-utils && \
    apt-get install -y imagemagick

ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true

COPY . /home/site/wwwroot

RUN cd /home/site/wwwroot && \
    pip install -r requirements.txt
You can then use PythonMagick to work with ImageMagick from your function code.
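If you would rather stay in Node.js, the same custom-image idea should apply with the Node.js base image (mcr.microsoft.com/azure-functions/node; this variant is my assumption, not part of the answer above): install imagemagick in the Dockerfile exactly as above and shell out to convert from the function. A rough sketch, with illustrative paths:
const { execFile } = require("child_process");

module.exports = async function (context, req) {
    // Assumes ImageMagick's `convert` binary is installed in the container image
    await new Promise(function (resolve, reject) {
        execFile("convert", ["input.tif", "-resize", "240x240", "output.jpg"], function (err) {
            if (err) { reject(err); } else { resolve(); }
        });
    });
    context.res = { body: "converted" };
};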

You can use a site extension to make ImageMagick work for Azure Web Apps.
Check the repository for more info: https://github.com/fatihturgut/azure-imagemagick-nodejs

Related

node.js running inside k8s as non-root fails to download archive

My application does the following, inside a web server, when handling a request - all imports are plain default node modules (tar, stream, base64-stream etc.) except fs-extra, which is a wrapper around node's built-in fs and which is used instead of the wrapped fs:
// Imports as described above (module-level in the real app)
const stream = require('stream');
const tar = require('tar');
const fse = require('fs-extra');
const { Base64Decode } = require('base64-stream');

let archiveFileStream = fse.createWriteStream(tmpDir + "/" + ARCHIVE_NAME);
let saveArchive = new ArchiveWriter(archiveFileStream, {});

stream.pipeline(request, new Base64Decode(), saveArchive, tar.extract({
    strip: 0,
    C: tmpDir,
    sync: true,
    gzip: true
}), function () {
    try {
        saveArchive.end(null, null, function () {
            try {
                logger.info("Saved file.");
                doSomethingWithUnpackedArchive();
                setTimeout(function () {
                    copyAllFilesToNewLocationSync();
                }, 1000);
            } catch (error) {
                logger.error("Some errors", error);
                // callback is simply a callback passed as parameter to this method - never null
                callback(null, error);
            }
        });
    } catch (error) {
        logger.error("Error while processing archive from request.", error);
        callback(null, error);
    } finally {
        setTimeout(() => fse.remove(tmpDir), 1500);
    }
});
The ArchiveWriter class referenced in that piece of code is:
class ArchiveWriter extends stream.Transform {
    constructor(writeStream, options) {
        super(options);
        this.output = writeStream;
        this.finished = false;
    }

    _transform(chunk, encoding, done) {
        this.output.write(chunk);
        this.push(chunk);
        done();
    }

    _flush(done) {
        logger.debug("Flushing archive.");
        this.output.end();
        this.finished = true;
        done();
        if (this.callback) {
            this.callback();
        }
    }
}
The entire application is packaged into a Docker image using Alpine as the base image, to which Node was added:
FROM alpine
COPY app/ /opt/app/
RUN apk update; \
    apk add node nano curl net-tools wget bash less socat zip unzip procps jq; \
    chmod -R u+rwx /opt/app; \
    adduser --disabled-password --no-create-home app; \
    chown -R app:app /opt/app;
USER app
CMD "node main.js"
When I run the image locally, using docker run, and upload an archived and base64-encoded file to the app, everything works fine. When I run the Docker image as part of a pod in a Kubernetes cluster, I get the message Flushing archive. from the archive writer, but the archive is never fully saved, and therefore processing never proceeds. If I skip creating the app user when building the image, i.e. create the image like this:
FROM alpine
COPY app/ /opt/app/
RUN apk update; \
    apk add node nano curl net-tools wget bash less socat zip unzip procps jq; \
    chmod -R u+rwx /opt/app;
CMD "node main.js"
then everything runs in Kubernetes just as it does locally, i.e. the archive is fully saved and processed further. The behavior doesn't seem to depend on the size of the uploaded file.
When running as non-root inside Kubernetes, the error is reproducible even if I upload the archive into the container, then open a shell into the container and upload the file from there using curl. This puzzles me even more: the isolation from the host OS that containers promise is seemingly not that good, since the same operation performed strictly inside the container in two different environments has different outcomes.
My question: why does Node behave differently in Kubernetes when running as root versus as non-root, and why does it not behave differently in these two situations when running locally? Could it be related to the fact that Alpine uses BusyBox instead of a normal shell? Or are there limitations in Alpine that might impact network traffic of non-privileged processes?

How to fix "Error: /home/site/wwwroot/node_modules/canvas/build/Release/canvas.node: invalid ELF header" on NodeJs Azure Functions in Linux?

I am trying to deploy an Azure Function in Node.js, but it doesn't work on Azure.
My application is a v3 function running on Linux.
When the deployment is completed, I get this 500 error:
Error:
/home/site/wwwroot/node_modules/canvas/build/Release/canvas.node:
invalid ELF header
It happens only when I do these imports:
import ChartDataLabels from 'chartjs-plugin-datalabels';
const canvasRenderService = new CanvasRenderService(width, height, chartCallback);
const chartCallback = (ChartJS) => {
ChartJS.register(require('chartjs-plugin-datalabels'))
};
const jsdom = require("jsdom");
const { JSDOM } = jsdom;
const { document } = (new JSDOM(`...`)).window;
Would someone help me please?
It works (only) on my machine :(
Edit: It works when I do the deployment from the Linux Subsystem (WSL).
I hope this will help somebody.
Azure Functions will not include node_modules when deploying to Azure, because the node_modules directory can contain very large files. You can include your package.json in your function directory and run npm install as you normally would with Node.js projects, using Kudu (https://<function_app_name>.scm.azurewebsites.net) or the Console in the Azure portal.
Check Dependency management for more information.
Refer here: Link 1 & Link 2
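One more option worth noting (my suggestion, not from the answers above): the invalid ELF header comes from a native module (canvas) that was compiled on one platform and then run on Linux, so you can also let Azure compile it on the target platform by deploying with a remote build, for example func azure functionapp publish <function_app_name> --build remote, instead of shipping locally built node_modules.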
Any updates on this topic?
Manually running npm install via Kudu or some other terminal doesn't seem like a valid option for a cloud Function App, especially with continuous deployment etc.
Got the same problem while using canvas for barcode generation...

MeteorUp volumes and how Meteor can access to their contents

First, thank you for reading my question. This is my first time on Stack Overflow and I did a lot of research for answers that could help me.
CONTEXT
I'm developing a Meteor app that is used as a CMS: I create content and store data in MongoDB collections. The goal is to use this data and a React project to build a static website, which is sent to an AWS S3 bucket for hosting purposes.
I'm using MeteorUp to deploy my Meteor app (on an AWS EC2 instance) and, according to the MeteorUp documentation (http://meteor-up.com/docs.html#volumes), I added a Docker volume in my mup.js:
module.exports = {
  ...
  meteor: {
    ...
    volumes: {
      '/opt/front': '/front'
    },
    ...
  },
  ...
};
Once deployed, the volume is correctly set in '/opt/myproject/config/start.sh':
sudo docker run \
  -d \
  --restart=always \
  $VOLUME \
  \
  --expose=3000 \
  \
  --hostname="$HOSTNAME-$APPNAME" \
  --env-file=$ENV_FILE \
  \
  --log-opt max-size=100m --log-opt max-file=10 \
  -v /opt/front:/front \
  --memory-reservation 600M \
  \
  --name=$APPNAME \
  $IMAGE
echo "Ran abernix/meteord:node-8.4.0-base"

# When using a private docker registry, the cleanup run in
# Prepare Bundle is only done on one server, so we also
# cleanup here so the other servers don't run out of disk space
if [[ $VOLUME == "" ]]; then
  # The app starts much faster when prepare bundle is enabled,
  # so we do not need to wait as long
  sleep 3s
else
  sleep 15s
fi
On my EC2 instance, '/opt/front' contains the React project used to generate the static website.
This folder includes a package.json file, and all modules are available in the 'node_modules' directory. 'react-scripts' is one of them, and package.json contains the following script line:
"build": "react-scripts build",
React Project
React App is fed with a JSON file available in 'opt/front/src/assets/datas/publish.json'.
This JSON file can be hand-written (so the project can be developed independently) or generated by my Meteor App.
Meteor App
Client-side, on the User Interface, we have a 'Publish' button that the Administrator can click when she/he wants to generate the static website (using CMS datas) and deploy it to the S3 bucket.
It calls a Meteor method (server-side).
Its action is separated into 3 steps:
1. Collect all useful data and save it into a Publish collection
2. JSON creation
a. Get the Publish collection's first entry into a JavaScript object.
b. Write a JSON file using that object in the React project directory ('opt/front/src/assets/datas/publish.json').
Here's the code:
import fs from 'fs';

let publishDatas = Publish.find({}, {sort: {createdAt: -1}}).fetch();
let jsonDatasString = JSON.stringify(publishDatas[0]);

fs.writeFile('/front/src/assets/datas/publish.json', jsonDatasString, 'utf8', function (err) {
  if (err) {
    return console.log(err);
  }
});
3. Static website build
a. Run a cd command to reach the React project's directory, then run the 'build' script using this code:
process_exec_sync = function (command) {
  // Load Future from fibers
  var Future = Npm.require("fibers/future");
  // Load exec
  var child = Npm.require("child_process");
  // Create new future
  var future = new Future();
  // Run command synchronously
  child.exec(command, {maxBuffer: 1024 * 10000}, function (error, stdout, stderr) {
    // Return an object to identify error and success
    var result = {};
    // Test for error
    if (error) {
      result.error = error;
    }
    // Return stdout
    result.stdout = stdout;
    future.return(result);
  });
  // Wait for future
  return future.wait();
};
var build = process_exec_sync('(cd front && npm run build)');
b. If 'build' is OK, then I send the 'front/build' content to my S3 bucket.
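As a small illustration (my sketch, not part of the original method), the object returned by process_exec_sync above can be checked before anything is pushed to S3:
// Sketch: only continue with the S3 upload if the React build succeeded
var build = process_exec_sync('(cd front && npm run build)');
if (build.error) {
  throw new Meteor.Error('build-failed', build.error.message);
}
// otherwise read the 'front/build' folder and send its files to the S3 bucket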
Behaviors:
On the local environment (Meteor running in development mode):
FYI: the React project directory's name and location are slightly different.
It's located in my Meteor project directory, so instead of 'front', it's named '.#front' because I don't want Meteor to restart every time a file is modified, added or deleted.
Everything works well, but I'm fully aware that I'm in development mode and I benefit from my local environment.
On the production environment (Meteor running in production mode in a Docker container):
Step 2.b: It works well, I can see the newly generated file in 'opt/front/src/assets/datas/'
Step 3.a: I get the following error:
"Error running ls: Command failed: (cd /front && npm run build)
(node:39) ExperimentalWarning: The WHATWG Encoding Standard
implementation is an experimental API. It should not yet be used in
production applications.
npm ERR! code ELIFECYCLE npm ERR! errno 1 npm
ERR! front#0.1.0 build: react-scripts build npm ERR! Exit status 1
npm ERR! npm ERR! Failed at the front#0.1.0 build script. npm ERR!
This is probably not a problem with npm. There is likely additional
logging output above.
npm ERR! A complete log of this run can be found in: npm ERR!
/root/.npm/_logs/2021-09-16T13_55_24_043Z-debug.log [exec-fail]"
So here's my question:
On production mode, is it possible to use Meteor to reach another directory and run a script from a package.json?
I've been searching for an answer for months, and can't find a similar or nearby case.
Am I doing something wrong?
Am I using a wrong approach?
Am I crazy? :D
Thank you so much to have read until the end.
Thank you for your answers!
!!!!! UPDATE !!!!!
I found the solution!
In fact I had to check a few things on my EC2 instance over SSH:
once connected, I went to '/opt/front/' and tried to build the React app with 'npm run build'
I got a first error because permissions on that directory were not set to 777 (noob!)
then I got an error because of node-sass.
The reason is that my Docker image uses Node v8, while my EC2 instance uses Node v16.
I had to install NVM, switch to Node v8, delete my React app's node_modules (and package-lock.json) and reinstall.
Once that was done, everything worked perfectly!
I now have a Meteor app acting as a CMS / preview website, hosted on an EC2 instance, that can publish a static website to an S3 bucket.
Thank you for reading me!

Please ensure that your service worker file contains the following:/(const precacheManifest =)\[\](;)/

I am quite new to React and Workbox. I am trying to make my Electron React app able to cache all images and data so that they are available while it is offline.
This is exactly what I am trying to accomplish, as shown in this YouTube video from 14:00 to 21:00 minutes: Building PWAs with React and Workbox, /watch?v=Ok2r1M1jM_M
But this command is giving
"start-sw":"workbox injectManifest workbox-config.js && workbox copylibraries build/ && http-server build/ -c 0"
This error:
C:\Users\rajesh.ram\Desktop\Day\K\demok\client>npm run start-sw
> client#0.1.0 start-sw C:\Users\rajesh.ram\Desktop\Day\K\demok\client
> workbox injectManifest workbox-config.js && workbox copylibraries build/ && http-server build/ -c 0
Using configuration from C:\Users\rajesh.ram\Desktop\Day\K\demok\client\workbox-config.js.
Service worker generation failed:
Unable to find a place to inject the manifest. Please ensure that your service worker file contains the following: /(const precacheManifest =)\[\](;)/
Please help me fix this or suggest alternative packages/repositories/videos to make it possible.
In newer Workbox versions, including 5.1.3 (current at the time of this post), the parameter which specifies the injection point for the precache manifest has changed from a regex to a string. The name of the parameter has also changed, and as far as I can tell this is not backwards compatible, meaning the regex no longer works.
module.exports = {
  "globDirectory": "build/",
  "globPatterns": [
    "**/*.{json,ico,html,png,js,txt,css,svg}"
  ],
  "swDest": "build/sw.js",
  "swSrc": "src/sw.js",
  "injectionPoint": "injectionPoint"
};
Changing that parameter as per above worked for me following the rest of the video.
Then several other updates affected how sw.js is written also...
importScripts("workbox-v5.1.3/workbox-sw.js");
workbox.setConfig({ modulePathPrefix: "workbox-v5.1.3/" });
const precacheManifest = [injectionPoint];
workbox.precaching.precacheAndRoute(precacheManifest);
You have to remove the .suppressWarnings() call; it has been removed. A good video... it just needs some updates.
Link to the presentation's GitHub, which needs an update too:
https://github.com/mikegeyser/building-pwas-with-react
Link to the manual: https://developers.google.com/web/tools/workbox/reference-docs/latest/module-workbox-build
#MegPhillips91
By changing the parameter of precacheAndRoute as below, it worked for me:
workbox.precaching.precacheAndRoute(self.__WB_MANIFEST);
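For reference, in Workbox 5 the default injection point for workbox injectManifest is the string self.__WB_MANIFEST, so a src/sw.js along these lines should work without setting a custom injectionPoint (the version number in the paths is only an example):
importScripts("workbox-v5.1.3/workbox-sw.js");
workbox.setConfig({ modulePathPrefix: "workbox-v5.1.3/" });

// injectManifest replaces self.__WB_MANIFEST with the generated precache manifest
workbox.precaching.precacheAndRoute(self.__WB_MANIFEST);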
If you're following the video strictly, make sure that the custom sw.js file that you create in the src folder is exactly:
importScripts("workbox-v4.3.1/workbox-sw.js");
workbox.setConfig({ modulePathPrefix: "workbox-v4.3.1/" });
const precacheManifest = [];
workbox.precaching.suppressWarnings();
workbox.precaching.precacheAndRoute(precacheManifest);
and workbox-config.js
module.exports = {
  globDirectory: "build/",
  globPatterns: ["**/*.{json,ico,html,png,js,txt,css}"],
  swDest: "build/sw.js",
  swSrc: "src/sw.js",
  injectionPointRegexp: /(const precacheManifest = )\[\](;)/
};
Make sure the Workbox version matches the one you have; in the video he uses 3.6.3, but now it's 4.3.1. Hope this helps.

Graphicsmagick not working in Elastic Beanstalk with nodejs and S3

I'm using Node.js and GraphicsMagick to process images with text, then streaming the final JPG to S3.
Using Postman, I was able to test this flow on my localhost and everything works fine. However, I'm having issues now that I've moved it to Elastic Beanstalk. When I post to the endpoint, it uploads a blank file to S3 and there are no errors logged in EB. I think it has something to do with the software installed on the instance, but I'm a bit stuck. Any advice appreciated! Thanks!
Top file is from localhost, bottom file is from Elastic Beanstalk:
http://cl.ly/image/0O231k171N0W
var gm = require('gm');
var appRoot = require('app-root-path').path;

function createImage(caption, res) {
    var originalImage = '/images/2015-02-24.jpg';
    var textColor = 'white';
    gm(appRoot + originalImage)
        .fill(textColor)
        .font(appRoot + '/fonts/BentonSans-Book.otf')
        .drawText(0, 0, caption, 'Center')
        .stream(function (err, stdout, stderr) {
            sendToS3(err, stdout, stderr, originalImage, res);
        });
}

function sendToS3(err, stdout, stderr, originalImage, client_response) {
    var imageName = shortId.generate();
    var buff = new Buffer('');
    stdout.on('data', function (data) {
        buff = Buffer.concat([buff, data]);
    });
    stdout.on('end', function (data) {
        var data = {
            Bucket: S3_bucket,
            Key: imageName + '.jpg',
            Body: buff,
            ContentType: mime.lookup(originalImage)
        };
        s3.putObject(data, function (err, res) {
            client_response.send('done');
        });
    });
}
===============================================================
EDIT:
Instead of streaming to S3, I changed it to write directly to the filesystem. The error being thrown in AWS EB logs is:
err { [Error: Command failed: gm convert: Request did not return an
image.] code: 1, signal: null }
I believe I'm missing some dependencies for ImageMagick. Any thoughts?
This is from running convert --version in my local terminal:
Version: ImageMagick 6.8.9-7 Q16 x86_64 2014-08-31
http://www.imagemagick.org
Copyright: Copyright (C) 1999-2014 ImageMagick Studio LLC
Features: DPC Modules
Delegates: bzlib freetype jng jpeg ltdl lzma png xml zlib
This is from running convert --version in my EC2 instance (The Delegates section is empty):
Version: ImageMagick 6.9.1-1 Q16 x86_64 2015-04-10
http://www.imagemagick.org
Copyright: Copyright (C) 1999-2015 ImageMagick Studio LLC
License: http://www.imagemagick.org/script/license.php
Features: DPC OpenMP
Delegates (built-in):
How are you installing GraphicsMagick on your EC2 instance in Elastic Beanstalk? Are you using a custom AMI? The default AMI (at least the ones I've used) didn't have GraphicsMagick; I don't know about ImageMagick, though.
You can use container commands to install packages with yum. I used the one below on a project where I needed GraphicsMagick.
Create a folder at the root of your project named '.ebextensions'. Inside that folder, create a file called 'package.config' with the following contents:
commands:
  01-command:
    command: yum install -y --enablerepo=epel GraphicsMagick
This will install it when the instance is created. I have a feeling this should resolve your issue; if not, you may want to use command line options for yum to install the same version, or install the delegates as well:
commands:
  01-command:
    command: yum install -y --enablerepo=epel GraphicsMagick
  02-command:
    command: yum install -y --enablerepo=epel GraphicsMagick-devel
I lowered my Elastic Beanstalk Node.js version from Node 12 to Node 8.15.0, and yum CAN find GraphicsMagick and installs it successfully. (I listed GraphicsMagick in .ebextensions/packages.config.)
Hope this will help someone!
