My application does the following inside a web server when handling a request. All imports are plain default node modules (tar, stream, base64-stream etc.) except fs-extra, which is a wrapper around node's built-in fs and is used instead of the wrapped fs:
let archiveFileStream = fse.createWriteStream(tmpDir + "/" + ARCHIVE_NAME);
let saveArchive = new ArchiveWriter(archiveFileStream, {});
stream.pipeline(request, new Base64Decode(), saveArchive, tar.extract({
    strip: 0,
    C: tmpDir,
    sync: true,
    gzip: true
}), function() {
    try {
        saveArchive.end(null, null, function() {
            try {
                logger.info("Saved file.");
                doSomethingWithUnpackedArchive();
                setTimeout(function() {
                    copyAllFilesToNewLocationSync();
                }, 1000);
            } catch(error) {
                logger.error("Some errors", error);
                // callback is simply a callback passed as parameter to this method - never null
                callback(null, error);
            }
        });
    } catch (error) {
        logger.error("Error while processing archive from request.", error);
        callback(null, error);
    } finally {
        setTimeout(() => fse.remove(tmpDir), 1500);
    }
});
The ArchiveWriter class referenced in that piece of code is:
class ArchiveWriter extends stream.Transform {
    constructor(writeStream, options) {
        super(options);
        this.output = writeStream;
        this.finished = false;
    }

    _transform(chunk, encoding, done) {
        this.output.write(chunk);
        this.push(chunk);
        done();
    }

    _flush(done) {
        logger.debug("Flushing archive.");
        this.output.end();
        this.finished = true;
        done();
        if (this.callback) {
            this.callback();
        }
    }
}
The entire application is packaged into a Docker image that uses alpine as the base image, with node added on top:
FROM alpine
COPY app/ /opt/app/
RUN apk update; \
    apk add node nano curl net-tools wget bash less socat zip unzip procps jq; \
    chmod -R u+rwx /opt/app; \
    adduser --disabled-password --no-create-home app; \
    chown -R app:app /opt/app;
USER app
CMD "node main.js"
When I run the image locally using docker run and upload an archived, base64-encoded file to the app, everything works fine. When I run the same image as part of a pod in a Kubernetes cluster, I get the message Flushing archive. from the archive writer, but the archive is never fully saved, and therefore processing never proceeds. If I skip creating the app user when building the image, i.e. build it like this:
FROM alpine
COPY app/ /opt/app/
RUN apk update; \
    apk add node nano curl net-tools wget bash less socat zip unzip procps jq; \
    chmod -R u+rwx /opt/app;
CMD "node main.js"
then everything runs in Kubernetes as it does locally, i.e. the archive is fully saved and further processed. The behavior doesn't seem to depend on the size of the uploaded file.
When running as non-root inside Kubernetes, the error is reproducible even if I copy the archive into the container, open a shell in the container, and upload the file from there using curl. This puzzles me even more: the isolation from the host OS that containers promise is apparently not that good, since the same operation performed strictly inside the container has different outcomes in two different environments.
My question: why does node behave differently in Kubernetes when running as root versus as non-root, and why does it not behave differently in these two situations when running locally? Could it be related to the fact that alpine uses busybox instead of a normal shell? Or are there limitations in alpine that might affect network traffic of non-privileged processes?
Related
I have a node app running, and I need to access a command that lives in an alpine docker image.
Do I have to use exec inside of javascript?
How can I install latex on an alpine container and use it from a node app?
I pulled an alpine docker image, started it and installed latex.
Now I have a docker container running on my host. I want to access this latex compiler from inside my node app (dockerized or not) and be able to compile *.tex files into *.pdf.
If I sh into the alpine image I can compile *.tex into *.pdf just fine, but how can I access this software from outside the container, e.g. from a node app?
If you just want to run the LaTeX engine over files that you have in your local container filesystem, you should install it directly in your image and run it as an ordinary subprocess.
For example, this JavaScript code will run in any environment that has LaTeX installed locally, Docker or otherwise:
import { execFileSync } from 'node:child_process';
import { mkdtemp, open } from 'node:fs/promises';

const tmpdir = await mkdtemp('/tmp/latex-');
let input;
try {
    input = await open(tmpdir + '/input.tex', 'w');
    await input.write('\\documentclass{article}\n\\begin{document}\n...\n\\end{document}\n');
} finally {
    await input?.close();
}
execFileSync('pdflatex', ['input'], { cwd: tmpdir, stdio: 'inherit' });
// produces tmpdir + '/input.pdf'
In a Docker context, you'd have to make sure LaTeX is installed in the same image as your Node application. You mention using an Alpine-based LaTeX setup, so you could:
FROM node:lts-alpine
RUN apk add texlive-full # or maybe a smaller subset
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY ./ ./
CMD ["node", "main.js"]
You should not try to directly run commands in other Docker containers. There are several aspects of this that are tricky, including security concerns and managing the input and output files. If it's possible to directly invoke a command in a new or existing container, it's also very straightforward to use that permission to compromise the entire host.
I have a node js application which needs to access ffmpeg. My approach has been to use {exec} from the child_process module built into Node. The problem is that {exec} always starts the command line from the current directory, and I can't figure out how to point the directory context to where ffmpeg.exe is located so that I can access the program. Is my approach flawed? How can I access a separate CLI application from Node?
This code returns "'ffmpeg' is not recognized as an internal or external command" because I'm obviously in Node's execution context which is not where ffmpeg is located.
I also do not want to store the node application in the directory of ffmpeg.exe because that's just lazy and impractical.
exec(`ffmpeg -i ${filepathToMedia} --vf fps=1 ${outputdirectory}/out%d.png`, (error, stdout, stderr) => {
    if (error) {
        console.log(`error: ${error.message}`);
        return;
    }
    if (stderr) {
        console.log(`stderr: ${stderr}`);
        return;
    }
    if (stdout) {
        console.log(`success: ${stdout}`);
    }
});
You could do one of two things here:
Use the absolute path to the ffmpeg executable, so instead of just exec('ffmpeg ...') you'd do something like exec('C:\\Users\\user\\ffmpeg_dir\\ffmpeg ...') (see the sketch below). This isn't very clean and will probably break if someone else tries to use your code.
Add your ffmpeg directory to your system's PATH environment variable. If you add ffmpeg to your PATH it becomes available as ffmpeg regardless of what folder you're in, allowing the script to work as-is. This'll also make it easier for other people to run your script.
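For illustration, here is a minimal sketch of the first option using execFile with an absolute path. The install path is hypothetical, and filepathToMedia / outputdirectory are the variables from your snippet; adjust everything to your machine.
const { execFile } = require('child_process');
// Hypothetical Windows install location of ffmpeg - adjust to your setup.
const FFMPEG_PATH = 'C:\\Users\\user\\ffmpeg_dir\\bin\\ffmpeg.exe';
// Passing the arguments as an array avoids shell quoting issues.
execFile(FFMPEG_PATH, ['-i', filepathToMedia, '-vf', 'fps=1', `${outputdirectory}/out%d.png`],
    (error, stdout, stderr) => {
        if (error) {
            console.log(`error: ${error.message}`);
            return;
        }
        // ffmpeg writes its progress log to stderr even on success.
        console.log(`finished: ${stderr || stdout}`);
    });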
We have an app that's running in GCP, in Kubernetes services. The backend is inside a container running a node:alpine base image. We try to use the nodejs client library for Google Cloud Storage ("@google-cloud/storage": "~2.0.3") to upload files to our bucket, as in the github repo samples:
return new Promise((resolve, reject) => {
    storage.bucket(bucketName)
        .upload(path.join(sourcePath, filename),
            {
                'gzip': true,
                'metadata': {
                    'cacheControl': 'public, max-age=31536000',
                },
            }, (err) => {
                if (err) {
                    return reject(err);
                }
                return resolve(true);
            });
});
It works fine for files smaller than 5 MB, but with larger files I get an error:
{"name":"ResumableUploadError"}
A few google searches later, I see that the client automatically switches to a resumable upload. Unfortunately, I cannot find any example of how to manage this special case with the node client. We want to allow uploads up to 50 MB, so it's a bit of a concern right now.
OK, just so you know: the problem was that my container runs the node:alpine image. The alpine distributions are stripped down to the minimum, so there was no ~/.config folder, which is needed by the Configstore library that the @google-cloud/storage node library uses. I had to go into the repo, check the code, and see the comment in file.ts. Once I added the folder in the container (by adding RUN mkdir ~/.config to the Dockerfile), everything started to work as intended.
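If you would rather not modify the Dockerfile, a minimal application-side sketch of the same fix (creating ~/.config before uploading) could look like this, assuming the process's home directory itself is writable:
const fs = require('fs');
const os = require('os');
const path = require('path');
// Configstore (used by @google-cloud/storage for resumable uploads) keeps its
// state under ~/.config, which the stripped-down Alpine image does not provide.
fs.mkdirSync(path.join(os.homedir(), '.config'), { recursive: true });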
Alternatively you can set resumable: false in the options you pass in. So the complete code would look like this:
return new Promise((resolve, reject) => {
    storage.bucket(bucketName)
        .upload(path.join(sourcePath, filename),
            {
                'resumable': false,
                'gzip': true,
                'metadata': {
                    'cacheControl': 'public, max-age=31536000',
                },
            }, (err) => {
                if (err) {
                    return reject(err);
                }
                return resolve(true);
            });
});
If you still want to have resumable uploads and don't want to create additional bespoke directories in the Dockerfile, here is another solution.
Resumable upload requires a writable directory to be accessible. Depending on the OS and how you installed @google-cloud/storage, the default config path can change. To make sure this always works, without having to create specific directories in your Dockerfile, you can point configPath at a writable file.
Here's an example of what you can do. Be sure to point configPath to a file, not an existing directory (otherwise you'll get Error: EISDIR: illegal operation on a directory, read):
gcsBucket.upload(
    `${filePath}`,
    {
        destination: `${filePath}`,
        configPath: `${writableDirectory}/.config`,
        resumable: true
    }
);
I'm new to Docker and I have some difficulties to understand how I should use it.
For now, I'm wondering whether it makes sense to try sending commands to a docker machine on my computer from the client-side script of a javascript web app, using an SDK like Dockerode.
I installed Docker CE for Windows (17.06.0-ce) and Docker Toolbox, and I ran a container on the default machine using the docker terminal. Now I'm wondering if the commands I typed could be sent from a web app using NodeJS. I tried this code:
import Docker from 'dockerode';

const docker = new Docker({host: 'myDefaultMachineHost'});

export function createLocalDb () {
    docker.pull('someImageFromDockerHub', function (err, stream) {
        if (err) console.log("Catch : " + err.toString());
        stream.pipe(process.stdout, {end: true});
        stream.on('end', function() {
            //run the container
        }).catch(function (err) {
            console.log("Catch : " + err.toString());
        });
    });
}
But that doesn't work (stream.pipe throws an error). Am I misunderstanding the context in which I'm supposed to use dockerode?
Thanks for your explanations!
In short: you need to change your code to const docker = new Docker({socketPath: '/var/run/docker.sock'}); and make the docker socket available inside your container.
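For illustration, here is a minimal sketch of the Node side once the socket is mounted; alpine:latest is just a placeholder image and error handling is kept to a minimum:
const Docker = require('dockerode');
// Talk to the Docker daemon through the socket mounted into the container.
const docker = new Docker({ socketPath: '/var/run/docker.sock' });
// Placeholder image name - pull whatever image you actually need.
docker.pull('alpine:latest', (err, stream) => {
    if (err) return console.error(err);
    // followProgress calls back once the pull has completed.
    docker.modem.followProgress(stream, (err) => {
        if (err) return console.error(err);
        console.log('Image pulled; the container can be created here.');
    });
});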
Theory:
You have a docker socket on your local machine. You need to make this socket available inside your docker container; a volume mount is the solution.
Implementation with arguments
This is a simple task for Linux/Mac users. They can run:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
On Windows you need to run:
docker run -v //var/run/docker.sock:/var/run/docker.sock ...
More details in this question.
Implementation with Dockerfile
Also, you can add a VOLUME instruction to your Dockerfile.
On Linux/Mac it should be a line like this:
VOLUME /var/run/docker.sock /var/run/docker.sock
I don't know how it would look on Windows; I use Mac.
I am trying to see if my company can use Azure Functions to automate conversions of TIFF files to a number of JPG and PNG formats and sizes. I am using Functions with Node.js, but other languages could be used.
My problem is that I can't get GraphicsMagick or ImageMagick to work on Functions. I used the normal installation procedure via npm install.
It seems to install ok, and the module also seems to load, but nothing happens when I try to process a file. Nothing, as in no errors either.
var fs = require('fs');
var gm = require('gm');

module.exports = function (context, req) {
    context.log('Start...');
    try {
        context.log('Looking for GM...');
        context.log(require.resolve("gm"));
    } catch(e) {
        console.log("GM is not found");
        process.exit(e.code);
    }

    gm('D:/home/site/wwwroot/HttpTriggerJS1/input/870003-02070-main-nfh.jpg')
        .resize(240, 240)
        .noProfile()
        .write('D:/home/site/wwwroot/HttpTriggerJS1/output/resize.jpg',
            function (err) {
                context.log('TEST');
                if (!err) {
                    context.log('done');
                }
            }
        );

    context.done(null, res);
};
I'm not sure that it's even possible, but I haven't found any information that states that it can't.
So, can I use ImageMagick, GraphicsMagick or some other image converter in Functions? If yes, is there something special that I need to be aware of when installing?
Is there also a C# solution to this?
Web Apps in Azure is a PaaS (Platform as a Service). You deploy your bits to the Azure IIS containers, and Azure does the rest. We don't get much control.
So we do not have the privilege to install any 3rd-party executable on an Azure Functions app (e.g. ImageMagick or GraphicsMagick). If you need to do that, look at Virtual Machines. Another option is using Cloud Services' Web or Worker Roles.
Alternatively, there is a good image processing library for Node, written entirely in JavaScript with zero external or native dependencies: Jimp. https://github.com/oliver-moran/jimp
Example usage:
var Jimp = require("jimp");
Jimp.read("lenna.png").then(function (lenna) {
lenna.resize(256, 256) // resize
.quality(60) // set JPEG quality
.greyscale() // set greyscale
.write("lena-small-bw.jpg"); // save
}).catch(function (err) {
console.error(err);
});
There is another node.js library called sharp that can achieve your requirement. You may try it this way:
First, install sharp in your local environment, then deploy your application to Azure with the node_modules folder that contains the compiled module. Finally, upgrade the node executable on Azure App Service to 64-bit.
You can refer to a similar thread here.
Example usage:
var sharp = require("sharp");

sharp(inputBuffer)
    .resize(320, 240)
    .toFile('output.webp', (err, info) => {
        //...
    });
Azure Functions can also run custom docker images:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-function-linux-custom-image
Not sure which language you are interested in, but you can build a python image with a Dockerfile like the one below:
FROM mcr.microsoft.com/azure-functions/python:2.0
RUN apt-get update && \
    apt-get install -y --no-install-recommends apt-utils && \
    apt-get install -y imagemagick
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY . /home/site/wwwroot
RUN cd /home/site/wwwroot && \
    pip install -r requirements.txt
And then use PythonMagick to work with the images.
You can use a site extension to make ImageMagick work for Azure Web Apps.
You can check the repository for more info: https://github.com/fatihturgut/azure-imagemagick-nodejs