Google Cloud Storage node client ResumableUploadError - node.js

We have an app running in GCP, on Kubernetes. The backend is inside a container built from a node:alpine base image. We try to use the Node.js client library for Google Cloud Storage ("@google-cloud/storage": "~2.0.3") to upload files to our bucket, as in the GitHub repo samples:
return new Promise((resolve, reject) => {
  storage.bucket(bucketName)
    .upload(path.join(sourcePath, filename),
      {
        gzip: true,
        metadata: {
          cacheControl: 'public, max-age=31536000',
        },
      },
      (err) => {
        if (err) {
          return reject(err);
        }
        return resolve(true);
      });
});
It works fine for files smaller than 5 MB, but with larger files I get an error:
{"name":"ResumableUploadError"}
A few Google searches later, I see that the client automatically switches to a resumable upload for larger files. Unfortunately, I cannot find any example of how to handle this case with the Node client. We want to allow uploads up to 50 MB, so it's a bit of a concern right now.

OK, just so you know: the problem was that my container runs the node:alpine image. The Alpine distribution is stripped down to the minimum, so there was no ~/.config folder, which is used by the Configstore library that @google-cloud/storage relies on. I had to dig through the repo and found the relevant comment in file.ts. Once I added the folder in the container (by adding RUN mkdir ~/.config to the Dockerfile), everything started to work as intended.
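If you'd rather not change the Dockerfile, the same folder can also be created from Node before the first upload runs (a minimal sketch, assuming Configstore resolves to the default ~/.config location):
const fs = require('fs');
const os = require('os');
const path = require('path');

// Create the folder Configstore expects if the base image doesn't ship it.
const configDir = path.join(os.homedir(), '.config');
if (!fs.existsSync(configDir)) {
  fs.mkdirSync(configDir);
}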

Alternatively you can set resumable: false in the options you pass in. So the complete code would look like this:
return new Promise((resolve, reject) => {
  storage.bucket(bucketName)
    .upload(path.join(sourcePath, filename),
      {
        resumable: false,
        gzip: true,
        metadata: {
          cacheControl: 'public, max-age=31536000',
        },
      },
      (err) => {
        if (err) {
          return reject(err);
        }
        return resolve(true);
      });
});

If you still want resumable uploads and you don't want to create additional bespoke directories in your Dockerfile, here is another solution.
Resumable uploads require a writable location to be accessible. Depending on the OS and how you installed @google-cloud/storage, the default config path can change. To make sure this always works, without having to create specific directories in your Dockerfile, you can point configPath at a writable file.
Here's an example of what you can do. Be sure to point configPath at a file, not an existing directory (otherwise you'll get Error: EISDIR: illegal operation on a directory, read):
gcsBucket.upload(
  `${filePath}`,
  {
    destination: `${filePath}`,
    configPath: `${writableDirectory}/.config`,
    resumable: true
  }
);
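For completeness, here's a sketch of how that might be wired up; writableDirectory is an assumption (any path the container user can write to, such as /tmp or a mounted emptyDir), and gcsBucket is the bucket handle from above:
const fs = require('fs');

const writableDirectory = '/tmp/gcs-upload-config'; // assumed writable location

async function uploadResumable(filePath) {
  // Make sure the directory exists; configPath itself must point at a file inside it.
  if (!fs.existsSync(writableDirectory)) {
    fs.mkdirSync(writableDirectory);
  }
  await gcsBucket.upload(filePath, {
    destination: filePath,
    configPath: `${writableDirectory}/.config`,
    resumable: true
  });
}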

Related

node.js running inside k8s as non-root fails to download archive

My application does the following inside a web server when handling a request. All imports are plain default Node modules (tar, stream, base64-stream, etc.), except fs-extra, which is a wrapper around Node's built-in fs and is used instead of the wrapped fs:
let archiveFileStream = fse.createWriteStream(tmpDir + "/" + ARCHIVE_NAME);
let saveArchive = new ArchiveWriter(archiveFileStream, {});
stream.pipeline(request, new Base64Decode(), saveArchive, tar.extract({
  strip: 0,
  C: tmpDir,
  sync: true,
  gzip: true
}), function() {
  try {
    saveArchive.end(null, null, function() {
      try {
        logger.info("Saved file.");
        doSomethingWithUnpackedArchive();
        setTimeout(function() {
          copyAllFilesToNewLocationSync();
        }, 1000);
      } catch (error) {
        logger.error("Some errors", error);
        // callback is simply a callback passed as parameter to this method - never null
        callback(null, error);
      }
    });
  } catch (error) {
    logger.error("Error while processing archive from request.", error);
    callback(null, error);
  } finally {
    setTimeout(() => fse.remove(tmpDir), 1500);
  }
});
The ArchiveWriter class referenced in that piece of code is:
class ArchiveWriter extends stream.Transform {
  constructor(writeStream, options) {
    super(options);
    this.output = writeStream;
    this.finished = false;
  }

  _transform(chunk, encoding, done) {
    this.output.write(chunk);
    this.push(chunk);
    done();
  }

  _flush(done) {
    logger.debug("Flushing archive.");
    this.output.end();
    this.finished = true;
    done();
    if (this.callback) {
      this.callback();
    }
  }
}
The entire application is packaged into a Docker image using Alpine as the base image, to which Node was added:
FROM alpine
COPY app/ /opt/app/
RUN apk update; \
apk add node nano curl net-tools wget bash less socat zip unzip procps jq; \
chmod -R u+rwx /opt/app; \
adduser --disabled-password --no-create-home app; \
chown -R app:app /opt/app;
USER app
CMD "node main.js"
When I run the image locally, using docker run, and upload an archived and base64-encoded file to the app, everything works fine. When I run the Docker image as part of a pod in a Kubernetes cluster, I get the message Flushing archive. from the archive writer, but the archive is never fully saved, and therefore processing never proceeds. If I skip creating the app user when building the image, i.e. build the image like this:
FROM alpine
COPY app/ /opt/app/
RUN apk update; \
apk add node nano curl net-tools wget bash less socat zip unzip procps jq; \
chmod -R u+rwx /opt/app;
CMD "node main.js"
then everything runs in Kubernetes just as it does locally, i.e. the archive is fully saved and processed further. The behavior doesn't seem to depend on the size of the uploaded file.
When running as non-root inside Kubernetes, the error is reproducible even if I copy the archive into the container, open a shell in the container, and upload the file from there using curl. This puzzles me even more: the isolation that containers promise from the host OS is seemingly not that good, since the same operation performed strictly inside the container has different outcomes in two different environments.
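To compare the two environments directly, a quick check along these lines can be run inside each container (a diagnostic sketch, not part of the application; /opt/app comes from the Dockerfile above and os.tmpdir() stands in for wherever tmpDir points):
// diagnostic.js - run with `node diagnostic.js` inside the container
const fs = require('fs');
const os = require('os');

console.log('uid/gid:', process.getuid(), process.getgid());
console.log('home:', os.homedir(), 'tmpdir:', os.tmpdir());

for (const dir of ['/opt/app', os.tmpdir()]) {
  try {
    fs.accessSync(dir, fs.constants.W_OK); // throws if not writable
    console.log(dir + ' is writable');
  } catch (err) {
    console.log(dir + ' is NOT writable: ' + err.code);
  }
}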
My question: why does Node behave differently in Kubernetes when running as root versus non-root, and why doesn't it behave differently in those two situations when running locally? Could it be related to the fact that Alpine uses BusyBox instead of a regular shell? Or are there limitations in Alpine that might impact network traffic of non-privileged processes?

Receiving "error: no such file or directory, open" when passing a remote file to libreoffice-convert library in a Node.js app

I'm currently building a Node.js application that will eventually be used to convert certain file formats into other formats. Most of the work is being done by the libreoffice-convert library.
I am able to do file conversions without any issues when passing a local file path to the library, but it doesn't seem to work when I grab the contents of a remote file via request() and pass the received body to libreoffice-convert.
This is the relevant code I have right now:
request(fileUrl, {encoding: 'binary'}, function(error, response, body) {
  const ext = '.html';
  libre.convert(body, ext, undefined, (err, done) => {
    if (err) {
      console.log(`Error converting file: ${err}`);
      res.sendStatus(500);
    } else {
      console.log(done);
    }
  });
});
I can see that when I run this, LibreOffice starts the conversion, but eventually I get this error:
Error: ENOENT: no such file or directory, open '/var/folders/j9/z_z85kh5501dbslrg53mpjsw0000gn/T/libreofficeConvert_-6529-x08o2o3peLMh/source..html'
The example libreoffice-convert code reads the local file using fs.readFileSync(), but since I want to get my contents from a remote file, I'm passing the body received in the request() call.
To be sure that body has the correct contents, I compared the result I get from fs.readFileSync() with the result I get from request() for the exact same file, locally and remotely. There didn't seem to be any differences at all.
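A comparison along those lines can be sketched like this (localPath is a hypothetical local copy of the same file; with encoding: 'binary' the body arrives as a string, so it's converted back to a Buffer before comparing):
const fs = require('fs');

const local = fs.readFileSync(localPath); // Buffer
request(fileUrl, { encoding: 'binary' }, (error, response, body) => {
  const remote = Buffer.from(body, 'binary');
  console.log('identical:', Buffer.compare(local, remote) === 0);
});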
Am I missing something, or is it that the libreoffice-convert library or LibreOffice itself doesn't support this?
libreoffice-convert depends on a Linux package, namely libreoffice-writer. Running apt install libreoffice-writer will solve your problem.

Gulp vinyl ftp - how to use clean function?

The vinyl-ftp package has a clean() function, but I'm not sure how to use it correctly. I need to:
get all files from my build folder
put them into the target folder on my FTP server
remove remote files if they're not available locally
I have the following gulp task:
gulp.task('deploy', () => {
  let conn = ftp.create({ host: host, user: user, password: password });
  return gulp.src('build/**', { base: './build/', buffer: false })
    .pipe(conn.newer('/path/on/my/server/')) // only upload newer files
    .pipe(conn.dest('/path/on/my/server/'))
    .pipe(conn.clean('build/**', './build/'));
});
1) and 2) are OK, but the clean() function does nothing.
The vinyl-ftp docs have this to say:
conn.clean( globs, local[, options] )
Globs remote files, tests if they are locally available at <local>/<remote.relative> and removes them if not.
Note that globs expects a path for the remote files on your FTP server. Since your remote files are located in /path/on/my/server/ you have to specify that path as your glob:
.pipe(conn.clean('/path/on/my/server/**', './build/'));
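Putting that together with the task from the question, the deploy task would look something like this (a sketch; the paths are the ones used above):
gulp.task('deploy', () => {
  let conn = ftp.create({ host: host, user: user, password: password });
  return gulp.src('build/**', { base: './build/', buffer: false })
    .pipe(conn.newer('/path/on/my/server/'))                // only upload newer files
    .pipe(conn.dest('/path/on/my/server/'))
    .pipe(conn.clean('/path/on/my/server/**', './build/')); // glob remote files, compare against ./build
});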
Since I struggled a lot with this, here is a working piece of code. It removes all files from the server that don't exist locally, except for the usage folder:
var connection = ftp.create({ ... });

connection.clean([
  '/*.*',
  '/!(usage)*',
  '/de/**',
  '/en/**',
  '/images/**',
  '/fonts/**',
  '/json/**',
  '/sounds/**'
], './dist', { base: '/' });
My files are located locally in the ./dist folder and remotely directly in the root directory (/) of the FTP user.

Creating directory at wrong place after packaging Electron

Electron version: 0.37.5
Operating system: Ubuntu 15.10
I packaged my project using electron-packager. Normally, I create a directory named downloads in the application directory where my main.js file lives. After packaging, I have locales and resources directories along with other files; inside the resources directory there is another directory named app, plus an electron.asar file. Inside the app folder are my project files.
When I run the executable, it creates the directory in the same location as the executable, instead of creating it under resources/app/. How can I fix this problem?
My createDirectory function:
// create directory if it does not exist
function createDirectory(directory, callback) {
  Fs.mkdirs(directory, function (err) {
    if (err) {
      console.error(err);
    } else {
      return callback();
    }
  });
}
I pass downloads/images/ or downloads/videos/ as the directory parameter to this function, for example. Fs.mkdirs is a method from the fs-extra module.
Writing app data to the application installation directory is generally a bad idea, since the user running the app may not have permission to write files there. What you should probably do instead is store whatever your application downloads in the location returned by app.getPath('userData').
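For example, using the createDirectory helper from the question (a sketch; downloads/images is the path mentioned above, and this assumes it runs in the main process where the app module is available):
const app = require('electron').app;
const path = require('path');

// Resolve the downloads directory under the per-user data folder instead of
// the (possibly read-only) installation directory.
const imagesDir = path.join(app.getPath('userData'), 'downloads', 'images');

createDirectory(imagesDir, function () {
  console.log('Directory ready:', imagesDir);
});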

Node.js: Check if file is an symbolic link when iterating over directory with 'fs'

Supervisor is a package for Node.js that monitors files in your app directory for modifications and reloads the app when a modification occurs.
This script interprets symbolic links as regular files and logs a warning. I would like to fork Supervisor so that this can either be fixed entirely or a more descriptive warning produced.
How can I use the File System module of Node.js to determine whether a given file is a symbolic link?
You can use fs.lstat and then call stats.isSymbolicLink() on the fs.Stats object that's passed into your lstat callback.
fs.lstat('myfilename', function(err, stats) {
  console.log(stats.isSymbolicLink());
});
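To apply that while iterating over a directory, as in the question, a sketch using the same lstat call could look like this (dir stands for whichever directory is being scanned):
const fs = require('fs');
const path = require('path');

fs.readdir(dir, function (err, names) {
  if (err) throw err;
  names.forEach(function (name) {
    fs.lstat(path.join(dir, name), function (err, stats) {
      if (err) throw err;
      if (stats.isSymbolicLink()) {
        console.log(name + ' is a symbolic link');
      }
    });
  });
});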
It also seems you can use isSymbolicLink() on the entries returned by fs.readdirSync when withFileTypes: true is passed:
const files = fs.readdirSync(dir, { encoding: 'utf8', withFileTypes: true });
files.forEach((file) => {
  if (file.isSymbolicLink()) {
    console.log('found symlink!');
  }
});
