encfs decryption, filename encoding 'nameio/block' 4.0.0 not supported - Linux

I'm trying to decrypt an encfs folder, but when executing the command I get the following error:
(FileUtils.cpp:1649) Unable to find nameio interface nameio/block, version 4:0:0
The command I use for mounting is simply:
encfs ~/encrypted_folder ~/mount_point
I've tried with sudo and with --forcedecode, with the same result.
The output of the encfsctl command on the encrypted folder is:
Version 6 configuration; created by EncFS 1.7.5 (revision 20100713)
Filesystem cipher: "ssl/aes", version 3:0:0 (using 3:0:2)
Filename encoding: "nameio/block", version 4:0:0 (NOT supported)
Key Size: 256 bits
Using PBKDF2, with 1351653 iterations
Salt Size: 160 bits
Block Size: 1024 bytes, including 8 byte MAC header
Each file contains 8 byte header with unique IV data.
Filenames encoded using IV chaining mode.
File data IV is chained to filename IV.
File holes passed through to ciphertext.
My OS details are:
Ubuntu 14.04.1 LTS with kernel 3.13.0-35-generic
I'm pretty lost; I don't know what that encoding is or why it is not supported. Searching on Google does not turn up any solutions...

Confirmed: using encfs 1.7.5 solves the problem.
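For anyone else hitting this, a minimal sketch of the downgrade (encfs 1.7.5 used autotools; the tarball name and any extra dependencies such as rlog are assumptions, so adjust for wherever you obtain the source):

# build and install encfs 1.7.5 from a source tarball, then mount as before
tar xzf encfs-1.7.5.tgz
cd encfs-1.7.5
./configure
make
sudo make install
encfs ~/encrypted_folder ~/mount_point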

Related

Oracle 11gR2 Installation on Oracle Linux 7 - Prerequisite condition failed for OS kernel parameter “semmni”

While installing Oracle 11gR2 on Oracle Linux 7, the prerequisite check for the OS kernel parameter "semmni" fails with the error below.
Below are the kernel parameters configured in the /etc/sysctl.conf file:
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
#semaphores: semmsl, semmns, semopm, semmni
kernel.sem = 250 256000 100 1024
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048586
Any idea why it is failing?
You are missing the kernel.semmni=128 setting.
Read the requirements here.
https://docs.oracle.com/cd/E11882_01/install.112/e24326/toc.htm#BHCGJCEA
I recommend installing all the requirements at once using the preinstall package. See the link here:
https://blogs.oracle.com/linux/oracle-rdbms-server-11gr2-pre-install-rpm-for-oracle-linux-6-has-been-released
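As a quick sanity check, a sketch of inspecting and reloading the semaphore settings from a shell (the fourth field of kernel.sem is semmni):

# show current limits, in the order semmsl semmns semopm semmni
cat /proc/sys/kernel/sem
# apply /etc/sysctl.conf after editing it
sudo sysctl -p
# query the single key
sysctl kernel.sem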
I have the same problem. I checked my /proc/sys/kernel/sem and the values are exactly as mentioned in the Oracle document.
Plus, despite having all the packages installed, it still shows FAILED when it checks the packages.
It is fixable: just click the Fix button and execute the specified script.

docker, openmpi and unexpected end of /proc/mounts line

I have built an environment to run code in a Docker container. One of the components is OpenMPI, which I think is either the source of the problem or manifests it.
When I run code using MPI, I get the message:
Unexpected end of /proc/mounts line `overlay / overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/NHW6L2TB73FPMK4A52XDP6SO2V:/var/lib/docker/overlay2/l/MKAGUDHZZTJF4KNSUM73QGVRUD:/var/lib/docker/overlay2/l/4PFRG6M47TX5TYVHKQQO2KCG7Q:/var/lib/docker/overlay2/l/4UR3OEP3IW5ZTADZ6OKT77ZBEU:/var/lib/docker/overlay2/l/LGBMK7HFUCHRTM2MMITMD6ILMG:/var/lib/docker/overlay2/l/ODJ2DJIGYGWRXEJZ6ECSLG7VDJ:/var/lib/docker/overlay2/l/JYQIR5JVEUVQPHEF452BRDVC23:/var/lib/docker/overlay2/l/AUDTRIBKXDZX62ANXO75LD3DW5:/var/lib/docker/overlay2/l/RFFN2MQPDHS2Z'
Unexpected end of /proc/mounts line `KNEJCAQH6YG5S:/var/lib/docker/overlay2/l/7LZSAIYKPQ56QB6GEIB2KZTDQA:/var/lib/docker/overlay2/l/CP2WSFS5347GXQZMXFTPWU4F3J:/var/lib/docker/overlay2/l/SJHIWRVQO5IENQFYDG6R5VF7EB:/var/lib/docker/overlay2/l/ICNNZZ4KB64VEFSKEQZUF7XI63:/var/lib/docker/overlay2/l/SOHRMEBEIIP4MRKRRUWMFTXMU2:/var/lib/docker/overlay2/l/DL4GM7DYQUV4RQE4Z6H5XWU2AB:/var/lib/docker/overlay2/l/JNEAR5ISUKIBKQKKZ6GEH6T6NP:/var/lib/docker/overlay2/l/LIAK7F7Q4SSOJBKBFY4R66J2C3:/var/lib/docker/overlay2/l/MYL6XNGBKKZO5CR3PG3HIB475X:/var/lib/do'
That message is printed for the code line
MPI_Init(&argc,&argv);
To make the problem harder to understand, the warning message is printed only when the host machine is macOS; for a Linux host all is fine.
Apart from the warning message, everything works fine. I do not know OpenMPI and Docker well enough to see how this can be fixed.
This is likely due to your /proc/mount file having a line in it greater than 512 characters, causing the hwloc module of OpenMPI to fail to parse it correctly. Docker has a tendency to put very long lines into /proc/mounts. You can see the bug in openmpi-1.10.7/opal/mca/hwloc/hwloc191/hwloc/src/topology-linux.c:1677:
static void
hwloc_find_linux_cpuset_mntpnt(char **cgroup_mntpnt, char **cpuset_mntpnt, int fsroot_fd)
{
#define PROC_MOUNT_LINE_LEN 512
  char line[PROC_MOUNT_LINE_LEN];
  FILE *fd;

  *cgroup_mntpnt = NULL;
  *cpuset_mntpnt = NULL;

  /* ideally we should use setmntent, getmntent, hasmntopt and endmntent,
   * but they do not support fsroot_fd.
   */
  fd = hwloc_fopen("/proc/mounts", "r", fsroot_fd);
  if (!fd)
    return;
This can be fixed by increasing the value of PROC_MOUNT_LINE_LEN, although that should be considered a temporary workaround.
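A sketch of the workaround itself, assuming you are patching and rebuilding the embedded hwloc (4096 is an arbitrary value, chosen only to be comfortably larger than Docker's overlay2 lines):

/* opal/mca/hwloc/hwloc191/hwloc/src/topology-linux.c */
#define PROC_MOUNT_LINE_LEN 4096  /* was 512; long overlay2 mount lines now fit */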
This issue has been fixed in hwloc since 1.11.3 (released two years ago). You can either upgrade to OpenMPI 3.0, which embeds hwloc 1.11.7 (>= 1.11.3), or recompile OpenMPI to use an external hwloc instead of the old embedded one.
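If you take the recompile route, a sketch of the configure step (assuming a system hwloc >= 1.11.3 installed under /usr; the prefix is an assumption):

# build OpenMPI against the external hwloc instead of the embedded copy
./configure --with-hwloc=/usr
make -j4
sudo make install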

AWS Lambda function - convert PDF to Image

I am developing an application where users can upload drawings in PDF format. Uploaded files are stored on S3. After uploading, the files have to be converted to images. For this purpose I have created a Lambda function which downloads the file from S3 to the /tmp folder in the Lambda execution environment and then calls the 'convert' command from ImageMagick:
convert sourceFile.pdf targetFile.png
The Lambda runtime is Node.js 4.3. Memory is set to 128 MB, timeout to 30 seconds.
Now the problem is that some files are converted successfully while others are failing with the following error:
{ [Error: Command failed: /bin/sh -c convert /tmp/sourceFile.pdf /tmp/targetFile.png
convert: `%s' (%d) "gs" -q -dQUIET -dSAFER -dBATCH -dNOPAUSE -dNOPROMPT -dMaxBitmap=500000000 -dAlignToPixels=0 -dGridFitTT=2 "-sDEVICE=pngalpha" -dTextAlphaBits=4 -dGraphicsAlphaBits=4 "-r72x72" "-sOutputFile=/tmp/magick-QRH6nVLV--0000001" "-f/tmp/magick-B610L5uo" "-f/tmp/magick-tIe1MjeR" # error/utility.c/SystemCommand/1890.
convert: Postscript delegate failed `/tmp/sourceFile.pdf': No such file or directory # error/pdf.c/ReadPDFImage/678.
convert: no images defined `/tmp/targetFile.png' # error/convert.c/ConvertImageCommand/3046. ]
killed: false, code: 1, signal: null,
cmd: '/bin/sh -c convert /tmp/sourceFile.pdf /tmp/targetFile.png' }
At first I did not understand why this happens, so I tried to convert the problematic files on my local Ubuntu machine with the same command. This is the output from the terminal:
**** Warning: considering '0000000000 XXXXX n' as a free entry.
**** This file had errors that were repaired or ignored.
**** The file was produced by:
**** >>>> Mac OS X 10.10.5 Quartz PDFContext <<<<
**** Please notify the author of the software that produced this
**** file that it does not conform to Adobe's published PDF
**** specification.
So the message was very clear, but the file gets converted to PNG anyway. If I run convert source.pdf target.pdf and then convert target.pdf image.png, the file is repaired and converted without any errors. This doesn't work with Lambda.
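For reference, the local two-step repair as explicit commands (file names are placeholders):

convert source.pdf target.pdf   # PDF-to-PDF round trip lets Ghostscript rewrite and repair the file
convert target.pdf image.png    # the repaired file then converts cleanly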
Since the same thing works in one environment but not in the other, my best guess is that the Ghostscript version is the problem. The version installed on the AMI is 8.70; on my local machine the Ghostscript version is 9.18.
My questions are:
1. Is the version of Ghostscript the problem? Is this a bug in the older version of Ghostscript? If not, how can I tell Ghostscript (with or without ImageMagick) to repair or ignore errors like it does in my local environment?
2. If the old version is the problem, is it possible to build Ghostscript from source, create a Node.js module, and then use that version of Ghostscript instead of the installed one?
3. Is there an easier way to convert a PDF to an image without using ImageMagick and Ghostscript?
UPDATE
Relevant part of lambda code:
var exec = require('child_process').exec;
var AWS = require('aws-sdk');
var fs = require('fs');
...
var localSourceFile = '/tmp/sourceFile.pdf';
var localTargetFile = '/tmp/targetFile.png';

// write the S3 object body to the Lambda /tmp folder
var writeStream = fs.createWriteStream(localSourceFile);
writeStream.write(body);
writeStream.end();

writeStream.on('error', function (err) {
    console.log("Error writing data from S3 to tmp folder.");
    context.fail(err);
});

writeStream.on('finish', function () {
    var cmd = 'convert ' + localSourceFile + ' ' + localTargetFile;
    exec(cmd, function (err, stdout, stderr) {
        if (err) {
            console.log("Error executing convert command.");
            return context.fail(err); // return so we do not fall through to the stderr check
        }
        if (stderr) {
            console.log("Command executed successfully but returned error.");
            context.fail(stderr);
        } else {
            // file converted successfully - do something...
        }
    });
});
You can find a compiled version of Ghostscript for Lambda in the following repository:
https://github.com/sina-masnadi/lambda-ghostscript
Add these files to the zip that you upload as the source code to AWS Lambda.
This is an npm package to call Ghostscript functions:
https://github.com/sina-masnadi/node-gs
After copying the compiled Ghostscript files into your project and adding the npm package, you can use the executablePath('path to ghostscript') function to point the package at the compiled Ghostscript files you added earlier.
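A sketch of how the pieces might fit together in the handler (the chained method names follow the node-gs README, and the bin path inside lambda-ghostscript is an assumption; verify both against the repositories):

var gs = require('gs');

// convert the downloaded PDF with the bundled Ghostscript binary
gs()
  .batch()
  .nopause()
  .device('png16m')                             // 24-bit PNG output device
  .executablePath('lambda-ghostscript/bin/gs')  // assumed path to the compiled binary in the package
  .input('/tmp/sourceFile.pdf')
  .output('/tmp/targetFile.png')
  .exec(function (err, stdout, stderr) {
    if (err) return context.fail(err);
    // /tmp/targetFile.png is now ready for upload back to S3
  });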
It's almost certainly a bug, or perhaps a limitation, of the older version of Ghostscript.
Many PDF producers create PDF files which do not conform to the specification, yet open without complaint in Adobe Acrobat. Ghostscript endeavours to do the same, but obviously we can't know what Acrobat is going to allow, so we are continually chasing this nebulous target. (FWIW, that warning indicates a genuinely out-of-spec PDF file.)
There's nothing you can do with the old version other than replace it.
Yes, you can build Ghostscript from source. I have no idea about a Node.js module, and I'm not sure why that's relevant.
There are numerous other applications which will render a PDF file; MuPDF is another one I know of. And, of course, you can use Ghostscript directly without going through ImageMagick. Of course, if you can load another application, then you should simply be able to replace your Ghostscript installation too.
The version of Ghostscript on AWS is an old one with known bugs. You can get around this by uploading an x64 Ghostscript binary compiled specifically for Linux, delivered through the new AWS Lambda layers. I have written a Node function that does just this here:
https://github.com/rcastoro/PDFImagine
Make sure you have added that Ghostscript layer to your Lambda, however!

Wrong image format for "source" command in U-Boot

I'm trying to write a script to run commands automatically in U-Boot.
I followed the instructions on the website [1].
Below is what I did.
I compiled the script on the NVIDIA Jetson board and put the compiled file at /boot/setenv.img.
Now when I reboot the board, I load the script image with the command:
ext2load mmc 0:1 100000 /boot/setenv.img
I got the following output:
1096 bytes read in 403 ms (2 KiB/s)
Note that when I run the command imi (the iminfo abbreviation), it reports "Command not found" on the Jetson board.
I run the loaded image with the command:
source 100000
It gives me the following output (error message):
## Executing script at 00100000
Wrong image format for "source" command
My questions are:
Why is the image format incorrect for the source command?
Is there any method I can use to debug the error?
Any help or advice about how to debug the error is really appreciated!
[1] http://www.denx.de/wiki/view/DULG/UBootScripts
Thank you very much!
As sawdust noted, the iminfo command would be helpful for debugging, so make sure it's compiled into U-Boot. You can also manually check the header using the md command. Keep in mind that mkimage doesn't compile anything; it simply prepends 64 bytes of metadata to the beginning of the file.
So, if you load the script into DDR at 0x100000, you can look at that location in memory by entering md 0x100000.
The first four lines of output are bytes 0x0 - 0x40 of the file (the 64-byte U-Boot header). If your header is there, you should see something like this:
00100000: 56190527 030b131f eb439a57 de0d0000 '..V....W.C.....
00100010: 00000000 00000000 fc6de331 00060205 ........1.m.....
00100020: 6f747541 616d492d 676e6967 72635320 Auto-Imaging Scr
00100030: 00747069 00000000 00000000 00000000 ipt.............
This header contains a magic number, CRC checksums (one for the file, one for the header itself), timestamp, filesize, name, and a few other things. All iminfo does is parse the header. Beyond those 64 bytes should just be your plaintext ASCII script.
The source command is looking for a header that was generated with the -T script flag to mkimage. The error message you're getting is printed when the image at that address has the wrong type.
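For reference, a typical invocation that produces such a script image (flag meanings per the mkimage man page; file names are placeholders):

# wrap the plaintext command file in a 64-byte legacy header of type "script"
mkimage -A arm -T script -C none -n "Auto-Imaging Script" -d setenv.txt setenv.img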
For future readers: CONFIG_LEGACY_IMAGE_FORMAT=y is required to load the boot.scr script, or else you will get the following error message:
Wrong image format for "source" command
Also, when CONFIG_FIT_SIGNATURE is enabled, the above flag is disabled by default. From the Kconfig:
WARNING: When relying on signed FIT images with a required signature
check the legacy image format is disabled by default, so that
unsigned images cannot be loaded. If a board needs the legacy image
format support in this case, enable it using
CONFIG_LEGACY_IMAGE_FORMAT.
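A sketch of re-enabling it in a board defconfig (the board name is a placeholder; rebuild U-Boot afterwards):

# configs/myboard_defconfig
CONFIG_LEGACY_IMAGE_FORMAT=y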

openssh - Reading from a file

In the openssh code (openssh-5.9p1/auth2.c) in the function
input_userauth_request(int type, u_int32_t seq, void *ctxt)
I'm trying to read my own file (I tried fopen and fread), but fopen fails with the error No such file or directory. The file exists and has full permissions. Thanks.
This function is called in the pre-authentication child (before authentication), which is chrooted into /var/empty/sshd/ (on Fedora/RHEL). This is why it cannot find your file.
If you want it to find that file, you can disable the UsePrivilegeSeparation option in sshd_config (NOT RECOMMENDED IN PRODUCTION!). Read a bit more about privilege separation in OpenSSH.
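A sketch of the test-only change (paths per a stock Fedora/RHEL layout; do not ship this):

# /etc/ssh/sshd_config -- test systems only!
UsePrivilegeSeparation no

# then restart the daemon
service sshd restart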
