Groovy encodeBase64() returning unexpected result for PNG image file - groovy

I am trying to convert a PNG image file to Base64 encoding in Groovy.
Here is my code:
ImageFile = new File("D:/DATA/CustomScript/Logo.png").text;
String encoded = ImageFile.getBytes().encodeBase64().toString();
I get the following as result:
iVBORw0KGgoAAAANSUhEUgAAAIQAAABPCAIAAAClCfqHAAAABGdBTUEAALE/C/xhBQAAAAlwSFlzAAAOwwAADsMBx2+oZAAAAQ1JREFUeF7t1KGRgwAURdFVyHQbSwOkKlrIoECDSwusoYgDcz97396Z/3eGUQxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgzIE2IcxzHP87qu176tJ8T4/X7Lsuz7fu3b6k1BigEpBqQYP2JAigEpBqQYP2JAigEpBqQYP2JAigEpBqQYP2JAigEpBqQYP2JAigEpBqQYP2JAigEpBqQYP2JAnhNj27ZxHN/v9/f7vU5385wYn8/n9XoNwzBN03W6l/P8BwSpsfw4c1/6AAAAAElFTkSuQmCC
The same image when passed through https://www.base64encode.org/ gives this result:
iVBORw0KGgoAAAANSUhEUgAAAIQAAABPCAIAAAClCfqHAAAABGdBTUEAALGPC/xhBQAAAAlwSFlzAAAOwwAADsMBx2+oZAAAAQ1JREFUeF7t1KGRgwAURdFVyHQbSwOkKlrIoECDSwusoYgDc497396Z/3eGUQxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgxIMSDFgBQDUgzIE2IcxzHP87qu176tJ8T4/X7Lsuz7fu3b6k1BigEpBqQYkGJAigEpBqQYkGJAigEpBqQYkGJAigEpBqQYkGJAigEpBqQYkGJAigEpBqQYkGJAigEpBqQYkGJAnhNj27ZxHN/v9/f7vU5385wYn8/n9XoNwzBN03W6l/P8BwSpsfw4c1/6AAAAAElFTkSuQmCC
I have tried to highlight some of the differences; it is clear that the two encoded strings are different.
The problem is that I have to pass this image's Base64 encoding to another system, which accepts the one from https://www.base64encode.org/ but rejects the one generated by Groovy.
Any ideas what I am doing wrong here?

You are hitting an encoding problem here. Binary data is not character data; character data is affected by encodings. Instead of the text, use the bytes of the file. E.g.
def f = "/tmp/screenshot-000.png" as File
assert f.bytes.encodeBase64().toString()==("/tmp/encoded_20190208131326.txt" as File).text
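Applied to the file from the question, a minimal sketch of the same fix (reading the file's bytes instead of its text) could look like this:
// Read the raw bytes of the PNG, not its text, then Base64-encode them
File imageFile = new File("D:/DATA/CustomScript/Logo.png")
String encoded = imageFile.bytes.encodeBase64().toString()
println encoded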

The answer from user cfrick was extremely helpful, but unfortunately it didn't solve my problem. I believe the reason was that I was on an older version of Groovy.
This code eventually solved my problem:
// Requires java.util.Base64 (Java 8+)
String base64Image = "";
File file = new File(imagePath);
FileInputStream imageInFile = new FileInputStream(file);
byte[] imageData = new byte[(int) file.size()];
// Note: read() is not guaranteed to fill the whole buffer in one call
imageInFile.read(imageData);
imageInFile.close();
base64Image = Base64.getEncoder().encodeToString(imageData);

Related

Is it possible to load ansi encoded string using nodejs

I have a large quantity of HTML files (around 2k).
These HTML files are the result of converting Word documents.
The files have some Hebrew text inside the HTML tags. I can see the text perfectly using the VS Code or Notepad++ editors.
My goal is to loop through the folder and insert the contents of the files into some DB.
Since I have only a little knowledge of Node.js, I decided to build the "looping" using Node.
Here is where I got so far:
const fs = require('fs');

fs.readdir('./myFolder', function (err, files) {
  const total = files.length;
  let fileArr = [];
  for (var x = 0, l = files.length; x < l; x++) {
    // Read each file as UTF-8 and pull out its <title>
    const content = fs.readFileSync(`./myFolder/${files[x]}`, 'utf8');
    let title = content.match(/<title>(.*?)<\/title>/g).pop();
    fileArr.push({ id: files[x], title });
  }
});
The problem is that although the text displays correctly inside the editors, when debugging I can see that the title variable gets strings consisting of question marks.
I guess the problem is with the file encoding, am I right here?
If so, is there a way to decode the string?
P.S. My OS is Windows 10.
Thanks
There are a couple of possibilities here. It may be that your input files are in a multibyte encoding (such as UTF-8 or UTF-16) and your debugger is simply not showing the correct characters due to font restrictions.
I would try writing the title variable to some test file like so:
fs.writeFileSync(`title-test-${x}.txt`, title, "utf8");
And see if the title looks correct in your text editor.
It may also be possible that the files are encoded in an extended ASCII encoding such as Windows-1255 or ISO 8859-8. If this is the case, fs.readFileSync will not work correctly, since it does not support these encodings (see the Node.js encoding list).
If the files are encoded using a single-byte extended ASCII encoding, it should be possible to convert them to a more portable encoding (such as UTF-8).
I'd recommend the iconv-lite module for this purpose, you can do a lot with it!
For example, to convert from a Windows-1255 file to UTF-8 you could try:
const iconv = require("iconv-lite");
const fs = require("fs");
// Convert from an encoded buffer to JavaScript string.
const fileData = iconv.decode(fs.readFileSync("./hebrew-win1255.txt"), "win1255");
// Convert from JavaScript string to a buffer.
const outputBuffer = iconv.encode(fileData, "utf8");
// Write output file..
fs.writeFileSync("./hebrew-utf8-output.txt", outputBuffer);

Protobuf RuntimeWarning: Unexpected end-group tag: Not all data was converted

I have a UTF-8 encoded file from which I need to collect the hexadecimal dump that is protobuf-viable and then feed it to protobuf. The .proto file works as expected and life is almost perfect.
message_content = message_content.replace(" ","")
message_content = binascii.unhexlify(message_content)
I convert the string to raw bytes and then feed it to the protobuf
msg.ParseFromString(message_content)
which results in the error:
RuntimeWarning: Unexpected end-group tag: Not all data was converted
msg.ParseFromString(message_content)
I can't tell if I am collecting the hex part poorly or if it's corrupted.
message_content looks like this:
b"87\x00\x00C\x17\x11\x10j\x17\x11\x10\x0c\x00\xc2\x00\x08\xec\xad\xe8\xe0\xf9\x04\x10\x01\x1a\x1f\x08\xea\xae\x18\x12\x14\x01\x00\x0f\x00\x02\x02|\xf0%\x00\x01&\x00\x01'\x00\x01*\x00\x01*\x01\x00\x1a\x00\x1a \x08\xea\xae\x14\x12\x14\x01\x00\x0f\x00\x02\x02|\xf0%\x00\x01&\x00\x01'\x00\x01(\x00\x01*\x02\x00\x00\x1a\x00\x1a#\x08\xea.\x12\x14\x01\x00\x0f\x00\x02\x02|\xf0%\x00\x01&\x00\x011\x00\x012\x00\x01*\x06\x00\x00\x00\x00\x00\x00\x1a\x00\x1a \x08\xea\xae\x14\x12\x14\x01\x00\x0f\x00\x02\x02|\xf0%\x00\x01&\x00\x01'\x00\x01(\x00\x02*\x02\x00\x00\x1a\x00\x1a\x1d\x08\xea\xae\x0c\x12\x11\x01\x00\x0f\x00\x02\x02|\xf0%\x00\x01&\x00\x011\x00\x01*\x02\x00\x00\x1a\x00"
I had a similar problem. In the end I found there was a problem with the data provided by the upstream source. You should also check whether there is a problem with your Base64 data source.
I ran into the same error.
The erroneous code was:
rsp_serialize = base64.b64decode(str(rsp))
item_info = StrucItem()
item_info.ParseFromString(rsp_serialize)
The correct code is:
rsp_serialize = base64.b64decode(str(rsp, encoding="utf-8"))
item_info = StrucItem()
item_info.ParseFromString(rsp_serialize)
So you should check whether your upstream data is OK.
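One way to check that the upstream Base64 payload is well formed before feeding it to ParseFromString is to decode with validation enabled, so malformed input fails loudly instead of being silently mangled (a minimal sketch; the helper name is hypothetical):
import base64
import binascii

def checked_b64decode(data):
    # validate=True rejects non-alphabet characters instead of silently ignoring them
    try:
        return base64.b64decode(data, validate=True)
    except binascii.Error as err:
        raise ValueError("Upstream data is not valid Base64: %s" % err)
If this raises, the problem is in the upstream payload rather than in ParseFromString.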

Decoding Base 64 In Groovy Returns Garbled Characters

I'm using an API which returns a Base64 encoded file that I want to parse and harvest data from. I'm having trouble decoding the Base64, as it comes back with garbled characters. The code I have is below.
Base64 decoder = new Base64()
def jsonSlurper = new JsonSlurper()
def json = jsonSlurper.parseText(Requests.getInventory(app).toString())
String stockB64 = json.getAt("stock")
byte[] decoded = decoder.decode(stockB64)
println(new String(decoded, "US-ASCII"))
I've also tried println(new String(decoded, "UTF-8")) and this returns the same garbled output. I've pasted in an example snippet of the output for reference.
� ���v���
��W`�C�:�f��y�z��A��%J,S���}qF88D q )��'�C�c�X��������+n!��`nn���.��:�g����[��)��f^���c�VK��X�W_����������?4��L���D�������i�9|�X��������\���L�V���gY-K�^����
��b�����~s��;����g���\�ie�Ki}_������
What am I doing wrong here?
You don't need the Base64 class, wherever you took it from. You can simply do stockB64.decodeBase64() to get the decoded byte array. Are you sure that what you have there is actually text that was encoded? Usually Base64-encoded data is something binary, like an image; if it were text, it could simply have been put into the JSON as a string. Maybe save the resulting byte array to a file and then investigate the file type by its content.
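A minimal sketch of that suggestion (the output file name is just an example):
// Decode with the Groovy GDK method and dump the raw bytes to a file
byte[] decoded = stockB64.decodeBase64()
new File("stock-decoded.bin").bytes = decoded
You can then inspect the file's type by its content (for example with the Unix file command) to see whether it is really text.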

How can I produce a Base64 encoded PDF that is read by iText's PdfReader

I am reading an existing PDF (INPUT) using iText:
PdfReader reader = new PdfReader(INPUT);
I am using this reader instance to manipulate the PDF, but the end result needs to be Base64 encoded (I need to insert it into a db2 database as a text BLOB). How can I make sure that iText's output is Base64 encoded?
Your question is unclear, or at least ambiguous.
Asking "How can I convert PdfReader pdfReader in Base64?" doesn't make any sense, and that's why your question is unclear. Your problem is probably that you either have input that is encoded using Base64, or that you want output that is encoded using Base64. That makes your question ambiguous.
If INPUT is String that represents a PDF file encoded using Base64, then you can decode it like this:
import com.itextpdf.text.pdf.codec.Base64;
...
PdfReader reader = new PdfReader(Base64.decode(INPUT));
If INPUT is (the path to) a PDF file that you want to manipulate, with the result being a PDF encoded as a Base64 String, then you can do it like this:
import java.io.ByteArrayOutputStream;
import com.itextpdf.text.pdf.PdfReader;
import com.itextpdf.text.pdf.PdfStamper;
import com.itextpdf.text.pdf.codec.Base64;

PdfReader reader = new PdfReader(INPUT);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
PdfStamper stamper = new PdfStamper(reader, baos);
// do stuff with stamper
stamper.close();
String base64 = Base64.encode(baos.toByteArray());
There may be other ways to produce the Base64 output, e.g. using some kind of Base64OutputStream, but I preferred to use the Base64 class that is shipped with iText.
If you don't need to manipulate the PDF, you don't even need iText. You can simply use the answer to this question: Out of memory when encoding file to base64
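In that case, a minimal sketch without iText (using java.util.Base64, available since Java 8; the file path is just an example) could be:
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

// Read the PDF bytes and Base64-encode them, no iText involved
byte[] pdfBytes = Files.readAllBytes(Paths.get("input.pdf"));
String base64 = Base64.getEncoder().encodeToString(pdfBytes);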
UPDATE:
in a comment to this answer, you wrote:
I have this:
byte[] bdata = blob.getBytes(1, (int) blob.length());
InputStream inputStream = blob.getBinaryStream();
String recuperataDaDb = convertStreamToString(inputStream);
byte[] decompilata = Base64.decode(recuperataDaDb);
I want to write this decompilata to a PDF file with itext.jar.
You have two options:
[1.] You don't need iText. You can simply write the byte[] to a file as described here: byte[] to file in Java
FileOutputStream stream = new FileOutputStream(path);
try {
    stream.write(bytes);
} finally {
    stream.close();
}
[2.] You read the answer to this question:
PdfReader reader = new PdfReader(decompilata);
FileOutputStream fos = new FileOutputStream(pathToFile);
PdfStamper stamper = new PdfStamper(reader, fos);
stamper.close();
I am very surprised by your eagerness to use iText. You really don't need iText to meet your requirement. All you need is an education on how to write Java code to perform some simple I/O.

Node Buffers, from utf8 to binary

I'm receiving data as UTF-8 from a source, and this data was originally in binary form (it was a Buffer). I have to convert this data back to a Buffer. I'm having a hard time figuring out how to do this.
Here's a small sample that shows my problem:
var hexString = 'e61b08020304e61c09020304e61d0a020304e61e65';
var buffer1 = new Buffer(hexString, 'hex');
var str = buffer1.toString('utf8');
var buffer2 = new Buffer(str, 'utf8');
console.log('original content:', hexString);
console.log('buffer1 contains:', buffer1.toString('hex'));
console.log('buffer2 contains:', buffer2.toString('hex'));
prints
original content: e61b08020304e61c09020304e61d0a020304e61e65
buffer1 contains: e61b08020304e61c09020304e61d0a020304e61e65
buffer2 contains: efbfbd1b08020304efbfbd1c09020304efbfbd1d0a020304efbfbd1e65
Here, I would like buffer2 to be the exact same thing as buffer1.
How can I convert an utf8 string to its original binary Buffer?
You cannot expect binary data converted to UTF-8 and back again to be the same as the original binary data, because of the way UTF-8 works (in particular, invalid UTF-8 sequences are replaced with \ufffd).
You have to use another format that correctly preserves the data. This could be 'hex', 'base64', 'binary', or some other binary-safe format provided by a third-party module. Obviously you should probably keep it as a Buffer if you can.
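A minimal sketch of the same round trip through a binary-safe format (base64 here, reusing hexString from the question):
const hexString = 'e61b08020304e61c09020304e61d0a020304e61e65';
const buffer1 = Buffer.from(hexString, 'hex');
const str = buffer1.toString('base64');      // binary-safe textual form
const buffer2 = Buffer.from(str, 'base64');  // identical to buffer1
console.log(buffer1.equals(buffer2));        // true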
The accepted answer is misleading. Your main problem is that you're dealing with invalid UTF-8. If the data were valid, the conversion would not cause issues.
Specifically, take the first two bytes: e61b.
In binary, that's: 11100110, 00011011. This is invalid. Take a look at this diagram from the utf-8 wikipedia page.
This says that a byte starting with 1110 begins a three-byte sequence and must be followed by two bytes starting with 10. That is not the case here.
Whenever JavaScript hits an invalid sequence, it replaces it with �, the Unicode replacement character. The code point for that is U+FFFD, and the UTF-8 encoding of that code point is efbfbd. Notice that this shows up in your output a few times.
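A quick way to see this with just the first two bytes of your data (a minimal sketch):
// 0xe6 starts a 3-byte UTF-8 sequence, but 0x1b does not continue it,
// so the 0xe6 becomes U+FFFD (efbfbd) and the 0x1b survives as-is
const bad = Buffer.from('e61b', 'hex');
const roundTripped = Buffer.from(bad.toString('utf8'), 'utf8');
console.log(roundTripped.toString('hex')); // 'efbfbd1b'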
