DocuSign Connect: pdfbytes leads to corrupted pdf file - docusignapi

I am trying to connect DocuSign with my Java application and was successful.
I created a listener to receive DocuSign's response after the user completes the signing process, so that the document is saved/updated automatically in my system.
I am able to get that response in XML format with the PDFBytes, but as soon as I create a PDF from those PDFBytes, I am not able to open it (the PDFBytes might be corrupted).
I am Base64-decoding the bytes before generating the PDF.

This is a common problem when the PDFBytes are not handled as a run of binary bytes. At some point you may be treating the data as a string, and the PDF file becomes corrupted at that point.
Issues to check:
When you Base64 decode the string, the result is binary. Is your receiving variable capable of receiving binary data? (No codeset transformations.)
When you write your binary buffer to the output file, check that your output file format is binary clean. This is especially an issue on Windows systems.
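The checklist above can be sketched as follows. This is a minimal, hedged illustration (the class and method names are my own, not DocuSign's API): decode straight into a `byte[]` and write through a byte stream, with no `String`/`Writer` in between.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Base64;

public class PdfBytesDecoder {
    // Decode the Base64 PDFBytes value and write the raw bytes to disk.
    // "pdfBase64" stands in for the PDFBytes string pulled from the Connect XML.
    public static void writePdf(String pdfBase64, String path) throws IOException {
        byte[] pdf = Base64.getDecoder().decode(pdfBase64);   // result is binary
        try (OutputStream out = new FileOutputStream(path)) { // byte stream, not a Writer
            out.write(pdf);                                   // no charset conversion anywhere
        }
    }
}
```

The key point is that `byte[]` and `OutputStream` are binary-clean on every platform; a `FileWriter` or `String` round trip is where the corruption would creep in.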
If you're still having a problem, edit your question to include your code.

Related

Convert Binary content of PDF file To JSON format using node.js

We want JSON format from the binary content of a PDF file using node.js.
We are getting the binary content of the PDF from a third-party API response and will save it in our database.
In simple words:
Please let us know, "is there any working code so that I can pass binary data and get JSON data?"
The JSON format natively doesn't support binary data.
Use Base64 or base85
I think the best you can do space-wise is base85 which represents four bytes as five characters. However, this is only a 7% improvement over base64, it's more expensive to compute, and implementations are less common than for base64 so it's probably not a win.
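The question asks about node.js, but the principle is language-agnostic: Base64-encode the bytes, then embed the resulting text in JSON. Here is a Java sketch (the class name and the `pdfBase64` field name are illustrative choices, not a standard):

```java
import java.util.Base64;

public class PdfToJson {
    // Wrap binary PDF bytes in a JSON object by Base64-encoding them first.
    public static String toJson(byte[] pdfBytes) {
        String b64 = Base64.getEncoder().encodeToString(pdfBytes);
        return "{\"pdfBase64\":\"" + b64 + "\"}"; // Base64 output is JSON-safe: A-Z a-z 0-9 + / =
    }

    // Recover the original bytes from the Base64 field value.
    public static byte[] fromBase64(String b64) {
        return Base64.getDecoder().decode(b64);
    }
}
```

In real code you would build the JSON with a library rather than string concatenation; the concatenation here only keeps the sketch self-contained.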

Binary data from mongodb gets corrupted

When I upload a photo it is converted to base64, and when I send it to MongoDB using Mongoose it is saved as Binary. But when I fetch the same picture back from the database it returns as a Buffer array. After converting to base64 I get a base64 string that is completely different from the original. The new base64 cannot be rendered in the browser because it has been corrupted.
The question included screenshots of the different strings: the initial base64, the Buffer array, and the corrupted base64 produced by Buffer.from(avatar).toString('base64').
Please note that I prepended "data:image/png;base64," before rendering in the browser and it still did not render.
Please can someone tell me what I am doing wrong?
The best solution is to save the image as a PNG or JPG file in a folder and store only the file path in the database.
Here is how I solved it.
I converted from binary to utf8 instead of to base64.
There is a huge difference between
Buffer.from(binary_data, 'binary').toString('utf8')
and
Buffer.from(binary_data, 'binary').toString('base64')
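That difference can be made concrete in Java, too (a sketch; the byte values are arbitrary examples, like the start of a PNG header): Base64 round-trips every byte value, while a UTF-8 round trip mangles bytes that are not valid UTF-8 sequences.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

public class RoundTripDemo {
    // Base64 round-trips any byte values unchanged.
    public static boolean base64RoundTrips(byte[] data) {
        String s = Base64.getEncoder().encodeToString(data);
        return Arrays.equals(data, Base64.getDecoder().decode(s));
    }

    // Decoding arbitrary bytes as UTF-8 replaces invalid sequences with U+FFFD,
    // so re-encoding does not recover the original bytes.
    public static boolean utf8RoundTrips(byte[] data) {
        String s = new String(data, StandardCharsets.UTF_8);
        return Arrays.equals(data, s.getBytes(StandardCharsets.UTF_8));
    }
}
```

This is why treating image bytes as text in any encoding is risky: Base64 is designed for the job, UTF-8 is not.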

AmazonClientException when uploading file with spring-integration-aws S3MessageHandler

I have configured an S3MessageHandler from spring-integration-aws to upload a File object to S3.
The upload fails with the following trace:
Caused by: com.amazonaws.AmazonClientException: Data read has a different length than the expected: dataLength=0; expectedLength=26; includeSkipped=false; in.getClass()=class com.amazonaws.internal.ResettableInputStream; markedSupported=true; marked=0; resetSinceLastMarked=false; markCount=1; resetCount=0
at com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:152)
...
Looking at the source code for S3MessageHandler, I'm not sure how uploading a File would ever succeed. The s3MessageHandler.upload() method does the following when I trace its execution:
Creates a FileInputStream for the File.
Computes the MD5 hash for the file contents, using the input stream.
Resets the stream if it can be reset (not possible for FileInputStream).
Sets up the S3 transfer using the input stream. This fails because the stream is at the EOF, so the number of transferable bytes doesn't match what's in the Content-Length header.
Am I missing something, or is this a bug in the message handler?
Yes, it's a bug; please open an issue on GitHub and/or a JIRA issue.
For FileInputStream a new one should be created; for InputStream payloads, we need to assert that markSupported() is true if the MD5 computation consumes the stream.
Consider contributing a fix after signing the CLA.
EDIT
I opened JIRA Issue INTEXT-225.
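The mark/reset issue described above can be illustrated generically. This is a hedged sketch of the pattern, not the actual S3MessageHandler code: a stream that supports mark() can be rewound after the digest pass so a later consumer (the upload) reads the same bytes again.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestThenUpload {
    // Compute MD5 over the stream, then rewind it so the upload can re-read it.
    public static byte[] md5AndReset(InputStream in)
            throws IOException, NoSuchAlgorithmException {
        if (!in.markSupported()) {
            // FileInputStream lands here; a new stream must be created instead.
            throw new IllegalArgumentException("stream must support mark/reset");
        }
        in.mark(Integer.MAX_VALUE);
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] buf = new byte[8192];
        for (int n; (n = in.read(buf)) != -1; ) {
            md5.update(buf, 0, n);
        }
        in.reset(); // stream is readable from the start again
        return md5.digest();
    }
}
```

A ByteArrayInputStream or BufferedInputStream satisfies the markSupported() check; a plain FileInputStream does not, which is exactly the failure mode in the stack trace above.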

Node.js Buffer and Encoding

I have an HTTP endpoint where a user uploads a file. I need to read the file contents and then store them in a DB. I can read the file into a Buffer and get a string from it.
The problem is, when the file content is not UTF-8 I see "strange" symbols in the output string.
Is it possible to detect the encoding of the Buffer contents and serialize it to a string correctly?
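Reliable encoding *detection* needs a dedicated library (juniversalchardet or ICU are common choices), but the narrower question "is this buffer valid UTF-8?" can be answered with a strict decoder. A Java sketch of that check (the class name is my own):

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class Utf8Check {
    // A strict UTF-8 decoder rejects invalid byte sequences instead of
    // silently replacing them with U+FFFD ("strange symbols").
    public static boolean isValidUtf8(byte[] data) {
        CharsetDecoder dec = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            dec.decode(ByteBuffer.wrap(data)); // throws on invalid sequences
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }
}
```

If the check fails, the safe move is to store the raw bytes (or Base64) rather than guessing a charset and producing a lossy string.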

Zip in j2me using jazzlib

I am using the jazzlib package in a J2ME application to compress an XML file into ZIP format using ZipOutputStream, and then send the compressed stream to the server as a String. I am able to unzip it on the mobile using ZipInputStream, but on the server I am not able to unzip it; I get an
EOFException. When I copy the compressed stream from the console and paste it into a browser, the empty spaces appear as special characters like [] in the compressed stream. I don't understand what happened. Please help.
You send the compressed stream as a String? That's your problem(s) right there:
compressed data is binary data (i.e. byte[]).
String is designed to handle textual (Unicode) data and not arbitrary binary data
converting arbitrary binary data to String is bound to lead to problems
So if you want to handle (send/receive/...) binary data, make sure you never use a String/Reader/Writer to handle the data anywhere in the process. Stay with byte[]/InputStream/OutputStream.
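If the transport genuinely requires text, the standard workaround is to Base64-encode the compressed bytes rather than putting them in a String directly. A Java SE sketch of the principle (J2ME itself lacks java.util.zip and java.util.Base64, so this illustrates the server side / the general idea only):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Base64;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ZipTransport {
    // Compress to byte[] and, when text transport is unavoidable,
    // Base64-encode the bytes instead of treating them as characters.
    public static String compressForText(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return Base64.getEncoder().encodeToString(bos.toByteArray());
    }

    public static byte[] decompressFromText(String b64) throws IOException {
        byte[] zipped = Base64.getDecoder().decode(b64);
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(zipped))) {
            return gz.readAllBytes(); // Java 9+
        }
    }
}
```

The round trip survives any transport that preserves ASCII, which is exactly what the raw compressed String did not.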
