DocuSign document blank after request and save - docusignapi

After requesting a document via the DocuSign API and writing it to the file system, the document appears blank when opened. The docs say the call returns a "PDF File". My request code and the response body are shown below.
const doc = await rp.get(
  `${apiBaseUrl}/${BASE_URI_SUFFIX}/accounts/${accountId}/envelopes/${envelopeId}/documents/${document.documentId}`,
  { auth: { bearer: token } }
);
fs.writeFile(document.name, new Buffer(doc, "binary"), function (err) {
  if (err) throw err;
  console.log('Saved!');
});
Response body:
{
  "documents": [
    {
      "name": "Name of doc.docx",
      "content": "%PDF-1.5\n%\ufffd\ufffd\ufffd\ufffd\n%Writing objects...\n4 0 obj\n<<\n/Type /Page\n/Resources 5 0 R\n/Parent 3 0 R\n/MediaBox [0 0 612 792 ]\n/Contents [6 0 R 7 0 R 8 0 R 9 0 R 10 0 R ]\n/Group <<\n/Type /Group\n/S /Transparency\n/CS /DeviceRGB\n>>\n/Tabs /S\n/StructParents 0\n>>\nendobj\n5 0 obj\n<<\n/Font <<\n/F1 11 0 R\n/F2 12 0 R\n/F3 13 0 R\n>>\n/ExtGState <<\n/GS7 14 0 R\n/GS8 15 0 R\n>>\n/ProcSet [/PDF /Text ...
    }
  ]
}
Screenshot of document: [screenshot omitted; the opened PDF renders blank]
The EnvelopeDocuments::get API method returns the PDF itself, not an object as you are showing.
For a working example of the method, see example 7, part of the Node.js set of examples.
Added
Also, the fs.writeFile call supports writing from a string source. I'd try:
fs.writeFile(document.name, doc, { encoding: "binary" },
  function (err) {
    if (err) throw err;
    console.log('Saved!');
  });
Incorrect encoding
Your question shows the PDF's content as a string with the binary characters encoded as Unicode escapes:
"%PDF-1.5\n%\ufffd\ufffd\ufffd\ufffd\n%Writing objects...
but this is not correct. The beginning of a PDF file includes binary characters that are not displayable except in a hex editor. This is what you should see at the top of a PDF:
[hex dump of the first bytes of a PDF file]
Note the 10th character: it is hex c4. In your string, the equivalent character has been encoded as \ufffd (it is fine that they aren't the same character; they are two different PDFs). The fact that the character has been encoded at all is your problem.
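To see the mangling in isolation, here is a minimal Node.js sketch (the byte values are typical PDF binary-comment bytes, used for illustration only):
// Decoding raw PDF bytes as UTF-8 turns invalid byte sequences into U+FFFD.
const pdfStart = Buffer.from([
  0x25, 0x50, 0x44, 0x46, 0x2d, 0x31, 0x2e, 0x35, 0x0a, // "%PDF-1.5\n"
  0x25, 0xc4, 0xe5, 0xf2, 0xe5, 0x0a                     // "%" plus four binary bytes
]);
console.log(pdfStart.toString("utf8")); // "%PDF-1.5\n%\ufffd\ufffd\ufffd\ufffd\n" -- the binary bytes are gone for good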
Solutions
Convince the request library and the fs.writeFile method not to encode the data, or to decode it as needed. See this solution for the request library; a sketch follows below.
Or use the DocuSign Node.js SDK as I show in the example code referenced above.
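For the first approach, a minimal sketch, assuming the request-promise client from the question (its encoding: null option makes the promise resolve with a raw Buffer instead of a decoded string):
// encoding: null tells request-promise to return the body as a Buffer,
// so no character decoding ever happens before the file is written.
const doc = await rp.get(
  `${apiBaseUrl}/${BASE_URI_SUFFIX}/accounts/${accountId}/envelopes/${envelopeId}/documents/${document.documentId}`,
  { auth: { bearer: token }, encoding: null }
);
fs.writeFile(document.name, doc, function (err) {
  if (err) throw err;
  console.log('Saved!');
});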

Related

How can I add value to my args in discord.js?

message.channel.bulkDelete(args[0]+1)
.then(messages => message.channel.send(`${emojiyes} Deleted **${messages.size}** messages!`) | console.log(`Deleted ${messages.size} messages!`))
This deletes the wrong number of messages: for example, _clear 2 deletes 21 messages instead of 3. Can someone help me?
args[0] is a string, and when combining it with 1 you are computing "2" + 1, which results in "21". If you convert the string to a number first, the addition will work correctly. You can use the parseInt() function to convert the string into a number.
message.channel.bulkDelete(parseInt(args[0]) + 1)
  .then(messages => {
    message.channel.send(`${emojiyes} Deleted **${messages.size}** messages!`);
    console.log(`Deleted ${messages.size} messages!`);
  });
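To see the coercion on its own:
console.log("2" + 1);           // "21" -- string concatenation
console.log(parseInt("2") + 1); // 3   -- numeric addition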

How can I get the response size of headers in a REST API?

I know the headers include Content-Length, which gives the length of the response body, but I need the size of the response headers.
For example:
API Response =
{
"1": 1
}
If I print console.log(res.getHeader('content-length')); it gives 7, which is the content length of the body.
But I need the size of the full response, which is 377 bytes (headers + body), as shown in Postman.
One possible way to get the byte count is to download the resource with curl:
curl -so /dev/null http://www.yourip.org/http-your-file/ -w '%{size_download}'
where -w/--write-out defines what to display after a completed and successful operation. size_download is the body size; curl also provides %{size_header} for the bytes of the downloaded headers, so -w '%{size_header} %{size_download}' prints both.
There is an npm module, object-sizeof, that approximates the size of a JavaScript object. You can install it with npm install object-sizeof.
var sizeof = require('object-sizeof');

// 2B per character, 6 chars total => 12B
console.log(sizeof({abc: 'def'}));

// 8B for a Number => 8B
console.log(sizeof(12345));

var param = {
  'a': 1,
  'b': 2,
  'c': {
    'd': 4
  }
};
// 4 single-character keys at 2B each and 3 numbers at 8B each => 32B
console.log(sizeof(param));
You can use it however you want in your code. Example:
console.log(sizeof(response.body));
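Note that sizeof estimates the in-memory size of a JavaScript object, not the on-the-wire size Postman reports. If you need something closer to the raw header bytes, one sketch (assuming a Node.js http.ServerResponse like the res in the question, and a body string you are about to send) is to rebuild the header block and measure it:
// Rebuild the headers as they appear on the wire ("Key: value\r\n"),
// then measure the byte length of headers and body.
const rawHeaders = Object.entries(res.getHeaders())
  .map(([key, value]) => `${key}: ${value}\r\n`)
  .join("") + "\r\n"; // blank line that terminates the header block
const headerBytes = Buffer.byteLength(rawHeaders, "utf8");
const bodyBytes = Buffer.byteLength(body, "utf8");
console.log(`headers: ${headerBytes}B, body: ${bodyBytes}B, total: ${headerBytes + bodyBytes}B`);
(This omits the status line, so it will undercount slightly compared to Postman.)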

How to read binary data in pyspark

I'm reading the binary file http://snap.stanford.edu/data/amazon/productGraph/image_features/image_features.b using pyspark.
import array
from io import StringIO

img_embedding_file = sc.binaryRecords("s3://bucket/image_features.b", 4106)

def mapper(features):
    a = array.array('f')
    a.frombytes(features)
    return a.tolist()

def byte_mapper(bytes):
    return str(bytes)

decoded_embeddings = img_embedding_file.map(lambda x: [byte_mapper(x[:10]), mapper(x[10:])])
When just product_id is selected from the rdd using
decoded_embeddings = img_embedding_file.map(lambda x: [byte_mapper(x[:10]), mapper(x[10:])])
The output for product_id is
["b'1582480311'", "b'\\x00\\x00\\x00\\x00\\x88c-?\\xeb\\xe2'", "b'7#\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'", "b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'", "b'\\xec/\\x0b?\\x00\\x00\\x00\\x00K\\xea'", "b'\\x00\\x00c\\x7f\\xd9?\\x00\\x00\\x00\\x00'", "b'L\\xa6\\n>\\x00\\x00\\x00\\x00\\xfe\\xd4'", "b'\\x00\\x00\\x00\\x00\\x00\\x00\\xe5\\xd0\\xa2='", "b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'", "b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'"]
The file is hosted on s3.
Each record in the file has the first 10 bytes as product_id and the next 4096 bytes as image_features.
I'm able to extract all 4096 image features but am facing an issue when reading the first 10 bytes and converting them into a readable format.
EDIT:
Finally, the problem comes from the recordLength. It's not 4096 + 10 but 4096*4 + 10. Changing it to:
img_embedding_file = sc.binaryRecords("s3://bucket/image_features.b", 16394)
should work.
Actually, you can find this in the code provided on the web site where you downloaded the binary file:
for i in range(4096):
    feature.append(struct.unpack('f', f.read(4)))  # <-- so 4096 * 4
Old answer:
I think the issue comes from your byte_mapper function.
That's not the correct way to convert bytes to string. You should be using decode:
bytes = b'1582480311'
print(str(bytes))
# output: "b'1582480311'"
print(bytes.decode("utf-8"))
# output: '1582480311'
If you're getting the error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x88 in position 4: invalid start byte
That means the product_id bytes contain non-UTF-8 characters. If you don't know the input encoding, it's difficult to convert them into strings.
However, you can ignore those characters by passing the "ignore" option to the decode function:
bytes.decode("utf-8", "ignore")

How to use logstash to grok the message which is a hash

All,
I'm using logstash to ship logs from a remote server.
The message I get is a hash type like this:
[2014-12-06 23:59:57] 112.254.70.37 <AUDIO> {"type":"Stat", "eid":4800316, "mid":"87192133091532", "ccid":3228662, "ver":102, "ip":"114.113.200.227", "port":9081, "jitter":"0 0 0 0 0 ", "break":"0 0 0 0 0 ", "interrupt":"0 0 0 0 0 ", "tcp_rtt":"40 40 45 50 50 ", "udp_rtt":"31 33 35 40 35 ", "all_pkts":"107180 107193 107249 107323 107358 ", "lost":"0 0 0 0 0 ", "delay":"40.78", "pull":"3 3 3 3 3 "}
How can I write the grok part? I've searched the docs everywhere, but I still don't know how...
Thanks!
First, use a grok filter to parse out the JSON portion of your message. Then use a json filter to parse the hash into field:value pairs. With this config I can parse your log and create all the fields. Hope this helps.
input {
  stdin {
  }
}
filter {
  grok {
    match => [ "message" , "\[%{TIMESTAMP_ISO8601:datatime}\] %{IP:ip} <%{WORD:level}> %{GREEDYDATA:data}" ]
  }
  json {
    source => "data"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
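For the sample line above, the rubydebug output should include the grok captures plus every key from the JSON, along these lines (abbreviated, illustrative):
{
    "datatime" => "2014-12-06 23:59:57",
          "ip" => "112.254.70.37",
       "level" => "AUDIO",
        "type" => "Stat",
         "eid" => 4800316,
         "mid" => "87192133091532",
    ...
}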

How do I insert a row with a TimeUUIDType column in Cassandra?

In Cassandra, I have the following Column Family:
<ColumnFamily CompareWith="TimeUUIDType" Name="Posts"/>
I'm trying to insert a record into it as follows, using a C++ function generated by Thrift:
ColumnPath new_col;
new_col.__isset.column = true; /* this is required! */
new_col.column_family.assign("Posts");
new_col.super_column.assign("");
new_col.column.assign("1968ec4a-2a73-11df-9aca-00012e27a270");
client.insert("Keyspace1", "somekey", new_col, "Random Value", 1234, ONE);
However, I'm getting the following error: "UUIDs must be exactly 16 bytes"
I've even tried the Cassandra CLI with the following command:
set Keyspace1.Posts['somekey']['1968ec4a-2a73-11df-9aca-00012e27a270'] = 'Random Value'
but I still get the following error:
Exception null
InvalidRequestException(why:UUIDs must be exactly 16 bytes)
at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:11994)
at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:659)
at org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:632)
at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:420)
at org.apache.cassandra.cli.CliClient.executeCLIStmt(CliClient.java:80)
at org.apache.cassandra.cli.CliMain.processCLIStmt(CliMain.java:132)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:173)
Thrift is a binary protocol; 16 bytes means 16 bytes. "1968ec4a-2a73-11df-9aca-00012e27a270" is 36 bytes. You need to get your library to give you the raw, 16 bytes form.
I don't use C++ myself, but "version 1 uuid" is the magic string you want to google for when looking for a library that can do this. http://www.google.com/search?q=C%2B%2B+version+1+uuid
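The conversion itself is easy to illustrate. As a sketch in Node.js (illustrative only; a C++ uuid library would do the equivalent), stripping the dashes from the canonical string and decoding the hex yields the 16 raw bytes:
// A canonical UUID string is 36 characters; its raw form is 16 bytes.
const uuidStr = "1968ec4a-2a73-11df-9aca-00012e27a270";
const raw = Buffer.from(uuidStr.replace(/-/g, ""), "hex");
console.log(raw.length); // 16 -- this is what the insert call expects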
