I am downloading the signed documents using the DocuSign getDocument API, but the downloaded file comes back in a weird format (JSON, according to the Content-Type header). What format is this, and how can I convert it to a viewable/readable format?
Here is a snippet from the response -
%PDF-1.5\n%ûüýþ\n%Writing objects...\n4 0 obj\n<<\n/Type /Page\n/Resources 5 0 R\n/Parent 3 0 R\n/MediaBox [0 0 595.32000 841.92000 ]\n/Contents [6 0 R 7 0 R 8 0 R ]\n/Group <<\n/Type /Group\n/S /Transparency\n/CS /DeviceRGB\n>>\n/Tabs /S\n/StructParents 0\n>>\nendobj\n5 0 obj\n<<\n/Font <<\n/F1 9 0 R\n/F2 10 0 R\n/F3 11 0 R\n/F4 12 0 R\n/F5 13 0 R\n>>\n/ExtGState <<\n/GS7 14 0 R\n/GS8 15 0 R\n>>\n/ProcSet [/PDF /Text /ImageB /ImageC /ImageI ]\n/XObject <<\n/X0 16 0 R\n>>\n>>\nendobj\n2 0 obj\n<<\n/Producer (PDFKit.NET 21.1.200.20790)\n/CreationDate (D:20210429103256-07'00')\n/ModDate (D:20210429103256-07'00')\n/Author ()\n/Creator ()\n/Keywords <>\n/Subject ()\n/Title ()\n>>\nendobj\n6 0 obj\n<<\n/Length 4\n>>\nstream\n\n q \nendstream\nendobj\n7 0 obj\n<<\n/Filter /FlateDecode\n/Length 7326\n>>\nstream\nxœ½]ëo\u001c7’ÿnÀÿCã\u0016¸›YDm¾šd\u0007A\u0000É’\u001dçüZ[Þà\u0010ï\u0007Å’ma-É‘FÉæ¿?
What language are you using, and are you using one of the DocuSign SDKs or a raw API call?
When I make a GetDocument call (specifically {{vx}}/accounts/{{accountid}}/envelopes/{{envelopeId}}/documents/combined, for example), the response headers have Content-Type: application/pdf and Content-Disposition: file; filename=file.pdf, and the body of the response is the binary of the PDF file itself.
A snippet of mine begins with this:
%PDF-1.5
%����
%Writing objects...
4 0 obj
<<
/Type /Page
/Parent 3 0 R
/Resources 10 0 R
/MediaBox [0 0 612 792 ]
/Contents [11 0 R 12 0 R 13 0 R 14 0 R 15 0 R 16 0 R 17 0 R ]
/Group <<
/Type /Group
/S /Transparency
/CS /DeviceRGB
So it looks like whatever system you have picking up the response is turning the newlines into literal \n sequences and potentially mangling other control characters.
You'll want to look at how your tools handle the API response: if you could dump the raw output from DocuSign straight to a PDF file, that would work, but with the extra formatting being injected it's no longer a valid PDF.
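If you are making the raw API call yourself rather than using an SDK, a minimal sketch of that "dump the raw output" approach could look like the following (TypeScript on Node 18+ with its built-in fetch; baseUrl is assumed to already include the API version prefix, and accountId, envelopeId, accessToken and the output filename are placeholders, not values from this question):

import { writeFile } from "fs/promises";

// Sketch only: fetch the combined PDF and write the raw bytes to disk,
// with no string/JSON handling anywhere in between.
async function downloadCombinedPdf(baseUrl: string, accountId: string,
                                   envelopeId: string, accessToken: string): Promise<void> {
    const res = await fetch(
        `${baseUrl}/accounts/${accountId}/envelopes/${envelopeId}/documents/combined`,
        { headers: { Authorization: `Bearer ${accessToken}` } }
    );
    if (!res.ok) throw new Error(`getDocument failed: ${res.status}`);
    // arrayBuffer() keeps the payload binary; converting it to a string is what corrupts the PDF.
    await writeFile("combined.pdf", Buffer.from(await res.arrayBuffer()));
}

If the saved file starts with %PDF-1.5 followed by real newlines rather than literal \n characters, any PDF viewer will open it.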
I used this piece of code to save the response from the DocuSign getDocument API to a PDF document:
envelopesApi.getDocument(ACCOUNT_ID, envelope_id, 'combined', function (error, document, response) {
  if (document) {
    var filename = 'docusign.pdf';
    // Buffer.from replaces the deprecated new Buffer(); keep the payload binary when writing.
    fs.writeFile(filename, Buffer.from(document, 'binary'), function (err) {
      if (err) console.log('Error: ' + err);
    });
  }
});
I have a large data sheet (see below) in .csv format. I want to replace the numbers in each column and row with zero if it is smaller than a certain value, let's say 0.1.
Could anyone give me a hand? Thanks a lot.
I guess it can be done with sed as in this example
BCC_ACR_CR BCC_ACR_CU BCC_ACR_FE BCC_ACR_MN BCC_ACR_MO
0.2826027 3.89E-12 0.58420346 2.23E-13 0.2105587
0.27986588 3.80E-12 0.58501168 2.27E-13 0.20890705
0.27986588 3.80E-12 0.58501168 2.27E-13 0.20890705
0.27986588 3.80E-12 0.58501168 2.27E-13 0.20890705
0.28038733 3.81E-12 0.58196375 5.88E-05 0.21239142
0.26855376 3.27E-12 0.60364524 2.06E-13 0.11205138
0.27220042 3.28E-12 0.60349573 2.08E-13 0.11530944
0.36294869 3.14E-12 0.50515464 1.64E-13 3.12E-12
0.36294869 3.14E-12 0.50515464 1.64E-13 3.12E-12
0.40837234 3.07E-12 0.47202708 1.73E-13 3.03E-12
0.3643896 3.25E-12 0.50431576 1.63E-13 3.14E-12
0.3643896 3.25E-12 0.50431576 1.63E-13 3.14E-12
0.35885258 3.21E-12 0.50978952 1.64E-13 3.12E-12
Here is one for GNU awk. The field separator is assumed to be a run of spaces, so empty fields are not allowed:
$ gawk -v value=0.1 '              # pass the threshold value as a parameter
BEGIN { RS="( +\n?|\n)" }          # every field is treated as its own record
{
    ORS=RT                         # RT holds the separator text that actually matched RS
    if($0<value)                   # compare the record against the threshold
        $0=sprintf("0%-" length()-1 "s","")  # replace with 0, padded to the original field width
}1' file                           # print every (possibly modified) record
Output:
BCC_ACR_CR BCC_ACR_CU BCC_ACR_FE BCC_ACR_MN BCC_ACR_MO
0.2826027 0 0.58420346 0 0.2105587
0.27986588 0 0.58501168 0 0.20890705
0.27986588 0 0.58501168 0 0.20890705
0.27986588 0 0.58501168 0 0.20890705
0.28038733 0 0.58196375 0 0.21239142
0.26855376 0 0.60364524 0 0.11205138
0.27220042 0 0.60349573 0 0.11530944
0.36294869 0 0.50515464 0 0
0.36294869 0 0.50515464 0 0
0.40837234 0 0.47202708 0 0
0.3643896 0 0.50431576 0 0
0.3643896 0 0.50431576 0 0
0.35885258 0 0.50978952 0 0
I want to open Learn_full_data.txt, extract some rows from it, and write them to a new file called All_Data.txt using a for loop.
Learn_full_data.txt Table:
vp run trial img_order mimg perc_aha_norm perc_gen_norm moon_onset moon_pulse moon_pulse_time answer_time answer_pulse answer_pulse_time fix_time fix_pulse fixpulse_time flash_onset flash_pulse flash_pulse_time_(flash_onset) tar_time_(greyscale) tar_pulse tarpulse_time answer RT_answer aha RT_aha condition solved_testphase RT_solvedtest oldnew RT_oldnew remknow RT_remknow
1 1 1 70 mimg433 0,4375 0,5625 18066 6 20029 20083 7 22029 22099 8 24029 24116 8 24029 24633 10 28029 nicht_erkannt 1055 Aha 1145 exp 0 0 old 2030 know 381
1 1 2 146 mimg665 0,6 0,4 30666 12 32029 32683 13 34029 34699 16 40028 40716 16 40028 41233 18 44028 erkannt 990 keinAha 1240 exp 1 2758 old 634 rem 1063
2 1 1 130 mimg640 0,666667 1 17366 5 19328 19383 6 21328 21399 8 25328 25416 8 25328 25933 10 29328 erkannt 871 keinAha 2121 base 1 2891 old 3105 know 533
2 1 2 83 mimg500 0,454545 0,272727 33966 13 35328 35983 14 37328 37999 15 39328 40016 15 39328 40533 17 43328 nicht_erkannt 1031 Aha 1153 exp 0 0 new 2358 kA 2358
The Vp column has two subjects, so I created a list with the subjects from the Vp column (there are many more, but I've just pasted an excerpt here):
list = ['1','2']
Now I want to iterate over the list with this code (if the item in the list is the same as Vp, then write some rows from Learn_full_data.txt to All_Data.txt):
Learn = open('Learn_full_data.txt','r')
file = open('All_Data.txt','w')
file.write('Vp\tImg\tDescription\tPerc_gen_norm\tPerc_aha_norm\tCond\tGen\tRt_Gen\tRt_Solved\tInsight\tRt_Insight\tOldNew\tRt_OldNew\tRemKnow\tRt_RemKnow\n')
for i in list:
    for splitted in Learn:
        splitted = splitted.split()
        Vp = splitted[0]
        Img = str(splitted[4])
        Perc_gen_norm = splitted[6]
        Perc_aha_norm = splitted[5]
        Cond = splitted[26]
        Gen = splitted[22]
        Rt_Gen = splitted[23]
        Insight = splitted[24]
        Rt_Insight = splitted[25]
        Rt_Solved = splitted[28]
        OldNew = splitted[29]
        Rt_OldNew = splitted[30]
        RemKnow = splitted[31]
        Rt_Remknow = splitted[32]
        if i == str(Vp):
            file.write(str(Vp)+'\t'+str(Img)+'\t'+'Description'+'\t'+str(Perc_gen_norm)+'\t'+str(Perc_aha_norm)+'\t'+str(Cond)+'\t'+str(Gen)+'\t'+str(Rt_Gen)+'\t'+str(Insight)+'\t'+str(Rt_Insight)+'\t'+str(Rt_Solved)+'\t'+str(OldNew)+'\t'+str(Rt_OldNew)+'\t'+str(RemKnow)+'\t'+str(Rt_Remknow)+'\n')
The code output is just the first iteration of the list. I was expecting it to continue iterating:
Vp Img Description Perc_gen_norm Perc_aha_norm Cond Gen Rt_Gen Rt_Solved Insight Rt_Insight OldNew Rt_OldNew RemKnow Rt_RemKnow
1 mimg433 Description 0,5625 0,4375 exp nicht_erkannt 1055 Aha 1145 0 old 2030 know 381
1 mimg665 Description 0,4 0,6 exp erkannt 990 keinAha 1240 2758 old 634 rem 1063
The second iteration over the list never happens. The second item of the list is '2' and there are rows with Vp '2', so the second iteration should produce the same output for Vp '2' as it did for Vp '1'. Why does the for loop stop at Vp '1'?
The problem is that you iterate through all the lines of the file in the first iteration of your for i in list loop. In the second iteration, i.e. i = '2', the read cursor is still at the end of the file, so there is nothing left to read. You have to move it back to the first line in each iteration, which can be done with Learn.seek(0):
for i in list:
    Learn.seek(0)
    for splitted in Learn:
        splitted = splitted.split('\t')
        Vp = splitted[0]
        Img = str(splitted[4])
        Perc_gen_norm = splitted[6]
        Perc_aha_norm = splitted[5]
        Cond = splitted[26]
        Gen = splitted[22]
        Rt_Gen = splitted[23]
        Insight = splitted[24]
        Rt_Insight = splitted[25]
        Rt_Solved = splitted[28]
        OldNew = splitted[29]
        Rt_OldNew = splitted[30]
        RemKnow = splitted[31]
        Rt_Remknow = splitted[32]
        if i == str(Vp):
            file.write(str(Vp)+'\t'+str(Img)+'\t'+'Description'+'\t'+str(Perc_gen_norm)+'\t'+str(Perc_aha_norm)+'\t'+str(Cond)+'\t'+str(Gen)+'\t'+str(Rt_Gen)+'\t'+str(Insight)+'\t'+str(Rt_Insight)+'\t'+str(Rt_Solved)+'\t'+str(OldNew)+'\t'+str(Rt_OldNew)+'\t'+str(RemKnow)+'\t'+str(Rt_Remknow))
I'm writing a keyboard-events parser for Linux, using node.js. It's working somewhat okay, but sometimes it seems like node is skipping a few bytes. I'm using a ReadStream to get the data, handle it, process it, and eventually output it when a separator character is encountered (in my case, \n).
Here is the part of my class that handles the read data:
// This method is called through this callback:
// this.readStream = fs.createReadStream(this.path);
// this.readStream.on("data", function(a) { self.parse_data(self, a); });
EventParser.prototype.parse_data = function(self, data)
{
    /*
     * Data format (one event is 16 bytes):
     * {
     *   0x00 : struct timeval time { long sec (4), long usec (4) } (8 bytes)
     *   0x08 : __u16 type  (2 bytes)
     *   0x0A : __u16 code  (2 bytes)
     *   0x0C : __s32 value (4 bytes)
     * } = (16 bytes)
     */
    var dataBuffer = new Buffer(data);
    var slicedBuffer = dataBuffer.slice(0, 16);
    dataBuffer = dataBuffer.slice(16, dataBuffer.length);
    while (dataBuffer.length > 0 && slicedBuffer.length == 16)
    {
        var type = GetDataType(slicedBuffer),
            code = GetDataCode(slicedBuffer),
            value = GetDataValue(slicedBuffer);
        if (type == CST.EV.KEY)
        { // Key was pressed: KEY event type
            if (code == 42 && value == 1) { self.shift_pressed = true; }
            if (code == 42 && value == 0) { self.shift_pressed = false; }
            console.log(type + "\t" + code + "\t" + value + "\t(" + GetKey(self.shift_pressed, code) + ")")
            // GetKey uses a static array to get the actual character
            // based on whether the shift key is held or not
            if (value == 1)
                self.handle_processed_data(GetKey(self.shift_pressed, code));
            // handle_processed_data adds characters together, and outputs the string when encountering a
            // separator character (in this case, '\n')
        }
        // Take a new slice, and loop.
        slicedBuffer = dataBuffer.slice(0, 16);
        dataBuffer = dataBuffer.slice(16, dataBuffer.length);
    }
}
// My system is in little endian!
function GetDataType(dataBuffer) { return dataBuffer.readUInt16LE(8); }
function GetDataCode(dataBuffer) { return dataBuffer.readUInt16LE(10); }
function GetDataValue(dataBuffer) { return dataBuffer.readInt32LE(12); }
I'm basically filling up the data structure explained at the top using a Buffer. The interesting part is the console.log near the end, which prints everything interesting (related to the KEY event) that passes through our callback. Here is the resulting log, complete with the expected result and the actual result:
EventParserConstructor: Listening to /dev/input/event19
/* Expected result: CODE-128 */
/* Note that value 42 is the SHIFT key */
1 42 1 ()
1 46 1 (C)
1 42 0 ()
1 46 0 (c)
1 42 1 ()
1 24 1 (O)
1 42 0 ()
1 24 0 (o)
1 42 1 ()
1 32 1 (D)
1 42 0 ()
1 32 0 (d)
1 42 1 ()
1 18 1 (E)
1 42 0 ()
1 18 0 (e)
1 12 0 (-)
1 2 0 (1)
1 3 1 (2)
1 3 0 (2)
1 9 1 (8)
1 9 0 (8)
1 28 1 (
)
[EventParser_Handler]/event_parser.handle_processed_data: CODE28
/* Actual result: CODE28 */
/* The '-' and '1' events can be seen in the logs, but only */
/* as key RELEASED (value: 0), not key PRESSED */
We can clearly see the - and 1 character events passing by, but only as key releases (value: 0), not key presses. The weirdest thing is that most of the time, the events are correctly translated. But 10% of the time, this happens.
Is ReadStream eating up some bytes, occasionally? If yes, what alternative should I be using?
Thanks in advance!
Well, turns out that my loop was rotten.
I was assuming that the data would only arrive in whole 16-byte chunks... which obviously isn't always the case. So sometimes a packet of fewer than 16 bytes was left over and lost between two 'data' event callbacks.
I fixed this by adding an excessBuffer field to my class and using it to fill my initial slicedBuffer when receiving data.
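For anyone hitting the same issue, here is a minimal sketch of that carry-over idea in TypeScript. The names (ChunkReassembler, excessBuffer handling below) are illustrative rather than my exact code; the only assumption is that each event is exactly 16 bytes, as above.

import { Buffer } from "buffer";

const EVENT_SIZE = 16;

class ChunkReassembler {
    // Holds the 0..15 trailing bytes of the previous chunk until the rest arrives.
    private excessBuffer: Buffer = Buffer.alloc(0);

    // Returns only complete 16-byte events; any partial tail is carried over to the next call.
    feed(chunk: Buffer): Buffer[] {
        let data = Buffer.concat([this.excessBuffer, chunk]);
        const events: Buffer[] = [];
        while (data.length >= EVENT_SIZE) {
            events.push(data.subarray(0, EVENT_SIZE));
            data = data.subarray(EVENT_SIZE);
        }
        this.excessBuffer = data;
        return events;
    }
}

The 'data' handler then becomes something like readStream.on("data", chunk => { for (const event of reassembler.feed(chunk)) parseEvent(event); }), with parseEvent standing in for the parsing logic above, so no partial event is ever dropped between callbacks.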
I am maintaining and developing two Akka Scala applications that interface with a serial device to gather sensor information. The main difference between the two is that one (my CO2 sensor application) uses 1% CPU while the other (my Power sensor application) uses 250% CPU. This is the case both on a Linux machine (a Raspberry Pi 3) and on my Windows desktop PC. Code-wise, the main difference is that the CO2 app uses the serial library (http://fazecast.github.io/jSerialComm/) directly, while the Power sensor app goes through a layer of middleware to convert the In/OutputStreams of the serial library to an Akka Source/Sink, like so:
val port = SerialPort.getCommPort(comPort)
port.setBaudRate(baudRate)
port.setFlowControl(flowControl)
port.setComPortParameters(baudRate, dataBits, stopBits, parity)
port.setComPortTimeouts(timeoutMode, timeout, timeout)
val isOpen = port.openPort()
if(!isOpen) {
    error(s"Port $comPort could not be opened. Use the following documentation for troubleshooting: https://github.com/Fazecast/jSerialComm/wiki/Troubleshooting")
    throw new Exception("Port could not be opened")
}
(reactive.streamSource(port.getInputStream), reactive.streamSink(port.getOutputStream))
When I saw this high CPU usage I immediately attached a profiler (VisualVM), which showed the busy threads spending most of their time in Unsafe.park.
After googling for Unsafe.park I found this answer: https://stackoverflow.com/a/29414580/1122834. Using that information, I checked the amount of context switching with and without my Power sensor app running, and the results were very clear about the root cause of the issue:
pi@dex:~ $ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
10 0 32692 80144 71228 264356 0 0 0 5 7 8 38 5 55 2 0
1 0 32692 80176 71228 264356 0 0 0 76 12932 18856 59 6 35 0 0
1 0 32692 80208 71228 264356 0 0 0 0 14111 20570 60 8 32 0 0
1 0 32692 80208 71228 264356 0 0 0 0 13186 16095 65 6 29 0 0
1 0 32692 80176 71228 264356 0 0 0 0 14008 23449 56 6 38 0 0
3 0 32692 80208 71228 264356 0 0 0 0 13528 17783 65 6 29 0 0
1 0 32692 80208 71228 264356 0 0 0 28 12960 16588 63 6 31 0 0
pi@dex:~ $ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 32692 147320 71228 264332 0 0 0 5 7 8 38 5 55 2 0
0 0 32692 147296 71228 264332 0 0 0 84 963 1366 0 0 98 2 0
0 0 32692 147296 71228 264332 0 0 0 0 962 1347 1 0 99 0 0
0 0 32692 147296 71228 264332 0 0 0 0 947 1318 1 0 99 0 0
As you can see, the number of context switches went down by ~12000 a second just by killing my application. I continued by checking exactly which threads were doing this, and it seems Akka is really eager to do stuff.
Both a comment here and another SO question point towards tweaking Akka's parallelism settings. I added the following to my application.conf, to no avail.
akka {
  log-config-on-start = "on"
  actor {
    default-dispatcher {
      # Dispatcher is the name of the event-based dispatcher
      type = Dispatcher
      # What kind of ExecutionService to use
      executor = "fork-join-executor"
      # Configuration for the fork join pool
      default-executor {
        fallback = "fork-join-executor"
      }
      fork-join-executor {
        # Min number of threads to cap factor-based parallelism number to
        parallelism-min = 1
        # Parallelism (threads) ... ceil(available processors * factor)
        parallelism-factor = 1.0
        # Max number of threads to cap factor-based parallelism number to
        parallelism-max = 1
      }
      # Throughput defines the maximum number of messages to be
      # processed per actor before the thread jumps to the next actor.
      # Set to 1 for as fair as possible.
      throughput = 1
    }
  }
  stream {
    default-blocking-io-dispatcher {
      type = PinnedDispatcher
      executor = "fork-join-executor"
      throughput = 1
      thread-pool-executor {
        core-pool-size-min = 1
        core-pool-size-factor = 1.0
        core-pool-size-max = 1
      }
      fork-join-executor {
        parallelism-min = 1
        parallelism-factor = 1.0
        parallelism-max = 1
      }
    }
  }
}
This seems to improve the CPU usage (100% -> 65%), but it is still unnecessarily high.
UPDATE 21-11-'16
It would appear the problem is inside my graph. When the graph is not running, the CPU usage immediately drops to normal levels. The graph is the following:
val streamGraph = RunnableGraph.fromGraph(GraphDSL.create() { implicit builder =>
  import GraphDSL.Implicits._

  val responsePacketSource = serialSource
    .via(Framing.delimiter(ByteString(frameDelimiter), maxFrameLength, allowTruncation = true))
    .via(cleanPacket)
    .via(printOutput("Received: ", debug(_)))
    .via(byteStringToResponse)

  val packetSink = pushSource
    .via(throttle(throttle))

  val zipRequestStickResponse = builder.add(Zip[RequestPacket, ResponsePacket])
  val broadcastRequest = builder.add(Broadcast[RequestPacket](2))
  val broadcastResponse = builder.add(Broadcast[ResponsePacket](2))

  packetSink ~> broadcastRequest.in
  broadcastRequest.out(0) ~> makePacket ~> printOutput("Sent: ", debug(_)) ~> serialSink
  broadcastRequest.out(1) ~> zipRequestStickResponse.in0

  responsePacketSource ~> broadcastResponse.in
  broadcastResponse.out(0).filter(isStickAck) ~> zipRequestStickResponse.in1
  broadcastResponse.out(1).filter(!isStickAck(_)).map(al => {
    val e = completeRequest(al)
    debug(s"Sinking: $e")
    e
  }) ~> Sink.ignore

  zipRequestStickResponse.out.map { case (request, stickResponse) =>
    debug(s"Mapping: request=$request, stickResponse=$stickResponse")
    pendingPackets += stickResponse.sequenceNumber -> request
    request.stickResponse trySuccess stickResponse
  } ~> Sink.ignore

  ClosedShape
})
streamGraph.run()
When removing the filters from broadcastResponse, the CPU usage goes down to normal levels. This leads me to believe that the zip never happens, and therefore, the graph goes into an incorrect state.
The problem is that Fazecast's jSerialComm library has a number of different time-out modes.
static final public int TIMEOUT_NONBLOCKING = 0x00000000;
static final public int TIMEOUT_READ_SEMI_BLOCKING = 0x00000001;
static final public int TIMEOUT_WRITE_SEMI_BLOCKING = 0x00000010;
static final public int TIMEOUT_READ_BLOCKING = 0x00000100;
static final public int TIMEOUT_WRITE_BLOCKING = 0x00001000;
static final public int TIMEOUT_SCANNER = 0x00010000;
Using the non-blocking read() mode (TIMEOUT_NONBLOCKING) results in very high CPU usage when combined with Akka Streams' InputStreamPublisher. To prevent this, simply use TIMEOUT_READ_SEMI_BLOCKING or TIMEOUT_READ_BLOCKING instead, i.e. pass one of those constants as the timeout mode in port.setComPortTimeouts(...).
I have the following string (a 1x48 char) in a cell in MATLAB:
{ {1 , 0 , 0 } , { 0 , 1 , 0 } , { 0 , 0 , 1 } }.
I am trying to get three separate strings, each on its own line with the values separated by just spaces, so that the data will look like:
1 0 0
0 1 0
0 0 1
I would really appreciate it if anyone has an idea how to convert this in MATLAB.
Thanks
It is better to use the cell2mat function.
In your case you can try something like this:
temp = { {1 , 0 , 0 } , { 0 , 1 , 0 } , { 0 , 0 , 1 } };
out = [cell2mat(temp{1, 1}); cell2mat(temp{1, 2}); cell2mat(temp{1, 3})]
I hope it will help!!
You are looking for strjoin:
a = { {1 , 0 , 0 } , { 0 , 1 , 0 } , { 0 , 0 , 1 } };
b = strjoin(a, ' ')
Then:
b =
1 0 0 0 1 0 0 0 1
Edit: if you want to get new lines involved, you can use
b = [strjoin(a(1), ' '); strjoin(a(2), ' '); strjoin(a(3), ' ');]
Then:
b =
1 0 0
0 1 0
0 0 1
P.S.: strjoin works in MATLAB R2013b. For earlier versions, you can download the strjoin function from here.