How can I parse form-data manually in Node.js?
I receive chunks of a large file on the server side and can't merge them, because that would consume RAM equal to the file size plus the file-info size.
As far as I understand, I can't use libraries such as multer, connect-busboy, connect-multiparty, etc.
In simple terms, I currently receive the following data on the server side and can't figure out how to extract, for example, the file name, or how to determine which byte marks the beginning of the file data.
[1] Get request...
[1] data: <Buffer 2d 2d 2d 2d 2d 2d 57 65 62 4b 69 74 46 6f 72 6d 42 6f 75 6e 64 61 72 79 62 66 30 31 63 62 36 38 71 4d 41 4d 36 6d 51 31 0d 0a 43 6f 6e 74 65 6e 74 2d ... 65486 more bytes>
[1] data: <Buffer 02 10 00 00 01 ce 00 00 01 ef 00 00 14 e4 00 00 01 de 00 00 01 b2 00 00 01 d8 00 05 f0 1a 00 00 06 70 00 00 00 fa 00 00 00 cf 00 00 0a 3b 00 00 01 4a ... 65486 more bytes>
[1] data: <Buffer 31 52 2e 9a 5b df 11 19 5f c2 5d 38 ed db 23 00 06 05 bd 8f 71 e7 6b 03 9d 3a 88 e3 24 10 00 0f a1 10 43 82 ff 67 bd 5d eb 8e 59 fe a9 d3 e6 45 f2 9e ... 65486 more bytes>
[1] data: <Buffer 2b 6c cf 18 40 13 d8 fd 09 fa 56 25 e6 6d 6a 57 94 8c a4 9c d1 7b 99 bb 20 c0 21 e7 4a 1f 72 5a a5 3d 8e 84 5d 97 eb 1b 0f db b1 a9 81 f9 db ef 06 36 ... 65486 more bytes>
[1] data: <Buffer bb 29 4b 0d 06 1d 69 e1 c7 47 a7 99 c6 e4 c3 48 2e 85 6a b3 57 01 68 09 00 aa 6d b4 7b c7 07 2f 73 c0 c4 6b c4 48 9b 8f 81 64 8e 25 c7 a3 de c3 4f 2a ... 65486 more bytes>
[1] data: <Buffer 21 2b 1c dd 57 02 23 f5 43 a1 70 72 b4 8d 8d a8 4a 4b c1 e1 21 75 84 ed 07 00 34 de 5c 8d b0 0a 6f 99 60 28 34 b1 c5 bd a2 3c 3e d5 01 0f 62 57 a8 e8 ... 65486 more bytes>
[1] data: <Buffer 2d 96 e3 47 75 ce 25 63 77 26 15 96 b6 43 b9 d2 59 7d 12 c9 64 b2 11 f2 39 29 3e bf 83 1a f3 15 7b 14 ee f3 33 52 5c 39 34 75 06 41 2f e7 11 e8 26 4e ... 65486 more bytes>
[1] data: <Buffer 4a 9d 20 c7 a8 55 35 22 26 ff c7 b4 11 d9 6c 4d ce e1 a0 b2 b3 01 37 da 6a ad ae 98 fd 9b 8e 9c 8b 4a 27 8e e3 10 4a dc 80 f2 50 df 88 d9 c8 f9 ef 48 ... 65486 more bytes>
[1] data: <Buffer 52 ee b3 96 1f 7c af df fa 3f 9c 9a f7 01 20 7d 3f ea 4e 7e aa 4e d9 57 31 cb b5 f3 09 49 c7 ee 92 83 2e cb 58 d2 ea 1b b0 35 2a 3c 6c 27 75 0c d0 39 ... 65486 more bytes>
...more
...more
...and more
After receiving all the data above, I need to get the file name and forward each chunk of the file (not the other info, such as the name or MIME type) to another client. (It's a file-sharing web resource.)
Are there any other solutions?
Alternatively, I could send the file manually split into chunks of type ArrayBuffer, with each chunk carrying additional encoded information such as the file name and MIME type. But in that case I would have to split the file on the client side and run into the same lack-of-memory problem again.
A working, but somewhat insecure, solution: send the file name and MIME type separately from the main file data.
The part of the code responsible for sending data:
app.get(`/${senderId}/get`, (request, response) => {
  // Waiting for the sender's request now...
  app.post(`/${senderId}/send`, (requestSender, responseSender) => {
    // Start sharing...
    let countChunks = 0;
    requestSender.on("data", (chunk) => {
      if (countChunks === 0) {
        // TODO: the file name needs to be parsed out of the first chunk
        const fileName = "test_name";
        response.setHeader("Content-Disposition", `attachment; filename="${fileName}"`);
      }
      countChunks += 1;
      // TODO: find the first byte of the file data and start forwarding from there
      response.write(chunk);
    });
    requestSender.on("end", () => {
      response.end();
      responseSender.end(); // also finish the sender's request, or it will hang
    });
  });
});
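As for the two TODOs: the first chunk of a multipart/form-data body starts with the boundary line (the ------WebKitFormBoundary... visible in the first Buffer above), followed by the part headers (Content-Disposition with the file name, Content-Type) and a blank line (\r\n\r\n); everything after that blank line is file data, and the body ends with a closing boundary line. Here is a minimal sketch of how that could be peeled off by hand. It assumes the file is the only part, that the part headers fit entirely inside the first chunk, and that the closing boundary is not split across chunks; it reuses response from the surrounding GET handler, as in the code above:
// Sketch only, not production-ready.
app.post(`/${senderId}/send`, (requestSender, responseSender) => {
  // The boundary token is announced in the sender's Content-Type header.
  const boundary = "--" + requestSender.headers["content-type"].split("boundary=")[1];
  let headersParsed = false;

  requestSender.on("data", (chunk) => {
    if (!headersParsed) {
      // First chunk layout: --boundary\r\n<part headers>\r\n\r\n<first file bytes>
      const headerEnd = chunk.indexOf("\r\n\r\n");
      const headerText = chunk.slice(0, headerEnd).toString("latin1");
      const match = headerText.match(/filename="([^"]*)"/);
      const fileName = match ? match[1] : "unknown";
      response.setHeader("Content-Disposition", `attachment; filename="${fileName}"`);
      headersParsed = true;
      chunk = chunk.slice(headerEnd + 4); // skip the blank line: this is the first file byte
    }
    // Strip the trailing "\r\n--boundary--" once it shows up (last chunk).
    const tail = chunk.indexOf("\r\n" + boundary + "--");
    if (tail !== -1) {
      chunk = chunk.slice(0, tail);
    }
    response.write(chunk);
  });

  requestSender.on("end", () => {
    response.end();
    responseSender.end();
  });
});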
I'm trying to encrypt data via RSA using a public key (128 bytes == 1024 bits) received from an auth server.
Here is the code in Node.js:
const NodeRSA = require('node-rsa');
const openData = Buffer.from('example');
const rsaPublicKey = Buffer.from('04 D4 8B 30 F6 1C 89 8B 36 0B 32 BB 64 25 ED C0 76 1D 23 76 A9 49 D4 E7 24 99 24 C4 2E D7 D8 90 96 AF EE 53 3F 65 CE 3F 42 34 AB 56 47 7B 9A DD D5 7C 97 21 6F 37 2D 5A 7A E0 72 08 38 7A 18 85 AA FF C8 14 96 84 BB 65 33 68 11 E5 C4 9F CE 9F 19 1A C7 29 A5 13 80 4B D4 7E 8C 63 81 A1 FE 99 7D 11 35 46 08 93 BF D2 23 28 47 04 B4 B6 2B EF 5D 30 CF 33 CB D5 0E 28 A6 87 63 62 22 1E 46 74'.split(' ').join(''), 'hex');
const key = new NodeRSA();
key.importKey({
  n: rsaPublicKey,
  e: 65537
}, 'components-public');
const encryptedData = key.encrypt(openData);
But I get this error:
/Users/xok/node_modules/node-rsa/src/NodeRSA.js:283
throw Error('Error during encryption. Original error: ' + e);
^
Error: Error during encryption. Original error: Error: error:0306E06C:bignum routines:BN_mod_inverse:no inverse
at NodeRSA.module.exports.NodeRSA.$$encryptKey (/Users/xok/node_modules/node-rsa/src/NodeRSA.js:283:19)
at NodeRSA.module.exports.NodeRSA.encrypt (/Users/xok/node_modules/node-rsa/src/NodeRSA.js:238:21)
If I use another key received from the same auth server, everything works:
const rsaPublicKey = Buffer.from('41 F5 0E FC 66 16 4D 28 89 E8 50 C9 8A CD C7 64 A7 5B D8 D0 98 4E 29 9F 52 FC 24 6C EA A5 5B 23 CD 37 B5 1E 9F F9 61 C5 FD C7 95 35 51 13 A0 4A 94 7E FA 23 92 0E DA 4E AD B8 98 86 6F EC 7D D4 C3 DA BF 98 01 A0 3F 8C 7A EC EE CB 53 2F 26 4C 66 2D D6 48 48 25 02 09 85 35 9F 6F F8 5F F7 1B BD 0A E0 02 61 B8 81 6A EE B2 F3 B0 BA EF 18 25 48 B6 1B 73 CB 32 33 E7 13 A7 3B D1 D7 D8 95 A9'.split(' ').join(''), 'hex');
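For reference, here is a quick way to dump a few properties of each key buffer for comparison (a diagnostic sketch only; the helper name is made up). One thing worth checking is the last byte, since a real RSA modulus is a product of two large odd primes and is therefore always odd:
// Diagnostic sketch only (not a fix): print a few properties of a candidate modulus.
function describeModulus(label, n) {
  console.log(label, {
    byteLength: n.length,                      // 128 expected for a 1024-bit key
    firstByte: n[0].toString(16),
    lastByte: n[n.length - 1].toString(16),
    lastByteOdd: (n[n.length - 1] & 1) === 1   // a valid RSA modulus is always odd
  });
}

describeModulus("failing key", rsaPublicKey);  // the first buffer above
// repeat with the second buffer to compare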
Question: could you tell me what I'm doing wrong, please?
I am using the FIO tool on Linux to run some I/Os. I am interested in looking at the data contents that are generated as part of the FIO command.
My command:
sudo fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=write --bs=4k --direct=0 --size=512M --numjobs=1 --runtime=240 --group_reporting --filename=venkata --buffer_compress_percentage=50 --output=fioad
I would like to see how the data is generated with the 50% compress-buffer option. Is there any way to look at / output the FIO I/O input data?
Is there any way to look / output the FIO IO input data?
Just examine the file the I/O is being done on (venkata) with a tool like hexdump. One caution: because your I/O file is 512 megabytes, you will almost certainly want to use hexdump's -n flag or pipe the output to less to keep your terminal from overflowing. Here's a safe, reduced example job to make analysis quicker and easier:
$ fio --name=bufcontents --filename=/tmp/fio.tmp --size=4k --bs=4k --buffer_compress_percentage=50
bufcontents: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
[...]
Run status group 0 (all jobs):
READ: bw=4000KiB/s (4096kB/s), 4000KiB/s-4000KiB/s (4096kB/s-4096kB/s), io=4096B (4096B), run=1-1msec
$ hexdump -C /tmp/fio.tmp
00000000 35 e0 28 cc 38 a0 99 16 06 9c 6a a9 f2 cd e9 0a |5.(.8.....j.....|
00000010 80 53 2a 07 09 e5 0d 15 70 4a 25 f7 0b 39 9d 18 |.S*.....pJ%..9..|
00000020 4e a9 ac d9 8e ab 9d 13 29 95 8e 86 9b 48 4e 12 |N.......)....HN.|
00000030 a5 52 3d 26 cc 05 db 1b 54 2a 75 db 9a 4d d8 1d |.R=&....T*u..M..|
00000040 4a a5 44 c6 f8 9b 39 00 a9 94 23 c6 5c d0 90 0c |J.D...9...#.\...|
00000050 95 f2 6f ce f9 b6 c2 13 52 7e 83 40 a7 6f ce 07 |..o.....R~.#.o..|
00000060 ca 6f e7 28 b3 2d e4 10 f9 ed 37 ad 42 f1 48 0f |.o.(.-....7.B.H.|
00000070 bf 7d aa 5e 8c c7 d6 00 b7 cf f5 4c 9c a9 cd 08 |.}.^.......L....|
00000080 f6 39 c3 a1 b8 8e 8c 18 3e 67 3d 77 f5 40 ef 0b |.9......>g=w.#..|
00000090 e7 ac 48 fb 7f 2c 35 1c 9c 95 f5 a8 eb a7 d7 19 |..H..,5.........|
000000a0 b3 b2 50 aa 82 20 89 0f 56 96 f0 fb e7 ce d4 03 |..P.. ..V.......|
000000b0 ca 12 53 b4 e4 9b e0 17 59 62 25 0d 53 b9 0f 0e |..S.....Yb%.S...|
000000c0 4b 2c 78 b0 97 70 47 13 89 85 e9 df d6 15 6a 09 |K,x..pG.......j.|
000000d0 b1 b0 38 19 c6 d2 c0 0e 16 96 ce 6a bc 0d 0c 15 |..8........j....|
000000e0 c2 d2 4e 42 50 4c dd 08 58 da e8 9e 62 88 c1 15 |..NBPL..X...b...|
000000f0 4b 1b b1 e6 97 e1 ee 00 69 a3 30 6f da e8 9e 17 |K.......i.0o....|
00000100 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200 a8 71 09 48 d3 ad 5f c5 35 2e 2d b0 b5 51 5a 13 |.q.H.._.5.-..QZ.|
[...]
(0x100 is 256 in decimal, 0x200 is 512 in decimal, etc.)
Alternatively, since you asked this on Stack Overflow (which is for programming questions) and fio is open source with its source available on GitHub, we can read the source there. (Note that you didn't say which version of fio you are using, so I shall assume the latest at the time of writing, which is fio-3.21.)
https://github.com/axboe/fio/blob/fio-3.21/io_u.c#L2166 (fill_io_buffer() in io_u.c): when compress_percentage is non-zero, fio will repeatedly call fill_random_buf_percentage() until it has created a block-sized buffer worth of data. In the example job you set a block size of 4k (so the min and max block sizes will both be 4k) and compress_chunk will be 512 (its default). Therefore fill_random_buf_percentage() will be called 8 times, each time with its segment and len parameters set to 512.
https://github.com/axboe/fio/blob/fio-3.21/lib/rand.c#L137 (__fill_random_buf_percentage() in rand.c): this works out what percentage of the segment it is filling needs to be random data; the rest is set either to zeros (the default) or to the buffer_pattern (if set). With your example, the first 256 bytes of each 512-byte segment will be random data and the next 256 will be zeros.
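To make that layout concrete, here is a toy Node.js sketch (not fio's code, just a model of the pattern described above) that builds one 4k block the same way: eight 512-byte segments, each with its first 256 bytes random and the rest zeros:
const crypto = require("crypto");

// Toy model of the fill described above (NOT fio's implementation).
const blockSize = 4096;          // --bs=4k
const compressChunk = 512;       // fio's default compress_chunk
const compressPercentage = 50;   // --buffer_compress_percentage=50

const block = Buffer.alloc(blockSize); // zero-filled
for (let offset = 0; offset < blockSize; offset += compressChunk) {
  // The non-compressible part of each segment is filled with random bytes,
  // the remaining (compressible) part stays zero.
  const randomLen = Math.floor((compressChunk * (100 - compressPercentage)) / 100);
  crypto.randomFillSync(block, offset, randomLen);
}

console.log(block.subarray(0, 16));    // random, like offsets 0x000-0x0ff in the hexdump
console.log(block.subarray(256, 272)); // zeros, like offsets 0x100-0x1ff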
I would like to automate a task in Google Sheets (because it's quite tedious and time-consuming to do it manually).
   |  A  |   B   |   C   |   D   |   E   |   F   |   G   |   H   |   I   |   J
 1 | A00 | D1 4F | F6 24 | C8 BD | 4F 75 | 9E 7D | E7 53 | 98 6C | 9F EC | 4E C4
 2 | A01 | C5 8F | 8E 68 | E5 39 | 7D 41 | 36 6B | 38 3D | 99 54 | 61 83 | C6 42
 3 | A02 | 72 F4 | 99 CA | 91 C1 | 1E 58 | 25 3A | 96 33 | 91 4E | FC 87 | 70 1C
 4 | A03 | 65 11 | 17 82 | 78 3F | 56 18 | 23 77 | 2F B3 | 4A B0 | 67 FF | 66 1B
 5 | A04 | D4 BA | D8 58 | F5 3A | DA 21 | 32 43 | 03 95 | 94 18 | 78 76 | 68 53
 6 | A05 | 1D ED | 7D 41 | 86 BB | A4 07 | CC 00 | 5F 17 | BB 7D | B3 30 | 28 C8
 7 | A06 | 98 97 | EF 9A | 85 53 | E5 A9 | 8D 3A | C7 6C | 8D D0 | 44 FF | 1C 4C
 8 | A07 | 8F 26 | E1 BB | 88 46 | 74 46 | 42 0F | E2 B7 | 4D 5C | 34 F0 | 4A C5
 9 | A08 | AD FD | 61 93 | EF 9F | 50 7A | 10 24 | 65 6D | 2F 2D | BF F4 | 45 1B
10 | A09 | 5A 5F | A2 93 | A6 F6 | 76 DB | D5 FE | 3F 33 | 28 3E | 3E F9 | F5 8F
11 | A10 | B3 30 | 07 7E | 9A AA | 70 AB | 78 63 | 16 E4 | 23 E4 | 93 3B | BA 28
12 | A11 | 24 A6 | DA 5F | 15 CC | E4 F3 | AB 18 | 4B FE | EB 2E | 2D 74 | 9A AC
13 | A12 | C4 0D | 22 54 | DA 9F | 8A 69 | A8 B3 | 44 2B | 91 C1 | 7D 41 | 40 17
14 | A13 | 01 1E | BA FC | 27 89 | 71 5A | 1B BF | B3 01 | E4 73 | A3 9F | DE 24
15 | A14 | 6E D7 | 71 8F | 44 B6 | 4C 16 | 95 A6 | BF C6 | 21 B9 | D0 48 | 08 DA
16 | A15 | 79 F2 | E7 53 | D9 4D | 3D B4 | 3B 7E | 9D 80 | 25 EB | 7F 0B | 43 33
If I search for a value (for example "67 FF"), I would first like to display the value of column A corresponding to the row of the match (in this example: "67 FF" => I4 => A4 => "A03").
Secondly, I would like to display (in another cell) the number of rows and columns (the table loops vertically) relative to cell B1 (in the same example: "67 FF" => I4 => 3 rows & 7 columns).
But if there are several results, I would like to display the one closest to B1 (the table loops vertically).
Other examples:
"E7 53" => G1 ; C16 => C16 => A16 => "A15" / 1 row & 1 column
"7D 41" => E2 ; C6 ; I13 => E2 => A2 => "A01" / 1 row & 3 columns
"B3 30" => I6 ; B11 => B11 => A11 => "A10" / 6 rows & 0 columns
I hope it's not too late to answer. Nonetheless, please check google-sheet-lookup-value-in-2d-range.
The spreadsheet contains formulas to look up a value by looping both horizontally and vertically.
Formula to look up a value by looping vertically:
=ifs(
NOT(ISERROR(MATCH(A29,$B$1:$J$1,0))),MATCH(A29,$B$1:$J$1,0)+1&","&1,
NOT(ISERROR(MATCH(A29,$B$2:$J$2,0))),MATCH(A29,$B$2:$J$2,0)+1&","&2,
NOT(ISERROR(MATCH(A29,$B$3:$J$3,0))),MATCH(A29,$B$3:$J$3,0)+1&","&3,
NOT(ISERROR(MATCH(A29,$B$4:$J$4,0))),MATCH(A29,$B$4:$J$4,0)+1&","&4,
NOT(ISERROR(MATCH(A29,$B$5:$J$5,0))),MATCH(A29,$B$5:$J$5,0)+1&","&5,
NOT(ISERROR(MATCH(A29,$B$6:$J$6,0))),MATCH(A29,$B$6:$J$6,0)+1&","&6,
NOT(ISERROR(MATCH(A29,$B$7:$J$7,0))),MATCH(A29,$B$7:$J$7,0)+1&","&7,
NOT(ISERROR(MATCH(A29,$B$8:$J$8,0))),MATCH(A29,$B$8:$J$8,0)+1&","&8,
NOT(ISERROR(MATCH(A29,$B$9:$J$9,0))),MATCH(A29,$B$9:$J$9,0)+1&","&9,
NOT(ISERROR(MATCH(A29,$B$10:$J$10,0))),MATCH(A29,$B$10:$J$10,0)+1&","&10,
NOT(ISERROR(MATCH(A29,$B$11:$J$11,0))),MATCH(A29,$B$11:$J$11,0)+1&","&11,
NOT(ISERROR(MATCH(A29,$B$12:$J$12,0))),MATCH(A29,$B$12:$J$12,0)+1&","&12,
NOT(ISERROR(MATCH(A29,$B$13:$J$13,0))),MATCH(A29,$B$13:$J$13,0)+1&","&13,
NOT(ISERROR(MATCH(A29,$B$14:$J$14,0))),MATCH(A29,$B$14:$J$14,0)+1&","&14,
NOT(ISERROR(MATCH(A29,$B$15:$J$15,0))),MATCH(A29,$B$15:$J$15,0)+1&","&15,
NOT(ISERROR(MATCH(A29,$B$16:$J$16,0))),MATCH(A29,$B$16:$J$16,0)+1&","&16
)
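If the formula approach gets unwieldy, the same lookup can also be scripted in Google Apps Script (which is JavaScript). Below is a minimal sketch under the assumption, read from the examples above, that "closest to B1" means the smallest sum of the wrapped row distance and the column distance; the function name and ranges are only illustrative:
function findClosest(searchValue) {
  // Minimal sketch: labels in A1:A16, the two-byte hex pairs in B1:J16.
  const sheet = SpreadsheetApp.getActiveSheet();
  const labels = sheet.getRange("A1:A16").getValues();  // "A00" .. "A15"
  const data = sheet.getRange("B1:J16").getValues();    // 16 rows x 9 columns of pairs
  const numRows = data.length;

  let best = null;
  for (let r = 0; r < numRows; r++) {
    for (let c = 0; c < data[r].length; c++) {
      if (data[r][c] !== searchValue) continue;
      const rowDist = Math.min(r, numRows - r);  // vertical loop; B1 is row index 0
      const colDist = c;                         // column B is index 0
      const score = rowDist + colDist;
      if (best === null || score < best.score) {
        best = { score: score, label: labels[r][0], rows: rowDist, cols: colDist };
      }
    }
  }
  return best;
}

// Example: findClosest("67 FF") -> { score: 10, label: "A03", rows: 3, cols: 7 }
// Example: findClosest("B3 30") -> { score: 6,  label: "A10", rows: 6, cols: 0 }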
I got some ADTS AAC raw data from somewhere (actually it was extracted from a demuxed file), and in theory it should be correctly encoded. It looks like this:
Frame 1:
21 19 94 ED A1 09 45 58 09
40 02 CA AA 85 D4 E5 C5 58 A9 73 00 0C 75 1C 5D
A7 4E 52 40 90 38 71 9C 65 D5 C4 22 0B 28 7D EF
F8 42 33 15 03 BA 6C DE B1 74 B4 A1 4E 0A 21 05
15 34 6B FD D9 E7 8F BF FF 79 5C D3 7D 90 79 F6
65 57 08 3A F7 C5 14 85 5E D7 C3 7D 2A 85 E1 7A
86 BA 3A AC 13 0D AE D1 1B 65 69 B6 71 92 E5 8A
BC CB 5C 7A 6F D7 F2 2B 38 C9 0E 2A 40 2F 8E 90
9B 1F A2 3A 9C 39 A8 35 CE 69 14 CD 64 54 70 00
50 07 CE 37 83 6E F0 01 18 AA A8 49 B2 8B 8F A1
37 17 1C 06 00 00 00 06 00 72
Frame 2:
21 19 95 14 C2 0A A9 61 19 8B CB 9B 56 AE A7
0A A0 34 DA EA D9 34 28 0C F8 DC 0C 30 97 12 A7
DD 3F F5 FE 7B 65 52 61 6D 7F DA BE D3 EB 30 CA
A6 94 54 8E D4 0A 32 E1 EA FD AD 02 82 B5 1E 40
4C 04 3A BE 56 21 5D 7D 5D B3 31 2A 5D AF 4E FF
A6 48 B9 42 E3 87 DE 5C 59 4B B9 BB C3 2C AD 50
6B 35 C8 24 6C 06 82 86 B2 26 17 E2 C6 DD 9A 43
53 91 D3 68 8D 67 8E 7D 0A 28 EB 7D F1 BB FC 56
5E 13 25 F9 77 E6 27 BF DA 4E 09 38 86 20 0A 00
F9 C6 F0 1D DE 00 21 05 4F 28 C0 A0 5F 0E 18 00
03 00 0E
.....
And each following frame has a strangely similar header:
21 19 xx xx
For examples:
21 19 94 E1 ..
21 19 95 03 ..
....
So do you know what this header means?
This is what ADTS AAC looks like, for example for stereo:
adts_header()
channel_pair_element()
adts_header()
channel_pair_element()
adts_header()
channel_pair_element()
adts_header()
channel_pair_element()
etc...
This doesn't seem to be an ADTS header at all. The ADTS header is typically not used inside another container such as MP4; it is used for standalone AAC files only. The ADTS header starts with the 12-bit syncword 1111 1111 1111, i.e. all ones, and that is not the case in your example.
In case the muxer stripped out whatever header there was, you might have raw AAC, which should start with single_channel_element() for mono or channel_pair_element() for stereo.
single_channel_element() starts with the 3 bits 000
channel_pair_element() starts with the 3 bits 001
Your sample starts with 0010 0001 0001 1001, so it might be a channel_pair_element().
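If you want to verify that on the raw bytes, here is a small Node.js sketch (assuming the frame is in a Buffer) that checks both the ADTS syncword and the leading 3-bit element id:
// Small sketch: test the first bytes of a frame against the two possibilities above.
function inspectFrameStart(buf) {
  const isAdtsSync = buf[0] === 0xff && (buf[1] & 0xf0) === 0xf0; // 12-bit syncword 0xFFF
  const idSynEle = buf[0] >> 5;                                   // first 3 bits of a raw frame
  return {
    isAdtsSync, // true -> starts with an ADTS header
    idSynEle    // 0 -> single_channel_element(), 1 -> channel_pair_element()
  };
}

// First two bytes of Frame 1 above: 0x21 0x19
console.log(inspectFrameStart(Buffer.from([0x21, 0x19])));
// -> { isAdtsSync: false, idSynEle: 1 }, i.e. it looks like a channel_pair_element()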
You probably have stereo but without any header, like so:
channel_pair_element()
channel_pair_element()
channel_pair_element()
channel_pair_element()
etc.
You should ask the muxer to tell you the number of channels, sampling rate, etc., and then you are ready to continue decoding. The muxer should grab this info from the MP4 or whichever container your AAC was originally in.
It is most likely the MPEG-4 LATM format. If you run the mediainfo tool to check, it will output something like the following:
$mediainfo a.aac
General
Complete name : a.aac
Format : LATM
File size : 821 KiB
Overall bit rate mode : Variable
Audio
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : HE-AACv2 / HE-AAC / LC
Bit rate mode : Variable
Channel(s) : 2 channels / 1 channel / 1 channel
Channel positions : Front: L R / Front: C / Front: C
Sampling rate : 48.0 KHz / 48.0 KHz / 24.0 KHz
Compression mode : Lossy
Such a format is usually generated after the ADTS header has been removed, or comes from a DTV channel. DTV data transfer uses the LATM format to save bandwidth, so there is no ADTS header; instead, a codec-config buffer is used to initialize the decoder.