Node.js: how to ungzip a GET response made through a TCP socket

When I send a GET request to the Hearthstone website and concatenate all of the received frames, I still get a compressed string.
My request:
GET /hearthstone/en/ HTTP/1.1
Host: eu.battle.net
Connection: keep-alive
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Referer: https://www.google.fr/
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.8,fr;q=0.6
Cookie: eu-cookie-compliance-agreed=1; _ga=GA1.3.780909635.1489783325; _gat_bnetgtm=1
My logs:
[Socket] > Connected !
Connected to 185.60.115.40
[Socket - Data] > Received data !
--- Header ---
HTTP/1.1 200 OK
Date: Sat, 18 Mar 2017 10:56:12 GMT
Server: Apache
X-Frame-Options: SAMEORIGIN
Retry-After: 600
Content-Language: en-GB
Vary: Accept-Encoding
Content-Encoding: gzip
Keep-Alive: timeout=5, max=4000
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html;charset=UTF-8
4331
[... gzip-compressed binary data, unprintable ...]
[Socket - Data] > Received data !  (this line repeated 14 times)
[Socket] > End message received !
--- HTML ---
<Buffer 48 54 54 50 2f 31 2e 31 20 32 30 30 20 4f 4b 0d 0a 44 61 74 65 3a 20 53 61 74 2c 20 31 38 20 4d 61 72 20 32 30 31 37 20 31 30 3a 35 36 3a 31 32 20 47 ... >
My code:
send ()
{
    let chunks = []

    this.client.socket.once('data', data =>
    {
        /* Header (the first frame starts with it) */
        chunks.push(data)
        console.log('--- Header ---')
        console.log(data.toString())

        this.client.socket.on('data', data =>
        {
            chunks.push(data)
        })
    })

    this.client.socket.on('end', () =>
    {
        let html = Buffer.concat(chunks)
        console.log('--- HTML ---')
        console.log(html)
        let decoded = require('zlib').gunzipSync(html) // throws: buffer still contains the HTTP header and chunk framing
    })

    this.client.write( this.request ) // Above request
}
What I don't understand is why this goes wrong. Am I doing this properly?
Thanks!
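For what it's worth, the log above already shows why gunzipSync fails: the concatenated buffer is not pure gzip data. It still begins with the HTTP status line and headers (the Buffer dump starts with 48 54 54 50, i.e. "HTTP"), and the body is wrapped in chunked transfer-encoding framing: the 4331 line in the log is a chunk size in hexadecimal, not part of the compressed stream. A raw TCP client has to strip both before inflating. A minimal sketch of that (simplified: no chunk extensions or trailers, and it assumes the whole response has already been received):

const zlib = require('zlib')

// `raw` is Buffer.concat(chunks) -- everything read from the socket.
function decodeResponse (raw)
{
    // 1. Split the header block from the body at the first blank line.
    const headerEnd = raw.indexOf('\r\n\r\n')
    let body = raw.slice(headerEnd + 4)

    // 2. Undo chunked transfer encoding: each chunk is prefixed by its
    //    size in hex (e.g. "4331") followed by CRLF, and ends with CRLF.
    const parts = []
    while (body.length)
    {
        const lineEnd = body.indexOf('\r\n')
        const size = parseInt(body.slice(0, lineEnd).toString(), 16)
        if (!size) break // the "0" chunk marks the end of the body
        parts.push(body.slice(lineEnd + 2, lineEnd + 2 + size))
        body = body.slice(lineEnd + 2 + size + 2)
    }

    // 3. Only now is the payload valid gzip data.
    return zlib.gunzipSync(Buffer.concat(parts)).toString()
}

In practice it is far easier to let Node's http module do the header parsing and de-chunking, and pipe the response body through zlib.createGunzip().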

Related

Connecting React Production build with Express Gateway

Our React development build runs flawlessly with the Express Gateway setup on localhost. After building React for production and running serve -s build, the login page comes up, as it is the entry point of the app. Submitting sign-in credentials gets a 200 OK response, but when we looked into it we could see the request to the server was not successful: the token saved to the browser's application storage is undefined, and the response body is "You need to enable JavaScript...". JS is enabled, no doubt. I have checked by using
axios.post('http://localhost:8080/api/v1/auth/sign-in', userData)
It works fine, but when it goes through the proxy setup:
axios.post('/auth/sign-in', userData)
it doesn't work.
Here is the relevant part of the YAML for the Express Gateway setup:
http:
  port: 8080
apiEndpoints:
  auth-service:
    host: "*"
    paths: ["/api/v1/auth/*", "/api/v1/auth"]
  mail-service:
    host: "*"
    paths: ["/api/v1/mail/*", "/api/v1/mail"]
serviceEndpoints:
  auth-service-endpoint:
    url: http://localhost:3003/
  mail-service-endpoint:
    url: http://localhost:3005/
policies:
  - proxy
pipelines:
  auth-service-pipeline:
    apiEndpoints:
      - auth-service
    policies:
      - proxy:
          action:
            serviceEndpoint: auth-service-endpoint
            changeOrigin: true
            stripPath: true
  mail-service-pipeline:
    apiEndpoints:
      - mail-service
    policies:
      - proxy:
          action:
            serviceEndpoint: mail-service-endpoint
            changeOrigin: true
            stripPath: true
I put setupProxy.js in the src directory of the React app:
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function(app) {
  app.use(createProxyMiddleware('/api/v1', {
    target: 'http://localhost:8080',
    secure: false,
    changeOrigin: true,
    // pathRewrite: {
    //   "^/api": "/api/v1",
    // }
  }));
};
Currently everything is on the same machine. We are not using Docker.
The application works in the dev environment; in the production build the request returns 200 OK but behaves as described above.
Any help will be appreciated.
[Edit]
krypton:admin-dashboard-server hasan$ curl -v http://localhost:3001/find_all_services/1/10
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3001 (#0)
> GET /find_all_services/1/10 HTTP/1.1
> Host: localhost:3001
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< X-DNS-Prefetch-Control: off
< X-Frame-Options: SAMEORIGIN
< Strict-Transport-Security: max-age=15552000; includeSubDomains
< X-Download-Options: noopen
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Content-Type: application/json; charset=utf-8
< Content-Length: 1833
< ETag: W/"729-LM91B3vCUrbvesBrp32ykiXXkQo"
< Date: Tue, 12 Jan 2021 14:57:24 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
[{"id":1,"name":"Laser Hair Remove"},
{"id":2,"name":"Facial Treatments"}
]
krypton:admin-dashboard-server hasan$ curl -v http://localhost:8080/api/v1/services/find_all_services/1/10
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /api/v1/services/find_all_services/1/10 HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< access-control-allow-origin: *
< x-dns-prefetch-control: off
< x-frame-options: SAMEORIGIN
< strict-transport-security: max-age=15552000; includeSubDomains
< x-download-options: noopen
< x-content-type-options: nosniff
< x-xss-protection: 1; mode=block
< content-type: application/json; charset=utf-8
< content-length: 1833
< etag: W/"729-LM91B3vCUrbvesBrp32ykiXXkQo"
< date: Tue, 12 Jan 2021 15:03:45 GMT
< connection: keep-alive
<
* Connection #0 to host localhost left intact
[{"id":1,"name":"Laser Hair Remove"},
{"id":2,"name":"Facial Treatments"}
]
krypton:admin-dashboard-server hasan$ curl -v -H "Content-Type: application/json" -X POST -d '{"email":"mh.mithun@gmail.com","password":"safe123"}' http://localhost:8080/api/v1/auth/sign-in
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> POST /api/v1/auth/sign-in HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 52
>
* upload completely sent off: 52 out of 52 bytes
< HTTP/1.1 200 OK
< access-control-allow-origin: *
< x-dns-prefetch-control: off
< x-frame-options: SAMEORIGIN
< strict-transport-security: max-age=15552000; includeSubDomains
< x-download-options: noopen
< x-content-type-options: nosniff
< x-xss-protection: 1; mode=block
< content-type: application/json; charset=utf-8
< content-length: 270
< etag: W/"10e-S+kd8b4Yfl7un04FVGe3MFLFEaY"
< date: Tue, 12 Jan 2021 15:40:12 GMT
< connection: keep-alive
<
* Connection #0 to host localhost left intact
"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhbGdvcml0aG0iOiJIUzI1N"
krypton:admin-dashboard-server hasan$ curl -v -H "Content-Type: application/json" -X POST -d '{"email":"mh.mithun@gmail.com","password":"safe123"}' http://localhost:3003/sign-in
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 3003 (#0)
> POST /sign-in HTTP/1.1
> Host: localhost:3003
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 52
>
* upload completely sent off: 52 out of 52 bytes
< HTTP/1.1 200 OK
< Access-Control-Allow-Origin: *
< X-DNS-Prefetch-Control: off
< X-Frame-Options: SAMEORIGIN
< Strict-Transport-Security: max-age=15552000; includeSubDomains
< X-Download-Options: noopen
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Content-Type: application/json; charset=utf-8
< Content-Length: 270
< ETag: W/"10e-LW/1l5fXf5BaiF3KJMvG60xRthE"
< Date: Tue, 12 Jan 2021 15:45:33 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhbGdvcml0aG0i"

Save video file received from multipart form in Serverless Offline

I have a website running Angular 4 with a simple form uploading data using ng2-file-upload. I'm sending those files to a Node.js-based serverless-offline server, where my intention is simply to write the files received from the form to disk.
I tried many different approaches, and in the end I found this right here, which parses the form from the event into JSON. The resulting JSON contains a buffer in one of its fields with the video data, like so:
{ Host: 'localhost:3000',
'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:59.0) Gecko/20100101 Firefox/59.0',
Accept: '*/*',
'Accept-Language': 'en-US,en;q=0.5',
'Accept-Encoding': 'gzip, deflate',
Referer: 'http://localhost:4200/myupload',
'Content-Length': 2391623,
'Content-Type': 'multipart/form-data; boundary=---------------------------2125290100942661667805976894',
Origin: 'http://localhost:4200',
Connection: 'keep-alive' }
{ file:
{ type: 'file',
filename: 'y9K18CGEeiI.webm',
contentType: 'video/webm',
content: <Buffer 1a 45 e3 01 00 00 00 00 00 00 1f 42 fd fd 01 42 fd fd 01 42 fd 04 42 fd 08 42 fd fd 77 65 62 6d 42 fd fd 02 42 fd fd 02 18 53 fd 67 01 00 00 00 00 14 ... > } }
Now what I'm trying to do is to save the file in the buffer using fs:
const fs = require('fs');
// 'multipart' is the form parser referred to above ("this right here");
// its parse(event, spotText) signature is used unchanged.

module.exports.handler = (event, context, callback) => {
  let data = multipart.parse(event, false);
  fs.writeFile('meme.webm', data.file.content, 'binary', function(err) {
    if (err) {
      console.log(err);
    } else {
      console.log('saved!');
    }
  });
  // etc ...
};
The file saves to disk with the correct size (1.3 MB), the same as the original file. Unfortunately, I can't seem to open it on the other side, and I assume it's either because of the encoding or because of the way I'm writing it to disk. Any ideas?
For anyone with this problem, check this issue right here. It's a problem with serverless-offline converting file data, and it seems there's not much that can be done other than applying the fork.
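One cheap way to confirm that the bytes (rather than the write itself) are at fault: every valid WebM/Matroska file begins with the 4-byte EBML magic 1A 45 DF A3, while the buffer dumped above begins 1a 45 e3 01. A sketch of such a check (the function name is made up):

// Valid WebM/Matroska files start with the EBML magic bytes.
const EBML_MAGIC = Buffer.from([0x1a, 0x45, 0xdf, 0xa3]);

function looksLikeWebm(buf) {
  return buf.length >= 4 && buf.slice(0, 4).equals(EBML_MAGIC);
}

// e.g. inside the handler, before writing to disk:
// if (!looksLikeWebm(data.file.content)) console.warn('buffer already corrupted');

If the check fails on data.file.content, the corruption happened before fs.writeFile ran, which matches the serverless-offline issue mentioned above.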

Use express sendFile for HEAD requests

sendFile is for sending files and it also figures out some interesting headers from the file (like content length). For a HEAD request I would ideally want the exact same headers but just skip the body.
There doesn't seem to be an option for this in the API. Maybe I can override something in the response object to stop it from sending anything?
Here's what I got:
res.sendFile(file, { headers: hdrs, lastModified: false, etag: false })
Has anyone solved this?
As Robert Klep has already written, sendFile already has the required behavior of sending the headers and not sending the body when the request method is HEAD.
In addition to that, Express already handles HEAD requests for routes that have GET handlers defined. So you don't even need to define any HEAD handler explicitly.
Example:
let app = require('express')();
let file = __filename;
let hdrs = { 'X-Custom-Header': '123' };

app.get('/file', (req, res) => {
  res.sendFile(file, { headers: hdrs, lastModified: false, etag: false });
});

app.listen(3322, () => console.log('Listening on 3322'));
This sends its own source code on GET /file as can be demonstrated with:
$ curl -v -X GET localhost:3322/file
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3322 (#0)
> GET /file HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:3322
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< X-Custom-Header: 123
< Accept-Ranges: bytes
< Cache-Control: public, max-age=0
< Content-Type: application/javascript
< Content-Length: 267
< Date: Tue, 11 Apr 2017 10:45:36 GMT
< Connection: keep-alive
<
[...]
The [...] is the body that was not included here.
Without adding any new handler this will also work:
$ curl -v -X HEAD localhost:3322/file
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3322 (#0)
> HEAD /file HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost:3322
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< X-Custom-Header: 123
< Accept-Ranges: bytes
< Cache-Control: public, max-age=0
< Content-Type: application/javascript
< Content-Length: 267
< Date: Tue, 11 Apr 2017 10:46:29 GMT
< Connection: keep-alive
<
This is the same but with no body.
Express uses send to implement sendFile, which already does exactly what you want.
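If an explicit HEAD route is ever wanted anyway (say, to send different headers for HEAD than for GET), the same mechanism applies; a minimal sketch reusing the names from the example above:

// send/sendFile notices req.method === 'HEAD' and emits the headers
// (Content-Length, Content-Type, ...) without writing the body.
app.head('/file', (req, res) => {
  res.sendFile(file, { headers: hdrs, lastModified: false, etag: false });
});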

Difference between curl expressions

I have an API server running at localhost:3000 and I am trying to query it using these two expressions:
[wani@lenovo ilparser-docker]$ time (curl "localhost:3000/parse?lang=hin&data=देश" )
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.023s
user 0m0.009s
sys 0m0.004s
[wani@lenovo ilparser-docker]$ time (curl -XGET localhost:3000/parse -F lang=hin -F data="देश" )
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m1.101s
user 0m0.020s
sys 0m0.070s
Why does the second expression take so much more time?
With more verbosity:
[wani@lenovo ilparser-docker]$ time curl -v localhost:3000/parse -F lang=hin -F data="देश"
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
> POST /parse HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 244
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------1eb5e5991b976cb1
>
* Done waiting for 100-continue
< HTTP/1.1 200 OK
< Content-Length: 70
< Server: Mojolicious (Perl)
< Content-Type: application/json;charset=UTF-8
< Date: Mon, 21 Mar 2016 11:06:09 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m1.106s
user 0m0.027s
sys 0m0.068s
[wani@lenovo ilparser-docker]$ time curl -v localhost:3000/parse --data lang=hin --data data="देश"
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> POST /parse HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 23
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 23 out of 23 bytes
< HTTP/1.1 200 OK
< Server: Mojolicious (Perl)
< Content-Length: 70
< Connection: keep-alive
< Date: Mon, 21 Mar 2016 11:06:24 GMT
< Content-Type: application/json;charset=UTF-8
<
* Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.031s
user 0m0.011s
sys 0m0.003s
Expect: 100-continue sounded fishy, so I cleared that header:
[wani@lenovo ilparser-docker]$ time curl -v -F lang=hin -F data="देश" "localhost:3000/parse" -H Expect: --trace-time
16:48:04.513691 * Trying 127.0.0.1...
16:48:04.513933 * Connected to localhost (127.0.0.1) port 3000 (#0)
16:48:04.514083 * Initializing NSS with certpath: sql:/etc/pki/nssdb
16:48:04.610095 > POST /parse HTTP/1.1
16:48:04.610095 > Host: localhost:3000
16:48:04.610095 > User-Agent: curl/7.43.0
16:48:04.610095 > Accept: */*
16:48:04.610095 > Content-Length: 244
16:48:04.610095 > Content-Type: multipart/form-data; boundary=------------------------24f30647b16ba82d
16:48:04.610095 >
16:48:04.618107 < HTTP/1.1 200 OK
16:48:04.618194 < Content-Length: 70
16:48:04.618249 < Server: Mojolicious (Perl)
16:48:04.618306 < Content-Type: application/json;charset=UTF-8
16:48:04.618370 < Date: Mon, 21 Mar 2016 11:18:04 GMT
16:48:04.618430 < Connection: keep-alive
16:48:04.618492 <
16:48:04.618590 * Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.117s
user 0m0.023s
sys 0m0.082s
Now the only time-consuming step left is Initializing NSS with certpath: sql:/etc/pki/nssdb. Why does curl do that in this context?
After a little help on IRC from @DanielStenberg, I came to know that the DB load happens because curl initializes NSS in this case: curl needs a good random source for the boundary separator used by -F. curl could have used the getrandom() syscall or read bits out of /dev/urandom, since boundary separators don't need to be cryptographically secure in any way, but curl wants secure randomness in some other places, so it reuses the random function it already has.

D2C message using IoT Hub

I'd like to ask you a favor. I have a problem confirming a previously read D2C message using IoT Hub. I am using the REST API to pick up the message like this (I have replaced the sig):
Request:
GET https://iot-hub-pospa.azure-devices.net/devices/18596c88-01e6-3f16-427b-10028d7305c5/messages/devicebound?api-version=2015-08-15-preview HTTP/1.1
IoTHub-MessageLockTimeout: 3600
Accept: application/json
Authorization: SharedAccessSignature sr=iot-hub-pospa.azure-devices.net&sig={sig}&se=1485558838&skn=iothubowner
Host: iot-hub-pospa.azure-devices.net
If-None-Match: "1c5006a4-2288-4a2f-b7ea-dcdf9b5bbc99"
Connection: Close
X-P2P-PeerDist: Version=1.1
X-P2P-PeerDistEx: MinContentInformation=1.0, MaxContentInformation=2.0
Accept-Encoding: peerdist
Response:
HTTP/1.1 200 OK
Content-Length: 35
ETag: "dfc78580-d251-4156-a5f6-c2a30811a504"
Server: Microsoft-HTTPAPI/2.0
iothub-messageid: 02cdb012-9749-48a9-bfb3-5812a4740675
iothub-to: /devices/18596c88-01e6-3f16-427b-10028d7305c5/messages/deviceBound
iothub-expiry:
iothub-correlationid:
iothub-ack: full
iothub-sequencenumber: 56
iothub-enqueuedtime: 2/2/2016 9:57:34 AM
iothub-deliverycount: 0
Date: Tue, 02 Feb 2016 10:21:43 GMT
Connection: close
2/2/2016 10:57:34 AM - Test message
Then, when confirming it, I get an HTTP 412:
Request:
DELETE https://iot-hub-pospa.azure-devices.net/devices/18596c88-01e6-3f16-427b-10028d7305c5/messages/devicebound/02cdb012-9749-48a9-bfb3-5812a4740675?api-version=2015-08-15-preview HTTP/1.1
Accept: application/json
If-Match: "02cdb012-9749-48a9-bfb3-5812a4740675"
Authorization: SharedAccessSignature sr=iot-hub-pospa.azure-devices.net&sig={sig}&se=1485558838&skn=iothubowner
Host: iot-hub-pospa.azure-devices.net
Content-Length: 0
Connection: Close
Response:
HTTP/1.1 412 Precondition Failed
Content-Length: 330
Content-Type: application/json; charset=utf-8
Server: Microsoft-HTTPAPI/2.0
iothub-errorcode: DeviceMessageLockLost
Date: Tue, 02 Feb 2016 10:21:49 GMT
Connection: close
 
{"Message":"ErrorCode:DeviceMessageLockLost;Message 02cdb012-9749-48a9-bfb3-5812a4740675 lock was lost for Device 18596c88-01e6-3f16-427b-10028d7305c5\r\nTracking Id:05994074a3664933a0910b5fc70e04e5-G:GatewayWorkerRole.6-B:1-P:cffe397b-f627-4435-bd54-48f5ba79c3ca-TimeStamp:02/02/2016 10:21:49\r\nErrorCode:DeviceMessageLockLost"}
Does anybody know what I should do to successfully confirm/delete a message from IoT Hub? Thanks.
static DeviceClient _deviceClient;

_deviceClient = DeviceClient.CreateFromConnectionString(<IoTHubURI>, TransportType.Http1);

public void SendMessage(IDictionary<string, object> dictionary)
{
    Microsoft.Azure.Devices.Client.Message message = new Microsoft.Azure.Devices.Client.Message();
    try
    {
        foreach (var r in dictionary)
        {
            message.Properties[r.Key] = r.Value.ToString();
        }
        _deviceClient.SendEventAsync(message);
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
}
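One detail that stands out when comparing the two transcripts (an observation, not a verified fix): the GET returned ETag "dfc78580-d251-4156-a5f6-c2a30811a504", yet the DELETE used the iothub-messageid (02cdb012-...) both in the URL and in the If-Match header. In the IoT Hub REST API the ETag acts as the lock token for completing a device-bound message, so the completion request should reference the ETag instead. A sketch in Node.js (kept in this language for consistency with the rest of the page; the SAS token is assumed):

const https = require('https');

// ETag returned by the GET above; it is the lock token.
const etag = 'dfc78580-d251-4156-a5f6-c2a30811a504';

const req = https.request({
  host: 'iot-hub-pospa.azure-devices.net',
  method: 'DELETE',
  path: `/devices/18596c88-01e6-3f16-427b-10028d7305c5/messages/devicebound/${etag}`
      + '?api-version=2015-08-15-preview',
  headers: {
    Authorization: 'SharedAccessSignature sr=...', // assumed: same SAS as above
    'If-Match': `"${etag}"`,
  },
}, res => {
  console.log('Status:', res.statusCode); // expect 204 No Content on success
});
req.end();

A 412 DeviceMessageLockLost can also simply mean the message lock expired before the DELETE arrived, but the timestamps above are only seconds apart, so the mismatched identifier looks like the more likely culprit.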
