How to query jayson (nodejs) with curl - node.js

I am trying to figure out how to query the "jayson" npm package for Node.js (available from https://github.com/tedeh/jayson) using curl. I am using the test program below on the server side. Node is running properly and responds to curl, but only with an error output. I can't work out what should be passed in curl, or what should be changed in the Node program:
var jayson = require(__dirname + '/../..'); // resolves to the jayson package when run from its examples directory; elsewhere this would be require('jayson')
var server = jayson.server({
  // echoes the first param back to the caller
  echo: function(msg, callback) {
    if (msg != null)
      callback(null, msg);
  },
  // returns the sum of the two params
  add: function(a, b, callback) {
    if ((a != null) && (b != null))
      callback(null, a + b);
  }
});
server.http().listen(90);
This is the curl command:
$ curl -v -i -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "id":"curltest", "method":"echo", "params": ["hello"] }' http://localhost:90
and this is the Node.js answer received by curl when the request body is not what jayson expects:
* About to connect() to localhost port 90 (#0)
* Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 90 (#0)
> POST / HTTP/1.1
> User-Agent: curl/7.28.1
> Host: localhost:90
> Accept: */*
> Content-Type: application/json
> Content-Length: 16
>
* upload completely sent off: 16 out of 16 bytes
* HTTP 1.1 or later with persistent connection, pipelining supported
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Content-Length: 79
Content-Length: 79
< Content-Type: application/json
Content-Type: application/json
< Date: Tue, 25 Mar 2014 19:26:21 GMT
Date: Tue, 25 Mar 2014 19:26:21 GMT
< Connection: keep-alive
Connection: keep-alive
<
* Connection #0 to host localhost left intact
{"jsonrpc":"2.0","id":null,"error":{"code":-32600,"message":"Invalid request"}}* Closing connection #0
Many thanks for the help,
Rémi

You are running curl from Windows. That's why the single quotes around the parameters don't work for you. Change them into double quotes (escaping the inner ones):
curl -i -X POST -H "Content-Type: application/json" -d "{\"echo\": \"Name\"}" http://localhost:90/?
Also, try running the command with -v appended at the end. It will show you curl's debug messages. Show us that output if it still doesn't work for you.
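For reference, the JSON-RPC 2.0 request body from the question can be escaped the same way for the Windows shell; a rough sketch (cmd.exe has no single-quote handling, so every inner quote is backslash-escaped):
curl -v -i -X POST -H "Content-Type: application/json" -d "{\"jsonrpc\": \"2.0\", \"id\": \"curltest\", \"method\": \"echo\", \"params\": [\"hello\"]}" http://localhost:90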

I found the trick and I have corrected the error above. The request has to be JSON-RPC 2.0 compliant, since jayson only accepts JSON-RPC 2.0.
This is the correct curl command line for the "echo" script:
$ curl -v -i -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "id":"curltest", "method":"echo", "params": ["hello"] }' localhost:90
This is the correct curl command line for the "add" script:
$ curl -v -i -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "id":"curltest", "method":"add", "params": [1, 2] }' http://localhost:90
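Assuming the server from the question is listening on port 90, the responses should look roughly like these standard JSON-RPC 2.0 result objects (exact formatting may differ):
{"jsonrpc":"2.0","id":"curltest","result":"hello"}
{"jsonrpc":"2.0","id":"curltest","result":3}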

Related

How to send curl request with post data imported from a file

I have a curl command like the one below which works fine, and I get the response back. I am posting JSON data to an endpoint which returns a response after being hit:
curl -v 'url' -H 'Accept-Encoding: gzip, deflate, br' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Connection: keep-alive' -H 'DNT: 1' -H 'Origin: url' --data-binary '{"query":"\n{\n data(clientId: 1234, filters: [{key: \"o\", value: 100}], key: \"world\") {\n title\n type\n pottery {\n text\n pid\n href\n count\n resource\n }\n }\n}"}' --compressed
Now I am trying to read the body from a temp.json file instead, but somehow it doesn't work and I get an error:
curl -v 'url' -H 'Accept-Encoding: gzip, deflate, br' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Connection: keep-alive' -H 'DNT: 1' -H 'Origin: url' --data-binary "@/Users/david/Downloads/temp.json" --compressed
This is what I have stored in the temp.json file:
{
data(clientId: 1234, filters: [{key: "o", value: 100}], key: "world") {
title
type
pottery {
text
pid
href
count
resource
}
}
}
This is the error I am getting -
.......
* upload completely sent off: 211 out of 211 bytes
< HTTP/1.1 500 Internal Server Error
< date: Fri, 28 May 2021 23:38:12 GMT
< server: envoy
< content-length: 0
< x-envoy-upstream-service-time: 1
<
* Connection #0 to host url left intact
* Closing connection 0
Is there anything wrong in my above curl command?
Update
If I put into temp.json the exact same content that I have inline in my original curl, with the \n escapes, then it works fine. So it looks like that is the issue.
Does that mean I need to find a way to convert real newlines to \n in temp.json before sending the curl request, or is there another way?
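One possible approach (a sketch, assuming jq 1.6+ is installed and that temp.json holds only the raw GraphQL query, without the surrounding {"query": ...} wrapper): let jq build the JSON body, which turns the real newlines into \n escapes automatically:
curl -v 'url' -H 'Content-Type: application/json' -H 'Accept: application/json' \
  --data-binary "$(jq -n --rawfile q /Users/david/Downloads/temp.json '{query: $q}')" --compressed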

How do I redirect all of the output of a cURL POST command to stdout

Apologies if this has been asked before; I've trawled through a lot of similar questions but wasn't able to figure it out, so here goes:
I have a cURL command that does an HTTP POST.
How do I make its output get redirected to standard output?
The command I am using inside a docker-container is the following:
curl -v -X POST "http://username:pass123#data-service:8081/api" -H "Content-Type: application/json" -d #postBody
This should give an output something like:
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 172.18.0.7...
* TCP_NODELAY set
* Connected to data-service (172.18.0.7) port 8081 (#0)
* Server auth using Basic with user 'username'
> POST /v1/sources HTTP/1.1
> Host: data-service:8081
> Authorization: Basic Z38JEsJ65JI9128hhtJlZW21XQ==
> User-Agent: curl/7.52.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 192
>
* upload completely sent off: 192 out of 192 bytes
< HTTP/1.1 200 OK
< Content-Type: application/json
< Transfer-Encoding: chunked
< Server: Jetty(8.1.8.v20121106)
<
{
"action" : "GO"
* Curl_http_done: called premature == 0
* Connection #0 to host data-service left intact
}
How do I redirect all of this to stdout, including the returned body?
I have tried the following but it doesn't redirect everything:
curl -vs POST "http://username:pass123@data-service:8081/api" -H "Content-Type: application/json" -d @postBody 2> dev/console
Any ideas?!
Thank you.
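For what it's worth, curl writes the -v trace to stderr while the response body goes to stdout, so merging the two streams (or pointing curl's --stderr option at stdout) should capture everything; a rough sketch based on the command above:
curl -vs -X POST "http://username:pass123@data-service:8081/api" -H "Content-Type: application/json" -d @postBody 2>&1
# or let curl itself send the trace to stdout
curl -vs --stderr - -X POST "http://username:pass123@data-service:8081/api" -H "Content-Type: application/json" -d @postBody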

CouchDB can't get cookie auth session for nonadmin user

For admin user:
$ curl -X POST localhost:5984/_session -d "username=admin&password=admin"
{"ok":true,"name":"admin","roles":["_admin"]}
$ curl -vX GET localhost:5984/_session --cookie AuthSession=YWRtaW...
{"ok":true,"userCtx":{"name":"admin","roles":["_admin"]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"],"authenticated":"cookie"}}
but for regular user:
$ curl -vX POST localhost:5984/_session -d "username=user&password=123"
{"ok":true,"name":"user","roles":["users"]}
$ curl -vX GET localhost:5984/_session --cookie AuthSession=ZGlqbzo...
{"ok":true,"userCtx":{"name":null,"roles":[]},"info":{"authentication_db":"_users","authentication_handlers":["cookie","default"]}}
The same thing happens when I'm doing an XmlHttpRequest via the iron-ajax element, or simply from Chrome. What am I doing wrong?
CouchDB version: 2.1.1
Config:
[chttpd]
bind_address = 0.0.0.0
port = 5984
authentication_handlers = {couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}
[httpd]
enable_cors = true
[couch_httpd_auth]
allow_persistent_cookies = true
timeout = 60000
[cors]
credentials = true
origins = *
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE
I didn't quite get your problem, but here is what I do with curl to authenticate with a cookie as a nonadmin user:
First I run curl with the -v option to see the header fields:
$ curl -k -v -X POST https://192.168.1.106:6984/_session -d 'username=jan&password=****'
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 192.168.1.106...
* Connected to 192.168.1.106 (192.168.1.106) port 6984 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 604 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification SKIPPED
* server certificate status verification SKIPPED
* error fetching CN from cert:The requested data were not available.
* common name: (does not match '192.168.1.106')
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: O=Tech Studio
* start date: Sat, 31 Mar 2018 04:37:51 GMT
* expire date: Tue, 30 Mar 2021 04:37:51 GMT
* issuer: O=Tech Studio
* compression: NULL
* ALPN, server did not agree to a protocol
> POST /_session HTTP/1.1
> Host: 192.168.1.106:6984
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Length: 25
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 25 out of 25 bytes
< HTTP/1.1 200 OK
< Set-Cookie: AuthSession=amFuOjVBRTk3MENGOuKAb68qYzf5jJ7bIOq72Jlfw-Qb; Version=1; Secure; Path=/; HttpOnly
< Server: CouchDB/2.1.1 (Erlang OTP/18)
< Date: Wed, 02 May 2018 08:03:27 GMT
< Content-Type: application/json
< Content-Length: 44
< Cache-Control: must-revalidate
<
{"ok":true,"name":"jan","roles":["sample"]}
* Connection #0 to host 192.168.1.106 left intact
I see in the above header fields the cookie:
Set-Cookie: AuthSession=amFuOjVBRTk3MENGOuKAb68qYzf5jJ7bIOq72Jlfw-Qb; Version=1; Secure; Path=/; HttpOnly
I use the above cookie to authenticate as a nonadmin user and get the user info for the same nonadmin user like this:
$ curl -k -X GET https://192.168.1.106:6984/_users/org.couchdb.user:jan -H 'Cookie: AuthSession=amFuOjVBRTk3MENGOuKAb68qYzf5jJ7bIOq72Jlfw-Qb'
{"_id":"org.couchdb.user:jan","_rev":"3-f11b227a6e1236fa502af668fdbf326d","name":"jan","roles":["sample"],"type":"user","password_scheme":"pbkdf2","iterations":10,"derived_key":"a973123ebd9dbc2a543d477a506268b018e7aab4","salt":"0ef2111a894062b08ffd723fd34b6b75"}
The problem was gone when I removed this line from my local.ini:
authentication_handlers = {couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}
I had used the wrong handler (couch_httpd_auth) in the [chttpd] config, when that handler is only written to work with the original couch_httpd module.
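With that override removed, the nonadmin check from the question should come back with a populated userCtx; a quick re-test of the same flow (the AuthSession value is whatever the Set-Cookie response header returns):
$ curl -v -X POST localhost:5984/_session -d "username=user&password=123"
$ curl -X GET localhost:5984/_session --cookie "AuthSession=<value-from-Set-Cookie>"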

Difference between curl expressions

I have an API server running at localhost:3000 and I am trying to query it using these two expressions:
[wani#lenovo ilparser-docker]$ time (curl "localhost:3000/parse?lang=hin&data=देश" )
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.023s
user 0m0.009s
sys 0m0.004s
[wani@lenovo ilparser-docker]$ time (curl -XGET localhost:3000/parse -F lang=hin -F data="देश" )
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m1.101s
user 0m0.020s
sys 0m0.070s
Why does the second expression take so much more time?
With more verbosity:
[wani@lenovo ilparser-docker]$ time curl -v localhost:3000/parse -F lang=hin -F data="देश"
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
> POST /parse HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 244
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------1eb5e5991b976cb1
>
* Done waiting for 100-continue
< HTTP/1.1 200 OK
< Content-Length: 70
< Server: Mojolicious (Perl)
< Content-Type: application/json;charset=UTF-8
< Date: Mon, 21 Mar 2016 11:06:09 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m1.106s
user 0m0.027s
sys 0m0.068s
[wani@lenovo ilparser-docker]$ time curl -v localhost:3000/parse --data lang=hin --data data="देश"
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 3000 (#0)
> POST /parse HTTP/1.1
> Host: localhost:3000
> User-Agent: curl/7.43.0
> Accept: */*
> Content-Length: 23
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 23 out of 23 bytes
< HTTP/1.1 200 OK
< Server: Mojolicious (Perl)
< Content-Length: 70
< Connection: keep-alive
< Date: Mon, 21 Mar 2016 11:06:24 GMT
< Content-Type: application/json;charset=UTF-8
<
* Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.031s
user 0m0.011s
sys 0m0.003s
Expect: 100-continue sounded fishy, so I cleared that header:
[wani@lenovo ilparser-docker]$ time curl -v -F lang=hin -F data="देश" "localhost:3000/parse" -H Expect: --trace-time
16:48:04.513691 * Trying 127.0.0.1...
16:48:04.513933 * Connected to localhost (127.0.0.1) port 3000 (#0)
16:48:04.514083 * Initializing NSS with certpath: sql:/etc/pki/nssdb
16:48:04.610095 > POST /parse HTTP/1.1
16:48:04.610095 > Host: localhost:3000
16:48:04.610095 > User-Agent: curl/7.43.0
16:48:04.610095 > Accept: */*
16:48:04.610095 > Content-Length: 244
16:48:04.610095 > Content-Type: multipart/form-data; boundary=------------------------24f30647b16ba82d
16:48:04.610095 >
16:48:04.618107 < HTTP/1.1 200 OK
16:48:04.618194 < Content-Length: 70
16:48:04.618249 < Server: Mojolicious (Perl)
16:48:04.618306 < Content-Type: application/json;charset=UTF-8
16:48:04.618370 < Date: Mon, 21 Mar 2016 11:18:04 GMT
16:48:04.618430 < Connection: keep-alive
16:48:04.618492 <
16:48:04.618590 * Connection #0 to host localhost left intact
{"tokenizer":"<Sentence id=\"1\">\n1\tदेश\tunk\n<\/Sentence>\n"}
real 0m0.117s
user 0m0.023s
sys 0m0.082s
Now the only time-consuming thing left is: Initializing NSS with certpath: sql:/etc/pki/nssdb. Why does curl do that in this context?
After a little help on IRC from @DanielStenberg, I came to know that the DB load happens because curl initializes NSS in that case: curl needs a good random source for the boundary separator used with -F. Curl could have used the getrandom() syscall or read bits out of /dev/urandom, since boundary separators don't need to be cryptographically secure in any way, but curl wants secure randomness in some other places anyway, so it reuses the random function it already has.
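For reference, both of the slow contributors seen above can be sidestepped: clearing the Expect header avoids the 100-continue wait, and sending url-encoded data instead of multipart (-F) avoids the NSS init for the boundary. A sketch against the same endpoint:
# keep multipart but skip the 100-continue round trip
curl -H 'Expect:' -F lang=hin -F data="देश" localhost:3000/parse
# or avoid multipart (and the NSS init) entirely with url-encoded fields
curl --data-urlencode lang=hin --data-urlencode data="देश" localhost:3000/parse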

Why is PHP cURL on Linux changing the Content-Type request header?

It seems PHP's built-in cURL module changes the header fields before sending them.
I developed a small class to communicate with an encoder device through HTTP requests, using cURL to do the task. The code works fine under Windows; however, when I run it under Debian, the device responds with an HTTP 406 error.
The error code indicates that the server cannot respond in the requested format. (More info)
This is strange, since the response type is determined by the extension of the URL (the possible formats are xml and json), and I didn't explicitly set an Accept header.
Using the CURLOPT_VERBOSE parameter, it dumps the following data:
* Hostname was NOT found in DNS cache
* Trying 172.19.0.9...
* Connected to 172.19.0.9 (172.19.0.9) port 1080 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* Server certificate:
* subject: C=US; ST=Illinois; L=Lake Forest; O=Haivision Network Video, Inc.; OU=PRODUCT DEVELOPMENT; CN=localhost.localdomain; emailAddress=support@haivision.com
* start date: 2016-01-22 14:40:48 GMT
* expire date: 2026-01-19 14:40:48 GMT
* issuer: C=US; ST=Illinois; L=Lake Forest; O=Haivision Network Video, Inc.; OU=PRODUCT DEVELOPMENT; CN=localhost.localdomain; emailAddress=support@haivision.com
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> POST /ecs/auth.xml HTTP/1.1
Host: 172.19.0.9:1080
Accept: */*
Content-Length: 86
Content-Type: application/x-www-form-urlencoded
* upload completely sent off: 86 out of 86 bytes
< HTTP/1.1 406 Not Acceptable
* Server nginx is not blacklisted
< Server: nginx
< Date: Fri, 01 Apr 2016 08:45:30 GMT
< Content-Type: application/xml
< Content-Length: 135
< Connection: keep-alive
<
* Connection #0 to host 172.19.0.9 left intact
It looks like the Content-Type: application/xml changed to application/x-www-form-urlencoded, and I think this is the main reason why the request fails so miserably.
The array of cURL options being passed looks like this:
array(11) {
[19913]=>
bool(true)
[64]=>
bool(false)
[52]=>
bool(false)
[68]=>
int(10)
[10023]=>
array(5) {
["Authorization"]=>
string(10) "Basic ==Og"
["Cache-Control"]=>
string(8) "no-cache"
["Content-Type"]=>
string(15) "application/xml"
["Connection"]=>
string(10) "keep-alive"
["Content-Length"]=>
int(86)
}
[20079]=>
array(2) {
[0]=>
object(Pest)#43 (6) {
["curl_opts"]=>
array(9) {
[19913]=>
bool(true)
[64]=>
bool(false)
[52]=>
bool(false)
[68]=>
int(10)
[10023]=>
array(0) {
}
[20079]=>
*RECURSION*
[81]=>
int(0)
[84]=>
int(2)
[41]=>
bool(true)
}
["base_url"]=>
string(23) "https://172.19.0.9:1080"
["last_response"]=>
NULL
["last_request"]=>
NULL
["last_headers"]=>
NULL
["throw_exceptions"]=>
bool(true)
}
[1]=>
string(13) "handle_header"
}
[81]=>
int(0)
[84]=>
int(2)
[41]=>
bool(true)
[10036]=>
string(4) "POST"
[10015]=>
string(86) "<?xml version="1.0" encoding="UTF-8"?>
<user username="#########" password="########"/>
"
As you can see, there's no Accept entry, and the content type is set to application/xml.
So here's my question: why is curl changing the request's headers? And if the root of the problem is something else, why does it work on Win10 and not on Debian Jessie?
Update (16. 04. 04.):
Funny thing: the same version of the cURL library doesn't work from PHP, but it does from the CLI:
curl -X POST -H "Authorization: Basic aGFpYWRtaW46bWFuYWdlcg==" -H "Content-Type: application/xml" -H "Cache-Control: no-cache" -H "Postman-Token: 760f1aac-619f-4b64-ec06-0146554fcecf" -d '<?xml version="1.0"?><user username="########" password="#######" />' "https://172.19.0.9:1080/ecs/auth.xml"
<?xml version="1.0"?>
<sessionid value="fd7b8fd0-ac5e-4f72-a01c-142082de24f1"/>
The cURL version on the Linux box is 7.26.0 (x86_64-pc-linux-gnu) libcurl/7.26.0.
Thanks in advance, and sorry for the wall of text.
