Sending a POST request with JSON data in Linux

I first worked with Postman to test an API with simple POST requests, and everything worked fine since it is easy to add values in the body.
Now I need to do the same from a Linux console, so I tried the curl command:
curl -X POST -H "Content-Type: application/json" -d'{"DeviceId":"L","TransactionValue":"360","RSSI":"360","Time":"2018-07-30T11:02:00"}' https://IPADDRESS.com.au
I also tried other curl variations, but every time I get this error on my Linux console:
(35) ssl_handshake returned - PolarSSL: (-0x7780) SSL - A fatal alert message was received from our peer
I don't understand why I have this issue from the Linux console but not from Postman, or what the issue actually is.
Many thanks in advance.
Cheers!!

The problem is almost certainly with curl's backend SSL library, in your case PolarSSL/mbedTLS; it looks like your version is from 2014 or older.
Which version of PolarSSL are you using? You can find out by running curl --version. What does it output?
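It will print something like this (an illustrative example, not your actual output; the TLS backend and its version appear right after the libcurl/ entry on the first line):
$ curl --version
curl 7.38.0 (x86_64-pc-linux-gnu) libcurl/7.38.0 PolarSSL/1.3.8 zlib/1.2.8
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s smtp smtps telnet tftp
Features: IPv6 Largefile SSL libz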
Try updating to the latest version of mbedTLS, re-compile curl against it, and try again. You can find the latest version here: https://tls.mbed.org/tech-updates/releases
(By the way, PolarSSL and mbedTLS refer to the same thing; the project was renamed back in 2015. The fact that the error message refers to it as PolarSSL implies that you're using an old version, from 2014 or older.)
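A rough sketch of that rebuild, assuming you build curl from an unpacked source tree and the new mbedTLS is already installed in a standard location (check curl's install docs for the exact options on your system):
# tell curl's configure script to use mbedTLS as the TLS backend
$ ./configure --with-mbedtls
$ make
$ sudo make install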

Your command is right; the problem could be that you have an outdated curl / PolarSSL version, since the same request works from Postman.
Here is a working example posting to httpbin.org:
$ curl -X POST https://httpbin.org/post -H "Content-type: application/json" -d '{"DeviceId":"L","TransactionValue":"360","RSSI":"360","Time":"2018-07-30T11:02:00"}'
{
  "args": {},
  "data": "{\"DeviceId\":\"L\",\"TransactionValue\":\"360\",\"RSSI\":\"360\",\"Time\":\"2018-07-30T11:02:00\"}",
  "files": {},
  "form": {},
  "headers": {
    "Accept": "*/*",
    "Connection": "close",
    "Content-Length": "83",
    "Content-Type": "application/json",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.43.0"
  },
  "json": {
    "DeviceId": "L",
    "RSSI": "360",
    "Time": "2018-07-30T11:02:00",
    "TransactionValue": "360"
  },
  "origin": "178.197.231.122",
  "url": "https://httpbin.org/post"
}
If the previous example works, then try setting the host, for example:
$ curl -X POST https://IPADDRESS.com.au \
-H "Host: httpbin.org" \
-H "Content-type: application/json" \
-d '{"DeviceId":"L","TransactionValue":"360","RSSI":"360","Time":"2018-07-30T11:02:00"}'
Note the -H "Host: httpbin.org" header; replace it with your own domain.
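If the request to your own server still fails, re-running the original command with -v makes curl print each step of the TLS handshake, which helps narrow down where the connection is being rejected (this is just the question's command with the verbose flag added):
$ curl -v -X POST https://IPADDRESS.com.au \
-H "Content-Type: application/json" \
-d '{"DeviceId":"L","TransactionValue":"360","RSSI":"360","Time":"2018-07-30T11:02:00"}'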

Related

Using Wiremock to proxy/record calls to an AWS CloudFront service

We have a container-based service running in AWS ECS, with the front end hosted by AWS CloudFront and authorization handled by AWS Cognito. I'm trying to configure Wiremock to be a proxy for this service so I can record the calls and mappings to later use in unit tests for a client app I'm writing in Python.
I'm running the Wiremock server in standalone mode and have it proxying calls to the URL of our service. However, CloudFront keeps returning either a 403 Bad Request or a 403 Forbidden error when I connect via Wiremock.
When I use curl and pass all the correct headers (Content-Type: application/json, Authorization: Bearer …), it works just fine against https://myservice.example.com/api/foo. But as soon as I swap out "myservice.example.com" for "localhost:8000", I get the CloudFront-generated errors.
I'm guessing I have some misconfiguration where, despite passing the headers to Wiremock, I haven't properly told Wiremock to pass those headers on to "the service", which is really CloudFront.
Not being a Java guy, I'm finding the Wiremock docs a little difficult to understand, and am trying to use the command-line arguments to configure Wiremock like this:
/usr/bin/java -jar \
./wiremock-jre8-standalone-2.35.0.jar \
--port=8000 \
--verbose \
--root-dir=test_data/wiremock \
--enable-browser-proxying \
--preserve-host-header \
--print-all-network-traffic \
--record-mappings \
--trust-proxy-target=https://myservice.example.com \
--proxy-all=https://myservice.example.com
Request:
$ curl -k -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer ${JWT}" \
http://127.0.0.1:8000/api/foo
Response:
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>CloudFront</center>
</body>
</html>
When I use exactly the same curl command but change the URL to point directly at my service instead of the proxy, I get the response I expected (and had hoped to get through the proxy):
curl -k -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer ${JWT}" \
https://myservice.example.com/api/foo
[
  {
    "id": "09d91ea0-7cb0-4786-b3fc-145fc88a1a3b",
    "name": "foo",
    "created": "2022-06-09T02:32:11Z",
    "updated": "2022-06-09T20:08:43Z"
  },
  {
    "id": "fb2b6454-4336-421a-bc2f-f1d588a78d12",
    "name": "bar",
    "created": "2022-10-05T06:23:24Z",
    "updated": "2022-10-05T18:34:32Z"
  }
]
Any help would be greatly appreciated.
Thanks.

Prefect 2.0: How to trigger a flow using just curl?

Here is my dead simple flow:
from prefect import flow
import datetime

@flow
def firstflow(inreq):
    retval = {}
    retval['type'] = str(type(retval))
    retval['datetime'] = str(datetime.datetime.now())
    print(retval)
    return retval
I run Prefect Orion and a Prefect agent.
When I trigger a run from the web UI (deployments run), the agent successfully pulls it and does the job.
My question is: how do I do the trigger using just curl?
Note: I already read http://127.0.0.1:4200/docs, but my lame brain couldn't find how to do it.
Note: let's say my flow id is 7ca8a456-94d7-4aa1-80b9-64894fdca93b, and the parameters I want processed are {'msg': 'Hello world'}.
I blindly tried:
curl -X POST -H 'Content-Type: application/json' http://127.0.0.1:4200/api/flow_runs \
-d '{"flow_id": "7ca8a456-94d7-4aa1-80b9-64894fdca93b", "parameters": {"msg": "Hello World"}, "tags": ["test"]}'
but Prefect Orion says:
INFO: 127.0.0.1:53482 - "POST /flow_runs HTTP/1.1" 307 Temporary Redirect
Sincerely
-bino-
It's certainly possible to do it via curl, but it might be painful, especially if your flow has parameters. There's a much easier way to trigger a flow that will be tracked by the backend API: run the flow's Python script, and it will have exactly the same effect. This is because the (ephemeral) backend API of Prefect 2.0 is always active in the background, and all flow runs, even those started from a terminal, are tracked in the backend.
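For example (a minimal sketch, assuming the flow above is saved as firstflow.py and that the script ends with a call such as firstflow({'msg': 'Hello world'})):
$ python firstflow.py
The run is then picked up by the ephemeral API and shows up in the Orion UI just like a run triggered from a deployment.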
Regarding curl, it looks like you are missing the trailing slash after flow_runs. Changing your command to this one should work:
curl -X POST -H 'Content-Type: application/json' http://127.0.0.1:4200/api/flow_runs/ \
-d '{"flow_id": "7ca8a456-94d7-4aa1-80b9-64894fdca93b", "parameters": {"msg": "Hello World"}, "tags": ["test"]}'
The route that might be more helpful, though, is this one: it will create a flow run from a deployment and set it into a scheduled state (the default state is pending, which would cause the flow run to get stuck). This should work directly:
curl -X POST -H 'Content-Type: application/json' \
http://127.0.0.1:4200/api/deployments/your-uuid/create_flow_run \
-d '{"name": "curl", "state": {"type": "SCHEDULED"}}'

Code MODULE_NOT_FOUND returned when a Node-RED node is installed using the HTTP API

Currently I am trying to install a node in Node-RED via HTTP POST /nodes, using the following curl command:
curl -X POST -H "Accept: application/json" -H "Content-Type: application/json" -i http://localhost:1880/nodes -d "{\"module\": \"C:\\test\\testRemoteNodeWindow\"}"
But I am getting a 400 Bad Request response, this one:
{"code":"MODULE_NOT_FOUND","message":"Cannot find module 'C:\test\testRemoteNodeWindow'"}
But I have noticed that the node was added as a dependency in node_red_config/package.json:
{
  "name": "node-red-project",
  "description": "A Node-RED Project",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "testRemoteNodeWindow": "file:testRemoteNodeWindow"
  }
}
And the symbolic link was created in node_red_config/node_modules. The issue only happens on Windows; the strange thing is that I am using the same node/node-red/npm versions on a Linux machine, and there the node is installed via HTTP POST /nodes without any problem. Does anyone know if this could be a configuration problem or something like that?
Regards.
The MODULE_NOT_FOUND error means that whilst it has successfully run the npm install of your module, the runtime has then failed to find a valid Node-RED module with that name.
This usually means your module does not have a node-red section in its package.json file, as described here. Without that, the runtime does not recognise the module as a valid Node-RED module.
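For reference, a minimal package.json for such a module looks roughly like this (the node name and the .js file are placeholders; adjust them to your module):
{
  "name": "testRemoteNodeWindow",
  "version": "1.0.0",
  "node-red": {
    "nodes": {
      "test-remote-node": "test-remote-node.js"
    }
  }
}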
I found the issue; it is related to the Windows path. For example, if I use:
curl -X POST -H "Accept: application/json" -H "Content-Type: application/json" http://localhost:1880/nodes -d "{\"module\": \"C:/test/testRemoteNodeWindow\"}"
the node is installed and I get an HTTP 200 response, but if I use:
curl -X POST -H "Accept: application/json" -H "Content-Type: application/json" -i http://localhost:1880/nodes -d "{\"module\": \"C:\\test\\testRemoteNodeWindow\"}"
I got the MODULE_NOT_FOUND code.

How to install a full node setup for the Cardano (ADA) coin

I'm setting up a full node instance for Cardano (ADA) on my AWS server, but the Cardano docs display a popup saying that the document is not fully updated. Can anyone help with how to install a full node on my server?
Also, how do I use JSON RPC calls to access the ADA blockchain on testnet or mainnet? The example from the ADA documentation uses certificate verification:
curl -X POST https://localhost:8090/api/v1/wallets \
-H "Accept: application/json; charset=utf-8" \
-H "Content-Type: application/json; charset=utf-8" \
--cert ./scripts/tls-files/client.pem \
--cacert ./scripts/tls-files/ca.crt \
-d '{
  "operation": "create",
  "backupPhrase": ["squirrel","material","silly","twice","direct","slush","pistol","razor","become","junk","kingdom","flee"],
  "assuranceLevel": "normal",
  "name": "MyFirstWallet",
  "spendingPassword": "5416b2988745725998907addf4613c9b0764f04959030e1b81c603b920a115d0"
}'
Can anyone help with how this can be implemented in Node.js?
I have already installed the Daedalus wallet and Nix.

Connecting Eclipse Hono and Ditto

I have Eclipse Hono installed on one machine and Eclipse Ditto installed on another machine connected to the same WiFi. I am trying to consume data from Eclipse Hono in Eclipse Ditto.
I have created a tenant named tenantAllAdapters and registered a device named 4716.
Let us assume that I need to send temperature sensor data from the registered device in that tenant to a Hono consumer, as shown in the snippet below:
curl -i -X POST \
-u sensor10@tenantAllAdapters \
-H 'Content-Type: application/json' \
--data-binary '{"temp": 5}' \
http://10.196.2.164:8080/telemetry
I also start the Hono consumer as below:
mvn spring-boot:run -Drun.arguments=\
--hono.client.host=10.196.2.164,\
--hono.client.username=consumer@HONO,\
--hono.client.password=verysecret,\
--hono.auth.amqp.bindAddress=10.196.2.164,\
--hono.auth.amqp.keyPath=target/certs/auth-server-key.pem,\
--hono.auth.amqp.certPath=target/certs/auth-server-cert.pem,\
--hono.auth.amqp.trustStorePath=target/certs/trusted-certs.pem,\
--tenant.id=tenantAllAdapters
I am successfully able to receive the data in the Hono consumer.
Instead of Hono consumer, how can I consume the same data in Ditto?
Edit: as per the blog post in the first comment below, the “test connection” command via HTTP, to check whether the Ditto sandbox can connect to the Hono one, is as follows:
$ curl -X POST -i -u devops:devopsPw1! -H 'Content-Type: application/json' -d '{
  "targetActorSelection": "/system/sharding/connection",
  "headers": {
    "aggregate": false
  },
  "piggybackCommand": {
    "type": "connectivity.commands:testConnection",
    "connection": {
      "id": "hono-sandbox-connection-1",
      "connectionType": "amqp-10",
      "connectionStatus": "open",
      "uri": "amqp://consumer%40HONO:verysecret@hono.eclipse.org:15672",
      "failoverEnabled": true,
      "sources": [{
        "addresses": [
          "telemetry/org.eclipse.ditto",
          "event/org.eclipse.ditto"
        ],
        "authorizationContext": ["nginx:demo5"]
      }]
    }
  }
}' https://ditto.eclipse.org/devops/piggyback/connectivity?timeout=8000
I am not sure if I am missing anything
I had followed https://www.eclipse.org/ditto/2018-05-02-connecting-ditto-hono.html even before I posted this question here.
But the only thing I was missing was the password of the devops user. As mentioned earlier and also as mentioned in the given link, I was using devopsPw1! as the password. Hono and Ditto got connected once I changed the password to foobar.
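For anyone following along: once the test succeeds, creating the connection is the same piggyback call with the command type switched to connectivity.commands:createConnection. Here is a sketch that simply mirrors the test payload above, using the working devops password; adjust the URI, addresses and authorization context to your own setup:
$ curl -X POST -i -u devops:foobar -H 'Content-Type: application/json' -d '{
  "targetActorSelection": "/system/sharding/connection",
  "headers": {
    "aggregate": false
  },
  "piggybackCommand": {
    "type": "connectivity.commands:createConnection",
    "connection": {
      "id": "hono-sandbox-connection-1",
      "connectionType": "amqp-10",
      "connectionStatus": "open",
      "uri": "amqp://consumer%40HONO:verysecret@hono.eclipse.org:15672",
      "failoverEnabled": true,
      "sources": [{
        "addresses": [
          "telemetry/org.eclipse.ditto",
          "event/org.eclipse.ditto"
        ],
        "authorizationContext": ["nginx:demo5"]
      }]
    }
  }
}' https://ditto.eclipse.org/devops/piggyback/connectivity?timeout=8000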
