I have a cron job that uses curl to send an HTTP POST to my home-assistant.io server, which in turn uses google_say to make my Google Home tell people to start getting ready in the morning... for a bit of fun. :)
It works great, but when I try to add some dynamic content, such as saying the day of the week, I struggle with how to use date within the curl command. I would also like it to determine the number of days until the weekend. I have tried the following:
"message": "Its "'"$(date +%A)"'" morning and x days until the weekend. Time to get ready."
but get an error saying:
<html><head><title>500 Internal Server Error</title></head><body><h1>500 Internal Server Error</h1>Server got itself in trouble</body></html>
Am I wrong in thinking that "'"$(date +%A)"'" should work in this situation? Also I'd like to add how many days until the weekend, probably something like:
6 - $(date +%u)
I appreciate that I could do this very easily by doing some calculations before curl and referencing those, but I would like to do it in a single line if possible. The line is referenced from an .sh file at present, not a single line in cron.
This is the full line as requested:
curl -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" -d '{"entity_id": "media_player.Living_room_Home", "message": "Its "'"$(date +%A)"'" morning and 2 days until the weekend. Time to get ready."}' http://ipAddr:8123/api/services/tts/google_say?api_password=apiPass
Thanks.
It works perfectly fine with this line (the "'"$(date +%A)"'" construct was adding literal double quotes around the day name inside the already-quoted JSON value, which made the payload invalid JSON and caused the 500; splicing the substitution in without them fixes it):
curl --trace-ascii 1 -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" -d '{"entity_id": "media_player.Living_room_Home", "message": "Its '$(date +%A)' morning and 2 days until the weekend. Time to get ready."}'
With result:
== Info: Trying ::1...
== Info: TCP_NODELAY set
== Info: Connected to localhost (::1) port 80 (#0)
=> Send header, 197 bytes (0xc5)
0000: POST /api/services/tts/google_say?api_password=apiPass HTTP/1.1
0041: Host: localhost
0052: User-Agent: curl/7.50.3
006b: Accept: */*
0078: x-ha-access: apiPass
008e: Content-Type: application/json
00ae: Content-Length: 130
00c3:
=> Send data, 130 bytes (0x82)
0000: {"entity_id": "media_player.Living_room_Home", "message": "Its T
0040: uesday morning and 2 days until the weekend. Time to get ready.
0080: "}
== Info: upload completely sent off: 130 out of 130 bytes
<= Recv header, 24 bytes (0x18)
0000: HTTP/1.1 404 Not Found
<= Recv header, 28 bytes (0x1c)
0000: Server: Microsoft-IIS/10.0
<= Recv header, 37 bytes (0x25)
0000: Date: Tue, 07 Nov 2017 21:12:21 GMT
<= Recv header, 19 bytes (0x13)
0000: Content-Length: 0
<= Recv header, 2 bytes (0x2)
0000:
== Info: Curl_http_done: called premature == 0
== Info: Connection #0 to host localhost left intact
Would this help?
echo $(( 10#$(date -d 'next saturday' +%j) - 10#$(date +%j) - 1 )) days until the weekend
The -d option in GNU date lets you provide a surprisingly flexible description of the date you want. (The 10# prefix keeps bash from treating zero-padded %j values such as 039 as invalid octal numbers.)
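If you want to keep everything on the one curl line, both substitutions can be spliced in with the same quoting pattern that fixed the day name. A minimal sketch, reusing the placeholder host and passwords from the question, with the simpler 6 - %u arithmetic (%u is the ISO weekday, Monday=1, so this prints 0 on Saturday and -1 on Sunday):
curl -X POST -H "x-ha-access: apiPass" -H "Content-Type: application/json" \
  -d '{"entity_id": "media_player.Living_room_Home", "message": "Its '$(date +%A)' morning and '$((6 - $(date +%u)))' days until the weekend. Time to get ready."}' \
  "http://ipAddr:8123/api/services/tts/google_say?api_password=apiPass"
Both expansions are deliberately left unquoted: each produces a single word, so no stray double quotes end up inside the JSON.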
The following command waits more than 60 seconds on Ubuntu 22.04:
curl --verbose --retry-max-time 0 --retry 0 --connect-timeout "30" --max-time "60" "https://www.google.com/"
Here is the testing result:
root@test:~# echo $(date)
Tue Dec 6 07:26:04 PM CST 2022
root@test:~# curl --verbose --retry-max-time 0 --retry 0 --connect-timeout "30" --max-time "60" "https://www.google.com/"
* Resolving timed out after 30000 milliseconds
* Closing connection 0
curl: (28) Resolving timed out after 30000 milliseconds
root@test:~# echo $(date)
Tue Dec 6 07:28:26 PM CST 2022
Here is the version:
root@test:~# curl --version
curl 7.81.0 (x86_64-pc-linux-gnu) libcurl/7.81.0 OpenSSL/3.0.2 zlib/1.2.11 brotli/1.0.9 zstd/1.4.8 libidn2/2.3.2 libpsl/0.21.0 (+libidn2/2.3.2) libssh/0.9.6/openssl/zlib nghttp2/1.43.0 librtmp/2.3 OpenLDAP/2.5.12
Release-Date: 2022-01-05
Protocols: dict file ftp ftps gopher gophers http https imap imaps ldap ldaps mqtt pop3 pop3s rtmp rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets zstd
The expected behavior:
The whole wait should not exceed 60 seconds (or even 30 seconds, since the DNS resolution already timed out).
What's the problem and how to fix the timeout with curl?
It is possible that DNS is not resolving properly, causing curl to take longer. Looking at your curl version, the issue you describe seems similar to the one below:
https://unix.stackexchange.com/questions/571246/curl-max-time-and-connect-timeout-not-working-at-all
Instead of cracking your head over rebuilding curl against c-ares, it might be worth testing whether the server is reachable first and only then invoking curl:
ping -c 1 google.com &>/dev/null && curl ...whatever
By default, ping adds at most about 4 seconds of delay, and you can fine-tune the reply timeout with the -W option.
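Expanded into a small guard, that idea might look like this (a sketch; google.com and the timeouts are just the values from the question, and coreutils timeout is added because ping itself also has to resolve the name and could hang on the same broken DNS):
# Probe reachability first; -W 2 caps the wait for an ICMP reply,
# and `timeout 5` bounds the whole probe, DNS lookup included.
if timeout 5 ping -c 1 -W 2 google.com > /dev/null 2>&1; then
    curl --connect-timeout 30 --max-time 60 "https://www.google.com/"
else
    echo "google.com unreachable, skipping transfer" >&2
fi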
I'm trying to make a GET request to an old Linux machine using curl inside WSL2/Debian. The connection between my Windows PC and the remote Linux machine is via VPN. The VPN is working, as I can ping the IP as well as VNC to it (from Windows).
The curl command I'm using on WSL2/Debian is:
curl -k --header 'Host: 10.48.1.3' --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2'
Using the verbose option, I get:
Note: Unnecessary use of -X or --request, GET is already inferred.
* Expire in 0 ms for 6 (transfer 0x555661293fb0)
* Expire in 4000 ms for 8 (transfer 0x555661293fb0)
* Trying 10.48.1.3...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x555661293fb0)
* Connection timed out after 4001 milliseconds
* Closing connection 0
curl: (28) Connection timed out after 4001 milliseconds
After the max-time of 4s, the command is cancelled.
When I execute the same command on the same computer but using Windows PowerShell, it works:
curl.exe -k --max-time 4 --location --request GET 'https://10.48.1.3/path/to/API/json/get?id=Parameter1&id=Parameter2' -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.48.1.3:443...
* Connected to 10.48.1.3 (10.48.1.3) port 443 (#0)
* schannel: disabled automatic use of client certificate
* schannel: using IP address, SNI is not supported by OS.
* schannel: ALPN, offering http/1.1
* ALPN, server did not agree to a protocol
> GET /path/to/API/json/get?id=Parameter1&id=Parameter2 HTTP/1.1
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Tue, 17 May 2022 15:39:00 GMT
< Server: IosHttp/0.0.1 (MANUFACTURER)
< Content-Type: application/json
< Content-Length: 239
< Strict-Transport-Security: max-age=15768000
<
PAYLOAD OF API
* Connection #0 to host 10.48.1.3 left intact
Using Postman on Windows also works.
Inside WSL2/Debian I'm able to ping the machine, but ssh is not working either; the cursor just sits there blinking without any answer coming back from the remote machine:
$ ping 10.48.1.3 -c 2
PING 10.48.1.3 (10.48.1.3) 56(84) bytes of data.
64 bytes from 10.48.1.3: icmp_seq=1 ttl=61 time=48.9 ms
64 bytes from 10.48.1.3: icmp_seq=2 ttl=61 time=28.4 ms
--- 10.48.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 2ms
rtt min/avg/max/mdev = 28.353/38.636/48.919/10.283 ms
$ ssh -c aes256-cbc root@10.48.1.3
^C # Cancelled as nothing happened for several minutes
In Windows PowerShell, both ping and ssh work:
> ssh -c aes256-cbc root@10.48.1.3
The authenticity of host '10.48.1.3 (10.48.1.3)' can't be established.
ECDSA key fingerprint is SHA256:FINGERPRINTDATAOFMACHINE.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
I have about 100 similar machines in the field that I need to curl to, and this error appears on about 10 of them; the remaining 90 work fine (also from WSL2/Debian).
I guess the error may come from the SSL version on my WSL2/Debian... Does anyone have an idea how to solve this problem?
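One thing worth ruling out, given that small packets (ping) get through while TCP sessions stall right after connecting: an MTU mismatch on the VPN path, a fairly common failure mode with WSL2 behind a Windows-side VPN. This is only a hypothesis, and the interface name and sizes below are assumptions:
# Probe the path MTU: -M do forbids fragmentation, -s sets the ICMP payload size.
# Grow the size until replies stop; largest working size + 28 = path MTU.
ping -c 1 -M do -s 1400 10.48.1.3
# If large probes fail, try lowering the MTU inside WSL2 to match the VPN:
sudo ip link set dev eth0 mtu 1350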
I see other responses from the same server getting gzipped, but I have one URL whose response is not getting gzipped. I can only think the problem may be the size of the content, but I see no setting in IIS 8 that pertains to a size limit.
Static, dynamic, URL, and HTTP compression are all installed and enabled. I can't find any logs containing helpful information on why this URL is not getting compressed.
For example, here is a response from IIS that is gzipped (see the Content-Encoding: gzip response header):
curl 'http://....../small_json/' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: application/json, text/plain, */*' --compressed -D /tmp/headers.txt -o /dev/null; cat /tmp/headers.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 65382 100 65382 0 0 233k 0 --:--:-- --:--:-- --:--:-- 233k
HTTP/1.1 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Content-Encoding: gzip
Content-Language: en
Vary: Accept, Accept-Language, Cookie,Accept-Encoding
Server: Microsoft-IIS/8.5
X-Frame-Options: SAMEORIGIN
Date: Sun, 08 Apr 2018 01:50:54 GMT
Content-Length: 65382
The larger JSON response does not have Content-Encoding: gzip:
curl 'http://....../big_json/' -H 'Accept-Encoding: gzip, deflate' -H 'Accept: application/json, text/plain, */*' --compressed -D /tmp/headers.txt -o /dev/null; cat /tmp/headers.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4755k 0 4755k 0 0 1018k 0 --:--:-- 0:00:04 --:--:-- 1373k
HTTP/1.1 200 OK
Transfer-Encoding: chunked
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Content-Language: en
Vary: Accept, Accept-Language, Cookie
Server: Microsoft-IIS/8.5
X-Frame-Options: SAMEORIGIN
Date: Sun, 08 Apr 2018 01:51:11 GMT
I've set compression settings to be very liberal:
FREB info for the compressed response: [screenshot]
FREB info for the non-compressed response: [screenshot]
I don't know if it is still relevant, but you have to set dynamicCompressionLevel to a higher value.
By default it is 0.
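For example, with appcmd (a sketch; level 9 and the gzip scheme name are assumptions, adjust them to your setup):
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /[name='gzip'].dynamicCompressionLevel:"9" /commit:apphost
Higher levels trade CPU time for a better compression ratio.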
I set up an IoT hub on Azure and created a device called "ServerTemp". I generated a SAS token, and it seems to be accepted (I don't get a 401), but I'm getting a 400 Bad Request.
Here is the request I'm sending via curl:
curl -v -H"Authorization:SharedAccessSignature sr=A794683.azure-devices.net&sig=<snip>" -H"Content-Type:application/json" -d'{"deviceId":"ServerTemp","temperature":70}' https://A479683.azure-devices.net/devices/ServerTemp/messages/events?api-version=2016-11-14
Request and Response (output from curl):
> POST /devices/ServerTemp/messages/events?api-version=2016-11-14 HTTP/1.1
> Host: A479683.azure-devices.net
> User-Agent: curl/7.47.0
> Accept: */*
> Authorization:SharedAccessSignature sr=A794683.azure-devices.net&sig=<snip>
> Content-Type:application/json
> Content-Length: 42
>
* upload completely sent off: 42 out of 42 bytes
< HTTP/1.1 400 Bad Request
< Content-Length: 151
< Content-Type: application/json; charset=utf-8
< Server: Microsoft-HTTPAPI/2.0
< iothub-errorcode: ArgumentInvalid
< Date: Sun, 15 Apr 2018 22:21:50 GMT
<
* Connection #0 to host A479683.azure-devices.net left intact
{"Message":"ErrorCode:ArgumentInvalid;BadRequest","ExceptionMessage":"Tracking ID:963189cb515345e69f94300655d3ca23-G:10-TimeStamp:04/15/2018 22:21:50"}
What am I doing wrong?
Make sure you add the expiry time &se= (as in &se=1555372034) when you form the SAS; it should be the very last parameter. That's the only way I can reproduce the HTTP 400 you're seeing (by omitting it). You should get a 204 No Content once you fix that.
The resource (&sr=) part also seems a bit off in your case: there's no device being specified. Use Device Explorer to generate a device SAS (or just to see what it should look like): Management > SAS Token.
SAS structure —
SharedAccessSignature sig={signature-string}&se={expiry}&skn={policyName}&sr={URL-encoded-resourceURI}
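For reference, a device-scoped token with that structure can also be computed from the device's primary key with openssl. A minimal sketch (hub, device, and key are placeholders from the question; the recipe is HMAC-SHA256 over "<url-encoded-resource-uri>\n<expiry>" using the base64-decoded key):
HUB="A479683.azure-devices.net"
DEVICE="ServerTemp"
DEVICE_KEY="<base64-device-key>"              # the device's primary key
RESOURCE_URI="${HUB}%2Fdevices%2F${DEVICE}"   # URL-encoded resource URI
EXPIRY=$(( $(date +%s) + 3600 ))              # valid for one hour
# HMAC-SHA256 over "<resource-uri>\n<expiry>", keyed with the decoded device key
SIG=$(printf '%s\n%s' "$RESOURCE_URI" "$EXPIRY" \
  | openssl dgst -sha256 -mac HMAC \
      -macopt "hexkey:$(echo -n "$DEVICE_KEY" | base64 -d | xxd -p -c 256)" -binary \
  | base64)
# URL-encode the characters base64 may emit that are unsafe in a query string
SIG=$(printf '%s' "$SIG" | sed -e 's/+/%2B/g' -e 's/\//%2F/g' -e 's/=/%3D/g')
echo "SharedAccessSignature sr=${RESOURCE_URI}&sig=${SIG}&se=${EXPIRY}"
Note that a device token carries no skn= part; that is only used with hub-level shared access policies.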
$ curl -i https://poorlyfundedskynet.azure-devices.net/devices/dexter/messages/events?api-version=2016-11-14 \
-H "Authorization: SharedAccessSignature sr=poorlyfundedskynet.azure-devices.net%2fdevices%2fdexter&sig=RxxxxxxxtE%3d&se=1555372034" \
-H "Content-Type: application/json" \
-d'{"deviceId":"dexter","temperature":70}'
HTTP/1.1 204 No Content
Content-Length: 0
Server: Microsoft-HTTPAPI/2.0
Date: Sun, 15 Apr 2018 23:54:25 GMT
You can monitor ingress with Device Explorer or iothub-explorer.
Probably this would work as well: Azure IoT Extension for Azure CLI 2.0
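With the CLI extension installed, monitoring the device's messages is one command; a sketch using the hub and device names from the question:
# install once with: az extension add --name azure-iot
az iot hub monitor-events --hub-name A479683 --device-id ServerTemp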
I want to sort and count how many files (of 3 types) clients downloaded from my server.
I installed tshark and ran the following command, which should capture GET requests:
`./tshark 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -R'http.request.method == "GET"'`
so the sniffer starts working and every second I get a new row; here is the result:
0.000000 144.137.136.253 -> 192.168.4.7 HTTP GET /pids/QE13_593706_0.bin HTTP/1.1
8.330354 1.1.1.1 -> 2.2.2.2 HTTP GET /pids/QE13_302506_0.bin HTTP/1.1
17.231572 1.1.1.2 -> 2.2.2.2 HTTP GET /pids/QE13_382506_0.bin HTTP/1.0
18.906712 1.1.1.3 -> 2.2.2.2 HTTP GET /pids/QE13_182406_0.bin HTTP/1.1
19.485199 1.1.1.4 -> 2.2.2.2 HTTP GET /pids/QE13_302006_0.bin HTTP/1.1
21.618113 1.1.1.5 -> 2.2.2.2 HTTP GET /pids/QE13_312106_0.bin HTTP/1.1
30.951197 1.1.1.6 -> 2.2.2.2 HTTP GET /nginx_status HTTP/1.1
31.056364 1.1.1.7 -> 2.2.2.2 HTTP GET /nginx_status HTTP/1.1
37.578005 1.1.1.8 -> 2.2.2.2 HTTP GET /pids/QE13_332006_0.bin HTTP/1.1
40.132006 1.1.1.9 -> 2.2.2.2 HTTP GET /pids/PE_332006.bin HTTP/1.1
40.407742 1.1.2.1 -> 2.2.2.2 HTTP GET /pids/QE13_452906_0.bin HTTP/1.1
What do I need to do to store the result types and counts (like /pids/*****.bin) in another file?
I'm not strong in Linux, but I'm sure it can be done with 1-3 lines of script.
Maybe with awk, but I don't know the technique for reading the sniffer's output.
Thank you,
Can't you just grep the log file of your web server?
Anyway, to extract the lines of captured HTTP traffic relating to your server's files, just try:
./tshark 'tcp port 80 and \
(((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' \
-R'http.request.method == "GET"' | \
egrep "HTTP GET /pids/.*.bin"