I'm trying to use curl to send an image to a device.
This is the code:
#!/bin/bash
curl --header 'Access-Token: d78sdf8bd8bv6d98bd7d6df6b' \
--header 'Content-Type: application/json' \
--data-binary '{"type":"file","title":"Test IMG SEND","body":"Sending Dragon from Debian 8","file_name":"dr1.jpg","file_type":"image/jpeg","file_url":"https://dl2.pushbulletusercontent.com/HJGFC56597ggiyui78698GYGUFI7865/dr1.jpg"}' \
--request POST \
https://api.pushbullet.com/v2/pushes
and this is what I got from the previous upload-request:
{"data":{"acl":"public-read","awsaccesskeyid":"JSUH(=Y£GhHUIOG898787","content-type":"image/jpeg","key":"HJGFC56597ggiyui78698GYGUFI7865/dr1.jpg","policy":"ecvjksdblvuio3ghuv393783230cgfgsaidfg3","signature":"hjveirvhj34veupiv34'vvg3vg78"},"file_name":"dr1.jpg","file_type":"image/jpeg","file_url":"https://dl2.pushbulletusercontent.com/HJGFC56597ggiyui78698GYGUFI7865/dr1.jpg","upload_url":"https://upload.pushbullet.com/upload-legacy/yVayDlcd
To me, it seems ok, but obviously there is something wrong. Can anyone point me to a solution?
EDIT :
I'm sorry, the problem is that the answer from Pushbullet is "The param 'file_url' has an invalid value", and I'm not able to understand where the problem is, because I just copied the file_url from the previous answer to the upload-request, which should be -> file_url":"https://dl2.pushbulletusercontent.com/HJGFC56597ggiyui78698GYGUFI7865/dr1.jpg ...
This error isn't clear as to what's actually invalid about the file_url request. What it should say is, "The file url you specified points to a file that does not yet exist". In other words you need to make sure the file is actually uploaded first before you can link to it.
Their docs aren't great. I had the same issue and finally stumbled upon the answer. This is actually a 3-part process:
1) POST https://api.pushbullet.com/v2/upload-request – It wasn't immediately obvious, but this is requesting access to upload your file to Pushbullet's AWS S3 bucket. Use the data in the response for the next step.
2) POST https://upload.pushbullet.com/upload-legacy/yVayDlcd – (this should be the upload_url value from Step 1). Everything in the data object should be posted as form fields, along with the file, to that upload_url. The expected response is a 204 with no content, which means it was successful.
3) POST https://api.pushbullet.com/v2/pushes – Finally, now that the file exists in Pushbullet's system, you can push it using the file_url value from Step 1.
Hopefully this explanation adds some clarity to their docs.
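For reference, here is a rough sketch of the three calls in bash. The token, file name and URLs are placeholders copied from the question above, and the form field names in step 2 are taken from the "data" object of the upload-request response, so substitute whatever your own response contains.

# 1) Ask Pushbullet for an upload slot
curl --header 'Access-Token: <your-token>' \
--header 'Content-Type: application/json' \
--data-binary '{"file_name":"dr1.jpg","file_type":"image/jpeg"}' \
--request POST \
https://api.pushbullet.com/v2/upload-request

# 2) Upload the file itself to the returned upload_url, posting every field
#    from the "data" object as form fields, with the file field last;
#    a successful upload returns 204 with no body
curl --request POST \
--form awsaccesskeyid='<from data>' \
--form acl='<from data>' \
--form key='<from data>' \
--form signature='<from data>' \
--form policy='<from data>' \
--form content-type='<from data>' \
--form file=@dr1.jpg \
https://upload.pushbullet.com/upload-legacy/yVayDlcd

# 3) Only now push, reusing file_url from the response in step 1
curl --header 'Access-Token: <your-token>' \
--header 'Content-Type: application/json' \
--data-binary '{"type":"file","title":"Test IMG SEND","body":"Sending Dragon from Debian 8","file_name":"dr1.jpg","file_type":"image/jpeg","file_url":"https://dl2.pushbulletusercontent.com/HJGFC56597ggiyui78698GYGUFI7865/dr1.jpg"}' \
--request POST \
https://api.pushbullet.com/v2/pushes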
Related
I have problems when I want to generate a new MR (merge request) with cURL. This is what I'm writing:
curl --request POST --header "PRIVATE-TOKEN: $TOKEN_FINTECH" "https://$gitlab/api/v4/projects/$id/merge_request/" {"source_branch":"TestBranch","target_branch":"LAD-Wiru","title":"This is a test","state":"opened"}
But when I run my job with this line, it returns the following
{"error":"404 Not Found"}curl: (3) URL using bad/illegal format or missing URL curl: (3) URL using bad/illegal format or missing URL curl: (3) URL using bad/illegal format or missing URL
I searched in several places but I still don't understand how to solve it. :C
In general, you want to pass the parameters by appending them to the URI.
You're also missing an s at the end of merge_requests, and passing some attributes (such as state) that are not available in the create MR endpoint, so you'll need to correct those.
Most likely, you want something like this:
curl --request POST --header "PRIVATE-TOKEN: $TOKEN_FINTECH" "https://gitlab.example.com/api/v4/projects/1234/merge_requests?source_branch=TestBranch&target_branch=LAD-Wiru&title=This%20is%20a%20test"
If you prefer not to append the attributes to the URI, you can use --data instead of --request POST.
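For example, roughly (with the same placeholder host and project id as above; any --data option implies a POST, so --request POST can be dropped):

curl --header "PRIVATE-TOKEN: $TOKEN_FINTECH" \
--data "source_branch=TestBranch" \
--data "target_branch=LAD-Wiru" \
--data-urlencode "title=This is a test" \
"https://gitlab.example.com/api/v4/projects/1234/merge_requests"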
I use a Telegram bot to incorporate weather alerts from a local weather service into my home automation system. Today I discovered a weird problem because the message containing the weather alert wasn't sent. If I try this in bash on Linux:
output="Nationaal Hitteplan";curl "https://api.telegram.org/botxxxxxxxxx:longsecuritycode/sendMessage?chat_id=xxxxxxxxx&text=$output"
(I removed my personal tokens in the above command of course...)
then I get a 400 Bad Request and the message is not sent.
If I change output="Nationaal Hitteplan" to output="Nationaal hitteplan" then the message is sent as it is supposed to be.
I don't see what's wrong here. The term Nationaal Hitteplan is basically a set of advisories about what to do in hot weather. It has no negative meaning associated with it but apparently Telegram detects a problem.
Does someone have a solution for this other than changing the term as described above?
URLs that contain special characters, such as a space, should be URL-encoded.
Use the following curl command to let curl handle the encoding:
token='xxxxxxxxx:longsecuritycode';
output="Nationaal Hitteplan";
curl -G \
--data-urlencode 'chat_id=1234567' \
--data-urlencode "text=${output}" \
"https://api.telegram.org/bot${token}/sendMessage"
See also: How to urlencode data for curl command?
I want to fetch all records (from Solr) with a timestamp older than 30 days via cURL command.
What I have tried:
curl -g "http://localhost:8983/solr/input_records/select?q=timestamp:[* TO NOW/DAY-30DAYS]"
I do not understand why, but this does not fetch anything. It simply returns nothing. If I replace '[* TO NOW/DAY-30DAYS]' with an actual value, it will retrieve that record.
As additional relevant information, this is how to delete all records older than 30 days (it works). Again, I do not want to delete, rather just fetch the data.
curl -g "http://localhost:8983/solr/input_records/update?commit=true" -H "Content-Type: text/xml" --data-binary "<delete><query>timestamp:[* TO NOW/DAY-30DAYS]</query></delete>"
Thanks in advance!
This error is happening because your request is not properly URL-encoded. Most likely the problem is the spaces: you need to replace them with %20, and the same applies to other special symbols.
Try this:
curl -g "http://localhost:8983/solr/input_records/select?q=timestamp:[*%20TO%20NOW/DAY-30DAYS]
A further addition to Mysterion's answer,
Since you are doing this with curl, you are running into the issue of URL encoding.
If you just mention
http://localhost:8983/solr/input_records/select?q=timestamp:[* TO NOW/DAY-30DAYS]
in your browser (Chrome or another), the URL encoding is handled automatically by the browser and you will get your response as expected.
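If you would rather not encode the query by hand, you can also let curl do it for you. A sketch using the same host and collection as above: -G turns the --data-urlencode value into an encoded query string appended to the URL.

curl -G "http://localhost:8983/solr/input_records/select" \
--data-urlencode "q=timestamp:[* TO NOW/DAY-30DAYS]"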
I have a problem with attachments in couchdb.
Let's say I have a document with a big attachment (100 MB). It means that each time you're modifying the document (not the attachment, just one field of the document), it will duplicate the 100 MB attachment.
Is it possible to force couchdb to create references of attachments when they are not modified (couchdb can easily verify if the attachment has been modified with the MD5)?
Edit:
According to this, it should be able to do it, but how? My personal install doesn't do it by default!
Normally, what you expect is the default behaviour of CouchDB. However, I think it could depend on how the API is used. For example, the following sample scenario works fine (on CouchDB 1.5).
All commands are given in bash syntax, so you can reproduce easily (just make sure to use correct document id and revision numbers).
Create 10M sample file for upload
dd if=/dev/urandom of=attach.dat bs=1024 count=10240
Create test DB
curl -X PUT http://127.0.0.1:5984/attachtest
The expected database data_size is just a few bytes at this point. You can query it as follows and look for the data_size attribute.
curl -X GET http://127.0.0.1:5984/attachtest
which gives in my test:
{"db_name":"attachtest","doc_count":1,"doc_del_count":0,"update_seq":2,"purge_seq":0,"compact_running":false,"disk_size":8287,"data_size":407,"instance_start_time":"1413447977100793","disk_format_version":6,"committed_update_seq":2}
Create sample document
curl -X POST -d '{"hello": "world"}' -H "Content-Type: application/json" http://127.0.0.1:5984/attachtest
This command gives an output with the document id and revision, which should then be used in the commands that follow.
Now, attach the sample file to the document; the command should use the id and revision as logged in the output of the previous one:
curl -X PUT --data-binary @attach.dat -H "Content-Type: application/octet-stream" http://127.0.0.1:5984/attachtest/DOCUMENT-ID/attachment\?rev\=DOCUMENT-REVISION-1
The last command's output shows that revision 2 has been created, so the document was indeed updated. One can check the database size now, which should be around 10000000 (10M). Again, look for data_size in the following command's output:
curl -X GET http://127.0.0.1:5984/attachtest
Now, get the document back from the DB. It will then be used to update it. It is important to have in it:
the _rev in the document, to be able to update it
attachment stub, to denote that attachment should not be deleted, but kept intact
curl -o document.json -X GET http://127.0.0.1:5984/attachtest/DOCUMENT-ID
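For orientation, document.json should then look roughly like this (the revision, digest and length below are placeholders, not real output); note the "stub":true entry that marks the attachment as kept rather than re-uploaded:

{"_id":"DOCUMENT-ID","_rev":"2-DOCUMENT-REVISION","hello":"world","_attachments":{"attachment":{"content_type":"application/octet-stream","revpos":2,"digest":"md5-...","length":10485760,"stub":true}}}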
Update the document content without changing the attachment itself (keeping the stub there). Here this will simply change one attribute value.
sed -i 's/world/there/' document.json
and update the document in the DB
curl -X PUT -d @document.json -H "Content-Type: application/json" http://127.0.0.1:5984/attachtest/DOCUMENT-ID
The last command's output shows that revision 3 has been created, so we know that the document was indeed updated.
Finally, now we can verify the database size! Expected data_size is still around 10000000 (10M), not 20M:
curl -X GET http://127.0.0.1:5984/attachtest
And this should work fine. For example, on my machine it gives:
{"db_name":"attachtest","doc_count":1,"doc_del_count":0,"update_seq":8,"purge_seq":0,"compact_running":false,"disk_size":10535013,"data_size":10493008,"instance_start_time":"1413447977100793","disk_format_version":6,"committed_update_seq":8}
So, still 10M.
It means that each time you're modifying the document (not the attachment, just one field of the document), it will duplicate the 100 MB attachment.
In my testing I found the opposite - the same attachment is linked through multiple revisions of the same document with no loss of space.
Can you please retest to be certain of this behaviour?
curl --data "<xml>" --header "Content-Type: text/xml" --request PROPFIND url.com
By reading the curl man page I could not understand how the above command line is using the --data option.
Question:
What does the above command line do?
Why doesn't the man page describe this usage? If it does, then what did I not understand after reading the man page?
The --data flag is for defining POST data.
The command sends the body <xml> with MIME type text/xml. However, with the --request flag, you are changing the HTTP method from POST to PROPFIND and sending the request to url.com.
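To make that more concrete, a typical WebDAV PROPFIND built the same way might look like this; webdav.example.com, the Depth header and the propfind XML body are illustrative and not part of the original command:

curl --data '<?xml version="1.0" encoding="utf-8"?><D:propfind xmlns:D="DAV:"><D:allprop/></D:propfind>' \
--header "Content-Type: text/xml" \
--header "Depth: 1" \
--request PROPFIND \
https://webdav.example.com/some/folder/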
I also managed to find the --data flag in the manual:
-d, --data <data>
(HTTP) Sends the specified data in a POST request to the HTTP server,
in the same way that a browser does when a user has filled in an HTML
form and presses the submit button. This will cause curl to pass the
data to the server using the content-type
application/x-www-form-urlencoded. Compare to -F, --form.
-d, --data is the same as --data-ascii. To post data purely binary, you should instead use the --data-binary option. To URL-encode the
value of a form field you may use --data-urlencode.
If any of these options is used more than once on the same command
line, the data pieces specified will be merged together with a
separating &-symbol. Thus, using '-d name=daniel -d skill=lousy' would
generate a post chunk that looks like 'name=daniel&skill=lousy'.
If you start the data with the letter @, the rest should be a file
name to read the data from, or - if you want curl to read the data
from stdin. The contents of the file must already be URL-encoded.
Multiple files can also be specified. Posting data from a file named
'foobar' would thus be done with --data @foobar. When --data is told
to read from a file like that, carriage returns and newlines will be
stripped out.
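To illustrate the behaviour described above (example.com and params.txt are placeholders):

# two -d options are merged with '&'; the body becomes 'name=daniel&skill=lousy'
curl -d name=daniel -d skill=lousy https://example.com/form

# read an already URL-encoded body from a file instead
curl -d @params.txt https://example.com/form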