curl --data "<xml>" --header "Content-Type: text/xml" --request PROPFIND url.com
From reading the curl man page, I could not understand how the above command line uses the --data option.
Question:
What does the above command line do?
Why doesn't the man page describe this usage? If it does, then what did I fail to understand after reading it?
The --data flag is for defining POST data.
By itself it would send a POST with body <xml> and MIME type text/xml. However, with the --request flag you are changing the HTTP method from POST to PROPFIND, and the request is sent to url.com.
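For comparison, a typical WebDAV PROPFIND request built the same way might look like this (the URL, the Depth header, and the XML body are only illustrative placeholders, not taken from the question):
curl --request PROPFIND \
     --header "Content-Type: text/xml" \
     --header "Depth: 1" \
     --data '<?xml version="1.0" encoding="utf-8"?><propfind xmlns="DAV:"><allprop/></propfind>' \
     "https://webdav.example.com/some/folder/"
Here --data supplies the request body exactly as it would for a POST; --request only swaps the method name on the request line.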
I also managed to find the --data flag in the manual:
-d, --data <data>
(HTTP) Sends the specified data in a POST request to the HTTP server,
in the same way that a browser does when a user has filled in an HTML
form and presses the submit button. This will cause curl to pass the
data to the server using the content-type
application/x-www-form-urlencoded. Compare to -F, --form.
-d, --data is the same as --data-ascii. To post data purely binary, you
should instead use the --data-binary option. To URL-encode the value of
a form field you may use --data-urlencode.
If any of these options is used more than once on the same command
line, the data pieces specified will be merged together with a
separating &-symbol. Thus, using '-d name=daniel -d skill=lousy' would
generate a post chunk that looks like 'name=daniel&skill=lousy'.
If you start the data with the letter @, the rest should be a file
name to read the data from, or - if you want curl to read the data
from stdin. The contents of the file must already be URL-encoded.
Multiple files can also be specified. Posting data from a file named
'foobar' would thus be done with --data @foobar. When --data is told
to read from a file like that, carriage returns and newlines will be
stripped out.
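For instance, the behaviours described above can be exercised like this (example.com and the file name are placeholders):
# two -d options are merged with '&' into name=daniel&skill=lousy
curl -d name=daniel -d skill=lousy "https://example.com/form"
# read an already URL-encoded body from a file (newlines are stripped)
curl -d @data.txt "https://example.com/form"
# read the body from stdin instead
echo 'name=daniel' | curl -d @- "https://example.com/form"
# let curl URL-encode a single value for you
curl --data-urlencode "comment=hello world" "https://example.com/form"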
Related
I have links to .php pages; how do I substitute my values into all of their parameters using a curl POST?
Given that I do not know which parameters these .php links take, curl should determine by itself which parameters are in the POST request and substitute my values.
If I know the parameter, then I can send it to the links like this:
while read -r p; do
    # POST the known parameter to every URL listed in domain.txt
    curl "$p" -X POST --connect-timeout 18 --cookie "" --user-agent "" \
        -d "parameter=helloworld" -w "%{url}:%{time_total}s\n"
done < domain.txt > output.txt
And if I do not know the parameters, what should I do? How can I make curl substitute values into the parameters automatically? For example, the value "hello world", given that I do not know the parameter name.
It's simply not possible. curl is a client program and has no way of knowing or finding out which request parameters a server supports and which it does not.
Unless, of course, the API is properly documented and available as an OpenAPI/Swagger specification, for example. If it isn't, you're out of luck.
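If such a specification does exist, you could extract the accepted parameters yourself before building the curl call; a rough sketch with jq, assuming a Swagger 2.0 file saved as swagger.json and a hypothetical /login.php path:
# list the form parameters declared for POST /login.php
jq -r '.paths["/login.php"].post.parameters[] | select(.in == "formData") | .name' swagger.json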
I'm having problems generating a new MR through curl. This is what I'm writing:
curl --request POST --header "PRIVATE-TOKEN: $TOKEN_FINTECH" "https://$gitlab/api/v4/projects/$id/merge_request/" {"source_branch":"TestBranch","target_branch":"LAD-Wiru","title":"This is a test","state":"opened"}
But when I run my job with this line, it returns the following
{"error":"404 Not Found"}curl: (3) URL using bad/illegal format or missing URL curl: (3) URL using bad/illegal format or missing URL curl: (3) URL using bad/illegal format or missing URL
I searched in several places but I still don't understand how to solve it. :C
In general, you want to pass the parameters by appending them to the URI.
You're also missing an s at the end of the endpoint (it is merge_requests, not merge_request), and passing some attributes (such as state) that the create-MR endpoint does not accept, so you'll need to correct those.
Most likely, you want something like this:
curl --request POST --header "PRIVATE-TOKEN: $TOKEN_FINTECH" "https://gitlab.example.com/api/v4/projects/1234/merge_requests?source_branch=TestBranch&target_branch=LAD-Wiru&title=This%20is%20a%20test"
If you prefer not to append the attributes to the URI, you can pass them in the request body with --data instead; --data already implies POST, so --request POST is then unnecessary.
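For example, a sketch of the same request with a JSON body (gitlab.example.com, the project id 1234, and the branch names are placeholders, as above):
curl --header "PRIVATE-TOKEN: $TOKEN_FINTECH" \
     --header "Content-Type: application/json" \
     --data '{"source_branch":"TestBranch","target_branch":"LAD-Wiru","title":"This is a test"}' \
     "https://gitlab.example.com/api/v4/projects/1234/merge_requests"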
I'm trying to use curl to send an image to a device.
This is the code:
#!/bin/bash
curl --header 'Access-Token: d78sdf8bd8bv6d98bd7d6df6b' \
--header 'Content-Type: application/json' \
--data-binary '{"type":"file","title":"Test IMG SEND","body":"Sending Dragon from Debian 8","file_name":"dr1.jpg","file_type":"image/jpeg","file_url":"https://dl2.pushbulletusercontent.com/HJGFC56597ggiyui78698GYGUFI7865/dr1.jpg"}' \
--request POST \
https://api.pushbullet.com/v2/pushes
and this is what I got from the previous upload-request:
{"data":{"acl":"public-read","awsaccesskeyid":"JSUH(=Y£GhHUIOG898787","content-type":"image/jpeg","key":"HJGFC56597ggiyui78698GYGUFI7865/dr1.jpg","policy":"ecvjksdblvuio3ghuv393783230cgfgsaidfg3","signature":"hjveirvhj34veupiv34'vvg3vg78"},"file_name":"dr1.jpg","file_type":"image/jpeg","file_url":"https://dl2.pushbulletusercontent.com/HJGFC56597ggiyui78698GYGUFI7865/dr1.jpg","upload_url":"https://upload.pushbullet.com/upload-legacy/yVayDlcd
To me, it seems ok, but obviously there is something wrong. Can anyone point me to a solution?
EDIT:
I'm sorry; the problem is that the answer from Pushbullet is "The param 'file_url' has an invalid value", and I'm not able to understand where the problem is, because I just copied the file_url from the previous upload-request answer, which should be -> file_url":"https://dl2.pushbulletusercontent.com/HJGFC56597ggiyui78698GYGUFI7865/dr1.jpg ...
This error isn't clear about what is actually invalid in the file_url. What it should say is, "The file url you specified points to a file that does not yet exist." In other words, you need to make sure the file is actually uploaded before you can link to it.
Their docs aren't great. I had the same issue and finally stumbled upon the answer. This is actually a 3-part process:
1) POST https://api.pushbullet.com/v2/upload-request – It wasn't immediately obvious, but this requests access to upload your file to Pushbullet's AWS S3 bucket. Use the data in the response for the next step.
2) POST https://upload.pushbullet.com/upload-legacy/yVayDlcd – (this should be the upload_url value from step 1). Everything in the data object should be posted, along with the file, to that upload_url. The expected response is a 204 with no content, which means it was successful.
3) POST https://api.pushbullet.com/v2/pushes – Finally, now that the file exists in Pushbullet's system, you can push using the file_url value from step 1.
Hopefully this explanation adds some clarity to their docs.
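For reference, here is a rough curl sketch of the three steps; the token, file name, and upload_url are placeholders, and the form fields in step 2 are simply whatever keys come back in the data object of the upload-request response:
# 1) ask for an upload slot; the response contains data, file_url and upload_url
curl --header 'Access-Token: <your-token>' \
     --header 'Content-Type: application/json' \
     --data-binary '{"file_name":"dr1.jpg","file_type":"image/jpeg"}' \
     --request POST \
     https://api.pushbullet.com/v2/upload-request
# 2) upload the file itself to the returned upload_url, passing every key
#    from the "data" object as a form field; expect an empty 204 reply
curl --request POST \
     -F awsaccesskeyid=<from-data> -F acl=<from-data> -F key=<from-data> \
     -F signature=<from-data> -F policy=<from-data> -F content-type=image/jpeg \
     -F file=@dr1.jpg \
     '<upload_url-from-step-1>'
# 3) only now push the file, using the file_url returned in step 1
curl --header 'Access-Token: <your-token>' \
     --header 'Content-Type: application/json' \
     --data-binary '{"type":"file","title":"Test IMG SEND","body":"Sending Dragon from Debian 8","file_name":"dr1.jpg","file_type":"image/jpeg","file_url":"<file_url-from-step-1>"}' \
     --request POST \
     https://api.pushbullet.com/v2/pushes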
I have a problem with attachments in couchdb.
Let's say I have a document with a big attachment (100 MB). It means that each time you modify the document (not the attachment, just one field of the document), it duplicates the 100 MB attachment.
Is it possible to force CouchDB to create references to attachments when they are not modified (CouchDB can easily verify whether an attachment has been modified using its MD5)?
Edit:
According to this, it should be able to do it, but how? My (personal) install doesn't do it by default!
Normally, what you expect is the default behaviour of CouchDB. It could depend on how the API is used, however. For example, the following sample scenario works fine (on CouchDB 1.5).
All commands are given in bash syntax, so you can reproduce them easily (just make sure to use the correct document id and revision numbers).
Create 10M sample file for upload
dd if=/dev/urandom of=attach.dat bs=1024 count=10240
Create test DB
curl -X PUT http://127.0.0.1:5984/attachtest
The database's data_size is expected to be just a few bytes at this point. You can query it as follows, looking for the data_size attribute:
curl -X GET http://127.0.0.1:5984/attachtest
which gives in my test:
{"db_name":"attachtest","doc_count":1,"doc_del_count":0,"update_seq":2,"purge_seq":0,"compact_running":false,"disk_size":8287,"data_size":407,"instance_start_time":"1413447977100793","disk_format_version":6,"committed_update_seq":2}
Create sample document
curl -X POST -d '{"hello": "world"}' -H "Content-Type: application/json" http://127.0.0.1:5984/attachtest
This command outputs the document id and revision, which should be used hereafter.
Now, attach the sample file to the document; the command should use the id and revision as logged in the output of the previous one:
curl -X PUT --data-binary @attach.dat -H "Content-Type: application/octet-stream" http://127.0.0.1:5984/attachtest/DOCUMENT-ID/attachment\?rev\=DOCUMENT-REVISION-1
The last command's output shows that revision 2 has been created, so the document was indeed updated. One can check the database size now, which should be around 10000000 (10M). Again, look for data_size in the following command's output:
curl -X GET http://127.0.0.1:5984/attachtest
Now, get the document back from the DB; it will then be used to update it. It is important that it contains:
the _rev field, to be able to update the document
the attachment stub, to denote that the attachment should be kept intact rather than deleted
curl -o document.json -X GET http://127.0.0.1:5984/attachtest/DOCUMENT-ID
Update the document content without changing the attachment itself (keeping the stub there). Here this will simply change one attribute value:
sed -i 's/world/there/' document.json
and update the document in the DB:
curl -X PUT -d @document.json -H "Content-Type: application/json" http://127.0.0.1:5984/attachtest/DOCUMENT-ID
The last command's output shows that revision 3 has been created, so we know that the document was indeed updated.
Finally, we can verify the database size! The expected data_size is still around 10000000 (10M), not 20M:
curl -X GET http://127.0.0.1:5984/attachtest
And this should work fine. For example, on my machine it gives:
{"db_name":"attachtest","doc_count":1,"doc_del_count":0,"update_seq":8,"purge_seq":0,"compact_running":false,"disk_size":10535013,"data_size":10493008,"instance_start_time":"1413447977100793","disk_format_version":6,"committed_update_seq":8}
So, still 10M.
It means that each time you're modifying the document (not the attachment, just one field of the document), it will duplicate the 100 MB attachment.
In my testing I found the opposite - the same attachment is linked through multiple revisions of the same document with no loss of space.
Could you please retest to confirm this behaviour?
I am using Apache Stanbol. It works for enhancing text; however, when I tried sentiment analysis and sentence detection, it didn't work.
I tried this command:
curl -v -X POST -H "Accept: text/plain" \
     -H "Content-Type: text/plain; charset=UTF-8" \
     --data "Some text for analysis" \
     "http://localhost:8081/enhancer/engine/sentiment-wordclassifier"
But it gives a blank { } output. I tried changing the header attributes, but no luck.
Am I missing something? Do I need to do some configuration first?
I even tried adding the analyzer to the enhancer chain, but got the same blank output. I also tried the REST API for opennlp-sentence, but it didn't work.
I guess you are sending data to the wrong endpoint... usually calls to the enhancer need to be done to all chains:
http://host/stanbol/enhancer
or to a concrete chain:
http://host/stanbol/enhancer/chain/<name>
The enhancer results cannot be serialized as plain text, only in one of the RDF serializations supported by Stanbol. So the Accept header needs to be one of those, text/turtle for instance.
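So, keeping the question's host and port, and assuming the enhancer really is mounted at /enhancer (with the full Stanbol launcher the path is usually /stanbol/enhancer), something along these lines should return results:
curl -v -X POST \
     -H "Accept: text/turtle" \
     -H "Content-Type: text/plain; charset=UTF-8" \
     --data "Some text for analysis" \
     "http://localhost:8081/enhancer"
To target a specific chain, append /chain/<name> to the enhancer URL as noted above.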
Further details at the documentation: http://stanbol.apache.org/docs/trunk/components/enhancer/#RESTful_API