I have .php links; how do I substitute my values into all the parameters using a curl POST?
Given that I do not know what parameters these .php links expect, curl should determine for itself which parameters belong in the POST request and substitute my values.
If I know the parameter, then I can send it to the links like this:
while read p; do
  curl "$p" -X POST --connect-timeout 18 --cookie "" --user-agent "" -d "parameter=helloworld" -w "%{url}:%{time_total}s\n"
done < domain.txt > output.txt
And if I do not know the parameters, what should I do? How can I make curl substitute values into the parameters automatically? For example, sending the value "hello world" when I do not know that the parameter is called "parameter".
It's simply not possible. curl is a client program and has no way of knowing or finding out which request parameters a server supports and which it does not.
Unless of course, the API is properly documented and available as an OpenAPI/Swagger specification for example. If it isn't, you're out of luck.
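If such a spec is available, you can at least discover the declared parameter names from it before building your curl requests. A minimal sketch, assuming a hypothetical OpenAPI 3 document published at https://api.example.com/openapi.json and jq installed:
# hypothetical spec location; adjust to wherever the API publishes its OpenAPI document
curl -s https://api.example.com/openapi.json \
  | jq -r '[.paths[][] | objects | .parameters[]?.name] | unique[]'
This only lists parameters declared in the spec's parameters arrays; request-body fields live under the operations' requestBody schemas and would need a similar query.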
I am trying to trace the flow of a request through my application. I have multiple components that are responsible for handling that request.
For instance, I have 4 components and I want to display all of their logs in the same terminal instance so I can follow the flow of the request. How can I achieve that?
Currently I am displaying the logs in separate terminal instances, like this:
tail -f path-to-log/component1.log
tail -f path-to-log/component2.log
I hope my question is understandable.
Just use:
tail -f component1.log component2.log
Try multitail. It allows you to tail multiple files at the same time.
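For example (a minimal sketch, assuming the same log paths as above):
multitail path-to-log/component1.log path-to-log/component2.log
This opens both files in a single terminal, each in its own scrolling window.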
I'm trying to generate a simple list of URLs that get a 'PASS' from Varnish. varnishlog is a great utility, but it appears that it can't do this task on its own, as it primarily logs HITS and has no tag for PASS.
Any idea if there is a way to log this? Perhaps in vcl_pass subroutine?
Nevermind... I got it.
varnishlog -b -k 5000 -i TxURL
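Note that TxURL is the Varnish 3.x tag; on Varnish 4 and later the log tags were renamed, so the equivalent is presumably:
varnishlog -b -i BereqURL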
Has anyone noticed that the service started sending the image's URL with an extra backslash before every slash?
e.g.:
"icon": "https:\ /\ /foursquare.com\ /img\ /categories\ /food\ /default.png".
instead of:
https://foursquare.com/img/categories/food/default.png
Is this normal? Thanks in advance.
We (foursquare) made a change yesterday to the way we serialize JSON output, but we did so in a way that should not break any modestly mature JSON handler. We may tweak how we do serialization in the future (always in compliance with the JSON spec) and we recommend you build your system to be robust against future changes.
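The \/ escape is explicitly allowed by the JSON spec, so a compliant parser decodes it back to a plain /. A quick check, assuming jq is available:
echo '{"icon": "https:\/\/foursquare.com\/img\/categories\/food\/default.png"}' | jq -r .icon
# prints: https://foursquare.com/img/categories/food/default.png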
Here is the question.
Given the URL http://www.example.com, can we read the first N bytes out of the page?
Using wget, we can only download the whole page.
Using curl, there is -r: 0-499 specifies the first 500 bytes. That seems to solve the problem, but:
You should also be aware that many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, you'll instead get the whole document.
Using urllib in Python: there is a similar question here, but according to Konstantin's comment, is that really true?
Last time I tried this technique it failed because it was actually impossible to read only a specified amount of data from the HTTP server, i.e. you implicitly read the whole HTTP response and only then read the first N bytes out of it. So in the end you ended up downloading the whole 1 GB malicious response.
So the question is: how can we read just the first N bytes from an HTTP server in practice?
Regards & Thanks
You can do it natively by the following curl command (no need to download the whole document). According to the curl man page:
RANGES
HTTP 1.1 introduced byte-ranges. Using this, a client can request to get only one or more subparts of a specified document. curl supports this with the -r flag.
Get the first 100 bytes of a document:
curl -r 0-99 http://www.get.this/
Get the last 500 bytes of a document:
curl -r -500 http://www.get.this/
curl also supports simple ranges for FTP files; there you can only specify the start and stop position.
Get the first 100 bytes of a document using FTP:
curl -r 0-99 ftp://www.get.this/README
It works for me even with a Java web app deployed to GigaSpaces.
curl <url> | head -c 499
or
curl <url> | dd bs=1 count=499
should do
There are also simpler utilities with perhaps broader availability, like:
netcat host 80 <<"HERE" | dd bs=1 count=499 of=output.fragment
GET /urlpath/query?string=more&bloddy=stuff
HERE
You should also be aware that many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, you'll instead get the whole document.
You will have to get the whole page anyway, so you can fetch it with curl and pipe it to head, for example.
head
-c, --bytes=[-]N
print the first N bytes of each file; with the leading '-', print all but the last N bytes of each file
I came here looking for a way to time the server's processing time, which I thought I could measure by telling curl to stop downloading after 1 byte or something.
For me, the better solution turned out to be to do a HEAD request, since this usually lets the server process the request as normal but does not return any response body:
time curl --head <URL>
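If a server handles HEAD differently from GET, an alternative sketch (using curl's documented -w timing variables, at the cost of downloading the body) is to discard the output and report the time to first byte:
curl -s -o /dev/null -w 'time_starttransfer: %{time_starttransfer}s\n' <URL>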
Make a socket connection. Read the bytes you want. Close, and you're done.
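A minimal sketch of that idea in plain bash, assuming bash's /dev/tcp support and a server that speaks unencrypted HTTP on port 80 (swap in your own host and path):
# open a TCP connection to the server on file descriptor 3
exec 3<>/dev/tcp/www.example.com/80
# send a minimal HTTP/1.0 request
printf 'GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n' >&3
# read only the first 500 bytes of the response (headers included), then close
head -c 500 <&3
exec 3<&-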