Terminal - How to run the HTTP request 'PUT' - linux

So, what I am trying to do is send an HTTP 'PUT' request from the terminal in Linux. Not POST, not GET: 'PUT'.
I know that in the terminal you can just type 'GET http://example.com/', but when I did 'PUT http://example.com' (and a bunch of other variables after that...), the terminal said that PUT is not a command.
Here's what I tried:
:~$ PUT http://example.com
PUT: command not found
Well, is there a substitute for the command 'PUT', or some way of sending that HTTP request from the terminal?
I don't want to use any external programs... I don't want to download or install anything. Any other ways?

I would use curl to achieve this: curl -X PUT http://example.com

curl -X PUT -d arg=val -d arg2=val2 http://sssss.zzzz
will work. Or use Postman for HTTP requests (www.getpostman.com) if the terminal is not your main concern; otherwise, curl is always there.
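If what you want to PUT is a file, curl's -T (--upload-file) flag implies the PUT method over HTTP; upload.txt and the URL here are placeholders:
curl -T upload.txt http://example.com/upload.txt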

You are getting
Terminal said that PUT is not a command.
because the information is not being redirected over a network connection (to something that understands HTTP). bash by itself has limited support for communicating over a network, as discussed in:
Tech Tip: TCP/IP Access Using bash
More on Using Bash's Built-in /dev/tcp File (TCP/IP)
Advanced Bash-Scripting Guide: Example 29-1. Using /dev/tcp for troubleshooting
Besides that, the HTTP specification says of PUT:
The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI.
To clarify: if you are PUTting to an existing URI, you may be able to do this, and the request implicitly needs some data (a body) to reflect the modification.
The example in HTTP - Methods (TutorialsPoint) shows a PUT command used to store an HTML body on a URI. Your script has to redirect the data (as well as the initial request) onto the network connection.
You could do all of that using a here-document, or redirecting a file, e.g., (using that example to show how it might be adapted):
cat >/dev/tcp/example.com/80 <<EOF
PUT /hello.htm HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: example.com
Accept-Language: en-us
Connection: Keep-Alive
Content-type: text/html
Content-Length: 53

<html>
<body>
<h1>Hello, World!</h1>
</body>
</html>
EOF
Note that the blank line separating the headers from the body is required, and Content-Length must match the actual size of the body. Your script should also provide for reading the server's response.
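A minimal sketch of one way to do both, assuming bash (for /dev/tcp) and a hypothetical body.html file holding the body; printf supplies the CRLF line endings HTTP formally requires:
exec 3<>/dev/tcp/example.com/80                              # open a two-way TCP connection as fd 3 (bash feature)
printf 'PUT /hello.htm HTTP/1.1\r\n' >&3
printf 'Host: example.com\r\n' >&3
printf 'Content-Type: text/html\r\n' >&3
printf 'Content-Length: %s\r\n' "$(wc -c < body.html)" >&3   # length computed from the hypothetical body file
printf 'Connection: close\r\n\r\n' >&3                       # the blank line ends the headers
cat body.html >&3                                            # send the body
cat <&3                                                      # read and print the server's response
exec 3>&-                                                    # close the connection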

Use the -X flag with whatever HTTP verb you want:
curl -X PUT -d arg=val -d arg2=val2 localhost:8080
This example also uses the -d flag to provide arguments with your PUT request; note that curl sends -d data as application/x-www-form-urlencoded, so a multipart/form-data Content-Type header would not match the body.
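If the endpoint expects JSON instead of form fields, make the header and the body agree (localhost:8080 and the payload are placeholders):
curl -X PUT -H 'Content-Type: application/json' -d '{"arg": "val"}' http://localhost:8080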

Related

Ubuntu 18, proxy not working in terminal but working in browser

(a related and perhaps simpler problem to solve: proxy authentication by MSCHAPv2)
Summary: I am using Ubuntu 18; the proxy works with the web browser but not with terminal applications (wget, curl or apt update). Any clues? The problem seems to be interpreting the proxy's "PAC file"... Is it? How do I translate it into Linux's proxy variables? ... Or is the problem simpler: was my proxy config (see the step-by-step procedure below) wrong?
Details:
Running env | grep -i proxy in a terminal, we obtain:
https_proxy=http://user:pass#pac._ProxyDomain_/proxy.pac:8080
http_proxy=http://user:pass#pac._ProxyDomain_/proxy.pac:8080
no_proxy=localhost,127.0.0.0/8,::1
NO_PROXY=localhost,127.0.0.0/8,::1
ftp_proxy=http://user:pass#pac._ProxyDomain_/proxy.pac:8080
and the browser (Firefox) works fine for any URL, but:
wget http://google.com says Resolving pac._ProxyDomain_ (pac._ProxyDomain_)... etc.etc.0.26 Connecting to pac._ProxyDomain_ (pac._ProxyDomain_)|etc.etc.0.26|:80... connected.
Proxy request sent, awaiting response... 403 Forbidden
2019-07-25 12:52:19 ERROR 403: Forbidden.
curl http://google.com says "curl: (5) Could not resolve proxy: pac._ProxyDomain_/proxy.pac"
Notes
(recent news: purging the exported proxy variables changes something, and I have not re-tested everything since...)
The proxy configuration procedures I used are below (is there some plug-and-play PAC file generator? Do I need a PAC file?)
Config procedures used
The machine was running with a direct, non-proxy internet connection... Then the machine was moved to the LAN with the proxy.
Added "export *_proxy" lines (http, https and ftp) to my ~/.profile. The URL definitions are in the form http_proxy="http://user:pwd#etc" (supposing that is correct, because it was tested before with the user:pwd#http://pac.domain/proxy.pac syntax and Firefox prompted for the proxy login) (if the current proxy password uses the # character, does it need to change?)
Added "export *_proxy" lines to ~root/.profile. (Is this needed?)
(one can reboot and test with echo $http_proxy)
Followed the visudo procedure described here.
Rebooted and navigated with Firefox without needing to log in, directly (good, it is working!). Testing env | grep -i proxy shows all the correct values, as expected.
Testing wget and curl as at the beginning of this report: proxy bug.
Testing sudo apt update: bug.
... after that, one more step: supposing that no such file exists for apt, I created one with sudo nano /etc/apt/apt.conf.d/80proxy and added 3 lines of Acquire::*::proxy "value"; with the value http://user:pass#pac._ProxyDomain_/proxy.pac:8080, where pass is etc%23etc, URL-encoded.
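For reference, a minimal sketch of conventionally formed proxy variables; proxy.example.com:8080 is a hypothetical proxy host:port (not the PAC file's URL), credentials are separated from the host by @, and special characters in the password, such as #, are percent-encoded:
# Hypothetical values: the variables point at the proxy itself, not at proxy.pac;
# a '#' in the password is written as %23 so it is not parsed as a URL fragment.
export http_proxy="http://user:p%23ss@proxy.example.com:8080"
export https_proxy="$http_proxy"
export ftp_proxy="$http_proxy"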
Summary of tests performed
CONTEXT-1.1
(this was a problem, but I am now ignoring it to focus on the more relevant one)
After the (proxied) cable connection and the proxy configuration in the system (see the section "Config procedures used" above). Proxy password with a special character.
curl http://google.com says "curl: (5) Could not resolve proxy..."
When changing everything in .profile from %23 to #, the error from wget changes, but curl's does not. Wget changes to "Error parsing proxy URL http://user:pass#pac._ProxyDomain_/proxy.pac:8080: Bad port number"
PS: when $ was used in the password, something (the export http_proxy command itself, or a later use of http_proxy) confused it with a variable.
CONTEXT-1.2
Same as CONTEXT-1.1 above, but with a password containing no special characters. A good, clean proxy password.
curl http://google.com says "curl: (5) Could not resolve proxy..."
CONTEXT-2
After the (proxied) cable connection, with no proxy configuration in the system (but having confirmed that the connection works in the browser after the automatic popup login form).
curl -x 192.168.0.1:8080 http://google.com says "curl: (7) Failed to connect..."
curl --verbose -x "http://user:pass#pac._proxyDomain_/proxy.pac" http://google.com says "curl: (5) Could not resolve proxy..."
Other configs in use
As @Roadowl suggested, I checked:
the files ~/.netrc and ~root/.netrc do not exist
the file /etc/wgetrc exists, but everything in it is commented out, except for passive_ftp = on

Header param with underscore in HTTP requests not available on server side when requesting via Postman

Following is the curl export of the API call that is failing:
curl -X GET \
'http://endpoint.in/dummy/path?mobile=777777777' \
-H 'Content-Type: application/json' \
-H 'auth_token: iubsaicbsaicbasiucbsa'
The header param auth_token is not available at all on the server side, as checked from the logs.
The same curl, however, works when issued directly as a command. I have the latest Postman version, v6.2.3, installed.
Also, the same API end point works when requested via other tools like Advanced REST client of chrome.
Previously, I had also read this page: http://nginx.org/en/docs/http/ngx_http_core_module.html#underscores_in_headers
Many servers, like nginx, have a config option which, unless set, discards headers with underscores in the name.
However, I could not verify this here because I could not find out exactly how the server is deployed. It is a node application, and we run this command to start it:
nohup /bin/forever start -o logs/out.log -e logs/err.log app.js
ps -ef | grep node shows the following:
root 5981 1 0 Jul19 ? 00:00:00 /root/.nvm/v7.2.1/bin/node /usr/lib/node_modules/forever/bin/monitor app.js
root 5991 5981 0 Jul19 ? 00:00:04 /root/.nvm/v7.2.1/bin/node /usr/local/another/path/to/app.js
Update
This is interfering with our automated testing as well, via JMeter.
Update
We have nginx running on the server, and it seems to be proxying to the node process. We observed that on the server where this works fine, the nginx config file has this setting:
underscores_in_headers on;
But this is not present in the config file of the server where it is not working.
Another observation: I am using the latest Postman version, 6.2.5, where the issue occurs. However, when I sent the same Postman collection to a teammate and he ran it after installing Postman, it worked for him. I am still not sure whether the issue is with Postman or with the server setup.
Underscores are not explicitly forbidden in headers, but in the past, CGI mapped dashes in header names to underscores, so a header with a literal underscore becomes ambiguous. Because of this legacy, NGINX and Apache HTTPD treat underscores in headers as potentially problematic.
https://stackoverflow.com/a/22856867/2955337
You can explicitly set underscores_in_headers on;, but the default is off, so by default NGINX does not accept headers with underscores.
http://nginx.org/en/docs/http/ngx_http_core_module.html#underscores_in_headers
curl apparently converts underscores to dashes to circumvent this issue.
https://github.com/requests/requests/issues/1292#issuecomment-15997612
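For reference, a minimal sketch of where that directive could go; the server name and the upstream port are hypothetical, and per the nginx docs the directive is valid in the http and server contexts:
server {
    listen 80;
    server_name endpoint.in;              # hypothetical, taken from the example URL
    underscores_in_headers on;            # do not silently drop headers like auth_token
    location / {
        proxy_pass http://127.0.0.1:3000; # hypothetical node upstream
    }
}
Alternatively, renaming the header to auth-token on the client side sidesteps the issue entirely.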

Nodejs headers not matching actual request

There is a problem with NodeJS v7.9.0. Given a request such as
curl -i -H Accept:application/json -H range:bytes=1-8 -X GET http://localhost:8080/examples/text.txt
However, node's request header doesn't match when it is logged:
console.log(req.headers.range)
The logged value varies between different values for the exact same request
(some values logged from that request: bytes=1-2, bytes=1-3, bytes=1-4, bytes=1-5, bytes=1-6, bytes=1-7, bytes=1-8).
Is this a problem with NodeJS, or with something else in the computer's setup? And how does one fix it?
Note that the requests are being made with the "Rest Web service client" (a Chrome plugin); the request above is the equivalent curl command.
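One way to narrow this down (a sketch, assuming the server is listening on localhost:8080 as above) is to bypass the Chrome plugin and watch exactly which Range header curl sends; with -v, curl prints outgoing request headers prefixed with "> ":
curl -v -H 'Range: bytes=1-8' http://localhost:8080/examples/text.txt 2>&1 | grep -i '^> range'
If the logged value is stable when the request comes from curl but varies with the plugin, the plugin, not Node, is mangling the header.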

Docker - How to check if curl command inside Dockerfile had response code 200

Inside a Dockerfile I try to download an artifact using curl. I have verified that, although the artifact doesn't exist (so curl gets a 404), the docker build keeps running.
RUN curl -H 'Cache-Control: no-cache' ${STANDALONE_LOCATION} -o $JBOSS_HOME/standalone/configuration/standalone.xml
Is there a way to check that the curl response code is 200 and throw an error otherwise?
You can add -f (or --fail) to the curl call, which causes curl to silently fail on server errors. From the curl manpage:
-f/--fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better enable scripts etc to better deal with failed attempts. In normal cases when a HTTP server fails to deliver a document, it returns an HTML document stating so (which often also describes why and more). This flag will prevent curl from outputting that and return error 22.
This method is not fail-safe and there are occasions where non-successful response codes will slip through, especially when authentication is involved (response codes 401 and 407).
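If you need a strict "is it exactly 200" check rather than -f's server-error behavior, a sketch using curl's -w '%{http_code}' (reusing the variables from the question's RUN line) could look like:
RUN status=$(curl -s -w '%{http_code}' -H 'Cache-Control: no-cache' \
      "${STANDALONE_LOCATION}" -o "$JBOSS_HOME/standalone/configuration/standalone.xml") \
 && [ "$status" -eq 200 ]   # a non-200 response fails the RUN step and aborts the build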

How can I test my browser ignoring Location headers?

I want to test a site with my Firefox ignoring Location: headers, such as this example in PHP:
header('Location: another-page.php');
Is there a plugin available to do this, or any other method?
Would my best bet be surfing the site with Lynx? Does Lynx ignore them?
Thanks
You could try bringing up the pages with cURL.
It is a command line application that is invoked via:
curl http://url
cURL does not follow Location: headers by default.
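For example, to inspect a Location header without following it (the URL is a placeholder):
curl -sI http://example.com/page.php | grep -i '^location:'   # -I sends a HEAD request; use curl -si for a GET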
