I'm using linkerd and have to use globalTunnel to proxy everything via localhost:4140. The problem is that this seems to cause Loggly to stop working: as soon as the global tunnel is active, Loggly doesn't receive any messages. How can I fix this?
globalTunnel.initialize({
host: 'localhost',
port: 4140
});
I have seen that I can pass a proxy variable in the config for the Loggly instance.
var logglyStream = new Bunyan2Loggly(logglyConfig);
Thanks for the help.
globalTunnel overrides all http requests, so assuming that the Loggly library uses the standard http library, further proxy configuration in the Loggly library is not necessary.
I think there may be two issues here:
Linkerd Routing Rules
linkerd needs routing rules to proxy to the outside internet. You'll need a dtab that recognizes host:port requests and routes them accordingly:
dtab: |
/ip-hostport => /$/inet;
/svc => /$/io.buoyant.hostportPfx/ip-hostport;
Confirm routing works with this command:
$ http_proxy=localhost:4140 curl -s -o /dev/null -w "%{http_code}" www.google.com:80
200
Loggly header processing
It appears that Loggly fails all requests that contain headers with forward slashes in them:
# working request (the 403 comes from Loggly itself, i.e. the request got through):
$ curl -H "foo: bar" -s -o /dev/null -w "%{http_code}" logs-01.loggly.com
403
# failed request (the forward slash in the header value triggers a 400 Bad Request):
$ curl -H "foo: /bar" -s -o /dev/null -w "%{http_code}" logs-01.loggly.com
400
Linkerd sets several headers on outbound requests for tracing, service discovery, and context information. Some of those headers include strings with forward slashes.
To get around this, we have two options:
Modify linkerd to clear headers on outbound requests. I've filed github.com/linkerd/linkerd/issues/1218 to track this work.
Set up a proxy server to handle outbound requests for Loggly, as documented in https://github.com/loggly/loggly-jslogger#setup-proxy-for-ad-blockers. Then, assuming that service is set up at internal-nginx-proxy, you can use this routing rule:
dtab: |
/svc/logs-01.loggly.com => /$/inet/internal-nginx-proxy/80;
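As a sanity check (assuming the internal-nginx-proxy host from above is set up and reachable), you can re-run the earlier verification command against the Loggly endpoint through the tunnel; per the header test above, a 403 means the request reached Loggly, while a 400 means headers are still being rejected:
$ http_proxy=localhost:4140 curl -s -o /dev/null -w "%{http_code}" logs-01.loggly.com
403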
I'm not familiar with linkerd, but Loggly sends logs to logs-01.loggly.com on port 80 (or 443 for secure connections). Is that traffic proxied through your tunnel?
Related
I have an x.example server which serves traffic for both a.example and b.example.
x.example has certificates for both a.example and b.example. The DNS for a.example and b.example is not yet set up.
If I add an /etc/hosts entry for a.example pointing to x.example's ip and run curl -XGET https://a.example, I get a 200.
However if I run curl --header 'Host: a.example' https://x.example, I get:
curl: (51) SSL: no alternative certificate subject name matches target host name 'x.example'
I would think it would use a.example as the host. Maybe I'm not understanding how SNI/TLS works: because a.example is in an HTTP header, the TLS handshake doesn't have access to it yet, but it does have access to the hostname in the URL itself?
Indeed, SNI in TLS does not work like that. SNI, like everything related to TLS, happens before any kind of HTTP traffic, hence the Host header is not taken into account at that step (but it will be useful later on for the webserver to know which host you are connecting to).
So to enable SNI you need a specific switch in your HTTP client to tell it to send the appropriate TLS extension during the handshake with the hostname value you need.
In the case of curl, you need at least version 7.18.1 (based on https://curl.haxx.se/changes.html), and then it seems to automatically use the value provided in the Host header. It also depends on which OpenSSL (or equivalent library on your platform) version it is linked to.
See point 1.10 of https://curl.haxx.se/docs/knownbugs.html that speaks about a bug but explains what happens:
When given a URL with a trailing dot for the host name part: "https://example.com./", libcurl will strip off the dot and use the name without a dot internally and send it dot-less in HTTP Host: headers and in the TLS SNI field.
The --connect-to option could also be useful in your case, or --resolve as a substitute for /etc/hosts; see https://curl.haxx.se/mail/archive-2015-01/0042.html for an example, or https://makandracards.com/makandra/1613-make-an-http-request-to-a-machine-but-fake-the-hostname
You can add --verbose in all cases to see in more detail what is happening. See this example: https://www.claudiokuenzler.com/blog/693/curious-case-of-curl-ssl-tls-sni-http-host-header ; you will also see there how to test directly with openssl.
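For instance, a direct SNI test with openssl might look like this (a sketch; it assumes x.example resolves from where you run it):
# Connect to x.example's address but send a.example in the TLS SNI extension,
# then print the subject of the certificate the server presented
openssl s_client -connect x.example:443 -servername a.example </dev/null 2>/dev/null | openssl x509 -noout -subject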
If you have a.example in your /etc/hosts you should just run curl with https://a.example/ and it should take care of the Host header and hence SNI (or use --resolve instead)
So to answer your question directly, replace
curl --header 'Host: a.example' https://x.example
with
curl --connect-to a.example:443:x.example:443 https://a.example
and it should work perfectly.
The selected answer helped me find the solution, even though it does not contain it outright. The answer in the mail/archive link Patrick Mevzek provided has the wrong port number, so even following that answer will cause it to continue to fail.
I used this container to run a debugging server to inspect the requests. I highly suggest anyone debugging this kind of issue do the same.
Here is how to address the OP's question.
# Instead of this:
# curl --header 'Host: a.example' https://x.example
# Do:
host=a.example
target=x.example
# resolve the target host's IP (take the first A record only)
ip=$(dig +short $target | head -n1)
# tell curl to use that IP for $host, so both SNI and the Host header carry a.example
curl -sv --resolve $host:443:$ip https://$host
If you want to ignore bad certificate matches, use -svk instead of -sv:
curl -svk --resolve $host:443:$ip https://$host
Note: since you are using https, you must use 443 in the --resolve argument instead of the 80 that was stated in the mail/archive answer.
I had a similar need, but didn't have sudo access to update the hosts file. I used the --resolve parameter and also added the DNS hostname as a Host header:
--resolve <dns name>:<port>:<ip addr>
curl --request POST --resolve dns_name:443:a.b.c.d 'https://dns_name/x/y' --header 'Host: dns_name' ....
Cheers..
I built and installed squid 3.5.23 as follows:
./configure --prefix=/usr/local/squid
make all
make install
Here is the default squid.conf used by that version. I made minimal modifications to the file to make my setup anonymous:
forwarded_for delete
request_header_access Via deny all
request_header_access Cache-Control deny all
After I got the (remote) proxy server running, I confirmed that I could configure my (local) browser to send traffic through it. I then took it to the next step and had my router send all traffic originating from my local network to the proxy server:
iptables -t nat -A PREROUTING -s 192.168.11.0/24 -d ! 192.168.11.0/24 -p tcp -j DNAT --to-destination 100.200.30.40:3128
However, all of my requests came back with a 400 (Bad Request) from squid. On investigating further, I discovered that the requests were using relative URLs in the request line (my browser is smart enough to always use absolute URLs when it knows it is talking to a proxy server).
I know HTTP/1.1 requests are required to have a Host header, which squid can use to determine the original destination of the packets it receives. How do I configure the proxy server to use that header? I am looking for the squid 3.5 equivalent of httpd_accel_uses_host_header on.
Running squid in accelerator mode fixed this:
http_port 3128 accel
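Two hedged notes on that line: in squid 3.5 the vhost flag is what makes accel mode derive the origin server from the Host header (the closest analogue of the old httpd_accel_uses_host_header on), and there is also a dedicated intercept mode for NAT-redirected traffic, though intercept expects the NAT to happen on the Squid box itself:
# accel mode using the Host header for virtual domain support
http_port 3128 accel vhost
# alternatively, for traffic redirected locally with iptables REDIRECT:
# http_port 3128 intercept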
I have a Magento website on a Linux server (with Varnish cache); some of the product detail pages show this error:
Error 503 Backend fetch failed
Guru Meditation: XID: 98757
My website IP is 52.163.xxx.xx
Please find the below details and help me to fix this issue.
/etc/default/varnish
DAEMON_OPTS="-a :8080 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
/etc/varnish/default.vcl
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}
sudo service varnish restart
Stopping HTTP accelerator varnishd No /usr/sbin/varnishd found running; none killed.
[fail]
Starting HTTP accelerator varnishd [fail]
bind(): Address already in use
bind(): Address already in use
Error: Failed to open (any) accept sockets.
As I understand it, you are running Varnish and the backend webserver (say nginx or apache) on the very same Linux machine, right?
First of all, try to run this command:
sudo netstat -anp | grep LISTEN | grep 8080
And see what process is bound on port 8080 and on which ip.
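If netstat isn't available on your machine, the equivalent ss invocation (from iproute2) should give the same information:
sudo ss -ltnp '( sport = :8080 )'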
The first part of your question suggests Varnish is running but is just not able to connect to the backend.
But the second part tells me you are not able to start Varnish at all.
So please clarify which it is, and perhaps attach the output from the command above.
Let's continue with the second part, i.e. Varnish not being able to start.
I guess you have a backend server running on 8080, be it nginx, apache, whatever.
Your Varnish backend config confirms it, after all.
Check that the web server is bound to 127.0.0.1 and not to 0.0.0.0, so that public traffic cannot connect directly to the backend web server.
If that is the case, you have to change Varnish's listening ip:port to a non-colliding combination.
You can either:
change the Varnish port to something other than 8080, say 80
change the port of the backend web server to something else, if you need 8080 to be public
double-check that your backend web server is listening on localhost only, and bind Varnish to your public IP instead of 0.0.0.0 (the default, meaning all of the machine's IPs)
You can do the last option by changing main varnish configuration to:
DAEMON_OPTS="-a 52.163.xxx.xx:8080 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"
This scenario has one important drawback: if your machine ever gets a new public IP, you have to change it in the main Varnish configuration too. If that is something you can encode into an automation recipe, it shouldn't be a problem. But if you manage it by hand, be sure you have a really good documentation practice or you'll be hunting ghost bugs in the future. :)
One mistake is having both Varnish and your backend server running on the same port, 8080. You have two options to solve this:
The most straightforward and simple: adjust the Varnish DAEMON_OPTS to listen on port 80.
It may still work on the same ports, provided that you make Varnish and your backend server listen on different interfaces (see the sketch after this list):
Varnish would normally listen on the external interface. Thus, adjust your Varnish listen parameter to be bound to a specific IP: DAEMON_OPTS="-a 52.163.xxx.xx:8080 ...
Bind your backend server (Apache, nginx, whatever) to listen only on the loopback interface, 127.0.0.1.
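A minimal sketch of that second option, assuming an nginx backend (the IP is the one from the question; adjust names and paths to your setup):
# /etc/default/varnish: Varnish bound to the public IP
DAEMON_OPTS="-a 52.163.xxx.xx:8080 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"
# nginx server block: backend bound to the loopback interface only
server {
    listen 127.0.0.1:8080;
    # ... rest of the Magento vhost configuration ...
}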
Your VCL is "empty"; you should be using the corresponding plugin for Magento, which will ensure that Varnish actually caches things by generating the correct VCL file for you:
Magento 1.x: the Turpentine plugin
Magento 2.x: Varnish support is built in, and the VCL can be generated from the admin backend of your Magento installation.
I want to verify that my web application does not have a path traversal vulnerability.
I'm trying to use curl for that, like this:
$ curl -v http://www.example.com/directory/../
I would like the HTTP request to be made explicitly to the /directory/../ URL, to test that a specific nginx rule involving a proxy is not vulnerable to path traversal. That is, I would like this HTTP request to be sent:
> GET /directory/../ HTTP/1.1
But curl is rewriting the request to the / URL, as can be seen in the output:
* Rebuilt URL to: http://www.example.com/
(...)
> GET / HTTP/1.1
Is it possible to use curl for this test, forcing it to pass the exact URL in the request? If not, what would be an appropriate way?
The curl flag you are looking for is --path-as-is.
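For example (note that --path-as-is requires curl 7.42.0 or newer), the request line then keeps the dot segments:
$ curl -v --path-as-is http://www.example.com/directory/../
(...)
> GET /directory/../ HTTP/1.1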
I'm not aware of a way to do it via curl, but you could always use telnet. Try this command:
telnet www.example.com 80
You'll see:
Trying xxx.xxx.xxx.xxx...
Connected to www.example.com.
Escape character is '^]'.
You now have an open connection to www.example.com. Now type in your request; note that HTTP/1.1 requires a Host header, and the request is terminated with a blank line:
GET /directory/../ HTTP/1.1
Host: www.example.com
And you should see your result, e.g.:
HTTP/1.1 400 Bad Request
You can use an intercepting proxy to capture a request to your application and repeat the request with parameters changed, such as the raw URL that is requested from the application.
The free version of Burp Suite will allow this using the Repeater.
However, there are alternatives that should also allow this, such as ZAP, WebScarab, and Fiddler2.
I have this proxy address: 125.119.175.48:8909
How can I perform an HTTP request using cURL, like curl http://www.example.com, but specifying the proxy address of my network?
From man curl:
-x, --proxy <[protocol://][user:password@]proxyhost[:port]>
Use the specified HTTP proxy.
If the port number is not specified, it is assumed at port 1080.
General way:
export http_proxy=http://your.proxy.server:port/
Then you can connect through proxy from (many) application.
And, as per comment below, for https:
export https_proxy=https://your.proxy.server:port/
The above solutions might not work with some curl versions; I tried them myself with curl 7.22.0. What worked for me was:
curl -x http://proxy_server:proxy_port --proxy-user username:password -L http://url
Hope it solves the issue better!
Beware that if you are using a SOCKS proxy instead of an HTTP/HTTPS proxy, you will need to use the --socks5 switch instead:
curl --socks5 125.119.175.48:8909 http://example.com/
You can also use --socks5-hostname instead of --socks5 to resolve DNS on the proxy side.
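For example, with the proxy address from the question:
curl --socks5-hostname 125.119.175.48:8909 http://www.example.com/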
In addition to airween's answer, another good idea is to add this to your .bashrc, so you'll be able to switch between non-proxied and proxied environments:
alias proxyon="export http_proxy='http://YOURPROXY:YOURPORT';export https_proxy='http://YOURPROXY:YOURPORT'"
alias proxyoff="export http_proxy='';export https_proxy=''"
where YOURPROXY:YOURPORT is exactly that: your proxy IP and port. :-)
Then, simply doing
proxyon
your system will start to use the proxy, and just the opposite with:
proxyoff
Use the following:
curl -I -x 192.168.X.X:XX http://google.com
192.168.X.X:XX: your proxy server's IP and port.
-I fetches headers only; add -v for verbose mode, which gives more details including headers and the response.
I like using this in order to see the IP I am seen under:
curl -x http://proxy_server:proxy_port "https://api.ipify.org?format=json" && echo
Hope this helps someone.
For curl you can configure proxy in your ~/.curlrc (_curlrc on Windows) file by adding proxy value, the syntax is:
proxy = http://username:password@proxy-host:port
curl -I "https://www.google.com" -x 1.1.1.1:8080
Just summarizing all the great answers mentioned above:
curl -x http://<user>:<pass>@<proxyhost>:<port>/ -o <filename> -L <link>
With a proxy that requires authentication, I use:
curl -x <protocol>://<user>:<password>@<host>:<port> --proxy-anyauth <url>
because, for whatever reason, my curl doesn't pick up the http[s]_proxy environment variables.
You don't need to export the http[s]_proxy shell variable if you're just setting the proxy for a one-off command, e.g.
http_proxy=http://your.proxy.server:port curl http://www.example.com
That said, I'd prefer curl -x if I knew I was always going to use a proxy.
sudo curl -x http://10.1.1.50:8080/ -fsSL https://download.docker.com/linux/ubuntu/gpg
This worked perfectly for me; the error occurs because curl needs the proxy to be set. Remember to replace the proxy with your own; in my example it was http://10.1.1.50:8080/.
curl -vv -ksL "https://example.com" -x "http://<proxy>:<port>"
Depending on your workplace, you may also need to specify the -k or the --insecure option for curl in order to get past potential issues with CA certificates.
curl -x <myCompanyProxy>:<port> -k -O -L <link to file to download>
In case the proxy is configured automatically via a PAC file, you can find the actual proxy in the JavaScript served at the PAC URL.
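For instance, a quick way to pull the PROXY entries out of the PAC file (the PAC URL below is a hypothetical placeholder):
$ curl -s http://internal.example.com/proxy.pac | grep -o 'PROXY [^";]*'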
And if the proxy needs authentication, you can first use a normal web browser to access the website, which will prompt an authentication dialog. After authenticating, use Wireshark to capture the HTTP packets sent to the proxy server; from those packets you can read the auth token in the Proxy-Authorization HTTP header.
Then set the http_proxy environment variable and also include the auth token in the Proxy-Authorization header:
export http_proxy=http://proxyserver:port
curl -H "Proxy-Authorization: xxxx" http://targetURL
curl -x socks5://username:password@ip:port example.com
For HTTP proxy tunnels (needed for the TLS protocol), you need to add -p (aka --proxytunnel) alongside -x rather than replacing it.
See the curl post about proxies.
tl;dr: a proxy tunnel uses the "CONNECT" method instead of a modified "GET".
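For example (the proxy address is a placeholder):
curl --proxytunnel -x http://proxy_server:proxy_port https://example.com/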
This was needed for the node http-proxy-middleware library. I only got a clue once I tried wget, which worked out of the box.