While trying to execute an HTTP GET, I keep getting a connection timeout error. The same code runs perfectly fine on my system, but the problem occurs when I run it on my server. The same URL, when tried with curl, returns a response consistently. Both systems run Ubuntu 10.04 and use OpenJDK. Both use commons-httpclient-3.1.jar, and no multi-threading or multiple connections are involved. I understand there has to be something different somewhere that is causing the difference in behaviour, but I am not able to figure out where to start looking. Any pointers?
Is your server sitting behind a proxy? We've had to make all of our apps that make remote calls (outside our datacenter) proxy-aware; otherwise the calls get blocked by the proxy server. Another thing to try is to take Java out of the picture and make some basic calls with curl.
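For example, to check from the server whether the call only succeeds when it goes through a proxy, a quick test could look like this (the proxy host, port and target URL are placeholders for your own):
$ curl -x http://proxy.example.com:8080 http://target.example.com/some/path
If the proxied call succeeds while the Java call times out, the Java side needs to be made proxy-aware too (with commons-httpclient 3.1 that means setting the proxy on the client's HostConfiguration).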
Related
In our application we have a REST integration with another service.
It worked fine until recently; however, sometimes we now get a 502 Bad Gateway in the application.
We send HTTP requests through curl and that seems to work, but then we are presented with the error described above.
The client is believed to have a firewall installed.
Whenever we face the problem, I usually ask the administrator to turn the firewall off and that does the trick, but I'm not sure whether it has something to do with the code.
My question is: what causes such behaviour, and how can we avoid it in the future?
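One way to narrow this down is to reproduce a failing call with verbose output and look at where the 502 actually comes from; response headers such as Server or Via often reveal whether the answer is generated by the client's firewall/proxy or by the upstream service (the URL is a placeholder):
$ curl -v https://service.example.com/endpoint
If the 502 only appears while the firewall is enabled and the headers point at the firewall/proxy rather than the service, the cause is in the network path rather than in your code.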
I have a problem with an Express.js service running on production that I'm not able to replicate on my localhost. I have already tried replaying all the URLs sent to production against my local machine, but on my machine everything works fine. So I suspect the problem lies in the data in the HTTP headers (cookies, user agents, languages...).
So, is there a way (some Express module, or a sniffer that runs on Ubuntu) that allows me to easily create a dump on the server of the whole headers, so I can later repeat those exact requests against my localhost?
You can capture network packets with https://www.wireshark.org/, analyze them and maybe find the difference between your local environment and the production one.
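If the production box is headless, one option is to capture the traffic there with tcpdump and open the capture file in Wireshark afterwards (a sketch; the port and output path are assumptions, adjust them to whatever your Express service actually listens on):
$ sudo tcpdump -i any -s 0 -w /tmp/requests.pcap 'tcp port 80'
The resulting capture contains the full requests, headers included, which you can then replay against your local machine.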
You can try using a proxy tool like Charles (https://www.charlesproxy.com/) or Fiddler (http://www.telerik.com/fiddler) to log your browser requests.
So I have an issue where I am remotely connecting to a MongoDB on my Linode box, and I am no longer able to access this DB. I can from my local machine, but when I am on the remote server where the website runs it does not work. I am not seeing the connection come into the Mongo logs, so this leads me to believe that maybe it is being intercepted by a firewall?
How can I watch traffic and see if this connection is at least making it to my Linode box? I am connecting with Mongoid, so a way to prove there is no connection from that side would help too.
Right now the only error I get is an execution timeout error from an ERB view - not much to go on.
Thanks,
Daniel
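One way to confirm whether the connection is at least reaching the Linode box is to capture traffic on the MongoDB port there while the website attempts a connection (a sketch; 27017 is MongoDB's default port and eth0 is an assumption, adjust both to your setup):
$ sudo tcpdump -i eth0 -n 'tcp port 27017'
If nothing shows up while the site is trying to connect, the traffic is being dropped before it reaches mongod, which would point at a firewall rather than MongoDB itself.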
I'm using Servlet 3 with Jetty 8.1.1 and the SslContextFactory on an Amazon EC2 machine (m1.small).
The first HTTPS request from localhost (on the Amazon machine) takes about 150ms, and further requests seem to get faster (down to ~40ms), but never get close to the HTTP response time of only 20ms - why? Is encryption really that slow?
Also, when comparing HTTPS and HTTP from outside of the Amazon cloud, the difference gets even worse: HTTPS requests are at least 400ms slower!? How can that be? Is the encrypted content also bigger? And how can I debug it or make it all faster?
Some more information: all 'measurements' are unscientifically done via time curl http://mydomain.com/ping but are reproducible. Also, there is an EC2 load balancer in between. I'm sure I've configured something wrong or there is a big misunderstanding on my side. Let me know!
Update to 8.1.7.
Check the time from localhost on the AWS machine for reference.
Check using the IP vs. the DNS name; quite often those sorts of long pauses involve DNS issues (the curl timing breakdown sketched below helps separate that out).
Set your /etc/hosts to bypass a DNS lookup for the host as a test as well.
Use -Dorg.eclipse.jetty.LEVEL=DEBUG on the server side to enable debug logging; that should help you correlate the round trip inside of Jetty and compare it to the actual network results.
SSL decryption does incur some performance hit, though it's hard to say that would account for all of the difference here.
Odds are this is not specific to Jetty but something in the environment, and hopefully one of the points above will steer you in the right direction.
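To separate DNS, TCP connect, TLS handshake and server time, curl's -w option gives a rough breakdown (a sketch; the URL is the placeholder from the question):
$ curl -o /dev/null -s -w 'dns=%{time_namelookup} connect=%{time_connect} tls=%{time_appconnect} total=%{time_total}\n' https://mydomain.com/ping
If time_appconnect dominates, most of the extra latency is the TLS handshake; if time_namelookup is large, it is a DNS issue.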
I need to find out how to enable SSL sessions. For this I've created a new question, as it is unclear how to turn them on in Jetty and how to handle them on the client side.
I am trying to run a node.js program behind a corporate firewall. However, I am unable to explicitly tell node.js which proxy to use and therefore all my external connections time out.
I read in a post somewhere that I could use connect-proxy as an HTTP proxy for my tunneling needs, but I have no idea how to actually use it.
I want to run the following:
$ node program.js
using connect-proxy.
The only command I was able to get so far is this:
$ connect-proxy -H myproxy.com:8083 google.com
GET
HTTP/1.0 302 Found
Location: http://www.google.com/
...
Before going further, it is worth trying the environment variable that a number of other languages and tools support:
export http_proxy=http://proxyserver:port
Often proxies use port 8080, but check the JavaScript in the PAC file loaded by your browser to be sure.
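Putting it together, the test would be to export the variables in the same shell and then start the program (assuming the HTTP library used inside program.js actually honours these variables, which not all Node libraries do):
$ export http_proxy=http://proxyserver:8080
$ export https_proxy=http://proxyserver:8080
$ node program.js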
If that produces a different result but still doesn't connect, you probably need to do NTLM authentication with the proxy, and the only way I know to do this is to run NTLMAPS before running your app. If you are really interested in getting this working transparently, then porting NTLMAPS to JavaScript should do the trick.