I am using a third-party API which accepts JSON input and responds with JSON output. Locally I checked the API response on port 8181 and it works great. When I deploy and test the same on the production environment on AWS, it fails with the error:
Could not get any response
There seems to be an error connecting to https://<ec2-instance-public-ip>:8181/auth/raw
I am able to ping the public IP of the server. I have already tried to find a solution but could not.
Please suggest how I can resolve this.
I managed to solve it myself, after a lot of head-scratching, by adding a Custom TCP rule for port 8181 to the Inbound rules of the instance's security group.
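For anyone who prefers to script that change rather than click through the console, the equivalent call with the AWS SDK for JavaScript looks roughly like this (the security group ID and region are placeholders, and 0.0.0.0/0 should really be narrowed to the clients that need access):

import { EC2Client, AuthorizeSecurityGroupIngressCommand } from '@aws-sdk/client-ec2';

const ec2 = new EC2Client({ region: 'us-east-1' }); // placeholder region

// Add an inbound TCP rule for port 8181 to the instance's security group.
await ec2.send(new AuthorizeSecurityGroupIngressCommand({
  GroupId: 'sg-0123456789abcdef0', // placeholder security group ID
  IpPermissions: [{
    IpProtocol: 'tcp',
    FromPort: 8181,
    ToPort: 8181,
    IpRanges: [{ CidrIp: '0.0.0.0/0', Description: 'third-party API on 8181' }],
  }],
}));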
I am using Nginx as my HTTPS server to serve the HTTP content from my Node server.
I am also hosting the server on Google Cloud.
I have been getting a 504 Gateway Timeout error, so I wondered if it was because I hadn't opened port 8080 for my upstream server (the Node server). After opening the port it works, but I am not sure whether that is the correct way to do it.
But I kept looking at other docs and tutorials online, and I never see people configure things this way to connect to a Node server; they mostly leave only port 80 open. So I wondered whether my config in the server block is causing the 504 gateway problem.
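For reference, the upstream Node server is nothing fancy; simplified, it is roughly this (the handler body here is just a placeholder):

import * as http from 'http';

// Plain HTTP app that nginx proxies to on port 8080.
// Note: if nginx runs on the same VM, listening on 127.0.0.1 instead of all
// interfaces means 8080 never has to be opened in the cloud firewall; only
// when nginx and the app sit on different instances is that firewall rule needed.
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello from the upstream node server\n');
}).listen(8080);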
----------second update
This is my setting, and the default_server part was written by default.
But I always see docs include a directive, server_name; actually I don't quite understand this directive. May I know whether I should set it for later use, even though it works now?
Aside from that, I got a server error from my app:
FetchError: request to https://34.96.213.54:443/search/guest2 failed, reason: self-signed certificate
Why does it fail here, when hitting that API directly in Chrome and in Postman works fine?
---------- third update
About the self-signed certificate: you need to buy one or use a free service like https://letsencrypt.org. Besides that, your questions are quite basic, so you should research the nginx docs some more (http://nginx.org/en/docs/http/server_names.html).
I have created a simple GraphQL subscription using Nest.js/Apollo GraphQL on Node.js. My client application, which is a React.js/Apollo client, works fine with the server. The client subscribes to the server via GraphQL, similar to:
subscription {
  studentAdded {
    id
  }
}
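On the server side the subscription is resolved in the standard Nest.js way; simplified, it looks roughly like this (the Student type and the addStudent mutation are just placeholders for my actual code):

import { Args, Field, ID, Mutation, ObjectType, Resolver, Subscription } from '@nestjs/graphql';
import { PubSub } from 'graphql-subscriptions';

@ObjectType()
class Student {
  @Field(() => ID) id: string;
  @Field() name: string;
}

const pubSub = new PubSub();

@Resolver(() => Student)
export class StudentResolver {
  // Clients subscribed to studentAdded receive a Student each time one is published below.
  @Subscription(() => Student)
  studentAdded() {
    return pubSub.asyncIterator('studentAdded');
  }

  @Mutation(() => Student)
  addStudent(@Args('name') name: string): Student {
    const student = { id: Date.now().toString(), name };
    pubSub.publish('studentAdded', { studentAdded: student });
    return student;
  }
}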
My problem is that it works only locally. When I deploy my server back end to a hosted Docker container on the internet, the client no longer receives data.
I have traced the client: it sends a GET request to ws://api.example.com:8010/graphql and receives the successful HTTP/1.1 101 Switching Protocols response. However, nothing is received from the server, unlike when the server was on my local machine. Checking the remote server log showed me that the client successfully connects to the server; there I can see the onConnect log messages.
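For completeness, the client's transport setup follows the usual Apollo split-link pattern; simplified, it is roughly this (the URL matches the deployment above, everything else here is a sketch):

import { ApolloClient, HttpLink, InMemoryCache, split } from '@apollo/client';
import { WebSocketLink } from '@apollo/client/link/ws'; // requires subscriptions-transport-ws
import { getMainDefinition } from '@apollo/client/utilities';

const httpLink = new HttpLink({ uri: 'http://api.example.com:8010/graphql' });
const wsLink = new WebSocketLink({
  uri: 'ws://api.example.com:8010/graphql',
  options: { reconnect: true },
});

// Route subscription operations over the WebSocket link, everything else over HTTP.
const link = split(
  ({ query }) => {
    const def = getMainDefinition(query);
    return def.kind === 'OperationDefinition' && def.operation === 'subscription';
  },
  wsLink,
  httpLink,
);

export const client = new ApolloClient({ link, cache: new InMemoryCache() });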
Now I need some guidance to solve the problem.
I checked several things myself. First, I thought the WebSocket address might be blocked in the network, but then realized that it uses the same port as normal HTTP. Second, I assumed that WebSocket messages/frames are transmitted over UDP, but I was wrong; they go over TCP, so there is no need to worry about those network settings.
Additionally, I have read several GitHub threads and Stack Overflow questions but did not find any clue. I am not using Node.js/WebSocket directly; instead, I am using a Nest.js/GraphQL subscription, which has made my search tougher.
Your help is highly appreciated.
I've been trying to deploy the repository https://github.com/evelynhathaway/triton-poll to Heroku, but since I am fairly new to Node.js, I am unable to pin down the problem. I guess it is related to the port, because Heroku doesn't use static ports.
Any help would be appreciated.
Thank You in advance.
I looked at the fork and you made a couple of mistakes. I don't have the time to fix, test, and get it running, but I can show you how I solved this before.
All the relevant code changes can be found in this commit (different project):
https://github.com/vegeta897/d-zone/commit/63730fd7f44d2716a31fcae55990d83c84d5ffea
The project is divided into a client and server part.
You can see here, https://github.com/vegeta897/d-zone/blob/63730fd7f44d2716a31fcae55990d83c84d5ffea/script/websock.js#L16, how I combined the server and client into one. This only works because the static client files are served via HTTP/HTTPS, while the server itself talks WebSocket (ws/wss) rather than plain HTTP.
When you publish a server on Heroku you need to bind to their dynamic port. However, when you want to access the web server you do not specify a port; the hostname is automatically translated into an IP address + port combination. I did this here: https://github.com/vegeta897/d-zone/blob/63730fd7f44d2716a31fcae55990d83c84d5ffea/web/main.js#L44. When deployed on Heroku, the socketURL does not contain a port number.
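Roughly, the client-side idea is the following (a simplified sketch, not the literal code from that commit):

// Build the WebSocket URL from the page's own host. On Heroku no port is
// written out: the platform routes the default 80/443 to whatever dynamic
// port the dyno bound to.
const protocol = window.location.protocol === 'https:' ? 'wss://' : 'ws://';
const socketURL = protocol + window.location.host; // e.g. wss://my-app.herokuapp.com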
Finally, you bind the server. I did it here https://github.com/vegeta897/d-zone/blob/63730fd7f44d2716a31fcae55990d83c84d5ffea/script/websock.js#L55 and here https://github.com/vegeta897/d-zone/blob/63730fd7f44d2716a31fcae55990d83c84d5ffea/socket-config.js#L30.
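Put together, a minimal version of that server setup looks roughly like this (a sketch assuming Express plus the ws package, not the literal code from those files):

import * as http from 'http';
import express from 'express';
import { WebSocketServer } from 'ws';

const app = express();
app.use(express.static('web')); // serve the built client files over HTTP

const server = http.createServer(app);
const wss = new WebSocketServer({ server }); // WebSocket shares the same HTTP server and port

wss.on('connection', (socket) => {
  socket.on('message', (msg) => socket.send(`echo: ${msg}`));
});

// Heroku injects its dynamic port via the PORT environment variable.
const port = Number(process.env.PORT) || 3000;
server.listen(port, () => console.log(`listening on ${port}`));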
You also have to make sure that your client files are built properly and served.
I am working with an extremely simple proxy service configured on the new 1.0.0 Micro Integrator by WSO2. I use Integration Studio and its built-in integrator to run and test the functionality. It seems, however, that for some reason I cannot call my proxy service.
I can clearly see my changes are reflected as it boots up and the following line appears:
ProxyService named 'myprox' has been deployed from file
Also, it mentions that the endpoints have been configured:
INFO {org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager} - Pass-through EI_INTERNAL_HTTP_INBOUND_ENDPOINT Listener started on 0.0.0.0:9201
The custom proxy service is now narrowed down to just a LOG and RESPOND mediator. Whatever URL I use, the same error keeps popping up:
WARN {org.wso2.carbon.inbound.endpoint.internal.http.api.InternalAPIDispatcher} - No Internal API found to dispatch the message
So far I have tried every combination I can imagine, and every one of them produces the above message. The latest I tried was:
http://localhost:9201/services/myprox
I tried with and without the "/services/" path segment. I tried with and without HTTPS using the provided 9164 port. I also tried variations on the 8290 and 8253 ports, to no avail.
When I run this CAR file on EI 6.5.0, I can get a result at the URL mentioned above.
What is going on here?
It seems you are trying to call the proxy on an inbound endpoint port; the WARN message you have shown indicates that. In Micro Integrator the default port for proxy services is 8290, so your proxy URL should look like the one below.
http://localhost:8290/services/myprox
(Please note that the above-mentioned port is the default one. It might change if you started the server with a port offset or configured it differently in your settings.)
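As a quick sanity check you can then hit the corrected URL with any HTTP client, for example (a hypothetical Node 18+ snippet; adjust the payload to whatever your proxy expects):

// Hypothetical smoke test against the default proxy service port (8290).
const res = await fetch('http://localhost:8290/services/myprox', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ hello: 'world' }),
});
console.log(res.status, await res.text());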
Please go through this blog for a proxy sample created and deployed into Micro Integrator from Integration Studio.
https://www.yenlo.com/blog/a-first-look-at-wso2-enterprise-integrator-6.5.0-m5-micro-integrator-and-developer-studio
I have a Sitecore Azure 2.0 deployment. Unfortunately, when I try to run it from the company network I get the error below:
A connection attempt failed because the connected party did not
properly respond after a period of time, or established connection
failed because connected host has failed to respond 213.199.180.206:80
When I try the following on the same machine, it works:
http://www.google.com
https://www.google.com
I am wondering what exactly is causing the above issue, given that both 443 and 80 work fine via IE.
Thanks.
This definitely sounds like a corporate firewall/gateway problem. I have written a blog post about my experiences with exactly these types of issues: http://reservoirdevs.wordpress.com/2013/10/18/sitecore-azure-walkthrough-and-gotchas/
My solution was to try from outside the corporate network. It then worked fine.
This sounds like the firewall on your Azure machine is not set to allow incoming HTTP traffic on port 80, although there could be a lot of other reasons for this timeout.