I am working on a Django server that should verify a payment via a webhook POST. When I run the server in development mode and tunnel it using ngrok, I don't receive the incoming webhook. I have verified using webhook.site that the payment service did send the webhook, so the problem seems to be my ngrok tunnel not letting the traffic through. I started ngrok with:
```
./ngrok http -region=eu 8000
```

which showed:

```
ngrok by @inconshreveable                                  (Ctrl+C to quit)

Session Status    online
Account           JianDk (Plan: Free)
Version           2.3.40
Region            Europe (eu)
Web Interface     http://127.0.0.1:4040
Forwarding        http://20e8-94-147-65-45.eu.ngrok.io -> http://localhost:8000
Forwarding        https://20e8-94-147-65-45.eu.ngrok.io -> http://localhost:8000

Connections       ttl    opn    rt1    rt5    p50      p90
                  12     0      0.00   0.01   1.26     233.97
```
When I googled similar problems with webhook traffic over ngrok, this post suggested that ngrok does not let the traffic through over HTTPS.
Instead of ngrok, I used localtunnel, which solved the problem.
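For reference, a localtunnel invocation roughly equivalent to the ngrok command above might look like this (assuming Node.js/npm is installed; `npx` fetches the `localtunnel` package on first use):

```
# expose local port 8000 over an HTTPS tunnel
npx localtunnel --port 8000
```

localtunnel then prints a public https://*.loca.lt URL that forwards to localhost:8000, which can be registered as the webhook endpoint.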
Related
There is a front-end npm server hosted on public port 80 in my production environment. I can open this front end in a remote web browser using its public hostname, e.g., http://hostname:80, and the webpage loads successfully.
The JavaScript in this app makes HTTP GET/POST requests to a back-end server to fetch data at the URL http://hostname:5000. This back-end server runs in the same production environment but on a private port (5000), i.e., this port is not visible outside the firewall.
As I understand it, this HTTP request is essentially made from the remote web browser client sitting outside the firewall. Due to the firewall (UFW) policy, any request made from this client to private port 5000 gets blocked.
I do not want to allow the private port 5000 in the UFW, and I do not want to run the back-end server on a public port of the production server.
What are the solutions for this?
I have heard about the Nginx proxy server which redirects client connections on a public port (80) to a Node application running on a different port (3000).
Reference: https://blog.logrocket.com/how-to-run-a-node-js-server-with-nginx/
However, I am not certain whether the Nginx server would be able to get the client requests to the back end without changing the UFW rules.
The proxy forwards the majority of the request information to the backend, so a request sent through the Nginx proxy behaves much like a request made directly to the backend. Some header fields, however, are not passed on. From nginx.org:
By default, nginx does not pass the header fields “Date”, “Server”, “X-Pad”, and “X-Accel-...” from the response of a proxied server to a client. The proxy_hide_header directive sets additional fields that will not be passed.
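A minimal sketch of such a setup for the question above (the /api/ prefix, the front end's local port 3000, and the paths are assumptions for illustration; the back end is assumed to listen on localhost:5000):

```nginx
server {
    listen 80;
    server_name hostname;

    # Serve the front-end app as before (assumed to run locally on 3000)
    location / {
        proxy_pass http://127.0.0.1:3000;
    }

    # Forward API calls over loopback to the private back end on 5000
    location /api/ {
        proxy_pass http://127.0.0.1:5000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this in place the browser only ever talks to public port 80: the JavaScript would request http://hostname/api/... instead of http://hostname:5000/..., Nginx relays those requests to port 5000 over the loopback interface, and UFW never has to expose port 5000.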
I have setup a SonarQube server on an Azure Windows Server 2016 machine, which sits behind an Azure Application Gateway, with SSL termination. Essentially requests are sent to a public ip address, using HTTPS, the Application Gateway manages SSL with an Azure Self-Signed certificate and sends the request in HTTP to the backend pool, where the VM with SonarQube sits.
I made sure that for the ApplicationGateway the frontend listener uses HTTPS (on port 9000) and the backend HTTP settings is set to HTTP (still on port 9000).
I successfully manage to connect to the VM via browser, i.e. browsing https://“publicIP”:9000. I can also receive the response to this request https://“publicIP”:9000/api/server/version (the response is 9.0.0.45539). In both cases, I have to confirm to proceed (after receiving “Your connection is not private. NET::ERR_CERT_AUTHORITY_INVALID”), but that should be expected with self-signed certificates.
The problem arises when I try to run an Azure DevOps YAML pipeline (which used to work fine in the first tests with an HTTP-only connection). The error I receive in the SonarQubePrepare@4 task is:
“[error][SQ] API GET ‘/api/server/version’ failed, error was: {“code”:“UNABLE_TO_VERIFY_LEAF_SIGNATURE”}”

```yaml
- task: SonarQubePrepare@4
  inputs:
    SonarQube: 'SonarQubeServiceConnection'
    scannerMode: 'MSBuild'
    projectKey: 'DevTest'
```
SonarQubeServiceConnection is the Azure DevOps service connection which includes the public IP address (with port) and the personal access token (for SonarQube).
From browsing for answers, it seems that the error UNABLE_TO_VERIFY_LEAF_SIGNATURE is related to SSL certificate problems, but I would have thought that the Application Gateway's SSL termination would have prevented any SSL checks on the SonarQube side.
Thanks for any help given.
You can work around this issue by setting the environment variable NODE_TLS_REJECT_UNAUTHORIZED=0 for the task. Be aware that this disables TLS certificate verification for the Node.js process entirely, so it is only appropriate for testing; the proper fix is to use a certificate that the build agent trusts.
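In the pipeline above that could be sketched like this (the task inputs are taken from the question; the `env` block is the addition):

```yaml
- task: SonarQubePrepare@4
  inputs:
    SonarQube: 'SonarQubeServiceConnection'
    scannerMode: 'MSBuild'
    projectKey: 'DevTest'
  env:
    # WARNING: disables TLS certificate verification for this task's
    # Node.js process; use only for testing with self-signed certs.
    NODE_TLS_REJECT_UNAUTHORIZED: '0'
```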
Is it possible to build a custom SMTP server on Google App Engine to listen for incoming email, using the Python smtpd module?
App Engine's hosted and custom runtimes are meant for HTTP traffic (ports 80 and 443). You will not be able to receive traffic on the ports necessary to operate an SMTP server.
In fact, ports 25, 465 and 587 are blocked for outbound connections across all of Google Cloud. Instead, you can use an external service such as SendGrid, Mailgun, or Mailjet: https://cloud.google.com/compute/docs/tutorials/sending-mail#choosing_an_email_service_to_use
(This article is about sending email but these services allow you to receive email as well.)
Is it possible to deploy a node.js app on Cloud Foundry that listens for HTTPS requests on port 443?
Well, the good news is that you don't have to do that. The Cloud Foundry platform takes care of it for you.
All you need to do is push your app and assign a route to the app. Your platform operations team will already have everything set up so that traffic for both HTTP and HTTPS routes through to your application.
The only thing you probably want to do in your application is to look at the x-forwarded-proto header (which should be http or https) or the x-forwarded-port header (80 or 443). You can use these to determine whether the client's connection was over HTTP or HTTPS, and if it was HTTP, issue a redirect asking the client to reconnect over HTTPS (this forces clients to use HTTPS).
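As a sketch of that check in plain Node.js (the header names are the standard ones described above; the function name is our own):

```javascript
// Return the HTTPS URL to redirect to, or null if the request already
// arrived over HTTPS according to the router's x-forwarded-proto header.
function httpsRedirectTarget(headers, url) {
  if (headers['x-forwarded-proto'] !== 'https') {
    return 'https://' + headers.host + url; // redirect target
  }
  return null; // already HTTPS, no redirect needed
}

// In an http.createServer handler this would be used roughly as:
//   const target = httpsRedirectTarget(req.headers, req.url);
//   if (target) { res.writeHead(301, { Location: target }); res.end(); return; }
```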
You can read more about this in the docs at the following link:
https://docs.cloudfoundry.org/adminguide/securing-traffic.html
Having said all that, if you really want to control the certs for some reason you can do that. You would need to map a TCP route to your application. This will enable TCP traffic to flow directly to your application. Then you can configure your application as an HTTPS endpoint on the mapped TCP route and port.
Some notes about this:
You will almost certainly end up with some high numbered port, not 443. The platform will have a pool of available ports, which is configured by your operations team, and you are limited to using only those ports.
The platform and buildpacks will not help set up TLS, you will need to handle that all on your own. The good news is that it should work exactly the same as if your app were running on a VM or your local laptop.
You will need to create your own TLS certs and push them with the application. You can probably use Let's Encrypt, but you may need to obtain these through your employer, if you work for a large company.
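For the TCP-route option, the mapping itself can be done with the cf CLI; here the app name, TCP domain, and port are placeholders, and the domain and allowed port range depend on what your operations team has configured:

```
# map a TCP route on the shared TCP domain to the app, on a specific port
cf map-route my-app tcp.example.com --port 61234
```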
I'm developing a security system. It has a proxy server acting as an SSL-termination point, using Nginx, which forwards all TCP/IP connections from clients to other third-party systems.
The client-proxy connections must be authenticated and securely encrypted with SSL.
And my problems is:
Each client is a computer running Windows 7 or higher. Some third-party applications are installed on it and cannot be modified. For a better user experience, all of the clients' outbound TCP/IP requests from any application must be transparently "converted" into (or "wrapped" in) SSL requests before reaching the proxy server. My first idea is to develop a network driver to intercept these requests using the Windows API, namely WFP (Windows Filtering Platform). I have read its online documentation, but it's not easy to understand. Could you point me to some projects like this, or tell me which sections of the documentation I should focus on? Thank you in advance!
Your issue is a bit unclear but here are my thoughts:
You want to have full encryption between the End User Client to the App Service.
Current:

```
Client --(443: TLS)--> NGINX --(Clear; Port 80)--> App Service
                  (terminates TLS)
```

Change:

```
Client --(443: TLS)--> NGINX --(TLS; Port 443)--> App Proxy --(Plain; Port 80)--> App Service
                  (terminates TLS)                (Nginx with self-signed cert)
```
The change is to add an additional Nginx server on the app server to provide the last layer of TLS between the load balancer and the App Service.
If your App Service can serve SSL connections directly, that's even better: you can configure that in place of running an additional Nginx server on the app host. (If you wanted, you could run Apache or any other web server that supports proxy/load-balancing capabilities.)
If you are worried about the App Service port, it won't make a difference, the idea is that the App Proxy (being Nginx or the likes) will handle the encryption on a different port to then pass via localhost to the App Service (in plain text).
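A minimal sketch of the app-host Nginx ("App Proxy") from the diagram above, assuming a self-signed certificate at the paths shown and the App Service listening in plain HTTP on localhost:80:

```nginx
server {
    listen 443 ssl;
    server_name app-host;

    # Self-signed certificate covering the load-balancer-to-app hop
    ssl_certificate     /etc/nginx/certs/app-host.crt;
    ssl_certificate_key /etc/nginx/certs/app-host.key;

    location / {
        # Pass the decrypted traffic to the App Service over loopback
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The front Nginx would then `proxy_pass https://app-host` for this hop; note that by default Nginx does not verify the upstream server's certificate (proxy_ssl_verify is off), which is what makes a self-signed certificate workable here.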
Additional Resources:
Can Nginx do TCP load balance with SSL termination?
https://serverfault.com/questions/978922/nginx-proxy-pass-to-https
https://reinout.vanrees.org/weblog/2017/05/02/https-behind-proxy.html
https://nginx.org/en/docs/http/ngx_http_ssl_module.html