Azure ML Compute Instance: how to expose an app listening on a port to the outside world? - azure-machine-learning-service

I have a Gradio app listening on a specific port, running in an Azure ML Compute Instance. I'm not using Docker, just a basic Python app.
I want to expose that port to the outside world, so that I can visit the compute's url/ip and access the app. Security is not an issue right now. I don't really want to use Docker (I know about custom apps).
Here's what I've tried:
Punching a hole through the compute's firewall, with sudo ufw allow <port>
Accessing the compute, as follows:
http(s)://<compute-ip>:<port> - times out
https://<compute>.<location>.instances.azureml.ms:<port> - times out
https://<compute>-<port>.<location>.instances.azureml.ms/ - loads the app, but the routes don't work so the app errors out (I get {"detail":"Method Not Allowed"} on anything but the root url)
My question is: how do I make this work? Is there a way to configure the compute to allow my routing URLs? Or am I stuck using Docker + custom apps?
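For context, the app is launched with something along these lines (a minimal sketch, not my real code; the port is illustrative):

import gradio as gr

def echo(text):
    return text

demo = gr.Interface(fn=echo, inputs="text", outputs="text")
# server_name="0.0.0.0" binds to all interfaces rather than just localhost
demo.launch(server_name="0.0.0.0", server_port=8501)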

Related

Is there a way to "host" an existing web service on port X as a network path of another web service on port 80?

What I'm trying to do is create an access website for my own services that run on my Linux server at home.
The services are accessible through <my_domain>:<respective_port_num>.
For example, there's a Plex instance listening on port X, transmission-remote (a torrenting client) listening on port Y, and another custom processing service on port Z.
I've created a simple website using Python Flask, which I can access remotely, that redirects paths to ports (so <my_domain>/plex turns into <my_domain>:X). Is there a way to serve these services on the network paths I've assigned to them, so I don't need to open a port for each service? I want to be able to channel an existing service on :X to <my_domain>/plex without having to modify it; I'm sure it's possible.
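For reference, the redirect site I mentioned is essentially this (a minimal sketch; 32400 is just a stand-in for one of the real port numbers):

from flask import Flask, redirect

app = Flask(__name__)

# <my_domain>/plex just bounces the browser to <my_domain>:32400,
# so the port still has to be open to the outside
@app.route("/plex")
def plex():
    return redirect("http://my_domain:32400")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)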
I have a bit of a hard time understanding your question.
You can certainly use e.g. nginx as a reverse proxy in front of your web application: listen on any port and forward requests to the upstream application on any other port - e.g. your Flask application.
Let's say my domain is example.com.
I can then configure nginx to listen on port 80 (and 443 for SSL) and proxy all requests to port 8000, where Flask is running locally.
Yes, this is called using nginx as a reverse proxy. It is well documented on the internet and in the official docs. Your nginx.conf would have something like:
location /my/flask/app/ {
    # Assuming your flask app is at localhost:8000
    proxy_pass http://localhost:8000;
}
From the user's perspective, they will be connecting to your.nginx.server.com/my/flask/app/. But behind the scenes, nginx will forward the request to your app and serve its response back to the user.
You can deploy nginx as a Docker container; I recommend this, as it keeps nginx's files and configs separate from your own work and makes it easier to fiddle with as you learn. Keep in mind that nginx only speaks HTTP, though. You can't use it to proxy things like SSH or arbitrary protocols (not without a lot of hassle, anyway). If the services generate their own URLs, you might also need to configure them to anticipate the nginx redirects.
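For instance, a sketch of running the official nginx image with your own config mounted in (the host path is illustrative):

docker run -d --name my-nginx \
  -p 80:80 \
  -v /home/me/nginx.conf:/etc/nginx/nginx.conf:ro \
  nginx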
By the way, Flask is usually not served directly to the internet; instead, nginx talks to something like Gunicorn, which handles various network-related concerns: https://vsupalov.com/what-is-gunicorn/
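For example, a typical (illustrative) invocation that nginx would then proxy to:

# four worker processes, bound to localhost only so nginx is the sole public entry point
gunicorn --workers 4 --bind 127.0.0.1:8000 app:app

Here app:app means "the Flask object named app inside app.py".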

How to configure Port Forwarding with Google Cloud Compute Engine for a Node.JS application

I'm trying to configure port forwarding (port 80 to port 8080) for a Node.js application hosted on Google Cloud Compute Engine (Ubuntu and Nginx).
My ultimate goal is to have a URL like "api.domain.com" showing exactly the same thing as "api.domain.com:8080" (:8080 is actually working).
But because it's a virtual server on Google platform, I'm not sure what kind of configuration I can do.
I tried these solutions without success (probably because it's a Google Cloud environment):
Forwarding port 80 to 8080 using NGINX
Best practices when running Node.js with port 80 (Ubuntu / Linode)
So, two questions here:
1. Where do I need to configure the port forwarding?
Directly in my Ubuntu instance, with Nginx or Linux config files?
With a gcloud command?
In a secret place in the UI of console.cloud.google.com?
2. What settings or configuration do I need to save?
One possibility is to use Google Cloud Load balancer.
https://cloud.google.com/load-balancing/docs/
1) Create a backend service that listens on port 8080
2) Create a frontend service that listens on port 80
3) Then forward frontend traffic to this backend service
4) Bonus: you can create an SSL certificate auto-managed by GCP: https://cloud.google.com/load-balancing/docs/ssl-certificates#managed-certs
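If you want to script this rather than click through the console, the rough shape with gcloud looks something like the following. This is only a sketch: the names are placeholders, some flags vary by load balancer type, and you still need an instance group serving the app on port 8080 (check the docs above for the exact invocations):

gcloud compute health-checks create http my-hc --port 8080
gcloud compute backend-services create my-backend --protocol HTTP --health-checks my-hc --global
# attach the instance group that runs the app on 8080
gcloud compute backend-services add-backend my-backend --instance-group my-group --instance-group-zone us-central1-a --global
gcloud compute url-maps create my-map --default-service my-backend
gcloud compute target-http-proxies create my-proxy --url-map my-map
gcloud compute forwarding-rules create my-rule --target-http-proxy my-proxy --ports 80 --global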
For the benefit of future readers, here is how I figured out how to configure the port forwarding.
You will need to make sure that your firewall on Google Cloud Platform is configured correctly. Follow the process, well described here: Google Cloud - Configuring Firewall Rules. Make sure that port 80 (or 443 for HTTPS) and your Node.js port (e.g. 8080 in my case) are open.
You will need to configure the port forwarding directly on the server. As far as I know, unlike the firewall rules, this is not something you can configure in the Google Cloud Platform UI. In my case, I needed to edit the Nginx config file located at /etc/nginx/sites-available/default.
Use this example for reference when editing your Nginx config file: nginx config for http/https proxy to localhost:3000
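A minimal version of such a config, assuming the Node app listens on 8080 (adjust server_name and ports to your setup; the upgrade headers are only needed if your app uses websockets):

server {
    listen 80;
    server_name api.domain.com;

    location / {
        # forward all incoming traffic on port 80 to the Node.js app on 8080
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
    }
}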
Once edited, you need to restart your Nginx service with this command: sudo systemctl restart nginx
Verify the state of Nginx service with this command: sudo systemctl status nginx
Traffic on port 80 should now be forwarded correctly to your Node.js application.
Thanks to @John Hanley and @howie for the orientation about the Nginx configuration.
EDIT: This solution is still working but the accepted answer is easier.

Do I need a different server to run node.js

Sorry if this is the wrong question for this forum, but I am simply stuck and need some advice. I have a shared hosting service and a cloud-based hosting server with Node.js installed. I want to host my website as normal, but I also want to add real-time chat and location tracking using Node.js. I am confused by what I am reading in several places, because Node.js is itself a server, but not designed to host websites? So I have to run two different servers, one for the website and one to run Node.js? When I set up the cloud one with a Node.js script running, I can no longer access the webpages.
What's the best way for me to achieve this? I am just going round in circles. Also, is there a way I can set up a server on my PC and run and test both of these together beforehand, so I can see what is needed and get it working? It would stop me ordering servers I don't need.
Many thanks for any help or advice.
Node can serve webpages using a framework like Express, but it can conflict with another web server program (Apache, etc.) if they run on the same port. One solution is to serve your webpages through your web server on port 80 (or 443 for HTTPS) and run your Node server on a different port to send information back and forth.
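For example, a minimal sketch of such a Node service kept on its own port (the port and route are illustrative):

const express = require('express');
const app = express();

// real-time endpoints live here, away from the main site on port 80
app.get('/chat/messages', (req, res) => {
  res.json({ messages: [] });
});

app.listen(3001, () => console.log('Node API listening on 3001'));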
There are a number of ways you can achieve this but here is one popular approach.
You can use NGINX as your front facing web server and proxy the requests to your backend Node service.
In NGINX, for example, you will configure your upstream service as follows:
upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}
The 8000 you see above is just an example, you may be running your Node service on a different port.
Further in your config (in the server config section) you will proxy the requests to your service as follows:
location / {
    proxy_pass http://lucyservice;
}
Your Node service can be running under a process manager like forever or pm2. You can have multiple Node services running in a cluster, depending on how many processors your machine has.
So to recap: your front-facing web server will handle all traffic on port 80 (HTTP) and/or 443 (HTTPS) and proxy the requests to your Node service running on whatever port(s) you define. All of this can happen on one single server, or multiple if you need or desire.

Express app inside docker. Restricting access to host localhost only

I am trying to restrict access to my Express/Node.js app so that it can only be accessed via its domain URL. Currently, if I go to http://ip-address-of-server:3000, the app gets served directly, bypassing nginx.
I have tried adding 'localhost' in the app.listen --
app.listen(4000, 'localhost' , () => console.log('Server running'));
but this ends up making the app completely inaccessible, even through nginx.
The app is running inside a Docker container, and nginx is running on the host. I think this might be causing it, but I don't know how to fix it.
You can use Docker host network mode for your app, since you mentioned that "The app is running inside a docker container. And nginx is running on the host."
Ref - https://docs.docker.com/network/host/
This way, the app shares the host's network and is reachable by nginx. Your "localhost" settings will start working as usual once the container is launched in host network mode.
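For example (the image name is illustrative):

# --network host gives the container the host's network stack, so
# app.listen(4000, 'localhost') becomes reachable from nginx on the host
docker run -d --network host my-express-app

Note that in host mode any -p port mappings are ignored, since there is no separate container network to map into.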
Looks like you want to do IP-based filtering, i.e. you want the Node.js program to be accessible only from nginx (local or remote) and locally.
There are two ways:
Use the express-ipfilter middleware (https://www.npmjs.com/package/express-ipfilter) to filter requests based on IPs.
Let Node.js listen to everything, but change the iptables on the host to restrict port access to specific IPs. Expose the port of the Node.js container to the host using -p, and close that port to the outside world.
I prefer the second way, as it is more robust and restricts traffic at the network level.
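As a concrete variant of the second approach: Docker's published ports are handled by its own NAT rules, which plain INPUT iptables rules don't always catch, so an equivalent network-level restriction is to bind the published port to the loopback interface (image name is illustrative):

# only the host itself (and therefore nginx on the host) can reach port 4000;
# inside the container the app should listen on 0.0.0.0
docker run -d -p 127.0.0.1:4000:4000 my-express-app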

How to open a port for http traffic on ec2 from node app?

So I have an EC2 instance running a Node app on port 3000, a very typical setup. However, I now need to run additional apps on this server, which currently run on their own servers, also on port 3000. So I need to migrate them all to one server and presumably run them on different ports.
So if I want to run Node apps on 3000, 3010, 3020, etc., how do I do this the right way?
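For context, each app currently hardcodes its port, and I assume the cleanest route is to parameterize that, roughly like so (illustrative), so instances can be assigned 3000, 3010, 3020 and so on:

const express = require('express');
const app = express();

// read the port from the environment so the same code can run on any assigned port
const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Listening on ${port}`));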
You need to authorize inbound traffic to your EC2 instance via the AWS Console or the API. Here is a good description of how to do that:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html
Since authorizing is normally a one-off, it's probably better to do it through the AWS Console. However, if one of your requirements is to spin up Node apps on different ports in an automated fashion, then you'll probably want to look at this:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html#authorizeSecurityGroupIngress-property
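For example, a sketch of that call with the Node SDK (v2); the group ID, region, and CIDR are placeholders:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.authorizeSecurityGroupIngress({
  GroupId: 'sg-0123456789abcdef0',        // your instance's security group
  IpPermissions: [{
    IpProtocol: 'tcp',
    FromPort: 3000,                       // opens 3000-3020 in one rule
    ToPort: 3020,
    IpRanges: [{ CidrIp: '0.0.0.0/0' }],  // world-open; tighten as needed
  }],
}).promise()
  .then(() => console.log('Ingress rule added'))
  .catch(console.error);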
