I want to build a router to control my internet access (WLAN via a server).
Only a few websites (via a whitelist/blacklist) should be available at specific times.
Are there any good packages for routing/proxying web (HTTP/S, FTP) and email (POP/IMAP/SMTP) traffic?
What you actually need is a good firewall. Any decent firewall should be able to filter traffic by day-of-week and time-of-day. Even many of the better SOHO routers can do this. If your router can't, use a spare PC or server as a gateway, run Linux or BSD on it, and configure a firewall accordingly. Most Linux distributions include iptables, a simple but effective firewall that will do what you want.
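For example, a minimal iptables sketch of time-of-day filtering (the interface name, days and times are assumptions you would adapt; note the time match uses UTC unless you add --kerneltz):

    # Allow forwarded web traffic from the LAN only on weekday evenings...
    iptables -A FORWARD -i eth1 -p tcp -m multiport --dports 80,443 \
      -m time --weekdays Mon,Tue,Wed,Thu,Fri --timestart 17:00 --timestop 21:00 -j ACCEPT
    # ...and drop it the rest of the time
    iptables -A FORWARD -i eth1 -p tcp -m multiport --dports 80,443 -j DROP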
To make things easy, set the PC up to be the DHCP server for the network and configure it so that, when other PCs get an IP address, the gateway IP is set to that same box (you may be able to get your normal router to do this instead; otherwise, turn off DHCP on the router).
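With ISC dhcpd, for instance, that could look something like this (the addresses are assumptions for a 192.168.1.0/24 LAN where the gateway PC is 192.168.1.2):

    # /etc/dhcp/dhcpd.conf (sketch)
    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.200;
      option routers 192.168.1.2;             # hand out the gateway PC as the default route
      option domain-name-servers 192.168.1.2; # optional, if the gateway also resolves names
    }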
Ideally, if using a gateway PC, set your router to ONLY accept traffic from that gateway; better still, turn off NAT on the router and let the gateway handle that too.
Here is a fairly comprehensive "how-to".
If all of that seems too much, you should consider upgrading your router to one that does all this for you. I personally use the Billion 7800N which would probably be suitable.
If you need an HTTP proxy check out node-http-proxy. I don't know much about FTP and mail proxies though.
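A minimal node-http-proxy sketch looks something like this (the ports are placeholders; the request handler is also where you could apply your white/blacklist before proxying):

    // npm install http-proxy
    var http = require('http');
    var httpProxy = require('http-proxy');

    var proxy = httpProxy.createProxyServer({});

    http.createServer(function (req, res) {
      // inspect req.headers.host / req.url here to allow or block requests
      proxy.web(req, res, { target: 'http://localhost:3000' });
    }).listen(8080);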
We've developed a piece of server software and, for end-user ease of use, we are using the localtunnel-server app on one of our Linux servers to get around the need for port forwarding and messing around with firewalls.
The problem is that it seems to tunnel all traffic on port 80. We are afraid of this being abused, so we would like to restrict traffic somehow, and I wanted to know whether that is even possible.
For example, let's say our app uses the "/myapp" virtual directory on the localhost website. So if a request is supposed to go to http://localhost/myapp/index.html then the traffic gets tunneled to http://mytunnel.myserver.com/myapp/index.html
The problem is, if there are other sites running on localhost, http://localhost/someotherapp also gets through. We'd like to block URLs that don't match a given format or don't contain a keyword such as "/myapp".
Is that even possible? If so, any guidance on how to achieve this would be greatly appreciated.
I was able to deploy BOSH and Small Footprint Tanzu Application Service (TAS) in Azure without using the domains. All VMs are running. Can I access the CC API and Apps Manager with an IP address instead of api.SYSTEMDOMAIN?
The short answer is no. You really, really want to have DNS set up properly.
Here's the long answer that is more nuanced.
All requests to your foundation go through the Gorouter. Gorouter will take the incoming request, look at the Host header and use that to determine where to send the request. This happens the same for system services like CAPI and UAA as it does for apps you deploy to the foundation.
DNS is a requirement because of the Host header. A browser trying to access CAPI or an application on your foundation is going to set the Host header based on the DNS entry you type into your browser's address bar. The cf CLI is going to do the same thing.
There are some ways to work around this:
If you are strictly using a client like curl, you can set the Host header to arbitrary values. That way, you could set the Host header to api.system_domain while connecting to the IP address of your foundation. That's not a very elegant way to use CF, though.
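Roughly like this (the IP below is a stand-in for your load balancer / Gorouter address):

    # present the expected Host header while connecting by IP
    curl -k -H "Host: api.system_domain" https://10.0.0.10/v2/info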
You can manually set entries in your /etc/hosts (or similar on Windows). This is basically a way to override DNS resolution and supply your own custom IP.
You would need to do this for uaa.system_domain, login.system_domain, api.system_domain and any host names you want to use for apps deployed to your foundation, like my-super-cool-app.apps_domain. These should all point to the IP of the load balancer that's in front of your pool of Gorouters.
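A sketch of what that could look like (203.0.113.10 is a placeholder for your load balancer's IP, and the domain names are whatever your deployment actually uses):

    # /etc/hosts
    203.0.113.10  api.system_domain uaa.system_domain login.system_domain
    203.0.113.10  my-super-cool-app.apps_domain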
If you add enough entries into /etc/hosts you can make the cf CLI work. I have done this on occasion to bypass the load balancer layer for troubleshooting purposes.
Where this won't work is on systems where you can't edit /etc/hosts, such as for customers or external users of software running on your foundation, or if you're trying to deploy apps on your foundation that talk to each other using routes on CF (because you can't edit /etc/hosts in the container). For example, if you have app-a.apps_domain and app-b.apps_domain and app-a needs to talk to app-b, that won't work because you have no DNS resolution for apps_domain.
You can probably make app-to-app communication work if you are able to use container-to-container networking and the apps.internal domain, though. Resolution for that domain is provided by BOSH DNS. You do have to be aware of this difference when deploying your apps: map routes on the apps.internal domain and set a network policy so that traffic can flow between the two.
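For reference, a hedged sketch of those two steps with the cf CLI (v7-style syntax; the app names and port are assumptions, and flags differ slightly between CLI versions):

    # give app-b an internal route
    cf map-route app-b apps.internal --hostname app-b
    # allow app-a to reach app-b directly on port 8080
    cf add-network-policy app-a app-b --protocol tcp --port 8080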
Anyway, there might be other hiccups. This is just off the top of my head. You can see it's a lot better if you can set up DNS.
The easiest way to get a portable solution is a service like xip.io, which works out of the box. I have set up and run a lot of PoCs that way, back when wildcard DNS was something enterprise IT was still oblivious to.
It works like this (excerpt from their site):
What is xip.io?
xip.io is a magic domain name that provides wildcard DNS for any IP address. Say your LAN IP address is 10.0.0.1. Using xip.io,

    10.0.0.1.xip.io          resolves to  10.0.0.1
    www.10.0.0.1.xip.io      resolves to  10.0.0.1
    mysite.10.0.0.1.xip.io   resolves to  10.0.0.1
    foo.bar.10.0.0.1.xip.io  resolves to  10.0.0.1

...and so on. You can use these domains to access virtual hosts on your development web server from devices on your local network, like iPads, iPhones, and other computers. No configuration required!
So, I have 0% experience with web programming, and the project I am working on doesn't have anything to do with it, but I hit a small roadblock and need to solve a small port problem.
We want to build a cluster of GPU machines on Azure for some deep learning calculations, install some applications on them, and let our scientists use the apps' GUIs to launch and monitor their jobs. The problem is that an app A, for example, runs on port 5050, but our firewall doesn't let us communicate on unusual ports. That would be easy to fix from the Azure side, but our IT team won't let us modify our security policies.
That's why I need to find a quick, even hacky, solution to overcome this. I don't want to spend my whole internship on something complicated; just something that does the job.
What I thought about was having some kind of server running on the machines (let's say machine A has public IP address ipbA and private IP ipvA), so that when we type "http://ipbA/app1" in our browser, the server on A fetches the page "http://ipvA:5050" (or "----://ipbA/app2" -> "----://ipvA:5051") and displays it. But does this work if the page needs to be interactive, since we would like to launch jobs?
I have no clue how to do this, so if you could tell me what I should look into, google and read about, or whether there is an easier way to handle it (maybe some VPN stuff, which I'd rather avoid since we're moving towards a hybrid cloud architecture and I don't think we would want to VPN into all the different cloud platforms), that would be awesome :)
Two common solutions for your problem:
Set up a reverse proxy on a standard port (such as 80, or 443 if you want some SSL certificate headaches).
All your domain names will point to the reverse proxy (single IP) but the reverse proxy will forward the traffic transparently to the real servers on their special ports.
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension
For the technical details, in short: you keep configuration file(s) that say, for each domain or subdomain, where requests should be forwarded.
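As an illustration, a minimal Apache sketch (assuming mod_proxy and mod_proxy_http are enabled; the hostnames and port match the example chain below and are placeholders for your own):

    <VirtualHost *:80>
        ServerName interface-1.company.com
        ProxyPreserveHost On
        # forward everything for this hostname to the app's real port
        ProxyPass        "/" "http://realserver.company.com:5050/"
        ProxyPassReverse "/" "http://realserver.company.com:5050/"
    </VirtualHost>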
Chain of events:
User types http://interface-1.company.com
Browser resolves interface-1.company.com (DNS points to the reverse proxy's IP)
Browser connects to the reverse proxy (port 80)
Reverse proxy reads its configuration, which says where to forward
Reverse proxy forwards the request to realserver.company.com:5050
Real server sends its response back to the reverse proxy
Reverse proxy relays the response to the browser
I think that is what you are trying to achieve.
Set up a VPN service that is reachable through your company's proxy and provide VPN clients to the end users. OpenVPN clients can use an HTTPS proxy connection (your company proxy) to establish the connection to a remote VPN.
Once connected to the VPN, everyone uses the VPN's IP address and firewall policy, and is therefore no longer restricted by the company's firewall policy. Any kind of traffic can also be forwarded. This is harder to set up and your security team might not accept it. However, it's a fully functional solution and it can also offer additional security features if implemented properly.
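If you did go that way, the client side is mostly a normal OpenVPN client config plus a directive telling it to tunnel through the corporate proxy; the hostnames and ports below are placeholders:

    # client.ovpn (sketch)
    client
    dev tun
    proto tcp                          # going through an HTTP proxy requires TCP
    remote vpn.example.com 443
    http-proxy proxy.company.com 3128  # your company's outbound proxy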
I do not recommend going this way, because of all the paperwork it would involve.
I am currently making a website and I'd like people to try it out. They can do so right now if I send them my IP and port and they put them in the URL; my computer acts as the server.
Is there a way to use my computer as the server but without actually sharing directly my IP? Some kind of rerouting. I am not looking for something very secure, I am only looking for a solution that doesn't involve putting my IP in the URL.
You can register a domain name (or use a free equivalent like FreeDNS), but your IP will still be visible to anyone who pings your server. You could rent a VPS and use that to proxy requests to your server, or you could use an anonymizing service like Tor to keep your IP hidden, but there's really no reason to go through all that trouble. If you're worried about people having your IP address, don't be: there's not really much anyone can do with it. If you're looking for an easier way to share it and for people to remember it, I suggest FreeDNS or No-IP.
You might want to look into using ngrok - https://ngrok.com/.
It allows you to route general internet traffic to any port on your local machine, via somesubdomain.ngrok.com. It also works if you're behind a firewall: you just open a connection to ngrok from your computer, and ngrok forwards incoming traffic to your computer through that connection.
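For example (assuming your site is listening locally on port 8080; ngrok prints the public URL it assigns you):

    # expose local port 8080 via a public ngrok URL
    ngrok http 8080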
Basically, I have a Debian box running Asterisk, assigned an IP via DHCP, with hostname XXX. My Windows browser can resolve the hostname, but if I use the hostname in X-Lite or on my SPA922 phone it fails to resolve. Is there any way of getting this to work without depending on the router or assigning a static IP (the request is to make it portable)? I was thinking zeroconf but am unsure (the box has limited disk space too). Any help is most appreciated.
There are a couple of ways you could do this.
Personally, I replaced my stock router firmware with DD-WRT; it then serves as both the DHCP server and a local name server (so all machines on the local LAN can reach each other by name).
Not quite "not depending on the router" but pretty low configuration, and nothing to configure at all on the client devices really.