Security of HTTP within Corporate LAN

I'm writing a network application for enterprises wherein I intend to run an HTTP server on almost all hosts within a LAN. Some people have told me that this is a huge security hole because HTTP is often allowed through corporate firewalls and therefore by running an HTTP server on every node I'll potentially expose all of them to the outside. Is this a genuine risk?
As per my understanding, the firewall exception for HTTP is generally based on port number 80, not on inspection of packet contents. Therefore, if I run my HTTP server on a port other than 80, the firewall will not allow access to it, and hence I should be fine. Is this understanding correct?
I know there are application firewalls, but I'm not quite able to fathom how they operate within a typical corporate network or how common they are in the enterprise world. Any information on this will be helpful.
Update
The application referred to above is a P2P content sharing application whose operation is entirely confined within a subnet. I wish to use HTTP for the transfer of content from one node to another. The reason for choosing HTTP is that it is simple to implement and provides many application-level features, such as PUT and DELETE, out of the box. My concern is whether choosing HTTP automatically puts all nodes at a security risk, because corporate firewalls (by which I mean NATs) typically allow inbound HTTP traffic from outside the network.
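(For illustration, this is the PUT/DELETE style of transfer that comes for free with HTTP; the host, port, and path below are hypothetical, not part of any defined protocol:)
curl -X PUT --data-binary @report.pdf http://peer-node:8080/share/report.pdf   # push a file to a peer
curl -X DELETE http://peer-node:8080/share/report.pdf                          # remove it again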

If your only concern is attackers being able to directly connect to your HTTP server from outside of the corporate network, then this is unlikely due to NAT, as roelofs commented.
But is this truly your only concern? You have not specified what your application does, but is it acceptable to you (and to your users) that people inside the corporate network will be able to sniff your application's traffic? Is it acceptable that attackers who penetrate the network (and this happens all the time) can intercept, replay, and tamper with the traffic?
Why would you not use HTTPS?
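Even for an internal app, turning on TLS can start with nothing more than a self-signed certificate; a minimal sketch with placeholder names (a real deployment would use an internal CA):
openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key -out server.crt -days 365 -subj "/CN=app.internal"
Point your HTTP server at server.key and server.crt and peers can no longer passively sniff the traffic; an internal CA or certificate pinning is then what protects against active tampering.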

Do not assume that you're secure just because you're behind a firewall.
If web browsers inside the firewall can access both internal and external sites, a malicious external site can convince those browsers to send requests to the internal sites on its behalf, effectively probing services behind the firewall (the classic cross-site request forgery pattern).
This is an extremely common problem; the simplest example is the attacks against home DSL router administrative interfaces.

Related

Easy way to do port translation to bypass our own firewall

So I have 0% experience with web programming, and the project I am working on doesn't have anything to do with it, but I hit a small roadblock and need to solve a small port problem.
We want to build a cluster of GPU machines on Azure for some deep learning calculations, and want to install some applications on them and let our scientists use the apps' GUIs to launch and monitor their jobs. The problem is that app A, for example, runs on port 5050, but our firewall doesn't let us communicate on unusual ports. This is easy to fix from the Azure side, but our IT team won't let us modify our security policies.
That's why I need a quick, even hacky, solution to get around this. I don't want to spend my whole internship on something complicated; just something that does the job.
What I thought about was to have some kind of server running on the machines (let's say machine A has public IP address ipbA and private IP ipvA) so that when we type "http://ipbA/app1" in our browser, the server on A fetches the page "http://ipvA:5050" (or "----://ipbA/app2" -> "----://ipvA:5051") and displays it. But does this work if the page needs to be interactive, since we would like to launch jobs from it?
I have no clue how to do this. If you could just tell me what I should look into, google, and read about, or whether there is an easier way to handle it, that would be awesome :) (Maybe some VPN stuff, which I'd rather avoid since we're moving towards a hybrid cloud architecture and I don't think we would want to VPN into all the different cloud platforms.)
Two common solutions for your problem:
Set up a reverse proxy on a standard port (such as 80, or 443 if you want some SSL certificate headaches).
All your domain names will point to the reverse proxy (a single IP), but the reverse proxy will transparently forward the traffic to the real servers on their special ports.
https://httpd.apache.org/docs/2.4/howto/reverse_proxy.html
https://www.digitalocean.com/community/tutorials/how-to-use-apache-http-server-as-reverse-proxy-using-mod_proxy-extension
For the technical details, in short: you keep the configuration for each domain or subdomain, and where it should be forwarded, in one or more files.
Chain of events:
1) User types http://interface-1.company.com
2) Browser resolves interface-1.company.com via DNS to the reverse proxy's IP
3) Browser connects to the reverse proxy (port 80)
4) Reverse proxy reads its configuration, which says where to forward the request
5) Reverse proxy forwards the request to realserver.company.com:5050
6) Real server sends its response back to the reverse proxy
7) Reverse proxy relays the response to the browser
I think that is what you are trying to achieve.
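A minimal sketch of such a configuration for Apache on Debian/Ubuntu, reusing the host names and ports from the example above (assumes mod_proxy and mod_proxy_http are already enabled):
sudo tee /etc/apache2/sites-available/interface-1.conf <<'EOF'
<VirtualHost *:80>
    ServerName interface-1.company.com
    ProxyPreserveHost On
    ProxyPass        / http://realserver.company.com:5050/
    ProxyPassReverse / http://realserver.company.com:5050/
</VirtualHost>
EOF
sudo a2ensite interface-1 && sudo systemctl reload apache2
Nginx or HAProxy can play the same role with equivalent directives.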
Set up a VPN service that is reachable through your company's proxy, and provide VPN clients to the end users. OpenVPN clients can use an HTTPS proxy connection (your company proxy) to establish the connection to a remote VPN.
Once connected to the VPN, everyone uses the VPN's IP address and firewall policy, and is therefore no longer restricted by the company's firewalling policy. Any kind of traffic can be forwarded this way. This is harder to set up, and your security team might not accept it; however, it's a fully functional solution and can offer additional security features if implemented properly.
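On the client side, OpenVPN's proxy support is a single directive; a sketch that appends it to a client profile (proxy host and port are placeholders):
cat >> client.ovpn <<'EOF'
# Route the VPN connection through the corporate HTTPS proxy (placeholders).
http-proxy proxy.company.com 3128
EOF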
I do not recommend going this way because of all the paperwork it would involve.

Securing a simple Linux server that holds a MySQL database?

A beginner question, but I've looked through many questions on this site and haven't found a simple, straightforward answer:
I'm setting up a Linux server running Ubuntu to store a MySQL database.
It's important that this server is as secure as possible. As far as I'm aware, my main concerns should be incoming DoS/DDoS attacks and unauthorized access to the server itself.
The database server receives incoming data from only one specific IP (101.432.XX.XX), on port 3000. I want this server to accept incoming requests only from this IP, and to prevent the server from making any outgoing requests.
I'd like to know:
What is the best way to prevent my database server from making outgoing requests and to accept incoming requests solely from 101.432.XX.XX? Would closing all ports except 3000 help achieve this?
Are there any other additions to the Linux environment that can boost security?
I've taken very basic steps to secure my phpmyadmin portal (linked to the MySQL database), such as restricting access to solely my personal IP address.
To access the database server requires the SSH key (which itself is password protected).
A famous man once said "security is a process, not a product."
So you have a db server that should ONLY listen to one other server for db connections, and you have the specific IP for that one other server. There are several layers of restriction you can put in place to accomplish this.
1) Firewall
If your MySQL server is fortunate enough to be behind a firewall, you should be able to block all connections by default and allow only certain connections on certain ports. I'm not sure how you've set up your db server: whether the other server that wants to access it is on the same LAN, or whether both machines are virtual machines. It all depends on where your server is running and what kind of firewall you have, if any.
I often set up servers on Amazon Web Services. They offer security groups that allow you to block all ports by default and then allow access on specific ports from specific IP blocks using CIDR notation. I.e., you grant access in port/IP combination pairs. To let your one server get through, you might allow access on port 3000 to IP address 101.432.xx.xx.
The details will vary depending on your architecture and service provider.
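As a sketch, with the AWS CLI that port/IP pairing is one command (the security-group ID and client address below are placeholders):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 3000 --cidr 203.0.113.10/32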
2) IPTables
Linux machines can run a local firewall (i.e., a process that runs on the server itself) called iptables. This is powerful stuff, but it's easy to lock yourself out of your own server with it. There's a brief post on SO, but you have to be careful. Keep in mind that you need to permit access on port 22 for all of your servers so that you can log in to them; if you can't connect on port 22, you'll never be able to log in over SSH again. I always take a snapshot of a machine before tinkering with iptables, lest I permanently lock myself out.
There is a bit of info here about iptables and MySQL also.
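A sketch of such rules (the client address 203.0.113.10 is a placeholder; note the SSH rule is added before the default policy is flipped, for exactly the lock-out reason above):
iptables -A INPUT -i lo -j ACCEPT                                  # local traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies to our own connections
iptables -A INPUT -p tcp --dport 22 -j ACCEPT                      # keep SSH reachable
iptables -A INPUT -p tcp -s 203.0.113.10 --dport 3000 -j ACCEPT    # the one allowed db client
iptables -P INPUT DROP                                             # drop everything else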
3) MySQL cnf file
MySQL has some configuration options that can limit any db connections to localhost only - i.e., you can prevent any remote machines from connecting. I don't know offhand if any of these options can limit the remote machines by IP address, but it's worth a look.
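The option in question is bind-address; a sketch for Ubuntu's default file layout (the path varies by distribution):
sudo sed -i 's/^bind-address.*/bind-address = 127.0.0.1/' /etc/mysql/mysql.conf.d/mysqld.cnf   # listen on localhost only
sudo systemctl restart mysql
Note that bind-address limits which interface mysqld listens on; restricting individual remote clients by address is done with the host part of account grants, as in the next section.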
4) MySQL access control via GRANT, etc.
MySQL allows you very fine-grained control over who can access what in your system. Ideally, you would grant access to information or functions only on a need-to-know basis. In practice, this can be a hassle, but if security is what you want, you'll go the extra mile.
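A sketch of that kind of grant, restricted to one client host (user, password, database, and address are all placeholders):
mysql -u root -p <<'SQL'
-- Account usable only from the app server's address, limited to one database.
CREATE USER 'appuser'@'203.0.113.10' IDENTIFIED BY 'use-a-strong-password';
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'203.0.113.10';
FLUSH PRIVILEGES;
SQL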
To answer your questions:
1) YES, you should definitely try to limit access to your DB server's MySQL port 3000 -- and also port 22, which is what you use to connect via SSH.
2) Aside from the ones mentioned above, your limiting of phpMyAdmin to only your IP address sounds really smart -- but make sure you don't lock yourself out accidentally. I would also strongly suggest that you disable password access for SSH connections, forcing the use of key pairs instead. You can find lots of examples on Google.
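Disabling SSH password logins is a one-line change; a sketch for Ubuntu (verify that your key-based login works before reloading, and note the service is named sshd on some distributions):
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload ssh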
What is the best way to prevent my database server from making outgoing requests and receiving incoming requests solely from 101.432.XX.XX? Would closing all ports except 3000 be helpful in achieving this?
If you don't have access to a separate firewall, I would use iptables; there are a number of front-end managers available for it. So yes, closing everything except the ports you need would help. Remember that if you are using iptables, make sure you have a way of accessing the server OOB (short for out of band), meaning that if you make a mistake in iptables you can still get in via console, remote hands, IPMI, etc.
Next up, when creating users, you should allow only that subnet range, plus user/pass authentication.
Are there any other additions to the linux environment that can boost security? I've taken very basic steps to secure my phpmyadmin portal (linked to the MySQL database), such as restricting access to solely my personal IP address.
Ubuntu ships with something called AppArmor; I would investigate that. It can help prevent some shenanigans. An alternative is SELinux.
Further, take more steps with phpMyAdmin. That is the weakest link in the security chain we are building.
To access the database server requires the SSH key (which itself is password protected).
If security is a concern, I would NOT use SSH-key-style access alone. Instead, I would use MySQL's native support for SSL certificate authentication. Here is how to configure it with phpMyAdmin.

Do all servers need to use the HTTPS protocol or just public facing servers?

I have a front end web server running over HTTPS - this is public facing - i.e. port is open.
I also have a backend API server that my webserver makes API requests to - this is public facing and requires authentication - port is open.
These 2 servers run over HTTPS.
Behind the API server, there are lots of other servers. The API server reverse proxies to these servers. Ports for these other servers are not open to incoming traffic. They can only be talked to via the API server.
My Question ... Do the "lots of other servers" need to run over HTTPS or, given that they cannot be accessed externally, can they run over HTTP safely instead?
I thought this would be a common question but I could not find an answer to it. Thanks. If this is a dupe please point me to the right answer.
TL;DR you should encrypt the traffic unless it's on the same host.
You can't trust your network. Malware inside your own network can intercept or modify HTTP requests.
These are not theoretical attacks but real-life examples:
Routers (probably hacked) inside the networks of some websites injecting ads: https://www.blackhat.com/docs/us-16/materials/us-16-Nakibly-TCP-Injection-Attacks-in-the-Wild-A-Large-Scale-Study-wp.pdf
An Indian ISP sniffing traffic between Cloudflare and a back-end: https://medium.com/@karthikb351/airtel-is-sniffing-and-censoring-cloudflares-traffic-in-india-and-they-don-t-even-know-it-90935f7f6d98#.hymc3785e
The now-famous "SSL added and removed here :-)" slide from the NSA
The question is how much do you trust the connection between the public IP and the backend server?
If it is not your data center, at least any privileged employee of the ISP could see/change the data. I guess that's not something your customers would like to hear.
If it is your own data center (meaning you are a kind of ISP yourself), everybody who has physical access to the data center can still potentially sniff the clear-text traffic. In general, anybody who has access to the wire can see the traffic, and it is much harder to implement strict access control inside your own company.
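Once the internal hops speak HTTPS, each one can be spot-checked against your internal CA; a sketch with placeholder host and bundle names:
curl --cacert /etc/ssl/internal-ca.pem https://backend.internal:8443/health                    # fails if the cert doesn't verify
openssl s_client -connect backend.internal:8443 -CAfile /etc/ssl/internal-ca.pem </dev/null    # shows the served chain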

How to setup forward proxy on Windows server for outgoing HTTP and HTTPS requests?

I have a Windows Server 2012 VPS running a web app behind Cloudflare. The app needs to initiate outbound connections based on user actions (e.g., upload an image from a URL). The problem is that this 'leaks' my server's IP address and increases the risk of DDoS attacks.
So I would like to prevent my server's IP from being discovered by setting up a forward proxy. So far my research has shown that this is no simple task, and would involve setting up another VPS to act as a proxy.
Does this extra forward-proxy VPS have to be running Windows? Are there any paid services that could act as a forward proxy for my server (like Cloudflare's reverse proxy system)?
Also, it seems that the suggested IIS forward proxy plugin, Application Request Routing, does not work for HTTPS.
Is there a solution for both types of outgoing (HTTPS + HTTP) requests?
I'm really lost here, so any help or suggestions would be appreciated.
You are correct in needing a "Forward Proxy". A good analogy for this is the proxy settings your browser has for outbound requests. In your case, the web application behaves like a desktop browser and can be configured to make the resource request through a proxy.
Often you can control this for individual requests at the application layer. An example of doing so with C#: C# Connecting Through Proxy
As far as the actual proxy server: No, it does not need to run Windows or IIS. Yes, you can use a proxy service. The vast majority of proxy services are targeted towards consumers and are used for personal privacy or to get around network restrictions. As such, I have no direct recommendations.
Cloudflare actually has recommendations regarding this: https://blog.cloudflare.com/ddos-prevention-protecting-the-origin/.
Features like "upload from URL" that allow the user to upload a photo from a given URL should be configured so that the server doing the download is not the website origin server.
This may be a more comfortable risk mitigator, as it wouldn't depend on a third party proxy service. A request for upload could be handled as a web service call to a dedicated "file downloader" server. Keep in mind that if you have a queued process for another server to do the work, and that server is hosted in the same infrastructure, both might be impacted by a DDoS, depending on the type of DDoS.
Your question implies that you may be comfortable using a non-Windows server. Much software exists that can operate as a proxy (most web servers, for example), but many suffer from the same problem as ARR: lack of support for the HTTP CONNECT verb, which is used by modern browsers to start an HTTPS connection before issuing a GET. Squid is very popular, open source, and supports everything needed to connect to, well, anything, though it's not trivial to set up. Apache also has support for this in mod_proxy_connect, but I have no experience with it and the online documentation isn't very robust. It's Apache, though, so it may be worth the extra investigation.
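Whichever proxy you pick, most HTTP clients will route through it via the standard environment variables; a sketch with a placeholder proxy host:
export http_proxy=http://proxy.internal:3128
export https_proxy=http://proxy.internal:3128
curl -sI https://example.com    # curl issues a CONNECT to the proxy for the HTTPS request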

How secure is IP address filtering?

I'm planning to deploy an internal app that has sensitive data. I suggested that we put it on a machine that isn't exposed to the general internet, just our internal network. The I.T. department rejected this suggestion, saying it's not worth it to set aside a whole machine for one application. (The app has its own domain in case that's relevant, but I was told they can't block requests based on the URL.)
Inside the app, I programmed it to honor requests only if they come from an internal IP address; otherwise it just shows a page saying "you can't look at this." Our internal addresses all follow a distinct pattern, so I'm checking the request IP against a regex.
But I'm nervous about this strategy. It feels kind of janky to me. Is this reasonably secure?
IP filtering is better than nothing, but it's got two problems:
IP addresses can be spoofed.
If an internal machine is compromised (that includes a client workstation, e.g. via installation of a Trojan), then the attacker can use that as a jump host or proxy to attack your system.
If this is really sensitive data, it doesn't necessarily need a dedicated machine (though that is best practice), but you should at least authenticate your users somehow, and don't run less sensitive (and more easily attacked) apps on the same machine.
And if it is truly sensitive, get a security professional in to review what you're doing.
edit: incidentally, if you can, ditch the regex and use something like tcpwrappers or the firewalling features within the OS if it has any. Or, if you can have a different IP address for your app, use the firewall to block external access. (And if you have no firewall then you may as well give up and email your data to the attackers :-)
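On Linux, for instance, the whole regex check collapses into a pair of firewall rules; a sketch where 8080 stands in for your app's port and 10.0.0.0/8 for your internal range:
iptables -A INPUT -p tcp --dport 8080 -s 10.0.0.0/8 -j ACCEPT   # internal clients only
iptables -A INPUT -p tcp --dport 8080 -j DROP                   # everyone else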
I would rather go with SSL and some certificates, or a simple username / password protection instead of IP filtering.
It depends exactly HOW secure you really need it to be.
I am assuming your server is externally hosted and not connected via a VPN. Therefore, you are checking that the requesting addresses for your HTTPS (you are using HTTPS, aren't you??) site are within your own organisation's networks.
Using a regex to match IP addresses sounds iffy; can't you just test against a network/netmask like everyone else?
How secure does it really need to be? IP address spoofing is not easy: spoofed packets cannot be used to establish an HTTPS connection unless the attacker also manipulates upstream routers so that the return packets are redirected to them.
If you need it to be really secure, just get your IT department to install a VPN and route over private IP address space. Set up your IP address restrictions for those private addresses. IP address restrictions where the routing is via a host-based VPN are still secure even if someone compromises an upstream default gateway.
If your application itself is checking the IP address, then it is extremely vulnerable. At that point you don't have any protection at the router, which is where IP filtering really needs to happen. Your application is probably reading the sending IP address from HTTP header information (such as X-Forwarded-For), and that is extremely easy to spoof. If you lock the IP address down at the router, that is a different story, and it will buy you some real security about who can access the site from where.
If all you are doing is accessing the application internally, then SSL won't buy you much unless you are trying to secure the information from parties internal to the organization, or you require client certificates. This assumes you won't ever access the site from an external connection (VPNs don't count, because you are tunneling into the internal network and are technically part of it at that point). It won't hurt either, and it isn't that hard to set up; just don't think it is going to be the solution to all your problems.
IP whitelisting is, as others have mentioned, vulnerable to IP spoofing and man-in-the-middle attacks. In an MITM, consider that some switch or router has been compromised and will see the "replies"; it can either monitor or even alter them.
Consider also vulnerabilities in SSL encryption. Depending on the attacker's effort, SSL can be foiled in an MITM as well; think of the well-known gaffes with the reuse of primes, etc.
Depending on the sensitivity of your data, I would not settle for SSL but would go with strongSwan or OpenVPN for more security. If handled properly, these are a lot less vulnerable to an MITM.
Reliance on whitelisting alone (even with SSL) I would consider "low grade", but may be sufficient for your needs. Just be keenly aware of the implications and do not fall into the trap of a "false sense of security".
If it's limited by IP address, then although they can spoof the IP address, they won't be able to get the reply. Of course, if it's exposed to the internet, it can still get hit by attacks other than against the app.
Just because all your internal IPs match a given regex, that doesn't mean that all IPs that match a given regex are internal. So, your regex is a point of possible security failure.
I don't know what technology you used to build your site, but if it's Windows/ASP.net, you can check the permissions of the requesting machine based on its Windows credentials when the request is made.
Like all security measures, it's useless on its own. If you do have to put it on a public-facing web server, use IP whitelisting together with basic username/password auth, SSL, a decent monitoring setup, and an up-to-date server application.
That said, what is the point in having the server be publicly accessible and then restricting it to only internal IP addresses? It's basically reinventing what NAT gives you for free with an internal-only server, plus you have to worry about web-server exploits and the like.
You don't seem to gain anything by having it externally accessible, and there are many benefits to having it be internal-only.
My first thought on the resource issue would be to ask whether it wouldn't be possible to work some magic with a virtual machine.
Other than that: if the IP addresses you check against are either IPs you KNOW belong to computers that are supposed to access the application, or are in the local IP range, then I cannot see how it would not be secure enough. (I am actually using a similar approach at the moment on a project, although it is not incredibly important that the site be kept "hidden".)
A proper firewall can protect against IP spoofing, and spoofing an IP is not as easy as, say, spoofing your caller ID, so the argument /not/ to use IP filtering because of the spoofing danger is a bit antiquated. Security is best applied in layers, so you're not relying on only one mechanism. That's why we have WAF systems, username+password, layer-3 firewalls, layer-7 firewalls, encryption, MFA, SIEM, and a host of other security measures, each of which adds protection (at increasing cost).
If this is a web application you're talking about (which was not clear from your question), the solution is fairly simple without the cost of advanced security systems. Whether using IIS, Apache, etc. you have the ability to restrict connections to your app to a specific target URL as well as source IP address - no changes to your app necessary - on a per-application basis. Preventing IP-based web browsing of your app, coupled with IP source restrictions, should give you significant protection against casual browsing/attacks. If this is not a web app, you'll have to be more specific so people know whether OS-based security (as has been proposed by others) is your only option or not.
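With Apache 2.4, for example, that restriction is a few lines of configuration and no application changes; a sketch for Debian/Ubuntu with an illustrative path and address range:
sudo tee /etc/apache2/conf-available/internal-only.conf <<'EOF'
<Location "/">
    # Internal range only; everything else gets 403
    Require ip 10.0.0.0/8
</Location>
EOF
sudo a2enconf internal-only && sudo systemctl reload apache2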
Your security is only as strong as your weakest link. In the grand scheme of things, spoofing an IP is child's play. Use SSL and require client certs.
It became useful first to distinguish among different kinds of IP VPN based on the administrative relationships interconnecting the nodes, not the technology. Once the relationships were defined, different technologies could be used, depending on requirements such as security and quality of service.
Maybe this will help? I've been looking for the same answer, and found this Stack Overflow thread as well as this idea from the Red Hat Enterprise Linux documentation. I'll try it soon. I hope it helps.
# Drop forwarded packets that arrive on the internet-facing interface (eth0)
# while claiming a source address inside the protected LAN range (anti-spoofing).
iptables -A FORWARD -s 192.168.1.0/24 -i eth0 -j DROP
Where 192.168.1.0/24 is the LAN range you want to protect. The idea is to block internet-facing (forwarded) traffic from spoofing addresses in the local IP range.
Ref: http://www.centos.org/docs/4/html/rhel-sg-en-4/s1-firewall-ipt-rule.html
