Firewalls preventing product activation

I'm looking to implement a basic product activation scheme such that when the program is launched it will contact our server via HTTP to complete the activation. I'm wondering if it is a big problem (especially with bigger companies or educational organizations) that firewalls will block the outgoing HTTP request and prevent activation. Any idea how big an issue this may be?

In my experience, when HTTP traffic is blocked by a hardware firewall, there is more often than not a proxy server that is used to browse the internet. Therefore it is good practice to allow the user to enter proxy and authentication details.
The number of times I have seen applications fail because they did not use the corporate proxy server, and were therefore blocked by the firewall, astonishes me.
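A minimal sketch of what "let the user enter proxy details" might look like for the activation call, assuming a Python client using the requests library; the activation URL, proxy host, and credentials are placeholders, not anyone's real endpoint:

```python
import requests

ACTIVATION_URL = "https://activation.example.com/activate"  # placeholder URL

def activate(license_key, proxy_host=None, proxy_user=None, proxy_pass=None):
    proxies = None
    if proxy_host:
        # e.g. proxy_host = "proxy.corp.local:8080"
        auth = f"{proxy_user}:{proxy_pass}@" if proxy_user else ""
        proxies = {
            "http": f"http://{auth}{proxy_host}",
            "https": f"http://{auth}{proxy_host}",
        }
    # If no explicit proxy is given, requests still honours the standard
    # HTTP_PROXY / HTTPS_PROXY environment variables many companies set.
    resp = requests.post(ACTIVATION_URL,
                         data={"key": license_key},
                         proxies=proxies,
                         timeout=10)
    resp.raise_for_status()
    return resp.json()
```

If the direct request fails, prompting for proxy details and retrying covers the most common corporate setup.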

There are also personal software solutions that purposely block outgoing connections. Check out Little Snitch. This program can set up rules that explicitly block your computer from making connections to certain domains, IPs and/or ports. A common use for this program is to stop one's computer from "phoning home" to an activation server.

I can't tell you how prevalent this will be, sorry. But I can give you one data point.
At this company, Internet access is granted on an as-needed basis. There is one product I have had to support which is wonderful for its purpose and reasonably priced, but I will never approve its purchase again - the licensing is too much of a hassle to be worth it.

I'd say that it may not be common, but if any one of your customers is a business, it's likely that you will encounter someone who tries to run your software behind a restricted internet connection or a proxy. Your software will need to handle this situation, otherwise you will have a pissed-off customer who cannot use your product, and you will lose the sale for sure.
If you are looking for a third-party tool, I've used InstallKey (www.lomacons.com) for product activations. It has functionality that allows for validating with and without an internet connection.
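A hedged sketch of the "handle it gracefully" point: try the online activation, and if the server is unreachable (firewall, missing proxy, offline machine), fall back rather than refusing to run. The URL, timeout, and grace-period policy are illustrative assumptions, not any vendor's actual API:

```python
import requests

def try_activate(license_key):
    try:
        resp = requests.post("https://activation.example.com/activate",  # placeholder
                             data={"key": license_key}, timeout=5)
        resp.raise_for_status()
        return "activated"
    except requests.RequestException:
        # Could not reach the server: offer an offline activation path (e.g. a
        # signed code the user obtains by e-mail) or start a limited grace
        # period instead of simply blocking the user.
        return "offline_fallback"
```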

Related

Access load-balanced website when DNS lookup is restricted on server

The scenario is: I need to send push notifications to the Apple push server hosted at gateway.sandbox.push.apple.com. This Apple server is load balanced, and the destination IP address can be anything in the 17.x.x.x block.
Now, my server that will be making requests to the Apple server is in a secure environment and is behind firewalls. I have had the IP range 17.x.x.x unblocked, but DNS resolution is not possible on that server. The server also doesn't have Internet access.
What I did was ping the Apple server from another system to get the Apple server's IP address for the moment. Then I mapped that IP address to the DNS name in the hosts file of my Windows server. This worked, but the IP address can change at any time on Apple's end, and that will break things.
What can I do in this scenario?
You can talk to your security people and in cooperation with them come up with a proper, internally supported, way to provide what you need. What you need is to look up an address, and then talk to that address. Currently, you are only provided half of that.
What you're asking us for is a way to circumvent your own organization's security policies (policies that admittedly appear stupid, but that's another matter entirely). Even if someone here can come up with a technical way to do what you ask that works for now, it's likely to break at any time, since you're working at odds with your own workplace. Also, what will your bosses say if they find out that you're violating security policies?
Security very often comes down to tradeoffs. As the saying goes, the only truly secure system is one that has been encased in concrete and sunk to the bottom of the sea. But such a system will also be somewhat difficult to get useful work out of, so usually we accept lesser security in order to get work done. In your case, the tradeoff currently sits in a place that prevents you from doing whatever it is you're working on. So your organization needs to make a choice: change the tradeoff so that your machine can look up names, or keep the current tradeoff and accept that your task will not be done.
I'm sorry that I can't give you a straight up "Sure, do this" kind of answer, but your problem really is not technical.

What security risks are posed by using a local server to provide a browser-based gui for a program?

I am building a relatively simple program to gather and sort data input by the user. I would like to use a local server running through a web browser for two reasons:
HTML forms are a simple and effective means for gathering the input I'll need.
I want to be able to run the program off-line and without having to manage the security risks involved with accessing a remote server.
Edit: To clarify, I mean that the application should be accessible only from the local network and not from the Internet.
As I've been seeking out information on the issue, I've encountered one or two remarks suggesting that local servers have their own security risks, but I'm not clear on the nature or severity of those risks.
(In case it is relevant, I will be using SWI-Prolog for handling the data manipulation. I also plan on using the SWI-Prolog HTTP package for the server, but I am willing to reconsider this choice if it turns out to be a bad idea.)
I have two questions:
What security risks does one need to be aware of when using a local server for this purpose? (Note: In my case, the program will likely deal with some very sensitive information, so I don't have room for any laxity on this issue).
How does one go about mitigating these risks? (Or, where I should look to learn how to address this issue?)
I'm very grateful for any and all help!
There are security risks with any solution. You can use tools that have been proven over years and still be hacked one day (this has happened to me). And you can pay a lot for a security solution and never be hacked. So you always need to weigh the effort against the impact.
Basically, you need to protect four "doors" in your case:
1. Authorization (password interception or, for example, improper usage of cookies)
2. The HTTP protocol
3. Application input
4. Other ways to access your database (not via HTTP; for example, through an SSH port with a weak password, or by someone taking your computer or hard disk, etc. In some cases you need to properly encrypt the volume)
Doors 1 and 4 are not specific to Prolog, and door 4 is the only one with aspects specific to local servers.
Protecting at the HTTP protocol level means not allowing requests that could take control of your SWI-Prolog server. For this purpose I recommend installing a reverse proxy such as nginx, which can prevent attacks at this level, including some types of DoS. The browser will then contact nginx, and nginx will forward the request to your server only if it is a correct HTTP request. You can use any other server instead of nginx if it has similar features.
You need to install a proper SSL certificate and enable SSL (HTTPS) in your reverse proxy server, not in your SWI-Prolog server. HTTPS will encrypt all traffic to the proxy, which will then communicate with SWI-Prolog over plain HTTP.
Think about authorization. There are methods that can be broken very easily. You need to study this topic; there is a lot of information available. I think it is the most important part.
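One small piece of that authorization "door" is never storing plain-text passwords. A minimal sketch using only the Python standard library (the iteration count and salt size are illustrative choices, and the function names are mine, not part of any framework):

```python
import hashlib, hmac, os

def hash_password(password: str, salt: bytes = None):
    # Derive a salted PBKDF2-SHA256 digest; store both salt and digest.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)  # constant-time compare
```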
The application input problem: the famous example is SQL injection. Study the examples. All good web frameworks have "entry" procedures that clean up all possible injections. Take existing code and rewrite it in Prolog.
Also, test all input fields with very long strings, different character sets, etc.
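The same idea sketched in Python rather than Prolog: validate every field against a whitelist pattern and a length limit before it reaches the database or the page. The field rules below are invented purely for illustration:

```python
import re

RULES = {
    "username": (re.compile(r"^[A-Za-z0-9_]{1,32}$"), 32),
    "email":    (re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"), 254),
}

def clean_input(field: str, value: str) -> str:
    # Reject anything that is too long or does not match the whitelist.
    pattern, max_len = RULES[field]
    if len(value) > max_len or not pattern.match(value):
        raise ValueError(f"rejected input for {field!r}")
    return value
```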
As you can see, security is not so easy, but you can choose an appropriate level of effort by weighing it against the impact of being hacked.
Also, think about the likely attacker. If somebody is specifically interested in getting your information, all the methods mentioned apply. But that is a rare case. Most often, hackers just scan the Internet and try to apply known exploits to every server they find. In that case your best friends are honeypots and Prolog itself, because the probability that a hacker is interested in SWI-Prolog internals is extremely low (a hacker needs to study the server code well to find a door).
So I think you will find adequate methods to protect all sensitive data.
But please, never use passwords built from combinations of dictionary words, and never use the same password for more than one purpose; that is the most important rule of security. For the same reason, you shouldn't give your users access to all information; protection should be part of the application-level design.
The measures specific to a local server are a good firewall, proper network setup, and encryption of the hard drive partition in case your local server can be physically stolen.
But if you mean the application should be accessible only from your local network and not from the Internet, you need much less effort; mainly you need to check your router/firewall setup and the fourth "door" in my list.
If you have a very limited number of known users, you can simply ask them to use a VPN and not harden your server as heavily as you would for "global" access.
I'd point out that my post was about a security issue with using port forwarding in Apache to access a Prolog server.
And I do know of a successful Prolog injection DoS attack on a website based on the SWI-Prolog HTTP framework. I don't believe the website's author wants the details made public, but the possibility is certainly real.
Obviously this attack vector is only possible if the site evaluates Turing-complete code (or code which it can't prove will terminate).
A simple security precaution is to check the Request object and reject requests from anything but localhost.
I'd point out that the pldoc server only responds by default on localhost.
- Anne Ogborn
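A sketch of the localhost-only check Anne describes, in Python as a stand-in for SWI-Prolog's Request object; the handler name and port are arbitrary assumptions:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class LocalOnlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client_ip = self.client_address[0]
        if client_ip not in ("127.0.0.1", "::1"):
            self.send_error(403, "local requests only")  # reject non-localhost
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the local GUI server")

if __name__ == "__main__":
    # Binding to 127.0.0.1 (not 0.0.0.0) already keeps remote hosts out;
    # the explicit check above is a second line of defence.
    HTTPServer(("127.0.0.1", 8080), LocalOnlyHandler).serve_forever()
```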
I think the SWI-Prolog HTTP package is an excellent choice. Jan Wielemaker has put much effort into making it secure and scalable.
I don't think you need to worry about SQL injection; indeed, it would be strange to rely on SQL when you have the power of Prolog at your fingertips...
Of course, you need to properly manage the HTTP access in your server...
Just this morning there was an interesting post on the SWI-Prolog mailing list about this topic: Anne Ogborn shares her experience...

How to protect a website from DoS attacks

What are the best methods for protecting a site from DoS attacks? Any idea how popular sites/services handle this issue?
What tools/services exist at the application, operating system, networking, and hosting levels?
It would be nice if someone could share their real experience dealing with this.
Thanks
Are you sure you mean DoS and not injections? There's not much you can do on the web programming end to prevent a DoS, as it's more about tying up connection ports and blocking traffic lower in the network stack than at the application layer (web programming).
As for how most companies prevent them: a lot of companies use load balancing and server farms to spread out the incoming bandwidth. Also, a lot of smart routers monitor activity from IPs and IP ranges to make sure there aren't too many requests coming in (and, if so, block them before they hit the server).
The biggest intentional DoS I can think of is woot.com during a woot-off, though. I suggest trying Wikipedia ( http://en.wikipedia.org/wiki/Denial-of-service_attack#Prevention_and_response ) to see what they have to say about prevention methods.
I've never had to deal with this yet, but a common method involves writing a small piece of code to track IP addresses that are making a large number of requests in a short amount of time and denying them before any real processing happens.
Many hosting services provide this along with hosting, check with them to see if they do.
I implemented this once in the application layer. We recorded all requests served to our server farms through a service that each machine in the farm could send request information to. We then processed these requests, aggregated them by IP address, and automatically flagged any IP address exceeding a threshold of a certain number of requests per time interval. Any request coming from a flagged IP got a standard Captcha response; if they failed too many times, they were banned forever (dangerous if the DoS comes from behind a proxy). If they proved they were human, the statistics related to their IP were "zeroed."
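A simplified sketch of the approach described above: count requests per IP over a sliding window and flag addresses that exceed a threshold (the flagged set would then trigger a Captcha or a ban upstream). The window and threshold values are illustrative:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS = 300

hits = defaultdict(deque)   # ip -> timestamps of recent requests
flagged = set()

def record_request(ip: str) -> bool:
    """Return True if the request should be served normally, False if flagged."""
    now = time.time()
    q = hits[ip]
    q.append(now)
    # Drop timestamps that have fallen out of the window.
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()
    if len(q) > MAX_REQUESTS:
        flagged.add(ip)
    return ip not in flagged
```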
Well, this is an old one, but people looking to do this might want to look at fail2ban.
http://go2linux.garron.me/linux/2011/05/fail2ban-protect-web-server-http-dos-attack-1084.html
That's more of a Server Fault sort of answer, as opposed to building this into your application, but I think it's the sort of problem that is most likely better tackled that way. If the logic for what you want to block is complex, consider having your application just log enough info to base the banning policy on, rather than trying to enforce the policy in the application itself.
Consider also that, depending on the web server you use, you might be vulnerable to things like a Slowloris attack, and there's nothing you can do about that at the web application level.

Accessing data in internal production databases from a web server in DMZ

I'm working on an external web site (in DMZ) that needs to get data from our internal production database.
All of the designs that I have come up with are rejected because the network department will not allow a connection of any sort (WCF, Oracle, etc.) to come inside from the DMZ.
The suggestions that have come from the networking side generally fall under two categories -
1) Export the required data to a server in the DMZ and export modified/inserted records eventually somehow, or
2) Poll from inside, continually asking a service in the DMZ whether it has any requests that need to be serviced.
I'm averse to suggestion 1 because I don't like the idea of a database sitting in the DMZ. Option 2 seems like a ridiculous amount of extra complication for the nature of what's being done.
Are these the only legitimate solutions? Is there an obvious solution I'm missing? Is the "No connections in from DMZ" decree practical?
Edit: One line I'm constantly hearing is that "no large company allows a web site to connect inside to get live production data. That's why they send confirmation emails". Is that really how it works?
I'm sorry, but your networking department are on crack or something like that - they clearly do not understand what the purpose of a DMZ is. To summarize - there are three "areas" - the big, bad outside world, your pure and virginal inside world, and the well known, trusted, safe DMZ.
The rules are:
Connections from outside can only get to hosts in the DMZ, and on specific ports (80, 443, etc);
Connections from the outside to the inside are blocked absolutely;
Connections from the inside to either the DMZ or the outside are fine and dandy;
Only hosts in the DMZ may establish connections to the inside, and again, only on well known and permitted ports.
Point four is the one they haven't grasped - the "no connections from the DMZ" policy is misguided.
Ask them "How does our email system work then?" I assume you have a corporate mail server, maybe exchange, and individuals have clients that connect to it. Ask them to explain how your corporate email, with access to internet email, works and is compliant with their policy.
Sorry, it doesn't really give you an answer.
I am a security architect at a Fortune 50 financial firm. We have had these same conversations. I don't agree with your network group. I understand their angst, and I understand that they would like a better solution, but most places don't opt for the better choices (due to ignorance on their part [i.e. the network guys, not you]).
Two options if they are hard set on this:
You can use a SQL proxy solution like GreenSQL (I don't work for them, just know of them); they are at greensql dot com.
The approach they refer to, which most "large orgs" use, is a tiered web model: a front-end web server (accessed by the public at large), a mid-tier (the application or services layer where the actual processing occurs), and a database tier. The mid-tier is the only thing that can talk to the database tier. In my opinion this model is optimal for most large orgs. BUT, that being said, most large orgs will run into one of these: a vendor-provided product that does not support a middle tier; an application developed without a middle tier, where the transition requires development resources they don't have to spare; or, plainly, no priority at the company to go that route.
It's a gray area, with no solid right or wrong, so if they are speaking in terms of finality then they are clearly wrong. I applaud their zeal; as a security professional I understand where they are coming from. BUT we have to enable the business to function securely. That's the challenge and the gauntlet I always throw down to myself: how can I deliver what my customers (my developers, my admins, my DBAs, business users) want, within reason? And if I tell someone no, I always try to offer an alternative that meets most of their needs.
Honestly, it should be an open conversation. Here's where I think you can get some room: ask them to threat-model the risk they are looking to mitigate. Ask them to offer alternative solutions that enable your web apps to function. If they are saying the two sides can't talk at all, then put the onus on them to provide a solution; if they can't, then your approach stands by default. Stipulate that you will open connections from the DMZ to the DB ONLY on the approved ports. Let them know that the DMZ is for offering external services, and external services are not much good without internal data, except perhaps for file transfer solutions.
Just my two cents; I hope this comment helps. And try to be easy on my security brethren. We have some less experienced, misguided folks in our flock who cling to old ways of doing things. As the world evolves, the threat evolves, and so does our approach to mitigation.
Why don't you replicate your database servers? You can ensure that the connection is from the internal servers to the external servers and not the other way.
One way is to use the MS Sync Framework - you can build a simple Windows service that synchronizes changes from the internal database to your external database (which can reside on a separate DB server) and then use that in your public-facing website. The advantage is that your sync logic can filter out sensitive data and keep only what is really necessary. And since control of the data stays entirely with your internal servers (PUSH data out instead of pulling it in), I don't think IT will have an issue with that.
The connection formed is never inbound - it is outbound - which means no inbound ports need to be opened.
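A hedged sketch of that push-out idea: a job running on the internal network copies only the non-sensitive columns it needs to the externally facing database. sqlite3 stands in for the real database servers here, and the table and column names are made up:

```python
import sqlite3

def push_products(internal_db="internal.db", dmz_db="dmz_copy.db"):
    src = sqlite3.connect(internal_db)
    dst = sqlite3.connect(dmz_db)
    dst.execute("CREATE TABLE IF NOT EXISTS products "
                "(id INTEGER PRIMARY KEY, name TEXT, price REAL)")
    # Select only the columns the public site needs; sensitive fields
    # (cost price, supplier, customer data) never leave the internal network.
    rows = src.execute("SELECT id, name, price FROM products").fetchall()
    dst.execute("DELETE FROM products")
    dst.executemany("INSERT INTO products (id, name, price) VALUES (?, ?, ?)", rows)
    dst.commit()
    # The connection is always opened from the inside toward the DMZ copy,
    # never inbound from the DMZ.
```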
I'm mostly with Ken Ray on this; however, there appears to be some missing information. Let's see if I get this right:
You have a web application.
Part of that web application needs to display data from a different production server (not the one that normally backs your site).
The data you want/need is handled by a completely different application internally.
This data is critical to the normal flow of your business and only a limited set needs to be available to the outside world.
If I'm on track, then I would have to say that I agree with your IT department and I wouldn't let you directly access that server either.
Just take option 1. Have the production server export the data you need to a commonly accessible drop location. Have the other db server (one in the DMZ) pick up the data and import it on a regular basis. Finally, have your web app ONLY talk to the db server in the dmz.
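A rough sketch of that option 1 workflow, with no direct connection between the two databases at all: the internal side writes a CSV to a shared drop location, and the DMZ side imports it on a schedule. The path, table, and column names are placeholders:

```python
import csv, sqlite3

DROP_FILE = r"\\dropserver\share\orders_export.csv"  # placeholder drop location

def export_from_internal(db_path="internal.db"):
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT order_id, status, total FROM orders").fetchall()
    with open(DROP_FILE, "w", newline="") as f:
        csv.writer(f).writerows(rows)          # export only what the site needs

def import_into_dmz(db_path="dmz.db"):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS orders "
                 "(order_id INTEGER, status TEXT, total REAL)")
    with open(DROP_FILE, newline="") as f:
        conn.execute("DELETE FROM orders")
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", csv.reader(f))
    conn.commit()
```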
Given how a lot of people build sites these days, I would also be loath to just open a SQL port from the web server in the DMZ to the internal database. Quite frankly, I could be convinced to open the connection if I was assured that 1. you only used stored procs to access the data you need; 2. the account information used to access the database was encrypted and completely restricted to only running those procs; 3. those procs had zero dynamic SQL and were limited to selects; and 4. your code was built right.
A regular IT person would probably not be qualified to answer all of those questions. And if this database came from a third party, I would bet you might lose support if you started accessing it from outside its normal application.
Before talking about your particular problem I want to deal with the Update that you provided.
I haven't worked for a "large" company - though large is hard to judge without context - but I have built my share of web applications for the non-profit and the university department I used to work for. In both situations I have always connected to the production DB on the internal network from the web server in the DMZ. I am pretty sure many large companies do this too; think, for example, of how SharePoint's architecture is set up - back-end indexing, database, and other servers, which are connected to by front-facing web servers located in the DMZ.
Also, the practice of sending confirmation e-mails - which I believe refers to the confirmations you get when you register for a site - doesn't usually have anything to do with security. It is more a method of verifying that a user has entered a valid e-mail address.
Now with that out of the way, let us look at your problem. Unfortunately, other than the two solutions you presented, I can't think of any other way to do this. Though some things that you might want to think about:
Solution 1:
Depending on the sensitivity of the data that you need to work with, extracting it onto a server on the DMZ - whether using a service or some sort of automatic synchronization software - goes against basic security common sense. What you have done is move the data from a server behind a firewall to one that is in front of it. They might as well just let you get to the internal db server from the DMZ.
Solution 2:
I am no networking expert, so please correct me if I am wrong, but a polling mechanism still requires some sort of communication back from the web server to tell the database server that it needs some data, which means a port needs to be open. Again, you might as well tell them to let you get to the internal database without the hassle, because you haven't really added any additional security with this method.
So, I hope this at least provides you with some arguments for being allowed to access the data directly. To me it seems like there are many misconceptions in your network department about what a secure, database-backed web application architecture should look like.
Here's what you could do... it's a bit of a stretch, but it should work...
Write a service that sits on the server in the DMZ. It will listen on three ports, A, B, and C (pick whatever port numbers make sense). I'll call this the DMZ tunnel app.
Write another service that lives anywhere on the internal network. It will connect to the DMZ tunnel app on port B. Once this connection is established, the DMZ tunnel app no longer needs to listen on port B. This is the "control connection".
When something connects to port A of the DMZ tunnel app, it will send a request over the control connection for a new DB/whatever connection. The internal tunnel app will respond by connecting to the internal resource. Once this connection is established, it will connect back to the DMZ tunnel app on port C.
After possibly verifying some tokens (this part is up to you) the DMZ tunnel app will then forward data back and forth between the connections it received on port A and C. You will effectively have a transparent TCP proxy created from two services running in the DMZ and on the internal network.
And, for the best part, once this is done you can explain what you did to your IT department and watch their faces as they realize that you did not violate the letter of their security policy, but you are still being productive. I tell you, they will hate that.
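A very condensed, single-connection sketch of the two services described above, in Python; a real version would need token verification, error handling, and support for many simultaneous connections, and the hosts and ports are placeholders:

```python
import socket, threading

PORT_A, PORT_B, PORT_C = 9001, 9002, 9003          # client / control / data ports
INTERNAL_RESOURCE = ("db.internal.local", 1433)     # placeholder internal target

def listen_on(port):
    s = socket.socket()
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    s.listen(1)
    return s

def pump(src, dst):
    # Copy bytes from src to dst until src closes, then close dst.
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()

def dmz_tunnel_app():
    ctrl, _ = listen_on(PORT_B).accept()      # control connection from the inside
    client, _ = listen_on(PORT_A).accept()    # something connects on port A
    ctrl.sendall(b"CONNECT\n")                # ask the inside for a new data link
    data, _ = listen_on(PORT_C).accept()      # the inside dials back on port C
    threading.Thread(target=pump, args=(client, data), daemon=True).start()
    pump(data, client)                        # forward traffic in both directions

def internal_tunnel_app(dmz_host):
    ctrl = socket.create_connection((dmz_host, PORT_B))   # outbound only
    if ctrl.recv(1024).startswith(b"CONNECT"):
        resource = socket.create_connection(INTERNAL_RESOURCE)
        data = socket.create_connection((dmz_host, PORT_C))
        threading.Thread(target=pump, args=(data, resource), daemon=True).start()
        pump(resource, data)
```

Note that every connection the internal service makes is outbound, which is exactly why this stays within the letter of a "no inbound connections from the DMZ" rule.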
If no development solution can be applied because of system engineering restrictions in the DMZ, then hand them the ball.
Put your website on the intranet, and tell them: 'Now I need inbound HTTP:80 or HTTPS:443 connections to that application. Set up whatever you want: reverse proxies, ISA Server, protocol breaks, SSL... I will adapt my application if necessary.'
As for ISA, I guess they already have one if you have Exchange with external connections.
A lot of companies choose this solution when a resource needs to be shared between the intranet and the public.
Setting up a dedicated intranet network with strict security rules is the best way to make administration, integration, and deployment easier. What is easier is well known, and what is known is mastered: fewer security breaches.
More and more system engineers (like mine) prefer to maintain an intranet network with a small 'security breach' like HTTP rather than open other protocols and ports.
By the way, if they knew WCF services, they would have accepted this solution. It is the most secure solution if well designed.
Personally, I use these two methods: TCP (HTTP or not) services and ISA Server.

Software and Security - do you follow specific guidelines?

As part of a PCI-DSS audit we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security. You can expect infinite attackers. You have to stop them all - even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well-known secure software development guidelines. Howard & LeBlanc's "Writing Secure Code" is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (eg. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design & document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions, from a very broad perspective:
The only way to prevent violations by inside users is to educate users, enforce awareness of company policies, limit user freedoms, and monitor user activity. This is extremely important, as this is where the most severe security violations always occur, whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago security experts would not have considered looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security on your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPsec to reach all automation services behind the firewall, and limit points of access to known points behind an IDS or IPS. Basically, limit access to everything, encrypt that access, and inherently treat every access request as potentially malicious.
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX and require administrator privileges to activate ActiveX. Patch all software that executes any sort of client-side scripting in a test lab no later than 48 hours after the patches are released by the vendor. If the tests do not show interference with the company's authorized software configuration, deploy the patches immediately. The only solution for memory corruption vulnerabilities is to patch your software. This software may include: Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, ensure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but even more important, it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. This problem is easily realized with the advance of AJAX, since information can be dynamically sent to an anonymous third party in violation of the same-origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails, hire some CISSP-certified security managers who have the management experience and the balls to speak directly to your company executives. If your leadership is not willing to take security seriously, then your company will never meet PCI compliance.
