Is Google App Engine secure enough for financial applications?

I am wondering whether Google App Engine is secure enough for financial applications? This would involve storing sensitive information, access to users' funds, etc. Are there any applications like that already running on App Engine?

Generally speaking, App Engine is more secure than most other options.
(1) Google has more people working on security than most companies can afford to dedicate to their own servers or VMs.
When a new security threat is discovered, Google is very likely to fix it quickly, compared to dedicated servers/VMs, where you have to rely on your own sysadmin to fix it in time.
(2) There is no OS, firewall, etc., to configure, which reduces the possibility of a misconfiguration that exposes a security hole. The runtimes are also sandboxed and limited.
Ultimately, the vast majority of all security breaches happen for two reasons:
wrong application code/architecture
human factor (people storing their passwords in email messages, choosing weak passwords, doing harm on purpose, phishing, etc.)
Neither of these factors is any different on App Engine than on any other platform.
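For instance, "wrong application code" is often something as mundane as how credentials are stored. A minimal sketch of doing it properly (Python standard library; the helper names are mine, not anything App Engine provides):

    import hashlib
    import hmac
    import os

    def hash_password(password):
        """Return (salt, digest); never store the plain-text password."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
        return salt, digest

    def verify_password(password, salt, digest):
        """Recompute the hash and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
        return hmac.compare_digest(candidate, digest)

Code like this runs identically on App Engine or on your own VM, which is the point: the platform doesn't protect you from this class of mistake.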
As for the second part of your question, Snapchat has a lot of, should I say, very sensitive information. It runs on App Engine.

WebFilings, a company that handles financial data for most of the Fortune 500, runs on Google App Engine. Their statement about it in the second link I've given:
Google App Engine gave us that speed we needed to grow, but as Murray stated, “being on the Google foundation gives us and our customers peace of mind.” WebFilings needed a platform with a strong approach to security that was very reliable because our customers’ security is a top priority. Because our customers rely on accurate and timely access to the cloud as they input pre-released financial information, security is top-of-mind at our company. As a result, we want our customers’ security to be in the best hands, in Google App Engine’s hands.

Related

Can Azure Performance Tests be used to DDoS?

I'm playing around with Azure Load Test for my own app and I just realized that I can type any URL into the "Test Target" field. Does this mean that any Azure subscriber could "simulate" 1,000,000 concurrent users against my website, effectively DDoSing me?
You have just hit upon the license and ethical problem of performance testing tools. They are a necessarily violent and destructive class of tools: the tactical nuclear weapon of tests designed to find and exploit the weak parts of your architecture.
If you read the license agreements of almost every tool, there is typically a clause about using the tool only against sites you own, manage, control, or where you have explicit permission from someone who does hold those attributes. Also, most sites have user agreements which prohibit the use of automation against them outside of a defined, supported API, as well as prohibitions against performance testing without coordination.
Does anything stop you from driving around the web neighborhood and tossing nuclear grenades at websites? No, not really. You have your license and user agreement constraints, assuming you choose to honor them - many don't. You have the threat of prosecution for engaging in a DDoS attack against a site - this is usually the greater curb against behavior.
Likewise, because of their destructive power, these tools should not be placed in the hands of people without training and a period of supervision, lest they start asking, "Hmmmm, I wonder what happens when I use 10,000,000 fake email addresses for account signups in test?"... only to find that they have now destroyed the ability to transfer mail in and out of the enterprise, because the SMTP relay has 10,000,000 confirmation emails queued up and waiting to time out after 72 hours as undeliverable.
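For what it's worth, a testing service (or your own harness) can at least enforce the "only sites you own" rule by requiring the target to prove ownership before any load is generated. This is only a sketch of the idea; the verification path and token scheme below are made up for illustration, not anything Azure actually exposes:

    import secrets
    import urllib.request

    def target_is_verified(base_url, expected_token):
        """Fetch a verification file from the target and compare it to the
        token issued when the site was registered as owned by the tester."""
        url = base_url.rstrip("/") + "/.well-known/loadtest-verification.txt"
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                served = resp.read().decode("utf-8").strip()
        except OSError:
            return False
        return secrets.compare_digest(served, expected_token)

    if not target_is_verified("https://example.com", "token-issued-at-signup"):
        raise SystemExit("Target not verified as yours; refusing to run the load test.")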

Explaining Windows Azure to a layman or students

I am looking for simple analogies to explain Windows Azure, AppFabric, etc. to students or a layperson. Please let me know if you have any suggestions.
Thanks
N
Well, first I would try and talk about how we used to build and maintain things: buying our own hardware, building it, programming it, and connecting it to the internet. That's the old way. Then I would pivot into what cloud service providers are. In a nutshell, they are just somebody else's servers, usually Amazon's, Microsoft's, or Google's: AWS/Azure/GCP.
Here is a quick YouTube video explaining it in layman's terms.
https://www.youtube.com/watch?v=1ERdeg8Sfv4
Cloud service providers offer a web portal, a website where folks can click and build services like storage, backup, DNS, databases, more websites, load balancing, and - maybe the most popular - virtual machine hosting.
What makes CSPs so successful is economies of scale. CSPs build huge data centers and engineer them to provide the kind of services that most businesses need. Contrast that with every business building its own from scratch. There are, however, plenty of challenges for these CSPs, like needing a lot more spare capacity and having to build something that fits everyone as opposed to something that fits a particular user. So, for a small business, whether they save money depends on their use case. You might save more building from scratch, but then you'd have to train and pay folks to maintain your own servers.
One of the most revolutionary benefits that cloud service providers brought to the market is that purchasing additional capacity is much easier and faster. It might have taken you weeks to buy hardware and install it at your location, or, if you were renting through traditional suppliers, a few hours for them to manually reconfigure things. CSPs make all of this automatic, so you can get a new server within seconds. This has allowed businesses to build applications that scale on demand, which means they pay a different amount for the services depending on how much they use. This can reduce costs, but it again requires more time to develop and maintain the more complex applications.
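To make "pay for what you use" concrete for students, a small back-of-the-envelope comparison helps (all numbers below are invented purely for illustration):

    # Hypothetical prices; real cloud pricing varies widely.
    HOURS_PER_MONTH = 730
    OWN_SERVER_MONTHLY = 400.0            # amortized hardware + admin for an always-on box
    CLOUD_RATE_PER_INSTANCE_HOUR = 0.10   # on-demand rate for a comparable instance

    # A bursty workload: 1 instance most of the month, 8 instances for 50 peak hours.
    baseline_hours = (HOURS_PER_MONTH - 50) * 1
    peak_hours = 50 * 8
    cloud_monthly = (baseline_hours + peak_hours) * CLOUD_RATE_PER_INSTANCE_HOUR

    print(f"Own server: ${OWN_SERVER_MONTHLY:.2f}/month regardless of load")
    print(f"Cloud:      ${cloud_monthly:.2f}/month, tracking actual usage")

The numbers are made up, but the shape of the argument is real: the always-on box costs the same whether it is idle or saturated, while the on-demand bill follows the load.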

Are services like AWS secure enough for an organization that is highly responsible for its clients' privacy?

Okay, so we have to store our clients' private medical records online, and the web site will also get a lot of requests, so we have to use some scaling solution.
We can have our own share of a datacenter and run something like Zend Server Cluster Manager on it, but services like Amazon EC2 look a lot easier to manage, and they are far cheaper too. We just don't know if they are secure enough!
Are they?
Any better solutions?
More info: I know that there is a reference server, and it's highly secured; without it, even the decrypted data on the cloud server would be useless, just a bunch of meaningless numbers that aren't even linked to each other.
Making the question more clear: Are there any secure storage and process service providers that guarantee there won't be leaks from their side?
First off, you should contact AWS and explain what you're trying to build and the kind of data you deal with. As far as I remember, they have provisions in place to accommodate most, if not all, privacy concerns.
E.g., in Germany such a thing is called an "Auftragsdatenvereinbarung" (a data-processing agreement). I have no idea how this relates and translates to other countries. AWS offers this.
But no matter whether you go with AWS or another cloud computing service, the issue stays the same. And therefore, what is possible is probably best answered by a lawyer; based on that hopefully well-educated (and expensive) recommendation, I'd go cloud shopping, or maybe not. If you're in the EU, there are a ton of regulations, especially in regard to medical records, and some countries add more on top.
From what I remember, it's basically required to have end-to-end encryption when you deal with these things.
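As a rough illustration of the "encrypt it before it ever leaves your hands" idea (this is only a sketch; key management, rotation, and auditing are the hard parts and are not shown, and the record format is a placeholder):

    # Uses the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # keep this on the reference server, never in the cloud
    cipher = Fernet(key)

    record = b'{"patient": "pseudonym-8431", "diagnosis": "..."}'
    ciphertext = cipher.encrypt(record)  # only this ciphertext is stored with the cloud provider

    # Later, back on a system you control:
    assert cipher.decrypt(ciphertext) == record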
Last but not least, security also depends on the setup, the application, etc.
For complete security, I'd recommend a system that is not connected to the Internet. All others can fail.
You should never outsource highly sensitive data. Your company, and only your company, should have access to it, in both software and hardware terms. Even if your hosting provider is generally trusted, someone there might simply steal hardware.
Depending on the size of your company, you should have your own custom servers, preferably even inaccessible to the technicians in your datacenter (supposing you don't own the datacenter ;).
So the more important the data is, the fewer outside people should have access to it by any means. In the best case, you can name every person who has access to it in any way.
(Update: This might not apply to anonymized data, but as you're speaking of customers, I don't think that applies here?)
(On a third thought: there are probably laws to take into consideration regarding how you have to handle that kind of information ;)

Software and Security - do you follow specific guidelines?

As part of a PCI-DSS audit, we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security. You can expect infinite attackers. You have to stop them all - even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well-known secure software development guidelines. Howard & LeBlanc, "Writing Secure Code", is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page); one concrete example of the kind of item these checklists cover appears after this list.
Use independent evaluation (e.g., http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design and document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
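As one concrete example of the kind of item those checklists (and PCI-DSS itself) call out: full card numbers should never reach your logs. A minimal sketch (Python; the helper and the regex are mine, not from any particular library):

    import re

    PAN_PATTERN = re.compile(r"\b(\d{6})\d{3,9}(\d{4})\b")  # rough card-number shape

    def mask_pans(message):
        """Keep at most the first six and last four digits of anything that
        looks like a card number; the middle digits never hit the log."""
        return PAN_PATTERN.sub(lambda m: m.group(1) + "******" + m.group(2), message)

    print(mask_pans("payment declined for 4111111111111111"))
    # -> payment declined for 411111******1111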
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions from a very broad perspective:
The only solution to prevent violations from the inside user is to educate the user, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important, as this is where the most severe security violations always occur, whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago, security experts would not have considered looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security on your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPsec to reach all automation services behind the firewall, and limit points of access to known points behind an IDS or IPS. Basically, limit access to everything, encrypt that access, and assume every access request is potentially malicious.
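On the "protect your database from code injection" point, the single most effective habit is parameterized queries instead of string concatenation. A minimal sketch (Python with sqlite3 standing in for whatever database you actually run):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, card_last4 TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', '1111')")

    user_input = "alice' OR '1'='1"  # hostile input

    # Vulnerable: the input becomes part of the SQL text itself.
    # conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

    # Safe: the driver passes the value separately from the query text.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print(rows)  # [] -- the hostile string matches nothing instead of everything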
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX, and require administrator privileges to activate it. Patch all software that executes any sort of client-side scripting, in a test lab, no later than 48 hours after the patches are released by the vendor. If the tests do not show interference with the company's authorized software configuration, then deploy the patches immediately. The only solution for memory corruption vulnerabilities is to patch your software. This software may include Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, make sure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but even more important is that it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. This problem is easily realized with the advance of AJAX, since information can be dynamically sent to an anonymous third party in violation of the same-origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails, hire some CISSP-certified security managers who have the management experience and the balls to speak directly to your company executives. If your leadership is not willing to take security seriously, then your company will never meet PCI compliance.

How long to retain an archive of web server traffic logs?

We've currently got four web servers in a farm generating about 100 MB of IIS web logs per day. These can be compressed pretty efficiently, down to somewhere around 5% of their size.
We are planning to use waRmZip to move them off the servers and onto a SAN. After a week or so we can be confident we don't have any technical issues to investigate, so the only other use would be trend analysis as a complement to Google Analytics.
What retention periods do people recommend? Are there any legal requirements to keep this data?
Legal requirements will depend on your country, how much you're logging, and quite possibly the nature of your business. Talk to your company's lawyers - legal advice on SO is likely to be worth what you pay for it.
If you're only storing about 5 MB per day (100 MB compressed to roughly 5% of its size), that's under 2 GB a year, so you should be able to store the logs for basically as long as you want without worrying on the technical front.
Please consider the sensitivity of your web log data as well. I have no idea whether access to your web apps would be considered sensitive if made public, but you need to realize that your web logs contain enough information to potentially identify individuals (especially in conjunction with other information available elsewhere). Your privacy policies should reflect how long you retain these logs and the purposes to which they will be put. Google, I think, recently decided to anonymize its logs after 9 months to help protect user privacy. Granted, their situation is a little different since they collect so much information, but you need to consider your customers' needs as well as your own when determining how long, and in what form, to keep your logs.
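If you do keep the logs long-term, anonymizing them is cheap to do; the usual trick is to truncate or zero part of the client IP. A small sketch (Python; the field position assumes a W3C-style IIS log line and is only illustrative, since it depends on your configured log format):

    def anonymize_ip(ip):
        """Zero the last octet of an IPv4 address; crude but effective."""
        parts = ip.split(".")
        if len(parts) == 4:
            parts[3] = "0"
        return ".".join(parts)

    def anonymize_line(line, ip_field=8):
        """Rewrite one space-delimited log line, assuming the client IP sits
        in column ip_field (position varies with the IIS log definition)."""
        fields = line.split(" ")
        if len(fields) > ip_field:
            fields[ip_field] = anonymize_ip(fields[ip_field])
        return " ".join(fields)

    print(anonymize_line("2009-06-01 12:00:00 GET /index.htm 200 80 - 500 203.0.113.42"))
    # -> 2009-06-01 12:00:00 GET /index.htm 200 80 - 500 203.0.113.0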
I tend to keep mine forever. That's mainly for trend analysis, because Google Analytics misses some visitors (non-JavaScript ones).
