Applied security - security

Background
Terminals are a combination of hardware and software. A terminal's main responsibilities are to
- collect data (with its sensors)
- process and transmit the collected data to a data server over the Internet.
The terminal has Internet access via either WLAN or GPRS. The terminals run embedded Linux.
Things to consider from a security perspective:
Transmission of collected data over the air to the data server.
Remote software updates over the air (controlled by the data server).
Local software updates.
Identification and authentication of the terminal and the server.
What else should be considered in this type of system?
My question is divided into 3 parts.
Firstly, what kinds of issues should be considered when designing security for this kind of system?
Secondly, what ciphers, key exchange mechanisms, and security techniques could be applied to the different parts identified in the first question?
And lastly, are there any good books/resources available covering this matter? Specifically, ones targeting this type of application area (or similar) with practical advice on solutions.
I know my questions are a little open-ended. I'm familiar with different ciphers (symmetric and asymmetric), but I have found it particularly difficult to find any practical guidance on implementing security in real-world systems. I hope this question gets some traffic; I'm sure there are many of us out there facing similar challenges.
I can provide more details; just point out where more information is required.

The really important question is the first part of your question: "What kinds of issues should be thought about?" Only you can determine that, via two tools: a threat model and a security model.
A threat model is about who is trying to attack the system: Script Kiddies? Skilled Organized Crime Hackers? The evil overlord's government?
A security model describes what you are trying to protect. Should anybody be able to read the data? Should you be able to detect injection of false data?
First come up with a plan on what your requirements are, then look for technical solutions.
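To make the second part of the question concrete: for the terminal-to-server channel and for mutual identification, TLS with client certificates is a common fit. Below is a minimal, hypothetical sketch using Python's standard ssl module; the host name, port, and certificate file names are placeholders invented for this example, and a real deployment would also need provisioning and revocation of the terminal credentials.

```python
# Hedged sketch of mutual TLS between a terminal and the data server,
# using Python's standard ssl module. Host, port, and file names are
# placeholders, not values from the original question.
import socket
import ssl

SERVER_HOST = "data.example.com"   # placeholder server address
SERVER_PORT = 8443                 # placeholder port

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# Trust only the CA that issued the data server's certificate.
context.load_verify_locations(cafile="server-ca.pem")
# Present the terminal's own certificate so the server can authenticate it.
context.load_cert_chain(certfile="terminal-cert.pem", keyfile="terminal-key.pem")
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection((SERVER_HOST, SERVER_PORT)) as sock:
    # PROTOCOL_TLS_CLIENT enables certificate and hostname checks by default.
    with context.wrap_socket(sock, server_hostname=SERVER_HOST) as tls:
        tls.sendall(b'{"sensor": "temp", "value": 21.5}')
```

On the server side, requiring client certificates (verify_mode CERT_REQUIRED) covers the "identification and authentication of terminal and server" item; for the over-the-air updates, signing the update packages adds an independent check, so a compromised channel alone cannot push malicious firmware.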

Related

Do IoT devices provide real privacy of data?

So we are a startup that has been doing most of the work in the cloud, and we are looking at moving processing onto the device itself, so owners of the devices don't lose functionality once we decide to move on.
But there is a question we are debating:
Do IoT devices provide real privacy of data?
I know "real" is very subjective. Please suggest any supportive studies either way. It seems like a broad question, but...
I think a lot of it would depend on what data you are retrieving from these devices and how you are handling it in the cloud.
Also, I think it would depend on the hardware of the device: how secure it is from that point of view.
This is way too broad. A large proportion of IoT devices are horribly insecure and also offer little in the way of privacy. So if you're talking about existing devices, then the answer to your question is no.
That doesn't mean that IoT is inherently insecure or privacy-invading, just that the vast majority of devices have chosen to make it so, undermining trust in all of it - look at all the stuff that Google and Amazon have been trying to get away with.
You can of course build your own, but when you say "once we decide to move on", it suggests that you want these devices to operate peer-to-peer without a cloud connection (i.e. when there's nobody paying for servers). This is entirely possible using things like Tor and the Signal protocol, but it's not easy, and you're unlikely to find a comprehensive answer on Stack Overflow. You're going to need some good privacy- and security-aware developers to make that work, and they won't be cheap.

How do we "test" our security policy?

DISCLAIMER: At my place of work we are aware that, as none of us are security experts, we can't avoid hiring security consultants to get a true picture of our security status and remedial actions for vulnerabilities. This question is asked in the spirit of trying to be a little less dumb and a bit more aware of the issues.
In my place of work, a small business with a sum total of 7 employees, we need to do some work on reviewing our application for security flaws and vulnerabilities. We have identified two main requirements in a security tester:
They are competent, thorough and know their stuff.
They are able to leave us with a clear idea of the work we need to do to make our security better.
This process will be iterative so we will have a scan, do the remedial work and repeat. This will be a regular occurrence going forward.
The problem we have is: How do we know 1? And, even if we're reasonably sure of 1, how on earth do we proceed to 2?
Our first idea was to do some light security scanning on our code ourselves and see if we could identify any definite issues. Then, if the security consultants we choose identify those issues and a few more we're well on the way to 1 and 2. The only problem is that I've been trawling the interweb for days now looking at OWASP, Metasploit, w3af, burp, wikto, sectools (and Stack Overflow, natch)...
As far as I can tell security software seems to come in two flavours, complex open source security stuff for security experts and expensive complex proprietary security stuff for security experts.
I am not a security expert, I am an intermediate level business systems programmer looking for guidance. Is there no approachable scanner type software or similar which will give me an overview of the state of my codebase? Am I just going to have to take a part time degree in order to understand this stuff at a brass tacks level? Or am I missing something?
I read that you're first interested in hiring someone and knowing they're good. Well, you've got a few options, but the easiest is to talk to someone in the know. I've worked with a few companies, and can tell you that Neohapsis and Matasano are very good (though it'll cost you).
The second option you have is to research the company. Who have they worked with? Can they give you references? What do the references have to say? What vulns has the company published to the world? What was the community response (were they shouted down, was the vuln considered minor, or was it game changing, like the SSL MitM vuln)? Have any of the company's employees talked at a conference? Was it a respected conference? Was the talk considered good by the attendees?
Second, you're interested in understanding the vulnerabilities that are reported to you. A good testing company will (a) give you a document describing what they did and did not do, what vulnerabilities they found, how to reproduce the vulnerabilities, and how they know the vulnerability is valid, and (b) will meet with you (possibly teleconference) to review the vulnerabilities and explain how the vulns work, and (c) will have written into the contract that they will retest once after you fix the vulns to validate that they are truly fixed.
You can also get training for your developers (or hire someone who has a good reputation in the field) so they can understand what's what. SafeLight is a good company. SANS offers good training, too. You can use training tools like OWASP's webgoat, which walks you through common web app vulns. Or you can do some reading - NIST SP 800 is a freely downloadable fantastic intro to computer security concepts, and the Hacking Exposed series do a good job teaching how to do the very basic stuff. After that Microsoft Press offers a great set of books about security and security development lifecycle activities. SafeCode offers some good, short recommendations.
Hope this helps!
If you can afford to hire expert security consultants, then that may be your best bet given that your in-house security skills are low.
If not, there is no escaping the fact that you are going to need to understand more about security, how to identify threats, and how to write tests for common security exploits like XSS, SQL injection, CSRF, and so on.
Automated security vulnerability tools (static code analysis and runtime vulnerability scanning) are useful, but they are only ever going to be one piece in your overall security approach. Automated tools do not identify all exploits, and they can leave you with a false sense of security, or a huge list of false positives. Without the ability to interpret the output of these tools, you might as well not have them.
One tool I would recommend for external vulnerability scanning is QualysGuard. They have a huge and up to date database of common exploits that they can scan for in public facing web applications, web servers, DNS servers, firewalls, VPN servers etc., and the output of the reports usually leaves you with a very clear idea of what is wrong, and what to do about it. But again, this would only be one part in your overall security approach.
If you want to take a holistic approach to security that covers not only the components in your network, applications, databases, and so on, but also the processes (eg. change management, data retention policy, patching) you may find the PCI-DSS specification to be a useful guide, even if you are not storing credit card numbers.
Wow. I wasn't really expecting this little activity.
I may have to alter this answer depending on my experiences, but in continuing to wade through the acres of verbiage on my quest for something approachable, I happened on a project which has been brought into the OWASP fold:
http://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
It boasts, and I quote from the project documentation's introduction:
[ZAP] is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who a (sic) new to penetration testing.
EDIT: After having a swift play with ZAP this morning, although I couldn't directly switch on the attack mode on our site right away, I can see that the proxy works in a manner very similar to OWASP's WebScarab (I would link, but lack of rep and the anti-spam rules prevent this). WebScarab is more technically oriented, it seems; looking over the feature list, Scarab does more stuff, but it doesn't have a pen-test vulnerability scanner. I'll update more once I've worked out how to have a go with the vulnerability scanner.
Anyone else who would like to pitch in and have a go would be welcome to do so and comment or answer as well below.
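For anyone who wants to script this kind of scan rather than drive the GUI, ZAP can also be automated through its API. A hedged sketch, assuming the python-owasp-zap-v2.4 client package and a ZAP daemon already listening on localhost:8080; the target URL and API key are placeholders, and you should only scan systems you are authorized to test.

```python
# Hedged sketch: driving a local ZAP instance from its Python client
# (package: python-owasp-zap-v2.4). Assumes ZAP is already running in
# daemon mode on localhost:8080. Target URL and API key are placeholders.
import time
from zapv2 import ZAPv2

TARGET = "http://localhost:3000"   # placeholder: your own test deployment
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

zap.urlopen(TARGET)                 # seed the site tree
scan_id = zap.spider.scan(TARGET)   # crawl for URLs
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(TARGET)    # active vulnerability scan
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=TARGET):
    print(alert["risk"], alert["alert"], alert["url"])
```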

What should every programmer know about security? [closed]

I am an IT student and I am now in my 3rd year at university. Until now we've been studying a lot of subjects related to computers in general (programming, algorithms, computer architecture, maths, etc.).
I am very sure that nobody can learn everything about security, but surely there is a "minimum" knowledge every programmer or IT student should have, and my question is: what is this minimum knowledge?
Can you suggest some e-books or courses or anything that can help me start down this road?
Principles to keep in mind if you want your applications to be secure:
Never trust any input!
Validate input from all untrusted sources - use whitelists not blacklists (see the sketch after this list)
Plan for security from the start - it's not something you can bolt on at the end
Keep it simple - complexity increases the likelihood of security holes
Keep your attack surface to a minimum
Make sure you fail securely
Use defence in depth
Adhere to the principle of least privilege
Use threat modelling
Compartmentalize - so your system is not all or nothing
Hiding secrets is hard - and secrets hidden in code won't stay secret for long
Don't write your own crypto
Using crypto doesn't mean you're secure (attackers will look for a weaker link)
Be aware of buffer overflows and how to protect against them
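To make the whitelist principle above concrete, here is a minimal sketch in Python. The field names and patterns are invented for the example; the point is that the code states exactly what is acceptable and rejects everything else.

```python
# Minimal illustration of whitelist validation: define exactly what is
# acceptable and reject everything else, instead of trying to enumerate
# bad inputs. Field names and patterns are made up for this example.
import re

USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")  # the whitelist
ALLOWED_ROLES = {"viewer", "editor", "admin"}

def validate_signup(username: str, role: str) -> None:
    """Raise ValueError unless every field matches the whitelist."""
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("username must be 3-32 chars: a-z, 0-9, _")
    if role not in ALLOWED_ROLES:
        raise ValueError(f"role must be one of {sorted(ALLOWED_ROLES)}")

validate_signup("alice_01", "editor")   # passes
# validate_signup("alice'; DROP TABLE--", "editor")  # raises ValueError
```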
There are some excellent books and articles online about making your applications secure:
Writing Secure Code 2nd Edition - I think every programmer should read this
Building Secure Software: How to Avoid Security Problems the Right Way
Secure Programming Cookbook
Exploiting Software
Security Engineering - an excellent read
Secure Programming for Linux and Unix HOWTO
Train your developers on application security best practices
Codebashing (paid)
Security Innovation(paid)
Security Compass (paid)
OWASP WebGoat (free)
Rule #1 of security for programmers: Don't roll your own
Unless you are yourself a security expert and/or cryptographer, always use a well-designed, well-tested, and mature security platform, framework, or library to do the work for you. These things have spent years being thought out, patched, updated, and examined by experts and hackers alike. You want to gain those advantages, not dismiss them by trying to reinvent the wheel.
Now, that's not to say you don't need to learn anything about security. You certainly need to know enough to understand what you're doing and make sure you're using the tools correctly. However, if you ever find yourself about to start writing your own cryptography algorithm, authentication system, input sanitizer, etc, stop, take a step back, and remember rule #1.
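As a small illustration of rule #1, here is a sketch that authenticates messages with the standard library's HMAC implementation instead of a home-grown MAC. The key handling is simplified for the example; in practice the key would come from a key store, not be generated inline.

```python
# Hedged example of "don't roll your own": message authentication with
# the stdlib's HMAC, compared in constant time. Key handling simplified.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)          # in reality: load from a key store

def sign(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids the timing side channel a naive == invites
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"amount=100&to=alice")
assert verify(b"amount=100&to=alice", tag)
assert not verify(b"amount=9999&to=mallory", tag)
```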
Every programmer should know how to write exploit code.
Without knowing how systems are exploited, you are only accidentally stopping vulnerabilities. Knowing how to patch code is absolutely meaningless unless you know how to test your patches. Security isn't just a bunch of thought experiments; you must be scientific and test your experiments.
Security is a process, not a product.
Many seem to forget about this obvious matter of fact.
I suggest reviewing CWE/SANS TOP 25 Most Dangerous Programming Errors. It was updated for 2010 with the promise of regular updates in the future. The 2009 revision is available as well.
From http://cwe.mitre.org/top25/index.html
The 2010 CWE/SANS Top 25 Most Dangerous Programming Errors is a list of the most widespread and critical programming errors that can lead to serious software vulnerabilities. They are often easy to find, and easy to exploit. They are dangerous because they will frequently allow attackers to completely take over the software, steal data, or prevent the software from working at all.
The Top 25 list is a tool for education and awareness to help programmers to prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped. Software customers can use the same list to help them to ask for more secure software. Researchers in software security can use the Top 25 to focus on a narrow but important subset of all known security weaknesses. Finally, software managers and CIOs can use the Top 25 list as a measuring stick of progress in their efforts to secure their software.
A good starter course might be the MIT course in Computer Networks and Security. One thing that I would suggest is to not forget about privacy. Privacy, in some senses, is really foundational to security and isn't often covered in technical courses on security. You might find some material on privacy in this course on Ethics and the Law as it relates to the internet.
The Web Security team at Mozilla put together a great guide, which we abide by in the development of our sites and services.
The importance of secure defaults in frameworks and APIs:
Lots of early web frameworks didn't escape HTML by default in templates and had XSS problems because of this
Lots of early web frameworks made it easier to concatenate SQL than to create parameterized queries, leading to lots of SQL injection bugs (see the sketch after this list)
Some versions of Erlang (R13B, maybe others) don't verify SSL peer certificates by default, and there is probably lots of Erlang code that is susceptible to SSL MITM attacks
Java's XSLT transformer by default allows execution of arbitrary Java code. There have been many serious security bugs created by this.
Java's XML parsing APIs by default allow the parsed document to read arbitrary files on the filesystem. More fun :)
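To illustrate the SQL point in the list above, here is a minimal sketch using Python's standard sqlite3 module; the schema and data are invented for the example. The parameterized form sends the value separately from the statement, so user input can never change the query's structure.

```python
# Secure default the list argues for: parameterized queries instead of
# string concatenation. Uses stdlib sqlite3; schema invented for the demo.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Vulnerable: attacker text becomes part of the SQL statement itself.
# rows = conn.execute("SELECT email FROM users WHERE name = '" + user_input + "'")

# Safe: the driver passes the value separately from the statement.
rows = conn.execute("SELECT email FROM users WHERE name = ?", (user_input,))
print(rows.fetchall())   # [] -- the injection string matches no user
```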
You should know about the three A's: Authentication, Authorization, Audit. A classical mistake is to authenticate a user while not checking whether that user is authorized to perform some action, so a user may look at other users' private photos (the mistake Diaspora made). Many, many more people forget about Audit: in a secure system, you need to be able to tell who did what and when.
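A toy sketch of that Diaspora-style mistake (all names and storage are invented for the illustration): the user is authenticated, but the code must still check that the photo belongs to them, and a secure system would also audit-log the failed attempt.

```python
# Authentication tells us who the user is; authorization still has to
# check whether this photo is theirs. Types and storage are made up.
from dataclasses import dataclass

@dataclass
class Photo:
    id: int
    owner_id: int
    data: bytes

PHOTOS = {1: Photo(1, owner_id=42, data=b"...")}

def get_photo(current_user_id: int, photo_id: int) -> bytes:
    photo = PHOTOS[photo_id]
    # The easily forgotten authorization check:
    if photo.owner_id != current_user_id:
        raise PermissionError("not your photo")  # and audit-log the attempt
    return photo.data

get_photo(42, 1)        # owner: allowed
# get_photo(7, 1)       # another authenticated user: PermissionError
```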
Remember that you (the programmer) have to secure all parts, but the attacker only has to succeed in finding one kink in your armour.
Security is an example of "unknown unknowns". Sometimes you won't know what the possible security flaws are (until afterwards).
The difference between a bug and a security hole depends on the intelligence of the attacker.
I would add the following:
How digital signatures and digital certificates work
What's sandboxing
Understand how different attack vectors work:
Buffer overflows/underflows/etc on native code
Social engineering
DNS spoofing
Man-in-the-middle
CSRF/XSS et al
SQL injection
Crypto attacks (ex: exploiting weak crypto algorithms such as DES; see the sketch below)
Program/Framework errors (ex: GitHub's latest security flaw)
You can easily google for all of this. This will give you a good foundation.
If you want to see web app vulnerabilities, there's a project called Google Gruyere that shows you how to exploit a working web app.
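On the weak-crypto item in the list above, the usual remedy is modern authenticated encryption rather than algorithms like DES. A hedged sketch using the third-party `cryptography` package (pip install cryptography):

```python
# Authenticated encryption with AES-256-GCM via the `cryptography`
# package, as an alternative to weak algorithms such as DES.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)   # MUST be unique per message under a given key
ciphertext = aesgcm.encrypt(nonce, b"secret report", None)

# Decryption verifies integrity too: tampering raises InvalidTag.
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"secret report"
```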
When you are building enterprise software, or any software of your own, you should think like a hacker. As we know, hackers are not experts in everything either, but when they find a vulnerability they start digging into it, gathering information about everything, and finally attack the software. To prevent such attacks, follow some well-known rules, like:
Always try to break your own code (use cheat sheets and google things for more information).
Stay up to date on security flaws in your programming field.
And, as mentioned above, never trust any type of user or automated input.
Use open source applications (most of their security flaws are known and solved).
You can find more security resources at the following links:
owasp security
CERT Security
SANS Security
netcraft
SecuritySpace
openwall
PHP Sec
thehackernews(keep updating yourself)
For more information, google your application vendor's security flaws.
Why is it important? It is all about trade-offs.
Cryptography is largely a distraction from security.
For general information on security, I highly recommend reading Bruce Schneier. He's got a website, his crypto-gram newsletter, several books, and has done lots of interviews.
I would also get familiar with social engineering (and Kevin Mitnick).
For a good (and pretty entertaining) book on how security plays out in the real world, I would recommend the excellent (although a bit dated) 'The Cuckoo's Egg' by Cliff Stoll.
Also be sure to check out the OWASP Top 10 List for a categorization of all the main attack vectors/vulnerabilities.
These things are fascinating to read about. Learning to think like an attacker will train you in what to think about as you're writing your own code.
Salt and hash your users' passwords. Never save them in plaintext in your database.
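A minimal standard-library sketch of that advice, using PBKDF2 with a per-user random salt. Treat it as an illustration of the principle; dedicated schemes such as bcrypt, scrypt, or Argon2 are generally preferred in production.

```python
# Salted password hashing with stdlib PBKDF2. Iteration count is an
# assumption to tune for your hardware; store salt + digest, never the
# plaintext password.
import hashlib
import hmac
import secrets

ITERATIONS = 600_000   # higher = slower for attackers to brute-force

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)   # unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("hunter2", salt, digest)
```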
Just wanted to share this for web developers:
security-guide-for-developers: https://github.com/FallibleInc/security-guide-for-developers

Software and Security - do you follow specific guidelines?

As part of a PCI-DSS audit we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security. You can expect infinite attackers. You have to stop them all - even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well known secure software development guidelines. Howard & Le Blanc "Writing Secure Code" is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (eg. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design & document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions, from a very broad perspective:
The only solution to prevent violations from the inside user is to educate the user, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important as this is where the most severe security violations always occur whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago security experts would not consider looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security in your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPsec to reach all automation services behind the firewall and limit points of access to known points behind an IDS or IPS. Basically, limit access to everything, encrypt that access, and assume every access request is potentially malicious.
Over 95% of reported security vulnerabilities are related to client side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX and require administrator privileges to activate ActiveX. Patch all software that executes any sort of client side scripting in a test lab no later than 48 hours after the patches are released by the vendor. If the tests do not show interference with the company's authorized software configuration, then deploy the patches immediately. The only solution for memory corruption vulnerabilities is to patch your software. This software may include: Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, ensure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but even more importantly it is a gross management failure. Web developers must be made to understand that client side scripting is inherently open to exploitation, even JavaScript. This problem is easily realized with the advance of AJAX, since information can be dynamically injected to an anonymous third party in violation of the same origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails, hire some CISSP-certified security managers who have the management experience, and the nerve, to speak directly to your company executives. If your leadership is not willing to take security seriously then your company will never meet PCI compliance.

What are the best programmatic security controls and design patterns?

There's a lot of security advice out there to tell programmers what not to do. What in your opinion are the best practices that should be followed when coding for good security?
Please add your suggested security control / design pattern below. Suggested format is a bold headline summarising the idea, followed by a description and examples e.g.:
Deny by default
Deny everything that is not explicitly permitted...
Please vote up or comment with improvements rather than duplicating an existing answer. Please also put different patterns and controls in their own answer rather than adding an answer with your 3 or 4 preferred controls.
edit: I am making this a community wiki to encourage voting.
Principle of Least Privilege -- a process should only hold those privileges it actually needs, and should only hold those privileges for the shortest time necessary. So, for example, it's better to use sudo make install than to su to open a shell and then work as superuser.
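A hypothetical Unix sketch of the same principle in code: perform the one privileged operation first, then permanently drop root before doing anything else. The service account name is a placeholder, and the process must be started as root for the bind to succeed.

```python
# Least privilege on a Unix-like system: do the privileged step, then
# irreversibly drop to an unprivileged account. "nobody" is a placeholder
# for a dedicated service user.
import os
import pwd
import socket

def drop_privileges(username: str = "nobody") -> None:
    entry = pwd.getpwnam(username)
    os.setgroups([])          # shed supplementary groups
    os.setgid(entry.pw_gid)   # group first, while we still can
    os.setuid(entry.pw_uid)   # after this, root is gone for good

# Privileged step: bind a port below 1024 (requires root).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 443))
sock.listen()

drop_privileges()             # everything after this runs unprivileged
```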
All these ideas that people are listing (isolation, least privilege, white-listing) are tools.
But you first have to know what "security" means for your application. Often it means something like
Availability: The program will not fail to serve one client because another client submitted bad data.
Privacy: The program will not leak one user's data to another user
Isolation: The program will not interact with data the user did not intend it to.
Reviewability: The program obviously functions correctly -- a desirable property of a vote counter.
Trusted Path: The user knows which entity they are interacting with.
Once you know what security means for your application, then you can start designing around that.
One design practice that doesn't get mentioned as often as it should is Object Capabilities.
Many secure systems need to make authorizing decisions -- should this piece of code be able to access this file or open a socket to that machine.
Access Control Lists are one way to do that -- specify the files that can be accessed. Such systems though require a lot of maintenance overhead. They work for security agencies where people have clearances, and they work for databases where the company deploying the database hires a DB admin. But they work poorly for secure end-user software since the user often has neither the skills nor the inclination to keep lists up to date.
Object Capabilities solve this problem by piggy-backing access decisions on object references -- by using all the work that programmers already do in well-designed object-oriented systems to minimize the amount of authority any individual piece of code has. See CapDesk for an example of how this works in practice.
DARPA ran a secure systems design experiment called the DARPA Browser project which found that a system designed this way -- although it had the same rate of bugs as other Object Oriented systems -- had a far lower rate of exploitable vulnerabilities. Since the designers followed POLA using object capabilities, it was much harder for attackers to find a way to use a bug to compromise the system.
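A rough, language-level illustration of the object-capability idea (the names are made up): instead of giving a component ambient authority (the whole filesystem plus a path), the caller hands it an object that embodies exactly the authority it needs.

```python
import io

# Ambient-authority style: this function can open *any* path the process
# can reach, so a bug in it can read anything.
def word_count_ambient(path: str) -> int:
    with open(path) as f:
        return len(f.read().split())

# Capability style: the function can only read from the one object it is
# given; the caller decides exactly which authority to hand over.
def word_count(readable: io.TextIOBase) -> int:
    return len(readable.read().split())

print(word_count(io.StringIO("three short words")))   # 3
# In a real program the capability might be an open, read-only file:
# with open("report.txt") as capability:       # placeholder file name
#     print(word_count(capability))
```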
White listing
Opt in what you know you accept
(Yeah, I know, it's very similar to "deny by default", but I like to use positive thinking.)
Model threats before making security design decisions: think about what possible threats there might be, and how likely they are. For example, someone stealing your computer is more likely with a laptop than with a desktop. Then worry about the more probable threats first.
Limit the "attack surface". Expose your system to the fewest attacks possible, via firewalls, limited access, etc.
Remember physical security. If someone can take your hard drive, that may be the most effective attack of all.
(I recall an intrusion red team exercise in which we showed up with a clipboard and an official-looking form, and walked away with the entire "secure" system.)
Encryption ≠ security.
Hire security professionals
Security is a specialized skill. Don't try to do it yourself. If you can't afford to contract out your security, then at least hire a professional to test your implementation.
Reuse proven code
Use proven encryption algorithms, cryptographic random number generators, hash functions, authentication schemes, access control systems, rather than rolling your own.
Design security in from the start
It's a lot easier to get security wrong when you're adding it to an existing system.
Isolation. Code should have strong isolation between, eg, processes in order that failures in one component can't easily compromise others.
Express risk and hazard in terms of cost. Money. It concentrates the mind wonderfully.
A good understanding of the underlying assumptions of crypto building blocks can be important. E.g., stream ciphers such as RC4 are very useful, but can easily be used to build an insecure system (i.e., WEP and the like).
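A small worked example of the WEP-style failure mode alluded to above: reusing a stream cipher keystream. XORing two ciphertexts encrypted under the same keystream cancels the key completely, leaking the XOR of the plaintexts (random bytes stand in for RC4 output here).

```python
# Demonstrates why keystream reuse breaks stream ciphers: the key drops
# out of the equation entirely. Random bytes stand in for RC4 output.
import secrets

p1 = b"ATTACK AT DAWN!!"
p2 = b"RETREAT AT DUSK!"
keystream = secrets.token_bytes(len(p1))

c1 = bytes(a ^ b for a, b in zip(p1, keystream))
c2 = bytes(a ^ b for a, b in zip(p2, keystream))   # same keystream: the bug

leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))  # key has cancelled out
# With language statistics (crib dragging), an attacker can now recover
# both messages without ever learning the key.
```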
If you encrypt your data for security, the highest risk data in your enterprise becomes your keys. Lose the keys, and data is lost; compromise the keys and all your data is compromised.
Use risk to make security decisions. Once you determine the probability of different threats, then consider the harm that each could do. Risk is, by definition
R = Pe × H
where Pe is the probability of the undesired event, and H is the hazard, or the amount of harm that could come from the undesired event. For example, an event with a 10% annual probability and a potential harm of $50,000 carries an annualized risk of $5,000.
Separate concerns. Architect your system and design your code so that security-critical components can be kept together.
KISS (Keep It Simple, Stupid)
If you need to make a very convoluted and difficult-to-follow argument as to why your system is secure, then it probably isn't secure.
Formal security designs sometimes refer to a thing called the TCB (Trusted Computing Base). But even an informal design has something like this - the security enforcing part of your code, the part you can't avoid relying on. This needs to be well encapsulated and as simple and small as possible.

Resources