What, if any, is the difference between a software bug and a software vulnerability?
A bug is when a system isn't behaving as it's designed to behave.
A vulnerability is a way of abusing the system (most commonly in a security-related way) - whether that's due to a design fault or an implementation fault. In other words, something can have a vulnerability due to a defective design, even if the implementation of that design is perfect.
Vulnerability is a subset of bug.
A bug is any defect in a product.
A vulnerability is a bug that manifests as an opportunity for malicious use of the product. Vulnerabilities are generally not obvious and require ingenuity to exploit.
The two can sometimes overlap, but I'd say a "bug" is a mistake, while a "vulnerability" is, like the name suggests, a weakness.
From a programming perspective, I believe there is no difference between a bug and a vulnerability. They are both mistakes in the software.
However, from a security perspective, a vulnerability is a class of bugs that can be manipulated in some fashion by a malicious person.
A bug is a failure of your system to meet requirements.
Vulnerability is a subset of bug - it is when your system can be forced into a failure mode that does not meet requirements, usually by (ab)using your system (or something your system relies on) in an unexpected way.
A vulnerability usually results in a failure to meet a requirement in one or more of these areas:
confidentiality
integrity
availability
or you can combine the last two:
confidentiality
reliability (= integrity + availability)
If you use Bugzilla, anything you need to do something with is a bug ;)
In my eyes vulnerabilities are a subset of bugs that enable someone to perform a malicious or harmful operation with your software.
Bugs are just code that does not work properly (how you define "properly" is a matter of opinion).
Wikipedia:
In computer security, the term vulnerability is applied to a weakness in a system which allows an attacker to violate the integrity of that system.
For example, home computers are vulnerable to physical threats like floods and hand grenades, but these are not considered a "bug". In an enterprise environment, such threats are treated more seriously if the risk of the system shutting down is great enough, say for air traffic control or nuclear reactor management.
Business continuity planning/disaster recovery and high availability usually deal with physical threats and failures through redundant hardware and by distributing servers to remote locations.
Classification of a software defect (or "bug") can be subjective, since it depends on the intent of the software design and its requirements. A feature intended for one audience may be interpreted as a vulnerability by another if abused. For example, stackoverflow.com now discloses self-closed questions to those with 10k reputation. Some may say it is a vulnerability since it violates the common expectations of ordinary users (like I said, it's a subjective call).
A bug is the failure of software to meet requirements. I would consider these to be the ideal requirements, so it would make sense to say that there's a bug in the requirements analysis, although that's more debatable.
A vulnerability is a feature, intended or otherwise, that can be exploited maliciously. It is not necessarily a bug, provided that it was deliberate.
To change subjects, it is a vulnerability that my home wireless has a guessable WPA password, but that was a conscious choice, to facilitate use by my guests. That's an example of requirements leading to a vulnerability. If I'd entered a weak password because I didn't know better, that would have been a bug as well as a vulnerability.
I see the words compromise and exploit being used interchangeably. When I did basic Google searches for this question, the answers were about the difference between an exploit and a vulnerability, not the comparison I queried.
Exploit: software that makes use of a flaw in the operating system or host software to gain some kind of benefit.
Compromise: the act of damaging the integrity, confidentiality or availability of a computer; for example, through an exploit.
I just stumbled upon this article:
Lessons from Anonymous on cyberwar
A cyberwar is brewing, and Anonymous reprisal attacks on HBGary Federal show how deep the war goes.
http://english.aljazeera.net/indepth/opinion/2011/03/20113981026464808.html
Are there techniques to really protect our own software against highly sophisticated attacks?
For example, couldn't a simple CRC check be enough?
UPDATE: My question is not about protecting the software against cracking; it's about protecting the user's PC from being infiltrated. Why wouldn't it be enough to just check the CRC and refuse to run if it doesn't match?
There is no guaranteed full protection. If you want your software to be uncrackable or unexploitable, don't release it at all.
You want your software to verify checksums? Of what?
If the highly sophisticated techniques exploit problems in the network, the operating system or hardware; then no, user-ring software doesn't enter the picture so verifying checksums won't help.
If you want to checksum every incoming message, then you need to be able to enumerate all the possible safe incoming messages, in which case you've got an easier to use filter than checksums.
If you want to reject every incoming message that doesn't match a small list of checksums, then you've just turned one kind of attack into another: a denial of availability. This may be fine for some systems, but not for most.
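To make that concrete, here is a minimal sketch (in Java, with a made-up expected value) of the self-check the question proposes. CRC32 only catches accidental corruption: an attacker who can modify the binary can just as easily recompute the checksum or patch out the comparison, so it offers essentially no protection against deliberate tampering.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.CRC32;

public class CrcSelfCheck {
    // Hypothetical expected value; in a real build it would be recorded at release time.
    private static final long EXPECTED_CRC = 0x1234ABCDL;

    public static void main(String[] args) throws Exception {
        byte[] binary = Files.readAllBytes(Paths.get(args[0]));

        CRC32 crc = new CRC32();
        crc.update(binary);

        // CRC32 detects accidental corruption only. An attacker who can modify
        // the file can also recompute and patch this value (or remove the check),
        // so this is not a defence against deliberate tampering.
        if (crc.getValue() != EXPECTED_CRC) {
            System.err.println("Checksum mismatch - refusing to run.");
            System.exit(1);
        }
        System.out.println("Checksum OK (which proves very little).");
    }
}
```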
There is no protection against 0-days; after all, that's the very definition of a 0-day. ALL software is vulnerable.
But you can plan for failure. That is the point behind defense in depth.
Keep in mind the principle of least-privilege access. Security in layers.
I am an IT student, now in my 3rd year at university. Until now we've been studying a lot of subjects related to computers in general (programming, algorithms, computer architecture, maths, etc.).
I am very sure that nobody can learn everything about security, but surely there is a "minimum" knowledge every programmer or IT student should have, and my question is: what is this minimum knowledge?
Can you suggest some e-books, courses, or anything else that can help me start down this road?
Principles to keep in mind if you want your applications to be secure:
Never trust any input!
Validate input from all untrusted sources - use whitelists, not blacklists (see the sketch after this list)
Plan for security from the start - it's not something you can bolt on at the end
Keep it simple - complexity increases the likelihood of security holes
Keep your attack surface to a minimum
Make sure you fail securely
Use defence in depth
Adhere to the principle of least privilege
Use threat modelling
Compartmentalize - so your system is not all or nothing
Hiding secrets is hard - and secrets hidden in code won't stay secret for long
Don't write your own crypto
Using crypto doesn't mean you're secure (attackers will look for a weaker link)
Be aware of buffer overflows and how to protect against them
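A minimal sketch of the "whitelist, not blacklist" idea from the list above (the field name and pattern are made up for illustration): define exactly what is allowed rather than trying to enumerate everything that is dangerous.

```java
import java.util.regex.Pattern;

public class UsernameValidator {
    // Whitelist: describe precisely what is acceptable. A blacklist of
    // "dangerous" characters will always miss cases.
    private static final Pattern ALLOWED = Pattern.compile("^[A-Za-z0-9_]{3,20}$");

    public static boolean isValidUsername(String input) {
        return input != null && ALLOWED.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUsername("alice_01"));            // true
        System.out.println(isValidUsername("alice'; DROP TABLE"));  // false
    }
}
```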
There are some excellent books and articles online about making your applications secure:
Writing Secure Code 2nd Edition - I think every programmer should read this
Building Secure Software: How to Avoid Security Problems the Right Way
Secure Programming Cookbook
Exploiting Software
Security Engineering - an excellent read
Secure Programming for Linux and Unix HOWTO
Train your developers on application security best practices
Codebashing (paid)
Security Innovation(paid)
Security Compass (paid)
OWASP WebGoat (free)
Rule #1 of security for programmers: Don't roll your own
Unless you are yourself a security expert and/or cryptographer, always use a well-designed, well-tested, and mature security platform, framework, or library to do the work for you. These things have spent years being thought out, patched, updated, and examined by experts and hackers alike. You want to gain those advantages, not dismiss them by trying to reinvent the wheel.
Now, that's not to say you don't need to learn anything about security. You certainly need to know enough to understand what you're doing and make sure you're using the tools correctly. However, if you ever find yourself about to start writing your own cryptographic algorithm, authentication system, input sanitizer, etc., stop, take a step back, and remember rule #1.
Every programmer should know how to write exploit code.
Without knowing how systems are exploited, you only stop vulnerabilities by accident. Knowing how to patch code is meaningless unless you know how to test your patches. Security isn't just a bunch of thought experiments; you must be scientific and test your experiments.
Security is a process, not a product.
Many seem to forget this obvious fact.
I suggest reviewing CWE/SANS TOP 25 Most Dangerous Programming Errors. It was updated for 2010 with the promise of regular updates in the future. The 2009 revision is available as well.
From http://cwe.mitre.org/top25/index.html
The 2010 CWE/SANS Top 25 Most Dangerous Programming Errors is a list of the most widespread and critical programming errors that can lead to serious software vulnerabilities. They are often easy to find, and easy to exploit. They are dangerous because they will frequently allow attackers to completely take over the software, steal data, or prevent the software from working at all.
The Top 25 list is a tool for education and awareness to help programmers to prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped. Software customers can use the same list to help them to ask for more secure software. Researchers in software security can use the Top 25 to focus on a narrow but important subset of all known security weaknesses. Finally, software managers and CIOs can use the Top 25 list as a measuring stick of progress in their efforts to secure their software.
A good starter course might be the MIT course in Computer Networks and Security. One thing that I would suggest is to not forget about privacy. Privacy, in some senses, is really foundational to security and isn't often covered in technical courses on security. You might find some material on privacy in this course on Ethics and the Law as it relates to the internet.
The Web Security team at Mozilla put together a great guide, which we abide by in the development of our sites and services.
The importance of secure defaults in frameworks and APIs:
Lots of early web frameworks didn't escape HTML by default in templates and had XSS problems because of this
Lots of early web frameworks made it easier to concatenate SQL than to create parameterized queries, leading to lots of SQL injection bugs (a sketch follows this list)
Some versions of Erlang (R13B, maybe others) don't verify SSL peer certificates by default, and there is probably a lot of Erlang code susceptible to SSL MITM attacks
Java's XSLT transformer by default allows execution of arbitrary Java code. Many serious security bugs have been created by this.
Java's XML parsing APIs by default allow the parsed document to read arbitrary files on the filesystem. More fun :)
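A sketch of the parameterized-query point above, in plain JDBC (the table and column names are invented for the example). The safe version sends the parameter separately from the SQL text, so user input can never be interpreted as SQL.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {
    // Vulnerable: string concatenation lets "name" rewrite the query, e.g.
    //   "SELECT * FROM users WHERE name = '" + name + "'"
    // Safe: the driver binds the parameter, keeping data and SQL separate.
    public static ResultSet findUser(Connection conn, String name) throws SQLException {
        PreparedStatement stmt =
            conn.prepareStatement("SELECT id, name FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}
```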
You should know about the three A's: Authentication, Authorization, Audit. A classic mistake is to authenticate a user while not checking whether that user is authorized to perform some action, so a user may look at other users' private photos (the mistake Diaspora made). Many, many more people forget about Audit: in a secure system, you need to be able to tell who did what and when.
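A minimal, self-contained sketch of that distinction (all names are hypothetical): authentication is assumed to have happened already, the ownership check is the authorization step Diaspora missed, and every decision is written to an audit trail.

```java
import java.util.HashMap;
import java.util.Map;

public class PhotoService {
    // Hypothetical in-memory store: photoId -> ownerId.
    private final Map<Long, String> photoOwners = new HashMap<>();

    public void addPhoto(long photoId, String ownerId) {
        photoOwners.put(photoId, ownerId);
    }

    // Authentication answers "who is this?" (assumed done: we trust userId here).
    // Authorization answers "may this user see this photo?" - the check that is often missed.
    public String getPhoto(String userId, long photoId) {
        String ownerId = photoOwners.get(photoId);
        if (ownerId == null || !ownerId.equals(userId)) {
            audit(userId, "DENIED photo " + photoId);
            throw new SecurityException("Not authorized to view this photo");
        }
        audit(userId, "VIEWED photo " + photoId);
        return "photo-bytes-for-" + photoId;   // placeholder for the real content
    }

    // Audit: record who did what, and when.
    private void audit(String userId, String action) {
        System.out.println(System.currentTimeMillis() + " " + userId + " " + action);
    }

    public static void main(String[] args) {
        PhotoService service = new PhotoService();
        service.addPhoto(42L, "alice");
        System.out.println(service.getPhoto("alice", 42L)); // allowed
        service.getPhoto("bob", 42L);                       // throws SecurityException
    }
}
```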
Remember that you (the programmer) have to secure all parts, but the attacker only has to succeed in finding one chink in your armour.
Security is an example of "unknown unknowns". Sometimes you won't know what the possible security flaws are (until afterwards).
The difference between a bug and a security hole depends on the intelligence of the attacker.
I would add the following:
How digital signatures and digital certificates work
What's sandboxing
Understand how different attack vectors work:
Buffer overflows/underflows/etc on native code
Social engineering
DNS spoofing
Man-in-the-middle
CSRF/XSS et al. (output encoding for XSS is sketched after this list)
SQL injection
Crypto attacks (ex: exploiting weak crypto algorithms such as DES)
Program/Framework errors (ex: GitHub's latest security flaw)
You can easily google for all of this. This will give you a good foundation.
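As a small illustration of the XSS item above, here is a hand-rolled HTML escaper for the example only; in real code you should use your framework's or a well-tested library's encoder, per the "don't roll your own" rule.

```java
public class HtmlEscape {
    // Encode user-controlled text before placing it in an HTML page, so that
    // characters like < and " cannot break out of the surrounding markup.
    public static String escapeHtml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (char c : s.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String comment = "<script>alert('xss')</script>";
        System.out.println("<p>" + escapeHtml(comment) + "</p>");
        // Prints: <p>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
    }
}
```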
If you want to see web app vulnerabilities in action, there's a project called Google Gruyere that shows you how to exploit a working web app.
When you are building enterprise software, or any software of your own, you should think like a hacker. Hackers are not experts in everything either, but when they find a vulnerability they start digging into it, gathering information about everything, and finally attack the software. To prevent such attacks, follow some well-known rules:
Always try to break your own code (use cheat sheets and Google for more information).
Stay up to date on security flaws in your programming field.
As mentioned above, never trust any kind of user or automated input.
Use open-source applications (most of their security flaws are known and fixed).
You can find more security resources at the following links:
owasp security
CERT Security
SANS Security
netcraft
SecuritySpace
openwall
PHP Sec
thehackernews (keep updating yourself)
For more information, Google your application vendor's security flaws.
Why it is important.
It is all about trade-offs.
Cryptography is largely a distraction from security.
For general information on security, I highly recommend reading Bruce Schneier. He's got a website, his crypto-gram newsletter, several books, and has done lots of interviews.
I would also get familiar with social engineering (and Kevin Mitnick).
For a good (and pretty entertaining) book on how security plays out in the real world, I would recommend the excellent (although a bit dated) 'The Cuckoo's Egg' by Cliff Stoll.
Also be sure to check out the OWASP Top 10 List for a categorization of all the main attack vectors/vulnerabilities.
These things are fascinating to read about. Learning to think like an attacker will train you of what to think about as you're writing your own code.
Salt and hash your users' passwords. Never save them in plaintext in your database.
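A minimal sketch of that advice using PBKDF2 from the Java standard library (the iteration count and key length are illustrative; tune them for your hardware, or use a dedicated scheme such as bcrypt or Argon2).

```java
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {
    private static final int ITERATIONS = 100_000;   // illustrative; tune for your hardware
    private static final int KEY_LENGTH_BITS = 256;

    // Returns "salt:hash", both Base64-encoded, for storage in the database.
    public static String hashPassword(char[] password) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);           // unique random salt per user
        byte[] hash = pbkdf2(password, salt);
        return Base64.getEncoder().encodeToString(salt) + ":"
             + Base64.getEncoder().encodeToString(hash);
    }

    public static boolean verify(char[] password, String stored) throws Exception {
        String[] parts = stored.split(":");
        byte[] salt = Base64.getDecoder().decode(parts[0]);
        byte[] expected = Base64.getDecoder().decode(parts[1]);
        byte[] actual = pbkdf2(password, salt);
        // Constant-time comparison to avoid leaking where the mismatch occurs.
        return java.security.MessageDigest.isEqual(expected, actual);
    }

    private static byte[] pbkdf2(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_LENGTH_BITS);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        String stored = hashPassword("correct horse battery staple".toCharArray());
        System.out.println(verify("correct horse battery staple".toCharArray(), stored)); // true
        System.out.println(verify("wrong password".toCharArray(), stored));               // false
    }
}
```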
Just wanted to share this for web developers:
security-guide-for-developers: https://github.com/FallibleInc/security-guide-for-developers
As part of a PCI-DSS audit, we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside, we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security. You can expect infinite attackers. You have to stop them all - even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well known secure software development guidelines. Howard & Le Blanc "Writing Secure Code" is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (eg. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design & document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions from a very broad perspective:
The only solution to prevent violations from the inside user is to educate the user, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important as this is where the most severe security violations always occur whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago security experts would not consider looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security in your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPSEC to reach all automation services behind the firewall and limit points of access to known points behind an IDS or IPS. Basically, limit access to everything, encrypt that access, and assume every access request is potentially malicious.
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX and require administrator privileges to activate ActiveX. Patch all software that executes any sort of client-side scripting in a test lab no later than 48 hours after the patches are released by the vendor. If the tests do not show interference with the company's authorized software configuration, then deploy the patches immediately. The only solution for memory corruption vulnerabilities is to patch your software. This software may include: Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, ensure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but more importantly it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. This problem is easily realized with the advance of AJAX, since information can be dynamically injected to an anonymous third party in violation of the same-origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails, hire some CISSP-certified security managers who have the management experience and the nerve to speak directly to your company executives. If your leadership is not willing to take security seriously, your company will never meet PCI compliance.
There's a lot of security advice out there to tell programmers what not to do. What in your opinion are the best practices that should be followed when coding for good security?
Please add your suggested security control / design pattern below. Suggested format is a bold headline summarising the idea, followed by a description and examples e.g.:
Deny by default
Deny everything that is not explicitly permitted...
Please vote up or comment with improvements rather than duplicating an existing answer. Please also put different patterns and controls in their own answer rather than adding an answer with your 3 or 4 preferred controls.
edit: I am making this a community wiki to encourage voting.
Principle of Least Privilege -- a process should only hold those privileges it actually needs, and should only hold those privileges for the shortest time necessary. So, for example, it's better to use sudo make install than to su to open a shell and then work as superuser.
All these ideas that people are listing (isolation, least privilege, white-listing) are tools.
But you first have to know what "security" means for your application. Often it means something like
Availability: The program will not fail to serve one client because another client submitted bad data.
Privacy: The program will not leak one user's data to another user
Isolation: The program will not interact with data the user did not intend it to.
Reviewability: The program obviously functions correctly -- a desirable property of a vote counter.
Trusted Path: The user knows which entity they are interacting with.
Once you know what security means for your application, then you can start designing around that.
One design practice that doesn't get mentioned as often as it should is Object Capabilities.
Many secure systems need to make authorizing decisions -- should this piece of code be able to access this file or open a socket to that machine.
Access Control Lists are one way to do that -- specify the files that can be accessed. Such systems though require a lot of maintenance overhead. They work for security agencies where people have clearances, and they work for databases where the company deploying the database hires a DB admin. But they work poorly for secure end-user software since the user often has neither the skills nor the inclination to keep lists up to date.
Object Capabilities solve this problem by piggy-backing access decisions on object references -- by using all the work that programmers already do in well-designed object-oriented systems to minimize the amount of authority any individual piece of code has. See CapDesk for an example of how this works in practice.
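A tiny sketch of the idea (class and data invented for illustration): instead of giving code a file name plus the ambient authority to open anything, the caller hands it a reference that already embodies exactly the authority it needs.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class WordCounter {
    // Ambient-authority style (avoided here): countWords(String filename) would let
    // this code open *any* file the process can reach.
    //
    // Capability style: the caller passes an already-opened Reader. This code can
    // only read the stream it was handed - the reference *is* the permission.
    public static int countWords(Reader input) throws IOException {
        int words = 0;
        boolean inWord = false;
        int c;
        while ((c = input.read()) != -1) {
            if (Character.isWhitespace(c)) {
                inWord = false;
            } else if (!inWord) {
                inWord = true;
                words++;
            }
        }
        return words;
    }

    public static void main(String[] args) throws IOException {
        // The caller decides exactly which data the counter may see.
        try (Reader capability = new StringReader("the quick brown fox")) {
            System.out.println(countWords(capability)); // 4
        }
    }
}
```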
DARPA ran a secure systems design experiment called the DARPA Browser project which found that a system designed this way -- although it had the same rate of bugs as other Object Oriented systems -- had a far lower rate of exploitable vulnerabilities. Since the designers followed POLA using object capabilities, it was much harder for attackers to find a way to use a bug to compromise the system.
White listing
Opt in what you know you accept
(Yeah, I know, it's very similar to "deny by default", but I like to use positive thinking.)
Model threats before making security design decisions -- think about what possible threats there might be, and how likely they are. For example, someone stealing your computer is more likely with a laptop than with a desktop. Then worry about the more probable threats first.
Limit the "attack surface". Expose your system to the fewest attacks possible, via firewalls, limited access, etc.
Remember physical security. If someone can take your hard drive, that may be the most effective attack of all.
(I recall an intrusion red team exercise in which we showed up with a clipboard and an official-looking form, and walked away with the entire "secure" system.)
Encryption ≠ security.
Hire security professionals
Security is a specialized skill. Don't try to do it yourself. If you can't afford to contract out your security, then at least hire a professional to test your implementation.
Reuse proven code
Use proven encryption algorithms, cryptographic random number generators, hash functions, authentication schemes, access control systems, rather than rolling your own.
Design security in from the start
It's a lot easier to get security wrong when you're adding it to an existing system.
Isolation. Code should have strong isolation between, e.g., processes, so that failures in one component can't easily compromise others.
Express risk and hazard in terms of cost. Money. It concentrates the mind wonderfully.
A good understanding of the underlying assumptions of crypto building blocks can be important. E.g., stream ciphers such as RC4 are very useful but can easily be used to build an insecure system (e.g., WEP and the like).
If you encrypt your data for security, the highest risk data in your enterprise becomes your keys. Lose the keys, and data is lost; compromise the keys and all your data is compromised.
Use risk to make security decisions. Once you determine the probability of different threats, then consider the harm that each could do. Risk is, by definition
R = Pe × H
where Pe is the probability of the undesired event, and H is the hazard, or the amount of harm that could come from the undesired event.
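A worked example with made-up numbers, just to show how the formula is used:

```latex
% Illustrative numbers only: a laptop theft with probability 0.1 per year
% and an estimated harm of $50,000 gives
R = P_e \times H = 0.1 \times \$50\,000 = \$5\,000 \text{ per year}
```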
Separate concerns. Architect your system and design your code so that security-critical components can be kept together.
KISS (Keep It Simple, Stupid)
If you need to make a very convoluted and difficult to follow argument as to why your system is secure, then it probably isn't secure.
Formal security designs sometimes refer to a thing called the TCB (Trusted Computing Base). But even an informal design has something like this - the security enforcing part of your code, the part you can't avoid relying on. This needs to be well encapsulated and as simple and small as possible.