What is the difference between an exploit and a compromise?

I see the words compromise and exploit being used interchangeably. When I did basic Google searches for this question, the answers were about the difference between an exploit and a vulnerability, not the comparison I queried.

Exploit: software that makes use of a flaw in the operating system or host software to gain some kind of benefit.
Compromise: the act of damaging the integrity, confidentiality or availability of a computer; for example, through an exploit.

Related

What is a good list of resources to learn system security?

I'm interested in learning system security. I'm thinking of installing Damn Vulnerable Linux on one of my spare machines to test on. Anyone have any recommendations, reads, methods or tutorials?
These days, exploiting C/C++ and operating systems is not easy; you are starting with a massive topic. The only more complex security topic would be cryptography. As with anything, you need to start small and work your way up. Start with web application security, learning about the most common vulnerabilities such as XSS and SQL injection. Google Gruyere is a good resource.
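To make the SQL injection point concrete, here is a minimal sketch in Java/JDBC; the users table and its column are hypothetical names for the example:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LoginDao {
    // Vulnerable: concatenating user input into SQL. Input such as
    // ' OR '1'='1 changes the meaning of the query itself.
    ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
    }

    // Safe: a parameterized query. The driver sends the input as data,
    // never as SQL text, so it cannot alter the query structure.
    ResultSet findUserSafe(Connection conn, String name) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}
```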
If you are very skilled, you might be able to get through the old paper Smashing the Stack for Fun and Profit. A good book for learning how to attack modern C/C++ applications is Exploiting Software: How to Break Code.
I feel that information security is too broad a topic, and the list of interesting problems is too long to enumerate. Since you mention Damn Vulnerable Linux, I assume you are confining this to operating system security.
If so, some interesting topics would be: i) buffer overflow attacks, including stack smashing, integer overflow and heap smashing attacks; and ii) TOCTOU attacks. http://insecure.org/ is a good resource and has a bunch of tutorials on them. Vulnerabilities, and some attack payloads, can also be found in vulnerability-reporting databases such as secunia.org and cert.org. It might also be worth studying network exploits, such as how deep-packet inspection can detect simple worms. Advanced topics might include polymorphic and self-modifying worms. Firewalls can be an eventual topic.
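To illustrate the TOCTOU (time-of-check to time-of-use) class mentioned above, a minimal sketch in Java (Java rather than C, to show the race pattern without the memory-unsafe details; the file path is hypothetical):

```java
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

public class ToctouDemo {
    public static void main(String[] args) throws Exception {
        Path path = Path.of("/tmp/report.txt"); // hypothetical path

        // Racy check-then-use: between exists() and readAllBytes(), an
        // attacker can swap the file (e.g., for a symlink to a secret).
        if (Files.exists(path)) {
            byte[] racy = Files.readAllBytes(path);
        }

        // Better: skip the separate check; open once and handle failure
        // at the point of use, so there is no window to race.
        try {
            byte[] data = Files.readAllBytes(path);
        } catch (NoSuchFileException e) {
            System.err.println("file vanished: " + e.getFile());
        }
    }
}
```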

Are there methods that would help to make software immune to viruses?

Are there any known methods that can be used to make your software immune to viruses? To be more specific, the development is done in .NET. Is there anything built into .NET that would help with this? I had someone specifically ask me about this and I did not have the slightest clue.
Update: Immunity in this situation would mean immune to exploitation. At least I believe that's what the person asking me the question wanted to know.
I realize this is an extremely open-ended question, and that software development is a long way from the point where software can really be immune to viruses.
When you say immune to viruses, do you mean immune to exploitation? I think any software being immune to viruses is some way off for any development platform.
In general, there are two important activities that lead to secure software:
1) Safety by design - does the inherent design of a system allow an attacker to control the host machine? However, a secure design won't protect against insecure coding.
2) Secure coding - most importantly, don't trust user input. However, secure coding won't protect against an insecure design.
There are several features of .NET that help reduce the likelihood of exploitation.
1) .NET takes care of memory allocation and destruction. This helps prevent exploitable heap corruption.
2) .NET, by default, bounds-checks array accesses. In C-based languages, overwriting or underwriting arrays has allowed attackers to write into the memory of the target process.
3) The .NET execution model is not vulnerable to stack overflows in the same way as the normal x86 execution model, since arrays are allocated on the heap.
4) You can set Code Access Security permissions; used properly, these help prevent untrusted code from performing privileged actions on the host machine.
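The bounds-checking point is common to managed runtimes; here is a minimal sketch (in Java, whose runtime behaves like .NET in this respect) showing an out-of-bounds write being rejected at runtime rather than silently corrupting adjacent memory as it can in C:

```java
public class BoundsCheckDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[8];
        try {
            // In C, writing past the end of a stack buffer can overwrite
            // a return address; here the runtime checks the index first.
            buffer[8] = 0x41;
        } catch (ArrayIndexOutOfBoundsException e) {
            System.err.println("rejected out-of-bounds write: " + e.getMessage());
        }
    }
}
```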

What should every programmer know about security? [closed]

I am an IT student, now in my 3rd year at university. Until now we've been studying a lot of subjects related to computers in general (programming, algorithms, computer architecture, maths, etc.).
I am very sure that nobody can learn everything about security, but surely there is a "minimum" knowledge every programmer or IT student should have, and my question is: what is this minimum knowledge?
Can you suggest some e-books, courses, or anything else that can help me start down this road?
Principles to keep in mind if you want your applications to be secure:
Never trust any input!
Validate input from all untrusted sources - use whitelists not blacklists (a sketch follows this list)
Plan for security from the start - it's not something you can bolt on at the end
Keep it simple - complexity increases the likelihood of security holes
Keep your attack surface to a minimum
Make sure you fail securely
Use defence in depth
Adhere to the principle of least privilege
Use threat modelling
Compartmentalize - so your system is not all or nothing
Hiding secrets is hard - and secrets hidden in code won't stay secret for long
Don't write your own crypto
Using crypto doesn't mean you're secure (attackers will look for a weaker link)
Be aware of buffer overflows and how to protect against them
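As one concrete illustration of the "whitelists not blacklists" principle above, a minimal Java sketch; the allowed character set and length limits are assumptions made up for the example:

```java
import java.util.regex.Pattern;

public class UsernameValidator {
    // Whitelist: state exactly what is allowed and reject everything else.
    // (The rule itself, 3-20 word characters, is an assumed example policy.)
    private static final Pattern ALLOWED = Pattern.compile("^[A-Za-z0-9_]{3,20}$");

    public static String requireValidUsername(String input) {
        if (input == null || !ALLOWED.matcher(input).matches()) {
            throw new IllegalArgumentException("invalid username");
        }
        return input;
    }
}
```

Trying to blacklist known-bad characters instead tends to fail, because attackers only need one encoding or character you forgot.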
There are some excellent books and articles online about making your applications secure:
Writing Secure Code 2nd Edition - I think every programmer should read this
Building Secure Software: How to Avoid Security Problems the Right Way
Secure Programming Cookbook
Exploiting Software
Security Engineering - an excellent read
Secure Programming for Linux and Unix HOWTO
Train your developers on application security best practices
Codebashing (paid)
Security Innovation (paid)
Security Compass (paid)
OWASP WebGoat (free)
Rule #1 of security for programmers: Don't roll your own
Unless you are yourself a security expert and/or cryptographer, always use a well-designed, well-tested, and mature security platform, framework, or library to do the work for you. These things have spent years being thought out, patched, updated, and examined by experts and hackers alike. You want to gain those advantages, not dismiss them by trying to reinvent the wheel.
Now, that's not to say you don't need to learn anything about security. You certainly need to know enough to understand what you're doing and make sure you're using the tools correctly. However, if you ever find yourself about to start writing your own cryptography algorithm, authentication system, input sanitizer, etc., stop, take a step back, and remember rule #1.
Every programmer should know how to write exploit code.
Without knowing how systems are exploited, you are only preventing vulnerabilities by accident. Knowing how to patch code is meaningless unless you know how to test your patches. Security isn't just a bunch of thought experiments; you must be scientific and test your experiments.
Security is a process, not a product.
Many seem to forget this obvious fact.
I suggest reviewing the CWE/SANS Top 25 Most Dangerous Programming Errors. It was updated for 2010, with the promise of regular updates in the future. The 2009 revision is available as well.
From http://cwe.mitre.org/top25/index.html
The 2010 CWE/SANS Top 25 Most Dangerous Programming Errors is a list of the most widespread and critical programming errors that can lead to serious software vulnerabilities. They are often easy to find, and easy to exploit. They are dangerous because they will frequently allow attackers to completely take over the software, steal data, or prevent the software from working at all.
The Top 25 list is a tool for education and awareness to help programmers to prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped. Software customers can use the same list to help them to ask for more secure software. Researchers in software security can use the Top 25 to focus on a narrow but important subset of all known security weaknesses. Finally, software managers and CIOs can use the Top 25 list as a measuring stick of progress in their efforts to secure their software.
A good starter course might be the MIT course in Computer Networks and Security. One thing that I would suggest is to not forget about privacy. Privacy, in some senses, is really foundational to security and isn't often covered in technical courses on security. You might find some material on privacy in this course on Ethics and the Law as it relates to the internet.
The Web Security team at Mozilla put together a great guide, which we abide by in the development of our sites and services.
The importance of secure defaults in frameworks and APIs:
Lots of early web frameworks didn't escape HTML by default in templates, and had XSS problems because of this
Lots of early web frameworks made it easier to concatenate SQL than to create parameterized queries, leading to lots of SQL injection bugs
Some versions of Erlang (R13B, maybe others) don't verify SSL peer certificates by default, and there is probably lots of Erlang code susceptible to SSL MITM attacks
Java's XSLT transformer by default allows execution of arbitrary Java code. There have been many serious security bugs caused by this.
Java's XML parsing APIs by default allow the parsed document to read arbitrary files on the filesystem. More fun :)
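On that last point, hardening Java's standard XML parser against external-entity (XXE) tricks takes only a few lines; a minimal sketch following the commonly recommended settings:

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

public class SafeXml {
    public static DocumentBuilder newHardenedBuilder() throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Forbid DOCTYPE declarations entirely; this blocks the external
        // entities that let a parsed document read local files.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces for parsers that ignore the feature above.
        dbf.setXIncludeAware(false);
        dbf.setExpandEntityReferences(false);
        return dbf.newDocumentBuilder();
    }
}
```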
You should know about the three As: authentication, authorization, audit. A classic mistake is to authenticate a user without checking whether that user is authorized to perform an action, so one user may look at another user's private photos (the mistake Diaspora made). Many, many more people forget about audit: in a secure system, you need to be able to tell who did what and when.
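A minimal sketch of that distinction in Java; the User and Photo types and the audit call are hypothetical stand-ins:

```java
public class PhotoService {
    // Authentication has established who the caller is; authorization must
    // still check what they may touch. Skipping this check is the Diaspora
    // mistake above: any logged-in user could fetch any photo.
    public Photo getPhoto(User caller, Photo photo) {
        if (!photo.ownerId().equals(caller.id())) {
            throw new SecurityException("not authorized to view this photo");
        }
        audit(caller, "viewed photo " + photo.id()); // the third A: audit
        return photo;
    }

    private void audit(User caller, String action) {
        System.out.println(caller.id() + " " + action); // stand-in for a real audit log
    }

    record User(String id) {}
    record Photo(String id, String ownerId) {}
}
```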
Remember that you (the programmer) have to secure all parts, but the attacker only has to find one chink in your armour.
Security is an example of "unknown unknowns". Sometimes you won't know what the possible security flaws are (until afterwards).
The difference between a bug and a security hole depends on the intelligence of the attacker.
I would add the following:
How digital signatures and digital certificates work
What's sandboxing
Understand how different attack vectors work:
Buffer overflows/underflows/etc on native code
Social engineering
DNS spoofing
Man-in-the-middle
CSRF/XSS et al
SQL injection
Crypto attacks (e.g., exploiting weak crypto algorithms such as DES)
Program/framework errors (e.g., GitHub's latest security flaw)
You can easily Google all of this. This will give you a good foundation.
If you want to see web app vulnerabilities, there's a project called Google Gruyere that shows you how to exploit a working web app.
When you are building enterprise software, or any software of your own, you should think like a hacker. As we know, hackers are not experts in everything either, but when they find a vulnerability they start digging into it, gathering information about everything around it, and finally attack the software. To prevent such attacks, follow some well-known rules:
Always try to break your own code (use cheat sheets and Google things for more information).
Stay up to date on security flaws in your programming field.
As mentioned above, never trust any kind of user or automated input.
Use open-source applications (most of their security flaws are known and fixed).
You can find more security resources at the following links:
owasp security
CERT Security
SANS Security
netcraft
SecuritySpace
openwall
PHP Sec
The Hacker News (keep yourself updated)
For more information, Google your application vendor's security flaws.
Why it is important.
It is all about trade-offs.
Cryptography is largely a distraction from security.
For general information on security, I highly recommend reading Bruce Schneier. He's got a website, his crypto-gram newsletter, several books, and has done lots of interviews.
I would also get familiar with social engineering (and Kevin Mitnick).
For a good (and pretty entertaining) book on how security plays out in the real world, I would recommend the excellent (although a bit dated) 'The Cuckoo's Egg' by Cliff Stoll.
Also be sure to check out the OWASP Top 10 List for a categorization of all the main attack vectors/vulnerabilities.
These things are fascinating to read about. Learning to think like an attacker will train you of what to think about as you're writing your own code.
Salt and hash your users' passwords. Never save them in plaintext in your database.
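A minimal sketch of salted password hashing using the JDK's built-in PBKDF2; the iteration count is an assumption to tune for your hardware (or prefer a dedicated scheme such as bcrypt or Argon2 via a library):

```java
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {
    public static byte[] newSalt() {
        byte[] salt = new byte[16]; // unique per user, stored next to the hash
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static byte[] hash(char[] password, byte[] salt) throws Exception {
        // A slow, salted key-derivation function; never a bare MD5/SHA-1,
        // and never the plaintext itself.
        PBEKeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec).getEncoded();
    }
}
```

To verify a login, recompute the hash with the stored salt and compare it to the stored hash (using a constant-time comparison).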
Just wanted to share this for web developers:
security-guide-for-developers: https://github.com/FallibleInc/security-guide-for-developers

Strong Link - Weak Link in software security

Give me an example of how I could apply the Strong Link - Weak Link principle in designing a security component for a piece of software. Is there such a concept of "weak" modules in software security, where, in case of an attack, these deliberately fail first, making it impossible for the attacker to access and compromise other, more sensitive data?
One thing that can happen accidentally is failing (as a denial of service) under a dictionary attack. Generally you would want to throttle instead, which I suppose is a weaker version of a weak module.
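A minimal sketch of such throttling in Java; the attempt limit and window length are assumed values:

```java
import java.util.HashMap;
import java.util.Map;

public class LoginThrottle {
    private static final int MAX_ATTEMPTS = 5;              // assumed limit
    private static final long WINDOW_MILLIS = 15 * 60_000L; // assumed 15-minute window

    private final Map<String, Integer> counts = new HashMap<>();
    private final Map<String, Long> windowStart = new HashMap<>();

    // Returns false once an account exhausts its attempts in the current
    // window. A dictionary attack is slowed to a crawl, but legitimate
    // users recover automatically when the window rolls over, so the
    // throttle does not itself become a hard denial of service.
    public synchronized boolean allowAttempt(String account) {
        long now = System.currentTimeMillis();
        long start = windowStart.getOrDefault(account, 0L);
        if (now - start > WINDOW_MILLIS) {
            windowStart.put(account, now);
            counts.put(account, 0);
        }
        int used = counts.merge(account, 1, Integer::sum);
        return used <= MAX_ATTEMPTS;
    }
}
```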

Bugs versus vulnerabilities?

What, if any, is the difference between a software bug and a software vulnerability?
A bug is when a system isn't behaving as it's designed to behave.
A vulnerability is a way of abusing the system (most commonly in a security-related way) - whether that's due to a design fault or an implementation fault. In other words, something can have a vulnerability due to a defective design, even if the implementation of that design is perfect.
Vulnerability is a subset of bug.
A bug is any defect in a product.
A vulnerability is a bug that manifests as an opportunity for malicious use of the product. Vulnerabilities generally are not that clearly evident, but require ingenuity to exploit.
The two can sometimes overlap, but I'd say a "bug" is a mistake, while a "vulnerability" is, like the name suggests, a weakness.
From a programming perspective, I believe there is no difference between a bug and a vulnerability. They are both mistakes in the software.
However, from a security perspective, a vulnerability is a class of bugs that can be manipulated in some fashion by a malicious person.
A bug is a failure of your system to meet requirements.
A vulnerability is a subset of bug: it is when your system can be forced into a failure mode that does not meet requirements, usually by (ab)using your system (or something your system relies on) in an unexpected way.
Usually a vulnerability may result in failure to meet a requirement in one or more of these areas:
confidentiality
integrity
availability
or you can combine the last two:
confidentiality
reliability (= integrity + availability)
If you use Bugzilla, anything you need to do something with is a bug ;)
In my eyes vulnerabilities are a subset of bugs that enable someone to perform a malicious or harmful operation with your software.
Bugs are just code that does not work properly (how you define properly is subject to opinion).
Wikipedia:
In computer security, the term vulnerability is applied to a weakness in a system which allows an attacker to violate the integrity of that system.
For example, home computers are vulnerable to physical threats like floods and hand grenades, but these are not considered "bugs". In an enterprise environment, such threats are treated with more seriousness if the risk of the system shutting down is great enough, say for air traffic control or nuclear reactor management.
Business continuity planning/disaster recovery and high availability usually deal with physical threats and failures through redundant hardware and distributing servers to remote locations.
Classification of a software defect (or "bug") can be subjective, since it depends on the intent of the software's design and requirements. A feature for one audience may be interpreted as a vulnerability by another if abused. For example, stackoverflow.com now discloses self-closed questions to those with 10k rep. Some may say it is a vulnerability since it violates the common expectations of ordinary users (like I said, it's a subjective call).
A bug is the failure of software to meet requirements. I would consider these to be the ideal requirements, so it would make sense to say that there's a bug in the requirements analysis, although that's more debatable.
A vulnerability is a feature, intended or otherwise, that can be exploited maliciously. It is not necessarily a bug, provided that it was deliberate.
To change subjects, it is a vulnerability that my home wireless has a guessable WPA password, but that was a conscious choice, to facilitate use by my guests. That's an example of requirements leading to a vulnerability. If I'd entered a weak password because I didn't know better, that would have been a bug as well as a vulnerability.

Resources