I have been told by our senior developer that Apache Realms (using either .htaccess files or the httpd config) are insecure and therefore bad practice to use in any sort of admin backend system. Is this true?
The way I see it, people can't see any .ht* files under Apache anyway, so how else could someone get around a Realm? Of course, if they're inside your server they can do as they please, but at that point any security is insecure anyway, right?
Are there any ways of hacking or bypassing a Realm? Are they insecure to use?
If so, examples would be useful.
Here are some general weaknesses of HTTP Basic authentication, in no particular order and with no particular endorsement of them:
Using Apache as your security layer means ops people are responsible for a critical layer instead of developers. This might mean bypassing code reviews and other measures, depending on your department's setup.
Using realms means you are vulnerable to various well-known attacks, whereas a custom PHP login solution might be more obscure and less likely to be probed for (maybe).
Apache realms rely on specific well-known files with well-known formats, so any flaw that lets an attacker add or edit files on the system might compromise your login.
Apache realms may also be attacked through shell commands, so any flaw involving exec or system might compromise your login.
Apache realms are vulnerable to brute-force attacks, as they support neither CAPTCHAs nor rate limiting (see the sketch after this list).
Theft of your password file means a potential compromise of all your passwords, which may cause a lot of headaches if you have a lot of users.
Having a sensitive password file might complicate deployment, backup, and so on.
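To illustrate the rate-limiting point: a custom login layer can throttle repeated failures per account, which stock basic auth cannot. Below is a minimal, hypothetical Java sketch; the class name, thresholds, and window size are illustrative, not a production design.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: per-account login throttling that Apache basic auth
// cannot provide on its own. Names and thresholds are illustrative only.
public class LoginThrottle {
    private static final int MAX_ATTEMPTS = 5;
    private static final long WINDOW_MS = 15 * 60 * 1000; // 15-minute window

    private final Map<String, Attempt> attempts = new ConcurrentHashMap<>();

    private static class Attempt {
        int count;
        long windowStart;
    }

    /** Returns false once this user has exceeded the allowed attempts. */
    public synchronized boolean allow(String username) {
        long now = System.currentTimeMillis();
        Attempt a = attempts.computeIfAbsent(username, k -> new Attempt());
        if (now - a.windowStart > WINDOW_MS) { // window expired: reset counter
            a.windowStart = now;
            a.count = 0;
        }
        a.count++;
        return a.count <= MAX_ATTEMPTS;
    }
}
```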
All of the above issues are manageable, and as with most security arguments there is probably no single right answer; most likely your senior dev has reasons you are not privy to.
This may sound like a weird question, but is there anywhere I can download a website that is vulnerable to SQL injection (the URL kind, not login bypass)?
I'm making a vulnerability scanner and I want to learn some SQLi so I can include it in my project.
Thanks; it doesn't need to be fancy, just enough to practice on.
OWASP WebGoat is the usual example. It includes SQL injection vulnerabilities.
No, you cannot simply download someone's live site to test it for injection vulnerabilities; you would need their whole DB and configs to do what you are describing. If you want to benevolently go checking the security of various sites, you have to ask their owners about their systems and model them on your own. OWASP's material targets systems not recently updated with patches; as tackline's comment says, it's a good first port of call.
OWASP's WebGoat is an application that is built to be vulnerable to attack; it is a simulation of real-world vulnerabilities. The Whitebox is a collection of real-world vulnerabilities: it has two web applications that were abandoned because they were so insecure, plus a set of challenges that are vulnerable code snippets taken from real-world applications. This project has real-world SQL injection as well as more serious vulnerabilities.
Try scanning the vulnerable apps with Wapiti (open source) or Acunetix ($) or NTOSpider ($$$). Then try using the applications, create blog posts etc., and then scan again.
Also check out Damn Vulnerable Linux and Google Jarlsberg.
I am an IT student, now in my 3rd year at university. Until now we've been studying a lot of subjects related to computers in general (programming, algorithms, computer architecture, maths, etc.).
I am very sure that nobody can learn everything about security, but surely there is a "minimum" knowledge every programmer or IT student should have. My question is: what is this minimum?
Can you suggest some e-books, courses, or anything else that can help me start down this road?
Principles to keep in mind if you want your applications to be secure:
Never trust any input!
Validate input from all untrusted sources - use whitelists not blacklists (see the whitelist sketch after this list)
Plan for security from the start - it's not something you can bolt on at the end
Keep it simple - complexity increases the likelihood of security holes
Keep your attack surface to a minimum
Make sure you fail securely
Use defence in depth
Adhere to the principle of least privilege
Use threat modelling
Compartmentalize - so your system is not all or nothing
Hiding secrets is hard - and secrets hidden in code won't stay secret for long
Don't write your own crypto
Using crypto doesn't mean you're secure (attackers will look for a weaker link)
Be aware of buffer overflows and how to protect against them
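As a concrete illustration of the whitelist principle above, here is a minimal Java sketch; the field name and pattern are hypothetical and would need to be tailored to each input you actually accept.

```java
import java.util.regex.Pattern;

// Illustrative whitelist validation: accept only what is known to be good,
// rather than trying to blacklist known-bad input.
public class UsernameValidator {
    // Whitelist: 3-20 lowercase letters, digits, or underscores (example rule).
    private static final Pattern VALID = Pattern.compile("^[a-z0-9_]{3,20}$");

    public static boolean isValid(String input) {
        return input != null && VALID.matcher(input).matches();
    }
}
```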
There are some excellent books and articles online about making your applications secure:
Writing Secure Code 2nd Edition - I think every programmer should read this
Building Secure Software: How to Avoid Security Problems the Right Way
Secure Programming Cookbook
Exploiting Software
Security Engineering - an excellent read
Secure Programming for Linux and Unix HOWTO
Train your developers on application security best practices:
Codebashing (paid)
Security Innovation (paid)
Security Compass (paid)
OWASP WebGoat (free)
Rule #1 of security for programmers: Don't roll your own
Unless you are yourself a security expert and/or cryptographer, always use a well-designed, well-tested, and mature security platform, framework, or library to do the work for you. These things have spent years being thought out, patched, updated, and examined by experts and hackers alike. You want to gain those advantages, not dismiss them by trying to reinvent the wheel.
Now, that's not to say you don't need to learn anything about security. You certainly need to know enough to understand what you're doing and make sure you're using the tools correctly. However, if you ever find yourself about to start writing your own cryptography algorithm, authentication system, input sanitizer, etc, stop, take a step back, and remember rule #1.
Every programmer should know how to write exploit code.
Without knowing how systems are exploited, any vulnerabilities you stop, you stop by accident. Knowing how to patch code is meaningless unless you know how to test your patches. Security isn't just a bunch of thought experiments; you must be scientific and test your experiments.
Security is a process, not a product.
Many seem to forget about this obvious matter of fact.
I suggest reviewing the CWE/SANS Top 25 Most Dangerous Programming Errors. It was updated for 2010 with the promise of regular updates in the future. The 2009 revision is available as well.
From http://cwe.mitre.org/top25/index.html
The 2010 CWE/SANS Top 25 Most Dangerous Programming Errors is a list of the most widespread and critical programming errors that can lead to serious software vulnerabilities. They are often easy to find, and easy to exploit. They are dangerous because they will frequently allow attackers to completely take over the software, steal data, or prevent the software from working at all.
The Top 25 list is a tool for education and awareness to help programmers to prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped. Software customers can use the same list to help them to ask for more secure software. Researchers in software security can use the Top 25 to focus on a narrow but important subset of all known security weaknesses. Finally, software managers and CIOs can use the Top 25 list as a measuring stick of progress in their efforts to secure their software.
A good starter course might be the MIT course in Computer Networks and Security. One thing that I would suggest is to not forget about privacy. Privacy, in some senses, is really foundational to security and isn't often covered in technical courses on security. You might find some material on privacy in this course on Ethics and the Law as it relates to the internet.
The Web Security team at Mozilla put together a great guide, which we abide by in the development of our sites and services.
The importance of secure defaults in frameworks and APIs:
Lots of early web frameworks didn't escape HTML by default in templates, and had XSS problems because of this.
Lots of early web frameworks made it easier to concatenate SQL than to create parameterized queries, leading to lots of SQL injection bugs (see the sketch after this list).
Some versions of Erlang (R13B, maybe others) don't verify SSL peer certificates by default, and there is probably a lot of Erlang code that is susceptible to SSL MITM attacks.
Java's XSLT transformer by default allows execution of arbitrary Java code. There have been many serious security bugs created by this.
Java's XML parsing APIs by default allow the parsed document to read arbitrary files on the filesystem. More fun :)
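To make the last points concrete, here is a short Java sketch contrasting a parameterized query with string concatenation, and hardening an XML parser against the external-entity file reads mentioned above. The table and column names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class SecureDefaults {
    // Parameterized query: the driver treats userId strictly as data,
    // so it can never change the structure of the SQL statement
    // (unlike "SELECT * FROM users WHERE id = '" + userId + "'").
    static ResultSet findUser(Connection conn, String userId) throws SQLException {
        PreparedStatement ps =
            conn.prepareStatement("SELECT * FROM users WHERE id = ?");
        ps.setString(1, userId);
        return ps.executeQuery();
    }

    // Hardened XML parser: disallowing DTDs prevents the external-entity
    // (XXE) file reads that the default configuration allows.
    static DocumentBuilderFactory secureXmlFactory() throws ParserConfigurationException {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        dbf.setExpandEntityReferences(false);
        return dbf;
    }
}
```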
You should know about the three A's: Authentication, Authorization, Audit. A classic mistake is to authenticate a user while not checking whether that user is authorized to perform some action, so a user may look at other users' private photos, the mistake Diaspora made (see the sketch below). Many, many more people forget about Audit: in a secure system, you need to be able to tell who did what and when.
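A hypothetical Java sketch of all three A's around the Diaspora-style photo example; every class and field name here is illustrative, not a real API.

```java
import java.util.Map;

// Authentication tells you WHO the user is (done upstream), authorization
// decides WHAT they may do, and the audit line records who did what and when.
public class PhotoService {
    record Photo(long id, String ownerId, byte[] data) {}

    private final Map<Long, Photo> photos;

    PhotoService(Map<Long, Photo> photos) { this.photos = photos; }

    public Photo getPhoto(String requestingUserId, long photoId) {
        Photo photo = photos.get(photoId);
        // Authorization: the authenticated user must actually own this photo.
        if (photo == null || !photo.ownerId().equals(requestingUserId)) {
            System.out.printf("AUDIT deny user=%s photo=%d%n", requestingUserId, photoId);
            throw new SecurityException("not authorized");
        }
        // Audit: record the successful access as well.
        System.out.printf("AUDIT view user=%s photo=%d%n", requestingUserId, photoId);
        return photo;
    }
}
```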
Remember that you (the programmer) have to secure all parts, but the attacker only has to succeed in finding one chink in your armour.
Security is an example of "unknown unknowns". Sometimes you won't know what the possible security flaws are (until afterwards).
The difference between a bug and a security hole depends on the intelligence of the attacker.
I would add the following:
How digital signatures and digital certificates work
What sandboxing is
Understand how different attack vectors work:
Buffer overflows/underflows/etc on native code
Social engineering
DNS spoofing
Man-in-the-middle
CSRF/XSS et al
SQL injection
Crypto attacks (ex: exploiting weak crypto algorithms such as DES)
Program/framework errors (e.g. GitHub's latest security flaw)
You can easily google for all of this. This will give you a good foundation.
If you want to see web app vulnerabilities, there's a project called Google Gruyere that shows you how to exploit a working web app.
When you are building any enterprise software, or any of your own, you should think like a hacker. As we know, hackers are not experts in everything either, but when they find a vulnerability they start digging into it, gathering information about everything, and finally attack the software. To prevent such attacks, we should follow some well-known rules:
Always try to break your own code (use cheat sheets and Google things for more information).
Stay updated on security flaws in your programming field.
And, as mentioned above, never trust any type of user or automated input.
Use open-source applications (most of their security flaws are known and solved).
You can find more security resources at the following links:
owasp security
CERT Security
SANS Security
netcraft
SecuritySpace
openwall
PHP Sec
thehackernews (keep yourself updated)
For more information, Google your application vendor's security flaws.
Why is it important? It is all about trade-offs.
Cryptography is largely a distraction from security.
For general information on security, I highly recommend reading Bruce Schneier. He's got a website, his Crypto-Gram newsletter, several books, and has done lots of interviews.
I would also get familiar with social engineering (and Kevin Mitnick).
For a good (and pretty entertaining) book on how security plays out in the real world, I would recommend the excellent (although a bit dated) 'The Cuckoo's Egg' by Cliff Stoll.
Also be sure to check out the OWASP Top 10 List for a categorization of all the main attack vectors/vulnerabilities.
These things are fascinating to read about. Learning to think like an attacker will train you of what to think about as you're writing your own code.
Salt and hash your users' passwords. Never save them in plaintext in your database.
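A minimal sketch of that advice using the JDK's built-in PBKDF2 rather than hand-rolled crypto (per "don't write your own crypto" above); the iteration count is an illustrative assumption to tune for your hardware, and a maintained library (bcrypt or argon2 bindings) is usually preferable in practice.

```java
import java.security.SecureRandom;
import java.security.spec.KeySpec;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Salted, slow password hashing with the standard library's PBKDF2.
public class PasswordHasher {
    private static final SecureRandom RANDOM = new SecureRandom();

    static byte[] newSalt() {
        byte[] salt = new byte[16]; // random per-user salt, stored with the hash
        RANDOM.nextBytes(salt);
        return salt;
    }

    static byte[] hash(char[] password, byte[] salt) throws Exception {
        // 210_000 iterations is an illustrative work factor, not a recommendation.
        KeySpec spec = new PBEKeySpec(password, salt, 210_000, 256);
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return f.generateSecret(spec).getEncoded();
    }
}
```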
Just wanted to share this for web developers:
security-guide-for-developers: https://github.com/FallibleInc/security-guide-for-developers
As part of a PCI-DSS audit we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security. You can expect infinite attackers. You have to stop them all - even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well known secure software development guidelines. Howard & Le Blanc "Writing Secure Code" is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (eg. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design & document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions from a very broad perspective:
The only solution to prevent violations from the inside user is to educate the user, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important, as this is where the most severe security violations always occur, whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago security experts would not consider looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security in your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPsec to reach all automation services behind the firewall, and limit points of access to known points behind an IDS or IPS. Basically, limit access to everything, encrypt that access, and inherently treat every access request as potentially malicious.
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX and require administrator privileges to activate ActiveX. Patch all software that executes any sort of client-side scripting in a test lab no later than 48 hours after the patches are released by the vendor. If the tests show no interference with the company's authorized software configuration, deploy the patches immediately. The only solution for memory-corruption vulnerabilities is to patch your software. This software may include: Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, ensure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but even more important is that it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. This problem is easily realized with the advance of AJAX, since information can be dynamically injected to an anonymous third party in violation of the same-origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails hire some CISSP certified security managers who have the management experience to have the balls to speak directly to your company executives. If your leadership is not willing to take security seriously then your company will never meet PCI compliance.
What are the different types of Security Testing?
We have a fairly full list which is discussed over on Security Stack Exchange here and here.
Discovery
The purpose of this stage is to identify systems within scope and the services in use. It is not intended to discover vulnerabilities, but version detection may highlight deprecated versions of software / firmware and thus indicate potential vulnerabilities.
Vulnerability Scan
Following the discovery stage this looks for known security issues by using automated tools to match conditions with known vulnerabilities. The reported risk level is set automatically by the tool with no manual verification or interpretation by the test vendor. This can be supplemented with credential based scanning that looks to remove some common false positives by using supplied credentials to authenticate with a service (such as local windows accounts).
Vulnerability Assessment
This uses discovery and vulnerability scanning to identify security vulnerabilities and places the findings into the context of the environment under test. An example would be removing common false positives from the report and deciding risk levels that should be applied to each report finding to improve business understanding and context.
Security Assessment
Builds upon Vulnerability Assessment by adding manual verification to confirm exposure, but does not include the exploitation of vulnerabilities to gain further access. Verification could be in the form of authorised access to a system to confirm system settings and involve examining logs, system responses, error messages, codes, etc. A Security Assessment is looking to gain a broad coverage of the systems under test but not the depth of exposure that a specific vulnerability could lead to.
Penetration Test
Penetration testing simulates an attack by a malicious party. It builds on the previous stages and involves exploitation of found vulnerabilities to gain further access. Using this approach will result in an understanding of the ability of an attacker to gain access to confidential information, affect data integrity or the availability of a service, and the respective impact. Each test is approached using a consistent and complete methodology, in a way that allows the tester to use their problem-solving abilities, the output from a range of tools, and their own knowledge of networking and systems to find vulnerabilities that would or could not be identified by automated tools. This approach looks at the depth of attack, as compared to the Security Assessment approach that looks at broader coverage.
Security Audit
Driven by an Audit / Risk function to look at a specific control or compliance issue. Characterised by a narrow scope, this type of engagement could make use of any of the earlier approaches discussed (vulnerability assessment, security assessment, penetration test).
Security Review
Verification that industry or internal security standards have been applied to system components or a product. This is typically completed through gap analysis and utilises build/code reviews, or by reviewing design documents and architecture diagrams. This activity does not utilise any of the earlier approaches (Vulnerability Assessment, Security Assessment, Penetration Test, Security Audit).
Risk assessment - creating a threat model and defining what will be tested.
Security auditing - using the threat model to probe the system design.
Vulnerability scanning - using software to probe the system implementation.
Penetration testing - trying to hack into the system, either externally or internally.
Operational testing - some or all of the above after the system is in production.
Vulnerability Scanning - Typically an automated procedure to scan one or more systems against known vulnerability signatures.
Security Scanning - This is a vulnerability scan plus a manual verification of the findings to help remove false positives/negatives.
Penetration Testing - A tester will attempt to gain access and prove access to the system owner.
Risk Assessment - involves a security analysis based on interviews with employees, combined with business and industry justifications for the risks discovered.
Security Auditing - Typically an in-depth auditing of software code and/or Operating Systems. This is often a very thorough line-by-line inspection of code.
Ethical Hacking - This is very similar to a penetration test, but it is usually many of them against a number of systems in order to discover as many attack vectors as possible.
Posture Assessment and Security Testing - This combines security scanning, ethical hacking and risk assessments to show the overall security posture of the organization.
Each of these security testing types can be further sub-categorized by different methodologies.
Penetration can be of different types, broadly categorized as follows:
Web parameter tampering: the user manipulates parameters exchanged between client and server and modifies application data such as user credentials, permissions, or the price or quantity of products for their benefit (see the sketch after this list).
Database tampering: compromising the databases that support the system and store data critical for the business or the running of the app.
Cookie stealing: a valid computer session is exploited to gain unauthorized access.
Cross-site scripting: an attacker injects malicious scripts into the client-side code, for example to redirect the website link.
Cross-site request forgery: also called a one-click attack or session riding; unauthorized commands are transmitted from a user whom the web application trusts.
Privilege escalation: gaining access to an account with higher privileges (e.g. an administrator's) and misusing them.
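As an illustration of defending against the parameter-tampering case above, a server can ignore any client-supplied price and recompute it from its own records. A hypothetical Java sketch, where the SKUs, prices, and quantity limits are made up:

```java
import java.math.BigDecimal;
import java.util.Map;

// The server holds the authoritative price; any "price" field the client
// submits is simply never consulted, so tampering with it has no effect.
public class CheckoutService {
    private final Map<String, BigDecimal> catalogue =
        Map.of("sku-123", new BigDecimal("19.99")); // illustrative catalogue

    public BigDecimal charge(String sku, int quantity) {
        if (quantity < 1 || quantity > 100) {       // whitelist sane quantities
            throw new IllegalArgumentException("bad quantity");
        }
        BigDecimal unitPrice = catalogue.get(sku);  // authoritative lookup
        if (unitPrice == null) {
            throw new IllegalArgumentException("unknown product");
        }
        return unitPrice.multiply(BigDecimal.valueOf(quantity));
    }
}
```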
Let’s break down security testing into its constituent parts by discussing the different types of security tests that you might perform.
Static code analysis
Static code analysis is perhaps the first type of security testing that comes to mind; it is also the oldest form.
Static code analysis involves reviewing source code to identify problems that could lead to security breaches in an application (or in resources to which the application has access). Classic examples of vulnerabilities that you might be looking out for using this type of analysis are coding flaws that could enable buffer overflows or injection attacks.
It’s possible to perform some amount of static code analysis by hand, meaning that developers read through code manually to find security flaws. But that is often not practical to do on a large scale, given the size of many source code files; plus, humans can easily overlook flaws. That’s why using automated analysis tools to scan your source code is important.
Penetration testing
Penetration tests involve simulating attacks against an application or infrastructure in order to identify weak points. For example, you could use a tool like nmap to attempt to connect to all endpoints on a network from a non-trusted host and see if any endpoints accept the connection; if they do, you probably want to make them stop accepting connections from arbitrary hosts.
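As a toy illustration of the connect-style probe just described (not a replacement for nmap), the following Java sketch attempts a TCP connection to each low port on a host and reports those that accept. The host, port range, and timeout are assumptions; only run it against systems you are authorised to test.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Minimal TCP connect scan: a port that accepts the connection is "open".
public class ConnectScan {
    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "localhost";
        for (int port = 1; port <= 1024; port++) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 200); // 200 ms timeout
                System.out.println("open: " + port);
            } catch (IOException expected) {
                // closed or filtered port; ignore and move on
            }
        }
    }
}
```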
Some folks might argue that penetration testing should be broken down into subcategories, since there are different types of penetration tests. Some focus on the network, some on applications, some on authentication gateways, some on databases, and so on.
Compliance testing
Compliance tests (which are sometimes called conformance tests) are used to assess whether a configuration, architecture or process meets an organization’s predefined policies. Compliance testing is not strictly limited to the realm of security; you could conceivably use compliance tests to help maintain standards for application performance or response time, for example.
However, when it comes to security, compliance tests are an important resource for ensuring that a given application’s configuration or deployment architecture meets minimum standards set by your organization. Compliance tests typically work by comparing actual configurations with those that are deemed to be safe. When the tests identify incongruity, admins know that there may be a security issue or other problem.
Load testing
Load testing refers to tests that measure how an application or infrastructure performs under heavy demand. Load testing is not often thought of as a type of security test; it’s more commonly used to help optimize application performance and availability.
However, there is a reason why security admins might want to pay attention to load testing results, too. That reason is Distributed-Denial-of-Service, or DDoS, attacks, which aim to disrupt application availability by overwhelming an application or its host infrastructure with traffic or other requests.
Origin analysis testing
As the popularity of open source software has grown over the past decade, so has the importance of origin analysis testing. This type of testing helps developers and security admins determine where a given piece of source code originated.
In cases where some of your source code came from a third-party project or repository — which is very common these days, given the ease with which developers can incorporate upstream open source code into their applications — security admins will need to make sure that any known vulnerabilities in that code are addressed, and that the code conforms to internal security standards.