Is there a site, or is there a simple way of setting up one, which demonstrates what can happen with a buffer overrun? This is in the context of a web app.
The first question is... what webserver or library? IIS-6 running the ASP script interpreter? ASP.NET behind IIS-7? Apache Tomcat? Some library like the HTTPServer in Poco?
To exploit a buffer overrun you must first find a specific place in an application where the overrun can occur, and then figure out exactly which bytes to send so that the overwrite runs out into the stack and execution jumps to machine code you've included in the overrun (because you've rewritten the stack, including the saved return address).
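To make this concrete, here is a minimal sketch in C of the kind of code such an exploit targets (the function name and buffer size are made up for illustration):

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical vulnerable handler: copies an attacker-controlled
       request field into a fixed-size stack buffer with no length check. */
    void handle_request(const char *field)
    {
        char buf[64];
        strcpy(buf, field);   /* writes past buf if field is longer than 63 bytes */
        printf("got: %s\n", buf);
    }

A field longer than the buffer overwrites the saved return address that sits beyond it on the stack; a carefully crafted payload can then redirect execution into the attacker's own bytes.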
http://en.wikipedia.org/wiki/Buffer_overflow
http://seclists.org/fulldisclosure/2005/Apr/412
http://www.microsoft.com/technet/security/bulletin/MS06-034.mspx
http://www.owasp.org/index.php/Buffer_Overflow
http://www.windowsecurity.com/articles/Analysis_of_Buffer_Overflow_Attacks.html
Bring on the flames, but I would say that any codebase of sufficient size contains potentially exploitable buffer overflows, regardless of the execution environment or source language. Sure, some languages and platforms let app developers "shoot themselves in the foot" more easily than others, but it all comes down to individual diligence, project complexity, and use of 3rd-party code - and then there are always exploits against the base implementations.
Buffer overruns are here to stay; managing the risk is a question of mitigating exposure through safe programming practices and of diligently monitoring for new exploits as they are discovered.
A grand example: even for a web app, there's the chance of overflows in the networking implementations of all the equipment and applications that handle delivery of your bytes across the Internet.
Consider also the distinction between a client trying to break the server and one client trying to attack the other clients of a specific server.
Besides the simple "C" style overruns where the developer failed to follow secure programming practices, there are hidden gems like integer overflows.
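Here is a sketch of such a gem in C (names and sizes invented for illustration): the size calculation wraps around, so the allocation ends up far smaller than the copy loop assumes:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical integer-overflow bug: count * 8 wraps for huge counts,
       so malloc gets a tiny size while the copy loop still trusts count. */
    char *copy_records(const char *src, unsigned int count)
    {
        unsigned int size = count * 8;   /* wraps when count > UINT_MAX / 8 */
        char *buf = malloc(size);        /* tiny allocation after the wrap */
        if (buf == NULL)
            return NULL;
        for (unsigned int i = 0; i < count; i++)
            memcpy(buf + (size_t)i * 8, src + (size_t)i * 8, 8);   /* heap overrun */
        return buf;
    }

No strcpy() in sight, yet the result is the same: attacker-chosen bytes written far past the end of a buffer.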
Even if a certain server host environment is known to be highly stable, it no doubt makes use of a database and other operating system facilities which themselves might be exploitable through the webserver.
When you get right down to it, nothing is secure and it's really just a matter of complexity.
The closest a specific application or server can get is diligent monitoring, so that issues can be corrected when questionable behavior is discovered. I highly doubt Walmart or FedEx or any other major corporation is happy to publicize breaches. Instead they monitor, fix and move on.
One could even say that SQL injection is an abstract form of buffer overrun: you're simply picking where you want the buffer to end, using specific characters, and then supplying the code you wish to execute in the right place.
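A rough sketch of the analogy, with a hypothetical query built in C: the quote character plays the role of the buffer boundary, letting the attacker decide where data ends and "code" begins:

    #include <stdio.h>

    /* Naive query construction: user_input of "x' OR '1'='1" turns the
       WHERE clause into a tautology -- injected code in the right place. */
    void build_query(char *out, size_t outsz, const char *user_input)
    {
        snprintf(out, outsz, "SELECT * FROM users WHERE name = '%s'", user_input);
    }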
Buffer overruns tend not to occur in web apps themselves. Google's Web Application Exploits and Defenses codelab has a short section on them but doesn't demonstrate one:
This codelab doesn't cover overflow vulnerabilities because Jarlsberg is written in Python, and therefore not vulnerable to typical buffer and integer overflow problems. Python won't allow you to read or write outside the bounds of an array and integers can't overflow. While C and C++ programs are most commonly known to expose these vulnerabilities, other languages are not immune. For example, while Java was designed to prevent buffer overflows, it silently ignores integer overflow.
It really depends on the language you're using as to whether buffer overflow is even possible.
Related
I don't usually post in forums because I can normally find any answer I need using Google. However, every search I run gives me very specific results, such as buffer overflow vulnerabilities that already exist for a specific game or system, which is not what I need.
I have a home network that includes Windows Server 2008 R2, and my son wants to start a Minecraft server, which I of course want to give him full access to so he can learn. However, I know that every game is "moddable", and he uses custom maps and the like in a lot of his games.
My concern is that I am going to create security risks on my network based on inexperienced programming. Will giving him the ability to install and create mods on my server potentially open up vulnerabilities (outside of the open Minecraft ports) due to the possible inexperience of the people actually writing the mods? Or do mods simply not work that way, and I can't find an answer to my question because it's a silly one? lol
Depends on the way the mod system works for the game, and whether the game itself is sandboxed. Importantly, no software is perfectly secure. You have to decide what level of security and reliability you are happy with.
There are several ways a mod could expose a vulnerability:
1. The game could allow the mod access to an inappropriately permissive set of actions, such as access to the filesystem. This can include the developer not sandboxing the mod properly.
2. The mod could exploit a vulnerability in the game's API to access actions the game developer didn't intend. This would be due to a bug in the API.
3. The mod could exploit a vulnerability in the language engine (for example, Java has a long history of security vulnerabilities).
4. The mod itself could be vulnerable to attack, and could be made to launch one of the attacks above.
If the mod system is script or VM based, such as Lua, JavaScript or Java, I would feel relatively safe installing mods (so long as the game has a well implemented API/sandbox), because exploits 2-4 are relatively unlikely.
(My understanding of native code mods/plugins is limited, but I'm pretty sure you MUST trust a native mod if you want to run it. Even if you do, it might still be exploitable. )
My understanding of Minecraft mods is that they are written in Java. My feeling about the Mojang guys is that they know what they're doing, so I'd be surprised if their mod API weren't exceptionally well designed and implemented. Having said that, installing mods necessarily introduces a security risk.
If this risk is unacceptable, you can reduce it by adding defense in depth. Why not, say, run your Minecraft server in a virtual machine with limited access to the network (only the required ports, for example)? That way the impact of any vulnerability is greatly reduced.
I'd recommend creating an Ubuntu VM on VirtualBox (because they're both free as in beer), but you could use whatever OS you're comfortable with.
Buffer overrun vulnerabilities are associated with programming languages that permit unchecked memory access. Minecraft is written in Java, a language which is not susceptible to buffer overruns, so a pure-Java mod would be very unlikely to exhibit anything resembling this kind of vulnerability.
Naturally, programs in Java can still be vulnerable to other kinds of security issues, either against the game itself (e.g. there have been game-account login exploits against Minecraft servers) or against the server (I'm not aware of any known cases of this for Minecraft, but it's always possible). The usual mitigations for running servers apply: for example, lock down network access to known-good IPs if possible, run the server as a limited user, and so on.
In some languages (Java, C# without unsafe code, ...) it is (or at least should be) impossible to corrupt memory - there is no manual memory management, and so on. This makes it quite easy to restrict the resources an application can use (access to files, access to the network, maximum memory usage, ...) - e.g. Java applets (Java Web Start). This is sometimes called sandboxing.
My question is: is the same possible for native programs (e.g. ones written in a memory-unsafe language like C or C++, without having the source code)? I don't mean a simple, bypassable sandbox, or anti-virus software.
I think about two possibilities:
run the application as a different OS user and set restrictions for that user. Disadvantage: you'd need many users, one for every combination of parameters and access rights?
(somehow) limit which (OS API) functions can be called
I don't know whether either of these possibilities allows (at least in theory) full protection, with no possibility of bypass.
Edit: I'm interested more in the theory - I don't care that e.g. some OS has some undocumented functions, or how to sandbox a particular application on a given OS. For example: I want to sandbox an application and allow only two functions - get a char from the console and put a char to the console. How can that be done unbreakably, with no possibility of bypass?
Answers mentioned:
Google Native Client, which uses a subset of x86 - in development, together with (possibly?) PNaCl, a portable native client
a full VM - obviously overkill; imagine tens of programs...
In other words, could native (unsafe memory access) code be used within restricted environment, e.g. in web browser, with 100% (at least in theory) security?
Edit2: Google Native Client is exactly what I would like - any language, safe or unsafe, running at native speed in a sandbox, even in a web browser. Everybody can use whatever language they want, on the web or on the desktop.
You might want to read about Google's Native Client which runs x86 code (and ARM code I believe now) in a sandbox.
You pretty much described AppArmor in your original question. There are quite a few good videos explaining it which I highly recommend watching.
Possible? Yes. Difficult? Also yes. OS-dependent? Very yes.
Most modern OSes support various levels of process isolation that can be used to achieve what you want. The simplest approach is to attach a debugger and break on all system calls, then filter those calls in the debugger. This, however, is a large performance hit, and it is difficult to make safe in the presence of multiple threads. It is also difficult to implement safely on OSes where the low-level syscall interface is not documented, such as Mac OS or Windows.
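As a concrete, Linux-specific data point: the kernel's seccomp "strict mode" enforces almost exactly the get-a-char/put-a-char policy the question asks for, in the kernel rather than through a debugger. A minimal sketch:

    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        char c;

        /* From here on, the kernel permits only read(), write(), _exit()
           and sigreturn(); any other syscall kills the process with SIGKILL. */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0)
            return 1;   /* kernel too old: not sandboxed, bail out normally */

        while (read(STDIN_FILENO, &c, 1) == 1)   /* get char from console */
            write(STDOUT_FILENO, &c, 1);         /* put char to console */

        syscall(SYS_exit, 0);   /* raw exit(2): glibc's _exit() calls exit_group(), which strict mode forbids */
        return 0;
    }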
The Chrome browser folks have done a lot of work in this field. They've posted design docs for Windows, Linux (in particular the SUID sandbox), and Mac OS X. Their approach is effective but not totally foolproof - there may still be some minor information leaks between the outer OS and the guest application. In addition, some of the OSes require specific modifications to the guest program to be able to communicate out of the sandbox.
If some modification to the hosted application is acceptable, Google's Native Client is worth a look. It restricts the compiler's code-generation choices in such a way that the loader can prove the program doesn't do anything nasty. This obviously doesn't work on arbitrary executables, but it will get you the performance benefits of native code.
Finally, you can always simply run the program in question, plus an entire OS to itself, in an emulator. This approach is basically foolproof, but adds significant overhead.
Yes, this is possible IF the hardware provides mechanisms to restrict memory accesses. Desktop processors are usually equipped with an MMU and privilege levels, so the OS can employ these to deny a thread access to any memory address it should not have access to.
Virtual memory is implemented by the very same means: any access to memory currently swapped out to disk is trapped, the memory fetched from disk and then the thread is continued. Virtualization takes it a little farther, because it also traps accesses to hardware registers.
All the OS really needs to do is use those features properly, and it becomes impossible for any code to break out of the sandbox. Of course, this is much easier said than done: mostly because the OS takes liberties in favor of performance, there are oversights in what certain OS calls can be used to do, and, last but not least, there are bugs in the implementation.
Are there any known methods that can be used to make your software immune to viruses? To be more specific, the development is done in .NET. Is there anything built into .NET that would help with this? I had someone specifically ask me about this and I did not have the slightest clue.
Update: Immunity in this situation would mean immune to exploitation. At least I believe that's what the person asking the question to me wanted to know.
I realize this is an extremely open-ended question, and that software development is a long way from the point where software can really be immune to viruses.
When you say immune to viruses, do you mean immune to exploitation? I think any software being immune to viruses is some way off for any development platform.
In general, there are two important activities that lead to secure software:
1) Safety by design - does the inherent design of the system allow an attacker to control the host machine? However, a secure design won't protect against insecure coding.
2) Secure coding - most importantly, don't trust user input. However, secure coding won't protect against an insecure design.
There are several features of .NET that help reduce the likelihood of exploitation.
1) .NET takes care of memory allocation and destruction. This helps prevent exploitable heap corruption.
2) .NET, by default, bounds-checks array accesses. In C-based languages, writing past either end of an array has allowed attackers to write into the target process's memory.
3) The .NET execution model is not vulnerable to stack overflows in the same way as the normal x86 execution model; arrays are allocated on the heap.
4) You can set Code Access Security permissions; used properly, these help prevent untrusted code from performing privileged actions on the host machine.
I mean in operating systems or their applications. The only way I can think of is to examine binaries for the use of dangerous functions like strcpy(), and then try to exploit those. Though with compiler improvements like Visual Studio's /GS switch, this avenue should mostly be a thing of the past. Or am I mistaken?
What other ways do people use to find vulnerabilities? Just load your target in a debugger, then send unexpected input and see what happens? This seems like a long and tedious process.
Could anyone recommend some good books or websites on this subject?
Thanks in advance.
There are two major issues involved with "Client Side Security".
The most common client exploited today is the browser, in the form of "drive-by downloads". Most often, memory-corruption vulnerabilities are to blame. ActiveX COM objects have been a common path on Windows systems, and AxMan is a good ActiveX fuzzer.
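The idea behind a fuzzer, reduced to its dumbest form, is: generate malformed input, feed it to the target, and watch for crashes. A toy sketch in C (where "./target" is a placeholder for whatever binary you are testing):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        signal(SIGPIPE, SIG_IGN);   /* don't die if the target crashes mid-write */
        for (int run = 0; run < 1000; run++) {
            FILE *p = popen("./target 2>/dev/null", "w");
            if (p == NULL)
                return 1;
            for (int i = 0; i < 4096; i++)   /* 4 KB of random garbage on stdin */
                fputc(rand() % 256, p);
            int status = pclose(p);          /* nonzero often means a crash */
            if (status != 0)
                fprintf(stderr, "run %d: target exited with status %d\n", run, status);
        }
        return 0;
    }

Real fuzzers (AxMan included) are much smarter, generating inputs that are almost valid - which is where the interesting crashes live.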
In terms of memory protection systems, /GS is a canary, and it isn't the be-all and end-all for stopping buffer overflows: it only aims to protect against stack-based overflows that attempt to overwrite the return address and control the EIP. NX zones and canaries are good things, and ASLR can be a whole lot better at stopping memory-corruption exploits, but not all ASLR implementations are made equally secure. Even with all three of these systems you're still going to get hacked. IE 8 running on Windows 7 had all of this, and it was one of the first to be hacked at Pwn2Own; here is how they did it: it involved chaining together a heap overflow and a dangling-pointer vulnerability.
The problem with "client-side security" is CWE-602: Client-Side Enforcement of Server-Side Security. Such issues are created when the server side trusts the client with secret resources (like passwords) or trusts it to report sensitive information honestly, such as the player's score in a Flash game.
The best way to look for client-side issues is by looking at the traffic. Wireshark is the best for non-browser client/server protocols, but TamperData is by far the best tool you can use for browser-based platforms such as Flash and JavaScript. Each case is going to be different: unlike buffer overflows, where it's easy to see the process crash, client-side trust issues are all about context, and it takes a skilled human looking at the network traffic to figure out the problem.
Sometimes foolish programmers will hardcode a password into their application. It's trivial to decompile the app to obtain the data. Flash decompiles very cleanly - you'll even get full variable names and code comments. Another option is using a debugger like OllyDbg to try to find the data in memory. IDA Pro is the best disassembler for C/C++ applications.
Writing Secure Code, 2nd edition, includes a bit about threat modeling and testing, and a lot more.
In Windows Vista, I am unable to drag/drop files onto my application's window because it is running as a high integrity level process. I need to run it as high, but I also need to be able to accept dropped files from low/medium integrity level processes like Windows Explorer. I believe it is UIPI that is blocking the drag/drop operation. I know that I can use the ChangeWindowMessageFilter function to allow certain Windows messages to bypass UIPI, but I'm not sure which messages to add to allow the drag/drop operation. Is ChangeWindowMessageFilter the right approach to permit this, or is there a better way? Thanks!
Considering the title of this blog entry:
"Why you shouldn’t touch ChangeWindowMessageFilter with a 10-ft pole…",
I guess it is not the best approach ;)
Now, this might seem like a great approach at first - after all, you’ll only use ChangeWindowMessageFilter when you’re sure you can completely validate a received message even if it is from an untrusted source, such that there’s no way something could go wrong, right?
Well, the problem is that even if you do this, you are often opening your program up to attack unintentionally.
Consider for a moment how custom window messages are typically used; virtually all the common controls in existence have “dangerous” messages in the custom class message range (e.g. WM_USER and friends).
Additionally, many programs and third party libraries confuse WM_USER and WM_APP, such that you may have programs communicating cross process via both WM_USER and WM_APP, via “dangerous” messages that are used to make sensitive decisions or include pointer parameters.
In the comments of this blog entry, an alternative approach was discussed, but with pretty much the same conclusion:
I would use RegisterWindowMessage and then allow that via ChangeWindowMessageFilter.
However, be aware that you cannot design a cross-process window message interface that passes pointers or other untrusted values or you are creating a security hole.
For this reason, I would tend to avoid using window messages at all for most cross-process IPC (if possible), as it is typically very difficult to do non-trivial things in a secure fashion using them.
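That said, if you accept the risk for the original drag/drop question, the commonly cited workaround is to open up the three messages involved in an Explorer drop. A sketch, assuming Vista SP1 or later (note that WM_COPYGLOBALDATA has no official definition in the SDK headers):

    #define _WIN32_WINNT 0x0600   /* Vista, for ChangeWindowMessageFilter/MSGFLT_ADD */
    #include <windows.h>
    #include <shellapi.h>         /* WM_DROPFILES */

    #ifndef WM_COPYGLOBALDATA
    #define WM_COPYGLOBALDATA 0x0049   /* undocumented; widely reported as required for drag/drop */
    #endif

    void AllowDragDropFromLowerIntegrity(void)
    {
        /* Let these messages through UIPI into this elevated process. */
        ChangeWindowMessageFilter(WM_DROPFILES,      MSGFLT_ADD);
        ChangeWindowMessageFilter(WM_COPYDATA,       MSGFLT_ADD);
        ChangeWindowMessageFilter(WM_COPYGLOBALDATA, MSGFLT_ADD);
    }

All the caveats quoted above still apply: once these messages are let in, your window procedure must treat their contents as untrusted input.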
Note: this entry "So, who wants to design a feature today?" illustrates the same problem, and points to the insightful articles of Raymond Chen:
Why aren't console windows themed on Windows XP?
Windows Vista has more extended options on the context menu
which both detail the issue.
This ServerFault question "Why can’t I drag/drop a file for editing in notepad in Windows Server 2008?" also includes some answers, but no quick-win.
See also this article on IE