I don't usually post in forums because normally I can find any answer I need using Google. However every search that I am running is giving me very specific results, such as buffer overflow vulnerabilities that already exist for a specific game or system which is not what I need.
I have a home network including Windows Server 2008 R2 and my son wants to start a Minecraft server which of course I want to give him full access to so he can learn. However, I know that every game is "moddable" and he uses custom maps and the like in a lot of his games.
My concern is that I am going to create security risks on my network because of inexperienced programming. Will giving him the ability to install and create mods on my server potentially open up vulnerabilities (beyond the open Minecraft ports) due to the possible inexperience of the people actually writing the mods? Or do mods simply not work that way, and I can't find an answer to my question because it's a silly one and no one actually "programs" a mod, lol?
Depends on the way the mod system works for the game, and whether the game itself is sandboxed. Importantly, no software is perfectly secure. You have to decide what level of security and reliability you are happy with.
There are several ways a mod could expose a vulnerability:
1. The game could allow the mod access to an inappropriately permissive set of actions, such as access to the filesystem. This can include the developer not sandboxing the mod properly.
2. The mod could exploit a vulnerability in the game's API to access actions the game developer didn't intend. This would be due to a bug in the API.
3. The mod could exploit a vulnerability in the language engine (for example, Java has a long history of security vulnerabilities).
4. The mod itself could be vulnerable to attack, and could be made to launch one of the attacks above.
If the mod system is script- or VM-based (Lua, JavaScript or Java, for example), I would feel relatively safe installing mods, so long as the game has a well-implemented API/sandbox, because exploits 2-4 are relatively unlikely.
(My understanding of native code mods/plugins is limited, but I'm pretty sure you MUST trust a native mod if you want to run it. Even if you do, it might still be exploitable. )
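To make the "sandbox" idea concrete, here is a minimal, purely illustrative sketch (not Mojang's API or any real game's) of how a Java host can refuse dangerous actions to code it loads, using the old SecurityManager mechanism (available in the Java of that era; deprecated in current JDKs):

```java
import java.io.FilePermission;
import java.security.Permission;

// Purely illustrative sandbox (NOT a real mod API): once installed, this
// SecurityManager rejects any filesystem access and refuses to let the
// sandboxed code replace the manager itself. Everything else is allowed.
public class ModSandboxDemo {

    static class ModPolicy extends SecurityManager {
        @Override
        public void checkPermission(Permission perm) {
            if (perm instanceof FilePermission) {
                throw new SecurityException("mods may not touch the filesystem: " + perm.getName());
            }
            if ("setSecurityManager".equals(perm.getName())) {
                throw new SecurityException("mods may not remove the sandbox");
            }
            // Everything else is permitted in this toy example.
        }
    }

    public static void main(String[] args) {
        System.setSecurityManager(new ModPolicy());

        // Pretend the next lines are code loaded from a downloaded mod.
        try {
            System.out.println(new java.io.File("C:/Windows/win.ini").exists());
        } catch (SecurityException e) {
            System.out.println("Blocked: " + e.getMessage());
        }
    }
}
```

A real mod API would use separate class loaders and a proper permission policy rather than a blanket deny, but the principle is the same: the host decides what the mod is allowed to ask for.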
My understanding of Minecraft mods is that they are written in Java. My feeling about the Mojang guys is that they know what they're doing, so I'd be surprised if their mod API isn't exceptionally well designed and implemented. Having said that, installing mods necessarily introduces a security risk.
If this risk is unacceptable, you can reduce it by adding depth to the system. Why not, say, run your Minecraft server in a virtual machine with limited access to the network (only the required ports open, for example)? That way the impact of any vulnerability is greatly reduced.
I'd recommend creating an Ubuntu VM on VirtualBox (because they're both free as in beer), but you could install it on whatever OS you're comfortable with.
Buffer overrun vulnerabilities are associated with programming languages that permit unchecked memory access. Minecraft is written in Java, a language which is not susceptible to buffer overruns, so a pure-Java mod would be very unlikely to exhibit anything resembling this kind of vulnerability.
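A trivial, contrived illustration of the difference: where a C program writing past the end of a buffer silently corrupts whatever sits next to it in memory, the Java runtime checks every array access and throws an exception instead.

```java
// Contrived illustration: Java checks array bounds at runtime, so writing
// past the end of a buffer raises an exception instead of corrupting memory.
public class BoundsDemo {
    public static void main(String[] args) {
        byte[] buffer = new byte[8];
        byte[] input = "longer than eight bytes".getBytes();
        for (int i = 0; i < input.length; i++) {
            buffer[i] = input[i];   // throws ArrayIndexOutOfBoundsException at i == 8
        }
    }
}
```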
Naturally, programs in Java can still be vulnerable to other kinds of security issues, either against the game itself (e.g. there have been game-account login exploits against Minecraft servers) or against the server (I'm not aware of any known cases of this for Minecraft, but it's always possible). The usual mitigations for running servers apply: for example, lock down network access to known-good IPs if possible, run the server as a limited user, and so on.
Related
What if I develop a desktop application that a million people will use, and behind the scenes the application is surveilling users' files on their hard drives and streaming the data out from time to time?
Can one be assured that no such thing happens with any popular software application, be it MS Office or Google Chrome?
Or is this just a stupid question?
Is it technically possible? Yes, it is.
Could it be happening in an application used by a million users for a relatively long time without being noticed? Very unlikely. Somebody would notice the strange network traffic eventually.
Also, @Mjh mentioned open source in a comment. While open source can help by allowing people to audit the source code, how many times have you checked that the binary you are using was actually compiled from the source you were looking at? Of course, there are signatures on binary packages and all, but the signature is made by the package maintainer. There is an inherent trust not only in the developer of the application, but also in the toolchain that creates a binary package from the source code. And then we haven't talked about strange "bugs", or the fact that even in open source, some security issues are very hard to find (otherwise all open source software would be free of security bugs, which it is not).
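For what it's worth, the mechanical part of that check is easy. Here is a minimal sketch (assuming the project publishes a SHA-256 hash for its releases) of hashing a downloaded binary so you can compare it against the published value; note that it only pushes the trust one level up, to whoever produced the hash and the build.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

// Sketch: compute the SHA-256 digest of a downloaded file (path given as the
// first argument) so it can be compared against the hash the project publishes.
// This only proves the file matches what the maintainer published, not that the
// maintainer's toolchain was honest.
public class ChecksumDemo {
    public static void main(String[] args) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                sha256.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : sha256.digest()) {
            hex.append(String.format("%02x", b & 0xff));
        }
        System.out.println(hex);
    }
}
```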
So back to your question, sure, you could use all kinds of techniques to monitor the behavior of an application, you could monitor memory access, network traffic, whatever else. You can also analyse the code itself, look for suspicious things. It will take a huge amount of effort and still there will be no 100% guarantee, only some level of assurance.
Automated version upgrades could make detection even harder by the way. Even if you put lots of resources into analysis of one version, what if only a short-lived version had malicious code? Sure, that too can be analysed, but would anyone bother, unless there was a good reason (like indications of something malicious)?
Yet I think you can be pretty sure that major vendors don't do this. It's just not worth it for them, why would they? Their risk would be huge, with a relatively low benefit.
I mean in operating systems or their applications. The only way I can think of is to examine binaries for the use of dangerous functions like strcpy(), and then try to exploit those. Though with compiler improvements like Visual Studio's /GS switch, this possibility should mostly be a thing of the past. Or am I mistaken?
What other ways do people use to find vulnerabilities? Just load your target in a debugger, then send unexpected input and see what happens? This seems like a long and tedious process.
Could anyone recommend some good books or websites on this subject?
Thanks in advance.
There are two major issues involved with "Client Side Security".
The most commonly exploited client today is the browser, in the form of "drive-by downloads". Most often memory corruption vulnerabilities are to blame. ActiveX COM objects have been a common path on Windows systems, and AxMan is a good ActiveX fuzzer.
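In case "fuzzer" is an unfamiliar term: stripped to its bare minimum, the idea looks something like the toy sketch below. Real fuzzers such as AxMan are far smarter about how they generate and mutate inputs, and parseRecord here is just a made-up stand-in for whatever code you actually want to test.

```java
import java.util.Random;

// A toy "dumb fuzzer": throw random bytes at a parsing routine and record the
// inputs that make it blow up. parseRecord() is a hypothetical target that
// expects a 2-byte big-endian length followed by that many bytes.
public class TinyFuzzer {

    static void parseRecord(byte[] data) {
        int len = ((data[0] & 0xff) << 8) | (data[1] & 0xff);
        byte[] body = new byte[len];
        System.arraycopy(data, 2, body, 0, len);   // blows up on malformed lengths
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int crashes = 0;
        for (int i = 0; i < 100_000; i++) {
            byte[] input = new byte[rng.nextInt(64)];
            rng.nextBytes(input);
            try {
                parseRecord(input);
            } catch (RuntimeException e) {
                if (++crashes <= 3) {              // show only the first few failures
                    System.out.println("crash on input of length " + input.length + ": " + e);
                }
            }
        }
        System.out.println(crashes + " crashing inputs out of 100000");
    }
}
```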
In terms of memory protection systems, /GS is a stack canary, and it isn't the be-all and end-all for stopping buffer overflows. It only aims to protect against stack-based overflows that attempt to overwrite the return address and control EIP. NX zones and canaries are good things, but ASLR can be a whole lot better at stopping memory corruption exploits, and not all ASLR implementations are made equally secure. Even with all three of these systems you're still going to get hacked. IE 8 running on Windows 7 had all of this, and it was one of the first to be hacked at Pwn2Own; the attack involved chaining together a heap overflow and a dangling-pointer vulnerability.
The core problem with "client-side security" is CWE-602: Client-Side Enforcement of Server-Side Security, which arises when the server side trusts the client with secret resources (like passwords) or relies on it to report sensitive information, such as the player's score in a Flash game.
The best way to look for client-side issues is by looking at the traffic. Wireshark is the best for non-browser client/server protocols. TamperData, however, is by far the best tool you can use for browser-based platforms such as Flash and JavaScript. Each case is going to be different; unlike buffer overflows, where it's easy to see the process crash, client-side trust issues are all about context, and it takes a skilled human looking at the network traffic to figure out the problem.
Sometimes foolish programmers will hardcode a password into their application. It's trivial to decompile the app to obtain the data. Flash decompiles very cleanly, and you'll even get full variable names and code comments. Another option is using a debugger like OllyDbg to try to find the data in memory. IDA Pro is the best disassembler (and, with the Hex-Rays plugin, decompiler) for C/C++ applications.
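As a made-up illustration of why hardcoded secrets are hopeless: string constants survive compilation intact, so even the JDK's own javap shows them without needing a full decompiler.

```java
// A made-up example of the hardcoded-secret mistake. After compiling, running
//   javap -c LicenseCheck
// prints the bytecode, and the "s3cr3t-master-key" literal is right there in the
// constant pool; any decompiler will reconstruct the whole method around it.
public class LicenseCheck {
    private static final String MASTER_KEY = "s3cr3t-master-key";

    public static boolean isLicensed(String userKey) {
        return MASTER_KEY.equals(userKey);
    }

    public static void main(String[] args) {
        System.out.println(isLicensed(args.length > 0 ? args[0] : ""));
    }
}
```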
Writing Secure Code, 2nd edition, includes a bit about threat modeling and testing, and a lot more.
Is there a site, or is there a simple way of setting up one, which demonstrates what can happen with a buffer overrun? This is in the context of a web app.
The first question is... what web server or library? IIS 6 running the ASP script interpreter? ASP.NET behind IIS 7? Apache Tomcat? Some library like the HTTPServer in Poco?
To exploit a buffer overrun you must first find a specific place in an application where the overrun can occur, and then figure out exactly which bytes must be sent to overwrite memory with the right values, run out onto the stack, and execute the machine code you've sent in the overrun (because you've overwritten the stack).
http://en.wikipedia.org/wiki/Buffer_overflow
http://seclists.org/fulldisclosure/2005/Apr/412
http://www.microsoft.com/technet/security/bulletin/MS06-034.mspx
http://www.owasp.org/index.php/Buffer_Overflow
http://www.windowsecurity.com/articles/Analysis_of_Buffer_Overflow_Attacks.html
Bring on the flames, but I would say that any codebase of sufficient size contains potentially exploitable buffer overflows regardless of the execution environment or source language. Sure, some languages and platforms let app developers "shoot themselves in the foot" more easily than others, but it all comes down to individual diligence, project complexity and the use of third-party code, and then there are always exploits in the base implementations.
Buffer overruns are here to stay; it's just a question of mitigating exposure through safe programming practices and diligently monitoring for new exploits as they are discovered, to manage the risk.
A grand example is that for a web app, there's the chance for overflows in the networking implementations of all the equipment and applications handling delivery of your bytes across the Internet.
Consider also the distinction between a client trying to break a server and one client trying to compromise other clients accessing a specific server.
Besides the simple "C" style overruns where the developer failed to follow secure programming practices, there are hidden gems like integer overflows.
Even if a certain server host environment is known to be highly stable, it no doubt makes use of a database and other operating system facilities which themselves might be exploitable through the webserver.
When you get right down to it, nothing is secure and it's really just a matter of complexity.
The only way a specific application or server can stay secure is through diligent monitoring, so issues can be corrected when questionable behavior is discovered. I highly doubt Walmart or FedEx or any other major corporation is happy to publicize breaches. Instead they monitor, fix and move on.
One could even say that SQL injection is an abstract form of buffer overrun: you're simply picking where you want the buffer to end using specific characters, and then providing the code you wish to execute in the right place.
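To make that concrete, here is a generic JDBC sketch (the table and column names are invented): in the vulnerable version, attacker-chosen characters decide where the query "ends", while the parameterized version never lets the data be interpreted as code.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Generic JDBC sketch of SQL injection and its fix; schema names are invented.
public class InjectionDemo {

    // VULNERABLE: user input is spliced directly into the SQL text, so input
    // like  ' OR '1'='1  changes the meaning of the query.
    static ResultSet findUserUnsafe(Connection db, String name) throws SQLException {
        Statement st = db.createStatement();
        return st.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
    }

    // SAFER: a parameterized query sends the SQL and the data separately;
    // the driver never lets the data terminate or extend the statement.
    static ResultSet findUserSafe(Connection db, String name) throws SQLException {
        PreparedStatement st = db.prepareStatement("SELECT * FROM users WHERE name = ?");
        st.setString(1, name);
        return st.executeQuery();
    }
}
```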
They tend not to occur in web apps. Google's Web Application Exploits and Defenses codelab has a short section on it but doesn't demonstrate it:
This codelab doesn't cover overflow vulnerabilities because Jarlsberg is written in Python, and therefore not vulnerable to typical buffer and integer overflow problems. Python won't allow you to read or write outside the bounds of an array and integers can't overflow. While C and C++ programs are most commonly known to expose these vulnerabilities, other languages are not immune. For example, while Java was designed to prevent buffer overflows, it silently ignores integer overflow.
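A contrived example of that last point about Java: int arithmetic wraps silently, so a size check that looks sound can be bypassed.

```java
// Contrived example of silent integer overflow in Java: the "total size" check
// passes even though the real total is enormous, because the multiplication wraps.
public class OverflowDemo {
    public static void main(String[] args) {
        int chunkSize  = 1_500_000_000;       // claimed size of one chunk
        int chunkCount = 2;                   // claimed number of chunks

        int total = chunkSize * chunkCount;   // wraps to -1294967296
        if (total <= 10_000_000) {            // "small enough", because total is negative
            System.out.println("Check passed, allocating " + total + " bytes?!");
        }

        // The safe way: detect the wrap explicitly (Java 8+).
        try {
            int safeTotal = Math.multiplyExact(chunkSize, chunkCount);
            System.out.println("Safe total: " + safeTotal);
        } catch (ArithmeticException e) {
            System.out.println("Rejected: size calculation overflowed");
        }
    }
}
```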
It really depends on the language you're using as to whether buffer overflow is even possible.
I have read the Wikipedia article, but I am not really sure what it means, and how similar it is to version control.
It would be helpful if somebody could explain in very simple terms what sandboxing is.
A sandpit or sandbox is a low, wide container or shallow depression filled with sand in which children can play. Many homeowners with children build sandpits in their backyards because, unlike much playground equipment, they can be easily and cheaply constructed. A "sandpit" may also denote an open pit sand mine.
Well, a software sandbox is no different from a sandbox built for a child to play in. By providing a sandbox for a child, we simulate the environment of a real playground (in other words, an isolated environment) but with restrictions on what the child can do, because we don't want the child to get hurt or to cause trouble for others. :) Whatever the reason, we just want to put restrictions on what the child can do, for security reasons.
Now, coming to our software sandbox: we let any software (the child) execute (play), but with restrictions on what it can do, so we can feel safe and secure about what the executing software is able to do.
You've seen and used antivirus software, right? It is also a kind of sandbox: it puts restrictions on what any program can do. When malicious activity is detected, it stops the program and informs the user that "this application is trying to access such-and-such resources. Do you want to allow it?".
Download a program named Sandboxie and you can get hands-on experience with a sandbox. Using this program you can run any program in a controlled environment.
(Sandboxie's documentation illustrates this with an animation: a program running normally writes its changes straight to the hard disk, while a program running under Sandboxie has its changes intercepted and isolated inside the sandbox; grouping the changes together also makes it easy to delete them all at once.)
Now, from a programmer's point of view, a sandbox means restricting the API available to the application. In the antivirus example, we are limiting the system calls (the operating system API) a program can make.
Another example would be online coding arenas like TopCoder. You submit code (a program), but it runs on their server. For the safety of the server, they have to limit which APIs the program can access. In other words, they need to create a sandbox and run your program inside it.
If you have a proper sandbox you can even run a virus-infected file, stop all the malicious activity of the virus, and see for yourself what it is trying to do. In fact, this is often the first step an antivirus researcher takes.
This definition of sandboxing basically means having test environments (developer integration, quality assurance, stage, etc). These test environments mimic production, but they do not share any of the production resources. They have completely separate servers, queues, databases, and other resources.
More commonly, I've seen sandboxing refer to something like a virtual machine -- isolating some running code on a machine so that it can't affect the base system.
For a concrete example: suppose you have an application that deals with money transfers. In the production environment, real money is exchanged. In the sandboxed environment, everything runs exactly the same, but the money is virtual. It's for testing purposes.
Paypal offers such a sandboxed environment, for example.
In software development, a "sandbox" means developing in an isolated way, without disturbing others.
It is not the same thing as version control, but some version control features (such as branching) can help in creating sandboxes.
More often, though, we refer to the other kind of sandbox.
In any case, a sandbox usually means an isolated environment. You can do anything you like in the sandbox, but its effects won't propagate outside the sandbox. For instance, in software development, that means you don't need to mess with the stuff in /usr/lib to test your library, and so on.
A sandbox is an isolated testing environment that enables users to run programs or execute files without affecting the application, system, or platform on which they run. Software developers use sandboxes to test new programming code; cybersecurity professionals, in particular, use sandboxes to test potentially malicious software. Without sandboxing, an application or other system process could have unlimited access to all the user data and system resources on a network.
Sandboxes are also used to safely execute malicious code to avoid harming the device on which the code is running, the network, or other connected devices. Using a sandbox to detect malware offers an additional layer of protection against security threats, such as stealthy attacks and exploits that use zero-day vulnerabilities.
We know that any executable file can be reverse engineered (disassembled, decompiled). No matter how strong the security you implement, if crackers want to crack it, they will; it is just a question of time.
What about websites? Can we say that a website can be completely safe from attacks by hackers (assuming the hosting itself is not vulnerable)? If not, what is the reason?
Yes it is always possible to do. There is always a way in.
It's like my grandfather always said:
"Locks are meant to keep the honest people out."
May we say that website can be completely safe from attacks of hackers?
No. Even the most secure technology in the world is vulnerable to social engineering attacks, for one thing.
You can easily write a web app that is mathematically proven to be secure... but that proof will only hold as long as the underlying operating system, interpreter/compiler, and hardware are secure, which is never the case.
The key thing to remember is that websites are usually part of a huge and complex system and it doesn't really matter if the hacker enters the system through the web application itself or some other part of the entire infrastructure. If someone can get access to your servers, routers, DNS or whatever, they can bring down even the best web application. In my experience a lot of systems are vulnerable in some way or another. So "completely secure" means either "we're trying really hard to secure the platform" or "we have no clue whatsoever, but we hope everything is okay". I have seen both.
To sum up and add to the posts that precede:
Web as a shared resource - websites are useful only so long as they are accessible. Render the website inaccessible and you've broken it. Denial-of-service attacks, which amount to flooding the server so that it can no longer respond to legitimate requests, will always be a factor. It's a game of keep-away: big server sites find ways to distribute, hackers find ways to deluge.
Dynamic data = dynamic risk - if the user can input data, there's a chance for a hacker to be a menace. Today the big concepts are cross-site scripting and SQL injection, but once one avenue for cracking is figured out, chances are high that another mechanism will rise. You could, conceivably, argue that a totally static site can be secure from this, but then how many useful sites fit that bill?
Complexity = the more complex, the harder to secure - given the rapid change of technology, I doubt that any web developer could say with 100% confidence that a modern website is secure; there's too much unknown code. Setting the host aside (the server, network protocols, OS, and maybe database), there are still all the great new libraries in Java EE and .NET. And even a less enterprise-y architecture will have some serious complexity that makes knowing all potential inputs and outputs of the code prohibitively difficult.
The authentication problem = by definition, a website lets a remote user do something useful on a server that is far away. Knowing and trusting the other end of the communication is an old challenge. These days server-side authentication is relatively well implemented and understood, and (so far as I know!) no one has managed to hack PKI itself. But getting user authentication ironed out is still quite tricky. It's doable, but it's a trade-off between difficulty for the user, configuration effort, and a higher risk of vulnerability. And even a strong system can be broken when users don't follow the rules or when accidents happen. None of this applies if you make a fully public site open to all users, but that severely limits the features you'll be able to implement.
I'd say that web sites simply change the nature of the security challenge from the challenges of client side code. The developer does not need to be as worried about code replication, but the developer does need to be aware of the risks that come from centralizing data and access to a server (or collection of servers). It's just a different sort of problem.
Websites suffer greatly from injection and cross-site scripting attacks:
Cross-site scripting carried out on websites accounted for roughly 80% of all documented security vulnerabilities as of 2007.
Also, part of a website (in some websites a great deal) is sent to the client in the form of CSS, HTML and JavaScript, which is open to inspection by anyone.
Not to nitpick, but your definition of "good hosting" does not guarantee that the HTTP service running on the host is completely free of exploits.
Popular web servers such as IIS and Apache are often patched in order to protect against such exploits, which are often discovered the same way exploits in local executables are discovered.
For example, a malformed HTTP request could cause a buffer overrun on the server, leading to part of its data being executed.
It's not possible to make anything 100% secure.
All that can be done is to make something hard enough to break into, that the time and effort spent doing so makes it not worth doing.
Can I crack your site? Sure: I'll just hire a few suicide bombers to blow up your servers. Or I'll blow up the power plants that power your site, or do some sort of social engineering; and DDoS attacks would quite likely be effective on a large scale, not to mention atom bombs...
Short answer: yes.
This might be the wrong website to discuss this on. However, it is widely known that security and usability are inversely related. See this post by Bruce Schneier, for example (it refers to another website, but Schneier's blog has a lot of interesting reading on the issue).
Assuming the server itself isn't compromised and has no other clients sharing it, static code should be fine. Things usually only start to get funky when there's some sort of scripting language involved. After all, I've never seen a compromised "It works!" page.
Saying "completely secure" is a bad sign, as it implies one of two things:
there has not been a proper threat analysis, because "secure enough" would be the correct term;
or, since security is always a trade-off, a system that really was completely secure would have abysmal usability, and the site would be a huge resource hog because security has been taken to insane levels.
So instead of trying to achieve "complete security", you should:
Do a proper threat analysis
Test your application (or have someone professional test it) against common attacks
Apply best practices, not extreme measures
The short of it is that you have to strike a balance between ease of use and security, much of the time, and decide what provides the optimal level of both for your purposes.
An excellent case in point is passwords. The easy way to go about it is to just have one, use it everywhere, and make it something easy to remember. The secure way to go about it is to have a randomly generated variable-length sequence of characters across the encoding spectrum that only the user himself knows.
Naturally, if you go too far on the easy side, the user's data is easy to pick off. If you go too far on the side of security, however, practical application can end up compromising the added value of the security measures (e.g. people can't remember their whole keychain of passwords and corresponding user names, and therefore write them all down somewhere; if that list is compromised, the security measures that had been put in place are for naught). Hence, most of the time a balance gets struck: sites ask that you put a number in your password and tell you not to do anything stupid like share it with other people.
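Generating such a password is the easy part; here is a small sketch using Java's SecureRandom (the alphabet and the 20-character length are arbitrary choices). The hard part, as above, is that nobody can remember twenty of these.

```java
import java.security.SecureRandom;

// Small sketch: generate a random password from a cryptographically strong
// source. The alphabet and length are arbitrary illustrative choices.
public class PasswordGen {
    private static final String ALPHABET =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_=+";

    public static String generate(int length) {
        SecureRandom rng = new SecureRandom();
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(rng.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate(20));
    }
}
```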
Even if you remove the possibility of a malicious person with the keys to everything leaking data from the equation, human stupidity is infinite. There is no such thing as 100% security.
Can we say that a website can be completely safe from attacks by hackers (assuming the hosting is not vulnerable)?
Well if we're going to start putting constraints on the attacker, then of course we can design a completely secure system: we just have to bar all of the attacker's attacks from the scenario.
If we assume the attacker actually wants to get in (and isn't bound by the rules of your engagement), then the answer is simply no, you can't be completely safe from attacks.
Yes, it's possible for a website to be completely secure, for a reasonable definition of 'complete' that includes your original premise that the hosting is not vulnerable. The problem is the same as with any software that contains defects; people create software of a complexity that is slightly beyond their capability to manage and thus flaws remain undetected until it's too late.
You could start smaller and prove all your work correct and safe as you construct it, remaking any off-the-shelf components that haven't been designed to that stringent degree of quality, but unfortunately that leaves you at a massive commercial disadvantage compared to the people who can write 99% safe software in 1% of the time. Therefore there's rarely a good business reason for going down this path.
The answer to this question lies close to the ideas of computability theory that arise from the halting problem (http://en.wikipedia.org/wiki/Halting_problem). To wit: if you could say with clarity that you had devised a way to programmatically determine whether any particular program was secure, you would be close to disproving the undecidability of the halting problem for the class of machines you were working with. Since the undecidability of the halting problem has been proven, we know that over Turing machines you would be unable to prove security in general, because deciding security is at least as hard as the halting problem. Even for finite machines you might be able to enumerate all of the states of the program, but Minsky would tell us that the complete state tree for even a simplistic modern machine or web server would be huge, and the time required to explore it enormous. You might know a lot about a specific piece of code, but as soon as you change or update that code, a complete retest is required. Fundamentally this is interesting because it all comes back to the concepts of information and meaning. Read about automated theorem proving to understand more about the limits of computational systems (http://en.wikipedia.org/wiki/Automated_theorem_proving).
The fact is that hackers are always one step ahead of developers; you can never consider a site to be bulletproof and 100% safe. You just avoid malicious stuff as much as you can!
In fact, you should follow a whitelist approach rather than a blacklist approach when it comes to security.