HTTPS on Cocos2d-x - security

I'm implementing a game app based on cocos2d-x. In order to technically prevent cheating, one idea is to use HTTPS for all client-server communication, which makes it difficult to work out the data format / game logic and send modified requests to cheat. (I know actually "preventing" it is impossible, but it's fine if it just increases the cost of cheating.) My questions are:
In Cocos2d-x, how do I make an HTTPS request? Is it possible?
More generally, what can be done technically to reduce this kind of game hacking? What strategy should I adopt?

For native cross-platform C++ networking you may consider the Boost C++ libraries. Boost.Asio is the one used for networking.
Boost.Asio link:
http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio.html
Boost.Asio tutorials link: http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/tutorial.html
Although not officially supported (only due to a lack of regression testing on iOS and Android), Boost runs without any problems on iOS and Android (and probably on other C++-based mobile platforms as well).
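For a feel of the API, here is a minimal synchronous HTTP GET sketch using the Boost 1.53-era Asio interface (io_service, resolver::query). It is plain HTTP only; for HTTPS you would additionally wrap the socket in a boost::asio::ssl::stream, which requires OpenSSL. The host is just a placeholder and error handling is trimmed.

// Minimal synchronous HTTP GET with Boost.Asio (Boost ~1.53 era API).
#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main() {
    using boost::asio::ip::tcp;

    boost::asio::io_service io;
    tcp::resolver resolver(io);
    tcp::resolver::query query("example.com", "http");   // placeholder host and service
    tcp::socket socket(io);
    boost::asio::connect(socket, resolver.resolve(query));

    std::string request =
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n\r\n";
    boost::asio::write(socket, boost::asio::buffer(request));

    // Read until the server closes the connection.
    boost::system::error_code ec;
    char buf[4096];
    for (;;) {
        std::size_t n = socket.read_some(boost::asio::buffer(buf), ec);
        if (n > 0) std::cout.write(buf, n);
        if (ec) break;   // typically boost::asio::error::eof when the response is done
    }
    return 0;
}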
To prevent cheating you usually rely on an external source (which can be your game server), e.g. if your game relies on the time of day you may get the time from an external server. You may use encryption libraries for data transfer on the client and server side.

By using the curl library you can make HTTPS connections.
If you want to technically protect your game, use your own strong encryption technique on top of that.
Thanks
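As a rough sketch of that (the URL is a placeholder; cocos2d-x already ships libcurl on most platforms, so the main thing is to keep certificate verification on and point CURLOPT_CAINFO at a CA bundle you ship with the app):

// Minimal HTTPS GET with libcurl. Error handling trimmed for brevity.
#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl calls this for each chunk of the response body.
static size_t writeBody(char* data, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(data, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    std::string body;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/api/score"); // placeholder URL
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writeBody);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    // Verify the server certificate; on mobile, ship a CA bundle with the app
    // and point CURLOPT_CAINFO at its on-disk path so verification works.
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
    // curl_easy_setopt(curl, CURLOPT_CAINFO, "cacert.pem");

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        std::cerr << "curl error: " << curl_easy_strerror(res) << "\n";
    else
        std::cout << body << "\n";

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}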

Hi, this is a problem we face all the time. If the cheating is limited to the cheater's own instance, the question is academic and can be studied in your spare time.
On the other hand, when your income is impacted, or when the cheater's actions affect other players and degrade the game experience, you should put some effort into testing the game state for inconsistencies, securing the client/server transactions, and dealing with cheating in very subtle ways so as not to completely deter the cheaters' interest.
C++ HTTPS implementations are available with curl and Boost.
Concerning the game data, the simplest things to test for inconsistencies are scores. You can add a few indicators to avoid polluting your leaderboards. If you can recalculate the score on the server, you can add special checksums based on the score's components (time spent in game, number of power-ups and score multipliers received...), and if inconsistencies are discovered you can deal with them.
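For example, a minimal sketch of such a checksum over made-up score components (a keyed MAC such as HMAC-SHA256 would be stronger, but the point is only that the server can recompute it from the reported components):

// Sketch: derive a checksum from the score's components so the server,
// which knows the same secret salt, can recompute and compare it.
#include <cstdint>
#include <string>

struct ScoreReport {          // hypothetical fields
    uint32_t score;
    uint32_t secondsPlayed;
    uint32_t powerUps;
    uint32_t multipliers;
};

// Simple FNV-1a hash; in production prefer a keyed MAC such as HMAC-SHA256.
static uint64_t fnv1a(const std::string& data) {
    uint64_t h = 14695981039346656037ULL;
    for (unsigned char c : data) {
        h ^= c;
        h *= 1099511628211ULL;
    }
    return h;
}

uint64_t scoreChecksum(const ScoreReport& r, const std::string& secretSalt) {
    std::string blob = secretSalt + '|' +
        std::to_string(r.score) + '|' +
        std::to_string(r.secondsPlayed) + '|' +
        std::to_string(r.powerUps) + '|' +
        std::to_string(r.multipliers);
    return fnv1a(blob);
}
// The client sends (report, checksum); the server recomputes scoreChecksum()
// and also re-derives the score from the components to flag inconsistencies.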
You can also grab snapshots of the game state plus a few commands, encode them, and replay the sequences on the server to check for inconsistencies. Deal with cheaters however you like.
When playing on a server, let the server manage the game state and allow no client-side game-state changes that would impact other players. Check for input consistency, etc.
When using microtransactions, each microtransaction should be verified with the vendor's servers before being fully committed to the player's account.
Even though these papers (1, 2) from Valve refer to FPS games, they should give you some pointers on how to deal with state inconsistencies (introduced by communication delays). That should help you avoid false positives and ruining the experience for non-cheaters.

Related

How to integrate a Node.js server in my cocos2d-x (C++) game

I am new to multiplayer game development. I have already developed an offline game and now I want to make it multiplayer, so with the help of my friend we created a server-side script in Node.js, but I don't know how to integrate this into my C++ project.
I've googled but can't find anything helpful.
Can anybody suggest a tutorial?
Thanks
You've asked a big, open-ended question. As Allern suggests, there are a lot of things you can do with networked programs that extend well beyond a single-user game. For instance, in my current game there is access to a version welcome page in HTML, there are file downloads for campaign/user maps, and there are connections to Firebase for leaderboards and other networked resources like ads.
However, I suspect you are referring to communication between a number of separate user machines, all synchronized to keep them coordinated. For this you will need to write some serialization code to transmit packets to, and receive them from, the central server. Typically a serialization package like FlatBuffers is needed to move information from your data structures into a packet and back.
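With FlatBuffers you would define a schema and use the generated accessors; purely as an illustration of the idea, here is a hand-rolled stand-in that packs a hypothetical position update into bytes and unpacks it again (no endianness or versioning handling, which is exactly what a real serialization library takes care of):

// Hand-rolled stand-in for what a serialization library does:
// flatten a struct into bytes for the wire and rebuild it on receipt.
#include <cstdint>
#include <cstring>
#include <vector>

struct PositionUpdate {      // hypothetical message
    uint32_t playerId;
    float    x, y;
};

std::vector<uint8_t> serialize(const PositionUpdate& msg) {
    std::vector<uint8_t> out(sizeof(uint32_t) + 2 * sizeof(float));
    uint8_t* p = out.data();
    std::memcpy(p, &msg.playerId, sizeof msg.playerId); p += sizeof msg.playerId;
    std::memcpy(p, &msg.x, sizeof msg.x);               p += sizeof msg.x;
    std::memcpy(p, &msg.y, sizeof msg.y);
    return out;
}

PositionUpdate deserialize(const std::vector<uint8_t>& in) {
    PositionUpdate msg{};
    const uint8_t* p = in.data();
    std::memcpy(&msg.playerId, p, sizeof msg.playerId); p += sizeof msg.playerId;
    std::memcpy(&msg.x, p, sizeof msg.x);               p += sizeof msg.x;
    std::memcpy(&msg.y, p, sizeof msg.y);
    return msg;
}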
You might also require communication/networking software to asynchronously send and receive those packets (this may be included in whatever game engine you are using). Boost.Asio might help otherwise. There are numerous other networking packages and libraries, all the way down to the bare-bones Unix/POSIX calls (or Windows OS calls).
You will also need software on the server side to log users in, deal with players disconnecting, and do the main work of passing the game packets around. This software may also implement the logic of your game (game rules) and might save the data if you want users to be able to play the game over multiple sessions (like a big dungeon crawl). There might be packages out there that do most of the server-side work. If so, please post what you find out.
Cocos2d-x does have some networking support built in, but it isn't very functional as far as I'm concerned. It does have facilities to display web pages and download files fairly easily, but the async communication seems a little weak. You can try the Network module in the API docs, which may have what you are looking for.
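For what it's worth, here is roughly what a request through that module looks like, assuming a cocos2d-x 3.x-style network::HttpClient; the URL and payload are placeholders:

// Sketch only: a POST through cocos2d-x's network module (3.x-style API).
#include "cocos2d.h"
#include "network/HttpClient.h"
#include <cstring>

using namespace cocos2d;
using namespace cocos2d::network;

static void sendScore()
{
    auto* request = new HttpRequest();
    request->setUrl("https://example.com/api/score");        // placeholder URL
    request->setRequestType(HttpRequest::Type::POST);
    const char* postData = "score=100&checksum=abc";          // placeholder payload
    request->setRequestData(postData, strlen(postData));
    request->setResponseCallback([](HttpClient*, HttpResponse* response) {
        if (response && response->isSucceed()) {
            std::vector<char>* data = response->getResponseData();
            std::string body(data->begin(), data->end());
            CCLOG("server said: %s", body.c_str());
        } else {
            CCLOG("request failed");
        }
    });
    HttpClient::getInstance()->send(request);
    request->release();   // HttpClient retains it while the request is in flight
}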
Since the type of game and how you want to implement your player interaction will dictate how the software is to be built, I'm afraid this answer is a little vague. Good luck. Share your insights.
You can use public tools; one example is WebSocket, which supports both C++ and JavaScript.

Using node.js as a gaming server to keep track of game state

I am in the planning phase of a mobile multiplayer real-time game and I am considering what architecture to implement in order to keep track of state within a game lobby.
I have considered making the game peer-to-peer, where all devices within the same game lobby (4 players max) emit their positions to the other players as the game progresses. I have also considered having the players connect to a server that keeps track of the game state and sends the state to every player.
If I do go this route and implement the server using Node, what are some considerations I need to look at?
I predict that scalability will be a major issue, as the server has to keep track of the state of every game lobby, each nominally running at 60 fps. If I have 100 active lobbies I can foresee the issues that could arise. Will this be the case, and should I look into a different networking architecture? Or could I get significant capacity before the server reaches its maximum?
It depends highly on the kind of game.
The peer-to-peer solution is probably tricky, and cheating would be pretty easy unless you validate everything on every client; since you want to run on mobile clients, that would slow down the game a lot.
It makes more sense to validate the data on a central server and just send out the validated data.
In general the FPS doesn't matter for the server at all.
Every client just emits user input to the server, not every frame (button pressed, button released).
The server uses all this information, calculates what needs to happen, and sends regular updates to the clients.
The client then renders everything something like 200 ms in the past (really depending on the kind of game) and "predicts" what the server will send in the next update to make it look smooth.
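A rough sketch of that authoritative loop (shown in C++ just for illustration; the types and helpers are made up, and a real server would guard the input queue or run single-threaded, but the same structure applies in Node): queue inputs as they arrive, advance the simulation at a fixed tick rate, broadcast snapshots.

// Illustrative authoritative server loop: inputs in, fixed-rate simulation,
// periodic state snapshots out. Types and functions here are hypothetical.
#include <chrono>
#include <deque>
#include <thread>

struct PlayerInput { int playerId; int button; bool pressed; };
struct GameState   { /* positions, scores, ... */ };

std::deque<PlayerInput> pendingInputs;   // filled by the networking layer (needs locking in real code)
GameState state;

void applyInput(GameState&, const PlayerInput&) { /* validate + apply */ }
void step(GameState&, double dtSeconds)         { /* physics / game rules */ }
void broadcastSnapshot(const GameState&)        { /* send to all clients in the lobby */ }

int main() {
    using clock = std::chrono::steady_clock;
    const auto tick = std::chrono::milliseconds(50);   // 20 ticks/s is plenty for many games
    auto next = clock::now();

    for (;;) {
        // Drain whatever input arrived since the last tick.
        while (!pendingInputs.empty()) {
            applyInput(state, pendingInputs.front());
            pendingInputs.pop_front();
        }
        step(state, std::chrono::duration<double>(tick).count());
        broadcastSnapshot(state);

        next += tick;
        std::this_thread::sleep_until(next);
    }
}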
Just read this article, it explains it better than I ever could: https://developer.valvesoftware.com/wiki/Source_Multiplayer_Networking
But yeah, it will definitely take a lot more power than a normal website.
However, the individual lobbies don't need to communicate with each other, so you can very easily just run 3 servers in parallel.
You can expect Node to perform pretty well in comparison to most alternatives because it handles concurrent connections very well. Just make sure you don't run CPU-heavy work, because that will pretty much kill Node (if you just receive status updates, use basic math to validate them, and send them out again, it's a perfect choice).
Hope that helps
The biggest issue you will run into is that Node is not built to handle applications that are as latency-sensitive as real-time simulations.
HTTP is a bloated protocol, and it's text-based. It will not work for the type of games most people would consider multiplayer. It may work for games in which high latency is not an issue, such as Facebook games.

How are client-side security vulnerabilities generally discovered?

I mean in operating systems or their applications. The only way I can think of is to examine binaries for the use of dangerous functions like strcpy() and then try to exploit those. Though with compiler improvements like Visual Studio's /GS switch, this possibility should mostly be a thing of the past. Or am I mistaken?
What other ways do people use to find vulnerabilities? Just load your target in a debugger, then send unexpected input and see what happens? This seems like a long and tedious process.
Could anyone recommend some good books or websites on this subject?
Thanks in advance.
There are two major issues involved with "client-side security".
The most common client exploited today is the browser, in the form of "drive-by downloads". Most often memory-corruption vulnerabilities are to blame. ActiveX COM objects have been a common path on Windows systems, and AxMan is a good ActiveX fuzzer.
In terms of memory-protection systems, /GS is a canary and it isn't the be-all and end-all for stopping buffer overflows. It only aims to protect against stack-based overflows that attempt to overwrite the return address and control the EIP. NX zones and canaries are good things, but ASLR can be a whole lot better at stopping memory-corruption exploits, and not all ASLR implementations are made equally secure. Even with all three of these systems you're still going to get hacked. IE 8 running on Windows 7 had all of this, and it was one of the first to be hacked at Pwn2Own, and here is how they did it: it involved chaining together a heap overflow and a dangling-pointer vulnerability.
The other problem with "client-side security" is CWE-602: Client-Side Enforcement of Server-Side Security. These issues are created when the server side trusts the client with secret resources (like passwords) or to report on sensitive information, such as the player's score in a Flash game.
The best way to look for client-side issues is by looking at the traffic. Wireshark is best for non-browser client/server protocols, while TamperData is by far the best tool you can use for browser-based platforms such as Flash and JavaScript. Each case is going to be different: unlike buffer overflows, where it's easy to see the process crash, client-side trust issues are all about context, and it takes a skilled human looking at the network traffic to figure out the problem.
Sometimes foolish programmers will hardcode a password into their application. It's trivial to decompile the app to obtain the data. Flash decompiling is very clean, and you'll even get full variable names and code comments. Another option is using a debugger like OllyDbg to try to find the data in memory. IDA Pro is the best disassembler for C/C++ applications.
Writing Secure Code, 2nd edition, includes a bit about threat modeling and testing, and a lot more.

Which programming language suits critical web application development?

According to this page, it seems that Perl, PHP, and Python are 50 times slower than C/C++/Java.
Thus, I think Perl, PHP, and Python could not handle critical applications (such as >100 million users, >xx million requests every second) well. But exceptions exist, e.g. Facebook (it is said Facebook is written entirely in PHP) and Wikipedia. Moreover, I heard Google uses Python extensively.
So why? Does faster hardware fill the big speed gap between C/C++/Java and Perl/PHP/Python?
Thanks.
Computational code is the least of my concerns in most heavy usage web applications.
The bottlenecks in a typical high-availability web application are (not necessarily in this order, but most likely):
Database (IO and CPU)
File IO
Network Bandwidth
Memory on the Application Server
Your Java / C++ / PHP / Python code
Your main concerns to make your application scalable are:
Reduce access to the database (caching, with clustering in mind, smart querying); see the caching sketch after this list
Distribute your application (clustering)
Eliminate useless synchronization locks between threads (see commons-pool 1.3)
Create the correct DB indexes, data model, and replication to support many users
Reduce the size of your responses, using incremental updates (AJAX)
Only after all of the above are implemented, optimize your code
Please feel free to add more to the list if I missed something
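To illustrate the first point, here is a minimal sketch of a read-through cache with a TTL, shown in C++ for concreteness (loadFromDatabase() is a hypothetical stand-in; a shared cache such as memcached does the same job across a cluster):

// Tiny read-through cache with a TTL, so repeated requests for the same
// key skip the database. loadFromDatabase() is a hypothetical stand-in.
#include <chrono>
#include <map>
#include <string>

using Clock = std::chrono::steady_clock;

struct CachedValue {
    std::string value;
    Clock::time_point expires;
};

// Stand-in for the real (expensive) database query.
std::string loadFromDatabase(const std::string& key) { return "value-for-" + key; }

class QueryCache {
public:
    explicit QueryCache(std::chrono::seconds ttl) : ttl_(ttl) {}

    std::string get(const std::string& key) {
        auto now = Clock::now();
        auto it = entries_.find(key);
        if (it != entries_.end() && it->second.expires > now)
            return it->second.value;                     // cache hit
        std::string fresh = loadFromDatabase(key);       // cache miss
        entries_[key] = {fresh, now + ttl_};
        return fresh;
    }

private:
    std::chrono::seconds ttl_;
    std::map<std::string, CachedValue> entries_;
};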
The page you are linking to only tells half the truth. Of course native languages are faster than dynamic ones, but this is only critical for applications with high computing requirements. For most web applications it is not so important: a web request is usually served quickly. It is more important to have an efficient framework that manages resources properly and starts new threads to serve requests quickly. Also, timing behaviour is not the only critical aspect; reliable and error-free applications are probably easier to achieve with dynamic languages.
And no, faster hardware isn't a solution. In fact Google is famous for using a cluster of inexpensive machines.
(such as >100 million user,>xx million request every second)
To achieve that sort of performance, you are going to HAVE to design and implement the web site / application as a scalable multi-tier system with replication across (probably) all tiers. At this point, the fact that one programming language is faster / slower than another probably only affects the number of machines you need in your processor farm. The design of the system architecture is far more significant.
There is no JIT compiler in PHP that compiles the code into machine code.
Another big reason is PHP's dynamic typing; a dynamically typed language is always going to be slower.
Click below and read more:
What makes PHP slower than Java or C#?
C is easily the fastest language out there. It's so fast we write other languages in it. Nobody seriously writes web sites in C. Why? It's very easy to screw up in C in ways that are very difficult to detect, and it does almost nothing to help you. In short, it eats programmers and generates bugs.
Building a robust, fast application is not about picking the fastest language; it's about A) maintainability and B) scalability.
Maintainability means it doesn't have a lot of bugs. It means you can quickly add new features and modify existing ones. You want a language that does as much of the work as possible for you and doesn't get in the way. This is why things like Perl, Python, PHP and Ruby are so popular. They were all written with the programmer's convenience in mind over raw performance or tidiness. C was written for raw performance. Java was written for conceptual tidiness.
Scalability means you can go from 10 users to 10,000 users without rewriting the whole thing. That used to mean you wrote the tightest code you could manage, but highly optimized code is usually hard-to-maintain code. It usually means doing things for the benefit of the computer, not the human and the business. That sacrifices maintainability, and you have to tell your boss it's going to take 3 months to add a new feature.
Scalability these days is mostly achieved by throwing hardware at it and parallelizing. How many processes and processors and machines can you farm your work out to? If you can achieve that, you can just fire up another cheap cloud computer as you need it. Of course you're going to want to optimize some, but at this scale you get so much more out of implementing a better algorithm than tightening up your code.
For example, I took a sluggish PHP app that was struggling to handle 50 users at a time and switched from Apache with mod_php to lighttpd with load-balanced, remote FastCGI processes, allowing parallelization with a minimum of code change. Some basic profiling revealed that the PHP framework they had used to prototype was dog slow, so it was stripped out. Profiling also suggested a few indexes to make the database queries run faster. The end result was a system that could handle thousands of users, and more capacity could be added as needed, while leaving most of the code implementing the business logic untouched. It took a few weeks, and I don't really know PHP well.
It may be beneficial to reimplement small, sharp pieces in a very fast language, but usually that's already been done for you in the form of an optimized library or tool. For example, your web server. For the complexity and ever-changing needs of business logic the important thing is ease of maintenance and how good your programmers are.
You will find that most of the web is written in PHP, Perl, and Python because they are easy to write in, with small, sharp bits written in things like C, Java, and exotics like Scala (for example, Twitter). Wikia, for example, is a modified MediaWiki, which is written in PHP but performs well (among other reasons) thanks to a heroic amount of caching.
Google uses Python for GAE, and Windows Azure provides PHP. The LAMP architecture is great for application scalability.
I also think that the programming language is not that important regarding performance. The most important thing is to look at the architecture of your app.
I hope it helps
To serve a web page, you need to:
Receive and parse the request.
Decide what you wish to do with the request.
Read/write persistent data (database, cache, file system)
Output HTML data.
The "speed" of the server side language only applies to steps two and four. Given that most scripts strive to keep step 2 as short as possible, and that most web languages (including PHP) optimize step 4 as much as they can, in any serious web site most of the request processing time will be spent in step 3.
And the time spent on step 3 is independent of the server-side language you use ... unless you implement your own database and distributed cache.
For PHP, there are a lot of things you can do to increase performance. For example:
PHP accelerators
Caching queries
Optimizing queries
Using a profiler to find the slower parts and optimizing them
These things would certainly help reduce the gap with lower-level languages. So, to answer your question, there are things you can do inside the code to optimize it and make it run faster.
I agree with luc. It's the architecture that really matters, not the programming language.

Real-time browser game server

I'm mostly looking for setup advice and pointers on how to go about this. I'll explain in as much detail as I can and also note possible approaches that may be plausible.
The aim is to create a real-time browser game. The best method I have found for my needs is to use "long polling" with AJAX, which basically sets up a request with the server that "hangs there" until the server has something to send, then re-establishes the connection upon receipt to wait for more data. For my purposes this will handle a chat system as well as character movement, i.e. if a player enters the same area, the clients there will receive a response to inform them and thus update the browser client to show this.
The above is relatively easy to implement and I have already made a test case for it; however, I want to improve on it. On the server side it runs a loop for X amount of time before it automatically times out and sends back an empty string, so another connection can be made; this is to prevent infinite loops using up resources in cases where they shouldn't. Instead of querying the database on each loop cycle for messages that need sending to the client (which I believe would be expensive), I use flat files: if a file has a modified timestamp greater than the last message sent to the client, then there is something new to send. However, I believe this would also be expensive (though not as much as using a MySQL database?) when done a couple of times per second.
My thought process was to have a C++ program (for speed) constantly running, and use that for very fast in-memory lookups of new messages and so forth. This would also give me the added bonus of being able to have bots within the game that the server can control, for a more real-time feel/approach; however, I have no clue if this is even possible, and my searches on Google have been fruitless.
The approach I would most love to take is to continue to use PHP to do the rendering and control of the page etc., and have the AJAX requests go to the C++ application (which will always be running) that can handle all the real-time aspects.
CGI defeats the purpose of the above approach, as it creates a new instance of the application on each request, which is both slow and exactly what I do not want. I have PHP for that and don't want to swap one perfectly working language for another that would supposedly be better suited; PHP, however (to my knowledge), can't keep things in memory (RAM) between requests and so forth.
Another approach I have thought about is to use PHP sockets to connect to the C++ application, though I have no idea how feasible this may be. The C++ application would basically only need to control the bots (AI) and the chat system messages. I have absolutely no idea how to go about handling bots via PHP.
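Roughly, what I have in mind for the C++ side is something along these lines (just a sketch of the idea; the port and "protocol" are made up, and error handling is omitted): a tiny TCP service on localhost that a PHP page could talk to with fsockopen()/fwrite()/fread().

// Sketch: minimal blocking TCP service the PHP front end could connect to.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <string>

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   // only local PHP may connect
    addr.sin_port = htons(9000);                     // arbitrary port
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    listen(listener, 16);

    for (;;) {
        int client = accept(listener, nullptr, nullptr);
        if (client < 0) continue;

        char buf[1024];
        ssize_t n = recv(client, buf, sizeof buf - 1, 0);
        if (n > 0) {
            buf[n] = '\0';
            // Look up new chat messages / bot state in memory based on the
            // request in buf, then reply; here we just echo for illustration.
            std::string reply = std::string("OK ") + buf;
            send(client, reply.c_str(), reply.size(), 0);
        }
        close(client);
    }
}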
I hope this fully explains my intentions and goals, so if anyone has any pointers or advice, please reply and help me out; it would be very much appreciated. If you need any extra information (in case I didn't cover something, or didn't cover it very well), I'll be happy to try to explain better.
How fast do the reactions need to be? For anything approaching real-time action games, AJAX/Comet is going to be much too slow. The overhead is also really depressing.
The way forward for that kind of thing will probably be WebSocket, with a custom server on the backend. But I don't think that means you need to resort to C[++] for this; the bottleneck is most likely going to be the network and not server processor power.
I'm using a Python SocketServer with a trivial message replication system (all the game logic in my case is on the client side, with some complicated JavaScript maintaining a consistent game world in the face of lag), but even for a more complex server side I think a scripting language will probably be just fine.
WebSocket isn't ready yet; there are no mainstream browser implementations. In the meantime I'm using a Flash Socket backup that emulates the WebSocket interface. Flash Sockets have their own problems in that they fail to negotiate proxies, but they are fast and hopefully the need for them will diminish as WebSocket arrives properly.
Reading your post sets alarm bells ringing.
How familiar are you with multi-threaded code? With C++? If the answer is "not very", then I fear you might be biting off quite a large chunk. Why not take advantage of some existing (tried and tested) Comet server implementations rather than this bare-bones approach? Whatever application you have in mind, it should be quite separate from the comms implementation.
As someone who has implemented such a server, I can tell you that it will take many design iterations and a heck of a long time to get right. Testing such a product realistically is also a very tricky process.
