How to protect my programs? [closed] - drm

Let's say I have designed a very important system, and this system costs thousands of dollars. I want to protect my system with a serial number, since I know crackers will try to edit the binary code to bypass the serial-number check.
I have read about applying a checksum function over my binary code and checking whether the value has changed, but again, we are talking about a condition a cracker can bypass by editing the code.
My question is: what is the most widely used technique to protect important programs?

I have yet to see a "protected" digital product that was not cracked pretty quickly after its publication (or, in some cases, before its publication). Sorry, but that's the reality. You have to earn your revenue by making a good product. Most of those who want to use it and can afford it will pay.
There will be a few dickheads, but that's life. You'd better be kind to the legitimate users of your software and not bully them with weird copy-protection attempts that don't work anyway.

If your app works offline, perform whatever checks you do (checksums, serial-code validity, etc.) often, repeating the verification code in many routines of your software. Obfuscate your code to make reverse engineering more difficult. If you have the option, implement an online check: keep part of the core functionality of your app on your server, and serve it only to installations whose license keys you have validated server-side. Associate the license key with some unique identifier of the hardware the app runs on. If you check online, keep statistics on the IPs that make verification requests: if you see multiple IPs trying to verify the same license key, contact the buyer and approve a list of IPs they usually log on from, blacklisting any others until they specifically request access by mail or phone.
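To make that concrete, here is a minimal Python sketch of an online check bound to a hardware identifier. The server URL, the "OK" response format, and the use of the MAC address as the hardware ID are assumptions for illustration; real schemes combine several hardware sources and a proper protocol.

```python
# Minimal sketch of an online license check bound to a hardware ID.
# The server URL and response format are hypothetical placeholders.
import hashlib
import urllib.parse
import urllib.request
import uuid

def hardware_id() -> str:
    """Derive a stable identifier from the machine's MAC address.
    Real products combine several sources (disk serial, CPU ID, ...)."""
    return hashlib.sha256(uuid.getnode().to_bytes(6, "big")).hexdigest()

def verify_license(license_key: str) -> bool:
    """Ask the vendor's server whether this key is valid on this machine.
    Server-side, the caller's IP can be logged to spot one key being
    verified from many different addresses, as suggested above."""
    params = urllib.parse.urlencode({"key": license_key,
                                     "hwid": hardware_id()}).encode()
    try:
        with urllib.request.urlopen("https://licensing.example.com/verify",
                                    params, timeout=5) as resp:
            return resp.read().strip() == b"OK"
    except OSError:
        return False  # offline or unreachable: deny (or degrade gracefully)

# Call verify_license() from several unrelated routines, not just once at
# startup, so that patching a single check is not enough to crack the app.
```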

The most used technique is serial numbers. But your customers will have access to the code, so they will be able to bypass your serial number check, no matter how much work you put into obfuscating it.
However, if you can provide your software as a subscription-based or one-time-payment web application, then people will not be able to do this. Whether this is feasible or not depends on the type of application you're writing.

I would always recommend building custom software protection before applying any kind of commercial protector, such as a packer.
In any case, a serial validation and a checksum check alone are not going to keep crackers away.
I recommend you visit my blog, www.anti-reversing.com, and take a quick look at the anti-piracy tips & tricks page to get an idea of what I am talking about.

Related

Is it possible to prevent a man-in-the-browser attack at the server with a hardware device? [closed]

Recently I found a hardware device that can prevent bot attacks by changing HTML DOM elements on the fly. The details are mentioned here.
The HTML input elements' id and name attributes, as well as the form element's action, are replaced with random strings before the page is sent to the client. After the client submits, the hardware device replaces those values with the originals. So the server code remains unchanged, and bots cannot rely on fixed input names and ids.
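To illustrate the mechanism (a toy sketch, not the product's code), here is how such a rewriting proxy could work in Python; the regex-based rewriting and the field names are invented for illustration.

```python
# Toy sketch of the field-renaming idea: rewrite form field names to
# random tokens on the way out, map them back on the way in.
import re
import secrets

def randomize_fields(html: str) -> tuple[str, dict]:
    """Replace each name="..." attribute with a random token and keep
    the mapping so the submission can be translated back later."""
    mapping = {}
    def swap(match):
        token = secrets.token_hex(8)
        mapping[token] = match.group(1)
        return f'name="{token}"'
    return re.sub(r'name="([^"]+)"', swap, html), mapping

def restore_fields(form_data: dict, mapping: dict) -> dict:
    """Translate randomized field names back to the originals before
    handing the request to the real server."""
    return {mapping.get(k, k): v for k, v in form_data.items()}

page, mapping = randomize_fields('<input name="account"><input name="amount">')
print(page)  # the names are now random tokens
token = next(iter(mapping))
print(restore_fields({token: "12345"}, mapping))  # {'account': '12345'}
```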
That was the basic idea, BUT they also claim that this product can solve the man-in-the-browser attack.
From http://techxplore.com/news/2014-01-world-botwall.html:
Shape Security claims that the added code to a web site won't cause any noticeable delays to the user interface (or how it appears) and that it works against other types of attacks as well, such as account takeover, and man-in-the-browser. They note that their approach works because it deflects attacks in real time whereas code for botnets is changed only when it installs (to change its signature).
Theoretically, is it possible for someone to prevent a man-in-the-browser attack at the server?
Theoretically, is it possible for someone to prevent a man-in-the-browser attack at the server?
Nope. Clearly the compromised client can do anything a real user can.
Making your pages more resistant to automation is potentially an arms race of updates and countermeasures. Obfuscation like this can at best make automating your site annoying enough that it's not worth it to the attacker; that is, you try to make yourself no longer the 'low-hanging fruit'.
They note that their approach works because it deflects attacks in real time whereas code for botnets is changed only when it installs (to change its signature).
This seems pretty meaningless. Bots naturally can update their own code. Indeed banking trojans commonly update themselves to work around changes to account login pages. Unless the service includes live updates pushed out to the filter boxes to work around these updates, you still don't win.
(Such an Automation Arms Race As A Service would be an interesting proposition. However, I would be worried about new obfuscation features breaking your applications. For example, imagine what would happen with the noddy form-field-renaming example on the linked site if your own client-side scripts relied on those names. And if your whole site were a client-side single-page app, this would have no effect at all.)

How does Yodlee work? [closed]

From what I understand, you have to enter all of your usernames and passwords into Mint, so I assume they are actually logging into your bank account and scraping the resulting screen to put this data into a form that Mint and others use.
How do they actually simulate the keypresses and mouse clicks? I assume banks don't like it when they do this - how do their scrapers avoid detection?
I'm pretty sure they don't simulate clicks, etc. In the end, any data that ends up on a user's page is transmitted in a response to a request. If you can figure out how to construct a valid request and then how to parse the response, you'll have the data you want.
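As a rough sketch of that request/response approach in Python (using the third-party requests and BeautifulSoup packages), where the bank URL, form fields, and page structure are all hypothetical:

```python
# Sketch of the request/response approach: log in with plain HTTP
# requests and parse the balance out of the returned HTML.
# URL, form fields, and markup are hypothetical.
import requests                # pip install requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

session = requests.Session()   # keeps the login cookies across requests
session.post("https://bank.example.com/login",
             data={"username": "alice", "password": "secret"})

html = session.get("https://bank.example.com/accounts").text
soup = BeautifulSoup(html, "html.parser")
balance = soup.select_one("span.balance").text  # breaks whenever the
print(balance)                                  # bank changes its markup
```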
As far as I could gather after using Yodlee for quite a while, they deal with sites in two major ways: the sites they have official agreements to work with and the sites they don't have official agreements with. For the first category of sites they, most often, have agreed upon APIs for getting the data. For the sites in the second category they reverse-engineer layer 7 communication protocols and data structures (a.k.a. screen/html scraping).
The way I understand it, Yodlee uses the OFX specification to access banks' financial information.
http://www.ofx.net/
For the banks that don't implement OFX, they use custom screen scrapers, which must constantly be updated when banks change the information that's displayed on their site.
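For a feel of what the OFX route involves instead, here is a rough sketch of an OFX 1.x signon request posted over HTTPS; the endpoint, financial-institution details, and credentials are placeholders, and a real integration follows the full specification at ofx.net.

```python
# Rough shape of an OFX 1.x signon request (SGML-style; unclosed tags
# are legal in OFX 1.x). Endpoint, FI details, and credentials are
# placeholders for illustration.
import requests  # pip install requests

OFX_SIGNON = """OFXHEADER:100
DATA:OFXSGML
VERSION:102
SECURITY:NONE
ENCODING:USASCII
CHARSET:1252
COMPRESSION:NONE
OLDFILEUID:NONE
NEWFILEUID:NONE

<OFX>
<SIGNONMSGSRQV1><SONRQ>
<DTCLIENT>20130101120000
<USERID>alice
<USERPASS>secret
<LANGUAGE>ENG
<FI><ORG>EXAMPLEBANK<FID>9999</FI>
<APPID>QWIN
<APPVER>2300
</SONRQ></SIGNONMSGSRQV1>
</OFX>"""

resp = requests.post("https://ofx.examplebank.com/ofx",
                     data=OFX_SIGNON,
                     headers={"Content-Type": "application/x-ofx"})
print(resp.text)  # OFX response carrying the signon status
```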
I don't know Yodlee, so I simply assume it's like "sofortüberweisung.de", where you give a third party your bank login data (and, depending on what you do, even a valid TAN), and thus trust them not to abuse it, while also breaking your bank's security regulations ("NEVER GIVE OUT YOUR PIN/TAN").
They most likely simulate what a browser would do. As web-based banking interfaces are usually just HTML/JavaScript everyone can look at the client-side code and do whatever it does with a custom program. Since those actions are not done in a malicious way, actions which require e.g. a TAN or a CAPTCHA to be solved can be simply forwarded to the legit user who will then enter the necessary TAN or solve the CAPTCHA.
Needless to say, it is really bad to use services like that. While they most likely won't do anything bad, you cannot know that for sure. And your bank is quite right not to refund you anything if you ever get scammed through such a service.
Another solution, which would be perfectly safe (as long as you are not concerned about a third party knowing your financial status, etc.), would be for the Yodlee company to make contracts with major banks allowing them to access your data after you've authorized it (you can already do this on pages like Twitter; I'd never do that for banking, but technically it wouldn't be hard to realize). That would be clean and secure, as it would involve neither screen scraping nor customers entering their banking login data anywhere but on their bank's website.
But I believe no bank does something like that, and in my opinion that's good: there are far too many people out there who are far too trusting, and we all know how much information they give out on Facebook & Co. Now imagine a Facebook<->bank integration... M. Zuckerberg's wet dreams, which hopefully never come true. And even if it's not Facebook, there will always be companies who want people's personal data, and enough people giving it out, especially if it's easy and looks secure ("I have to confirm it on MY BANK's page, so it MUST be safe; it's supported by MY BANK").

Protecting source code from theft during development [closed]

Is there any way to protect my code during development so that if a developer leaves my company they are unable to access files in my project?
This is especially important with TFS where the project is downloaded locally, cached, and available for offline use. Ideally the code would be unreadable if they did not have a valid Active Directory user ID.
Even if this idea is not possible, I'd like to learn of any practical deterrent you can think of...
You have to extend some form of trust to your developers. If you can't trust them not to take source code with them, how can you trust them not to build back doors and the like into your systems?
Moreover, if they're going to work on code, they're going to need access to it, and if they get access to it they can almost certainly copy it. You can try to limit it, but it's you trying to outthink in advance a group of people who only need to find one mistake you made. Besides, overtly distrusting your developers isn't going to help you anyway.
Are there actual trade secrets built into your code? If so, you might want to rethink that. If not, how much harm will it do in somebody else's possession? They can't legally use it, and the developers that leave will often be able to write something similar anyway.
For this, you want legal protection, not technical.
Assuming they can read the code and compile it while they are there, there's not a lot you can do (unless you ban USB sticks and CD writers, scan all their email, etc., and even then they'd find a way to defeat that).
Cover it in the employment contract, and make it clear that if the code turns up elsewhere there will be legal action.
(I've had this happen to me in a past life: an employee did take the code with him. We knew because of a mistake he made doing it, and we sent a letter from our lawyers pointing out the consequences of him revealing the code to anyone else. It seemed to work.)
If you are afraid of losing the code as a whole (rather than an employee copy-pasting part of it)...
With your source code management system (you have one, right?), you can probably set up hooks so that when a user gets the code, part of it is a binary file dedicated to only that user and necessary for the code to compile and run correctly. Pushed to the extreme, that means requiring the right hardware (TPM, hardware keys, etc.).
So after you have dealt with all the paperwork, as Paul suggests, if the code ever leaks anywhere, you can track who is at fault (and knowing that would probably deter anybody from even trying).
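A minimal sketch of that tracking idea, assuming a checkout or build hook stamps each working copy; the file name, the HMAC scheme, and the secret are invented for illustration.

```python
# Sketch of the watermarking idea: a checkout/build hook writes a file
# that is unique per developer and required by the build. If the tree
# leaks, the watermark identifies whose checkout it came from.
import getpass
import hashlib
import hmac

# In reality this secret lives on the build server, not in the repo.
COMPANY_SECRET = b"kept-on-the-build-server"

def write_watermark(path: str = "build_token.bin") -> None:
    user = getpass.getuser()
    tag = hmac.new(COMPANY_SECRET, user.encode(), hashlib.sha256).digest()
    with open(path, "wb") as f:
        f.write(user.encode() + b"\n" + tag)

write_watermark()  # the build refuses to run without this file,
                   # and any leaked copy carries the developer's tag
```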
All things considered... no (especially if the project is stored locally as you mentioned). If a developer has access to source code, they have the ability to steal source code. IANAL, but to deter this sort of thing, you need a lawyer to draft up a non-disclosure agreement (NDA) and get your developers to sign it.
From Wikipedia, an NDA is:
a legal contract between at least two parties that outlines confidential material, knowledge, or information that the parties wish to share with one another for certain purposes, but wish to restrict access to by third parties. It is a contract through which the parties agree not to disclose information covered by the agreement.
This really isn't a crypto question, but there is an answer.
1) Limit developers' access, giving them only the source they need to get the job done. This is the security principle of "least privilege". Store binaries of libraries, or the executables themselves, in source control if need be.
2) Make all developers sign a non-disclosure contract, and hire developers that you can sue. For instance, defending this contract in India is more difficult than defending it in Indiana.
There are ways to secure your entire development environment so that programmers cannot keep or take souvenirs of what they are working on. Take a look at www.chaperon-secure.net for possible solutions, from secure development environments to vaulted source code repositories.
You can somewhat reduce the risk of code theft if your application is cleanly built from components/modules/plug-ins. A dev would only be given code access to the components they work on, and compiled code for the rest of the application. I am, of course, assuming that it's only worthwhile to steal the application as a whole, not a handful of components.
On the other hand, you would be surprised: code itself is not always as valuable as you would like to think. If there is no sensitive IP in the code that can be directly resold, is your dev really going to recompile it and go head-to-head with you with their own application?

How do you prevent hired developers from stealing code? [closed]

I'm in the process of opening up a company that will eventually hire 2-5 developers to work on a large web app.
My main concern is that one or more developers could steal the code. I could make them sign contracts against this type of thing, but I live in a country where the law is "bendable".
Is my only option to lock them up in a room without internet access and USB ports?
I'd love to know how others have solved this problem.
Don't hire people you can't trust.
Break the app into sections and only let people work on a subset of the app, never getting access to the whole thing.
Make it worth their while - you're opening a company, hire people and give them some stock options. Make sure it's more attractive for them to make you succeed than otherwise.
How about keeping them all happy and showing that you appreciate their work?
You may find that you think your source code is the valuable part of your business, but you can always build that again. Your real advantage over your competitors is usually in the people you hire, and in the business relationships that you establish in the course of naturally doing business.
My suggestion is not technical but social: Make them feel good.
Most human beings have a moral base that prevents them from hurting other people who have treated them with respect and generosity.
There's a slim chance you'll wind up hiring a psychopath, in which case this approach won't work -- but then, it's likely to be the least of your worries.
The only thing that occurs to me is to make them sign a contract stating explicitly that if they share any code outside the project environment, they commit to paying you a large amount of money. But there's no guarantee they won't do it anyway...
You can create a virtual environment (a virtual machine) with a limited internet connection (only to specific servers: the git/svn server, the database server, etc.) and no copy/paste.
This virtual machine would be a standard environment with common developer tools.
At the office, a developer would connect remotely to the virtual machine and start developing, without being able to steal the code.
Of course, they could print the screen or retype the code on another computer, but it's still very hard to steal.
There are many encryption tools available to encrypt the code. Here is an example: http://www.codeeclipse.com/step1.php
In other words, you can hide the code of one developer (one module) from another developer, and no single developer will be able to take the whole codebase if you follow this approach.

Are there best practices for testing security in an Agile development shop? [closed]

Regarding Agile development, what are the best practices for testing security per release?
If it is a monthly release, are there shops doing pen-tests every month?
What's your application domain? It depends.
Since you used the word "Agile", I'm guessing it's a web app. I have a nice easy answer for you.
Go buy a copy of Burp Suite (it's the #1 Google result for "burp" --- a sure endorsement!); it'll cost you 99 EUR, or about $180 USD, or $98 Obama Dollars if you wait until November.
Burp works as a web proxy. You browse through your web app using Firefox or IE or whatever, and it collects all the hits you generate. These hits get fed to a feature called "Intruder", which is a web fuzzer. Intruder will figure out all the parameters you provide to each one of your query handlers. It will then try crazy values for each parameter, including SQL, filesystem, and HTML metacharacters. On a typical complex form post, this is going to generate about 1500 hits, which you'll look through to identify scary --- or, more importantly in an Agile context, new --- error responses.
Fuzzing every query handler in your web app at each release iteration is the #1 thing you can do to improve application security without instituting a formal "SDLC" and adding headcount. Beyond that, review your code for the major web app security hot spots (a short sketch of a few of these follows the list):
Use only parameterized prepared SQL statements; don't ever simply concatenate strings and feed them to your database handle.
Filter all inputs to a white list of known good characters (alnum, basic punctuation), and, more importantly, output filter data from your query results to "neutralize" HTML metacharacters to HTML entities (quot, lt, gt, etc).
Use long random hard-to-guess identifiers anywhere you're currently using simple integer row IDs in query parameters, and make sure user X can't see user Y's data just by guessing those identifiers.
Test every query handler in your application to ensure that they function only when a valid, logged-on session cookie is presented.
Turn on the XSRF protection in your web stack, which will generate hidden form token parameters on all your rendered forms, to prevent attackers from creating malicious links that will submit forms for unsuspecting users.
Use bcrypt --- and nothing else --- to store hashed passwords.
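Here is a short Python sketch of three items from that list (parameterized SQL, output encoding, and bcrypt), using the standard sqlite3 and html modules plus the third-party bcrypt package:

```python
# Sketch: parameterized SQL, HTML output encoding, bcrypt hashing.
import html
import sqlite3

import bcrypt  # pip install bcrypt

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw_hash BLOB)")

# 1. Parameterized statement: the driver keeps data out of the SQL text.
name = "alice'; DROP TABLE users; --"
pw_hash = bcrypt.hashpw(b"hunter2", bcrypt.gensalt())
conn.execute("INSERT INTO users (name, pw_hash) VALUES (?, ?)",
             (name, pw_hash))

# 2. Output-encode anything from the database before rendering as HTML.
row = conn.execute("SELECT name FROM users").fetchone()
print(html.escape(row[0]))  # quotes, <, >, & become entities

# 3. Verify a password against the stored bcrypt hash.
stored = conn.execute("SELECT pw_hash FROM users").fetchone()[0]
print(bcrypt.checkpw(b"hunter2", stored))  # True
```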
I'm no expert on Agile development, but I would imagine that integrating some basic automated pen-test software into your build cycle would be a good start. I have seen several software packages out there that will do basic testing and are well suited for automation.
I'm not a security expert, but I think the most important thing you should be aware of, before testing security, is what you are trying to protect. Only if you know what you are trying to protect can you do a proper analysis of your security measures, and only then can you start testing those implemented measures.
Very abstract, I know. However, I think it should be the first step of every security audit.
Unit testing, defensive programming and lots of logs
Unit testing
Make sure you unit test as early as possible (e.g. that the password is encrypted before sending, that the SSL tunnel is working, etc.). This helps prevent your programmers from accidentally making the program insecure.
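For example, an early test might assert that credentials never leave the client in plaintext; build_login_request below is a hypothetical stand-in for your real client code.

```python
# Early security unit test: the password must never appear in plaintext
# in an outgoing request. build_login_request is a hypothetical stand-in.
import hashlib
import unittest

def build_login_request(user: str, password: str) -> dict:
    return {"user": user,
            "pw": hashlib.sha256(password.encode()).hexdigest()}

class TestLoginRequest(unittest.TestCase):
    def test_password_not_sent_in_plaintext(self):
        req = build_login_request("alice", "hunter2")
        self.assertNotIn("hunter2", str(req))

if __name__ == "__main__":
    unittest.main()
```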
Defensive programming
I personally call this paranoid programming, but Wikipedia is never wrong (sarcasm). Basically, you add tests to your functions that check all the inputs (a sketch follows the list):
is the user's cookie valid?
is he still logged in?
are the function's parameters protected against SQL injection? (even though you know the inputs are generated by your own functions, you test them anyway)
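A sketch of what those paranoid checks might look like in Python; the session store and the validation rules are invented for illustration.

```python
# Paranoid style: every entry point re-validates its inputs, even when
# they come from trusted code. Session store and rules are invented.
import re

SESSIONS = {"abc123": "alice"}  # stand-in for a real session store

def get_post(session_id: str, post_id: str) -> str:
    # Is the session cookie valid and still logged in?
    user = SESSIONS.get(session_id)
    if user is None:
        raise PermissionError("not logged in")
    # Does the parameter look like we expect? (a shape check guards
    # against injection even for internally generated values)
    if not re.fullmatch(r"\d{1,10}", post_id):
        raise ValueError(f"bad post id: {post_id!r}")
    return f"post {post_id} fetched for {user}"

print(get_post("abc123", "42"))
```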
Logging
Log everything like crazy. It's easier to remove logs than to add them. A user logged in? Log it. A user hit a 404? Log it. The admin edited/deleted a post? Log it. Someone was able to access a restricted page? Log it.
Don't be surprised if your log file reaches 15+ MB during your development phase. During beta, you can decide which logs to remove. If you want, you can add a flag that decides when a certain event is logged.
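A minimal sketch using Python's standard logging module, with a flag for the more verbose events as suggested above; the event names are illustrative.

```python
# "Log everything" with a flag to silence verbose events after beta.
import logging

LOG_VERBOSE = True  # flip off once the development phase is over

logging.basicConfig(filename="app.log", level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("audit")

log.info("user %s logged in from %s", "alice", "203.0.113.7")
log.warning("404 for %s requested by %s", "/admin/secret", "alice")
if LOG_VERBOSE:
    log.debug("post %d edited by admin %s", 42, "bob")
```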
