Who is responsible for security flaws?

If you are the programmer of an app, with potentially costly ramifications if its security is compromised, are you responsible if anything goes wrong (e.g. data is leaked)?
Does it depend on whether you are the manager of the project?

If you're ever in this position as a programmer - costly ramifications if an app has a security flaw - you should explicitly have a security breach plan. Get it in writing. Talk about who loses jobs.
I say this for two reasons. One, because it's true - everyone should do this. And two, if everyone knew precisely the employment consequences of a breach, people would code more securely.
And one last point - if there are big ramifications, security should never be one person's responsibility.

Morally, you are. Legally, you usually aren't. Be careful what you sign, however.

That will depend entirely on the legal jurisdiction and the contract between you and the customer (and any intermediaries, such as an employer, if you're not doing this as an individual).
This is why most EULAs state that there is no warranty, etc.

From a project manager's point of view, I would say it is the programmer's fault if security is compromised, since the project manager's area of expertise does not necessarily lie in programming or programming security. The programmer should be experienced enough to know such things if he decides to take on such a task, or at least educate himself.
As I see it, things like security leaks often happen because of bugs - bugs that could have been found with thorough testing. The fact is that if it is a one-person job - the person who programs is also the manager - one person cannot think of everything, and the chance that you screw up is even bigger. But in the end, what counts is the legal contract.

The key idea is to have so many people involved in the project (managers, programmers, testers) that responsibility gets so diffused that no one can actually be fully blamed :)

No, that responsibility would be on your QA department. For really sensitive applications, they should get a third-party certification that guarantees the integrity of your application, or at least makes a thorough report on how and why it might fail.

In some organizations there are teams of people specializing in security inspections of applications from different perspectives.
For those organizations that do not have such teams, security needs to be highlighted up front as a goal from the inception of the project. If it does not exist as a milestone, then neither the programmer nor the manager will take the initiative to implement it (because of time constraints it is often the last thing on the priority list to be taken care of, though it is important).


How do we "test" our security policy?

DISCLAIMER: At my place of work we are aware that, as none of us are security experts, we can't avoid hiring security consultants to get a true picture of our security status and remedial actions for vulnerabilities. This question is asked in the spirit of trying to be a little less dumb and a bit more aware of the issues.
In my place of work, a small business with a sum total of 7 employees, we need to do some work on reviewing our application for security flaws and vulnerabilities. We have identified two main requirements in a security tester:
1. They are competent, thorough and know their stuff.
2. They are able to leave us with a clear idea of the work we need to do to make our security better.
This process will be iterative so we will have a scan, do the remedial work and repeat. This will be a regular occurrence going forward.
The problem we have is: how do we know (1)? And, even if we're reasonably sure of (1), how on earth do we proceed to (2)?
Our first idea was to do some light security scanning on our code ourselves and see if we could identify any definite issues. Then, if the security consultants we choose identify those issues and a few more we're well on the way to 1 and 2. The only problem is that I've been trawling the interweb for days now looking at OWASP, Metasploit, w3af, burp, wikto, sectools (and Stack Overflow, natch)...
As far as I can tell security software seems to come in two flavours, complex open source security stuff for security experts and expensive complex proprietary security stuff for security experts.
I am not a security expert, I am an intermediate level business systems programmer looking for guidance. Is there no approachable scanner type software or similar which will give me an overview of the state of my codebase? Am I just going to have to take a part time degree in order to understand this stuff at a brass tacks level? Or am I missing something?
I read that you're first interested in hiring someone and knowing they're good. Well, you've got a few options, but the easiest is to talk to someone in the know. I've worked with a few companies, and can tell you that Neohapsis and Matasano are very good (though it'll cost you).
The second option you have is to research the company. Who have they worked with? Can they give you references? What do the references have to say? What vulns has the company published to the world? What was the community response (were they shouted down, was the vuln considered minor, or was it game changing, like the SSL MitM vuln)? Have any of the company's employees talked at a conference? Was it a respected conference? Was the talk considered good by the attendees?
Second, you're interested in understanding the vulnerabilities that are reported to you. A good testing company will (a) give you a document describing what they did and did not do, what vulnerabilities they found, how to reproduce them, and how they know each vulnerability is valid; (b) meet with you (possibly by teleconference) to review the vulnerabilities and explain how the vulns work; and (c) have it written into the contract that they will retest once after you fix the vulns to validate that they are truly fixed.
You can also get training for your developers (or hire someone who has a good reputation in the field) so they can understand what's what. SafeLight is a good company. SANS offers good training, too. You can use training tools like OWASP's WebGoat, which walks you through common web app vulns. Or you can do some reading - the NIST SP 800 series is a freely downloadable, fantastic intro to computer security concepts, and the Hacking Exposed series does a good job of teaching how to do the very basic stuff. After that, Microsoft Press offers a great set of books about security and security development lifecycle activities. SafeCode offers some good, short recommendations.
Hope this helps!
If you can afford to hire expert security consultants, then that may be your best bet given that your in-house security skills are low.
If not, there is no escaping the fact that you are going to need to understand more about security, how to identify threats, and how to write tests for common security exploits like XSS, SQL injection, CSRF, and so on.
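To make that concrete, here is a minimal sketch of such a test in Python, using the built-in sqlite3 module as a stand-in database (the table name and query shape are made up for illustration):

    # Feed a classic injection payload in and assert it is treated as data.
    import sqlite3

    def find_user_unsafe(conn, username):
        # Vulnerable: user input is concatenated straight into the SQL text.
        sql = "SELECT id FROM users WHERE name = '" + username + "'"
        return conn.execute(sql).fetchall()

    def find_user_safe(conn, username):
        # Parameterised: the driver treats the input purely as data.
        return conn.execute("SELECT id FROM users WHERE name = ?", (username,)).fetchall()

    def test_injection_payload():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
        conn.execute("INSERT INTO users VALUES (1, 'alice')")
        payload = "nobody' OR '1'='1"
        assert find_user_unsafe(conn, payload)      # injection succeeds: rows leak
        assert not find_user_safe(conn, payload)    # parameterised query returns nothing

    if __name__ == "__main__":
        test_injection_payload()
        print("injection check passed")

The same pattern - send a known-bad payload and assert that it is handled as plain data - extends to XSS and CSRF checks against your own endpoints.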
Automated security vulnerability software (static code analysis and runtime vulnerability scanning) is useful, but it is only ever going to be one piece in your overall security approach. Automated tools do not identify all exploits, and they can leave you with a false sense of security, or a huge list of false positives. Without the ability to interpret the output of these tools, you might as well not have them.
One tool I would recommend for external vulnerability scanning is QualysGuard. They have a huge and up to date database of common exploits that they can scan for in public facing web applications, web servers, DNS servers, firewalls, VPN servers etc., and the output of the reports usually leaves you with a very clear idea of what is wrong, and what to do about it. But again, this would only be one part in your overall security approach.
If you want to take a holistic approach to security that covers not only the components in your network, applications, databases, and so on, but also the processes (e.g. change management, data retention policy, patching), you may find the PCI-DSS specification to be a useful guide, even if you are not storing credit card numbers.
Wow. I wasn't really expecting this little activity.
I may have to alter this answer depending on my experiences, but in continuing to wade through the acres of verbiage on my quest for something approachable, I happened on a project which has been brought into the OWASP fold:
http://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
It boasts, and I quote from the project documentation's introduction:
[ZAP] is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who a (sic) new to penetration testing.
EDIT: After having a swift play with ZAP this morning, although I couldn't directly switch on the attack mode on our site right away, I can see that the proxy works in a manner very similar to OWASP's WebScarab (I would link, but lack of rep and anti-spam rules prevent this). WebScarab is more technically oriented, it seems; looking over the feature list, it does more stuff, but it doesn't have a pen test vulnerability scanner. I'll update more once I've worked out how to have a go with the vulnerability scanner.
Anyone else who would like to pitch in and have a go would be welcome to do so and comment or answer as well below.

Hacking and exploiting - How do you deal with any security holes you find?

Today online security is a very important factor. Many businesses are completely based online, and there is tons of sensitive data available to check out only by using your web browser.
Seeking knowledge to secure my own applications, I've found that I'm often testing others' applications for exploits and security holes, maybe just out of curiosity. As my knowledge in this field has expanded by testing my own applications, reading zero-day exploits, and reading the book The Web Application Hacker's Handbook: Discovering and Exploiting Security Flaws, I've come to realize that a majority of online web applications are really exposed to a lot of security holes.
So what do you do? I have no interest in destroying or ruining anything, but after my biggest "breakthrough" in hacking I decided to alert the administrators of the page. My inquiry was promptly ignored, and the security hole still has not been fixed. Why wouldn't they want to fix it? How long will it be before someone with bad intentions breaks in and chooses to destroy everything?
I wonder why there's not more focus on this these days, and I would think there would be plenty of business opportunities in actually offering to test web applications for security flaws. Is it just me who has too much curiosity, or is there anyone else out there who experiences the same? It is punishable by law in Norway to actually try to break into a web page; even if you just check the source code, find the "hidden password" there, and use it to log in, you're already breaking the law.
I once reported a serious authentication vulnerability in an online audiobook store that allowed you to switch accounts once you were logged in. I was wary about whether I should report it, because in Germany hacking is forbidden by law too. So I reported the vulnerability anonymously.
The answer was that although they couldn't check the vulnerability themselves, as the software was maintained by the parent company, they were glad to receive my report.
Later I got a reply in which they confirmed the severity of the vulnerability and said that it had now been fixed. They thanked me again for the security report and offered me an iPod and audiobook credits as a gift.
So I’m convinced that reporting a vulnerability is the right way.
"Ive found that Im often testing others applications for exploits and security holes, maybe just for curiosity".
In the UK, we have the "Computer Misuse Act". Now, if these applications you're proverbially "looking at" are, say, Internet-based and the ISPs concerned can be bothered to investigate (for purely political motivations), then you're opening yourself up to getting fingered. Even doing the slightest "testing", unless you are the BBC, is sufficient to get you convicted here.
Even penetration test houses require sign-off from the companies concerned before undertaking formal work to provide security assurance on their systems.
To set expectations on how difficult reporting vulnerabilities can be: I have had this happen with actual employers, where some pretty serious stuff was raised - from brand damage to issues that could completely shut down the operations supporting an annual £100m e-commerce environment - and people sat on it for months.
I usually contact the site administrator, although the response is almost ALWAYS "omg you broke my javascript page validation I'll sue you."
People just don't like to hear that their stuff is broken.
Informing the administrator is the best thing to do, but some companies just won't take unsolicited advice. They don't trust or don't believe the source.
Some people would advise you to exploit the security flaw in a damaging way to draw their attention to the danger, but I would recommend against this; you could face serious consequences because of it.
Basically if you've informed them it's no longer your problem (not that it ever was in the first place).
Another way to ensure you get their attention is to provide specific steps as to how it can be exploited. That way it will be easier for whoever receives the email to verify it and pass it on to the right people.
But at the end of the line, you owe them nothing, so anything you choose to do is sticking your neck out.
Also, you could even create a new email address for yourself to use to alert the websites, because, as you mentioned, in some places it would be illegal even to verify the exploit, and some companies would choose to go after you instead of the security flaw.
If it doesn't affect many users, then I think notifying the site administrators is the most you can be expected to do. If the exploit has widespread ramifications (like a Windows security exploit) then you should notify someone in a position to fix the problem, then give them time to fix it before you publish the exploit (if publishing it is your intention).
A lot of people cry about exploit publication, but sometimes that's the only way to get a response. Keep in mind that if you found an exploit, there's a high likelihood that someone with less altruistic intentions has found it and has started exploiting it already.
Edit: Consult a lawyer before you publish anything that could damage a company's reputation.
I have experienced the same as you. I once found an exploit in an osCommerce shop where you could download ebooks without paying. I wrote two mails:
1) Developers of osCommerce, who answered: "Known issue, just don't use this paypal module, we won't fix"
2) Shop administrator: no answer at all
Actually I have no idea what the best way to behave is... maybe even publish the exploit to force the admins to react.
Contact the administrator, not a business-type person. Generally the admin will be thankful for the notice, and the chance to fix the problem before something happens and he gets blamed for it. A higher-up, or the channels a customer service person is going to go through, are the channels where lawyers get involved.
I was part of a group of people who reported an issue we stumbled across on the NAS system at University. The admins were very grateful we found the hole and reported it, and argued with their bosses on our behalf (the people in charge wanted to crucify us).
We informed the main developer about a SQL injection vulnerability on their login page. Seriously, it's the classic '<your-sql-here>-- variety. You can't bypass the login, but you can easily execute arbitrary SQL. It still hasn't been fixed after 2 months! Not sure what to do now... no one else at my office really cares, which amazes me since we pay so much for every little upgrade and new feature. It also scares me when I think about the code quality and how much stock we are putting in this software.

What are the best programmatic security controls and design patterns?

There's a lot of security advice out there to tell programmers what not to do. What in your opinion are the best practices that should be followed when coding for good security?
Please add your suggested security control / design pattern below. Suggested format is a bold headline summarising the idea, followed by a description and examples e.g.:
Deny by default
Deny everything that is not explicitly permitted...
Please vote up or comment with improvements rather than duplicating an existing answer. Please also put different patterns and controls in their own answer rather than adding an answer with your 3 or 4 preferred controls.
edit: I am making this a community wiki to encourage voting.
Principle of Least Privilege -- a process should only hold those privileges it actually needs, and should only hold those privileges for the shortest time necessary. So, for example, it's better to use sudo make install than to su to open a shell and then work as superuser.
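As a rough sketch of what this looks like in code (Python on a Unix-like system; the "nobody" account and the port are assumptions, and this needs root to run): do the one privileged operation, then give the privilege up for the rest of the process's lifetime.

    import os
    import pwd
    import socket

    def bind_privileged_port(port=80):
        # Requires root (or CAP_NET_BIND_SERVICE) because the port is below 1024.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("0.0.0.0", port))
        s.listen(5)
        return s

    def drop_privileges(username="nobody"):
        info = pwd.getpwnam(username)
        os.setgroups([])          # drop supplementary groups first
        os.setgid(info.pw_gid)    # group before user, or setgid would fail
        os.setuid(info.pw_uid)

    if __name__ == "__main__":
        listener = bind_privileged_port()
        drop_privileges()
        # From here on the process can serve requests but no longer acts as root.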
All these ideas that people are listing (isolation, least privilege, white-listing) are tools.
But you first have to know what "security" means for your application. Often it means something like
Availability: The program will not fail to serve one client because another client submitted bad data.
Privacy: The program will not leak one user's data to another user.
Isolation: The program will not interact with data the user did not intend it to.
Reviewability: The program obviously functions correctly -- a desirable property of a vote counter.
Trusted Path: The user knows which entity they are interacting with.
Once you know what security means for your application, then you can start designing around that.
One design practice that doesn't get mentioned as often as it should is Object Capabilities.
Many secure systems need to make authorizing decisions -- should this piece of code be able to access this file or open a socket to that machine.
Access Control Lists are one way to do that -- specify the files that can be accessed. Such systems though require a lot of maintenance overhead. They work for security agencies where people have clearances, and they work for databases where the company deploying the database hires a DB admin. But they work poorly for secure end-user software since the user often has neither the skills nor the inclination to keep lists up to date.
Object Capabilities solve this problem by piggy-backing access decisions on object references -- by using all the work that programmers already do in well-designed object-oriented systems to minimize the amount of authority any individual piece of code has. See CapDesk for an example of how this works in practice.
DARPA ran a secure systems design experiment called the DARPA Browser project which found that a system designed this way -- although it had the same rate of bugs as other Object Oriented systems -- had a far lower rate of exploitable vulnerabilities. Since the designers followed POLA using object capabilities, it was much harder for attackers to find a way to use a bug to compromise the system.
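A tiny illustration of the idea in Python (the function and file name are made up): instead of giving code a file name plus the ambient authority to open anything, hand it an already-opened, read-only object, so it can exercise exactly that capability and nothing more.

    def word_count(readable):
        # This function can only read from the object it was handed; it has no
        # way to open other files, sockets, and so on.
        return sum(len(line.split()) for line in readable)

    if __name__ == "__main__":
        # The caller decides exactly which capability to grant: a read-only
        # handle to one specific (hypothetical) file.
        with open("report.txt", "r") as handle:
            print(word_count(handle))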
White listing
Opt in what you know you accept
(Yeah, I know, it's very similar to "deny by default", but I like to use positive thinking.)
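For example, a minimal sketch in Python (the allowed pattern and country set are purely illustrative): opt in the inputs you accept and reject everything else.

    import re

    ALLOWED_USERNAME = re.compile(r"^[a-z0-9_]{3,20}$")
    ALLOWED_COUNTRIES = {"NO", "DE", "GB", "US"}

    def validate(username, country):
        # Accept only what is explicitly on the whitelist.
        if not ALLOWED_USERNAME.fullmatch(username):
            raise ValueError("username contains characters outside the whitelist")
        if country not in ALLOWED_COUNTRIES:
            raise ValueError("country code is not on the accepted list")
        return username, country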
Model threats before making security design decisions -- think about what possible threats there might be, and how likely they are. For example, someone stealing your computer is more likely with a laptop than with a desktop. Then worry about the more probable threats first.
Limit the "attack surface". Expose your system to the fewest attacks possible, via firewalls, limited access, etc.
Remember physical security. If someone can take your hard drive, that may be the most effective attack of all.
(I recall an intrusion red team exercise in which we showed up with a clipboard and an official-looking form, and walked away with the entire "secure" system.)
Encryption ≠ security.
Hire security professionals
Security is a specialized skill. Don't try to do it yourself. If you can't afford to contract out your security, then at least hire a professional to test your implementation.
Reuse proven code
Use proven encryption algorithms, cryptographic random number generators, hash functions, authentication schemes, access control systems, rather than rolling your own.
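As a small illustration using only Python's standard library (the iteration count is an assumption; tune it to current guidance for your hardware), lean on vetted primitives for salts, password hashing, and constant-time comparison rather than inventing your own:

    import hashlib
    import secrets

    def hash_password(password):
        # Random salt from a cryptographic RNG, then a standard KDF.
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password, salt, expected):
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return secrets.compare_digest(digest, expected)   # constant-time comparison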
Design security in from the start
It's a lot easier to get security wrong when you're adding it to an existing system.
Isolation. Code should have strong isolation between, e.g., processes so that failures in one component can't easily compromise others.
Express risk and hazard in terms of cost. Money. It concentrates the mind wonderfully.
A good understanding of the underlying assumptions of crypto building blocks can be important. E.g., stream ciphers such as RC4 are very useful but can easily be used to build an insecure system (i.e., WEP and the like).
If you encrypt your data for security, the highest risk data in your enterprise becomes your keys. Lose the keys, and data is lost; compromise the keys and all your data is compromised.
Use risk to make security decisions. Once you determine the probability of different threats, then consider the harm that each could do. Risk is, by definition
R = Pe × H
where Pe is the probability of the undesired event, and H is the hazard, or the amount of harm that could come from the undesired event.
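To make that concrete with purely illustrative numbers: an event with Pe = 0.02 per year and a potential harm of $500,000 carries R = 0.02 × 500,000 = $10,000 of expected loss per year, which you can then weigh against the cost of mitigating it.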
Separate concerns. Architect your system and design your code so that security-critical components can be kept together.
KISS (Keep It Simple, Stupid)
If you need to make a very convoluted and difficult to follow argument as to why your system is secure, then it probably isn't secure.
Formal security designs sometimes refer to a thing called the TCB (Trusted Computing Base). But even an informal design has something like this - the security enforcing part of your code, the part you can't avoid relying on. This needs to be well encapsulated and as simple and small as possible.

What is the best way to stop an application being copied and used without the owner’s permission?

What is the best way to prevent an application from being copied and used without the owner's knowledge?
Is there any way to trace the usage? Meaning that periodically the application communicates back with enough information so that we can know where it is and whether it's legal. The next thing, of course, is to shut it down if it's not legit.
Software that "phones home" will be quickly shunned by the vast majority of your users. Just license it appropriately and sell it.
People who use your software professionally will either pay for it or they won't use it. Corporations tend to frown on potential lawsuits.
People who want to use your software without paying for it will continue to do so despite your best efforts to counteract them. Once the software is in their hands, it is out of yours. Without pissing off your users, your only recourse is a legal one.
If your product is priced reasonably, some people will pay for it and some won't. That is just something you need to deal with upfront and it should be factored into your business plan.
Don't do this, don't attempt it, don't even think about it.
This is a battle you can't win. If people want to pirate your software, they will. You'll be shamed by the fact that a smart reverse engineer can write a one-byte binary patch to subvert all your protection schemes.
The people who are going to pirate your software will do so and all these "security features" you build in will likely end up only inconveniencing your true supporters: the people who have legitimately purchased your software. These draconian DRM / anti-piracy schemes only build resentment among software users.
Hardware dongles are the best way if you are really concerned about piracy IMO. Check out the big industrial CAD/CAM packages worth thousands or tens-of-thousands, or the AV/Music production software, they virtually all have dongle protection. Dongles can be emulated or reversed but not without a significant investment in time, a lot more than just changing a few JEs to JNEs in your assembly.
Phoning home is not the way to go unless you are providing a service that requires a subscription and constant updates (like antivirus products, for example) as part of your business model. You need to have a bit of respect for your users and their privacy. You might have perfectly innocent intentions, but what if a court ordered your company to hand over that information (as the US government is doing with Google and its search terms) - would or could you fight it? What if, some time in the future, you sold your company and the new owners decided to sell all that historic information to a marketing company? Privacy is not just about trusting a company not to abuse your data; it is trusting that company to go out of its way to protect your data, which is pretty far down the list of priorities for most companies. So basically, monitoring users is not really a good path to go down.
The best (and pretty much only) way to reliably prevent piracy is to have a client/server application instead of a standalone one, where a non-trivial part of the work is done by the server and users need to register. Then you can at least detect and block simultaneous use of the same account.
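A minimal sketch of that server-side check in Python (the in-memory store, time-to-live, and concurrency limit are all assumptions; a real service would persist this and handle abuse properly):

    import time

    # licence_key -> {machine_id: last_seen_timestamp}
    ACTIVE_SESSIONS = {}
    MAX_CONCURRENT = 2      # how many machines may share one key at once
    SESSION_TTL = 300       # seconds of silence before a machine is forgotten

    def heartbeat(licence_key, machine_id):
        """Record a check-in and report whether service should continue."""
        now = time.time()
        sessions = ACTIVE_SESSIONS.setdefault(licence_key, {})
        # Forget machines that have not checked in recently.
        for mid, last_seen in list(sessions.items()):
            if now - last_seen > SESSION_TTL:
                del sessions[mid]
        sessions[machine_id] = now
        # Refuse service if too many distinct machines share one key.
        return len(sessions) <= MAX_CONCURRENT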
There are several approaches you could take, but there are three that will be vastly more effective than any of the others.
A. Don't create it.
Software that doesn't exist never suffers from unauthorized use.
B. Don't release it.
If you have the only copy, and you keep it that way, then the chances are exceedingly good that there will be no unauthorized use.
C. Give everyone permission to use it.
If you don't want anyone to use it without permission, then you can give everyone permission and there will be no unauthorized users.
There is a possibility to trace the usage. You can accomplish this by letting your tool phone home and send the information you need. The problem with this is that, first, nobody likes software that phones home for this purpose, and second, with a simple application-level gateway the application can be blocked from phoning home. What you describe in your question is a common problem for software distributors, and it's not an easy one to solve!
There's another thing I haven't seen mentioned yet: you could add loads of settings to the application's configuration file and start with ridiculous defaults. Then do the installation and configuration personally, so no one but you is able to figure out how everything should be set. This can be a major deterrent for people who are just trying out whether a copy is enough. (Be sure to add settings that depend on all sorts of system settings, like OS-version-related DLL versions that should be loaded, etc.) Not very user-friendly though ;-)

Is it ethical to monitor users? [closed]

I didn't expect I would be getting so many replies so fast. I can provide more details. It is definitely for use within the company. I was looking for some info on whether I should be more careful, or anything to watch out for...
My boss is asking me to put some tracking info on what users do with their application. It is not about collecting sensitive data but there might be some screenshots involved and I am not sure if this is a breach of privacy.
Would love to hear any thoughts on this or if you handled something similar.
At work, there is no privacy. Think of it this way, if you work for a financial institution, or a government one, monitoring users may be the difference between keeping sensitive information secret and not. (I want my personal information kept private). They are paid to do work at work. If they are afraid about what they are doing is wrong, then they shouldn't be doing it.
A comment brought up a good point. If you are selling the product and spying on end users, that is totally different. That is highly unethical to take screen shots and report them back to the company. Actually where I work, we'd have you arrested for it if we found out. (yes, you'd be violating a federal law, and I guarantee we'd go after everyone and sort out the mistakes later.) That is a very slippery slope.
If you mean users at large, yes it's a breach of privacy.
If you mean users internal to your company (workers), then no -- there should be no expectation of privacy in the workplace.
Sometimes it is good to collect some metrics and will help in enhancing the user experience. Once, we were able to prove that a certain functionality was never used and we were able to remove support for it. For screenshots, you should be careful to take only the required window instead of a full screen.
If the application is used internally within your organization, and you have a corporate policy that states "no expectation of privacy" that has been communicated to and signed by your users then there is no issue.
Monitoring the actions of employees within a business in the US is very common practice.
Legal issues aside, do you want to work at a company that takes screenshots of your desktop?
Even if legal, this behavior is sure to drive away developers. Remember, in a bad work environment often the best developers leave first; they have the best job prospects.
Here's a corollary example: would you want your boss taping and listening to phone calls you made from the office? You don't give up every right you have just by cashing a paycheck.
Even if this screen capture methodology is legal, it certainly isn't ethical and will absolutely damage the morale of employees by demonstrating that they cannot be trusted.
It's just a bad idea. There have got to be better ways of accomplishing your goals than this.
Screenshots? If it's not opt-in, I'd say that's a pretty clear breach of privacy.
I made a simple CMS in PHP and I had to store all the actions of users, but that's a completely different situation. In my opinion, what your boss is asking for is a bit of an invasion of privacy, especially if your application doesn't mention this kind of behavior to the user.
On a work machine? Absolutely - as long as the users know the extent to which they are being monitored. It's their choice to work for the employer, and they are using the employer's equipment. If you don't notify them that they are being watched, then that is kind of a "grey area"... depending upon state laws, it may even be illegal, depending on what sort of information you are monitoring.
Something that would help clarify things: is this an internal company application, or something that will be on users' personal computers?
Typically when it comes to computers that are owned by the company, if the company decides to do monitoring, it is their choice. Disclosure of the monitoring is often encouraged in an effort to be open and honest, but is not mandatory. A user should not have any expectation of privacy when using equipment owned and managed by the company.
This is not just a matter of custom-built applications, but also web browsing, email, phone conversations, etc. If you are using company resources, then you are giving up your privacy.
If this is an application going to users outside of the company, then yes it is wrong without permission by the users.
That is greatly depending on the country you are in and what information you are collecting and what you do with it.
There is a huge difference between the US and EU for instance.
The law, jurisprudence, union contracts, and company policy (when not in contradiction with the above) are what determine what is acceptable.
If it's for an internal app, it's completely ethical.
Beyond disclosing to all users that their use of the app is monitored, there is no other obligation of disclosure (excepting federal contracts and union contracts).
What is most important about capturing this kind of data is to focus on capturing the absolute least amount necessary. Capturing screenshots of all open windows plus any adjacent data streams does in fact incur liability issues (think HIPAA), as well as producing a mountain of data that no one will ever look through until a lawyer requests it with a subpoena and you're asked to go through it and redact all names, DOBs, and SSNs in 160GB of data.
Seems this has already been answered, but it should be noted that there are countries where this is illegal, even at a place of work.
For instance, in Switzerland it is illegal to track which websites each user has been visiting.
Other than specific laws to the contrary, I would agree that it is acceptable to do, since there should be no reasonable expectation of privacy at the workplace. That said, informing the users is the right thing to do.
One other caveat, if the data you are collecting is sensitive enough that an attacker would have use of it (say, the screenshots include CC numbers), then you must ensure that this information is well protected. (I'm not referring to the user's information, but say the bank's clients' account details.)
If it is done without the user's consent, then it is definitely a breach of privacy. Even with the user's consent, it must be made clear exactly what information is being passed back. If the screenshot was to grab the whole screen, not just a window, then you could potentially get all kinds of private info.
Is this an internal app or something for the public? If it's internal, it's not unethical, even if it's scummy, to monitor users.
If it's something for the public, in order to not be sleazy:
the user has to be able to opt-out
no personally identifying data can be collected
only data about your app (not screenshots of the entire screen) can be collected
It really depends on exactly what is being collected, the disclosure, and whether the program can be opted out of. If that passes the smell test, then ensuring the reporting does not provide an attack vector and that the data is appropriately safeguarded becomes your concern. If things seem shady, get a written 'feature request' to CYA. The basic idea, if done right, is nothing new; Microsoft, for instance, does it with some of their products.
In a work environment, I think it is OK as long as all employees know that they may be monitored. I've seen places (Intuit was one) where employees are tracked all day. Not my cup of tea, however.
In government facilities, there is typically some sort of login screen that states that anything and everything done on that machine is subject to monitoring.
If these are applications that are run by the general public, I'd say that it better be crystal clear that you are collecting data on them. Personally, I'd rather not have programs 'phoning home' with info about my activities, boring as they may be.
If the client is external, this should be disclosed to the client. Actually, if the client is internal OR external, if you do not disclose it, it is totally unethical.
An employment agreement that states that there can be no expectation of privacy constitutes disclosure.
Screenshots? If it's not opt-in, I'd say that's a pretty clear breach of privacy.
You've opted in by cashing your paycheck :)
As many have indicated, informing the user is the best the company can do. Informing, not asking to opt in.
I would suggest reading up on privacy. My interpretation is that people will expect some things to be kept private, such as their personal information. By interacting with your sites, users are sharing information with you that you should be able to use, but not distribute or abuse as if it were your own.
Screenshots are obviously the hot-button issue here. While users entering information into a text input field are knowingly giving you information, screenshots go beyond what a typical user would expect and should therefore be disclosed to the user through a privacy policy.
Collecting anonymous usage data should be doable without screenshots.
If your app collects any data that is meant to be protected by privacy laws, then you will have to treat the screenshots as containing sensitive information and protect them accordingly. Data protection laws are pretty strict in most countries.
Unless you have a really, really small company, privacy laws vary a lot between countries, and the feature is probably more trouble than it's worth. In any country I've ever lived in, that idea would never fly.
But don't ask a bunch of hacks on a site like Stack Overflow. Seriously, ask a lawyer.
I think the question is still a bit vague as to who is going to be monitored, and for what. From what I understand, those who will be monitored are the end users of the application, and the gathered data will be used internally. Assuming this is the case, I think I can contribute the following answer:
If you are going to monitor end users to see how they are using your product, you are in the human factors/user experience business, and what you want to do is really an experiment. Doing such an experiment requires the consent of the subject (the end user). In an academic setting (and I think the same goes for industry as well), there is an Institutional Review Board (IRB) which grants permission for such experiments. I believe in industry there are similar organizations (I'm just not sure what they are called). A request for permission for such an experiment is accompanied by a report which details the user experiment in a very specific manner. The IRB then decides whether to issue a permit or not.
The important point here is consent: users should know about the experiment and agree to be subjects. I think that in the absence of user consent the experiment is neither ethical nor legal. Again, I approached this based on an assumption and tried to summarize my experience with such experiments.
Collecting screen shots may be illegal even if employees are notified. This is an issue of local law and federal law. You haven't said which country you are in. In California, for example, monitoring screens might violate both workplace privacy laws and wiretap laws. You should get an opinion of your corporate attorney before implementing this.

Resources