Attacking websites without leaving an audit trail - security

Recently Aetna suffered a breach in which it lost 65,000 SSNs. They were never able to find an audit trail of what happened, which probably hints that the attack leveraged XSS or a similar technique.
Are there specific known attacks that the bad guys are repeatedly leveraging for this type of attack?

There are common mistakes that people make and there are common platforms that people use. Each, if left unpatched, would allow somebody to break in using a simple script.
But if somebody was going after something specific, in this case social security numbers, which have high value in organised crime rings, I would have expected them to spend a little more time figuring out how the site worked and applying a custom exploit to grab the data.
I don't see why it has to be XSS either. If their systems weren't sending access logs off-server, or even logging every entry point, there are a variety of ways somebody could exploit a vulnerable server and clean up afterwards.

It's not at all clear at this point that this was a technical failure, and given the inconclusive forensics it seems much more likely to me that this was a human failure, be it social engineering, data left on a train seat, or a disgruntled employee.
AFAIK the only way to truly leave zero audit trail is for the audit logging never to have been written in the first place. Logging HTTP traffic alone will always give you some evidence of an HTTP-based attack.

I've seen the results of some automated attacks, and one of the first things they do is disable logging and delete all existing logs.
That's why it's common to change logging locations to a non-standard path - it won't do anything against a determined attacker, but it will give you more information in the case of an automated attack.
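To illustrate, here is a minimal sketch in Python (the log path and collector hostname are placeholders I made up) of application logging that writes to a non-standard local path and also ships a copy off-server, so some evidence survives even if the attacker wipes the box:

    import logging
    import logging.handlers

    logger = logging.getLogger("webapp.audit")
    logger.setLevel(logging.INFO)

    # Local copy in a non-standard path; this only defeats dumb "rm /var/log/*" scripts.
    local = logging.FileHandler("/srv/.audit/access.log")

    # Off-server copy via syslog; survives even if the local disk is cleaned up.
    remote = logging.handlers.SysLogHandler(address=("logs.internal.example", 514))

    fmt = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
    for handler in (local, remote):
        handler.setFormatter(fmt)
        logger.addHandler(handler)

    logger.info("login user=%s ip=%s", "alice", "203.0.113.7")
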

The lack of an audit trail is not surprising, to say the least. Not many companies out there keep meaningful audit trails. Sure, there are often gigabytes and gigabytes of logs, but who goes through all of that? Most IT shops just dump them once they age enough, so it's entirely possible that this breach happened a while ago and they've already dumped the logs, since from the article it looks like they did not know about the possible breach until the spam mail started coming in.
I'd suspect poor IT instead of some clever attack that caused the lack of an audit trail.

Related

I found a vulnerability on a website, how do I approach the company about it?

This is the first time I have found a vulnerability on a website.
A client just wanted some data scraped from an auction website.
While collecting the client's data I noticed that if something sold, you would need to buy a yearly subscription to see how much it sold for, plus the glossary, item info, etc.
But I found that the price it sold for is still being injected by JavaScript, just not being displayed, and this is a feature they ask customers to pay for.
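For illustration, something like this (the URL, variable name, and field names are invented, not the real site's) pulls the supposedly subscriber-only price straight out of the page source:

    # Hypothetical sketch; URL and JSON field names are made up for illustration.
    import json
    import re

    import requests

    html = requests.get("https://auction.example/item/12345").text

    # The "hidden" sold price ships inside a JSON blob embedded in the page,
    # even though the UI never renders it for non-subscribers.
    match = re.search(r"var itemData = (\{.*?\});", html)
    if match:
        item = json.loads(match.group(1))
        print(item.get("soldPrice"))
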
Just wanted to get advice from people who usually test vulnerabilities on how they approach the companies about the vulnerability.
Thanks in advance.
First of all, StackOverflow is more focused on software development; the Security StackExchange would be more appropriate for asking such questions.
If I understood the context correctly, you do not have any security audit contract that explicitly allows you to look for vulnerabilities. Yet you were hired to scrape their website, so looking at their code is part of what you had to do. I would say the answer depends on whether you were within the scope of your mission or not. Did you notice this flaw without even trying, or did you dig more than you should have?
If this flaw appeared to you while you were doing the job you were asked to do, then I would say it's pretty safe to just email them in the context of your job.
If this vulnerability is too far out of your scope, then be careful, because if they don't like it, in the worst case they could very well pressure your company to fire you or something (they would be total idiots to do so, but it is better to be careful when dealing with security). If you are in this scenario, I suggest you report the vulnerability "anonymously" (it doesn't have to be over-complicated; simply use an email address not containing your name or the name of your company and you'll be alright). You could also check out bug bounty platforms to see if they are affiliated with any program. Some platforms such as openbugbounty also provide ways to report vulnerabilities "anonymously".
In my experience, I've never seen a company be a total jerk and go after someone reporting a vulnerability in good faith; companies are generally concerned about security and will thank you for telling them there's something wrong.
Either way, try to be as descriptive as possible so they can understand the issue and fix it.
Just stating the obvious here: don't disclose this vulnerability publicly, or you're exposing yourself to some big trouble.

Programming/Hacking

Let's say I knew an ethical hacker that I wanted to hire to do a penetration test, but trust was an issue. Could I duplicate my system but have its sensitive data removed, and have it untraceable to the company that owns it?
If just the structure and security measures remained, could this duplicate be hacked to see if certain areas can be accessed? I'm guessing it could be done similarly to the 'missions' on hackthissite.org. I could then be informed of the exploits. What would the test site look like?
Could it actually be completely untraceable to its company? How hard would this be?
You generally cannot go around distributing the code for your employer's sites.
With their permission, though, what you could do is set up a staging environment (most development shops should have one anyway) and point the relevant people at that site (with no real data) for the purposes of a penetration test. It may limit the scope of the validity of their attacks, but not greatly, because you're already basically saying "attack this web infrastructure", and the data they see is largely irrelevant (as long as it has the same structure); that is, the aim of exposing weaknesses in the site's function is independent of the data.
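As a rough illustration of the scrubbing step you'd run when cloning production data into such a staging copy (the table and column names here are invented), a minimal sketch:

    # Hypothetical sketch: replace sensitive values while preserving the schema,
    # so the clone keeps the same structure but none of the real data.
    import sqlite3
    import uuid

    conn = sqlite3.connect("staging_clone.db")

    for (user_id,) in conn.execute("SELECT id FROM users").fetchall():
        token = uuid.uuid4().hex[:8]
        conn.execute(
            "UPDATE users SET email = ?, full_name = ?, ssn = NULL WHERE id = ?",
            (f"user{token}@example.invalid", f"Test User {token}", user_id),
        )

    conn.commit()
    conn.close()
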
You could do that, but there are nuances. Just make sure that the structure is not changed; that is, strip out the data without altering the behaviour, create a clone, and allow him to test only that.
Bear in mind, though, that even if you remove the sensitive data, you can still be hacked. A security flaw may lie not in the application's behaviour but in the services around it (which is often the case).
The tester can also easily not report a vulnerability and leave it open as a backdoor into your real application.

Website hacking - why is it always possible?

We know that any executable file can be reverse engineered (disassembled, decompiled). No matter how strong the security you implement, if crackers want to crack it, they will! It is just a question of time.
What about websites? Can we say that a website can be completely safe from attacks by hackers (assuming the hosting is not vulnerable)? If not, what is the reason?
Yes, it is always possible. There is always a way in.
It's like my grandfather always said:
Locks are meant to keep the honest people out.
Can we say that a website can be completely safe from attacks by hackers?
No. Even the most secure technology in the world is vulnerable to social engineering attacks, for one thing.
You can easily write a webapp that is mathematically proven to be secure... but that proof will only hold as long as the underlying operating system, interpreter/compiler, and hardware are secure, which is never the case.
The key thing to remember is that websites are usually part of a huge and complex system and it doesn't really matter if the hacker enters the system through the web application itself or some other part of the entire infrastructure. If someone can get access to your servers, routers, DNS or whatever, they can bring down even the best web application. In my experience a lot of systems are vulnerable in some way or another. So "completely secure" means either "we're trying really hard to secure the platform" or "we have no clue whatsoever, but we hope everything is okay". I have seen both.
To sum up and add to the posts that precede:
Web as a shared resource - websites are useful so long as they are accessible. Render the website inaccessible and you've broken it. Denial of service attacks, which amount to flooding the server so that it can no longer respond to legitimate requests, will always be a factor. It's a game of keep-away: big server sites find ways to distribute, hackers find ways to deluge.
Dynamic data = dynamic risk - if the user can input data, there's a chance for a hacker to be a menace. Today the big concepts are cross-site scripting and SQL injection (see the sketch after this list), but once one avenue for cracking is figured out, chances are high that another mechanism will arise. You could, conceivably, argue that a totally static site can be secure from this, but then how many useful sites fit that bill?
Complexity = the more complex, the harder to secure - given the rapid change of technology, I doubt that any web developer could say with 100% confidence that a modern website was secure; there's too much unknown code. Setting the host aside (the server, network protocols, OS, and maybe database), there are still all the great new libraries in Java EE and .Net. And even a less enterprise-y architecture will have some serious complexity that makes knowing all potential inputs and outputs of the code prohibitively difficult.
The authentication problem = by definition, a website lets a remote user do something useful on a server that is far away. Knowing and trusting the other end of the communication is an old challenge. These days server-side authentication is relatively well implemented and understood, and (so far as I know!) no one has managed to hack PKI. But getting user authentication ironed out is still quite tricky. It's doable, but it's a tradeoff between difficulty for the user, difficulty of configuration, and the risk of vulnerability. And even a strong system can be broken when users don't follow the rules or when accidents happen. All this doesn't apply if you want to make a public site open to all users, but that severely limits the features you'll be able to implement.
I'd say that web sites simply change the nature of the security challenge from the challenges of client side code. The developer does not need to be as worried about code replication, but the developer does need to be aware of the risks that come from centralizing data and access to a server (or collection of servers). It's just a different sort of problem.
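To make the injection point above concrete, here is a minimal sketch (Python and sqlite3 chosen purely for illustration) of the classic mistake and its fix:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    name = "' OR '1'='1"  # attacker-controlled input

    # Vulnerable: the input is spliced into the SQL text itself.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    print(conn.execute(query).fetchall())  # returns every row

    # Safe: a parameterized query treats the input purely as data.
    print(conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall())  # []
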
Websites suffer greatly from injection and cross-site scripting attacks:
Cross-site scripting carried out on websites was roughly 80% of all documented security vulnerabilities as of 2007.
Also, part of a website (in some websites a great deal) is sent to the client in the form of CSS, HTML and JavaScript, which is then open for inspection by anyone.
Not to nitpick, but your premise that the hosting is not vulnerable presumably does not mean the HTTP service running on the host is completely free from exploits.
Popular web servers such as IIS and Apache are often patched in order to protect against such exploits, which are often discovered the same way exploits in local executables are discovered.
For example, a malformed HTTP request could cause a buffer overrun on the server, leading to part of its data being executed.
It's not possible to make anything 100% secure.
All that can be done is to make something hard enough to break into, that the time and effort spent doing so makes it not worth doing.
Can I crack your site? Sure, I'll just hire a few suicide bombers to blow up your servers. Or... I'll blow up the power plants that power your site, or do some sort of social engineering. DDoS attacks would quite likely be effective at a large scale, not to mention atom bombs...
Short answer: yes.
This might be the wrong website to discuss this. However, it is widely known that security and usability are inversely related. See this post by Bruce Schneier for example (it refers to another website, but on Schneier's blog there is a lot of interesting reading on the issue).
Assuming the server itself isn't compromised, and has no other clients sharing it, static code should be fine. Things usually only start to get funky when there's some sort of scripting language involved. After all, I've never seen a compromised "It works!" page.
Saying 'completely secure' is a bad sign, as it states one of two things:
there has not been a proper threat analysis, because 'secure enough' would be the correct term
since security is always a tradeoff, a system that is completely secure will have abysmal usability, and the site will be a huge resource hog because security has been taken to insane levels
So instead of trying to achieve "complete security" you should:
Do a proper threat analysis
Test your application (or have someone professional test it) against common attacks
Apply best practices, not extreme measures
The short of it is that you have to strike a balance between ease of use and security, much of the time, and decide what provides the optimal level of both for your purposes.
An excellent case in point is passwords. The easy way to go about it is to have just one, use it everywhere, and make it something easy to remember. The secure way to go about it is to have a randomly generated, variable-length sequence of characters from across the encoding spectrum that only the user himself knows.
Naturally, if you go too far on the easy side, the user's data is easy to pick off. If you go too far on the side of security, however, practical application can end up compromising the added value of the security measures (e.g. people can't remember their whole keychain of passwords and corresponding user names, and therefore write them all down somewhere; if the list is compromised, the security measures that had been put into place are for naught). Hence, most of the time a balance gets struck, and places ask that you put a number in your password and tell you not to do anything stupid like telling it to other people.
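For what it's worth, a minimal sketch of generating the "secure end" of that spectrum (the length and alphabet are arbitrary choices, not a recommendation from this thread):

    import secrets
    import string

    # Arbitrary policy for illustration: 16 characters drawn from
    # letters, digits and punctuation, using a cryptographic RNG.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    password = "".join(secrets.choice(alphabet) for _ in range(16))
    print(password)
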
Even if you remove the possibility of a malicious person with the keys to everything leaking data from the equation, human stupidity is infinite. There is no such thing as 100% security.
Can we say that a website can be completely safe from attacks by hackers (assuming the hosting is not vulnerable)?
Well if we're going to start putting constraints on the attacker, then of course we can design a completely secure system: we just have to bar all of the attacker's attacks from the scenario.
If we assume the attacker actually wants to get in (and isn't bound by the rules of your engagement), then the answer is simply no, you can't be completely safe from attacks.
Yes, it's possible for a website to be completely secure, for a reasonable definition of 'complete' that includes your original premise that the hosting is not vulnerable. The problem is the same as with any software that contains defects; people create software of a complexity that is slightly beyond their capability to manage and thus flaws remain undetected until it's too late.
You could start smaller and prove all your work correct and safe as you construct it, remaking any off-the-shelf components that haven't been designed to that stringent degree of quality, but unfortunately that leaves you at a massive commercial disadvantage compared to the people who can write 99% safe software in 1% of the time. Therefore there's rarely a good business reason for going down this path.
The answer to this question lies close to the ideas in computability theory that arise from the halting problem (http://en.wikipedia.org/wiki/Halting_problem). To wit, if you could say with clarity that you'd devised a way to programmatically determine whether any particular program was secure, you would be close to disproving the undecidability of the halting problem on the class of machines you were working with. Since the undecidability of the halting problem has been proven, we know that over Turing machines you would be unable to prove securability in general, since the problem of security reduces to the halting problem. Even for finite machines you might be able to enumerate all of the states of the program, but Minsky would tell us that the time required to build a complete state tree for even simplistic modern-day machines and web servers would be huge. You probably know a lot about a specific piece of code, but as soon as you changed or updated the code, a complete retest would be required. Fundamentally this is interesting because it all boils back to the concepts of information and meaning. Read about automated theorem proving to understand more about the limits of computational systems: http://en.wikipedia.org/wiki/Automated_theorem_proving
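A compact sketch of the reduction this answer gestures at (my paraphrase, not the original poster's): given any program P and input x, define a wrapper P' that first simulates P on x and then leaks a secret. Then

    \[
      P' \text{ is insecure} \iff P \text{ halts on } x,
    \]

so a total decision procedure for "is this program secure?" would decide the halting problem.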
The fact is that hackers are always one step ahead of developers; you can never consider a site to be bulletproof and 100% safe. You just avoid malicious stuff as much as you can!
In fact, when it comes to security you should follow a whitelist approach rather than a blacklist approach.
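A minimal sketch of what that means in practice (the pattern here is just an example policy): accept only input that matches a known-good pattern, instead of trying to enumerate everything bad.

    import re

    # Whitelist: accept only what is known to be good.
    USERNAME_OK = re.compile(r"[A-Za-z0-9_]{1,32}\Z")

    def is_valid_username(name: str) -> bool:
        return bool(USERNAME_OK.match(name))

    assert is_valid_username("alice_99")
    assert not is_valid_username("' OR '1'='1")  # rejected without any blacklist
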

Hacking and exploiting - How do you deal with any security holes you find?

Today online security is a very important factor. Many businesses are based completely online, and there are tons of sensitive data available to check out just by using your web browser.
Seeking knowledge to secure my own applications, I've found that I'm often testing others' applications for exploits and security holes, maybe just out of curiosity. As my knowledge in this field has expanded by testing my own applications, reading zero-day exploits and reading the book The Web Application Hacker's Handbook: Discovering and Exploiting Security Flaws, I've come to realize that a majority of online web applications are really exposed to a lot of security holes.
So what do you do? I have no interest in destroying or ruining anything, but after my biggest "breakthrough" hack I decided to alert the administrators of the page. My inquiry was promptly ignored, and the security hole has still not been fixed. Why wouldn't they want to fix it? How long will it be before someone with bad intentions breaks in and chooses to destroy everything?
I wonder why there's not more focus on this these days, and I would think there would be plenty of business opportunities in actually offering to test web applications for security flaws. Is it just me who is too curious, or is there anyone else out there who experiences the same? In Norway it is punishable by law to even try to break into a web page: if you just check the source code, find the "hidden password" there, and use it to log in, you're already breaking the law.
I once reported a serious authentication vulnerability in an online audiobook store that allowed you to switch accounts once you were logged in. I too was wary about whether I should report it, because in Germany hacking is also forbidden by law. So I reported the vulnerability anonymously.
The answer was that although they couldn't check this vulnerability themselves, as the software was maintained by the parent company, they were glad for my report.
Later I got a reply in which they confirmed the dangerousness of the vulnerability and said that it was now fixed. They thanked me again for the security report and offered me an iPod and audiobook credits as a gift.
So I’m convinced that reporting a vulnerability is the right way.
"Ive found that Im often testing others applications for exploits and security holes, maybe just for curiosity".
In the UK, we have the "Computer Misuse Act". Now if these applications you're proverbially "looking at" are say Internet based and the ISP's concerned can be bothered to investigate (for purely political motivations) then you're opening yourself up getting fingered. Even doing the slightest "testing" unlesss you are the BBC is sufficient to get you convicted here.
Even Penetration Test houses require Sign Off from companies who wish to undertake formal work to provide security assurance on their systems.
To set expectations on the difficulty in reporting vulnerabilties, I have had this with actual employers where some pretty serious stuff has been raised and people have sat on it for months from the likes of brand damage to even completely shutting down operations to support an annual £100m E-Com environment.
I usually contact the site administrator, although the response is almost ALWAYS "omg you broke my javascript page validation I'll sue you."
People just don't like to hear that their stuff is broken.
Informing the administrator is the best thing to do, but some companies just won't take unsolicited advice. They don't trust or don't believe the source.
Some people would advise you to exploit the security flaw in a damaging way to draw their attention to the danger, but I would recommend against this; you could face serious consequences because of it.
Basically if you've informed them it's no longer your problem (not that it ever was in the first place).
Another way to ensure you get their attention is to provide specific steps showing how it can be exploited. That way it will be easier for whoever receives the email to verify it and pass it on to the right people.
But at the end of the line, you owe them nothing, so anything you choose to do is sticking your neck out.
Also, you could create a new email address for yourself to use when alerting the websites, because as you mentioned, in some places it would be illegal even to verify the exploit, and some companies would choose to go after you instead of the security flaw.
If it doesn't affect many users, then I think notifying the site administrators is the most you can be expected to do. If the exploit has widespread ramifications (like a Windows security exploit) then you should notify someone in a position to fix the problem, then give them time to fix it before you publish the exploit (if publishing it is your intention).
A lot of people cry about exploit publication, but sometimes that's the only way to get a response. Keep in mind that if you found an exploit, there's a high likelihood that someone with less altruistic intentions has found it and has started exploiting it already.
Edit: Consult a lawyer before you publish anything that could damage a company's reputation.
I have experienced the same as you. I once found an exploit in an oscommerce shop where you could download ebooks without paying. I wrote two mails:
1) Developers of oscommerce, they answered "Known issue, just don't use this paypal module, we won't fix"
2) Shop administrator: no answer at all
Actually I have no idea what's the best way to behave... maybe even publish the exploit to force the admins to react.
Contact the administrator, not a business-type person. Generally the admin will be thankful for the notice, and the chance to fix the problem before something happens and he gets blamed for it. A higher-up, or the channels a customer service person is going to go through, are the channels where lawyers get involved.
I was part of a group of people who reported an issue we stumbled across on the NAS system at University. The admins were very grateful we found the hole and reported it, and argued with their bosses on our behalf (the people in charge wanted to crucify us).
We informed the main developer about a SQL injection vulnerability on their login page. Seriously, it's the classic '<your-sql-here>-- variety. You can't bypass the login, but you can easily execute arbitrary SQL. It still hasn't been fixed after 2 months! Not sure what to do now... no one else at my office really cares, which amazes me since we pay so much for every little upgrade and new feature. It also scares me when I think about the code quality and how much stock we are putting in this software.

Is it ethical to monitor users? [closed]

I didn't know I would be getting so many replies so fast. I can provide more details: it is definitely for use within the company. I was looking for some info on whether I should be more careful, or anything to watch out for...
My boss is asking me to put some tracking info on what users do with their application. It is not about collecting sensitive data, but there might be some screenshots involved and I am not sure if this is a breach of privacy.
Would love to hear any thoughts on this or if you handled something similar.
At work, there is no privacy. Think of it this way: if you work for a financial institution or a government one, monitoring users may be the difference between keeping sensitive information secret and not (I want my personal information kept private). People are paid to do work at work. If they are afraid that what they are doing is wrong, then they shouldn't be doing it.
A comment brought up a good point. If you are selling the product and spying on end users, that is totally different. It is highly unethical to take screenshots and report them back to the company. Actually, where I work we'd have you arrested for it if we found out (yes, you'd be violating a federal law, and I guarantee we'd go after everyone and sort out the mistakes later). That is a very slippery slope.
If you mean users at large, yes it's a breach of privacy.
If you mean users internal to your company (workers), then no -- there should be no expectation of privacy in the workplace.
Sometimes it is good to collect some metrics, and it will help in enhancing the user experience. Once, we were able to prove that a certain feature was never used, and we were able to remove support for it. For screenshots, you should be careful to capture only the required window instead of the full screen.
If the application is used internally within your organization, and you have a corporate policy that states "no expectation of privacy" that has been communicated to and signed by your users then there is no issue.
Monitoring the actions of employees within a business in the US is very common practice.
Legal issues aside, do you want to work at a company that takes screenshots of your desktop?
Even if legal, this behavior is sure to drive away developers. Remember, in a bad work environment often the best developers leave first; they have the best job prospects.
Here's an analogous example: would you want your boss taping and listening to phone calls you made from the office? You don't give up every right you have just by cashing a paycheck.
Even if this screen capture methodology is legal, it certainly isn't ethical and will absolutely damage the morale of employees by demonstrating that they cannot be trusted.
It's just a bad idea. There have got to be better ways of accomplishing your goals than this.
Screenshots? If it's not opt-in, I'd say that's a pretty clear breach of privacy.
I made a simple CMS in PHP and I had to store all the actions of users, but that is a completely different situation. In my opinion what your boss is asking crosses a privacy line, especially if your application does not mention this kind of behavior to the user.
On a work machine? Absolutely, as long as the users know the extent to which they are being monitored. It's their choice to work for the employer, and they are using the employer's equipment. If you don't notify them that they are being watched, then that is kind of a grey area... depending upon state laws, it may even be illegal, depending on what sort of information you are monitoring.
Something that would help clarify things is whether this is an internal company application or something that will run on users' personal computers.
Typically, when it comes to computers that are owned by the company, if the company decides to do monitoring, it is their choice. Disclosure of the monitoring is often encouraged in an effort to be open and honest, but it is not mandatory. A user should not have any expectation of privacy when using equipment owned and managed by the company.
This is not just a matter of custom-built applications, but also web browsing, email, phone conversations, etc. If you are using company resources then you are giving up your privacy.
If this is an application going to users outside of the company, then yes it is wrong without permission by the users.
That depends greatly on the country you are in, what information you are collecting, and what you do with it.
There is a huge difference between the US and EU for instance.
The law, jurisprudence, union contracts and company policy (when not in contradiction to the above) are what determine what is acceptable.
If it's for an internal app, it's completely ethical.
Beyond disclosing to all users that their use of the apps is monitored, there is no other obligation of disclosure (excepting federal contracts and union contracts).
What is most important about capturing this kind of data is to focus on capturing the absolute least amount necessary. Capturing screenshots of all open windows plus any adjacent data streams does in fact incur liability issues (think HIPAA), as well as producing a mountain of data that no one will ever look through until a lawyer requests it with a subpoena and you're asked to go through it and redact all names, DOBs, and SSNs in 160GB of data.
Seems this has already been answered, but it should be noted that there are countries where this is illegal, even at a place of work.
For instance, in Switzerland it is illegal to track which websites each user has been visiting.
Other than specific laws to the contrary, I would agree that it is acceptable to do, since there should be no reasonable expectation of privacy at the workplace. That said, informing the users is the right thing to do.
One other caveat, if the data you are collecting is sensitive enough that an attacker would have use of it (say, the screenshots include CC numbers), then you must ensure that this information is well protected. (I'm not referring to the user's information, but say the bank's clients' account details.)
If it is done without the user's consent, then it is definitely a breach of privacy. Even with the user's consent, it must be made clear exactly what information is being passed back. If the screenshot was to grab the whole screen, not just a window, then you could potentially get all kinds of private info.
Is this an internal app or something for the public? If it's internal, it's not unethical, even if it's scummy, to monitor users.
If it's something for the public, then in order not to be sleazy (a minimal sketch follows this list):
the user has to be able to opt-out
no personally identifying data can be collected
only data about your app (not screenshots of the entire screen) can be collected
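Under those constraints, a collector of app-only events could look something like this minimal sketch (the endpoint URL and event name are made up for illustration):

    # Hypothetical sketch honoring the list above: the user's choice is respected,
    # nothing personally identifying is sent, and only in-app events are recorded.
    import json
    import urllib.request

    TELEMETRY_ENDPOINT = "https://telemetry.example/events"  # placeholder URL

    def send_event(name: str, opted_out: bool) -> None:
        """Report a single in-app event; no user identity, no screen contents."""
        if opted_out:
            return  # user opted out: collect nothing at all
        payload = json.dumps({"event": name}).encode()
        req = urllib.request.Request(
            TELEMETRY_ENDPOINT,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=2)

    send_event("export_clicked", opted_out=False)
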
It really depends on exactly what is being collected, on the disclosure, and on whether the program can be opted out of. If that passes the smell test, then ensuring the reporting does not provide an attack vector and that the data is appropriately safeguarded becomes your concern. If things seem shady, get a written 'feature request' to CYA. The basic idea, if done right, is nothing new; Microsoft, for instance, does it with some of their products.
In a work environment, I think it is OK as long as all employees know that they may be monitored. I've seen places (Intuit was one) where employees are tracked all day. Not my cup of tea, however.
In government facilities, there is typically some sort of login screen that states that anything and everything done on that machine is subject to monitoring.
If these are applications that are run by the general public, I'd say that it better be crystal clear that you are collecting data on them. Personally, I'd rather not have programs 'phoning home' with info about my activities, boring as they may be.
If the client is external, this should be disclosed to the client. Actually, whether the client is internal or external, if you do not disclose it, it is totally unethical.
An employment agreement that states that there can be no expectation of privacy constitutes disclosure.
Screenshots? If it's not opt-in, I'd say that's a pretty clear breach of privacy.
You've opted in by cashing your paycheck :)
As many have indicated, informing the user is the best the company can do. Informing, not asking to opt in.
I would suggest reading up on privacy. My interpretation is that people expect some things to be kept private, such as their personal information. By interacting with your sites, users are sharing information with you that you should be able to use but not distribute or abuse as if it were your own.
Screenshots are obviously the hot-button issue here. While users entering information into a text input field are knowingly giving you that information, screenshots go beyond what a typical user would expect and therefore should be disclosed to the user through a privacy policy.
Collecting anonymous usage data should be doable without screenshots.
If your app collects any data that is meant to be protected by privacy laws, then you will have to treat the screenshots as containing sensitive information and protect them accordingly. Data protection laws are pretty strict in most countries.
Unless you have a really, really small company, privacy laws vary a lot between countries, and the feature is probably more trouble than it's worth. In no country I've ever lived in would that idea fly.
But don't ask a bunch of hacks on a site like stack overflow. Seriously, ask a lawyer.
I think the question is still a bit vague as to who is going to be monitored for what. From what I understand, the people being monitored are the end users of the application, and the gathered data will be used internally. Assuming this is the case, I think I can contribute the following answer:
If you are going to monitor end users to see how they are using your product, you are in the human factors/user experience business, and what you want to do is really an experiment. Doing such an experiment requires the consent of the subject (the end user). In an academic setting (and I think the same goes for industry as well), there is an Institutional Review Board (IRB) which grants permission for such experiments. I believe there are similar organizations in industry (I'm just not sure what they are called). A request for permission for such an experiment is accompanied by a report which details the user experiment in a very specific manner. The IRB then decides whether to issue a permit or not.
The important point here is consent: users should know about the experiment and agree to be subjects. I think that in the absence of user consent the experiment is neither ethical nor legal. Again, I approached this based on an assumption and tried to summarize my experience with such experiments.
Collecting screen shots may be illegal even if employees are notified. This is an issue of local law and federal law. You haven't said which country you are in. In California, for example, monitoring screens might violate both workplace privacy laws and wiretap laws. You should get an opinion of your corporate attorney before implementing this.
