What's the "gadget vulnerability"?

In a recent security advisory, Microsoft warns that "Vulnerabilities in Gadgets Could Allow Remote Code Execution":
An attacker who successfully exploited a Gadget vulnerability could run arbitrary code in the context of the current user.
(Microsoft Security Advisory 2719662)
I don't really understand the point. As far as I know, gadgets are (by design) HTML-based applications running with full trust!
Full Trust
The choice to run a gadget is presented to the user in the same way that the choice to run any application downloaded from the Internet is presented. Information about the author of the gadget is displayed in a dialog box that indicates there is risk associated with this file. After the user accepts the warning, the gadget will run with all of the permissions associated with the user's login account.
(MSDN: Gadgets for Windows Sidebar Security)
For example, nothing prevents you from adding
<script language="VBScript">
    ' Create a Windows Script Host shell object -- possible because
    ' gadgets run with the full permissions of the logged-on user.
    Set shell = CreateObject("WScript.Shell")
    ' Launch an arbitrary program; any command line would do here.
    shell.Run "notepad.exe"
</script>
and executing arbitrary commands from your gadget. This works and it's by design.
Obviously, gadgets can do everything that any other application running in the local user's context can do. So where is the vulnerability that the MS Security Advisory says "could be exploited"?

Well the "gadget vulnerability" is the problem that:
the risks that gadgets are exposed to are the same as those faced by any web-based
application, e.g. Man-In-The-Middle or code injection. Similar issues existed in earlier versions of most web browsers but modern browsers have specifically implemented controls to attempt to mitigate many of these issues. These controls have not been implemented in the Gadgets platform, leaving them vulnerable to well-known and thoroughly discussed attacks.
(We Have You by the Gadgets, Black Hat)
So you can see that the core problem is that there were no controls limiting gadgets: they could run code without any restraint.
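To make that concrete, here is a minimal sketch (browser-style TypeScript, with a hypothetical feed URL) of the kind of flaw the platform left open: a gadget that pulls remote content over plain HTTP and injects it into its own DOM.

// Hypothetical gadget code: fetch a "news feed" and display it.
const xhr = new XMLHttpRequest();
xhr.open("GET", "http://example.com/feed.html"); // plain HTTP: interceptable
xhr.onload = () => {
  // No sanitization and no sandbox: markup smuggled into the response by a
  // man-in-the-middle (e.g. <img src=x onerror="..."> with a malicious
  // handler) executes with the gadget's full user privileges.
  document.getElementById("feed")!.innerHTML = xhr.responseText;
};
xhr.send();

In a browser, the same injection would be contained by the sandbox; in a gadget it meant full code execution as the logged-on user.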
Another problem:
Microsoft has said that it has discovered that some Vista and Win7 gadgets don’t adhere to secure coding practices and should be regarded as causing
risk to the systems on which they’re run.
So yes, running arbitrary code is part of how HTAs work, but the Sidebar and gadgets platform did nothing to mitigate it. The design was quite optimistic, assuming that all gadget programmers would write safe code and never try to exploit the platform or make gadgets do things they aren't supposed to do.
Hope that answers what you asked.
I still think the question is somewhat circular. You say: "they allow arbitrary code to run, it's part of the model and concept, and they didn't mitigate it, so what's the exploit? It's already exploited..." - and that is exactly the point. :)
The same can be asked about many flaws and attacks, and that is precisely the problem here: the platform was insecure by design. Since there were no mitigations, and since a gadget really can fetch and execute malicious code without any obstacle, the platform as a whole was declared flawed.

Agreed, the Gadgets platform appears to be no more or less vulnerable than if the user executed an unsigned application.
Why the same system-level execution prevention, heuristic analysis & other methods applied to applications could not be applied to Gadgets is mystifying to me.
This smacks of laziness on the part of Microsoft: The Gadgets platform was not highly regarded or widely used (despite the potential of delivering an unprecedented level of capability and integration of web-features directly into the desktop), so rather than make any attempt whatsoever to safeguard the user from malicious Gadgets, they simply discontinued them.
With the direction the user interfaces in Windows, Mac and Android are headed, the average user has less and less idea how an app (or plugin) actually does what it does, so the proliferation of needless, opportunistic or even malicious apps continues. I've been back and forth over the Gadgets specification, and as near as I can tell, it is no more insecure than the plugin systems used by Chrome and Firefox.
Execution of ActiveX and Java within a Gadget is subject to the Security settings in Internet Explorer. If your security settings allow a Gadget to do something, most of those functions are exploitable within a plugin or Java app as well.
The analyst reports I've read indicate that these vulnerabilities have been patched in "most modern browsers" but that clearly isn't true of Internet Explorer, as every Gadget exploit I've seen can also be run within the IE browser.
In short it is the "toggle-switch" style handling of ActiveX, Java and other plugins which is at fault here. By trying to spare the user endless prompting and eliminating the requirement of making an informed decision, Microsoft continues to leave uninformed or careless users wide open to malicious web apps and plugins.
Trust certificates & security patches would have been vastly preferable to discontinuing the feature.

As I see it, the security issue is a smoke screen. These "security issues" exist across many vectors, and if gadgets were such a problem they would have been addressed long before the release of Windows 8. My opinion is that gadgets were jettisoned because they are a power drain on a Windows 8 tablet. It reminds me of how the ribbon interface was supposedly introduced "to expose deeply buried functionality" when in reality I think Microsoft was planning for a touch interface. So whatever "excuse" Microsoft gives for doing something, I tend to look for a deeper purpose. Hopefully this will change with the new management. Does anyone know if it is possible to install some sort of gadget platform on Windows 8.1? Thanks!

These attacks happen in this way:
An attacker would have to convince a user to install and enable a vulnerable Gadget.
An attacker who successfully exploited a Gadget vulnerability could gain the same user rights as a logged-on user. If the user is logged on with administrative user rights, an attacker who successfully exploited this vulnerability could take complete control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.
As you can see, it's simple: an attacker just needs you to install a vulnerable gadget. Now, who authorizes your gadgets? Out on the world wide web there are many, many fake gadgets, so be careful.
Microsoft also published an automated Fix it solution to disable the Sidebar and gadgets, which you can find at this link:
microsoft advisory
Gadgets and the Sidebar were removed entirely in Windows 8.

Thanks for digging up the exact details. Here is the paper, presented at Black Hat, that led Microsoft to disable gadgets:
We have you by the gadgets - Black Hat (pdf file)


Uploading an entire CD-ROM through the browser

I am a doctor seeking a solution for my patients. I often receive medical CDs from my patients which contain their radiological data. What I need is a web solution that I can integrate into my web site. The caveat is that I don't want this to happen via "Choose File": most of my patients are old people who don't know much about the internet or computers. So I want a single button on my web site which will copy the entire CD in the CD drive and send it to me without any user intervention. Is it possible?
Update:
OK, thank you all. I did not intend to violate copyright. Actually, I thought a user who hits that "button" would also be giving permission to access their files. I completely understand your concerns and I completely agree; however, as an end-user, this is the problem that needs solving in my case. Since COVID, none of my patients can come to clinical visits, and I need to see their follow-up. In neurosurgery, this is very important. I do not know if it is OK to send links here (and sorry if it is not), but, for example, this web site does something similar to my idea; it is not free, though, and it is too complicated for my low-socioeconomic patient profile.
My target population mostly deals with brain tumors, and their level of concern for copyright issues is very low for that reason. I don't mean taking everything from them against their will, but this is the situation. So again, thank you all for enlightening me, and I am sorry if I broke the rules of this website.
Introduction
I'm going to go through the reasons why the specification as stated cannot be implemented, and also why the older technologies that might have allowed it can no longer be used.
Do note that even the older technologies would have required some sort of installation or agreement from the user - at minimum one click.
Also note: it is possible to get files from a user's system, but you still have to get their agreement through an action or prompt on their part!
As for what you could do: Tukan already covers some nice alternatives, but if I think of something I will add it!
Basic Explanation
The most basic explanation is that this would be a giant, unprecedented security hole. It would mean that browsers allowed a site to access files from a user's hardware (the DVD drive) without the user's permission or any active action on their part.
In your case you have a valid, non-malicious use for it. Imagine, however, all the malicious websites that would use this mechanism to steal data off whatever DVD/CD happens to be in the user's tray. Imagine the privacy issues, the security breaches, and even comparatively minor matters like copyright violations.
Finally, and even worse, if the spec as requested allowed access to the whole file system (including drives like C:), a malicious site could steal everything on a user's system.
The positive (and, for you, negative) side is that browsers have been incrementally locked down over the years, and technologies/plugins/extensions/features have been progressively restricted or deprecated/removed. Such technologies include ActiveX, Java applets, and Flash.
Finally, browsers like Chrome and Internet Explorer nowadays run in sandboxes themselves. See for example this article (from 2013!): Sandboxes Explained: How They're Already Protecting You and How to Sandbox Any Program
They're restricted to running in your browser and accessing a limited set of resources - they can't view your webcam without permission or read your computer's local files. If websites you visit weren't sandboxed and isolated from the rest of your system, visiting a malicious website would be as bad as installing a virus.
Other programs on your computer are also sandboxed. For example, Google Chrome and Internet Explorer both run in a sandbox themselves. These browsers are programs running on your computer, but they don't have access to your entire computer. They run in a low-permission mode. Even if the web page found a security vulnerability and managed to take control of the browser, it would then have to escape the browser's sandbox to do real damage.
ActiveX (Deprecated) (Internet Explorer)
Let's start by saying that ActiveX would require the user to change their Internet Explorer security settings, so we can strike it off immediately.
If a user did change their settings (see: Enable ActiveX controls in Internet Explorer and Enable for IE 11), a developer could use ActiveX to access files on a user's system.
Also note that ActiveX is deprecated, and rumour has it that it may not be around much longer.
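For illustration, here is a sketch (TypeScript-flavoured, with ActiveXObject declared by hand since it only ever existed inside Internet Explorer, and a hypothetical file path) of how a page with ActiveX enabled could read a local file:

// ActiveXObject is IE-only, so we declare it for the compiler.
declare const ActiveXObject: new (progId: string) => any;

// With ActiveX allowed in the IE security settings, a page could read
// arbitrary local files through the Scripting.FileSystemObject COM object.
const fso = new ActiveXObject("Scripting.FileSystemObject");
const file = fso.OpenTextFile("C:\\patients\\scan-report.txt", 1); // 1 = ForReading
const contents: string = file.ReadAll();
file.Close();

This is exactly why the defaults changed: the same capability that would let your button copy a CD also lets any web page rifle through the disk.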
Java Signed Applets
Java Signed Applets could access the local file system.
However, applets are no longer supported in Firefox and Chrome. They do run in Internet Explorer, though IE is deprecated as well (since people are moving to Edge).
There's a very well written answer on the topic here: How do I run Java applets? [duplicate] and Why is the Java plugin (JRE) disabled in Chrome?
Adobe Flash (Previously Macromedia)
First off, Flash has been removed from most browsers and is officially considered dead. Additionally, after Flash Player 10 it was still possible to load a file, but the user had to select it themselves through a dialog (see: Can Flash action script read and write local file system?).
FileSystem and FileWriter APIs
You can read and write files using these APIs. However, they again require the user to interact with the web page and to select the files themselves.
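As a sketch of what that interaction looks like (assuming a Chromium-based browser that implements the File System Access API; the cast is needed because showOpenFilePicker is not yet in the default TypeScript DOM typings, and the button id is hypothetical):

// The picker call below throws unless it is triggered by a user gesture,
// e.g. a click on this button.
const button = document.querySelector("#pick-file") as HTMLButtonElement;

button.addEventListener("click", async () => {
  // The browser shows a native picker; the page only ever sees the files
  // the user explicitly selects - there is no way to enumerate a drive.
  const [handle] = await (window as any).showOpenFilePicker();
  const file = await handle.getFile();
  console.log(file.name, (await file.text()).length);
});

There is no variant of this API that silently reads a whole drive; the picker is the permission.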
References
Is it possible to access local file via javascript?
Sandboxes Explained: How They're Already Protecting You and How to Sandbox Any Program
Enable ActiveX controls in Internet Explorer / Enable for IE 11
Java Signed Applets could access the local file system
How do I run Java applets? [duplicate]
Why is the Java plugin (JRE) disabled in Chrome?
Can Flash action script read and write local file system?
As Andrew mentioned, this SO is used for Q&A from/to developers. I'll try to give you a general idea of what could be done.
Who should do it?
I think you need a freelancer who would write the code for you.
The mechanism you are describing is not possible, for security reasons. A web page must not have access to the hardware in the way you would like without user interaction.
What is then feasible?
I think what is feasible is a thick application (meaning an .exe file) which your patients would execute, and which would search for the CD/DVD drive, pack up its contents, and send them over a secure channel to your server. The patients would need to download and run it.
If you have elderly patients you need to give clear visual confirmation that the data has been sent.
Something like: Thank you for sending the data to Dr. Jones. All data has been received.
The secure channel can be, for example: FTPS, SFTP, HTTPS, etc.
On your side you would have a daemon serving as the endpoint for your patients' data. After receiving the data, it should be moved immediately out of the upload folder.
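To make the idea concrete, here is a minimal sketch in Node.js/TypeScript, under stated assumptions: the CD is mounted as D:\ and the upload endpoint is hypothetical. A real version would zip the data first, retry on failure, and authenticate the patient.

import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

const CD_DRIVE = "D:\\"; // assumed drive letter
const ENDPOINT = "https://uploads.example-clinic.com/upload"; // hypothetical

// Recursively collect every file path on the CD.
function collectFiles(dir: string): string[] {
  const result: string[] = [];
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) result.push(...collectFiles(full));
    else result.push(full);
  }
  return result;
}

async function main(): Promise<void> {
  for (const path of collectFiles(CD_DRIVE)) {
    // POST each file over HTTPS (Node 18+ ships a built-in fetch).
    await fetch(`${ENDPOINT}?name=${encodeURIComponent(path)}`, {
      method: "POST",
      body: new Uint8Array(readFileSync(path)),
    });
  }
  // The clear confirmation message suggested above.
  console.log("Thank you for sending the data to Dr. Jones. All data has been received.");
}

main().catch((err) => console.error("Upload failed:", err));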
Edit
One more option that came to mind would be to distribute a tailored USB key to your patients with such an application on it. Note, though, that modern Windows versions disable AutoRun for USB drives, so the patient would still have to double-click the program rather than having it run upon insertion.

What user-information is available to code running in browsers?

I recently had an argument with someone regarding the ability of a website to take screenshots on the user's machine. He argued that using a GUI program to simulate clicking a mouse really fast to win a simple Flash game could theoretically be detected (if the site cared enough) by logging abnormally high scores and taking a screenshot of those players' desktops for moderator review. I argued that since all website code runs within the browser, it cannot step outside the system to take such a screenshot.
This segued into a more general discussion of the capabilities of websites, through Javascript, Flash, or whatever other method (acceptable or nefarious), to make that step outside of the system. We agreed that at minimum some things were grabbable: the OS, the size of the user's full desktop. But we definitely couldn't agree on how sandboxed in-browser code was. All in all he gave website code way more credit than I did.
So, who's right? Can websites take desktop screenshots? Can they enumerate all your open windows? What else can (or can't) they do? Clearly any such code would have to be OS-specific, but imagine an ambitious site willing to write the code to target multiple OSes and systems.
Googling this led me to many red herrings with relatively little good information, so I decided to ask here.
Generally speaking, the security model of browsers is supposed to keep javascript code completely contained within its sandbox. Anything about the local machine that isn't reflected in the properties of the window object and its children is inaccessible.
Plugins, on the other hand, have free rein. They're installed by the user and can access anything the user can access. That's why they're able to access your webcam, upload files, do virus scans, etc. They're also able to expose APIs to javascript code, which pokes a hole in the javascript sandbox and gives javascript code some external access. That's how tools like Phonegap give javascript code in web apps access to phone hardware (GPS, orientation, camera, etc.)
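To ground the original argument, here is a small sketch of the split between what sandboxed page script can and cannot reach (standard browser APIs only; no plugin involved):

// What page JavaScript *can* read: properties hanging off window,
// navigator, and screen - rough OS identification and desktop size.
console.log(navigator.userAgent);         // browser + OS hints
console.log(screen.width, screen.height); // full desktop resolution
console.log(navigator.language);

// What it *cannot* do silently: take a desktop screenshot, enumerate open
// windows, or read arbitrary files. Even the modern screen-capture API
// shows a permission prompt and captures only what the user shares:
async function tryCapture(): Promise<void> {
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  stream.getTracks().forEach((t) => t.stop()); // immediately release it
}

So the flash-game site could log the abnormal scores, but it could not step outside the sandbox to screenshot the desktop.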

Security/Authentication for Plugin Architecture

I was thinking of the multiple ways which security could be implemented in a Plugin-based system. Now when I say 'Security', what I mean is this:
a) How developers of a Plugin system can ensure that plugins are secure and safe to use on the Core platform.
b) How developers of a plugin can ensure that the plugins being used on their Platform are 'trustable' i.e. some sort of way by which we know 'WHO' developed this plugin ( similar to what Facebook do with their API keys )
c) How can developers control what changes a plugin makes to the UI (if this is permitted at all)? For example, a plug-in that is permitted to manipulate the UI and redirect the plugin user to certain webpages takes the user to a phishing site.
I have my initial thoughts on the issue:
On a), I am contemplating whether the use of a sandbox would be sufficient. Would this prevent the plugin from, say, making direct DB calls to do some naughty things? Would one be able to restrict the plugin from accessing the local system without effectively hampering the functionality of the system? What are your ideas on this?
On b), I believe Facebook-like authentication is the way to go. But would this not be overkill for a Small Application ( 'Small' in the sense that it is smaller than Facebook or Jira)? Are there any other possible options?
On c) I will be honest and say I have no idea how this can be implemented. Any opinions out there?
So, the question is... how does one implement Security on a Plugin Architecture?
a) and c) are, if I understand you correctly, the same question.
You want to limit what is possible in your plug-in system, the easy answer is to go and limit the environment. Build an environment where security, the GUI and whatever you think is sacred must be protected by design, call it a sandbox, call it a very strict API, call it forcing the plug-in developers to use something which isn't a real programming language.
If it is impossible to make something look like a log-in screen, or to redirect people to other places, that's something malicious developers will have to go without.
This however makes for a rigid plug-in system where the developers have little freedom to implement new features which may not be acceptable; and people have made wrong assumptions about what is a safe operation in the past.
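As a rough sketch of that "very strict API" approach (all names here - HostApi, registerPlugin, and so on - are hypothetical): plugins never touch the host directly; they only receive a narrow, whitelisted surface.

// Only the operations the host considers safe are exposed...
interface HostApi {
  addMenuItem(label: string, onClick: () => void): void;
  showNotification(text: string): void;
  // ...and nothing else: no DB handle, no file system, no raw UI access.
}

type Plugin = (api: HostApi) => void;

const plugins: Plugin[] = [];

function registerPlugin(plugin: Plugin): void {
  plugins.push(plugin);
}

function startPlugins(realMenu: Map<string, () => void>): void {
  const api: HostApi = {
    addMenuItem: (label, onClick) => realMenu.set(label, onClick),
    showNotification: (text) => console.log(`[notice] ${text}`),
  };
  // Each plugin gets the same frozen facade; it cannot reach past it.
  plugins.forEach((p) => p(Object.freeze(api)));
}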
b) Knowing who developed something requires you to ask them for, and confirm, personally identifying information.
At that point you can simply use a user name and password over SSL, or a signing system in which you become a certificate authority, if your system is to be used by others and you don't want the extra load of people downloading plug-ins from you. Developers can always misplace their keys, but there is little you can do about that.
This won't work for a small system, though, even if you were signing for free.
The next best option is a reputation handle: once a developer has submitted a few plug-ins that checked out as legitimate, their later submissions get less checking, or none at all.
If developers can't be bothered to register an account either, you could always check their IP (with a bit of SSL traffic to avoid spoofing) and use that as their internal user name. People with dynamic IPs, or behind proxies, or with a lot of plug-ins to send would eventually register.
Of course, this requires people who can check the plug-ins.
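As a concrete illustration of the signing option mentioned above, here is a minimal sketch using Node's built-in crypto module; the file names, and the idea that the host ships the trusted public key, are assumptions:

import { createVerify } from "node:crypto";
import { readFileSync } from "node:fs";

// Assumed layout: the plugin bundle plus a detached signature produced
// with the developer's private key at publish time.
const pluginBytes = readFileSync("my-plugin.zip");
const signature = readFileSync("my-plugin.zip.sig");
const publicKeyPem = readFileSync("trusted-developer.pem", "utf8");

const verifier = createVerify("RSA-SHA256");
verifier.update(pluginBytes);

if (verifier.verify(publicKeyPem, signature)) {
  console.log("Signature valid -- plugin may be loaded.");
} else {
  console.error("Signature check failed -- refusing to load plugin.");
}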
a) How developers of a Plugin system can ensure that plugins are secure and safe to use on the Core platform.
How do developers know anything? They don't. They must trust the framework. For open source, that means they download it and check it themselves. For proprietary software, who knows how developers come to trust the framework?
b) How developers of a plugin can ensure that the plugins being used on their Platform are 'trustable' i.e. some sort of way by which we know 'WHO' developed this plugin ( similar to what Facebook do with their API keys )
If you build a plugin framework, you don't know anything about the plugins. A plug-in framework can have "good" plug-ins and "bad" plug-ins. But who decides good or bad? The users do. If a plug-in is "good", it's useful and works. If a plug-in is "bad" it's useless or doesn't work. Most viruses are just useless software.
Any software can fit into the plug-in framework and still be useless. It's a value judgement, not a technical question.
c) How can developers control what changes a plugin makes to the UI (if this is permitted at all)? For example, a plug in that is permitted to mainpulate the UI and redirect the plugin user to certain webpages takes the user to a Phishing site.
Yep. Happens all the time.
What is "Phishing"? Sometimes I don't want to give out my email even to a "real" company. Are they "phishing" when they ask? Not really. What about a news source behind a registration page? I must register to get news. Is that Phishing? What about a site that promises financial information? If I register, is that phishing from the financial source or is that legitimate user registration? What if the financial information is about Nigeria? What if it's about a dead relative of mine in Nigeria?
There's no technical means for determining "good" vs. "bad" here. It's all a value judgement on the part of the user.
The "plug-in" framework can't decide anything. Only users can decide.

Software and Security - do you follow specific guidelines?

As part of a PCI-DSS audit, we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security. You can expect an effectively unlimited supply of attackers, and you have to stop them all - even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well-known secure software development guidelines. Howard & LeBlanc, "Writing Secure Code", is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (e.g. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design and document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions, from a very broad perspective:
The only way to prevent violations by inside users is to educate them, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important, as this is where the most severe security violations always occur, whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago security experts would not have considered looking anywhere else for security management. Some basic strategies: use NAT for all internal IP addresses, enable port security on your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPsec to reach all automation services behind the firewall, and limit points of access to known points behind an IDS or IPS. Basically: limit access to everything, encrypt that access, and inherently treat every access request as potentially malicious.
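As one concrete instance of "protect your database from code injection", here is a minimal sketch of a parameterized query using the node-postgres (pg) client; the users table and connection details are hypothetical:

import { Client } from "pg";

async function findUser(email: string) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // The $1 placeholder keeps user input as *data*; it is never spliced
  // into the SQL text, so `email` cannot break out of the query.
  const result = await client.query(
    "SELECT id, name FROM users WHERE email = $1",
    [email],
  );

  await client.end();
  return result.rows;
}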
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX, and require administrator privileges to activate it. Patch all software that executes any sort of client-side scripting in a test lab no later than 48 hours after the patches are released by the vendor. If the tests show no interference with the company's authorized software configuration, deploy the patches immediately. The only solution for memory-corruption vulnerabilities is to patch your software. This software may include: Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, make sure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but more importantly it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. This problem became easy to realize with the advance of AJAX, since information can be dynamically injected to an anonymous third party in violation of the same-origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails, hire some CISSP-certified security managers who have the management experience - and the balls - to speak directly to your company executives. If your leadership is not willing to take security seriously, your company will never meet PCI compliance.

How Big a Security Risk are Browser Extensions?

One of the more powerful features of modern-day browsers is the ability for software developers to write browser extensions to enhance, modify, and tweak the pages visited by the user. As more of our lives migrate onto the browser, aren't we potentially exposing ourselves to massive privacy and security holes created by the installation of a browser extension that is malicious in nature?
I realize the source code of these extensions is extractable and readable if the author has not made attempts to obfuscate its behavior. But the effectiveness of this type of review is undermined by the browser encouraging users to keep their extensions up to date. While version 1.0 of an extension may be innocuous, a user's browser may suggest an upgrade to version 1.1, which could contain malicious code used to scrape information from the screen of the compromised browser.
As both a user and developer of browser extensions, is the developer's reputation the only thing in place to provide assurances to their users that their browsing activity will be secure? Are there any mechanisms in place to help protect users from a compromised browser extension?
Are there any best-practices to develop extensions in a manner that provides users with the assurance that the code they install and update is benign in nature?
Browser extensions can do almost anything the user can do. They can send out your bank passwords, read files on the local disk, execute commands, etc. The security of a browser depends not only on the browser itself, but also on all the installed extensions.
I've written a few extensions for Chrome recently, and I had no idea how much harm extensions could really do before that.
Extensions ask for permissions, but these are very broad. Any non-trivial extension would most likely end up asking for "Full Permission", and most users would just bang the "YES" button. Even a tech-savvy user may shrug this off as legitimate; I know I have.
Most extensions are free. It costs time and money to code them up, so how are developers recouping their investment? Some do it for fun, but the Chrome Web Store specifically asks whether you are planning to inject ads - I can only deduce that this is a common practice among extension developers. Extensions could also act as tracking cookies and sell usage stats to whomever.
It's near trivial to write an extension that would scoop up your passwords and send them on to a third party, even if those passwords are "saved". One of my extensions had a legitimate use case for modifying all input fields on all pages, and I found out that Chrome would happily paste stored passwords in as plain text. The same goes for credit card information.
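To illustrate just how trivial, here is a sketch of a content script that reads autofilled fields (the selector choices are illustrative, and this version only logs locally; a malicious extension would POST the values to its own server):

function harvestCredentials(): void {
  const fields = document.querySelectorAll<HTMLInputElement>(
    'input[type="password"], input[autocomplete="cc-number"]',
  );
  for (const field of fields) {
    // The browser autofills saved passwords in plain text, so a malicious
    // extension could send field.value anywhere it likes.
    console.log(field.name, field.value);
  }
}

// Run after autofill has had a chance to populate the form.
window.addEventListener("load", () => setTimeout(harvestCredentials, 1000));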
Many extensions include analytics packages to help developers identify who their users are, which parts of the app are used, and so forth. I think that this is a legitimate use case, but you may not necessarily agree.
If you are a developer, be advised that Chrome extensions could significantly impact page load times. My own extension, which I tirelessly optimized to be as lightweight as possible, caused all pages to have an additional 50-200ms load time.
So after I've seen what's possible, I've disabled all extensions in Chrome except for my own. I really only miss AdBlock.
Internet Explorer Browser Helper Objects are extremely unsafe. They basically allow the browser to run native code, which could be anything. I'm not sure if they're still as pervasive now as they were in years past, but they're one of the reasons why Internet Explorer is so much less secure than Firefox and other browsers.
Mozilla style plug-ins using XUL and Microsoft's Silverlight plug-ins are sandboxed to try and prevent malicious behavior. Ultimately it rests on the developer's reputation for any kind of software to be deemed trustworthy by its users, however. Even in cases where the developer is not trying to write malware, bugs in the program may expose security exploits.
Which is why you have multiple machines - and if you can't afford another one, use a virtual machine to run most of this stuff and monitor its behavior. It's what I do, at least, before I try anything.
