How safe are extensions in Visual Studio Code?

Can extensions introduce malware?
Is it safe to install any extension?

They can contain malware, yes. When you install and run an extension, you are trusting it to do pretty much anything it wants with the permissions of your user account.
VS Code does not sandbox extensions the way browsers sandbox web pages, so extension code runs largely unrestricted.
Having said that, a malicious extension would likely be uncovered fairly quickly. Because extension packages are signed, a third-party attacker has no easy way to modify an existing one or slip in a fake one; they would have to compromise the real developer first. Many extensions are also open source (which, by the way, is no guarantee that the released version was built from the public source, but again, that is easy to check, since extensions are just zip files).
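For instance, a .vsix package can be opened like any other zip archive to see exactly what it ships. A minimal Python sketch, assuming you have downloaded a .vsix file (the file name here is hypothetical):

import json
import zipfile

# A .vsix extension package is just a zip archive; the path here is a hypothetical example.
VSIX_PATH = "some-extension-1.2.3.vsix"

with zipfile.ZipFile(VSIX_PATH) as vsix:
    # List every file the extension actually ships.
    for name in vsix.namelist():
        print(name)

    # The extension manifest lives at extension/package.json and declares
    # the entry point and activation events, which are worth a look.
    with vsix.open("extension/package.json") as manifest_file:
        manifest = json.load(manifest_file)

print("entry point:", manifest.get("main"))
print("activation events:", manifest.get("activationEvents"))

From there, comparing the unpacked code against the project's public repository is a manual (or scripted) diff.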
So in short: extensions can in theory be malicious, but for well-known extensions the likelihood of you getting a malicious version before others discover it and it gets pulled is probably very low. On the other hand, widely used extensions can be an attractive target for sophisticated attackers, because the extension developer's security controls may be far more lenient than those of the companies where the extension is used.
TL;DR: only you can tell whether you want to accept the risk. It is not very high, but it is also not negligible, especially with smaller, niche extensions that get less thorough review from the community.

Related

Securing symmetric key

In my project (a Windows desktop application) I use a symmetric key to encrypt/decrypt some configuration data that needs to be protected. The key is hardcoded in my C++ code.
What are the risks that my key will be exposed by reverse engineering? (The customers will receive only the compiled DLL.)
Is there a way to manage the key more securely?
Are there open source or commercial products which I can use?
Windows provides a key storage mechanism as part of the Crypto API. This would only be useful for you if you have your code generate a unique random key for each user. If you are using a single key for all installations, it will obviously have to be in your code (or be derived from constants that are in your code), and thus can't really be kept secret.
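To make the per-user option concrete, here is a minimal sketch using the Windows Data Protection API (DPAPI), which backs that part of the Crypto API. It is written in Python with the pywin32 bindings purely for illustration (the equivalent C++ code would call CryptProtectData from dpapi.h directly); the file name is hypothetical, and the wrapped blob can only be unwrapped by the same Windows user.

import os

import win32crypt  # pywin32 bindings around the Windows DPAPI functions (assumed installed)

KEY_FILE = "config.key"  # hypothetical location for the DPAPI-wrapped key


def create_user_key() -> bytes:
    """Generate a random per-user key and store it wrapped by DPAPI."""
    key = os.urandom(32)
    # CryptProtectData ties the resulting blob to the current Windows user account.
    blob = win32crypt.CryptProtectData(key, "config key", None, None, None, 0)
    with open(KEY_FILE, "wb") as f:
        f.write(blob)
    return key


def load_user_key() -> bytes:
    """Unwrap the stored key; this only succeeds for the user who protected it."""
    with open(KEY_FILE, "rb") as f:
        blob = f.read()
    _description, key = win32crypt.CryptUnprotectData(blob, None, None, None, 0)
    return key

Note that this only helps when each installation generates its own key; it does nothing for a single key shared across all customers.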
What are the risks that my key will be exposed by reverse engineering? (The customers will receive only the compiled DLL.)
100%, assuming of course that the key protects something useful and interesting. If it doesn't, the risk is lower.
Is there a way to manage the key more securely?
There's no security tool that can solve this, but there are obfuscation and DRM tools (which address a different problem than security). Any approach you use will need to be updated regularly to deal with new attacks that defeat the old approach. Fundamentally, this is the same problem as DRM for music, video, games, or whatever. I would shop around. Anything worthwhile will be regularly updated, and likely somewhat pricey.
Are there open source or commercial products which I can use?
Open source solutions for this particular problem are... probably unhelpful. The whole point of DRM is obfuscation (making things confusing and hidden rather than secure), and if you share the "secret sauce", you lose the protection. This is how DRM differs from security: in security, I can tell you everything but the secret and it's still secure; with DRM, I have to hide everything. That said, I'm sure there are some open source tools that try. There are open source obfuscators that make a binary harder to debug by scrambling identifiers and the like, but if there's just one small piece of information to protect (the configuration), it's hard to obfuscate that sufficiently.
If you need this, you'll likely want a commercial solution, which will be imperfect and will likely need patching as it gets broken (again, assuming it protects something anyone really cares about). Recommending specific products is off-topic for Stack Overflow, but Google can help you. There are some Windows-specific options that may help, but it depends on your exact requirements.
Keep in mind that the "attacker" (it's hard to call an authorized user an "attacker") doesn't actually have to get your keys. They just have to wait until your program decrypts the configuration and then read it out of memory, so you'll need obfuscation around that as well. It's a never-ending battle, and you'll have to decide how hard you want to fight it.

Is it possible that the popular applications on my laptop are surveilling the files on my hard drive?

What if I develop a desktop application that a million people will use, and behind the scenes the application is surveilling users' files on their hard drives, streaming the data from time to time?
Can one be assured that no such thing happens with any popular software application, be it MS Office or Google Chrome?
Or is this just a stupid question?
Is it technically possible? Yes, it is.
Could it be happening in an application used by a million users for a relatively long time without being noticed? Very unlikely. Somebody would notice the strange network traffic eventually.
@Mjh also mentioned open source in a comment. While open source can help by allowing people to audit the source code, how many times have you checked that the binary you are using was actually compiled from the source you were looking at? Of course, there are signatures on binary packages and all, but the signature is made by the package maintainer. There is an inherent trust not only in the developer of the application, but also in the tool chain that turns the source code into a binary package. And we haven't even talked about strange "bugs", or the fact that even in open source, some security issues are very hard to find (otherwise all open source software would be free of security bugs, which it is not).
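When a project does offer reproducible builds, the check alluded to above reduces to comparing digests of the published binary against one you built yourself from the public source. A minimal sketch (both paths are hypothetical):

import hashlib


def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large binaries never sit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Hypothetical paths: the vendor's published binary vs. your own build of the public source.
published = sha256_of("downloads/app-1.4.2.bin")
rebuilt = sha256_of("build/app-1.4.2.bin")
print("match" if published == rebuilt else "MISMATCH", published, rebuilt)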
So back to your question: sure, you could use all kinds of techniques to monitor the behavior of an application; you could monitor memory access, network traffic, whatever else. You can also analyse the code itself and look for suspicious things. It will take a huge amount of effort, and there will still be no 100% guarantee, only some level of assurance.
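As one concrete example of that kind of monitoring, the sketch below lists the remote network connections of a running process using the third-party psutil library; the process name is hypothetical, and real analysis would of course go much further (packet captures, syscall tracing, and so on).

import psutil  # third-party library for inspecting processes and system state

TARGET_NAME = "suspect_app.exe"  # hypothetical name of the application under scrutiny

# Walk the process table and print every remote connection held by the target.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] != TARGET_NAME:
        continue
    for conn in proc.connections(kind="inet"):
        if conn.raddr:  # only connections that actually have a remote endpoint
            print(f"pid={proc.pid} -> {conn.raddr.ip}:{conn.raddr.port} ({conn.status})")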
By the way, automated version upgrades can make detection even harder. Even if you put a lot of resources into analysing one version, what if only a short-lived version contained malicious code? Sure, that too can be analysed, but would anyone bother, unless there was a good reason (like indications of something malicious)?
Still, I think you can be pretty sure that major vendors don't do this. It's just not worth it for them: the risk would be huge, for a relatively small benefit.

Centralized vs. Distributed version control security

As my company begins to further explore moving from centralized version control tools (CVS, SVN, Perforce, and a host of others) to offering teams distributed version control tools (Mercurial in our case), I've run into a problem:
The Problem
A manager has raised the concern that distributed version control may not be as secure as our CVCS options because the repo history is stored locally on the developer's machine.
It's been difficult to nail down his exact security concern, but I've gathered that it centers on the fact that a malicious employee could steal not only the latest intellectual property but our whole history of changes, just by copying a single folder.
The Question(s)
Do distributed version control systems really introduce new security concerns for projects?
Is it easier to maliciously steal code?
Does the complete history represent an additional threat that the latest version of the code does not?
My Thoughts
My take is that the assumption that the centralized model is more secure may be mistaken; the history only seems safer because it sits off on its own box. Given that users with even read access to a centralized repo can selectively extract snapshots of the project at any key revision, I'm not sure the DVCS model makes it all that much easier. Also, most CVCS tools allow you to extract the whole repo's history with a single command so that you can import it into other tools.
I think the other issue is how important the history is compared to the latest version. Granted, someone could have checked in a top-secret file and then deleted it, in which case the history quickly becomes significant. But even in that scenario, a CVCS user could check out that top-secret revision with a single command.
I'm sure I could be missing something or downplaying risks as I'm eager to see DVCS become a fully supported tool option. Please contribute any ideas you have on security concerns.
If you have read access to a CVCS, you already have enough access to convert the repo to a DVCS, which people do all the time. No software tool is going to protect you from a disgruntled employee stealing your code, but a DVCS has many more options for dealing with untrusted contributors, such as a gatekeeper workflow. Hence its widespread use in open source projects.
You are right that distributed version control does not really introduce new security concerns, since the developer already has access to the code in both cases. The only difference I can think of is that, since it is easier to work offline and offsite with Git, developers might be more tempted to do so than with a centralized system. I would push to enforce encryption on all corporate laptops that hold code.
Is it easier to maliciously steal code? Not really, it's just the same. If you enable logging, you will have the same information about when the code is accessed.
Does the complete history represent an additional threat? I personally do not think so. It might reveal the thought process leading to certain decisions, but not much more.
It comes down to knowing how to implement security measures in both cases. If you have more experience with one system than the other, you are more likely to put protections in place against such loss, but at the end of the day you are trusting your developers with the code the minute you give them access to it. There is no way around that.
DVCS provides various protections against unauthorized writing, which is why it is popular with open source teams. It has several frustrating limitations for controlling reading, which open source teams do not care about.
The first problem is that most DVCSs encourage many copies of the full source; the typical granularity is the entire repo. That can include many unneeded branches and even entire other projects, on top of the history itself (along with searchable commit comments that can make the code even more useful to an attacker). A CVCS encourages developers to copy as little as possible to their desktop, since the less they copy, the faster it works. The less you put on mobile devices, the easier they are to secure.
When a DVCS is implemented with many devices acting as servers, it is much more difficult to implement effective network security. Attacking a local CVCS workspace requires the attacker to gain access to the filesystem. Attacking a DVCS node generally means attacking the DVCS itself on any device hosting the information (and remember: the folks who maintain most DVCSs are open source people; they don't care nearly as much about read controls). The more devices that host repositories, the more likely it is that users will set up anonymous read access (which, again, DVCS encourages because of its open source roots). This greatly simplifies the job of an attacker doing random sweeps.
CVCSs that are based on URLs (like Subversion) allow quite fine-grained access control, such as per-branch access. DVCS tends to fight this kind of access control.
I know developers like DVCS, but there's no way it can be secured as effectively as a CVCS. Most environments do a terrible job of securing their CVCS, and in that case it doesn't matter which you use. But if you take access control seriously, you can have much greater control with a CVCS as part of a broader least-privilege infrastructure.
Many may argue that there's no reason to protect source code. That's fine, and people can argue about it. But if you are going to protect your source code, the best implementation is not to copy the source to random laptops (which are very hard to secure well), but rather to have developers mount it from a central server. A CVCS works well this way. A DVCS makes little sense if you are going to keep it on a single server like that. If you are going to copy files to mobile devices, copy as little as possible. That is the opposite of what a DVCS encourages.
There are a bunch of "security" issues; whether they are an issue depends on your setup:
There's more data floating around, which means the notional "attack surface" might be bigger (it depends on how you count).
But how much data does the "typical" developer check out? You might want to use a sparse checkout in SVN, but lazy users won't bother and some GUI tools don't support it, so they'll have all your code checked out anyway. Git users might be more likely to use multiple repos. This depends on you.
Authentication/access control might be better (and it might be worse!). This is largely a function of the VCS, not whether it is "D" or "C". svn:// is plaintext.
Is deleting files a priority, and how easy is it to do? An accidentally committed confidential file is more painful to remove in Git if it happened in the distant past (but people might be more likely to notice it).
Are you really going to notice a malicious user pulling the entire history instead of merely doing a checkout? It depends on how big your repository is and what your branches are like. It's easy for a full SVN checkout to take up more space than the repository itself due to branches.
Change history is generally not something you want to give away for free (even to people with a source code license), but how valuable is it? Maybe you have top-secret design methodologies or confidential information in your commit messages, but this seems unlikely.
And finally, security economics:
How much is the extra security worth?
How much is increased productivity worth?
How much is caring about your developers' concerns worth?
(IIRC, it turns out that users are often rational to ignore security advice, because the expected cost is more than the expected benefit; this is especially true for things like certificates that expired yesterday. How much does it cost you to check the address bar every time you type in a password? How often do you catch a phishing attempt? What is the cost to you per thwarted phishing attempt? What is the cost per successful phish?)

Security/Authentication for Plugin Architecture

I was thinking about the multiple ways in which security could be implemented in a plugin-based system. When I say "security", I mean the following:
a) How can the developers of a plugin system ensure that plugins are secure and safe to use on the core platform?
b) How can the developers of a plugin system ensure that the plugins used on their platform are "trustable", i.e. have some way of knowing who developed a given plugin (similar to what Facebook does with its API keys)?
c) How can developers control what changes a plugin makes to the UI (if this is permitted at all)? For example, a plugin that is permitted to manipulate the UI and redirect the user to certain web pages could take the user to a phishing site.
I have my initial thoughts on the issue:
On a), I am contemplating whether the use of a sandbox would be sufficient. Would it prevent the plugin from, say, making direct DB calls to do some naughty things? Would one be able to restrict the plugin from accessing the local system without effectively hampering the functionality of the system? What are your ideas on this?
On b), I believe Facebook-like authentication is the way to go. But would this not be overkill for a small application ("small" in the sense that it is smaller than Facebook or Jira)? Are there any other possible options?
On c), I will be honest and say I have no idea how this can be implemented. Any opinions out there?
So, the question is... how does one implement security in a plugin architecture?
a and c are, if I understand you correctly, the same question.
You want to limit what is possible in your plug-in system, and the easy answer is to limit the environment. Build an environment where security, the GUI, and whatever else you consider sacred are protected by design; call it a sandbox, call it a very strict API, call it forcing plug-in developers to use something which isn't a full programming language.
If it is impossible to make something look like a log-in screen, or to redirect people to other places, that is something malicious developers will have to do without.
This, however, makes for a rigid plug-in system where developers have little freedom to implement new features, which may not be acceptable; and people have made wrong assumptions in the past about what counts as a safe operation.
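To picture the "very strict API" option, here is a minimal Python sketch of a plug-in interface that only hands plug-ins a narrow facade object. All names are invented for illustration, and in Python this is a design convention rather than an enforced boundary; real enforcement needs process isolation or a restricted runtime.

class HostFacade:
    """The only object ever handed to plug-in code; its methods are the whole API."""

    def __init__(self, settings: dict):
        self._settings = settings

    def get_setting(self, key: str) -> str:
        # Read-only access to a whitelisted settings store, not the real database.
        return self._settings.get(key, "")

    def show_message(self, text: str) -> None:
        # Plug-ins may display text in a fixed, host-controlled widget, but cannot
        # build arbitrary UI such as fake log-in screens or redirects.
        print(f"[plugin message] {text}")


class Plugin:
    """Base class for plug-in authors; run() receives only the facade."""

    def run(self, host: HostFacade) -> None:
        raise NotImplementedError


class GreeterPlugin(Plugin):
    def run(self, host: HostFacade) -> None:
        host.show_message("hello, " + host.get_setting("user"))


if __name__ == "__main__":
    GreeterPlugin().run(HostFacade({"user": "alice"}))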
b Knowing who developed something requires you to ask them for and confirm personally identifying information.
At that point you can simply use a username and password over SSL, or a signing system where you become a certificate authority, if your system is to be used by anyone else and you don't want the extra load of people downloading plug-ins directly from you. Developers can always misplace their keys, but there is little you can do about that. (A minimal signing sketch follows this answer.)
Won't work for a small system, though, even if you were signing for free.
The next best option is a reputation handle: once a developer has had a few plug-ins checked and found legitimate, their later plug-ins get in with less checking, or with none at all.
If developers can't be bothered to register an account either, you could always fall back on their IP address (with a bit of SSL traffic to avoid spoofing) and use that as their internal user name. People with dynamic IPs, or behind proxies, who have a lot of plug-ins to submit would eventually register.
Of course, this requires people that can check the plug-ins.
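As a minimal sketch of the signing idea mentioned above: the platform vendor signs each checked plug-in with a private key, and the host verifies the detached signature before loading anything. This uses Ed25519 from the third-party cryptography package, the package bytes are a stand-in, and a real certificate-authority setup would add key distribution and revocation on top.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- done once by the vendor acting as the signing authority ---
vendor_private_key = Ed25519PrivateKey.generate()
vendor_public_key = vendor_private_key.public_key()   # shipped with the host application

plugin_bytes = b"stand-in for the real plug-in package bytes"
signature = vendor_private_key.sign(plugin_bytes)      # distributed alongside the plug-in


# --- done by the host application before loading a plug-in ---
def is_trusted(package: bytes, sig: bytes, public_key: Ed25519PublicKey) -> bool:
    try:
        public_key.verify(sig, package)  # raises InvalidSignature if either was tampered with
        return True
    except InvalidSignature:
        return False


print(is_trusted(plugin_bytes, signature, vendor_public_key))         # True
print(is_trusted(plugin_bytes + b"x", signature, vendor_public_key))  # False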
a) How can the developers of a plugin system ensure that plugins are secure and safe to use on the core platform?
How do developers know anything? They don't. They must trust the framework. For open source, that means they download it and check it themselves. For proprietary software, who knows how developers come to trust the framework?
b) How can the developers of a plugin system ensure that the plugins used on their platform are "trustable", i.e. have some way of knowing who developed a given plugin (similar to what Facebook does with its API keys)?
If you build a plugin framework, you don't know anything about the plugins. A plug-in framework can have "good" plug-ins and "bad" plug-ins. But who decides good or bad? The users do. If a plug-in is "good", it's useful and works. If a plug-in is "bad" it's useless or doesn't work. Most viruses are just useless software.
Any software can fit into the plug-in framework and still be useless. It's a value judgement, not a technical question.
c) How can developers control what changes a plugin makes to the UI (if this is permitted at all)? For example, a plugin that is permitted to manipulate the UI and redirect the user to certain web pages could take the user to a phishing site.
Yep. Happens all the time.
What is "Phishing"? Sometimes I don't want to give out my email even to a "real" company. Are they "phishing" when they ask? Not really. What about a news source behind a registration page? I must register to get news. Is that Phishing? What about a site that promises financial information? If I register, is that phishing from the financial source or is that legitimate user registration? What if the financial information is about Nigeria? What if it's about a dead relative of mine in Nigeria?
There's no technical means for determining "good" vs. "bad" here. It's all a value judgement on the part of the user.
The "plug-in" framework can't decide anything. Only users can decide.

How Big a Security Risk are Browser Extensions?

One of the more powerful features of modern browsers is the ability for software developers to write browser extensions that enhance, modify, and tweak the pages a user visits. As more of our lives migrate into the browser, aren't we potentially exposing ourselves to massive privacy and security holes created by installing a browser extension that is malicious in nature?
I realize the source code of these extensions is extractable and readable if the author has not attempted to obfuscate its behavior. But the effectiveness of this kind of review is undermined by the browser encouraging users to keep their extensions up to date. While version 1.0 of an extension may be innocuous, the browser may suggest an upgrade to version 1.1, which could contain malicious code used to scrape information from the screen of the compromised browser.
As both a user and a developer of browser extensions, is the developer's reputation the only thing in place to assure users that their browsing activity will remain secure? Are there any mechanisms in place to help protect users from a compromised browser extension?
Are there any best practices for developing extensions in a manner that assures users that the code they install and update is benign in nature?
Browser extensions can do almost anything the user can do. They can send out your bank passwords, read files on the local disk, execute commands, etc. The security of a browser depends not only on the browser itself, but also on all installed extensions.
I've written a few extensions for Chrome recently, and I had no idea how much harm extensions could really do before that.
Extensions ask for permissions, but these are very broad. Any non-trivial extension will most likely end up asking for full permissions, and most users will just bang the "YES" button. Even a tech-savvy user may shrug this off as legitimate; I know I have.
Most extensions are free. It costs time and money to code them up, so how are developers getting their investment back? Some do it for fun, but the Chrome Web Store specifically asks whether you are planning to inject ads, so I can only deduce that this is common practice among extension developers. Extensions could also act as tracking cookies and sell usage stats to whomever.
It's nearly trivial to write an extension that gathers up your passwords and sends them on to a third party, even when those passwords are "saved". One of my extensions had a legitimate use case for modifying all input fields on all pages, and I found out that Chrome would happily fill stored passwords into them in plain text. The same goes for credit card information.
Many extensions include analytics packages to help developers identify who their users are, which parts of the app are used, and so forth. I think this is a legitimate use case, but you may not necessarily agree.
If you are a developer, be advised that Chrome extensions can significantly impact page load times. My own extension, which I tirelessly optimized to be as lightweight as possible, still added 50-200 ms to every page load.
So after seeing what's possible, I've disabled all extensions in Chrome except my own. I really only miss AdBlock.
Internet Explorer Browser Helper Objects are extremely unsafe. They basically allow the browser to run native code, which could be anything. I'm not sure if they're still as pervasive now as they were in years past, but they're one of the reasons why Internet Explorer is so much less secure than Firefox and other browsers.
Mozilla-style plug-ins using XUL and Microsoft's Silverlight plug-ins are sandboxed to try to prevent malicious behavior. Ultimately, though, it rests on the developer's reputation for any kind of software to be deemed trustworthy by its users. Even when the developer is not trying to write malware, bugs in the program may open up security holes.
This is why you have multiple machines; and if you can't afford another one, use a virtual machine to run most of your software and monitor its behavior. That's what I do, at least, before I run anything new.
