Facebook Chat Plugin issues with CSP frame-src directive - content-security-policy

We have a Facebook Chat Plugin on one of our websites. We have a CSP set up where any third party providers are whitelisted by us.
The problem is, when viewing a website in mobile which has a Facebook Chat Plugin, when clicking the "Chat in Messenger Button" it throws:
Refused to frame '' because it violates the following Content Security Policy directive: "frame-src ... ".
But it literally said '', which we are unable to whitelist in our CSP headers.
When we tried to disable the CSP headers, upon clicking the "Chat in Messenger" button, it tries to open the Messenger app instead.
Any ideas on working around this?

But it literally said '', which we are unable to whitelist in our CSP headers.
On some platforms Chrome does not show the blocked-uri in the console; I do not know why. Try Safari if you have one (I think Firefox blocks nothing in your case).
The FB Chat Plugin script uses the intent:// and fb-messenger:// vendor schemes to open the app:
__d("sdk.openMessenger", ["sdk.UA"], (function(a, b, c, d, e, f) {
"use strict";
e.exports = a;
var g = "https://itunes.apple.com/us/app/messenger/id454638411",
h = "https://play.google.com/store/apps/details?id=com.facebook.orca",
i = 3e3;
function a(a) {
var c, d, e = a.link;
a = a.app_id;
b("sdk.UA").android() ? (c = "intent://share/#Intent;package=com.facebook.orca;scheme=fb-messenger;S.android.intent.extra.TEXT=" + encodeURIComponent(e) + ";S.trigger=send_plugin;", a && (c += "S.platform_app_id=" + encodeURIComponent(a) + ";"), c += "end", d = h) : (c = "fb-messenger://share?link=" + encodeURIComponent(e), a && (c += "&app_id=" + encodeURIComponent(a)), d = g);
setTimeout(function() {
window.location.href = d
}, i);
window.location.href = c
}
}), null);
Although vendor schemes should not be blocked by the page CSP, some browsers may require you to list the intent: and fb-messenger: scheme-sources in the frame-src directive.
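For example, a policy along these lines (the facebook.com entry stands in for whatever hosts you already whitelist) would cover both the plugin iframe and the app-opening schemes:
Content-Security-Policy: frame-src 'self' https://www.facebook.com intent: fb-messenger: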
When we tried to disable the CSP headers, upon clicking the "Chat in Messenger" button, it tries to open the Messenger app instead.
Yes, Android tries to open the Messenger app if it is installed. In this question: Facebook chat button of my website in android webview is not opening the messenger app there is a link to a solution - substitute the "intent:" scheme with "https://". Maybe it helps you.
If you prevent the Messenger app from opening, you may not need the intent: and fb-messenger: scheme-sources in the CSP at all.

Related

Add variables to reply URLs in Azure B2C

I am trying to set the redirect_uri in Azure B2C. I have a language field in the URL, like this:
https://mydomain/de-de/projects
https://mydomain/en-us/projects
https://mydomain/sv-se/projects
https://mydomain/ar-sa/projects
...
and to be redirected correctly, I have to add all the possibilities to the B2C Reply URLs, but I am limited to a maximum of 20.
Is there a way to add variables to the redirect_uri?
Something like:
https://mydomain/:lang/projects
where ":lang" is a variable the could take any value.
Solution
The tricky solution was to manipulate the state and inject the return URL into it, since the state will be sent back after the login/signup response. In the createLoginUrl() method:
let url = that.loginUrl
+ '?response_type='
+ response_type
+ '&client_id='
+ encodeURIComponent(that.clientId)
+ '&state='
+ encodeURIComponent((state) + 'url' + returnedUrl)
+ '&redirect_uri='
+ encodeURIComponent(window.location.origin)
+ '&scope='
+ encodeURIComponent(that.scope);
So here I join the state and the return URL using the word 'url' as a separator, so I can split it again after the response comes back:
encodeURIComponent((state) + 'url' + returnedUrl)
An important detail is the redirect_uri; it should be the same origin:
'&redirect_uri=' + encodeURIComponent(window.location.origin)
and this URL should be added to the Reply URLs of the Azure B2C application.
Now I can split it again in tryLogin() method:
const statePartsWithUrl = (parts['state'] + '').split('url');
window.location.href = statePartsWithUrl[1];
and it works perfectly.
Edit: 1.2.2019
const statePartsWithUrl = (parts['state'] + '').split('url');
let state = '';
let returnedUrl = '';
if (statePartsWithUrl != null) {
state = statePartsWithUrl[0];
returnedUrl = statePartsWithUrl[1];
}
Here is the splitting of the state to read the information back from it in the tryLogin(options) method.
Yeah so as you found out, you can't currently add wildcards to reply URLs in B2C.
This may be due to security concerns defined in the OAuth 2.0 Threat Model and Security Considerations RFC.
In it, the suggested counter-measure against Open Redirect Attacks is to have the client register the full redirect URI.
There is also no way to create apps programmatically: https://feedback.azure.com/forums/169401-azure-active-directory/suggestions/19975480-programmatically-register-b2c-applications.
So sadly the manual way is the only way at the moment. But be sure to go upvote the feature request on User Voice.
I actually even tried to manually edit an app via Graph Explorer:
{
    "odata.error": {
        "code": "Request_BadRequest",
        "message": {
            "lang": "en",
            "value": "Updates to converged applications are not allowed in this version."
        },
        "date": "2018-01-08T12:00:00",
        "requestId": "208e7159-d459-42ec-8bb7-000000000000",
        "values": null
    }
}
As you suggested in the comments, one way to work around this problem would be to use a single static redirect URI and keep the language/culture in the state/a cookie, and then do the redirect to the language-specific version after the user is returned to the app.
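A rough sketch of that single-redirect-URI idea, in the same style as the asker's createLoginUrl()/tryLogin() code (every name here, such as nonce, authorizeEndpoint, clientId, scope and returnedState, is illustrative):
// When building the login URL: one fixed, registered redirect_uri; the culture rides inside the state
var culture = 'de-de'; // read from the current URL, for example
var state = encodeURIComponent(nonce + '|' + culture);
var loginUrl = authorizeEndpoint
    + '?response_type=id_token'
    + '&client_id=' + encodeURIComponent(clientId)
    + '&redirect_uri=' + encodeURIComponent('https://mydomain/auth-callback')
    + '&state=' + state
    + '&scope=' + encodeURIComponent(scope);

// In the callback handler: read the culture back out of the state and finish the redirect
var parts = decodeURIComponent(returnedState).split('|');
window.location.href = 'https://mydomain/' + parts[1] + '/projects';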

GoogleActions Account not linked yet error

I'm trying to implement OAuth2 authentication for my Node.js Google Assistant app developed using Dialogflow (formerly API.AI) and Actions on Google.
So I followed this answer, but I'm always getting the "It looks like your test oauth account is not linked yet." error. When I tried to open the URL shown on the debug tab, it shows a 500 broken-URL error.
Dialogflow fulfillment
index.js
'use strict';
const functions = require('firebase-functions'); // Cloud Functions for Firebase library
const DialogflowApp = require('actions-on-google').DialogflowApp; // Google Assistant helper library
const googleAssistantRequest = 'google'; // Constant to identify Google Assistant requests
exports.dialogflowFirebaseFulfillment = functions.https.onRequest((request, response) => {
console.log('Request headers: ' + JSON.stringify(request.headers));
console.log('Request body: ' + JSON.stringify(request.body));
// An action is a string used to identify what needs to be done in fulfillment
let action = request.body.result.action; // https://dialogflow.com/docs/actions-and-parameters
// Parameters are any entites that Dialogflow has extracted from the request.
const parameters = request.body.result.parameters; // https://dialogflow.com/docs/actions-and-parameters
// Contexts are objects used to track and store conversation state
const inputContexts = request.body.result.contexts; // https://dialogflow.com/docs/contexts
// Get the request source (Google Assistant, Slack, API, etc) and initialize DialogflowApp
const requestSource = (request.body.originalRequest) ? request.body.originalRequest.source : undefined;
const app = new DialogflowApp({request: request, response: response});
// Create handlers for Dialogflow actions as well as a 'default' handler
const actionHandlers = {
// The default welcome intent has been matched, welcome the user (https://dialogflow.com/docs/events#default_welcome_intent)
'input.welcome': () => {
// Use the Actions on Google lib to respond to Google requests; for other requests use JSON
//+app.getUser().authToken
if (requestSource === googleAssistantRequest) {
sendGoogleResponse('Hello, Welcome to my Dialogflow agent!'); // Send simple response to user
} else {
sendResponse('Hello, Welcome to my Dialogflow agent!'); // Send simple response to user
}
},
// The default fallback intent has been matched, try to recover (https://dialogflow.com/docs/intents#fallback_intents)
'input.unknown': () => {
// Use the Actions on Google lib to respond to Google requests; for other requests use JSON
if (requestSource === googleAssistantRequest) {
sendGoogleResponse('I\'m having trouble, can you try that again?'); // Send simple response to user
} else {
sendResponse('I\'m having trouble, can you try that again?'); // Send simple response to user
}
},
// Default handler for unknown or undefined actions
'default': () => {
// Use the Actions on Google lib to respond to Google requests; for other requests use JSON
if (requestSource === googleAssistantRequest) {
let responseToUser = {
//googleRichResponse: googleRichResponse, // Optional, uncomment to enable
//googleOutputContexts: ['weather', 2, { ['city']: 'rome' }], // Optional, uncomment to enable
speech: 'This message is from Dialogflow\'s Cloud Functions for Firebase editor!', // spoken response
displayText: 'This is from Dialogflow\'s Cloud Functions for Firebase editor! :-)' // displayed response
};
sendGoogleResponse(responseToUser);
} else {
let responseToUser = {
//richResponses: richResponses, // Optional, uncomment to enable
//outputContexts: [{'name': 'weather', 'lifespan': 2, 'parameters': {'city': 'Rome'}}], // Optional, uncomment to enable
speech: 'This message is from Dialogflow\'s Cloud Functions for Firebase editor!', // spoken response
displayText: 'This is from Dialogflow\'s Cloud Functions for Firebase editor! :-)' // displayed response
};
sendResponse(responseToUser);
}
}
};
// If undefined or unknown action use the default handler
if (!actionHandlers[action]) {
action = 'default';
}
// Run the proper handler function to handle the request from Dialogflow
actionHandlers[action]();
// Function to send correctly formatted Google Assistant responses to Dialogflow which are then sent to the user
function sendGoogleResponse (responseToUser) {
if (typeof responseToUser === 'string') {
app.ask(responseToUser); // Google Assistant response
} else {
// If speech or displayText is defined use it to respond
let googleResponse = app.buildRichResponse().addSimpleResponse({
speech: responseToUser.speech || responseToUser.displayText,
displayText: responseToUser.displayText || responseToUser.speech
});
// Optional: Overwrite previous response with rich response
if (responseToUser.googleRichResponse) {
googleResponse = responseToUser.googleRichResponse;
}
// Optional: add contexts (https://dialogflow.com/docs/contexts)
if (responseToUser.googleOutputContexts) {
app.setContext(...responseToUser.googleOutputContexts);
}
app.ask(googleResponse); // Send response to Dialogflow and Google Assistant
}
}
// Function to send correctly formatted responses to Dialogflow which are then sent to the user
function sendResponse (responseToUser) {
// if the response is a string send it as a response to the user
if (typeof responseToUser === 'string') {
let responseJson = {};
responseJson.speech = responseToUser; // spoken response
responseJson.displayText = responseToUser; // displayed response
response.json(responseJson); // Send response to Dialogflow
} else {
// If the response to the user includes rich responses or contexts send them to Dialogflow
let responseJson = {};
// If speech or displayText is defined, use it to respond (if one isn't defined use the other's value)
responseJson.speech = responseToUser.speech || responseToUser.displayText;
responseJson.displayText = responseToUser.displayText || responseToUser.speech;
// Optional: add rich messages for integrations (https://dialogflow.com/docs/rich-messages)
responseJson.data = responseToUser.richResponses;
// Optional: add contexts (https://dialogflow.com/docs/contexts)
responseJson.contextOut = responseToUser.outputContexts;
response.json(responseJson); // Send response to Dialogflow
}
}
});
// Construct rich response for Google Assistant
const app = new DialogflowApp();
const googleRichResponse = app.buildRichResponse()
.addSimpleResponse('This is the first simple response for Google Assistant')
.addSuggestions(
['Suggestion Chip', 'Another Suggestion Chip'])
// Create a basic card and add it to the rich response
.addBasicCard(app.buildBasicCard(`This is a basic card. Text in a
basic card can include "quotes" and most other unicode characters
including emoji 📱. Basic cards also support some markdown
formatting like *emphasis* or _italics_, **strong** or __bold__,
and ***bold itallic*** or ___strong emphasis___ as well as other things
like line \nbreaks`) // Note the two spaces before '\n' required for a
// line break to be rendered in the card
.setSubtitle('This is a subtitle')
.setTitle('Title: this is a title')
.addButton('This is a button', 'https://assistant.google.com/')
.setImage('https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png',
'Image alternate text'))
.addSimpleResponse({ speech: 'This is another simple response',
displayText: 'This is another simple response 💁' });
// Rich responses for both Slack and Facebook
const richResponses = {
'slack': {
'text': 'This is a text response for Slack.',
'attachments': [
{
'title': 'Title: this is a title',
'title_link': 'https://assistant.google.com/',
'text': 'This is an attachment. Text in attachments can include \'quotes\' and most other unicode characters including emoji 📱. Attachments also support line\nbreaks.',
'image_url': 'https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png',
'fallback': 'This is a fallback.'
}
]
},
'facebook': {
'attachment': {
'type': 'template',
'payload': {
'template_type': 'generic',
'elements': [
{
'title': 'Title: this is a title',
'image_url': 'https://developers.google.com/actions/images/badges/XPM_BADGING_GoogleAssistant_VER.png',
'subtitle': 'This is a subtitle',
'default_action': {
'type': 'web_url',
'url': 'https://assistant.google.com/'
},
'buttons': [
{
'type': 'web_url',
'url': 'https://assistant.google.com/',
'title': 'This is a button'
}
]
}
]
}
}
}
};
Actually I deployed the code that exists in the Dialogflow inline editor. But I don't know how to implement an OAuth endpoint, whether it should be a separate cloud function or it has to be included within the existing one. And also I am so confused about how the OAuth authorization code flow will actually work. Let's assume we are on the Assistant app; once the user says "talk to foo app", does it automatically open a web browser for the OAuth code exchange process?
The answer you referenced had an update posted on October 25th indicating they had taken action to prevent you from entering in a google.com endpoint as your auth provider for Account Linking. It seems possible that they may have taken other actions to prevent using Google's auth servers in this way.
If you're using your own auth server, the error 500 would indicate an error on your oauth server, and you should check your oauth server for errors.
Update to answer some of your other questions.
But I don't know how to implement an OAuth endpoint
Google provides guidance (but not code) on what you need to do for a minimal OAuth service, either using the Implicit Flow or the Authorization Code Flow, and how to test it.
whether it should be a separate cloud function or it has to be included within the existing one
It should be separate - it is even arguable that it must be separate. In both the Implicit Flow and the Authorization Code Flow, you need to provide a URL endpoint where users will be redirected to log into your service. For the Authorization Code Flow, you'll also need an additional webhook that the Assistant will use to exchange tokens.
The function behind these needs to be very different from what you're doing for the Dialogflow webhook. While someone could probably make a single function that handles all of the different tasks, there is no need to. You'll be providing the OAuth URLs separately.
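To make that concrete, a bare-bones Authorization Code Flow service might be shaped roughly like this (an Express sketch of my own, not Google's code; the endpoint paths are whatever you register in the Account Linking settings, and real login, client validation, persistent token storage and expiry are all omitted):
const express = require('express');
const crypto = require('crypto');
const app = express();
app.use(express.urlencoded({ extended: false }));

// In-memory stores, purely for illustration
const authCodes = new Map();    // one-time code -> user id
const accessTokens = new Map(); // access token  -> user id

// Authorization endpoint: log the user in, then send a one-time code back to the Assistant
app.get('/authorize', (req, res) => {
    const redirectUri = req.query.redirect_uri;
    const state = req.query.state;
    const userId = 'demo-user'; // in reality, the result of your own login page
    const code = crypto.randomBytes(16).toString('hex');
    authCodes.set(code, userId);
    res.redirect(redirectUri + '?code=' + code + '&state=' + encodeURIComponent(state));
});

// Token endpoint: Google exchanges the one-time code for an access token (refresh handling omitted)
app.post('/token', (req, res) => {
    const userId = authCodes.get(req.body.code);
    if (!userId) {
        return res.status(400).json({ error: 'invalid_grant' });
    }
    authCodes.delete(req.body.code);
    const token = crypto.randomBytes(16).toString('hex');
    accessTokens.set(token, userId);
    res.json({ token_type: 'Bearer', access_token: token, refresh_token: token, expires_in: 3600 });
});

app.listen(3000);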
However, your Dialogflow webhook does have some relationship with your OAuth server. In particular, the tokens that the OAuth server hands to the Assistant will be handed back to the Dialogflow webhook, so Dialogflow needs some way to get the user's information based on that token. There are many ways to do this, but to list just a few:
The token could be a JWT and contain the user information as claims in the body. The Dialogflow webhook should use the public key to verify the token is valid and needs to know the format of the claims.
The OAuth server and the Dialogflow webhook could use a shared account database; the OAuth server stores the token as a key to the user account and deletes expired keys. The Dialogflow webhook could then use the token it gets as a key to look up the user (see the sketch after this list).
The OAuth server might have a(nother) webhook where Dialogflow could request user information, passing the key as an Authorization header and getting a reply. (This is what Google does, for example.)
The exact solution depends on your needs and what resources you have available to you.
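The shared-store option from that list, for instance, could boil down to something like this on the webhook side (a sketch; the store shape and names are made up):
// Map the token the Assistant hands back to a user record (tokenStore/userStore are whatever you use)
function getUserForToken(accessToken, tokenStore, userStore) {
    var entry = tokenStore.get(accessToken);     // e.g. { userId: '...', expiresAt: 1546300800000 }
    if (!entry || entry.expiresAt < Date.now()) {
        return null;                             // unknown or expired: the user needs to re-link
    }
    return userStore.get(entry.userId);
}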
And also I am so confused about how the OAuth authorization code flow will actually work. Let's assume we are on the Assistant app; once the user says "talk to foo app", does it automatically open a web browser for the OAuth code exchange process?
Broadly speaking - yes. The details vary (and can change), but don't get too fixated on the details.
If you're using the Assistant on a speaker, you'll be prompted to open the Home app which should be showing a card saying what Action wants permission. Clicking on the card will open a browser or webview to the Actions website to begin the flow.
If you're using the Assistant on a mobile device, it prompts you directly and then opens a browser or webview to the Actions website to begin the flow.
The auth flow basically involves:
Having the user authenticate themselves, if necessary.
Having the user authorize the Assistant to access your resources on the user's behalf.
It then redirects to Google's servers with a one-time code.
Google's servers then take the code... and close the window. That's the extent of what the user sees.
Behind the scenes, Google takes this code and, since you're using the Authorization Code Flow, exchanges it for an auth token and a refresh token at the token exchange URL.
Then, whenever the user uses your Action, it will send an auth token along with the rest of the request to your server.
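In the fulfillment shown in the question, that token can be read off the incoming request, roughly like this (assuming the same actions-on-google v1 DialogflowApp setup; the exact property name may differ between library versions):
// Inside dialogflowFirebaseFulfillment, after `const app = new DialogflowApp({request, response})`
const linkedAccountToken = app.getUser() && app.getUser().accessToken;
// Or straight from the raw Dialogflow v1 payload:
const rawToken = request.body.originalRequest
    && request.body.originalRequest.data.user.accessToken;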
Please suggest the necessary package for OAuth2 configuration
That I can't do. For starters - it completely depends on your other resources and requirements. (And this is why StackOverflow doesn't like people asking for suggestions like this.)
There are packages out there (you can search for them) that let you setup an OAuth2 server. I'm sure someone out there provides OAuth-as-a-service, although I don't know any offhand. Finally, as noted above, you can write a minimal OAuth2 server using the guidance from Google.
Trying to create a proxy for Google's OAuth is... probably possible... not as easy as it first seems... likely not as secure as anyone would be happy with... and possibly (but not necessarily, IANAL) a violation of Google's Terms of Service.
can't we store the user's email address by this approach?
Well, you can store whatever you want in the user's account. But this is the user's account for your Action.
You can, for example, access Google APIs on behalf of your user to get their email address or whatever else they have authorized you to do with Google. The user account that you have will likely store the OAuth tokens that you use to access Google's server. But you should logically think of that as separate from the code that the Assistant uses to access your server.
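For example, with a Google access token stored for the user, the userinfo endpoint used later in this thread will hand back the email (a Node sketch of my own; error handling omitted):
const https = require('https');

// Hypothetical helper: look up the e-mail behind a stored Google access token
function fetchEmail(googleAccessToken) {
    return new Promise(function (resolve, reject) {
        https.get('https://www.googleapis.com/oauth2/v1/userinfo?access_token=' + googleAccessToken,
            function (res) {
                var body = '';
                res.on('data', function (chunk) { body += chunk; });
                res.on('end', function () { resolve(JSON.parse(body).email); });
            }).on('error', reject);
    });
}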
My implementation of a minimal OAuth2 server (it works for the implicit flow but doesn't store the user session), taken from https://developers.google.com/identity/protocols/OAuth2UserAgent:
function oauth2SignIn() {
    // Google's OAuth 2.0 endpoint for requesting an access token
    var oauth2Endpoint = 'https://accounts.google.com/o/oauth2/v2/auth';

    // Create element to open OAuth 2.0 endpoint in new window.
    var form = document.createElement('form');
    form.setAttribute('method', 'GET'); // Send as a GET request.
    form.setAttribute('action', oauth2Endpoint);

    // Get the state and redirect_uri parameters from the request
    var searchParams = new URLSearchParams(window.location.search);
    var state = searchParams.get("state");
    var redirect_uri = searchParams.get("redirect_uri");
    //var client_id = searchParams.get("client_id");

    // Parameters to pass to OAuth 2.0 endpoint.
    var params = {
        'client_id': YOUR_CLIENT_ID,
        'redirect_uri': redirect_uri,
        'scope': 'email',
        'state': state,
        'response_type': 'token',
        'include_granted_scopes': 'true'
    };

    // Add form parameters as hidden input values.
    for (var p in params) {
        var input = document.createElement('input');
        input.setAttribute('type', 'hidden');
        input.setAttribute('name', p);
        input.setAttribute('value', params[p]);
        form.appendChild(input);
    }

    // Add form to page and submit it to open the OAuth 2.0 endpoint.
    document.body.appendChild(form);
    form.submit();
}
This implementation isn't very secure but it's the only code I've gotten to work as OAuth server for the Assistant.
I was able to make it work after a long time. We have to enable the webhook first; the Dialogflow fulfillment docs show how to enable it. If we are going to use Google Assistant, then we have to enable the Google Assistant integration in the Integrations section first. Then follow the steps mentioned below for Account Linking in Actions on Google:
Go to the Google Cloud Console -> APIs and Services -> Credentials -> OAuth 2.0 client IDs -> Web client -> note the client ID and client secret from there -> Download JSON - from the JSON note down the project id, auth_uri and token_uri -> Authorised redirect URIs -> whitelist our app's URL -> the fixed part of this URL is https://oauth-redirect.googleusercontent.com/r/ and you append the project id to it -> save the changes.
Actions on Google -> Account linking setup:
1. Grant type = Authorisation code
2. Client info: fill in the client id, client secret, auth_uri and token_uri
3. Enter the auth_uri as https://www.googleapis.com/auth and the token_uri as https://www.googleapis.com/token
4. Save and run
5. It will show an error while running on the Google Assistant, but don't worry
6. Come back to the account linking section in the Assistant settings and enter the auth_uri as https://accounts.google.com/o/oauth2/auth and the token_uri as https://accounts.google.com/o/oauth2/token
7. Put the scopes as https://www.googleapis.com/auth/userinfo.profile and https://www.googleapis.com/auth/userinfo.email and we are good to go
8. Save the changes
In the hosting server (Heroku) logs, we can see the access token value, and through the access token we can get the details regarding the email address.
Append the access token to this link "https://www.googleapis.com/oauth2/v1/userinfo?access_token=" and we can get the required details in the resulting JSON page:
import requests

# Read the access token from the Dialogflow (v1) request, then query Google's userinfo endpoint
accessToken = req.get("originalRequest").get("data").get("user").get("accessToken")
link = "https://www.googleapis.com/oauth2/v1/userinfo?access_token=" + accessToken
r = requests.get(link)
print("Email Id= " + r.json()["email"])
print("Name= " + r.json()["name"])

Issue getting user/password with GetAuthenticationInfo in firebreath

I'm trying to get the user/password from a FireBreath plugin using the NpapiBrowserHost::GetAuthenticationInfo method.
I need to do this for NPAPI-based browsers (Chrome / Firefox / Opera). So this is my code:
boost::shared_ptr<FB::Npapi::NpapiBrowserHost> npapihost =
FB::ptr_cast<FB::Npapi::NpapiBrowserHost>(m_host);
if(npapihost)
{
char * username = NULL; uint32_t ulen = 0;
char * password = NULL; uint32_t plen = 0;
NPError err = npapihost->GetAuthenticationInfo("http",
"xxx.yyy.com",
80,
"Basic",
"Knownnameofrealm",
&username, &ulen,
&password, &plen );
}
In Opera it works. In Chrome & Firefox it returns err = NPERR_GENERIC_ERROR,
and ulen = 0, plen = 0 (username, password - bad ptr).
This code is executed from MypluginamePlugin::onPluginReady().
If you succeeded in getting credentials, please post a code example.
PS: According to the Chromium sources, Chrome does not yet implement NPN_GetAuthenticationInfo: https://code.google.com/p/chromium/issues/detail?id=23928
In Firefox I should have used -1 instead of 80 for HTTP (443 for HTTPS).
Simply speaking, FF's password managing service stores all its info inside a hashmap:
Map entry = ( (key to auth. object) , (objects with single user auth. info) )
Each key is a string created as follows: (some pro stuff) + (scheme) + "://" + (host) + ":" + (port).
FF substitutes INTERNET_DEFAULT_HTTP_PORT = 80 (INTERNET_DEFAULT_HTTPS_PORT = 443) with -1 when creating a new map entry.
In Opera initially all worked fine.
In Chrome, the browser-side endpoint function has not been implemented since the stub was created in 2009.
In IE, npapihost is not available, although I didn't even have to mess with login/password extraction because the default CInternetSession (WinInet) constructor does it automatically.

Is it possible to build Facebook login functionality in an extension

I want to include Facebook login functionality in my Chrome extension. I have included the
connect.facebook.net/en_US/all.js file in popup.html, but it is not working.
I am using this code:
window.fbAsyncInit = function() {
// init the FB JS SDK
FB.init({
appId: 'APP_ID', // App ID from the App Dashboard
//channelUrl : '//www.example.com/', // Channel File for x-domain communication
status: true, // check the login status upon init?
cookie: true, // set sessions cookies to allow your server to access the session?
xfbml: true // parse XFBML tags on this page?
});
// Additional initialization code such as adding Event Listeners goes here
console.log(FB);
};
// Load the SDK's source Asynchronously
(function(d, debug) {
var js, id = 'facebook-jssdk', ref = d.getElementsByTagName('script')[0];
if (d.getElementById(id)) {
return;
}
js = d.createElement('script');
js.id = id;
js.async = true;
js.src = "http://connect.facebook.net/en_US/all" + (debug ? "/debug" : "") + ".js";
ref.parentNode.insertBefore(js, ref);
}(document, /*debug*/false));
but I am getting this error:
Refused to load the script 'http://connect.facebook.net/en_US/all.js' because it violates the following Content Security Policy directive: "script-src 'self' chrome-extension-resource:".
I use the following code and it works, without violating the CSP:
function loginfacebook()
{
    // Open the Facebook OAuth dialog in a new window (implicit flow: the token comes back in the URL fragment)
    chrome.windows.create(
    {
        'url': "https://www.facebook.com/dialog/oauth?client_id=yourclientid&redirect_uri=https://www.facebook.com/connect/login_success.html&scope=email&response_type=token"
    }, null);
    chrome.tabs.query({active: true}, function (tabs)
    {
        var tabid = tabs[0].id;
        // Watch for the redirect to login_success.html and pull the access token out of the URL
        chrome.tabs.onUpdated.addListener(function (updatedTabId, changeInfo, tab)
        {
            var cadena = tab.url;
            var resultado = null;
            if (cadena != null)
            {
                resultado = cadena.match(/[?&#]access_token=([^&#])*/i);
            }
            if (resultado != null)
            {
                var token = resultado[0];
                token = token.substring(14); // strip the leading "#access_token="
                storagetoken(token);         // storagetoken() is the poster's own helper
            }
        });
    });
}
As the error message says, the Facebook script isn't CSP-compliant. I haven't looked at that script, but if you can't modify it and fix the CSP issues, you have a couple options in general for dealing with such scripts:
Put it in a sandboxed iframe.
Put the script in a <webview>.
Unfortunately, the point of that Facebook script is likely to set a cookie after FB authentication, and that cookie would stay in either the iframe or the webview, so neither of these approaches will end up with the required cookie in your main app. You'll have to figure out a way to transmit the product (cookie) of the FB login operation to your app, likely through postMessage. If you do that legwork and succeed, please post your results somewhere, such as in a sample app on GitHub.
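A rough sketch of that hand-off, assuming the sandboxed-iframe route and that what you end up with is a token rather than a cookie (the message shape is entirely up to you):
// In the sandboxed iframe that performed the FB login: pass the result up to the embedding page
parent.postMessage({ type: 'fb-auth', accessToken: token }, '*');

// In the extension page that embeds the iframe: receive it and keep it somewhere useful
// (chrome.storage requires the "storage" permission in the manifest)
window.addEventListener('message', function (event) {
    if (event.data && event.data.type === 'fb-auth') {
        chrome.storage.local.set({ fbAccessToken: event.data.accessToken });
    }
});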

Chrome Extension : Modify User-Agent string

In a Firefox extension, we can do:
var _prefService = Components.classes["@mozilla.org/preferences-service;1"].getService(Components.interfaces.nsIPrefBranch);
var httpHandler = Components.classes["@mozilla.org/network/protocol;1?name=http"].getService(Components.interfaces.nsIHttpProtocolHandler);
_prefService.setCharPref("general.useragent.override", httpHandler.userAgent + " OurUAToken/1.0");
To add "OurUAToken/1.0" at the end of User-Agent string.
How can we duplicate this behavior in Google Chrome?
Not sure if someone is still looking for the solution, but the chrome.webRequest API suggested earlier is quite stable now.
chrome.webRequest.onBeforeSendHeaders.addListener(
function (details) {
for (var i = 0; i < details.requestHeaders.length; ++i) {
if (details.requestHeaders[i].name === 'User-Agent') {
details.requestHeaders[i].value = details.requestHeaders[i].value + ' OurUAToken/1.0';
break;
}
}
return { requestHeaders: details.requestHeaders };
},
{ urls: ['<all_urls>'] },
['blocking', 'requestHeaders']
);
One of the chrome extensions, Requestly already has similar implementation to allow overriding User Agent string for any website opened in the browser.
For more info, please see the blog post: https://medium.com/@requestly_ext/switching-user-agent-in-browser-f57fcf42a4b5
The extension is also available for Firefox. Visit http://www.requestly.in for details.
You can use the webRequest API: http://code.google.com/chrome/extensions/trunk/experimental.webRequest.html
Unfortunately, it's still in the experimental stage. I think it will be released as stable with Chrome version 17.
