Can I prevent node_modules bundled with webpack from using window.postMessage?

For security reasons, I need to disallow 3rd party modules which are included in my bundle by webpack from using window.postMessage to communicate with other processes in my Electron app.
Is that possible?

A standard trick could help here: simply move the method to another variable on window, like this maybe:
window._postMessage = window.postMessage;
window.postMessage = () => {};
Run this first, either at the top of your renderer script or in a <script> tag in your HTML, and any plugins that rely on window.postMessage can no longer send events (in fact, they don't crash; they just never get a response).
Edit
If you want to make sure that only authorized user can use it, something like this could work:
function createSecurePostMessage() {
  const _postMessage = window.postMessage.bind(window);
  return {
    get() {
      return function (...args) {
        // only forward messages that pass the authorization check
        if (isAuthorized(...args)) {
          _postMessage(...args);
        }
      };
    }
  };
}
window.postMessage = createSecurePostMessage().get();
Now the original postMessage is captured inside the closure and inaccessible from outside, and you can implement isAuthorized however you like, for example to check whether a certain message is allowed to be sent. If someone calls createSecurePostMessage again, the already-secured postMessage just gets wrapped ('secured') again.
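The answer leaves isAuthorized undefined; here is a minimal sketch of what such a check might look like (the allowed message types and the origin rule are assumptions, not part of the original answer):
function isAuthorized(message, targetOrigin) {
  // Hypothetical policy: only plain-object messages of known types,
  // posted to the page's own origin, are allowed through.
  const ALLOWED_TYPES = new Set(['app:ready', 'app:update']); // assumed message types
  if (targetOrigin !== window.location.origin) return false;
  return message !== null && typeof message === 'object' && ALLOWED_TYPES.has(message.type);
}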

Related

Azure function run code on startup for Node

I am developing Chatbot using Azure functions. I want to load the some of the conversations for Chatbot from a file. I am looking for a way to load these conversation data before the function app starts with some function callback. Is there a way load the conversation data only once when the function app is started?
This question is essentially a duplicate of Azure Function run code on startup, but that question was asked for C#, and I want a way to do the same thing in Node.js.
After like a week of messing around I got a working solution.
First some context:
The question at hand: running custom code at app start for Node.js Azure Functions.
The issue is currently being discussed here and has been open for almost 5 years, and doesn't seem to be going anywhere.
As of now there is an Azure Functions "warmup" trigger feature (see AZ Funcs Warm Up Trigger). However, this trigger only runs on scale-out, so the first, initial instance of your app won't run the "warmup" code.
Solution:
I created a start.js file and put the following code in there
const ErrorHandler = require('./Classes/ErrorHandler');
const Validator = require('./Classes/Validator');
const delay = require('delay');
let flag = false;
module.exports = async () =>
{
    console.log('Initializing Globals');
    global.ErrorHandler = ErrorHandler;
    global.Validator = Validator;
    // this is just to test if it will work with async funcs
    const wait = await delay(5000);
    // add additional logic...
    // await db.connect(); etc // initialize a db connection
    console.log('Done Waiting');
};
To run this code I just have to do
require('../start')();
in any of my functions. Just one function is fine. Since all of the function dependencies are loaded when you deploy your code, as long as this line is in one of the functions, start.js will run and initialize all of your global/singleton variables or whatever else you want it to do on func start. I made a literal function called "startWarmUp" and it is just a timer triggered function that runs once a day.
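For illustration, a minimal sketch of what that timer-triggered function might look like (the folder name, relative path, and log message are assumptions; the schedule lives in the function's function.json):
// startWarmUp/index.js -- assumed layout; its only job is to pull in start.js
// so the global initialization runs whenever the function app loads.
require('../start')();

module.exports = async function (context, myTimer) {
    context.log('startWarmUp ran; globals are initialized.');
};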
My use case is that almost every function relies on ErrorHandler and Validator class. And though generally making something a global variable is bad practice, in this case I didn't see any harm in making these 2 classes global so they're available in all of the functions.
Side Note: when developing locally you will have to include that function in your func start --functions <function requiring start.js> <other funcs> in order to have that startup code actually run.
Additionally, there is an open feature request for this functionality that can be voted on here: Azure Feedback
I have a similar use case that I am also stuck on.
Based on this resource I have found a good way to approach the structure of my code. It is simple enough: you just need to run your initialization code before you declare your module.exports.
https://github.com/rcarmo/azure-functions-bot/blob/master/bot/index.js
I also read this thread, but it does not look like there is a recommended solution.
https://github.com/Azure/azure-functions-host/issues/586
However, in my case I have an additional complication: I need to use promises because I am waiting on external services to come back. These promises run within bot.initialise(), and initialise() only seems to run when the first call to the bot occurs. That would be fine, but since it is promise-based, my code doesn't block, which means that by the time 'listener(req, context.res)' is called, listener doesn't exist yet.
The next thing I will try is to restructure my code so that bot.initialise returns a promise, but the code would be much simpler if there were an initialisation webhook that guaranteed that the code within it was executed at startup before everything else.
Has anyone found a good workaround?
My code looks something like this:
var listener = null;
if (process.env.FUNCTIONS_EXTENSION_VERSION) {
    // If we are inside Azure Functions, export the standard handler.
    listener = bot.initialise(true);
    module.exports = function (context, req) {
        context.log("Passing body", req.body);
        listener(req, context.res);
    };
} else {
    // Local server for testing
    listener = bot.initialise(false);
}
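A sketch of the restructuring suggested above, assuming bot.initialise can be changed to return a promise that resolves with the listener (this is an assumption about bot's API, not the original code):
// Assumed: bot.initialise(isAzure) now returns a Promise that resolves with the listener.
var listenerPromise = bot.initialise(!!process.env.FUNCTIONS_EXTENSION_VERSION);

module.exports = function (context, req) {
    // Wait for initialisation (external services, etc.) before handling the request.
    listenerPromise
        .then(function (listener) {
            context.log("Passing body", req.body);
            listener(req, context.res);
        })
        .catch(function (err) {
            context.log.error("Initialisation failed", err);
            context.res = { status: 500, body: "Initialisation failed" };
            context.done();
        });
};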
You can use a global variable to load data before the function executes.
var data = [1, 2, 3];
module.exports = function (context, req) {
    context.log(data[0]);
    context.done();
};
The data variable is initialized only once and is reused across function invocations.

Error `window not defined` in Node.js

I know window doesn't exist in Node.js, but I'm using React and the same code on both client and server. Any method I use to check if window exists nets me:
Uncaught ReferenceError: window is not defined
How do I get around the fact that I can't do window && window.scroll(0, 0)?
Sawtaytoes has got it. I would run whatever code you have in componentDidMount() and surround it with:
if (typeof(window) !== 'undefined') {
    // code here
}
If the window object still hasn't been created by the time React renders the component, you can always run your code a fraction of a second after the component renders (by which point the window object has definitely been created), so the user can't tell the difference.
if (typeof(window) !== 'undefined') {
    var timer = setTimeout(function() {
        // code here
    }, 200);
}
I would advise against setting state inside the setTimeout.
This will settle that issue for you:
typeof(window) === 'undefined'
Even if a variable isn't defined, you can use typeof() to check for it.
This kind of code shouldn't even be running on the server; it should be inside a componentDidMount hook (see the docs), which is only invoked client side. This is because it doesn't make sense to scroll the window server side.
However, if you have to reference to window in a part of your code that really runs both client and server, use global instead (which represents the global scope - e.g. window on the client).
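For instance, a small sketch of picking the right global object in code shared between client and server (a generic pattern, not from the original answer):
// `window` exists in the browser, `global` exists in Node.
const globalScope = typeof window !== 'undefined' ? window : global;
if (typeof globalScope.scroll === 'function') {
  globalScope.scroll(0, 0); // no-op on the server, where scroll doesn't exist
}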
This is a little older but for ES6 style react component classes you can use this class decorator I created as a drop in solution for defining components that should only render on the client side. I like it better than dropping window checks in everywhere.
import { clientOnly } from 'client-component';
@clientOnly
class ComponentThatAccessesWindowThatIsNotSafeForServerRendering extends Component {
    render() {
        const currentLocation = window.location.href;
        return (
            <div>{currentLocation}</div>
        );
    }
}
https://github.com/peterlazzarino/client-component
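The package's internals aren't shown here, but a rough sketch of how such a class decorator could work (an assumption for illustration, not the library's actual source) is:
// Hypothetical client-only class decorator: on the server (no window),
// the wrapped component renders nothing instead of touching window.
function clientOnly(WrappedComponent) {
  return class ClientOnly extends WrappedComponent {
    render() {
      if (typeof window === 'undefined') return null;
      return super.render();
    }
  };
}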
If you need each new page to open scrolled to the top in a React app, use this code in router.js:
<Router onUpdate={() => window.scrollTo(0, 0)} history={browserHistory}>
Move the window-related code to the mounted() lifecycle hook. This is because the mounted() hook is called on the client side only, where window is available.

Where should I put custom errors in sails.js?

I was wondering what's the best practice and if I should create:
a directory in which declare statically all the errors my application uses, like api/errors/custom1Error
declare them directly inside the files
or put the files directly inside the dir that needs that error, like api/controller/error/formInvalidError
other options!?
A neat way of going about this would be to simply add the errors as custom responses under api/responses. This way even the invocation becomes pretty neat. Although the doc says you should add them directly in the responses directory, I'm sure there must be a way to nest them under, say, responses/errors. I'll try that out and post an update in a bit.
Alright, after a quick search, I couldn't find any way to nest the responses, but you can use a small workaround that's not quite as neat:
Create the responses/errors directory with all the custom error response handlers. Create a custom response and name it something like custom.js. Then specify the response name while calling res.custom().
I'm adding a short snippet just for illustration:
api/responses/custom.js:
var customErrors = {
    customError1: require('./errors/customError1'),
    customError2: require('./errors/customError2')
};

module.exports = function custom (errorName, data) {
    var req = this.req;
    var res = this.res;
    if (customErrors[errorName]) return customErrors[errorName](req, res, data);
    else return res.negotiate();
};
From the controller:
res.custom('authError', data);
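For completeness, a minimal sketch of what one of those handler files might contain (the file name, status code, and payload shape are assumptions):
// api/responses/errors/customError1.js -- hypothetical handler referenced above
module.exports = function customError1(req, res, data) {
    res.status(400);
    return res.json({ error: 'customError1', details: data });
};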
If you don't need logical processing for different errors, you can do away with the whole errors/ directory and directly invoke the respective views from custom.js:
module.exports = function custom (viewName, data) {
    var req = this.req;
    var res = this.res;
    return res.view('errors/' + viewName, data); // assuming you have error views in views/errors
};
(You should first check if the view exists. Find out how on the linked page.)
Although I'm using something like this for certain purposes (dividing routes and so on), there definitely should be a way to include response handlers defined in different directories. (Perhaps by reconfiguring some grunt task?) I'll try to find that out and update if I find any success.
Good luck!
Update
Okay, so I found that the responses hook adds all files to res without checking if they are directories. So adding a directory under responses results in a TypeError from lodash. I may be reading this wrong but I guess it's reasonable to conclude that currently it's not possible to add a directory there, so I guess you'll have to stick to one of the above solutions.

Dart redstone web application

Let's say I configure redstone as follows
#app.Route("/raw/user/:id", methods: const [app.GET])
getRawUser(int id) => json_about_user_id;
When I run the server and go to /raw/user/10 I get raw json data in a form of a string.
Now I would like to be able to go to, say, /user/10 and get a nice representation of this json I get from /raw/user/10.
Solutions that come to my mind are as follows:
First
create web/user/user.html and web/user/user.dart, configure the latter to run when index.html is accessed
in user.dart monitor query parameters (user.dart?id=10), make appropriate requests and present everything in user.html, i.e.
var uri = Uri.parse( window.location.href );
String id = uri.queryParameters['id'];
HttpRequest.getString( new Uri.http(SERVER, '/raw/user/${id}').toString() ).then( presentation );
A downside of this solution is that I do not achieve /user/10-like urls at all.
Another way is to additionally configure redstone as follows:
#app.Route("/user/:id", methods: const [app.GET])
getUser(int id) => app.redirect('/person/index.html?id=${id}');
in this case at least urls like "/user/10" are allowed, but this simply does not work.
How would I do that correctly? Example of a web app on redstone's git is, to my mind, cryptic and involved.
I am not sure whether this has to be explained in connection with Redstone or with Dart alone, but I cannot find anything related.
I guess you are trying to generate html files on the server with a template engine. Redstone was designed mainly to build services, so it doesn't have a built-in template engine, but you can use any engine available on pub, such as mustache. However, if you use Polymer, AngularDart or another framework which implements a client-side template system, you don't need to generate html files on the server.
Moreover, if you want to reuse other services, you can just call them directly, for example:
#app.Route("/raw/user/:id")
getRawUser(int id) => json_about_user_id;
#app.Route("/user/:id")
getUser(int id) {
var json = getRawUser();
...
}
Redstone v0.6 (still in alpha) also includes a new forward() function, which you can use to dispatch a request internally; the response is received as a shelf.Response object, though, so you have to read it:
#app.Route("/user/:id")
getUser(int id) async {
var resp = await chain.forward("/raw/user/$id");
var json = await resp.readAsString();
...
}
Edit:
To serve static files, like html files and dart scripts which are executed in the browser, you can use the shelf_static middleware. See here for a complete Redstone + Polymer example (shelf_static is configured in the bin/server.dart file).

Re-inject content scripts after update

I have a Chrome extension which injects an iframe into every open tab. I have a chrome.runtime.onInstalled listener in my background.js which manually injects the required scripts as follows (details of the API are here: http://developer.chrome.com/extensions/runtime.html#event-onInstalled):
background.js
var injectIframeInAllTabs = function () {
    console.log("reinject content scripts into all tabs");
    var manifest = chrome.app.getDetails();
    chrome.windows.getAll({}, function (windows) {
        for (var win in windows) {
            // win is an index here, so look the window object up before reading .id
            chrome.tabs.getAllInWindow(windows[win].id, function reloadTabs(tabs) {
                for (var i in tabs) {
                    var scripts = manifest.content_scripts[0].js;
                    console.log("content scripts ", scripts);
                    var k = 0, s = scripts.length;
                    for (; k < s; k++) {
                        chrome.tabs.executeScript(tabs[i].id, {
                            file: scripts[k]
                        });
                    }
                }
            });
        }
    });
};
This works fine when I first install the extension. I want to do the same when my extension is updated. If I run the same script on update as well, I do not see a new iframe injected. Not only that, if I try to send a message to my content script AFTER the update, none of the messages go through to the content script. I have seen other people also running into the same issue on SO (Chrome: message content-script on runtime.onInstalled). What is the correct way of removing old content scripts and injecting new ones after chrome extension update?
When the extension is updated, Chrome automatically cuts off all the "old" content scripts from talking to the background page, and an exception is thrown if an old content script does try to communicate with the runtime. This was the missing piece for me. All I did was call the same method as posted in the question from chrome.runtime.onInstalled in bg.js. That injects another iframe that talks to the correct runtime. At some point, the old content script tries to talk to the runtime and fails; I catch that exception and just wipe out the old content script. Also note that each iframe gets injected into its own "isolated world" (isolated worlds are explained here: http://www.youtube.com/watch?v=laLudeUmXHM), hence the newly injected iframe cannot clear out the old lingering iframe.
Hope this helps someone in future!
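For reference, a minimal sketch of the clean-up this answer describes, written in the (old) content script; the message type and iframe id are assumptions, not from the original answer:
// Old content script: ping the runtime and remove our iframe if the runtime is gone.
function cleanUpIfOrphaned() {
  try {
    chrome.runtime.sendMessage({ type: 'ping' }, function () {
      if (chrome.runtime.lastError) removeInjectedIframe();
    });
  } catch (e) {
    // After an update, the old content script's runtime is gone and sendMessage throws.
    removeInjectedIframe();
  }
}

function removeInjectedIframe() {
  var frame = document.getElementById('my-extension-iframe'); // assumed iframe id
  if (frame && frame.parentNode) frame.parentNode.removeChild(frame);
}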
There is no way to "remove" old content scripts (apart from reloading the page in question using window.location.reload, which would be bad).
If you want to be more flexible about what code you execute in your content script, use the "code" parameter of executeScript, which lets you pass in a raw string of JavaScript code. This works well if your content script is just one big function (e.g. content_script_function) that lives in background.js.
in background.js:
function content_script_function(relevant_background_script_info) {
    // this function will be serialized as a string using .toString()
    // and will be called in the context of the content script page
    // do your content script stuff here...
}

function execute_script_in_content_page(info, tabId) {
    chrome.tabs.executeScript(tabId, {
        code: "(" + content_script_function.toString() + ")(" +
              JSON.stringify(info) + ");"
    });
}

// bind(null, info) pre-fills the info argument; for onUpdated the tab id arrives
// as the next argument from the event (onInstalled has no tab id, so you would
// need to look up the target tab(s) yourself in that case).
chrome.tabs.onUpdated.addListener(
    execute_script_in_content_page.bind(null, { reason: 'onUpdated',
        otherinfo: chrome.app.getDetails() }));
chrome.runtime.onInstalled.addListener(
    execute_script_in_content_page.bind(null, { reason: 'onInstalled',
        otherinfo: chrome.app.getDetails() }));
Where relevant_background_script_info contains information about the background page, i.e. which version it is, whether there was an upgrade event, and why the function is being called. The content script page still maintains all its relevant state. This way you have full control over how to handle an "upgrade" event.
