jQuery auto refresh is using up a LOT of browser memory. Is there a way to stop this? I had two divs refreshing every 3 seconds, but I moved them up to 9 and 15 seconds. That helped a little, but the longer the window stays open on my site, the more memory it takes, until finally the browser crashes.
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" ></script>
<script>
var auto_refresh = setInterval(
function ()
{
$('#details2').load('links2.php').fadeIn("slow");
}, 15000); // refresh every 10000 milliseconds</script>
You could try to skip load() and use $.ajax instead. I know load() is an ajax request too, but as I recall it fetches the whole markup. Try requesting a script that does your database calculations and returns the data as JSON. I assume you're currently sending complete HTML along with the data from the database request; try it with JSON instead.
You'll get the data back as an object, like this for example:
{"variable":"foo"}
Then you can fetch the data with a simple each statement.
$.ajax({
    url: "links2.php",
    type: "POST",
    dataType: "json",
    success: function(data) {
        // data is already parsed into an object, since dataType is "json"
        $.each(data, function(key, value) {
            // for {"variable":"foo"}, key is "variable" and value is "foo"
            $("#details2").empty().append(value);
        });
    }
});
I don't think this should leak memory and eventually crash your browser, even if you call it every other second or so. Give it a try and let me know how it goes.
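If you also want a way to stop the refreshing entirely, keep the interval id around and clear it when you're done; a minimal sketch combining the two (the 15-second period is taken from your original code):

var refreshInterval = setInterval(function () {
    $.ajax({
        url: "links2.php",
        type: "POST",
        dataType: "json",
        success: function (data) {
            $.each(data, function (key, value) {
                $("#details2").empty().append(value);
            });
        }
    });
}, 15000);

// later, e.g. on some user action, stop polling altogether:
clearInterval(refreshInterval);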
Good luck!
Try changing it to this:
// ...
$('#details2').empty().load('links2.php').fadeIn('slow');
It may help to explicitly tell jQuery to empty the container first, so it can free up any event handlers etc. (Though it's not clear that there would be any handlers in there...)
Edit: actually, never mind; I checked the jQuery sources, and it looks like calling .html() (which load() does, I'm pretty sure) always calls empty() first anyway.
Although an answer has already been accepted, I should tell you that I had the same problem.
I found the problem in the src of my jQuery <script> tag. I was using the jQuery site's URL as my source, and yup, it drove my CPU usage up to 99%. But then I downloaded the whole jQuery script and saved it in my website directory; I used that as my source, and then there was no problem with CPU or memory usage. Try that too.
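For example, assuming you saved the file as js/jquery.min.js in your site root, the include becomes:

<script type="text/javascript" src="/js/jquery.min.js"></script>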
I have been trying to make some changes to the code below. At first I discovered that a function that returns a promise, in which a query is sent to the db to be executed, was being run twice instead of once. I checked the query and the function itself just to make sure. Then I removed all code from inside io.of() except the socket.on() handlers, which didn't seem to be involved in this matter. After removing the code I mentioned, I put a simple console.log() statement inside, and it also exhibited the 'being executed twice' problem.
io.of('....').on('connection', socket => {
    console.log("hello");
    //...
    //......
    // below are socket.on('...')... and nothing more
})
Adding this to the HTML, and moving the code into a socket.on('load') handler inside io.of(), fixed it for me:
$(document).ready(function () {
    socket.emit('load');
});
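On the server side, that looks something like this (a sketch, reusing the namespace placeholder from the question; runQuery and the 'data' event are hypothetical stand-ins for your own promise-returning function and reply event):

io.of('....').on('connection', socket => {
    socket.on('load', () => {
        // runs once per explicit 'load' emit from the client,
        // instead of once per (possibly repeated) transport connection
        runQuery().then(result => socket.emit('data', result)); // hypothetical names
    });
});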
I am currently writing a userscript for website A to access the contents of another website, B. So I tried to use GM_xmlhttpRequest to do it. However, a variable on B is written to a window property, e.g. window.var or responseContent.var.
However, when I try to read window.var, the output is undefined, which means the properties of the window object cannot be read successfully. I guess the window object is referring to website A and not website B, so the result is undefined (there is no window.var on A).
I am sure that GM_xmlhttpRequest has successfully read the content of website B, because I added a console.log to inspect response.responseText. I have also visited website B directly in the browser and read window.var there successfully.
GM_xmlhttpRequest({
    method: "GET",
    url: url,
    headers: {
        referrer: "https://A.com"
    },
    onload: function (response) {
        // console.log(response.responseText);
        let responseContent = new DOMParser().parseFromString(response.responseText, "text/html");
        let titleDiv = responseContent.querySelector("title");
        if (titleDiv != null) {
            if (titleDiv.innerText.includes("404")) {
                console.log("404");
                return;
            } else {
                console.log(responseContent.var);
                console.log(window.var);
            }
        }
    },
    onerror: function (e) {
        console.log(e);
    }
});
I would like to retrieve the content of window.var on website B and show it in the console.log on A.
Please help me solve the problem. Thank you in advance.
@wOxxOm's comments are on point. You cannot really get the executed JavaScript of another website like that. One way to go around it is to use an <iframe> and postMessage, just like @wOxxOm said. But that may fail if the other website has a policy against iframes.
If this userscript is just for your own use, another way is to have two scripts, one for each website, and have both sites open in browser tabs. Then again you can use postMessage to have those two scripts communicate the information. A dirty solution for your second userscript would be to just post the variable info at a regular interval:
// Userscript for website-b.com
// needs @grant unsafeWindow to read window.var
setInterval(() => { postMessage(unsafeWindow.var, "website-a.com"); }, 1000);
That would send an update of var's value every second. It's not very elegant, but it's simple and it works. For a more elegant solution, you may want to first postMessage from website A and have that trigger the postMessage(unsafeWindow.var, "website-a.com") in response. Working with that further, you will soon find yourself inventing an asynchronous communication protocol.
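Whichever channel ends up delivering the message, the listening side on website A would look along these lines (a sketch; checking event.origin matters so the script doesn't act on messages from arbitrary pages):

// Userscript for website-a.com
window.addEventListener("message", (event) => {
    if (event.origin !== "https://website-b.com") return; // ignore other origins
    console.log("var from B:", event.data);
});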
Alternatively, if the second website is simple, you can try to parse the value of var directly from the HTML, or from wherever the value is coming from. That's the preferred solution, but it requires some reverse engineering on your part.
As the title says, I'm trying to intercept script requests from the user's page, make a GET request to the script URL from the background, add a bit of functionality, and send it back to the user.
A few caveats:
I don't want to do this with every script request
I still have to guarantee that the script tags are executed in the original order
So far I have come up with two solutions, neither of which works properly. The basic code:
chrome.webRequest.onBeforeRequest.addListener(
    function handleRequest(request) {
        // First I make the GET request for the script myself SYNCHRONOUSLY,
        // because the webRequest API cannot handle async.
        const syncRequest = new XMLHttpRequest();
        syncRequest.open('GET', request.url, false);
        syncRequest.send(null);
        const code = syncRequest.responseText;
    },
    { urls: ['<all_urls>'] },
    ['blocking'],
);
Now, once we have the code, there are two approaches I've tried for inserting it back into the page.
I send the code through a port to a content script, which adds it to the page inside a <script></script> tag. Along with the code, I also send an index to make sure the scripts are inserted back into the page in the correct order. This works fine on my dummy website, but it breaks on bigger apps, like YouTube, where it fails to load the images of most videos. Any tips on why this happens?
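For reference, the content-script side of that ordering logic could look roughly like this (a sketch; the {index, code} message shape is an assumption, since the original port code isn't shown):

// content script: re-insert scripts in their original order
const pending = new Map(); // index -> code, for scripts that arrived early
let nextIndex = 0;

chrome.runtime.onConnect.addListener((port) => {
    port.onMessage.addListener(({ index, code }) => {
        pending.set(index, code);
        // flush every script that is now ready, strictly in order
        while (pending.has(nextIndex)) {
            const el = document.createElement('script');
            el.textContent = pending.get(nextIndex);
            document.documentElement.appendChild(el);
            pending.delete(nextIndex);
            nextIndex++;
        }
    });
});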
I return a redirect to a data URL:
if (condition) return { cancel: false }
else return { redirectUrl: 'data:application/javascript; charset=utf-8,'.concat(alteredCode) };
This second option breaks the code formatting, sometimes removing the spaces, sometimes cutting the code short. I'm not sure of the reason behind this behavior; it might have something to do with the data URL spec.
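One thing I suspect (but haven't confirmed) is that characters like # and % have special meaning in a URL, so the body may need to be percent-encoded first:

else return { redirectUrl: 'data:application/javascript;charset=utf-8,' + encodeURIComponent(alteredCode) };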
I'm stuck. I've researched pretty much every related answer on this website and couldn't find anything. Any help or information is greatly appreciated!
Thanks for your time!!!
I've written a Firefox add-on for the first time, and it was reviewed and accepted a few months ago. This add-on frequently calls a third-party API. Meanwhile it has been reviewed again, and now the way it calls setInterval is criticized:
setInterval called in potentially dangerous manner. In order to prevent vulnerabilities, the setTimeout and setInterval functions should be called only with function expressions as their first argument. Variables referencing function names are acceptable but deprecated as they are not amenable to static source validation.
Here's some background on the »architecture« of my add-on. It uses a global object which is not much more than a namespace:
if ( 'undefined' == typeof myPlugin ) {
    var myPlugin = {
        //settings
        settings : {},
        intervalID : null,
        //called once on window.addEventListener( 'load' )
        init : function() {
            //load settings
            //load remote data from cache (file)
        },
        //get the data from the API
        getRemoteData : function() {
            // XMLHttpRequest to the API
            // retrieve data (application/json)
            // write it to a cache file
        }
    };

    //start
    window.addEventListener(
        'load',
        function load( event ) {
            window.removeEventListener( 'load', load, false ); // only needed once
            myPlugin.init();
        },
        false
    );
}
So this may not be best practice, but I keep on learning. The interval itself is set inside the init() method like so:
myPlugin.intervalID = window.setInterval(
    myPlugin.getRemoteData,
    myPlugin.settings.updateMinInterval * 1000 //milliseconds!
);
There's one more place where the interval is set: an observer on the settings (preferences) clears the current interval and sets it again, exactly as shown above, whenever a change to the updateMinInterval setting occurs.
If I understand this correctly, a solution using »function expressions« should look like:
myPlugin.intervalID = window.setInterval(
    function() {
        myPlugin.getRemoteData();
    },
    myPlugin.settings.updateMinInterval * 1000 //milliseconds!
);
Am I right?
What is a possible scenario for »attacking« this code that I've overlooked so far?
Should setInterval and setTimeout basically be used differently in Firefox add-ons than in »normal« frontend JavaScript? I ask because the documentation of setInterval shows exactly this pattern, with declared functions, in some of its examples.
Am I right?
Yes, although I imagine by now you've tried it and found it works.
As for why you are asked to change the code, it's because of the part of the warning message saying "Variables referencing function names are acceptable but deprecated as they are not amenable to static source validation".
This means that unless you follow the recommended pattern for the first parameter, it is impossible to automatically determine the outcome of executing the setInterval call.
Since setInterval is susceptible to the same kinds of security risks as eval(), it is important to check that the call is safe, even more so in privileged code such as an add-on. The warning therefore serves as a red flag, prompting the add-on reviewer to carefully evaluate the safety of this line of code.
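To illustrate (my own example, not from the warning text): setInterval also accepts a string as its first argument, which gets evaluated like eval(), and a variable argument can't be statically proven safe:

// dangerous: the string is compiled and run like eval(); if any part of it
// comes from untrusted input, this becomes script injection
setInterval("myPlugin.getRemoteData()", 1000);

// deprecated per the warning: a validator cannot statically tell what `handler` holds
var handler = myPlugin.getRemoteData;
setInterval(handler, 1000);

// preferred: a function expression whose body can be inspected in place
setInterval(function() { myPlugin.getRemoteData(); }, 1000);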
Your initial code should be accepted and cause no security issues but the add-on reviewer will appreciate having one less red flag to consider.
Given that the ability to automatically determine the outcome of executing JavaScript is useful for performance optimisation as well as for automatic security checks, I would wager that a function expression is also going to execute more quickly.
I am working on a node.js project (using Wikistream as a basis, so not totally my own code) which streams real-time Wikipedia edits. The code breaks each edit down into its component parts and stores it as an object (see the gist at https://gist.github.com/2770152). One of the parts is a URL. I am wondering if it is possible, when parsing each edit, to scrape the URL that shows the differences between the pre-edit and post-edit Wikipedia page, grab the difference (inside a span with class 'diffchange diffchange-inline', for example), and add that as another property of the object. Right now it could just be a string; it does not have to be fully structured.
I've tried using nodeio and have some code like this (I am specifically trying to only scrape edits that have been marked in the comments (m[6]) as possible vandalism):
if (m[6].match(/vandal/) && namespace === "article"){
    nodeio.scrape(function(){
        this.getHtml(m[3], function(err, $){
            //console.log('getting HTML, boss.');
            console.log(err);
            var output = [];
            $('span.diffchange.diffchange-inline').each(function(scraped){
                output.push(scraped.text);
            });
            vandalContent = output.toString();
        });
    });
} else {
    vandalContent = "no content";
}
When it hits the conditional statement, it scrapes once and then the program exits. It does not store the desired content as a property of the object. If the condition is not met, it does store a vandalContent property set to "no content".
What I am wondering is: is it even possible to scrape like this on the fly? Is the scraping bogging the program down? Are there other suggested ways to get a similar result?
I haven't used nodeio yet, but the signature looks like an async callback, so from the program-flow perspective it happens in the background and therefore does not block the next statement from occurring (the next statement being whatever comes after your if block).
It looks like you're trying to do this sequentially, which means you need to either rethink what you want your callback to do, or force it to be sequential by putting the whole thing in a while loop that exits only when you have vandalContent (which I wouldn't recommend).
For a test, try doing a console.log on your vandalContent in the callback and see what it spits out.
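Sketching that out (my own reconstruction; edit and storeEdit are hypothetical stand-ins for however you build and persist the edit object), the point is that the scraped value only exists inside the callback, so the object has to be finished there:

if (m[6].match(/vandal/) && namespace === "article") {
    nodeio.scrape(function () {
        this.getHtml(m[3], function (err, $) {
            var output = [];
            $('span.diffchange.diffchange-inline').each(function (scraped) {
                output.push(scraped.text);
            });
            // the async result is only available here, so attach and store it here
            edit.vandalContent = output.toString();
            storeEdit(edit); // hypothetical: whatever persists the object
        });
    });
} else {
    edit.vandalContent = "no content";
    storeEdit(edit);
}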