Recording Line Numbers of Executed Paths - google-chrome-extension

For a Google Chrome extension, is it possible to record the sequence of line numbers (with file names, and in the case of JavaScript the current variable values) that are executed while a page's HTML/CSS/JavaScript runs?

This is certainly possible but exceedingly difficult.
One can, in principle, implement this with the chrome.debugger API, which gives an extension the same access to a page as DevTools.
However, that API essentially consists of sending almost-raw Remote Debugging Protocol commands, and there aren't many samples to learn from; the Debugger domain of the protocol is the relevant part.
So it's possible, but it's a lot of work, and additionally it's going to slow execution to a crawl.
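For a flavour of what's involved, here is a minimal, untested sketch of a background script (the "debugger" permission is required) that single-steps a tab and logs each pause location; the per-statement stepping is exactly what makes this approach so slow. Capturing variable values would additionally mean issuing Runtime.getProperties against each frame's scope chain on every pause.

    // Minimal sketch: attach to a tab, enable the Debugger domain, and
    // log the top frame's location on every pause while single-stepping.
    function traceTab(tabId) {
      var target = { tabId: tabId };
      chrome.debugger.attach(target, "1.3", function () {
        chrome.debugger.sendCommand(target, "Debugger.enable");
        chrome.debugger.sendCommand(target, "Debugger.pause");
      });
      chrome.debugger.onEvent.addListener(function (source, method, params) {
        if (source.tabId !== tabId || method !== "Debugger.paused") return;
        var top = params.callFrames[0];
        console.log("script " + top.location.scriptId +
                    " line " + top.location.lineNumber);
        // Each step fires another Debugger.paused event, so this loops.
        chrome.debugger.sendCommand(target, "Debugger.stepInto");
      });
    }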
As such, this is not a good problem to solve with an extension. It's better served by modifying the Chromium code itself, perhaps building on its existing debugging capabilities. Basically, to output this information efficiently you need to get down to browser internals.

Related

use greasemonkey/tampermonkey to write simple data to "cloud" (probably something like pastebin)

I'm using Tampermonkey to collect data from pages I visit and store it locally and persistently across browser sessions. This is working fine. However, I'm limited to using the script on the same computer. I'd like to be able to use the same script on another computer and update the same collected data file.
I'd like to use the simplest method possible to read, edit, and store a single text file from multiple computers.
An actual functioning "Hello world!" script would be fantastic. I've mucked around with the pastebin API, but all the help applies to PHP code, and there seems to be a lot of somewhat confusing overhead. I don't need to examine the contents of the pasted data in a useful editor. The data is never to be interpreted as code or HTML. I don't need an SQL database. This is just a project for fun, so I don't need to worry about privacy issues or elegant, modular code.
I just need a place to stash some bytes, and change them frequently.
What's the simplest solution?
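For illustration, the kind of skeleton I'm imagining looks like this; the stash URL is completely made up, and a real service would have its own URL scheme and API key:

    // ==UserScript==
    // @name         cloud stash hello world
    // @match        *://*/*
    // @grant        GM_xmlhttpRequest
    // @connect      example.com
    // ==/UserScript==

    // "https://example.com/api/mystash" is a made-up endpoint standing in
    // for whatever paste-style service ends up being used.
    var STASH_URL = "https://example.com/api/mystash";

    function saveStash(text) {
      GM_xmlhttpRequest({
        method: "POST",
        url: STASH_URL,
        data: text,
        headers: { "Content-Type": "text/plain" },
        onload: function (res) { console.log("saved: " + res.status); }
      });
    }

    function loadStash(callback) {
      GM_xmlhttpRequest({
        method: "GET",
        url: STASH_URL,
        onload: function (res) { callback(res.responseText); }
      });
    }

    loadStash(function (text) { console.log("Hello world! stored: " + text); });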

Which points should be noted and observed while building a web application so that it functions well in most web browsers?

I'm working especially with Java web applications (mostly with JSF, JavaServer Faces). I'm less concerned with the rest of the technologies.
Since different web browsers behave more or less differently from one another, any web application should be designed in such a way that it can be rendered and executed in a well-defined way by most browsers (though perhaps not all). Which points should be kept in mind so that a web application behaves almost identically in most browsers?
What are the major differences among different browsers which should be noted by web application developers?
You have to check all of these points before developing web applications with any language...
Almost all web developers (ahem! – perhaps that should read “quite a lot of web developers”) are aware of the need to check how their site looks in a variety of browsers. How far you go obviously depends on the resources available – not everyone is in a position to check Windows, Mac, Unix and Linux platforms. The minimum test would probably be:
Firefox, as that has the best standards compliance and is the second most-used browser;
Internet Explorer for Windows – currently the most widely used browser. It is essential to check both versions 6 and 7, as version 7 fixed quite a lot of bugs in 6 but introduced a new set of its own. (Microsoft is however still kicking developers in the teeth by not making it possible to install both versions on the same computer; you will either need two computers or one of the work-arounds available on the net.) Version 5 should preferably also be checked; as of spring 2008 the number of users is not yet negligible. However it is now uncommon enough that you needn't worry about cosmetic issues; as long as the site is readable that should be sufficient.
Opera – growing in popularity due to its speed and pretty good standards compliance.
For some time I also recommended checking Netscape 4, as it often produced radically different results from any other browser and was very popular for a long time. However, the number of users of this bug-ridden browser is now so small (under 0.1% and decreasing) that it can probably safely be ignored.
Check printed pages
Print some of the pages on a normal printer (i.e. with a paper size of A4 or Letter) and check that they appear sensibly. Due to the somewhat limited formatting options available for printing, you probably can’t achieve an appearance comparable to a document produced by a word-processor, but you should at least be able to read the text easily, and not have lines running off the right-hand side of the page. It is truly extraordinary how many site authors fail to think of this most elementary of operations.
You should also consider using CSS to adjust the appearance of the page when printed. For example you could – probably should – suppress the printing of information which is not relevant to the printed page, such as navigation bars. This can be done with the "@media print" rule, or by loading a separate stylesheet with media="print".
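For example, a minimal print stylesheet along these lines (the class names are purely illustrative) suppresses navigation and keeps the text readable:

    /* Hide screen-only chrome when printing; .navbar and .sidebar are
       illustrative class names. */
    @media print {
      .navbar, .sidebar { display: none; }
      body { color: #000; background: #fff; }   /* plain ink on paper */
      a[href]:after { content: " (" attr(href) ")"; } /* print link targets */
    }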
Some sites provide separate “printer friendly” versions of their pages, which the user can select and print. While this may occasionally be necessary as a last resort, it significantly increases the amount of work needed to maintain the site, is inconvenient for the reader and shouldn’t usually be needed.
Switch Javascript off
There are unfortunately quite a number of Internet sites which abuse Javascript by, for example, generating unwanted pop-ups and irritating animations. There are also a number of Javascript-related security holes in browsers, especially Internet Explorer. As a result a lot of readers switch Javascript off – indeed I often do myself. (I have a page giving the reasons in more detail.) Some organisations even block the usage of Javascript completely. Furthermore, few, if any, search engines support Javascript.
It is therefore important to check that your site still functions with Javascript disabled. A lot of sites rely – quite unnecessarily – on Javascript for navigation, with the result that the lack of Javascript renders the site unusable.
Clearly if you need it for essential content, that functionality will be lost. But there is no reason why the basic text of the site should be unavailable.
Avoid nearly-meaningless messages like “Javascript needed to view this site”. If you have something worth showing, tell the user what it is, e.g. “enable Javascript to see animation of solar system”.
Switch plug-ins off
The considerations for plug-ins (such as Flash or Java) are very similar to those for Javascript above. Check the site with any plug-ins disabled. The basic text and navigation should still work.
Interest the reader sufficiently, and he might just go to the trouble of downloading the plug-in. Greet him with a blank screen or a "You need Flash to read this site" message and he will probably go away, never to return.
Switch images off
If scanning a number of sites quickly for information, many readers (including myself) switch images off, for quick loading. Other people cannot view images. So switch images off and check that the site is readable and navigable. This means, in particular, checking that sensible ALT texts have been provided for images. (This check is similar to using a text browser, but not quite the same).
It's worth taking a look at this link for more info.

Designing a self-recallable/destructible email program

This is one of my assignments and I need some help getting started. The basic idea behind the assignment is that I have to design a self-destructible email program that is capable of destroying the message after a given time duration (n).
Speaking of self-destructible emails, there are quite a few services on the internet offering the same thing. But what they do is just convert the email message into an image and store it on their servers, then send the message with the image attached inline. Once they receive a hit on that image (which means the message has been opened), they simply delete the image and the inline image link breaks! BOOM!
IMO, that's not what a self destructing email should be like. Nevertheless, in my case, I have to take care of following points:
I have to do it for TEXT. No image, nothing else.
I have to assume that the systems used throughout the process will be UNIX based (I don't know how that is going to make a difference).
There are also some hints regarding the usage of various network layers in solving the problem.
This isn't supposed to be done "in general". What I mean by that is, I have to do that ONLY for one/two UNIX systems. Let me put it this way, all I have is two UNIX systems and nothing else. Now I want to create a program (in UNIX itself) that would do that self-destructing thing. I have total control of protocols and the network layers and I have to code anything and everything required at any level.
This is more geared towards the StackOverflow side of things but I have no problem getting you started.
The first thing I'd like to point out is that you seem to be heavily over-analyzing this. The services that offer image-based self-destructing e-mails are simply deleting a file after it is viewed. All you need to do differently is put that text in a file and get its contents before deleting it. This fits well with the UNIX philosophy, since so many programs already make use of flat files.
The part you seem to have left out is how you are building this. You describe it as an e-mail program and then talk about web services. Is this a web-based project or a program you are designing for Linux? Do you have to code everything from scratch or can you parse output from Linux utilities to grab the mail? These kinds of things really would simplify the process.
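As a rough sketch of the flat-file idea, assuming the message has already been delivered into a file (say by a local mail transfer agent; the path below is made up):

    // Node.js sketch: read the message once, then delete it so any later
    // read fails -- crude self-destruction.
    var fs = require("fs");

    function readOnce(path) {
      var text = fs.readFileSync(path, "utf8"); // first (and only) read
      fs.unlinkSync(path);                      // destroy after viewing
      return text;
    }

    // Or destroy unread messages after n seconds:
    function destroyAfter(path, n) {
      setTimeout(function () {
        fs.unlink(path, function () {}); // ignore "already gone" errors
      }, n * 1000);
    }

    console.log(readOnce("/var/mail/drop/message-42.txt")); // made-up path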

Designing an AJAX driven Quicksilver-style search with multiple search plugins for a website

I'm trying to build a Quicksilver-style search system for the internal web app that we develop at work. There are plenty of examples of really cool front ends for this using jQuery or MooTools or whatever. None of these examples really talk about the back end. As far as I can tell, these examples assume the back end is searching a single table or, at least, performing a single query. What I want to do is design a system where you can, literally, type anything at all at it and find what you were looking for. Ideally, I want to be able to just write plugins for this system, drop them in, and start searching.
I have a solution where the back-end uses the observer pattern to send the query to different plugins for each type of search. However, this will return the results from all the plug-ins as one chunk. This could get noticeably slow if there are many kinds of searches. I'd like it to be quick and return the results in a more asynchronous fashion where results are displayed as they come in, a la OS X's Spotlight or Quicksilver.
Another solution is to write, on the fly, a javascript array with the names of the plug-ins to be used. I could then fire off separate calls to the server with the query, one for each plug-in. Something about this solution seems... off to me. I can't exactly put my finger on it though.
So, my question is: does anyone have any better solutions for building a plug-in based search system where the individual search types are not known before the page is loaded and the results are returned ASAP?
> Another solution is to write, on the fly, a javascript array with the names of the plug-ins to be used. I could then fire off separate calls to the server with the query, one for each plug-in. Something about this solution seems... off to me. I can't exactly put my finger on it though.
This does not seem like that bad of an option. It gives you everything you need.
You need search results to come back as soon as they can.
It allows you to use your existing plugin architecture, I believe.
It follows the KISS principle.
It is not a new solution, but I think that it is the easiest.
You could do a Comet-style solution that uses long polling in Ajax to fetch results for the search. Give the script one place to call that hands back the results of all the plugins as they come in. This way the quick results get displayed sooner.
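Client-side, the long-polling loop can be as small as this; the endpoint and the response shape ({done, plugin, results}) are assumptions:

    // Long-polling sketch: the server holds each request open until some
    // plugin finishes, then answers with one batch of results.
    function pollResults(queryId, onResults) {
      fetch("/search/results?id=" + encodeURIComponent(queryId))
        .then(function (res) { return res.json(); })
        .then(function (batch) {
          if (batch.done) return;                 // every plugin has reported
          onResults(batch.plugin, batch.results); // render this batch now
          pollResults(queryId, onResults);        // re-poll for the next one
        });
    }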
Having an array of plugins is an option, but some browsers are limited to 2 concurrent requests, which would cap the number of requests being kicked off and could force a fast process to wait behind slow ones.
It sounds like you are getting close with the back end you have; just make it serve up the data as it comes in. This will also let you add and remove plugins on the fly without affecting the JS, so no worries about cached array lists.
A few thoughts on the back end, from the comments: build a work queue so search requests can be farmed out to many workers. The queue could be implemented in a DB or through a web service, so you could use other languages or even other computers to do the work for each search. Each work item would need an id to pass back so the data is directed at the correct client. You would also want a way to remove jobs from the queue, or at least mark all work for a client as void if that client goes away. (You should be able to detect this if you are using long polling.)
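A minimal in-memory version of such a queue might look like the following; the field names are assumptions, and a real implementation would live in a DB or message broker so other machines can pull work:

    // Jobs carry a clientId so results can be routed back, and a voided
    // flag so a departed client's work can be skipped cheaply.
    var queue = [];

    function enqueue(clientId, plugin, query) {
      queue.push({ clientId: clientId, plugin: plugin, query: query, voided: false });
    }

    function nextJob() {
      while (queue.length && queue[0].voided) queue.shift(); // skip dead work
      return queue.shift(); // undefined when empty
    }

    function voidClient(clientId) {
      queue.forEach(function (job) {
        if (job.clientId === clientId) job.voided = true;
      });
    }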
Connection limits:
IE7, HTTP/1.0: 4
IE7, HTTP/1.1: 2
IE8, HTTP/1.0: 6
IE8, HTTP/1.1: 6
From all the comments and talk it seems like you want to build this on the front end.
Don't build an array of plugins to call: it forces you to worry about caching when changing out plugins. What you should do instead is build a bootstrap system: a simple Ajax call that gets the list of plugins with their URLs to call. This lets you turn plugins on and off from a central location, and it will work better.
You will have to make each of your plugins into a web service instead of a plugin, so each can be called independently. Make sure to heed mediasalve's link about the number of connections, because browsers will limit you if you don't work around it.
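Putting the bootstrap and the fan-out together, a sketch (the URLs and JSON shapes are assumptions):

    // One call fetches the active plugin list, then the query fans out to
    // each plugin service so fast searches render without waiting on slow
    // ones. "/search/plugins" and the {name, url} shape are made up.
    function runSearch(query, onResults) {
      fetch("/search/plugins")
        .then(function (res) { return res.json(); })
        .then(function (plugins) {
          plugins.forEach(function (plugin) {
            fetch(plugin.url + "?q=" + encodeURIComponent(query))
              .then(function (res) { return res.json(); })
              .then(function (results) { onResults(plugin.name, results); });
          });
        });
    }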

online trading bot

I want to code a trading bot for Magic: The Gathering Online. This bot should wait until someone offers to trade, accept, look through the cards available from the other trader (the information is shown on screen), and perform other similar functions. I have several questions:
How can it know that someone is offering a trade?
How can it know that the other trader has some card (the information is stored in pictures)?
I just cannot imagine right now how to do it. I have no experience with this; until now I've been coding only console programs for my physics necessities.
First, you should note that some online games forbid bots, as they can give certain players unfair advantages. The MTGO Terms of Service do not seem to say anything about this, though they do put restrictions on anything that might negatively impact the service. They have also said that there is a possibility they will add an API in the future, so they don't seem to be against the idea of automation, but they are not supporting it at the moment. Tread carefully here, but it looks like it should be OK to write a bot as long as it is not harmful or abusive. This is not legal advice, and it would be a good idea to ask the folks who run MTGO for permission. Edit: since I wrote this, it has been pointed out that there are already lots of bots, so there should be no problem writing one.
Assuming that it is not forbidden by the terms of service, but they do not have an API, you will have to find a way to detect what's going on, and control the game automatically. There's a pretty good series of articles on writing poker bots (archived copy), which has some good information on how to inject a DLL into an application, scrape the screen, and control the application. That might provide you with a starting point for doing this sort of thing.
You might also want to look for tools that other people have already written for doing this. It looks like there are several existing MTGO bots, but they all seem a bit sketchy (there have been some reports of them stealing passwords), so be careful there.
Edit
Since this answer still seems to be getting upvotes, I should probably update it with some more useful information. Since writing this, I have found a great UI automation system called Sikuli. It allows you to write programs in Python that automate a GUI. It includes image recognition features which make it very easy to recognize buttons, cards, and other UI elements; you just take a screenshot, crop it down to include just the thing you're interested in, and do fuzzy image matching (so that changing backgrounds and the like doesn't cause the match to fail). It even includes a custom IDE that allows you to embed those screenshots directly in your source code, so you can see exactly what the code is looking for. Here's an example from the documentation (apologies for the code formatting, doing images inline in code is not easy given StackOverflow's restricted subset of HTML):
    # Note: in the original, the Pattern() and switchApp() calls embedded
    # inline screenshot images, which cannot be reproduced here as text.
    def resizeApp(app, dx, dy):
        switchApp(app)
        corner = find(Pattern().targetOffset(3, 14))
        drop_point = corner.getTarget().offset(dx, dy)
        dragDrop(corner, drop_point)

    resizeApp("Safari", 50, 50)
This is much easier to get started with than the techniques mentioned in the article linked above, of injecting a DLL into the process you are debugging. Sikuli runs entirely at the UI level, so you never have to modify the program you are automating or worry about changes to the internals breaking your script.
One thing it is a bit poor at is handling text; it has OCR features, but they aren't all that good. If the text is selectable, however, you can select the text, copy it, and then look directly at the clipboard.
If I were to write a bot to automate something without a good API or text-based interface, Sikuli is probably the first tool I would reach for.
This answer is constructed from my comments.
What you are trying to do is hard, any way you try and do it.
Arguably the easiest way is to totally mimic the user: the application presses buttons, moves the mouse, and so on. The downside is that it depends on being able to recognise what's on screen.
This is easier if you can alter the game's files, since you can then just re-skin (change the image/texture of) the required cards to a single unique colour.
The major downside is that you have to keep the game as the top-level window, or run it in a virtual machine. Neither is ideal.
Another method is to read the process's memory. You may be able to find a published list of memory locations, which would simplify things; otherwise it involves a lot of hard work with a debugger to deduce the memory addresses. It also helps (a lot) to be able to read assembly.
The third method is to intercept the packets and alter them. This is easier than the method above, as (at least for me) it is easier to reverse-engineer the protocol when there is less information to deal with. It is just a matter of setting up a packet sniffer, performing an action with one variable different (for example, the card), and comparing the differences.
The thing you need to check is that you are not breaking the EULA. I don't know how this game works, but most games I have come across have a EULA that prohibits (i.e. you get banned for) doing any of the things I have mentioned.
