I am planning to build an application that monitors the websites visited by users and performs some calculations on that data.
I have developed a Google Chrome extension that, for every website that is opened, will send the URL to an NPAPI plugin.
The problem is with the second part. Is it possible for an NPAPI plugin to pass the information it received from the extension to another application? I want two-way communication between my application and the NPAPI plugin, so that depending on the processing performed by the application, the NPAPI plugin can tell the extension to change the URLs it should send.
PS: I am using FireBreath to develop the NPAPI plugin, if that makes it easier to answer my question.
I would really like some ideas as to how this can be implemented. I am new to programming.
Any help is greatly appreciated.
NPAPI plugins have unrestricted access to the local machine, so your plugin is running its code like any other application. What you're looking for, then, is a way for two processes to communicate, a.k.a. inter-process communication (IPC). There are quite a few ways of doing that; you can find some here. The most appropriate one depends on your actual needs, but when searching, don't let the NPAPI context bother you: you're just trying to get two processes to talk.
Shared memory is quite simple to use. Since you're new to programming, I think this is the way to go. You can find an example here.
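To give you an idea of how little code this takes, here is a minimal sketch of named shared memory on Windows (assuming a Windows deployment; the region name `Local\\UrlShareDemo` and the size are placeholders I made up). In practice you would also add a mutex or event to signal when new data is available:

```cpp
// Minimal sketch of named shared memory on Windows: both processes map the
// same named region and can read/write it. Synchronization is omitted.
#include <windows.h>
#include <cstring>
#include <iostream>

int main()
{
    const char* kName = "Local\\UrlShareDemo";   // placeholder name, pick your own
    const DWORD kSize = 4096;

    // Create (or open, if it already exists) the shared-memory region.
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE, 0, kSize, kName);
    if (!hMap) return 1;

    char* view = static_cast<char*>(
        MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, kSize));
    if (!view) { CloseHandle(hMap); return 1; }

    // Writer side: copy a URL into the region.
    std::strcpy(view, "http://example.com/");

    // Reader side (in the other process) would map the same name and read it:
    std::cout << view << std::endl;

    UnmapViewOfFile(view);
    CloseHandle(hMap);
    return 0;
}
```

The plugin would play one role (say, the writer) and your application the other; add an event or mutex so the reader knows when a new URL has arrived.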
I am currently using Python 3.2. I am planning to use PAMIE to simulate some webpages. Will that work? Is PAMIE the best way to simulate webpages with Python? If yes, what else do I need to run PAMIE? I don't see a lot of tutorials/online help on PAMIE. Is it because it's not used widely? Please advise...
P.A.M.I.E. is not for simulating webpages, but for automating access to them. Since it requires Internet Explorer, I would say that it is almost by definition not the best way to do it. It certainly isn't used much, and this is the first time I've heard of it after 10 years of Python web development.
A much more commonly used solution is Webdriver, which supports IE, Firefox and Chrome.
Another solution, if you don't need Javascript support, is to use mechanize. This doesn't control a web browser, but is an implementation of a "headless" web browser and can be good for making test suites.
Okay, here's a complicated one I've been breaking my head over all week.
I'm creating a self-service system, which allows people to identify themselves by barcode or by smartcard and then perform an arbitrary action. I run a Tomcat application container locally on each machine to serve the pages and connect to the external resources that are required. It also allows me to serve webpages which I can then use to display content on the screen.
I chose HTML as a display technology because it gives a lot of freedom as to how things could look. The program also involves a lot of Javascript to interact with the customer and hardware (through a RESTful API). I picked Javascript because it's a natural complement to HTML and is supported by all modern browsers.
Currently this system is being tested at a number of sites, and everything seems to work okay. I'm running it in Chrome's kiosk mode, which serves me well, but there are a number of downsides. Here is where the problems start. ;-)
First of all, I am petrified that Chrome's auto-update will eventually break my Javascript code. Secondly, I run a small Chrome plugin to read smartcard numbers, and every time the workstation is shut down incorrectly, Chrome's user profile becomes corrupted and the extension needs to be set up again. I could easily fix the first issue by turning off auto-update, but it complicates my installation procedure.
Actually, having to install any browser complicates my installation procedure.
I did consider using Internet Explorer because it's basically everywhere, but with the three dominant versions out there I'm not sure if it's a good approach. My Javascript is quite complex, and making it work on older versions will be a pain, not to mention having to write an ActiveX component for my smartcards.
This is why I set out to make a small browser wrapper that runs in full screen and can read smartcard numbers. This also has downsides. I use Qt, and Qt's QtWebKit weighs a hefty 10 MB and adds a number of extra dependencies to my application; the wrapper itself is essentially just a full-screen web view, along the lines of the sketch below.
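For reference, this is roughly all the wrapper amounts to (a minimal sketch assuming Qt 4 with the QtWebKit module; the kiosk URL is a placeholder, and the smartcard-reading part is left out):

```cpp
// Minimal full-screen browser wrapper sketch (Qt 4 + QtWebKit assumed).
#include <QApplication>
#include <QWebView>
#include <QUrl>

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);

    QWebView view;
    // Placeholder URL; the real wrapper points at the local Tomcat instance.
    view.load(QUrl("http://localhost:8080/kiosk"));
    view.showFullScreen();   // kiosk-style: no window chrome

    return app.exec();
}
```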
It really feels like I have to pick from three options that all have downsides, and it is something I should have investigated before I wrote the entire program. I guess it is a lesson well learnt.
On to the questions:
Is there a pain free way out of this situation? (probably not)
Is there a browser I can depend on without adding tens of megabytes to my project?
Is there another alternative you could suggest?
If you do not see another way out, which option would you pick?
I have backend software that needs to be able to communicate with a Gecko-based web browser (and vice versa). What is the best way to realize this? Since HTTP is rather one-way (with the exception of e.g. reverse AJAX, which I consider to be quite "hacky"), I am wondering how to do this.
Would creating an NPAPI-based plugin be an option? Based on the data exchanged between the browser and backend, the browser needs to manipulate the DOM of a webpage. The manipulations need to be quite dynamic and communication speed is an important requirement.
I would be glad for any help pointing me in the right direction or for any useful resources that might be worth reading!
Writing browser plugins isn't exactly trivial, so first check whether you can use alternatives like WebSockets (or their emulations like web-socket-js; see here and here for more details).
Only if such alternatives don't give you enough control because of special requirements should you consider writing a browser plugin.
With it you would get the full benefits of native code (high control over whatever API you choose) but also the problems that come with it:
you have to start to worry about privileges
bugs can crash the whole browser
you might have to handle behavioral differences between platforms and browsers
you have to worry about distribution on multiple platforms
...
If you need the higher level of control for some reason, you could:
implement the connection handling of your choice in the plugin
let the JavaScript initiate connections and send data
let the JavaScript register handlers for incoming data etc.
on incoming data call those handlers and pass them the data
To get started with NPAPI plugins, see here; to support IE too, you'd have to write a content extension. Finally, I would advise taking a look at FireBreath, which already does much of the heavy lifting for you (it hides the different APIs for IE and NPAPI, gives you a higher-level API, includes fixes for browser bugs, ...).
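To make that pattern concrete, here is a rough sketch of what the plugin's scripting interface could look like with FireBreath's JSAPI. The class, method and event names are my own invention, the connection handling (`writeToBackend`, `onBackendData`) is only hinted at, and the surrounding project skeleton would come from the FireBreath generator:

```cpp
// Sketch of a FireBreath JSAPI class: JavaScript calls send(), and the plugin
// fires a "data" event whenever the backend connection delivers something.
#include <string>
#include "JSAPIAuto.h"

class BackendBridgeAPI : public FB::JSAPIAuto
{
public:
    BackendBridgeAPI()
    {
        // Expose a method the page's JavaScript can call.
        registerMethod("send", make_method(this, &BackendBridgeAPI::send));
    }

    // Declares an event; JavaScript can attach a handler for "data".
    FB_JSAPI_EVENT(data, 1, (const std::string&));

    // Called from JavaScript: forward the payload to the backend connection.
    void send(const std::string& payload)
    {
        // writeToBackend() would live in your own connection code (hypothetical).
        // writeToBackend(payload);
    }

    // Call this from your socket/IPC code when the backend pushes data.
    void onBackendData(const std::string& payload)
    {
        fire_data(payload);   // invokes the registered JavaScript handlers
    }
};
```

On the page side, JavaScript would obtain the plugin object (e.g. from an `<object>` tag), call its `send(...)` method, and attach a listener for the `data` event; how event handlers are attached differs slightly between browsers, which the FireBreath documentation covers.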
I know there are some experienced LoadRunner users around, so I would like to ask (as I was not able to find the answer on my own): is content checking available only for webpages? I mean, I cannot check for content in Win32 apps, right? Thank you!
If you are asking about using the web_reg_save_param function, then, yes, it is limited to web applications.
Generally, functions with a "web" prefix are unique to web applications.
web_reg_save_param is web-protocol-only, yes.
Depending on the protocol you use, you surely have a way to do a content verification. For example, when you are using a terminal emulator, you can check for specific strings in specific display areas. Or, when using Citrix, you can wait for specific bitmaps to appear in certain areas. Or, with RMI, you can inspect whatever you want in the replies you receive.
Inspecting a Win32 app's screen, however, might be painful. LoadRunner tries to "sniff" at the protocol level, so usually you'd have some traffic to emulate at the sockets level, for example. You could still find the app's window handle and fetch some content from it using Windows API calls; LR will not assist you in doing so, though, except through its DLL support.
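For example, something along these lines (plain Win32 calls; the window title is a placeholder) could be wrapped into a DLL and called from your script:

```cpp
// Sketch: grab the caption text of another application's window via Win32.
// The window title "My Win32 App" is a placeholder.
#include <windows.h>
#include <iostream>

int main()
{
    HWND hwnd = FindWindowA(NULL, "My Win32 App");
    if (!hwnd) {
        std::cout << "window not found" << std::endl;
        return 1;
    }

    char text[256] = {0};
    GetWindowTextA(hwnd, text, sizeof(text));   // fetches the window caption
    std::cout << "caption: " << text << std::endl;
    return 0;
}
```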
Good day!
I am to develop a system that would simply list all URLs accessed in a browser along with their response times.
My problem is that this application is a standalone program (not a plug-in to a certain browser) written in C++. Every time a user browses, the program then performs a certain method.
So, in effect, my program would listen to the browser's events. I don't know how to create an EVENT SINK implementation for the above-mentioned events in web browsers like Internet Explorer, Mozilla Firefox and Google Chrome.
Any suggestion, advice or idea you can give me would help me start the development, as would any areas I need to focus on studying.
Thanks a lot for your time! I hope for your response! :)
Best regards!
The easiest way to achieve what you need is intercepting network traffic and extracting URLs from HTTP packets.
You can do this in many ways, e.g.:
using the WinPcap/libpcap library (a minimal capture sketch follows this list)
modifying the LSP stack
intercepting Winsock function calls
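As a starting point, a minimal WinPcap/libpcap capture loop looks roughly like this (capture filters and the actual HTTP parsing are omitted; the device is simply the first one found):

```cpp
// Minimal libpcap/WinPcap capture sketch: open the first device and print
// the length of each captured packet.
#include <pcap.h>
#include <cstdio>

static void on_packet(u_char*, const struct pcap_pkthdr* header, const u_char*)
{
    // A real implementation would parse the TCP payload here, looking for
    // "GET ... HTTP/1.1" and the Host: header to reconstruct the URL.
    std::printf("captured %u bytes\n", header->len);
}

int main()
{
    char errbuf[PCAP_ERRBUF_SIZE];

    pcap_if_t* devices = NULL;
    if (pcap_findalldevs(&devices, errbuf) == -1 || !devices) {
        std::fprintf(stderr, "no capture device: %s\n", errbuf);
        return 1;
    }

    // Open the first device in promiscuous mode.
    pcap_t* handle = pcap_open_live(devices->name, 65536, 1, 1000, errbuf);
    if (!handle) {
        std::fprintf(stderr, "pcap_open_live failed: %s\n", errbuf);
        return 1;
    }

    pcap_loop(handle, 0, on_packet, NULL);   // capture until interrupted

    pcap_close(handle);
    pcap_freealldevs(devices);
    return 0;
}
```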
If you're on the Windows platform, I think your best shot is using the MSAA interface, which is supported by all three browsers (a tiny sketch follows the documentation links below).
Documentation:
MSDN Overview and C++ API
Firefox statement of support for MSAA
Chrome
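To give you a first foothold, here is a tiny MSAA sketch. It only reads the accessible name of the foreground window; getting at the URL means walking further down the accessibility tree with AccessibleChildren(), and listening for navigation would be done with SetWinEventHook rather than polling:

```cpp
// Sketch: obtain the MSAA (IAccessible) object for the foreground window
// and print its accessible name.
#include <windows.h>
#include <oleacc.h>
#include <iostream>
#pragma comment(lib, "oleacc.lib")

int main()
{
    CoInitialize(NULL);

    HWND hwnd = GetForegroundWindow();
    IAccessible* acc = NULL;
    if (SUCCEEDED(AccessibleObjectFromWindow(hwnd, OBJID_WINDOW,
                                             IID_IAccessible, (void**)&acc)))
    {
        VARIANT self;
        self.vt = VT_I4;
        self.lVal = CHILDID_SELF;

        BSTR name = NULL;
        if (SUCCEEDED(acc->get_accName(self, &name)) && name) {
            std::wcout << name << std::endl;   // window's accessible name
            SysFreeString(name);
        }
        acc->Release();
    }

    CoUninitialize();
    return 0;
}
```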
You could take a lower-level approach (such as an LSP), but they're much harder to debug.