JMeter script for JSF with PrimeFaces

I want to stress test a JSF application (using PrimeFaces) with JMeter and I'm facing a strange problem.
The application saves some textual fields and one image field. The workflow is that, on the image upload control (PrimeFaces), the image is stored in the session, and on the save button click the application saves the textual data as well as the image data (from the session).
Now the problem is this: I made two POST requests - one with the image data and a second with the textual data - but I can't simulate the save.
Is there any way to simulate this process in JMeter?

Given you send the same requests as the browser does, you should be able to replicate the browser behaviour; just make sure to:
1. Properly build the HTTP Request sampler(s)
2. Pay attention to HTTP headers
3. Correlate dynamic parameters like the JSF ViewState
With regards to point 1, it should be sufficient to record the requests using JMeter's HTTP(S) Test Script Recorder; just make sure to copy the file(s) you're uploading into the "bin" folder of your JMeter installation so that JMeter can properly capture the requests. See the Recording File Uploads with JMeter article for more details.
For points 2 and 3, cross-check the requests being sent from the browser (using the browser developer tools) against JMeter's View Results Tree listener - the requests need to be exactly the same apart from the dynamic parameters, which need to be correlated (a quick way to inspect the ViewState is sketched below this answer).
And don't forget to add an HTTP Cookie Manager to your Test Plan; it should deal with JSESSIONID and other cookies.
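For a quick sanity check on point 3, the ViewState value the browser is holding can be read straight from the developer tools console. This is only a minimal sketch, assuming the standard javax.faces.ViewState hidden input that JSF renders into each form:

    // Run in the browser developer tools console on the JSF page.
    // JSF renders the view state as a hidden input in every form; this is the
    // value JMeter has to extract (e.g. with a Regular Expression Extractor)
    // and resend on every POST.
    const viewState = document.querySelector('input[name="javax.faces.ViewState"]');
    console.log(viewState ? viewState.value : 'No ViewState field found on this page');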

Related

How to script a SharePoint app using JMeter for performance testing

I have a SharePoint-based application on which I need to perform load testing.
But when I record the script, the response is not the same as in the browser, so I can't work out what needs to be done.
In the first two requests:
get page
post login
there is no dynamic value, so I am not able to understand what is going wrong.
First of all, add an HTTP Cookie Manager to your Test Plan.
Second, check all fields of the request from the browser (i.e. using the browser developer tools) against JMeter, and pay attention to the URL and headers.
And last but not least, SharePoint installations are very often protected using NTLM or Kerberos; if this is the case you will need to add a properly configured HTTP Authorization Manager - see the Windows Authentication with Apache JMeter article for more details.

Is my picture of a website correct?

I tried analyzing what a website essentially is, by deconstructing or reverse engineering it. I speculate that the following sequence of events takes place during interaction with a website:
1. Every website is basically a set of computer programs, which get executed when the system where they are stored is contacted.
2. Depending on the processing of the type of request sent by the sender, some XML files, files containing the code to be executed in response to different events, and some scripts intended for dynamic alteration of the XML files are sent.
3. Out of these XML files, one contains the information about the initial appearance of the page and the placement of the different controls or event generators on the screen.
4. So when some activity is done in the vicinity of one event generator, like a mouse click, an event is generated.
5. The code snippet that responds to the event is executed. If that code involves contacting the server and sending some request, then the server is contacted again.
6. When the server is contacted again, depending on the request sent, it again executes some code and in response transfers some more code files, XML files and scripts to dynamically change the appearance of the page.
Is my understanding of the flow of a website correct?
A web server is basically just a program sitting on a computer that listens on some TCP port (usually 80 for HTTP, 443 for HTTPS).
Clients (such as browsers) can connect and send a request (in HTTP format) to the server.
The server then sends an HTTP response back.
That's it. That's the basic flow: Connect, request, response.
The response contains a "type" field (the Content-Type header) that tells the client what to do with the data. For example, it could be an image (which is usually displayed on screen), an audio file (which is played), or a "normal" web page in HTML format.
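To make that flow concrete, here is a hedged sketch of one round trip using the browser's fetch API; example.com is only a placeholder URL:

    // One HTTP round trip: connect, request, response.
    // The Content-Type header is the "type" field mentioned above.
    fetch('https://example.com/')
      .then(response => {
        console.log('Status:', response.status);                     // e.g. 200
        console.log('Type:', response.headers.get('Content-Type'));  // e.g. text/html
        return response.text();                                      // the HTML body
      })
      .then(html => console.log(html.slice(0, 200)))
      .catch(err => console.error('Request failed:', err));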
HTML contains structured information about page content and layout, and may contain references to other resources such as images, style sheets, and scripts. A browser automatically fetches these resources (another HTTP request/response) and processes them.
Scripts can be used to customize the behavior on the client side. These are typically written in JavaScript and make use of an API exposed by the browser for interacting with the current page. They can e.g. register "click" handlers to define what happens when the user clicks on some page element.
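For example, a "click" handler registered through the DOM API might look like the following sketch; the element id is made up for illustration:

    // Attach a "click" handler to a page element via the browser's DOM API.
    // '#save-button' is a hypothetical element id used only for illustration.
    document.querySelector('#save-button').addEventListener('click', event => {
      event.preventDefault();   // stop the default browser action
      console.log('Button clicked, running custom client-side logic...');
      // ...e.g. validate a form or send another HTTP request from here
    });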
XML may or may not be used internally by the web server. It doesn't really matter as far as clients are concerned.
If you want to learn more about this, I suggest researching HTTP, HTML, CSS, and JavaScript. MDN has some good articles, for example.

Scraping an Oracle ADF Faces rich client

I am trying to scrape an Oracle ADF Faces rich client web page but I'm not having much luck. I log in automatically using the Node.js request module, but after that I can't get to any other page with request: I get stuck on redirects or the loop script, or simply don't get the information I expect.
I am using Wireshark to view every page and how it is handled; I recreate the page to match the headers and even the size, but every time the framework denies me access.
Before you ask, it's legal and I am not breaking any terms of service - I'm just trying to make a web API to speed up a process. I have used PhantomJS with CasperJS but got stuck on AJAX calls that don't show up on the page, and PHP cURL, but it's much easier with Java.
Any suggestions are really appreciated.
My bad on this one: Wireshark was displaying the fields as truncated. If you want to see the full field you need to right-click the packet and click "Follow TCP Stream". Rich clients have very long POSTs generated by the framework behind the rich client, and it turns out I was missing about half of the fields when I made the calls.
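For reference, a minimal sketch of what the login plus one framework POST might look like with the Node.js request module; the URL and form field names below are placeholders, and in practice every field captured in the TCP stream has to be included:

    // Sketch only: URLs and field names are placeholders, not real ADF endpoints.
    const request = require('request');

    // Keep cookies (e.g. JSESSIONID) across requests, like a browser would.
    const jar = request.jar();

    request.post({
      url: 'https://server.example.com/app/faces/login',  // hypothetical login URL
      jar: jar,
      followAllRedirects: true,
      form: {
        // Every field the framework sends must be included here - missing
        // about half of them is what made the framework deny access.
        'j_username': 'user',
        'j_password': 'secret'
        // ...plus the long framework-generated state fields from the capture
      }
    }, (err, res, body) => {
      if (err) return console.error(err);
      console.log('Status:', res.statusCode);
    });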

What technology can I use to run a method in the browser (client side) every time a user uploads a picture?

I have a custom function/method that needs to run on the browser (client side) every time the user uploads a picture to a web-server. This method modifies the image being uploaded and sends it to the server.
Currently the method is written in Java, so I thought of using an applet in the browser which could run this method and then send the modified picture to a servlet residing on the server, but the applet has certain disk read/write restrictions. I am aware of policies that can be used to grant these permissions to the applet, but they need the user's consent every time.
Also, I want to avoid the applet .class file being downloaded every time this page is viewed. So:
Is there a cleaner approach to all this?
Are there any other technologies that can help me run this method in the browser? (It's OK if I have to rewrite the function in a different language.)
Is writing a custom browser extension a good idea?
I think that using JS will be much better for this task.
Here is one JS image-processing library, just for example.
And here is an example of how to invoke a servlet from JS.
Writing a browser extension is really the wrong way to go.
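A minimal sketch of that JS approach, assuming a file input with id "picture" and a servlet mapped at /upload (both names are made up here): the image is read in the browser, modified on a canvas, and then POSTed to the servlet.

    // Hypothetical names: '#picture' is a file input, '/upload' a servlet mapping.
    document.querySelector('#picture').addEventListener('change', event => {
      const file = event.target.files[0];
      if (!file) return;

      const img = new Image();
      img.onload = () => {
        // Example client-side modification: scale the image down to half size.
        const canvas = document.createElement('canvas');
        canvas.width = img.width / 2;
        canvas.height = img.height / 2;
        canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);

        // Send the modified image to the servlet.
        canvas.toBlob(blob => {
          const data = new FormData();
          data.append('picture', blob, file.name);
          fetch('/upload', { method: 'POST', body: data })
            .then(res => console.log('Upload status:', res.status));
        }, 'image/jpeg');
      };
      img.src = URL.createObjectURL(file);
    });

This way nothing has to be downloaded beyond the page's own script, and there are no disk-access prompts, since the file never leaves the browser sandbox before being uploaded.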

Can I capture JSON data already being sent with a userscript/Chrome extension?

I'm trying to write a userscript/Chrome extension to capture JSON data being sent while using a web service, so that I can reformat it and display a selected portion on the page. Currently the JSON is sent as the application loads (as I've observed by watching the traffic with Fiddler 2). Is my only option to request the JSON again, or is capture possible? Since I'm not providing a code example, even some guidance on what method/topic to research, or whether I'm barking up the wrong tree, would be a welcome answer.
There is no easy way.
If it is for a specific site, you might look into intercepting and overwriting the part of the code which sends the request. For example, if it is sent on a button click, you can replace the existing click handler with your own implementation.
You can also try to make a proxy for XMLHttpRequest. I'm not sure if this is even possible and have never seen a working example, but you can look at some attempts here.
For all these tasks you would probably need to run your JavaScript code outside of the sandboxed content script to be able to access the parent page's variables, so you would need to inject a <script> tag with your code right into the page from a content script:
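A hedged sketch of that injection, wrapping XMLHttpRequest in the page context so the content script can see the JSON responses as they arrive; the "captured-json" message type is made up for illustration:

    // content_script.js - inject page-level code, because the content script's
    // sandboxed XMLHttpRequest is not the one the page actually uses.
    const script = document.createElement('script');
    script.textContent = `(${function () {
      // Runs in the page context: wrap XMLHttpRequest open/send (a simple "proxy").
      const origOpen = XMLHttpRequest.prototype.open;
      const origSend = XMLHttpRequest.prototype.send;

      XMLHttpRequest.prototype.open = function (method, url) {
        this._capturedUrl = url;              // remember which request this is
        return origOpen.apply(this, arguments);
      };

      XMLHttpRequest.prototype.send = function () {
        this.addEventListener('load', function () {
          // Relay the response body back to the content script.
          window.postMessage({ type: 'captured-json',
                               url: this._capturedUrl,
                               body: this.responseText }, '*');
        });
        return origSend.apply(this, arguments);
      };
    }})();`;
    document.documentElement.appendChild(script);

    // Back in the content script: receive what the injected code relays.
    window.addEventListener('message', event => {
      if (event.source === window && event.data && event.data.type === 'captured-json') {
        console.log('Captured JSON from', event.data.url);
      }
    });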
