Cannot load webpage from Postman because of javax.faces.ViewState? - jsf

I am trying to integrate a web application written by someone else with an API, also written by someone else. At the moment I am testing one of the webpages using Postman. When the webpage is loaded in a browser it works correctly. I have replicated all of the headers and body in Postman, but when I try to load the webpage from Postman I get an HTTP 500 status code (Internal Server Error).
I think the issue is with javax.faces.ViewState, which is a key/value pair in the request body. I initially do a GET request to the webpage in Postman and obtain the ViewState:
I tried passing the value xxxxxxxxxxxxxxxxxxxxxx;yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy in the body key/value pair, but I still get an internal server error. I have also checked that the JSESSIONID cookie is identical in the GET request and the POST request.
I have also noticed that if I access the webpage from a browser, there is a colon instead of a semicolon in the value, if that has any bearing.
Most of what I have tried so far was suggested in the answer to this question: How to programmatically send POST request to JSF page without using HTML form?
What am I doing wrong?
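For comparison, here is a minimal sketch of what re-sending such a JSF POST from a script looks like. The ViewState value and form field names below are hypothetical placeholders, not taken from the question. With server-side state saving the raw value contains a colon; once the body is serialized as application/x-www-form-urlencoded (which is also what Postman should send, with that Content-Type header), the colon travels as %3A:

```javascript
// Hypothetical ViewState value and form field names, for illustration only.
const viewState = '3644438791951125918:-4656085408702556574';

// URLSearchParams performs standard application/x-www-form-urlencoded
// serialization, so the ':' in the ViewState becomes %3A automatically.
const body = new URLSearchParams({
  'formId': 'formId',                 // hypothetical hidden form id field
  'formId:submitButton': 'Submit',    // hypothetical button name
  'javax.faces.ViewState': viewState,
}).toString();

console.log(body);
// The POST would then be sent with the JSESSIONID from the initial GET:
// fetch(pageUrl, {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/x-www-form-urlencoded',
//     'Cookie': 'JSESSIONID=...',  // copied from the GET response
//   },
//   body,
// });
```

The key point is that the ViewState must be sent exactly as received from the GET, with the colon intact before encoding; a semicolon is a different character and the server will reject the state.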

Related

Which request params can be used to tell if a request is coming from Postman vs NodeJS server?

I am running two identical API requests to a third-party API, one in Postman and the other in NodeJS. The API responds with different set-cookie headers in Postman vs NodeJS.
I've tried:
Copying the headers from Postman's Headers tab into my NodeJS request headers.
Copying the headers from the Postman console logs into my NodeJS request headers.
Copying Postman's auto-generated axios code.
Using node-fetch instead of axios.
Turning various settings on/off in Postman.
Every time, the API request in Postman responds with a different set-cookie header than NodeJS. The Postman request receives the correct session token, while the NodeJS request does not.
The API server can somehow tell the difference between the two environments, but how?
Is Postman running a headless browser so that it can "fool" a server checking for a browser runtime?
Is Postman a true "curl" while NodeJS requests are not?
Given that the request headers and body are the same in both requests, which variables might be used to differentiate between a Postman request and a NodeJS request?
I'll answer this myself, for anyone who comes here searching for answers. Apparently, Postman uses some magic configuration to make requests from the browser while bypassing CORS issues.
They call it the "Postman Agent". It seems like it's probably a local proxy in front of a headless browser with CORS turned off (or something along those lines).
You can read about it here: https://blog.postman.com/introducing-the-postman-agent-send-api-requests-from-your-browser-without-limits/
In my case, the issue wasn't caused by a difference between the requests. It was caused by the way the responses were handled. Postman was showing cookies received in an initial 302 response, and then following the redirect. The NodeJS request was following the redirect but not showing the initial 'set-cookie' header in the final response. As soon as I set redirect: 'manual' in nodejs, I could see the correct headers from the initial 302 response.
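The effect described above can be shown with a toy model (no real network; the plain objects below stand in for the HTTP responses in a redirect chain, and the cookie value is made up):

```javascript
// Each object models one response in a redirect chain: the API first
// answers 302 with the session cookie, then the redirect target answers 200.
const chain = [
  { status: 302, headers: { location: '/home', 'set-cookie': 'session=abc123; HttpOnly' } },
  { status: 200, headers: {} },
];

// Default redirect following: the client hands you only the final
// response, so a Set-Cookie on the intermediate 302 is invisible.
function finalResponse(responses) {
  return responses[responses.length - 1];
}

// Manual redirect handling (redirect: 'manual', then following the
// Location header yourself): every hop is visible, cookies included.
function collectSetCookies(responses) {
  return responses.map((r) => r.headers['set-cookie']).filter(Boolean);
}

console.log(finalResponse(chain).headers['set-cookie']); // undefined
console.log(collectSetCookies(chain));                   // ['session=abc123; HttpOnly']
```

Postman shows the intermediate responses in its console even while auto-following redirects, which is why it appeared to "receive" a cookie that the NodeJS code never saw.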

Is there a way to request an internal API of a public website from Node fetch?

I am trying to scrape dynamic websites and was using Puppeteer with Node.js before I realized I can just fetch the website's API directly and not have to render stuff that I don't need. By looking in the "Network" tab of Chrome's developer tools I could find the exact endpoints that return the data I need. This works for most of the sites I am trying to scrape, but for some, especially POST requests, the API returns a "403: Forbidden" error code.
The API returns a success if I do a fetch-request directly from the Chrome console. But as soon as I try from a different tab, Postman, or Node using node-fetch I get "403: Forbidden".
I have tried copying the exact headers that are sent naturally from the website, and I have tried explicitly setting the "origin" and "referer" headers to the website's address but to no avail.
Is this simply a security measure that is impossible to breach or is there a way to trick the API into thinking that the request is coming from their own website?
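For reference, this is roughly the shape of the attempt the question describes; the endpoint, payload, and cookie values below are all hypothetical placeholders. One detail worth noting: Origin and Referer are "forbidden header names" in browsers (the browser always sets them itself), so from Node they must be added by hand, and the session or CSRF cookies the browser attaches automatically are often the real thing the server checks:

```javascript
// Hypothetical internal endpoint of the site being scraped.
const endpoint = 'https://example.com/internal/api/search';

const options = {
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    // In a browser these two are set automatically and cannot be changed;
    // from Node they must be supplied explicitly.
    origin: 'https://example.com',
    referer: 'https://example.com/search',
    // Copied from the browser's Network tab (placeholder values here) —
    // an in-page fetch sends these cookies without any code asking for them.
    cookie: 'sessionid=PLACEHOLDER; csrftoken=PLACEHOLDER',
  },
  body: JSON.stringify({ query: 'test' }), // hypothetical payload
};

// fetch(endpoint, options).then((r) => console.log(r.status));
console.log(options.headers.origin);
```

If the request still returns 403 with matching headers and cookies, the server may be validating a per-request token embedded in the page (a CSRF token in the HTML or a signed header computed by the site's JavaScript).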

HTTP GET request on website does not return full body (website scraping)

I am trying to scrape a website, but some pages of that website are not fully returned upon a GET request. I don't want to disclose the URL of said website, but I'd still like to ask for help in this regard.
I've implemented HTTP requests to log into the member area of that website, which works fine. Then I'd like to get a list of conversations; however, when I compare the response in the Firefox developer tools (for the same GET to the same location with the same parameters), I see the full HTML in Firefox, but in my implementation (using the request Node.js module) I only see the inner <div id="content">...</div>, without any JavaScript or surrounding HTML.
How can this be? I understand JavaScript can inject HTML afterwards, but how would that be possible if no JavaScript has been received by my scraping implementation? What is different in Firefox? I understand that in Firefox their JavaScript client is probably running and doing the GET request, which then inserts content. However, the Firefox log shows an HTTP GET request (not XHR) and it shows the full response in the dev tools. How is this possible?
Anyone got a hint on how to proceed on this further?
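One possible explanation, sketched as a toy model below: the server inspects the request headers and returns either the full page or just the content fragment. The real site's rule is unknown and could key on any header or cookie; this only illustrates how identical GETs to the same URL can yield different bodies for a browser and a script:

```javascript
// Toy server-side logic: serve the full page only when the request
// carries browser-typical headers (purely illustrative heuristic).
function render(headers) {
  const looksLikeBrowser =
    (headers['accept'] || '').includes('text/html') &&
    'accept-language' in headers;
  return looksLikeBrowser
    ? '<html><head><script src="app.js"></script></head><body><div id="content">...</div></body></html>'
    : '<div id="content">...</div>';
}

// A bare scripted GET, with no browser headers, gets only the fragment:
console.log(render({}));
// A browser-like GET gets the full document:
console.log(render({ accept: 'text/html', 'accept-language': 'en' }));
```

A practical next step along these lines is to copy every request header Firefox sends (from the dev tools' "Copy Request Headers") into the scripted request and remove them one by one to find which ones the server keys on.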

How do I put the request result inside an HTML page in Express?

I'm trying to make a website using Express, and on this website I request some data from an external API.
So, I have an HTML page from which I "send" the request. How do I get the parameters for the request into the server, and then the response back to the HTML, or at least to the JS linked to that HTML?
So far I have already tested serving an HTML page with a JS file linked to it, and it worked, so now I have to build the rest of the web concept, which is requesting the data from the API.
Sorry that I don't have the code, but I'm still writing it and I have this big issue that I can't resolve.
Thanks for your time and advice anyway.
You have two choices.
Either you make an AJAX request to the API from the front-end, or you make the request in the back-end and render the result.
You can also make the request from the front-end, send the result to the back-end, and have Express send a different response.
No code attached, as your question is very generic.

Python Request Module throws 403 Error + SharePoint Online

I am currently writing a script which will pull information from SharePoint Online. To do this outside the SharePoint environment, I need to get the token and cookies (FedAuth, rtFa). I can get these cookies using the requests module in Python.
Now, if I pass the same cookies in the header in my script while fetching the list data from SP Online, it throws a 403 error. But if I use the same cookies in Postman to fetch the list data, it successfully fetches the information.
Can anyone tell me what I am doing wrong?
EDIT: I figured out the answer; here is the link for the same
