I have been asked to build a web app with a page where the user can define the URL he wants to navigate to (an external link) and add additional HTTP headers to send with that request. The web app will be built with JSF 2.1.
Which headers exactly do you need to set and what exactly are they for?
Answ: They are additional headers: H-Version, H-UniqueID, etc.
Is it a specific or an arbitrary external URL?
Answ: It is a specific URL (but it must be absolute).
And, importantly, what exactly does the first response of a request to that external URL represent? Does it represent a full-blown HTML page (thus with all relative references on it, such as CSS/JS/images/links), or does it return a special response (e.g. XML/JSON, or even a simple HTTP redirect)?
Answ: The target link will call another Java web app whose response will be HTML/CSS/JS/images/links.
Does anyone know a solution for this?
Many thanks!
Related
Using Express JS, I'm trying to add some headers to the redirection I'm returning.
However, everything I tried only works for the response headers, not for the request headers of the redirected request. That is, when inspecting it with the developer tools I can see the response headers, but when the next call is made, I cannot see the request headers.
req.headers['x-custom-header'] = 'value'
res.setHeader('x-custom-header', 'value')
res.redirect('example.com')
Could anybody explain how the response and request headers work in Express JS?
A redirect just does a redirect. It tells the browser to go to that new location with standard, non-custom headers. You cannot set custom headers on the next request after the redirect. The browser simply doesn't do that.
The usual way to pass some type of parameters in a redirect is to put them in a query string for the redirect URL or, in some cases, to put them in a cookie. In both cases of query string parameters and data in a cookie, those will be available to your server when the browser sends you the request for the redirected URL.
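A minimal sketch of the query-string approach. The helper function and the parameter names below are hypothetical; in Express, the resulting URL would simply be passed to res.redirect():

```javascript
// Sketch: since the browser drops custom headers on a redirect, encode
// the values in the redirect URL's query string instead. The helper and
// parameter names here are hypothetical.
function buildRedirectUrl(base, values) {
  const params = new URLSearchParams(values);
  return base + '?' + params.toString();
}

// In an Express handler:
//   res.redirect(buildRedirectUrl('https://example.com/next', {
//     version: '1.0',
//     uniqueId: 'abc123',
//   }));
// The browser includes the query string on the redirected request, so
// the target server can read the values (req.query.version in Express).
```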
It may also be worth revisiting why you're redirecting in the first place; perhaps there's a different flow of data/URLs that doesn't need a redirect at all. We'd have to know a lot more about what this operation is trying to accomplish to make suggestions there.
If your request is being processed by an Ajax call, then you can program the code receiving the results of the Ajax call to do anything you want it to do (including add custom headers), but if it's the browser processing the redirect and changing the page URL to load a new page, it won't pay any attention to custom headers on the redirect response.
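A sketch of that client-side variant: instead of a 302, the server returns the target URL in the response body, and the client code issues its own follow-up request with whatever headers it likes. The fetch function is injected here purely so the flow is testable; the endpoint and header names are hypothetical:

```javascript
// Sketch: when client-side code (rather than the browser) handles the
// "redirect", the second request can carry custom headers. fetchFn is
// injected for testability; in the browser it would be the global fetch.
async function followWithHeaders(fetchFn, startUrl, headers) {
  const res = await fetchFn(startUrl);
  // The server responds with e.g. { "target": "/reports" } instead of a 302.
  const { target } = await res.json();
  // Unlike a browser-level redirect, this request can carry any headers.
  return fetchFn(target, { headers });
}
```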
Can anybody explain how the response and request headers work on ExpressJS?
Express is doing exactly what you told it to do. It's attaching the custom headers to the response that goes back to the browser. It's the browser that does not attach those same headers to the next request to the redirected URL. So, this isn't an Express thing, it's a browser thing.
Why would #1 work, but not #2 or #3, when used in a $$Return field if the database is being accessed using IE11? The field is hidden.
1. [db_path/db_filename/Page?OpenPage]
2. http://server_dns/db_path/db_filename/Page?OpenPage
3. server_dns/db_path/db_filename/Page?OpenPage
A URL in brackets (e.g., [db_path/db_filename/Page?OpenPage]) is interpreted by the Domino server as a command to send an HTTP 30x REDIRECT response (probably a 303, but I'm not sure) to the browser. Upon receipt of this response, the browser interprets it as an instruction to retrieve the specified URL. That's simply a matter of compliance with standards, so all browsers will do it.
The other choices you list are not treated as anything special by the Domino server. They are simply sent as ordinary content in a 200 OK response to the browser's POST request. No standard applies to this, so a browser may or may not recognize that the response text looks like a URL, and may or may not choose to do something with it, e.g. follow the link. Based on your question, it appears that IE11 does not do anything with it: it doesn't follow the URL. Frankly, I had no idea that any browser would actually follow a URL received as the sole content of a 200 OK response.
I just could not get the http-proxy module to work properly as a forward proxy. It works great as a reverse proxy. Therefore, I have implemented a node-based forward proxy using node's http and net modules. It works fine, both with http and https. I will deal with websockets later. Among other things, I want to log the URLs visited or requested through a browser. In the request object, I do get the URL, but as expected, when a page loads, a zillion other requests are triggered, including AJAX, third-party ads, etc. I do not want to log these.
I know that I can distinguish an Ajax request by the x-requested-with header. I can distinguish requests coming from a browser by examining the user-agent header (though these can be spoofed through cURL). I want to minimize the log entries.
How do commercial proxies log such info? Or do they just log every request? One way would be to not log any requests within a certain time after the main request, presuming that they are all associated with it. That would not be technically accurate, though.
I have researched in this area but did not find any solution. I am not looking for any specific code, just some direction...
No one can know that with precision, but you can look for clues such as the HTTP Referer header, x-requested-with, or custom headers you add to each Ajax request (Squid, for example, sends an X-Forwarded-For header by default, which reveals that it is a proxy). But anybody can figure out which headers you send with your requests, or copy all the headers a common browser sends by default, so you may believe a request comes from a person using a browser when it is actually a cURL call sent by a bot.
So, really, you can't know for certain whether a request is, for example, an Ajax request, because these headers aren't mandatory; by default your browser or framework adds x-requested-with or other useful information that helps you "guess" who is performing the request.
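A sketch of such a heuristic as it might look in the Node proxy. None of these checks is reliable, since every header involved can be spoofed; the function name and the exact rules are assumptions, not an established algorithm:

```javascript
// Heuristic guess at whether a proxied request is a "main" page load
// or a subrequest (Ajax, assets, third-party ads...). All of these
// headers can be spoofed, so this is a guess, not a guarantee.
function looksLikeMainRequest(headers) {
  // Normalize header names to lowercase for lookup.
  const h = Object.fromEntries(
    Object.entries(headers).map(([k, v]) => [k.toLowerCase(), v])
  );
  // Most Ajax libraries add this header.
  if (h['x-requested-with'] === 'XMLHttpRequest') return false;
  // Browsers ask for text/html on top-level navigations, but for
  // image/*, text/css, etc. on asset subrequests.
  const accept = h['accept'] || '';
  if (!accept.includes('text/html')) return false;
  // A Referer usually means the request was triggered by another page.
  if (h['referer']) return false;
  return true;
}
```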
Is it possible for a JSF application to navigate to an external link and specify headers for that external link?
So far I have tried to call in the backing bean method:
ExternalContext#setResponseHeader(java.lang.String name, java.lang.String value)
ExternalContext#redirect(java.lang.String url)
The redirection is successfully executed, but the headers are lost.
Is there any way to specify a link accompanied with the headers?
No, HTTP doesn't allow setting headers on a different request.
The headers have to be set by the code behind the target URL. Whatever problem you thought to solve this way definitely has to be solved differently.
I have a configuration of two servers working in an intranet.
The first one is a web server that produces HTML pages for the browser; this HTML sends requests to the second server, which produces and returns reports (also HTML) according to some GET parameter's value.
Since this solution is insecure (the passed parameter is exposed), I thought about having the HTML (produced by the first server) send the requests for a report back to the first server. There, a security check would be made, and the request for the report would be sent to the reports server using HTTP between the servers, instead of from browser to server.
The report's markup would be returned to the first server (as a string?), added to the response object, and presented in the browser.
Is this a common practice of HTTP?
Yes, it's a common practice. In fact, it works the same way when your web server needs to fetch some data from a database (not publicly exposed, i.e. not in the web server's DMZ, for example).
But you need to be able to use dynamic page generation (not static HTML; let's suppose your web server allows PHP or Java, for example).
your page does the equivalent of an HTTP GET (or POST, or whatever you like) to your second server, sending any required parameters. You can use cURL libraries, fopen('http://…'), etc.
it receives the result, checks the return code, and can also do optional content manipulation (like replacing some text or URLs)
it sends the result back to the user's browser.
If you can't (or won't) use dynamic page generation, you can configure your webserver to proxy some requests to the second server (for example with Apache's mod_proxy).
For example, when a request comes to server 1 for the URL "http://server1/reports", the web server proxies a request to "http://server2/internal/reports?param1=value1&param2=value2&etc".
The user will get the result of "http://server2/internal/reports?param1=value1&param2=value2&etc", but will never see where it comes from (from his point of view, he only knows http://server1/reports).
You can do more complex manipulations associating proxying with URL rewriting (so you can use some parameters of the request to server1 on the request to server2).
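For the Apache case, the mod_proxy setup could look something like this (the paths and server names are assumptions matching the example URLs above):

```apache
# Hypothetical config on server1: forward /reports to the internal
# reports server, hiding server2 from the browser entirely.
ProxyPass        /reports http://server2/internal/reports
ProxyPassReverse /reports http://server2/internal/reports
```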
If it's not clear enough, don't hesitate to give more details (OS, web server technology, URLs, etc.) so I can give you more hints.
Two other options:
Configure the Internet-facing HTTP server with a proxy (e.g. mod_proxy in Apache)
Leave the server as it is and add an Application Firewall