I have a simple XPage and I access it through a reverse proxy.
My problem now is getting the correct URL on the server side.
context.getUrl().toString()
and
XSPContext xspContext = new ServletXSPContextFactory().getXSPContext(FacesContext.getCurrentInstance());
XSPUrl xspUrl = xspContext.getUrl();
return xspUrl.toString();
did not work correctly.
For example:
The URL in the browser is https://myip/db.nsf
But both the SSJS call and the Java call return just http://myip/db.nsf
When I try this without a reverse proxy, everything works fine.
Is there a way to get location.href on the server side?
Unless you want to send out links to other places, you don't need the protocol part. If you stay in the same browser, //someserver/somepage will link to a different server using the currently used protocol. Other than that, the proxy probably sets a header.
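If the proxy does set such a header (X-Forwarded-Proto is a common convention, but which header your proxy actually sends is an assumption you need to verify), you can rebuild the scheme on the server side yourself, for example in SSJS:
// Sketch: prefer the scheme reported by the proxy over the one Domino sees.
// The header name X-Forwarded-Proto is an assumption - use whatever your proxy sets.
var request = facesContext.getExternalContext().getRequest();
var proto = request.getHeader("X-Forwarded-Proto");
if (proto == null) {
    proto = request.getScheme(); // fall back to what the server itself sees
}
var url = proto + "://" + request.getServerName() + request.getContextPath();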
You can use the following code to create the URL manually:
var request = facesContext.getExternalContext().getRequest();
var url = "https://" + request.getServerName() + request.getContextPath();
This will return the URL to your NSF file with the https prefix.
Hmm... this may be an administrative setting: with an Internet Site document you can additionally create a web site rule (type = substitution) to automatically compute the whole URL from the incoming pattern. Have a look at the IBM Domino administration help on how to set up an Internet Site document as well as a web site rule.
The goal is to get both URLs to use the same scheme so that the XSP computation dynamically results in correct values.
I believe what you want is to set the $WSIS header from the reverse proxy to Domino to True. Much like the other WebSphere connector headers, this should cause Domino to think that the incoming protocol is HTTPS in all situations. Note that this also has the unfortunate side effect of causing Domino to revert to its behavior of only using one Site document per IP; if you've been taking advantage of the reverse proxy to avoid this bug, you will have to find another route, such as looking for an X-SSL header from the proxy.
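If Apache httpd is the reverse proxy, a minimal sketch of that (assuming mod_headers is loaded and Domino is configured to honour connector headers, e.g. HTTPEnableConnectorHeaders=1 in notes.ini) would be:
# Sketch: tell Domino that the original request arrived over HTTPS.
RequestHeader set $WSIS "True"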
Related
I'm using a proxy on Apache to ProxyPass a subfolder of a domain to a Node.js Express app on a local port. Is there a way, from the Express side, to pass back purely a status code so that Apache uses its own error page? (I want them to look the same.)
As far as I know, I could use an absolute path on the server to that page, but that may not stay consistent if I change the Apache settings. Is there any way to tell Apache, via the proxied response, to show its own error page, whatever it has been set to?
Maybe there is no such way; I just wonder if there is. The best I have come up with so far is to redirect to the same URL prefixed with /e/, which would work, but it leaves /e/ in the URL - not bad, but maybe someone has a better hint.
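For what it's worth, the Express side of the idea could be as small as the sketch below; it assumes the Apache side is configured to substitute its own error pages for backend error responses (for example with ProxyErrorOverride On - that directive is my assumption, not something stated above):
// Sketch: return only a status code and no body of our own, so the proxy
// in front can replace the response with its own configured error page.
const express = require('express');
const app = express();

app.get('/report/:id', (req, res) => {
  res.sendStatus(404);
});

app.listen(3000);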
Say we have an internal URL https://my.internal.url (in our case a Liferay Portal) and from a web application firewall an external URL https://my.external.url pointing to this internal URL.
The internet user is using the external URL.
PrimeFaces generates attributes like, for example,
onclick="...;window.open('https://my.internal.url'..."
This leads to CORS problems.
The HTTP header Access-Control-Allow-Origin is not an option, since the internal URL is internal.
We'll talk with the WAF people about URL replacement, but I'd like to know whether or not we can tell PrimeFaces to use the external URL (or maybe relative URLs, in case that would work).
The portal doesn't know about the external URL but of course we could implement this as a configuration option.
(Looking at the generated source, there are more occurrences of the internal URL outside of the JSF/PrimeFaces portlet, so I'm adding the liferay tag too.)
Update
The question is obsolete; the WAF has to handle this correctly (an old SSL environment did, the new WAF environment doesn't).
You say
The portal doesn't know about the external URL
however, any properly configured reverse proxy (or WAF) should forward the actual host name used to request the current page.
On Apache httpd's mod_proxy_http, this is done with the option ProxyPreserveHost On. When forwarding with AJP, the host is automatically forwarded. Other WAF/Proxy configurations - of course - differ. But the proper way to generate the URL is to let the generating server know what URLs it should generate.
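As a sketch (host names from the question, everything else illustrative, and the TLS/SSL proxy directives omitted), the Apache httpd side could look like this:
<VirtualHost *:443>
    ServerName my.external.url
    # Forward the host name the client actually used, so the backend
    # generates links for my.external.url rather than my.internal.url.
    ProxyPreserveHost On
    # (SSLEngine / SSLProxyEngine and certificate directives omitted in this sketch)
    ProxyPass        / https://my.internal.url/
    ProxyPassReverse / https://my.internal.url/
</VirtualHost>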
If you need to worry about the proper host name, you'll need to do so per request: Liferay is well able to use virtual host names to distinguish between different sites - and if they're completely different, you might be signed in to one of them but not to the other. This has repercussions on permissions.
Have the infrastructure handle it for you. Don't write code (or application configuration) for it.
I essentially have the same issue as described here: Redirect HTTP to HTTPS in Azure Application Gateway, but am trying to solve it a different way.
My back-end web application works fine when both HTTP and HTTPS are open on the AAG; however, when you click on a link generated by the web app to another page, the URL sent back to the client is for http, not https. Obviously the proper solution is to make the web app aware it is behind a reverse proxy and generate links accordingly.
In the short term I have been attempting, and failing, to use the IIS URL Rewrite module to either:
a) Using an inbound rule, rewrite (not redirect) the incoming URLs as https, which ought to force the responses to contain https URLs (a redirect causes an infinite loop as the AAG forwards everything to the back-end web servers as http). I'm guessing this is impossible because it would essentially mean the server creating a secure channel to itself.
b) Using an outbound rule, rewrite the responses so the URLs are https instead of http. This is proving to be very difficult as I don't understand what parts of the responses I need to be modifying. I'm hoping this approach is possible though?
For the uninitiated, the answer is to use custom tags in an outbound rule, which match the HTML elements containing the values that need modifying.
The drawback, of course, is that the web server has to do a pattern match & replace on every single page it serves unless you can use conditions to limit the scope. Still very inefficient compared to fixing the code so it is proxy-aware!
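As a rough illustration of what such an outbound rule can look like in web.config (this sketch uses the built-in tag filters for standard link-bearing elements; genuinely custom elements would additionally be declared in a customTags collection):
<system.webServer>
  <rewrite>
    <outboundRules>
      <rule name="Rewrite http links to https" preCondition="IsHtml">
        <!-- Only touch link-bearing HTML elements, not the whole response body -->
        <match filterByTags="A, Area, Form, Img, Link, Script" pattern="^http://(.*)" />
        <action type="Rewrite" value="https://{R:1}" />
      </rule>
      <preConditions>
        <preCondition name="IsHtml">
          <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
        </preCondition>
      </preConditions>
    </outboundRules>
  </rewrite>
</system.webServer>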
I have a configuration of two servers working on an intranet.
The first one is a web server that produces HTML pages for the browser; this HTML sends requests to the second server, which produces and returns reports (also HTML) according to the value of some GET parameter.
Since this solution is insecure (the passed parameter is exposed), I thought about having the HTML (produced by the first server) send the report requests back to the first server; there, a security check would be made, and the report request would be sent to the reports server over HTTP between the servers instead of from the browser.
The report's markup would be returned to the first server (as a string?), added to the response object, and presented in the browser.
Is this a common practice with HTTP?
Yes, it's a common practice. In fact, it works the same way when your web server needs to fetch some data from a database (not publicly exposed - i.e. not in the web server DMZ, for example).
But you need to be able to use dynamic page generation (not static HTML; let's suppose your web server allows PHP or Java, for example):
Your page does the equivalent of an HTTP GET (or POST, or whatever you like) to your second server, sending any required parameters you need. You can use cURL libraries, fopen("http://..."), etc.
It receives the result, checks the return code, and can also do optional content manipulation (like replacing some text or URLs).
It sends the result back to the user's browser.
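A sketch of those three steps in Node.js rather than PHP or Java (Express, Node 18+ for the built-in fetch; the server names and the customerId parameter are made up for illustration, and the real permission check is only hinted at):
const express = require('express');
const app = express();

app.get('/reports', async (req, res) => {
  // Security check on server 1 before anything reaches the report server.
  if (!req.query.customerId) {           // real authentication/authorization goes here
    return res.sendStatus(403);
  }
  // Server-to-server request; the browser never sees this URL or its parameter.
  const upstream = await fetch('http://server2/internal/reports?customerId=' +
                               encodeURIComponent(req.query.customerId));
  if (!upstream.ok) {
    return res.sendStatus(502);          // check the return code
  }
  const html = await upstream.text();    // the report markup, as a string
  res.type('html').send(html);           // hand it back to the user's browser
});

app.listen(8080);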
If you can't (or won't) use dynamic page generation, you can configure your webserver to proxy some requests to the second server (for example with Apache's mod_proxy).
For example, when a request comes to server 1 for the URL "http://server1/reports", the web server proxies a request to "http://server2/internal/reports?param1=value1&param2=value2&etc".
The user will get the result of "http://server2/internal/reports?param1=value1&param2=value2&etc", but will never see where it comes from (from his point of view, he only knows http://server1/reports).
You can do more complex manipulations by combining proxying with URL rewriting (so you can use some parameters of the request to server 1 in the request to server 2).
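With Apache's mod_proxy, the basic mapping described above might be no more than this sketch (server names and paths are the illustrative ones from the example):
# Sketch: forward /reports on server 1 to the internal report server.
ProxyPass        /reports http://server2/internal/reports
ProxyPassReverse /reports http://server2/internal/reports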
If it's not clear enough, don't hesitate to give more details (OS, web server technology, URLs, etc.) so I can give you more hints.
Two other options:
Configure the Internet-facing HTTP server with a proxy (e.g. mod_proxy in Apache)
Leave the server as it is and add an Application Firewall
I have a dyndns.org account which points to my home computer. That all works fine; WordPress is installed.
I want to buy a domain (mysite.com) and have it mask to mysite.dyndns.org while passing back all of the additional URI goodness.
For example, if I go to mysite.com/page2 it should go to mysite.dyndns.org/page2
Any thoughts on ways to accomplish this?
The answer turned out to be to use a CNAME record. CNAMEs essentially act like symlinks.
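For illustration, the zone entry can be as small as the sketch below (host names from the question; note that a CNAME cannot normally sit at the bare apex mysite.com itself, so it is typically used for a host such as www):
; Sketch: make www.mysite.com an alias for the DynDNS host.
www.mysite.com.   IN   CNAME   mysite.dyndns.org.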
Yes, assuming you want to keep hash parameters in addition to query parameters, you could accomplish such a thing with a little piece of JavaScript like the following:
if (window.location.host == 'mysite.com') {
    // Rebuild the current URL on the DynDNS host, keeping path, query and hash.
    var current_url = window.location.href;
    var new_url = current_url.replace('mysite.com', 'mysite.dyndns.org');
    window.location.href = new_url;
}
If you don't need to preserve hash parameters, then you could implement similar redirect logic in the web server or in the server-side scripting language of your choice, by checking the HTTP "Host" header for the current host name, and issuing a 301 redirect as needed.
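A sketch of that server-side variant in Node/Express (purely illustrative - with a WordPress setup the same check would normally live in the web server configuration or in PHP):
const express = require('express');
const app = express();

// Sketch: permanently redirect requests that arrive on the custom domain,
// preserving the path and query string (the hash never reaches the server).
app.use((req, res, next) => {
  if (req.headers.host === 'mysite.com') {
    return res.redirect(301, 'http://mysite.dyndns.org' + req.originalUrl);
  }
  next();
});

app.listen(8080);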
However, I really do not understand why you would want your system to be set up in this way. Typically custom domains are more trustworthy than domains hanging off of "dyndns.org". Why not just have your custom domain configured to point to the correct IP address(es)? Most web hosting solutions will automatically provide the appropriate DNS configuration.