Currently I'm getting a "422 The change you requested was rejected" error when trying to log in to GitLab.
The only thing I changed was to follow the official doc on how to set up GitLab behind a reverse proxy. The funny thing is, if I access GitLab from the outside via HTTPS, it works perfectly and logging in is possible. But when accessing the GitLab instance directly via the internal LAN, the above error gets thrown.
Am I missing some nginx configuration for when HTTP is used?
First, check whether your proxy is being used when you access the instance over the internal LAN.
I usually set
no_proxy='localhost,.company.com'
That will avoid using the proxy for intranet access.
HTTPS conflicts with HTTP
I made my first full-stack project with React and Node.js and deployed it on Netlify.
My backend server runs on localhost over HTTP.
And here is the problem:
My app works on my Mac in Chrome but doesn't work properly in other browsers and on other computers.
Other computers can download index.js (the sign-up and sign-in pages display), and there seems to be no problem with CORS, but authentication doesn't work.
Safari logs these errors:
[blocked] The page at https://MYAPP.netlify.app was not allowed to display insecure content from http://localhost:3500/register.
Not allowed to request resource
XMLHttpRequest cannot load http://localhost:3500/register due to access control checks.
I don't understand why the app works on my Mac but doesn't on other computers, and I can't find an answer on how to solve this HTTPS/HTTP conflict.
I have tried to find a problem in CORS, but it looks like CORS is OK. Also, I tried rewriting the server with HTTPS, but it didn't work.
I've never worked with Netlify, so I could be wrong, but I suspect your problem isn't directly related to Netlify.
The Safari error message indicates that your frontend is trying to talk directly to localhost. localhost is an alias for "the computer that is making the connection attempt" under normal circumstances. This means that when someone runs your frontend, the browser tries to talk to the backend running on the same computer that the browser is running on.
This works on your computer in Chrome because you probably have the backend running on your computer for testing. Safari is only complaining that the frontend was loaded via HTTPS but is trying to talk to non-HTTPS servers. It is not stating that it can't talk to the backend, it's stating that it won't even try.
If I'm right and you shut down the back end on your computer, it will start to fail on your computer as well, even on Chrome.
If this is the problem, the solution can be one of two things: you can either run the backend somewhere with a domain name or IP address that everyone can connect to, or you can run a proxy somewhere that meets those conditions and passes requests on to wherever your backend actually runs. How you go about that depends on things you didn't specify in the original question.
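If it helps, here is a minimal sketch of the usual frontend-side fix, assuming the backend eventually gets deployed somewhere public. The environment variable name, the request body shape, and the deployed URL are assumptions, not something from your setup; only the /register path and port 3500 come from your error message.

// Read the API base URL from the build environment instead of hardcoding
// localhost. REACT_APP_API_URL is a hypothetical variable name; point it
// at your deployed backend and fall back to localhost only for local dev.
const API_BASE = process.env.REACT_APP_API_URL ?? "http://localhost:3500";

export async function register(username: string, password: string): Promise<Response> {
  // The /register path comes from the Safari error; the payload is assumed.
  return fetch(`${API_BASE}/register`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  });
}

With this in place, a production build baked with an HTTPS backend URL no longer triggers the mixed-content block, while local development keeps working unchanged.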
I'm trying to set up SonarQube behind an Azure Web App using .NET Core's proxy library. This might sound weird, but as Web Apps provide SSL certificates automatically and I am not able to get a custom domain, I thought this solution would be the easiest for me ;)
Now after some playing around, everything works great: the site works without any errors in browsers, and logging in is possible using the Sonar login or Azure Active Directory.
But from my build processes it is just not possible to post the analysis result to the server. The response is always 401.
I checked the SonarQube logs and found the following corresponding entries:
in Web.log
DEBUG web[...][auth.event] login failure [cause|Wrong CSFR in request][method|JWT][provider|LOCAL|local][IP|some ip|actual client ip:37390][login|admin]
in access.log:
some ip - - [...] "POST /api/ce/submit HTTP/1.1" 401 - "-" "Mozilla/5.0 ..." "..."
Therefore I can see that the actual Sonar request comes from a different IP, probably because of the network setup or some other Azure magic.
I cannot figure out how to solve this issue :D
My reverse proxy solution is very simple. Basically, I use an empty ASP.NET Core application and wire up the reverse proxy functionality in Startup.cs like this:
// RunProxy and ProxyOptions come from the Microsoft.AspNetCore.Proxy package.
app.RunProxy(new ProxyOptions
{
    // The back-channel handler controls the HTTP connection to SonarQube.
    BackChannelMessageHandler = new HttpClientHandler
    {
        CheckCertificateRevocationList = false,
        // Accept any backend certificate (fine for a POC, not for production).
        ServerCertificateCustomValidationCallback = (message, certificate2, arg3, arg4) => true,
        AllowAutoRedirect = true,
        AutomaticDecompression = DecompressionMethods.GZip,
        // One cookie container shared across all proxied requests.
        CookieContainer = new CookieContainer
        {
            Capacity = int.MaxValue,
            MaxCookieSize = int.MaxValue,
            PerDomainCapacity = int.MaxValue
        }
    },
    // Where the SonarQube server actually lives.
    Scheme = serverConfiguration.Scheme,
    Host = serverConfiguration.Host,
    Port = serverConfiguration.Port,
});
I also added some middleware to add the X-Forwarded-Proto header, and I check that the X-Forwarded-For header is configured correctly. I also configured the Azure IIS not to truncate query parameters or content in large requests, via web.config.
I also tried to fake it and set the X-Forwarded-For IP to the IP sending the actual request to SonarQube, with no effect.
Does anyone have an idea how to get this solved? :) As this is just a POC setup, I would love to just turn CSRF checking off, but I could not find any config for that. Any help would be appreciated.
Edit + Current Solution
Thinking a bit more about my initial solution, the problem becomes quite clear. I am connecting to the server through Azure App Service's VNet Integration feature. This provides a secure VPN connection between the proxy site and the actual server, but it also causes the IP to be different than expected:
Client [Client IP] -> Web App Proxy [Proxy Public IP] -> VNet VPN [VPN IP of the Web App == some ip in the logs] -> Sonarqube => 401 CSRF error
I guess the X-Forwarded-For chain is not correct in that case, and I don't know how to fix that.
Now, as a workaround, I have added a public IP to the SonarQube server and configured the Network Security Groups to allow traffic only from the Web App (using the Web App's published outgoing IP addresses). With that solution everything works :)
I would still like to use the VNet integration feature, so if someone has an idea, please let me know :)
We have the same problem with Sonar behind Apache as a reverse proxy with SSO. Apache sends the SSO headers in the proxy request to Sonar.
I have reported this problem to the Google group as a bug:
"Sonar native login form displayed randomly even if SSO is used"
https://groups.google.com/forum/#!msg/sonarqube/o2p2ZmjqRN8/UAZZF3tMBgAJ
What I have found is that Apache reuses one connection for different users. User A comes through Apache-Sonar connection 1, Apache then reuses this connection for a request from user B, and the next request on the same Apache-Sonar connection is a new request from user A. That request is then classified as unauthorized and Sonar generates the login form, although the Apache request contains the headers with the SSO login data.
Today I activated DEBUG logs and found the message "Wrong CSFR in request". It really looks like CSRF protection, but with some bug, as if the code ignores the username or something like that.
Regards,
Robert.
I'm working on a Node.js/Express application that, when deployed, sits behind a reverse proxy.
For example: http://myserver:3000/ is where the application actually sits, but users access it at https://proxy.mycompany.com/myapp.
I can get the original user agent request's host from a header passed through easily enough (if the reverse proxy is configured properly), but how do I get that extra bit of path and protocol information from the original URL accessed by the browser?
When my application has to generate redirects, it needs to know that the end user's browser expects the request to go not only to proxy.mycompany.com over HTTPS, but also under the /myapp sub-path.
So far all I can get access to is proxy.mycompany.com, which isn't enough to create a valid redirect.
For dev purposes I'm using a reverse proxy setup in nginx, but my company is using Microsoft's ISA as well as HAProxy.
Generally this is done with x-forwarded-* headers, which are inserted by the reverse proxy itself. For example:
x-forwarded-host: foo.com
x-forwarded-proto: https
Take a look here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/x-forwarded-headers.html
You can probably configure nginx to insert whatever x- header you want, but the convention (standard?) seems to be the above.
If you're reverse proxying into a sub-path such as /myapp, that definitely complicates matters. Presumably that sub-path should be a configuration option available to both nginx and your app.
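As a minimal sketch of the Express side, assuming nginx sends x-forwarded-host and x-forwarded-proto, plus a non-standard x-forwarded-prefix header carrying the /myapp sub-path (that last header is an assumption and has to be configured on the proxy; the routes here are made up for illustration):

import express from "express";

const app = express();

// Trust the first proxy hop so req.protocol honors X-Forwarded-Proto.
app.set("trust proxy", 1);

app.get("/login", (req, res) => {
  const proto = req.protocol;
  // Fall back to the direct Host header when no proxy is in front (e.g. in dev).
  const host = req.get("x-forwarded-host") ?? req.get("host");
  // x-forwarded-prefix is not standardized; we assume the proxy was
  // configured to send the /myapp sub-path in it.
  const prefix = req.get("x-forwarded-prefix") ?? "";
  // Build an absolute redirect the end user's browser can actually follow.
  res.redirect(`${proto}://${host}${prefix}/dashboard`);
});

app.listen(3000);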
I've got a fairly simple app deployed on OpenShift that uses CloudFlare as a DNS provider, since they support CNAME records for the root domain, which our current domain provider does not.
The issue with this setup is somewhere along the line https is not working. I believe this is an OpenShift issue because it's the same kind of issue you get when you've mapped the domain name to your app but haven't added the proper aliases yet - you get a timeout essentially.
We've got two aliases - with www and without. There's no option to specify https or anything with OpenShift aliases from what I can see. There aren't any SSL certificates assigned to these aliases as we do not need or use https - we're on the Free plan.
The main URL to access the site is http://www.jcuri.com - notice this works as expected, however https://www.jcuri.com times out.
Initially we were thinking of using CloudFlare page rules to auto-redirect to a non-HTTPS URL; however, this is locked behind a paywall, which we're hoping to avoid, as we don't need any of the Pro features.
Is there something I'm missing here? It seems that OpenShift is just denying any https connections purely because we don't have certificates assigned to the aliases. I wouldn't even mind if there were certificate errors, at least that would give us a chance to do a redirect on the actual NodeJS application, but we don't even reach that point.
Can anyone offer some advice on this?
Since those domains are not pointed directly at OpenShift via CNAME, but are seemingly redirected via another service (from what I can tell from the DNS), it is hard to say whether it is OpenShift causing the HTTPS issues. If you do not have a custom SSL certificate installed on OpenShift, you will just get an invalid certificate error; but since you are using a redirect service, perhaps that service checks the certificate first, sees the error, and then fails?
Since the HTTPS page rules you mentioned are behind a paywall, it actually makes a lot of sense that they are blocking it, not OpenShift. GoDaddy provides a forwarding service that lets you point both www and the naked domain at OpenShift correctly using CNAMEs; I have used it before.
What's required to setup Neo4j behind IIS proxy server?
I am running into the issue listed here: https://github.com/neo4j/neo4j/issues/112
Error message (Chrome console):
displayed insecure content from http://xyz:7474/db/data/?=1363713541737
xyz is the server name.
Thanks
Considering the GitHub issue is still open, you can assume that this is not currently supported and no workaround has been supplied by the Neo4j team.
If you want to press ahead, you will need to rewrite the content passing through the proxy.
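To illustrate what that rewriting involves, here is a minimal sketch using a plain Node proxy instead of IIS (the public origin is made up; with IIS the equivalent would be outbound URL Rewrite rules). It buffers each upstream response and replaces absolute http://xyz:7474 self-references, which are exactly what triggers the insecure-content warning:

import http from "http";

// Hypothetical values: the Neo4j backend and the public origin the
// browser should see instead of http://xyz:7474.
const BACKEND_HOST = "xyz";
const BACKEND_PORT = 7474;
const PUBLIC_ORIGIN = "https://proxy.example.com";

http.createServer((req, res) => {
  const upstream = http.request(
    {
      host: BACKEND_HOST,
      port: BACKEND_PORT,
      path: req.url,
      method: req.method,
      // Ask for an uncompressed body so we can rewrite it as text.
      headers: {
        ...req.headers,
        host: `${BACKEND_HOST}:${BACKEND_PORT}`,
        "accept-encoding": "identity",
      },
    },
    (ures) => {
      const chunks: Buffer[] = [];
      ures.on("data", (c) => chunks.push(c));
      ures.on("end", () => {
        // Rewrite absolute backend URLs in the body (Neo4j's REST
        // responses embed them as discovery links).
        const body = Buffer.concat(chunks)
          .toString("utf8")
          .split(`http://${BACKEND_HOST}:${BACKEND_PORT}`)
          .join(PUBLIC_ORIGIN);
        res.writeHead(ures.statusCode ?? 502, {
          ...ures.headers,
          "content-length": String(Buffer.byteLength(body)),
        });
        res.end(body);
      });
    }
  );
  // Forward the request body to the backend.
  req.pipe(upstream);
}).listen(8080);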