I am using Longhorn for my K3s cluster. My cluster is in a completely different environment than my local computer. The setup for accessing the dashboard looks like this:
Local PC (env1) <-> Reverse Proxy (env1, IIS6) <-> Reverse Proxy (env2, IIS8) <-> K3s Cluster (env2 nginx ingress) <-> Longhorn dashboard (env2)
This setup works fine for most of my apps, like the Kubernetes dashboard, Graylog, some self-developed stuff, etc.
But Longhorn reloads so frequently that all ephemeral Windows ports on the first reverse proxy get occupied, and then no other connection to the server is possible anymore. Freeing the ports requires a restart of IIS, and even then it takes some time for all connections to close, so it puts high pressure on the server.
My question is: is there something wrong with my IIS configuration? I am using ARR on IIS 6 with a default configuration, nothing special, done through the user interface with the URL Rewrite module.
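For reference, the reverse-proxy rule that the URL Rewrite UI generates in web.config looks roughly like this (the backend address is a placeholder for my env2 proxy):

<system.webServer>
  <rewrite>
    <rules>
      <!-- Forward everything to the next hop; the target URL is a placeholder -->
      <rule name="ReverseProxyToEnv2" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://env2-proxy.example/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>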
Or is there a setting in Longhorn where I can disable the reload of the page?
Here is a screenshot of my browser. As you can see, many requests are sent to the server:
I have a DO Load Balancer with 4 servers behind it. I've been using socket.io with Sticky Sessions enabled in the Load Balancer settings, and it had been working just fine for a while.
Recently, clients have not been able to connect at all, getting a 400 error immediately on connection. I haven't changed anything in the way I connect to the sockets. If I require the transport to be 'websocket' only from the client, it does connect successfully, but then I lose out on the polling fallback (one of the main benefits of socket.io).
Also, connecting directly to one of the droplets works as expected, so the issue definitely lies with the Load Balancer.
Does anyone have any idea what kind of setup should be in place for this to work with the DO Load Balancers? Anything that might have changed recently?
I'm running socket.io on a NodeJS server with Express, if that helps at all.
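For reference, the websocket-only connection I mentioned looks roughly like this on the client (the URL is a placeholder):

// socket.io client sketch: forcing the websocket transport only, which skips
// the HTTP long-polling handshake that sticky sessions are normally needed for.
const io = require('socket.io-client');

const socket = io('https://lb.example.com', {
  transports: ['websocket'] // no polling fallback
});

socket.on('connect', () => console.log('connected:', socket.id));
socket.on('connect_error', (err) => console.error('connect error:', err.message));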
Edit #1: Added a screenshot of the LB Settings
Sorry if this is the wrong question for this forum, but I am simply stuck and need some advice. I have a shared hosting service and a cloud-based hosting server with Node.js installed. I want to host my website as normal, but I also want to add real-time chat and location tracking using Node.js. I am confused by what I am reading in several places: Node.js is itself a server, but it is not designed to host websites? So I have to run two different servers, one for the website and one to run Node.js? When I set up the cloud server with a Node.js script running, I can no longer access the webpages.
What's the best way for me to achieve this? I am just going round in circles. Also, is there a way I can set up a server on my PC and run and test both of these together beforehand, so I can see what is needed and get it working? That would stop me ordering servers I don't need.
Many thanks for any help or advice.
Node can serve webpages using a framework like Express, but it can cause conflicts if run on the same port as another webserver program (Apache, etc.). One solution could be to serve your webpages through your webserver on port 80 (or 443 for HTTPS) and run your Node server on a different port in order to send information back and forth.
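As a minimal sketch (the port and route are just placeholders), the Node side could look like this:

// Minimal Express sketch: a small backend for the real-time features,
// listening on its own port while Apache serves the site on 80/443.
const express = require('express');
const app = express();

app.get('/api/hello', (req, res) => {
  res.json({ message: 'hello from node' }); // placeholder endpoint
});

app.listen(3000, () => console.log('Node service listening on port 3000'));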
There are a number of ways you can achieve this but here is one popular approach.
You can use NGINX as your front facing web server and proxy the requests to your backend Node service.
In NGINX, for example, you will configure your upstream service as follows:
upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}
The 8000 you see above is just an example; you may be running your Node service on a different port.
Further in your config (in the server config section) you will proxy the requests to your service as follows:
location / {
    proxy_pass http://lucyservice;
}
Your Node service can be running under a process manager like forever or pm2. You can have multiple Node processes running in a cluster, depending on how many processors your machine has, etc.
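For example, with pm2 (assuming your entry point is server.js):

pm2 start server.js -i max   # cluster mode: one process per CPU core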
So to recap: your front-facing web server handles all traffic on port 80 (HTTP) and/or 443 (HTTPS), and proxies the requests to your Node service running on whatever port(s) you define. All of this can happen on one single server, or multiple if you need.
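Putting the pieces together, the relevant part of the NGINX config could look roughly like this (server_name is a placeholder):

# Sketch of a front-facing server block proxying to the Node upstream above
upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}

server {
    listen 80;
    server_name example.com;             # placeholder

    location / {
        proxy_pass http://lucyservice;
        proxy_http_version 1.1;          # required for upstream keepalive
        proxy_set_header Connection "";  # clear Connection for keepalive
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}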
I have successfully set up haproxy on my server cluster. I have run into one snag that I can't find a solution for...
TESTING INDIVIDUAL CLUSTER COMPUTERS
It can happen that for one reason or another, one computer in the cluster gets a configuration variation. I can't find a way to tell haproxy that I want to use a specific computer out of a cluster.
Basically, mysite.com (and several other domains) are served up by boxes web1, web2 and web3. And they round-robin perfectly.
I want to add something to the URL to tell haproxy that I specifically want to talk to web2 only because in a specific case, only that server is throwing an error on one web page.
Anyone know how to do that without building a new cluster that has a URI filter and only one computer in it? I am hoping to use the cluster as-is, but add something to the URI that will tell haproxy which server to use out of the cluster.
Thanks!
Have you thought about using a different port for this? You could define a new listen section on a different port, since, as I understand it, you can modify your URL by any means anyway.
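A rough sketch of what that could look like (server names and addresses are placeholders based on the question):

# Normal round-robin setup
listen webcluster
    bind *:80
    balance roundrobin
    server web1 10.0.0.1:80 check
    server web2 10.0.0.2:80 check
    server web3 10.0.0.3:80 check

# Extra listen section on a dedicated port for testing a single box
listen web2-direct
    bind *:8082
    server web2 10.0.0.2:80 check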
Basically, haproxy cannot do what I was hoping: there is no way to add a parameter to the URL to suggest which host in the cluster to use.
I solved my testing issue by setting up unique ports for each server in the cluster at the firewall. This could also be done at the haproxy level, along the lines of the listen-section sketch above.
To secure this path from the outside world, I told the firewall to only accept traffic from inside our own network.
This lets us test specific servers within the cluster. We did have to add a trap in our PHP app to deal with a session cookie that is too large, because we have haproxy manipulating this cookie to keep users on the server they first hit. So when the invalid session cookie is detected, we have the page simply drop the session and reload.
This is working well for our testing purposes.
I have one Windows server already running SharePoint on 80/443, and the site works correctly.
We're trying to add more functionality by installing NodeJS and Apache.
I've set Apache to listen on 8080, and the default website comes up.
Node is running on 3000, and I can access the explorer that way as well.
My questions come from this: the server has a complete certificate chain installed on it, and https://<server>:8080 comes up correctly, but I can't get the Node stuff to work over HTTPS. Secondly, while I seem to have ProxyPass set up correctly within my httpd.conf, something must be wrong with it, because if I go to https://<server>:8080/api/ or anything beyond that, I get 503 errors and the page can't be displayed.
I'm unsure what I'm doing incorrectly here, as from reading the documentation on the proxy module, it seems that everything is set up and configured correctly.
Netstat shows listening on 3000 and 8080, and on 80/443 for my SharePoint farm.
I had to configure the SSL settings for the ProxyPass to use the IP address of the local machine. After doing that, I was able to connect correctly.
This allowed connecting on :3000 via telnet to the local machine, and allowed the explorer to be viewed at https://<server>:8080 the correct way, enforcing our certificates.
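For anyone hitting the same thing, the relevant httpd.conf section ended up looking roughly like this (the IP and certificate paths are placeholders for our environment):

# Sketch: HTTPS vhost on 8080 proxying /api/ to the local Node service,
# using the machine's IP rather than localhost, per the fix above.
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<VirtualHost *:8080>
    SSLEngine on
    SSLCertificateFile "conf/server.crt"       # placeholder path
    SSLCertificateKeyFile "conf/server.key"    # placeholder path

    ProxyPreserveHost On
    ProxyPass        /api/ http://192.168.1.10:3000/   # placeholder local IP
    ProxyPassReverse /api/ http://192.168.1.10:3000/
</VirtualHost>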
I wanted to test my ReactJS + NodeJS website on another machine on my LAN, so I changed the server host IP from localhost to 0.0.0.0 as described in this answer. I noticed that although I could access the server from a remote machine, all I could see was the title and favicon (the rest was a blank page). I tried another approach, using the ngrok module as described here (which happens to be the answer to the same question as the previous link). I still got the same blank page.
The GET requests to the server are shown below (as shown by ngrok).
/landing is a page I was trying to access. Can someone explain what's happening?
PS: The server is running on a Mac, and I'm trying to access the page from an Ubuntu machine. Also, I'm using this react-redux boilerplate. Webpack is also being used, along with hot reloading.
Did you try changing the port settings in the firewall?
Go to the firewall settings and allow the respective port for inbound connections.
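On macOS (the machine serving the app in the question), that could look roughly like this from the terminal (the node path is an assumption; check yours with `which node`):

# Allow incoming connections for the node binary through the macOS
# application firewall (binary path is an assumption)
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --add /usr/local/bin/node
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --unblockapp /usr/local/bin/node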