My online whiteboard application was working previously, but for whatever reason it no longer works. Now it shows a 503 Varnish cache error, as seen here: http://grab.by/eFHG
Do you happen to know where I should start to look to try to resolve this issue?
Thanks!
Donny
This error means that Varnish got no response at all (not even an HTTP error) from the backend within the timeout limit.
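Those timeouts are tunable in the backend definition of your VCL; a minimal sketch, assuming a local backend on port 8080 (the values shown are just examples, not your config):

backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 5s;
    .first_byte_timeout = 60s;
    .between_bytes_timeout = 60s;
}

Raising .first_byte_timeout can make the 503 disappear when the backend is merely slow, but if the backend is down it only delays the error.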
You can troubleshoot this in several ways:
On the backend: do you see requests from Varnish in your webserver log?
On the Varnish server: run varnishlog and check the request processing. You should see events in this order: RxRequest > TxRequest > RxResponse > TxResponse. Your problem is between TxRequest (request sent to the backend) and RxResponse (response received from the backend).
On your Varnish server, try connecting to the backend using telnet (telnet <backend_host> <backend_port>). Does it connect? If it does, try sending a request (e.g. "GET / HTTP/1.0" followed by a blank line). Do you receive a response? (A sample session is sketched after this list.)
Probable causes: a firewall or SELinux blocking traffic between Varnish and the backend, a bad Varnish or backend web server config (do the backend address and port match?), the webserver being stopped, ...
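For the telnet check, a session might look like this (127.0.0.1 and 8080 stand in for your backend address and port):

telnet 127.0.0.1 8080
GET / HTTP/1.0
Host: example.com

Press Enter twice after the Host line to finish the request. If the connection is refused or the response never comes, the problem is on the backend side or in between; if you do get a response here while Varnish still serves a 503, recheck the backend definition in your VCL.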
You could always check your /etc/varnish/default.vcl (the default location on CentOS).
backend default {
    .host = "127.0.0.1";
    .port = "80";
}
Make sure the .host value is your server's IP address, change the port to 8080, then adjust the port setting in /etc/httpd/conf/httpd.conf so that Apache listens on 8080.
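The adjusted backend block would then look like this (127.0.0.1 assumes Apache runs on the same host as Varnish):

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}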
EDIT
It could also mean the web server (Apache) has wrong or default settings.
Check the virtual host ports; the values might be 80 when they should be 8080.
Check the Listen directive and ServerName; they must use port 8080, as in the sketch below.
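A minimal sketch of the Apache side of that change (example.com and the DocumentRoot are placeholders):

# /etc/httpd/conf/httpd.conf
Listen 8080

<VirtualHost *:8080>
    ServerName example.com
    DocumentRoot /var/www/html
</VirtualHost>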
Make sure you do sudo systemctl enable varnish so Varnish starts on boot.
And for the final touch, do a sudo reboot (caching programs keep tripping me up during development).
When I tried to reach this page today, I was shown the error page instead, with the following contents:
Error 503 Service Unavailable
Service Unavailable
Guru Mediation:
Details: cache-hel6832-HEL 1623148717 170212118
So it seems StackOverflow also uses a CDN, which was failing today, and the failure was impacting a huge number of services at the same time. In this kind of situation we are at the mercy of the external CDN provider. Today showed us that sometimes the only thing you can do is wait for others to solve it for you.
Luckily these kinds of outages are short-lived.
A suitable long-term strategy is to use several CDN services instead of a single one.
I am using Nginx as my HTTPS server to serve the HTTP content from my node server.
I am also hosting my server on Google Cloud.
I kept getting a 504 Gateway Timeout error, so I wondered if it was because I hadn't opened port 8080 on my upstream server (the node server). After opening it, it works, but I'm not sure if that is the correct way to do it.
But then I kept looking at other docs and tutorials online, and I never saw anyone configure the connection to a node server this way; they mostly leave only port 80 open. So I wondered if my server block config was causing the 504 gateway problem.
---------- second update
This is my setting, and the default_server flag was there by default.
But I always see docs include the server_name directive; I don't quite understand it. Should I set it for later use, even though it works now?
As an aside, I got a server error from my app:
FetchError: request to https://34.96.213.54:443/search/guest2 failed, reason: self-signed certificate
Why does it work in Chrome, and when I call the API directly in Postman, but not here?
---------- third update
About the self-signed certificate: you need to buy a certificate or use a free service like https://letsencrypt.org. Besides that, your questions are quite basic, so you should research the nginx docs some more (http://nginx.org/en/docs/http/server_names.html), starting with the server block sketched below.
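A minimal sketch of the kind of server block being discussed, proxying HTTPS traffic to a local node server (the domain, certificate paths, and upstream port are assumptions):

server {
    listen 443 ssl default_server;
    server_name example.com;                       # the hostname this block answers for

    ssl_certificate     /etc/ssl/example.com.crt;  # hypothetical certificate paths
    ssl_certificate_key /etc/ssl/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;          # node server stays plain HTTP on localhost
        proxy_set_header Host $host;
    }
}

With this shape, port 8080 only needs to be reachable from nginx on the same machine, not from the internet, which is why the tutorials you saw only leave port 80 (and 443) open.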
I have an architecture where I have multiple instances but I want to maximize cache hits.
Users are defined in groups and I want to make sure that all users that belong to the same group hit the same server as much as possible.
The application is fully stateless, but having users from the same group hitting the same server will dramatically improve performance and reduce the memory load across all instances.
When loading the main page, I already know which server I would like to send this user to for the XHR calls.
Using the ARRAffinity cookie is not really great and is almost impossible in this scenario (cross-domain, have to make a server call first, etc.), and I would strongly prefer sending a hint myself through a custom header.
I'm manually trying some workarounds, deleting the cookies and reassigning them, but it feels very hacky, I don't have it fully working yet, and it doesn't work for XHR calls.
Question:
Is it possible to direct to a specific instance through a header, url or domain instead of a cookie?
Notes
A distributed cache does not work for me in this case. I need the performance of an in-memory cache, without extra network hops and serialization/deserialization.
This seems to be possible with Application Gateway, but it seems to need a lot of extra infrastructure and moving parts, while all my problems would be fixed by sending the "right" header.
I could fix this by duplicating the web app in its entirety and assigning a different hostname, but this too feels like adding a lot of extra moving parts that can break. Maintenance would be harder and more confusing, I'd lose autoscale, etc.
Maybe this could be fixed easily by a Kubernetes/Docker Swarm type of architecture (no experience with those), but as this is a large legacy project and I have a pretty strict deadline, I am very cautious about making such a dramatic switch at the last minute.
If I am understanding correctly, you want to set a custom header via a client application and, based on that, proxy the connection to a particular backend server.
I like to use HAProxy for that; you can also look into Nginx for the same purpose.
You can install HAProxy on Linux from your distribution's package manager, or you can use the available HAProxy Docker container.
An example of installing it on ArchLinux:
sudo pacman -S haproxy
sudo systemctl start haproxy
Once it's installed, find out where your haproxy.cfg config file is located, then replace the existing default config with the haproxy.cfg snippet that I posted below.
In my case haproxy.cfg was in /etc/haproxy/haproxy.cfg
To achieve what you want in HAProxy, you would have all clients communicate with this main HAProxy server, which then forwards the connection to one of your backend servers based on the value of a custom header that you set client side, for example "x-mycustom-header: Server-one". As a bonus, you can also enable sticky sessions on HAProxy if needed, but that is not mandatory for what you are looking for.
Here is a simple example setup for the HAProxy config file (haproxy.cfg) with only 2 backend servers, but you can add more.
The logic here is that all the clients make HTTP requests to the HAProxy server listening on port 80; HAProxy checks the value of the custom header called 'x-mycustom-header' that the clients added and, based on that value, forwards the client to either backend_server_one or backend_server_two.
For testing purposes both HAProxy and the two backends are on the same box but listening on different ports. HAProxy on port 80, server1 is on 127.0.0.1:3000 and server2 is on 127.0.0.1:4000.
cat haproxy.cfg
#---------------------------------------------------------------------
# Example configuration. See the full configuration manual online.
#
# http://www.haproxy.org/download/1.7/doc/configuration.txt
#
#---------------------------------------------------------------------
global
    maxconn 20000
    log 127.0.0.1 local0
    user haproxy
    chroot /usr/share/haproxy
    pidfile /run/haproxy.pid
    daemon

frontend main
    bind :80
    mode http
    log global
    option httplog
    option dontlognull
    option http_proxy
    option forwardfor except 127.0.0.0/8
    maxconn 8000
    timeout client 30s
    # -i makes the match case-insensitive, so "Server-one" matches "server-one"
    use_backend backend_server_one if { req.hdr(x-mycustom-header) -i server-one }
    use_backend backend_server_two if { req.hdr(x-mycustom-header) -i server-two }
    # when the header is anything else, default to the first backend
    default_backend backend_server_one

backend backend_server_one
    mode http
    balance roundrobin
    timeout connect 5s
    timeout server 5s
    server static 127.0.0.1:3000   # change this IP to your 1st backend's address

backend backend_server_two
    mode http
    balance roundrobin
    timeout connect 5s
    timeout server 30s
    server static 127.0.0.1:4000   # change this IP to your 2nd backend's address
To test that this works you can open two netcat listeners, one on port 3000 and the other on port 4000; run them in different screens or different ssh sessions.
nc -l 3000 # in the first screen
nc -l 4000 # in a second screen
Then after you do a sudo systemctl reload haproxy to make sure that HAProxy is reloaded with your new config file, you can make an http GET request on port 80 and provide the "x-mycustom-header: Server-one" header.
You will be able to see the request in the output of the netcat instance that is listening on port 3000.
Now change the header to "x-mycustom-header: Server-two" and make a second GET request; this time you will see that the request reached the second netcat instance, listening on port 4000, which indicates that the routing works. Both requests are sketched below.
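For example, with curl (assuming HAProxy listens on localhost; netcat never sends a response back, so each curl will hang until you Ctrl-C it, but the request text shows up in the listener's output):

curl -H "x-mycustom-header: Server-one" http://127.0.0.1/   # appears in the nc on port 3000
curl -H "x-mycustom-header: Server-two" http://127.0.0.1/   # appears in the nc on port 4000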
Tested on ArchLinux
The Microsoft team has responded and confirmed that this is not possible at the moment.
I'm running a nodejs webserver on Azure, using the express library for HTTP handling. We've been attempting to enable Cloudflare protection on the domains pointing to this box, but when we turn Cloudflare proxying on, we see cycling periods of requests succeeding and requests failing with a 524 error. I understand this error is returned when the server fails to respond to the connection with an HTTP response in time, but I'm having a hard time figuring out why it is:
A. Only failing sometimes as opposed to all the time
B. Immediately fixed when we turn Cloudflare proxying off.
I've been attempting to confirm the TCP connection using tcpdump -i eth0 port 443 | grep cloudflare (the requests come over https), and have seen curl requests fail seemingly without any traffic hitting the box, while others do arrive. For further reference, these requests should be, and are, quite quick when they succeed, so I find it hard to believe the issue is a long-running process stalling the response.
I do not believe we have any sort of IP-based throttling or firewall (at least not intentionally?).
Any ideas greatly appreciated, thanks
It seems that the issue was caused by DNS resolution.
On Azure, you can configure a custom domain name for your web app, and to use CloudFlare you need to switch DNS resolution to CloudFlare's DNS servers; please see more information on configuring domain names at https://azure.microsoft.com/en-us/documentation/articles/web-sites-custom-domain-name/.
You can also refer to the CloudFlare FAQ "How do I enter Windows Azure DNS records in CloudFlare?" to make sure the DNS settings are correct.
Try clearing your cookies.
I had a similar issue when I changed Cloudflare settings to a new host; the Cloudflare cookies for the domain were doing something funky to the request (I am guessing they might have been trying to contact the old host?).
I have a website behind CloudFlare. I need to enable websockets over SSL without turning off CloudFlare support. I have a PRO plan and hence won't get the new websocket support. I am using Nginx to proxy an SSL connection to a websocket running on a node server. Now, I read somewhere that CloudFlare would support websockets on its approved ports, so I'm using 8443 for the Nginx port and another port for the node server. Using wscat it returns a 200 error.
$ wscat -c wss://xyz.com:8443
error: Error: unexpected server response (200)
I know that the websocket client is expecting a 101 code. However, if I visit https://xyz.com:8443, I can see the page displayed by the node server telling me the proxy is working. Also, once I turn off CloudFlare support, the websocket starts working. Any clues to get this working? I know I could create a subdomain, but I'd prefer running the websocket behind CloudFlare.
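For reference, the kind of Nginx proxy block described here might look like this (the domain, certificate paths, and node port are assumptions):

server {
    listen 8443 ssl;
    server_name xyz.com;

    ssl_certificate     /etc/ssl/xyz.com.crt;    # hypothetical certificate paths
    ssl_certificate_key /etc/ssl/xyz.com.key;

    location / {
        proxy_pass http://127.0.0.1:3000;        # node server running the websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;  # forward the upgrade so the
        proxy_set_header Connection "upgrade";   # 101 handshake can complete
    }
}

The Upgrade/Connection headers are what let the 101 handshake through Nginx; with CloudFlare proxying in front, the handshake also has to survive their network, which is what the answer below addresses.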
If you're trying to access this through CloudFlare's network you'd need to explicitly have web sockets enabled on your domain before they will work -- regardless of the port. As in, even if the port can pass through our network, that won't automatically mean that web sockets will be enabled or accessible on your domain.
You can try contacting our support team to request an exception to see if they can enable it for your domain, but typically this is still only available at the business and enterprise levels.
Disclaimer: I work at CloudFlare.
I am attempting to set up my first Varnish Cache server and I have a couple of questions for anyone with experience.
1.) I am running Varnish as a standalone server. Do I need Apache installed on the same server as well? The actual site that will be behind Varnish is ultimately not on this server.
2.) Do I point the domain to Varnish and then set the config to point to the IP address of the server that is hosting the site? If so, how do you point it to the right site?
3.) If Varnish is standalone and I have an Apache content server, can they both use port 80, with just the IP address changed in the default.vcl?
backend default {
    .host = "198.221.134.235";
    .port = "80";
}
Sorry for the basic questions. I have been on Google all weekend and found plenty of information on how to install and configure Varnish, but every guide seems to assume the site you want to cache is on the same server, since they all change the port Apache listens on, which would mean the site is living on the same box.
And if you have any good sites with information, please feel free to share them! Thanks again!
No, Varnish and Apache (or any other HTTP/web server) can run on separate servers.
Indeed, point the domain to the IP of Varnish and set up a backend as described in the documentation: https://www.varnish-cache.org/docs/3.0/tutorial/backend_servers.html. The IP of your webserver will be the IP of the backend.
Correct; as long as Apache and Varnish are on separate servers, they can both listen on port 80.
If I am not mistaken, you will end up with the following setup:
DNS example.com => 1.1.1.1
IP 1.1.1.1:80: Varnish (backend: 1.1.1.2:80)
IP 1.1.1.2:80: Apache
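In VCL terms, for the hypothetical addresses above, the backend definition on the Varnish box would be:

backend default {
    .host = "1.1.1.2";   # the IP of the Apache server
    .port = "80";
}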