We're using Varnish 3.0.3. Varnish is behind a load balancer.
We would like to bypass the Varnish cache for a particular IP address. After doing some research I came up with the following, but unfortunately it is not working:
acl passem { "7x.xxx.xxx.xxx"; }

sub vcl_recv {
    if (!(client.ip ~ passem)) {
        return (pass);
    }
}
This appears in varnishlog: "6 VCL_acl c NO_MATCH passem"
I'm not sure what is wrong. The only thing I can think of is that Varnish is not seeing the incoming IP address. This is what I see in varnishlog:
6 RxHeader c X-Real-IP: "7x.xxx.xxx.xxx"
6 RxHeader c X-Forwarded-For: "7x.xxx.xxx.xxx"
6 SessionOpen c 10.10.10.4 58143 0.0.0.0:80
6 ReqStart c 10.10.10.4 58143 1026834560
The RxHeader lines show the correct IP, which matches acl passem, but I suspect acl passem is instead matching against the SessionOpen IP address, which is the IP address of the load balancer.
In Varnish, "X-Real-IP" and "http.x-forwarded-for" are strings and "client.ip" is an object.
Extra code is required to copy the IP address from the "X-Forwarded-For" header into Varnish's client_ip structure.
Below is what was required to make it work. This worked successfully. Credit goes to http://zcentric.com/2012/03/16/varnish-acl-with-x-forwarded-for-header/
C{
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>
}C

acl passem { "7x.xxx.xxx.xxx"; }

sub vcl_recv {
    C{
        /* Overwrite the client_ip structure with the address from the
           X-Forwarded-For header, so client.ip matches the ACL. */
        struct sockaddr_storage *client_ip_ss = VRT_r_client_ip(sp);
        struct sockaddr_in *client_ip_si = (struct sockaddr_in *) client_ip_ss;
        struct in_addr *client_ip_ia = &(client_ip_si->sin_addr);
        char *xff_ip = VRT_GetHdr(sp, HDR_REQ, "\020X-Forwarded-For:");

        if (xff_ip != NULL) {
            inet_pton(AF_INET, xff_ip, client_ip_ia);
        }
    }C

    if (!(client.ip ~ passem)) {
        return (pass);
    }
}
Yes, client.ip will be the address of the connection Varnish actually sees (your load balancer), not anything forwarded in headers. You need to use the forwarded header instead, for instance req.http.X-Real-IP. In current varnish-cache versions you can turn it into an IP with std.ip(), e.g.:
import std;

sub vcl_recv {
    if (std.ip(req.http.X-Real-IP) !~ passem) {
        return (pass);
    }
}
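Note that in some vmod_std versions std.ip() requires a fallback IP as its second argument; passing one explicitly works either way. A minimal sketch along the same lines:

import std;

sub vcl_recv {
    # "0.0.0.0" is only a fallback for when the header is missing or unparsable
    if (std.ip(req.http.X-Real-IP, "0.0.0.0") !~ passem) {
        return (pass);
    }
}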
I am trying to customize what a user sees on the Guru Meditation error screen when an erring request to my backend, which has a Varnish reverse proxy in front, fails. I have tried placing log.info(client.ip) in the different subroutines of default.vcl, only to run into compilation errors when trying to start the Varnish service. I am using a Linux virtual machine.
You certainly can. Have a look at the following tutorial I created: https://www.varnish-software.com/developers/tutorials/vcl-synthetic-output-template-file/.
Modifying the synthetic output templates
Here's the VCL code you need to extend the regular vcl_synth subroutine for the case where you call return(synth()) from your VCL code, as well as the code to extend vcl_backend_error in case of a backend fetch error:
vcl 4.1;

import std;

sub vcl_synth {
    set resp.http.Content-Type = "text/html; charset=utf-8";
    set resp.http.Retry-After = "5";
    set resp.body = regsuball(std.fileread("/etc/varnish/synth.html"), "<<REASON>>", resp.reason);
    return (deliver);
}

sub vcl_backend_error {
    set beresp.http.Content-Type = "text/html; charset=utf-8";
    set beresp.http.Retry-After = "5";
    set beresp.body = regsuball(std.fileread("/etc/varnish/synth.html"), "<<REASON>>", beresp.reason);
    return (deliver);
}
As explained in the tutorial, you can store the HTML code you want to display in an HTML file and load it into your VCL output.
The trick is to put a <<REASON>> placeholder in your HTML where the actual error message should be inserted.
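For illustration only, a minimal /etc/varnish/synth.html could look like the sketch below; the file name and markup are just examples, the only requirement is the <<REASON>> placeholder:

<!DOCTYPE html>
<html>
  <head><title>Sorry</title></head>
  <body>
    <!-- <<REASON>> is replaced by the regsuball() calls in the VCL above -->
    <h1><<REASON>></h1>
    <p>Please try again in a few seconds.</p>
  </body>
</html>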
Adding custom logging to your VCL
If you want to add custom logging that gets sent to VSL, you can use the std.log() function that is part of vmod_std.
Here's some example VCL code that uses this function:
vcl 4.1;

import std;

sub vcl_recv {
    std.log("Client IP: " + client.ip);
}
The log will be displayed through a VCL_Log tag in your VSL output.
If you only want to display the VCL_Log tags, you can use the following command:
varnishlog -g request -i VCL_Log
This is the output you may receive:
* << Request >> 32770
- VCL_Log Client IP: 127.0.0.1
** << BeReq >> 32771
If you're not filtering on the VCL_Log tag, you'll still see it appear in your VSL output if you run varnishlog -g request.
Tip: if you want to see the full log transaction but only for a specific URL, just run varnishlog -g request -q "ReqUrl eq '/'". This will only display the logs for the homepage.
Update: displaying the client IP in the synthetic output
The VCL code below injects the X-Forwarded-For header into the output by concatenating it to the reason phrase:
vcl 4.1;

import std;

sub vcl_synth {
    set resp.http.Content-Type = "text/html; charset=utf-8";
    set resp.http.Retry-After = "5";
    set resp.body = regsuball(std.fileread("/etc/varnish/synth.html"), "<<REASON>>", resp.reason + " (" + req.http.X-Forwarded-For + ")");
    return (deliver);
}

sub vcl_backend_error {
    set beresp.http.Content-Type = "text/html; charset=utf-8";
    set beresp.http.Retry-After = "5";
    set beresp.body = regsuball(std.fileread("/etc/varnish/synth.html"), "<<REASON>>", beresp.reason + " (" + bereq.http.X-Forwarded-For + ")");
    return (deliver);
}
It's also possible to provide a second placeholder in the template and perform an extra regsuball() call, but for the sake of simplicity the X-Forwarded-For header is just appended to the reason string.
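If you do want the two-placeholder variant, a sketch (assuming the template additionally contains a <<CLIENT_IP>> marker and the X-Forwarded-For header is set) could look like this:

sub vcl_synth {
    set resp.http.Content-Type = "text/html; charset=utf-8";
    set resp.http.Retry-After = "5";
    # Substitute the reason first, then the client IP placeholder
    set resp.body = regsuball(
        regsuball(std.fileread("/etc/varnish/synth.html"), "<<REASON>>", resp.reason),
        "<<CLIENT_IP>>", req.http.X-Forwarded-For);
    return (deliver);
}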
I need to make a simple GET call using HTTP. Despite successfully connecting to the WiFi network and getting the IP, gateway and DNS correctly, the HTTP client keeps getting a connection failed error. I tried using the WiFi client as well, but no luck. I don't think the issue is with my code; I tried the code from the examples folder as well and it fails too. I also tried connecting to a local internal server and it failed. I have spent about a whole day on this and I can't figure out what could be wrong.
The code is given below. That is all there is at the moment.
I am using the Arduino framework for Espressif on PlatformIO, with the Visual Studio Code editor on Windows.
The WifiConnect function is called from setup().
Initially I was only calling WiFi.begin() to connect, which also connected to the WiFi, but since the connection kept failing I switched to ESP8266WiFiMulti as in the sample code.
I even checked that host name resolution works:
IPAddress remote_addr;
WiFi.hostByName("192.168.2.107", remote_addr);
Serial.println(remote_addr);
But connecting to any server always fails. I've tried connecting to servers internal to the network, servers on the internet, servers running HTTP on special ports, the normal port 80, etc., but it stubbornly refuses to connect.
#include "Arduino.h"
#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>
#include "Ticker.h"
#include <WiFiClient.h>
#include <ESP8266WiFiMulti.h>
ESP8266WiFiMulti WiFiMulti;
void WifiConnect()
{
WiFi.mode(WIFI_STA);
WiFiMulti.addAP("MySSID", "MySECret Password");
Serial.print("Connecting");
while (WiFiMulti.run() != WL_CONNECTED)
{
delay(500);
Serial.print(".");
}
Serial.println();
Serial.print("Connected, IP address: ");
Serial.println(WiFi.localIP());
Serial.println(WiFi.dnsIP());
Serial.println(WiFi.gatewayIP());
Serial1.println(WiFi.localIP());
}
void logToThinkSpeakt(){
// wait for WiFi connection
if ((WiFiMulti.run() == WL_CONNECTED)) {
WiFiClient client;
HTTPClient http;
Serial.print("[HTTP] begin...\n");
if (http.begin(client, "http://jigsaw.w3.org/HTTP/connection.html")) { // HTTP
Serial.print("[HTTP] GET...\n");
// start connection and send HTTP header
int httpCode = http.GET();
// httpCode will be negative on error
if (httpCode > 0) {
// HTTP header has been send and Server response header has been handled
Serial.printf("[HTTP] GET... code: %d\n", httpCode);
// file found at server
if (httpCode == HTTP_CODE_OK || httpCode == HTTP_CODE_MOVED_PERMANENTLY) {
String payload = http.getString();
Serial.println(payload);
}
} else {
Serial.printf("[HTTP] GET... failed, error: %s\n", http.errorToString(httpCode).c_str());
}
http.end();
} else {
Serial.printf("[HTTP} Unable to connect\n");
}
}
}
PLATFORM: Espressif 8266 2.6.2 > NodeMCU 1.0 (ESP-12E Module)
HARDWARE: ESP8266 80MHz, 80KB RAM, 4MB Flash
PACKAGES:
framework-arduinoespressif8266 3.20704.0 (2.7.4)
tool-esptool 1.413.0 (4.13)
tool-esptoolpy 1.20800.0 (2.8.0)
tool-mklittlefs 1.203.200522 (2.3)
tool-mkspiffs 1.200.0 (2.0)
toolchain-xtensa 2.40802.200502 (4.8.2)
LDF: Library Dependency Finder
LDF Modes: Finder ~ chain, Compatibility ~ soft
Found 29 compatible libraries
Scanning dependencies...
Building in release mode
This is a C++ server application which communicates with all clients over UDP. When a user logs into the server from a client, the client application registers a UDP channel with the server. This channel is identified by a fixed key, IP+Port, which means that if the IP stays the same, the client registers the same channel no matter which user logs in.
The server's socket layer maintains a heartbeat mechanism which removes a channel if it doesn't receive any heartbeat packets on it for 3 minutes. Everything works fine until a client goes down, e.g. the network cable is unplugged. Consider the following scenario:
1. User-A logs into the server. The client registers the channel (IP:Port) with the server. Because the UDP channel is alive, the server sets User-A's status to Online.
2. Kill the client process and, within 3 minutes (before the channel times out on the server), let User-B log into the server from the same computer. Because the IP is unchanged, the client registers the same (IP:Port) pair with the server as it did when User-A logged in.
3. Since the server receives packets from (IP:Port), it considers User-A to still be alive and keeps User-A's status as Online, which is no longer correct.
In the above scenario, the server is not able to distinguish different users logged in from the same computer, which results in wrong user states. Does anybody know how to solve this problem?
I see no reason to presume that the origin port number for any two users will be identical unless the client application is explicitly binding the UDP socket. Clients which initiate the communication can often use ephemeral ports just as effectively. Ephemeral ports may or may not be sufficiently random for your particular use case; the code below shows how to access the client's port from inbound UDP data. If they are not sufficiently random, it may be wise to encode session cookies or user cookies into the protocol, as sketched after the example.
#include <sys/socket.h>
#include <sys/types.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>

int
main() {
    /* setup UDP socket on 8500 */
    int rc;
    int server_socket;
    struct sockaddr_in server_address;

    server_socket = socket(AF_INET, SOCK_DGRAM, 0);
    if (server_socket < 0) {
        perror("failed to init socket");
        return 1;
    }

    memset(&server_address, 0, sizeof(server_address));
    server_address.sin_family = AF_INET;
    server_address.sin_port = htons(8500);
    server_address.sin_addr.s_addr = inet_addr("127.0.0.1");

    rc = bind(server_socket, (struct sockaddr *) &server_address,
              sizeof(server_address));
    if (rc < 0) {
        perror("failed to bind");
        return 2;
    }

    /* receive from UDP socket and print out the origin port */
    char buffer[4096];
    struct sockaddr_in client_address;
    socklen_t client_address_len = sizeof(client_address);

    rc = recvfrom(server_socket, buffer, sizeof(buffer) - 1, 0,
                  (struct sockaddr *) &client_address, &client_address_len);
    if (rc < 0)
        return 3;
    buffer[rc] = '\0';  /* make sure the datagram is printable as a string */

    fprintf(stderr, "%s %d\n", buffer, ntohs(client_address.sin_port));
    return 0;
}
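As for the session-cookie approach mentioned above, here is a minimal sketch of the receiving side, under the assumption that the client prepends a fixed-length session id (generated at login) to every datagram; the 16-byte length and the helper name are just illustrative:

#include <stdint.h>
#include <string.h>

#define SESSION_ID_LEN 16   /* illustrative; use whatever your protocol defines */

/* Split a received datagram into (session id, payload).
 * Returns 0 on success, -1 if the datagram is too short to contain an id. */
static int parse_session_id(const char *buf, size_t len,
                            uint8_t session_id[SESSION_ID_LEN],
                            const char **payload, size_t *payload_len)
{
    if (len < SESSION_ID_LEN)
        return -1;
    memcpy(session_id, buf, SESSION_ID_LEN);
    *payload = buf + SESSION_ID_LEN;
    *payload_len = len - SESSION_ID_LEN;
    return 0;
}

The heartbeat bookkeeping then keys on the session id instead of on (IP:Port), so a new login from the same machine gets a new id and cannot keep the previous user's channel alive.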
I see the following behavior with sendmsg in the IPv4 case.
Suppose that 10.1.2.3 is the client IP, and 10.1.2.10 is configured on one of the interfaces of the client.
In a UDP message, the following control information is added to the packet. It is just the source address (or interface address) that the server should use when replying back to the client:
cmsg->cmsg_len = sizeof(struct cmsghdr) + sizeof(sa->sin_addr);
cmsg->cmsg_level = IPPROTO_IP;
cmsg->cmsg_type = IP_SENDSRCADDR_WITH_ERROR;
*(struct in_addr *)CMSG_DATA(cmsg) = sa->sin_addr;

cmsg = (struct cmsghdr *)((caddr_t) cmsg + ALIGN(cmsg->cmsg_len));
And the message is sent as:
sendmsg(fd, send_msg, 0);
If I configure 10.1.2.10 as the source IP and add it into cmsg, things work fine: the server replies back to 10.1.2.10.
But if I configure some unreachable IP address, or an IP that is not configured on any interface of the client, sendmsg fails with the error below:
sendmsg to 10.1.2.3(10.1.2.3).1813 failed: Can't assign requested address
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
But I do not see the same behavior with IPv6.
Suppose that 2001::1 is the client IP, and 2001::2001 is configured on one of the interfaces of the client.
The IPv6 source address is added into the control message as below:
cmsg->cmsg_level = IPPROTO_IPV6;
cmsg->cmsg_type = IPV6_PKTINFO;
cmsg->cmsg_len = CMSG_LEN(sizeof(struct in6_pktinfo));
memcpy((struct in6_addr *)CMSG_DATA(cmsg), &(sa6->sin6_addr),
       sizeof(sa6->sin6_addr));

cmsg = (struct cmsghdr *)((caddr_t) cmsg + ALIGN(cmsg->cmsg_len));
It works fine if I configure 2001::2001 as the source IP: the server does reply back to this address.
But if I configure an unreachable IPv6 source address, say 1001::1001, there is no error from sendmsg similar to the one we see in the IPv4 case. The message is still sent with the original IPv6 source address, 2001::1.
Can someone please suggest what the problem might be?
Thanks.
IP_SENDSRCADDR and IPV6_PKTINFO must be two different implementations; maybe the first one simply checks the supplied address for errors. Have you tried setting the interface index in the ancillary data for IPV6_PKTINFO? For IPV6_PKTINFO the ancillary data is of type in6_pktinfo:
struct in6_pktinfo {
    struct in6_addr ipi6_addr;    /* src/dst IPv6 address */
    unsigned int    ipi6_ifindex; /* send/recv interface index */
};
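For what it's worth, a sketch of filling in the complete in6_pktinfo, interface index included, might look like the helper below. The function name is made up, and the caller is assumed to have sized the control buffer with CMSG_SPACE(sizeof(struct in6_pktinfo)) and obtained cmsg via CMSG_FIRSTHDR():

#include <netinet/in.h>
#include <string.h>

/* Hypothetical helper: set both the source address and the outgoing
   interface index in one IPV6_PKTINFO ancillary block. */
static void fill_v6_srcinfo(struct cmsghdr *cmsg,
                            const struct in6_addr *src, unsigned int ifindex)
{
    struct in6_pktinfo pktinfo;

    memset(&pktinfo, 0, sizeof(pktinfo));
    pktinfo.ipi6_addr = *src;        /* source address the server should reply to */
    pktinfo.ipi6_ifindex = ifindex;  /* e.g. obtained via if_nametoindex() */

    cmsg->cmsg_level = IPPROTO_IPV6;
    cmsg->cmsg_type = IPV6_PKTINFO;
    cmsg->cmsg_len = CMSG_LEN(sizeof(pktinfo));
    memcpy(CMSG_DATA(cmsg), &pktinfo, sizeof(pktinfo));
}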
Hope this helps in some way
I am hitting the same issue: I set the source address to 408:6666:f:f500::1 (not a local IP), but the packet is received with 4085:6666:f:fc10::1 (the local IP) as the source address, no matter whether I set ipi6_ifindex or not.
I will keep investigating it.
I've been trying like mad to figure out the VCL for how to do this and am beginning to think it's not possible. I have several backend app servers that serve a variety of different hosts. I need varnish to cache pages for any host and send requests that miss the cache to the app servers with the original host info in the request ("www.site.com"). However, all the VCL examples seem to require me to use a specific host name for my backend server ("backend1" for example). Is there any way around this? I'd love to just point the cache miss to an IP and leave the request host intact.
Here's what I have now:
backend app1 {
    .host = "192.168.1.11";
    .probe = {
        .url = "/heartbeat";
        .interval = 5s;
        .timeout = 1 s;
        .window = 5;
        .threshold = 3;
    }
}

backend app2 {
    .host = "192.168.1.12";
    .probe = {
        .url = "/heartbeat";
        .interval = 5s;
        .timeout = 1 s;
        .window = 5;
        .threshold = 3;
    }
}

director pwms_t247 round-robin {
    { .backend = app1; }
    { .backend = app2; }
}
sub vcl_recv {
    # pass on any requests that varnish should not handle
    if (req.request != "HEAD" && req.request != "GET" && req.request != "BAN") {
        return (pass);
    }

    # pass requests to the backend if they have a no-cache header or cookie
    if (req.http.x-varnish-no-cache == "true" || (req.http.cookie && req.http.cookie ~ "x-varnish-no-cache=true")) {
        return (pass);
    }

    # Lookup requests that we know should be cached
    if (req.url ~ ".*") {
        # Clear cookie and authorization headers, set grace time, lookup in the cache
        unset req.http.Cookie;
        unset req.http.Authorization;
        return (lookup);
    }
}
etc...
This is my first StackOverflow question so please let me know if I neglected to mention something important! Thanks.
Here is what I actually got to work. I credit ivy because his answer is technically correct, and because one of the problems was my host (they were blocking ports that prevented my normal web requests from getting through). The real problem I was having was that heartbeat messages had no host info, so the vhost couldn't route them correctly. Here's a sample backend definition with a probe that crafts a completely custom request:
backend app1 {
    .host = "192.168.1.11";
    .port = "80";
    .probe = {
        .request =
            "GET /heartbeat HTTP/1.1"
            "Host: something.com"
            "Connection: close"
            "Accept-Encoding: text/html";
        .interval = 15s;
        .timeout = 2s;
        .window = 5;
        .threshold = 3;
    }
}
I need varnish to cache pages for any host and send requests that miss the cache to the app servers with the original host info in the request ("www.site.com"). However, all the VCL examples seem to require me to use a specific host name for my backend server ("backend1" for example).
backend1 is not a hostname; it's a backend definition with an IP address. You're defining routing logic in your VCL file (which backend a request is proxied to), but you're not changing the hostname in the request. What you're asking for (keeping the hostname the same) is the default behavior.
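In other words, with the director you already have, something like the sketch below (Varnish 3 syntax, matching your config) is enough; the Host header the client sent ("www.site.com") is forwarded to whichever backend is picked, untouched:

sub vcl_recv {
    # Route to the round-robin director; req.http.host is not modified,
    # so the app servers still see the original host.
    set req.backend = pwms_t247;
}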