I'm facing strange reconnect behavior after restarting an Azure Web App that hosts my SignalR hub. When I restart, even if the application comes back up in less than the DisconnectTimeout (tested with 2 min), the client doesn't reconnect.
Am I doing something wrong?
Hub Code
public class PingHub : Hub
{
    public void Hello()
    {
        Clients.All.hello();
    }

    public override Task OnReconnected()
    {
        Trace.WriteLine("Reconnect");
        return base.OnReconnected();
    }

    public override Task OnConnected()
    {
        Trace.WriteLine("Connect");
        return base.OnConnected();
    }
}
Client Code
var hubConnection = new HubConnection("http://url/");
hubConnection.TraceLevel = TraceLevels.All;
hubConnection.TraceWriter = Console.Out;
IHubProxy hubProxy = hubConnection.CreateHubProxy("PingHub");
hubProxy.On("hello", () => Console.WriteLine($"Hello {DateTime.Now.ToString()}"));
hubConnection.Reconnected += () =>
{
    Console.WriteLine("Reconnected");
};
hubConnection.Start().Wait();
Client Trace Logs
16:55:48.3999367 - null - ChangeState(Disconnected, Connecting)
16:55:48.8459354 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: GET http://gf-test-signalr.azurewebsites.net/signalr/connect?clientProtocol=1.4&transport=serverSentEvents&connectionData=[{"Name":"PingHub"}]&connectionToken=9Vs1ACQjDX%2BQmrcJ2XnoLCCJN%2FDtlJd%2BM0r5o8QvORX50ydXDkrAzeeVUgVIzNc3d7JcDvJ49KmxI3oVPQ%2Bt8IUMJe8HGFAJDasufD%2FFwxEr2l23l40q2dlKVADnFJA5
16:55:48.9604385 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: initialized)
16:55:48.9609355 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: {"C":"d-B53A1D13-E,0|F,0|G,1","S":1,"M":[]})
16:55:49.1059354 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - ChangeState(Connecting, Connected)
16:55:53.0300013 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: {})
16:56:03.0655798 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: {})
16:56:13.0791344 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: {})
16:56:23.0965041 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: OnMessage(Data: {})
16:56:26.7919383 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - ChangeState(Connected, Reconnecting)
16:56:26.7939373 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: GET http://gf-test-signalr.azurewebsites.net/signalr/reconnect?clientProtocol=1.4&transport=serverSentEvents&connectionData=[{"Name":"PingHub"}]&connectionToken=9Vs1ACQjDX%2BQmrcJ2XnoLCCJN%2FDtlJd%2BM0r5o8QvORX50ydXDkrAzeeVUgVIzNc3d7JcDvJ49KmxI3oVPQ%2Bt8IUMJe8HGFAJDasufD%2FFwxEr2l23l40q2dlKVADnFJA5&messageId=d-B53A1D13-E%2C0%7CF%2C0%7CG%2C1
16:56:26.8962939 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - OnError(Microsoft.AspNet.SignalR.Client.HttpClientException: StatusCode: 503, ReasonPhrase: 'Service Unavailable', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
Date: Tue, 15 Nov 2016 16:56:22 GMT
Set-Cookie: ARRAffinity=9fa33f4c59eaa0cb53ffc0472e2395fa67ff17a0f59613b57fb963b1519ab999;Path=/;Domain=gf-test-signalr.azurewebsites.net
Server: Microsoft-IIS/8.0
Content-Length: 326
Content-Type: text/html; charset=us-ascii
}
at Microsoft.AspNet.SignalR.Client.Http.DefaultHttpClient.<>c__DisplayClass5_0.<Get>b__1(HttpResponseMessage responseMessage)
at Microsoft.AspNet.SignalR.TaskAsyncHelper.<>c__DisplayClass31_0`2.<Then>b__0(Task`1 t)
at Microsoft.AspNet.SignalR.TaskAsyncHelper.TaskRunners`2.<>c__DisplayClass3_0.<RunTask>b__0(Task`1 t))
16:56:28.9148136 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: GET http://gf-test-signalr.azurewebsites.net/signalr/reconnect?clientProtocol=1.4&transport=serverSentEvents&connectionData=[{"Name":"PingHub"}]&connectionToken=9Vs1ACQjDX%2BQmrcJ2XnoLCCJN%2FDtlJd%2BM0r5o8QvORX50ydXDkrAzeeVUgVIzNc3d7JcDvJ49KmxI3oVPQ%2Bt8IUMJe8HGFAJDasufD%2FFwxEr2l23l40q2dlKVADnFJA5&messageId=d-B53A1D13-E%2C0%7CF%2C0%7CG%2C1
16:56:29.0051243 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - OnError(Microsoft.AspNet.SignalR.Client.HttpClientException: StatusCode: 503, ReasonPhrase: 'Service Unavailable', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
Date: Tue, 15 Nov 2016 16:56:24 GMT
Server: Microsoft-IIS/8.0
Content-Length: 326
Content-Type: text/html; charset=us-ascii
}
at Microsoft.AspNet.SignalR.Client.Http.DefaultHttpClient.<>c__DisplayClass5_0.<Get>b__1(HttpResponseMessage responseMessage)
at Microsoft.AspNet.SignalR.TaskAsyncHelper.<>c__DisplayClass31_0`2.<Then>b__0(Task`1 t)
at Microsoft.AspNet.SignalR.TaskAsyncHelper.TaskRunners`2.<>c__DisplayClass3_0.<RunTask>b__0(Task`1 t))
16:56:31.0165736 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - SSE: GET http://gf-test-signalr.azurewebsites.net/signalr/reconnect?clientProtocol=1.4&transport=serverSentEvents&connectionData=[{"Name":"PingHub"}]&connectionToken=9Vs1ACQjDX%2BQmrcJ2XnoLCCJN%2FDtlJd%2BM0r5o8QvORX50ydXDkrAzeeVUgVIzNc3d7JcDvJ49KmxI3oVPQ%2Bt8IUMJe8HGFAJDasufD%2FFwxEr2l23l40q2dlKVADnFJA5&messageId=d-B53A1D13-E%2C0%7CF%2C0%7CG%2C1
16:56:56.7950186 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - OnError(System.TimeoutException: Couldn't reconnect within the configured timeout of 00:00:30, disconnecting.)
16:56:56.7959897 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - Disconnected
16:56:56.8103502 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - Transport.Dispose(6171c2d4-a9dd-4fa4-b710-0910af48132b)
16:56:56.8108527 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - Closed
16:56:56.7950186 - 6171c2d4-a9dd-4fa4-b710-0910af48132b - OnError(System.TimeoutException: Couldn't reconnect within the configured timeout of 00:00:30, disconnecting.)
As far as I know, the default value of DisconnectTimeout is 30 seconds. According to the logs, the reconnect attempt is given up after about 30 seconds, so please check whether you set/changed the DisconnectTimeout setting in Application_Start:
GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
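As a rough sketch (assuming a classic Global.asax setup; the value shown is just the default), that setting would live in Application_Start, before any hub or connection is initialized:

protected void Application_Start()
{
    // DisconnectTimeout must be set before SignalR hubs/connections are initialized.
    GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);

    // ... remaining startup code (route registration, etc.)
}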
Besides, if you want to continuously reconnect to the hub after a connection has been lost, you can call the Start method from the disconnected event handler. For more detailed information, please refer to “How to continuously reconnect”.
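For the .NET client shown above, a minimal sketch of that pattern (using the client's Closed event and an arbitrary 5-second back-off, neither of which is prescribed by the linked article) could look like this:

// Sketch only: restart the same hubConnection after it drops to Disconnected.
hubConnection.Closed += async () =>
{
    await Task.Delay(TimeSpan.FromSeconds(5)); // arbitrary back-off before retrying
    try
    {
        await hubConnection.Start();
        Console.WriteLine("Connection restarted");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"Restart failed: {ex.Message}");
    }
};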
Related
Kafka connection timeout after 30000 ms; it is showing this error:
{ TimeoutError: Request timed out after 30000ms
at new TimeoutError (/app/node_modules/kafka-node/lib/errors/TimeoutError.js:6:9)
at Timeout.timeoutId._createTimeout [as _onTimeout] (/app/node_modules/kafka-node/lib/kafkaClient.js:1007:14)
at listOnTimeout (internal/timers.js:535:17)
at processTimers (internal/timers.js:479:7) message: 'Request timed out after 30000ms' }
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient broker is now ready
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient kafka-node-client updated internal metadata
Kafka Producer is connected and ready.
----->data PRODUCT_REF_TOKEN { hash:
'0x964f714829cece2c5f57d5c8d677c251eff82f7fba4b5ba27b4bd650da79a954',
success: 'true' }
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient compressing messages if needed
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient kafka-node-client createBroker 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient missing apiSupport waiting until broker is ready...
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient waitUntilReady [BrokerWrapper 127.0.0.1:9092 (connected: true) (ready: false) (idle: false) (needAuthentication: false) (authenticated: false)]
Tue, 22 Oct 2019 10:10:24 GMT kafka-node:KafkaClient kafka-node-client socket closed 127.0.0.1:9092 (hadError: true)
Tue, 22 Oct 2019 10:10:25 GMT kafka-node:KafkaClient kafka-node-client reconnecting to 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:25 GMT kafka-node:KafkaClient kafka-node-client createBroker 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:25 GMT kafka-node:KafkaClient kafka-node-client socket closed 127.0.0.1:9092 (hadError: true)
Tue, 22 Oct 2019 10:10:26 GMT kafka-node:KafkaClient kafka-node-client reconnecting to 127.0.0.1:9092
Tue, 22 Oct 2019 10:10:26 GMT kafka-node:KafkaClient kafka-node-client createBroker 127.0.0.1:9092
docker-compose.yml for kafka setup please let me know if any setup or properties need to be setup.
version: "3.5"
services:
api:
image: opschain-sapi
restart: always
command: ["yarn", "start"]
ports:
- ${API_PORT}:80
env_file:
- ./truffle/contracts.env
- ./.env
external_links:
- ganachecli-private
- ganachecli-public
networks:
- opschain_network
graphql-api:
build:
context: ./graphql-api
dockerfile: Dockerfile
command: npm run dev
ports:
- 9007:80
depends_on:
- mongodb
- graphql-api-watch
- api
volumes:
- ./graphql-api/dist:/app/dist:delegated
- ./graphql-api/src:/app/src:delegated
environment:
VIRTUAL_HOST: api.blockchain.docker
PORT: 80
OFFCHAIN_DB_URL: mongodb://root:password@mongodb:27017
OFFCHAIN_DB_NAME: opschain-wallet
OFFCHAIN_DB_USER_COLLECTION: user
JWT_PASSWORD: 'supersecret'
JWT_TOKEN_EXPIRE_TIME: 86400000
BLOCKCHAIN_API: api
networks:
- opschain_network
graphql-api-watch:
build:
context: ./graphql-api
dockerfile: Dockerfile
command: npm run watch
volumes:
- ./graphql-api/src:/app/src:delegated
- ./graphql-api/dist:/app/dist:delegated
networks:
- opschain_network
mongodb:
image: mongo:latest
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: password
MONGO_INITDB_DATABASE: opschain-wallet
logging:
options:
max-size: 100m
networks:
- opschain_network
ui:
build:
context: ./ui
dockerfile: Dockerfile
ports:
- 9000:3000
volumes:
- ./ui/public:/app/public:delegated
- ./ui/src:/app/src:delegated
depends_on:
- graphql-api
networks:
- opschain_network
environment:
VIRTUAL_HOST: tmna.csc.docker
REACT_APP_API_BASE_URL: http://localhost:8080
logging:
options:
max-size: 10m
test:
build: ./test
volumes:
- ./test/postman:/app/postman:delegated
networks:
- opschain_network
zoo1:
image: zookeeper:3.4.9
hostname: zoo1
ports:
- 2181:2181
environment:
ZOO_MY_ID: 1
ZOO_PORT: 2181
ZOO_SERVERS: server.1=zoo1:2888:3888
volumes:
- ./pub-sub/zk-single-kafka-single/zoo1/data:/data
- ./pub-sub/zk-single-kafka-single/zoo1/datalog:/datalog
networks:
- opschain_network
kafka1:
image: confluentinc/cp-kafka:5.3.1
hostname: kafka1
ports:
- 9092:9092
environment:
KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
KAFKA_BROKER_ID: 1
KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
# KAFKA_ADVERTISED_HOST_NAME: localhost
# KAFKA_ZOOKEEPER_CONNECT: zoo1:2181
KAFKA_CREATE_TOPICS: "cat:1:1"
volumes:
- ./pub-sub/zk-single-kafka-single/kafka1/data:/var/lib/kafka/data
depends_on:
- zoo1
- api
networks:
- opschain_network
networks:
opschain_network:
external: true
In the above compose file I have exposed port 9092 and the ZooKeeper port 2181. I am not sure exactly what the issue is.
const kafka = require('kafka-node');
const config = require('./configUtils');
function sendMessage({ topic, message }) {
  let Producer = kafka.Producer,
    client = new kafka.KafkaClient({ kafkaHost: config.kafka.host, autoConnect: true }),
    producer = new Producer(client);

  producer.on('ready', () => {
    console.log('Kafka Producer is connected and ready.');
    console.log('----->data', topic, message);
    producer.send(
      [
        {
          topic,
          messages: [JSON.stringify(message)],
        }
      ],
      function (_err, data) {
        console.log('--err', _err);
        console.log('------->message sent from kafka', data);
      }
    );
  });

  producer.on('error', error => {
    console.error(error);
  });
}

module.exports = sendMessage;
This is the producer file; it connects to the Kafka client and, on the ready event, produces the message.
I ran into a similar issue using the landoop/fast-data-dev image with docker-compose. I was able to solve it by making sure the ADV_HOST environment variable was configured to be the name of the Kafka service (e.g. kafka1), and then setting the kafkaHost option to the name of the service (e.g. kafka1:9092).
The equivalent environment variable for your Kafka image appears to be "KAFKA_ADVERTISED_HOST_NAME" (in your compose file it is commented out in favour of KAFKA_ADVERTISED_LISTENERS).
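For this compose file specifically, the internal listener is advertised as kafka1:19092, so a rough sketch (assuming the producer container is attached to opschain_network, and that config.kafka.host is what feeds kafkaHost) would be to point the client at that address instead of 127.0.0.1:9092:

// Sketch only: connect from another container over the Docker network.
const kafka = require('kafka-node');

const client = new kafka.KafkaClient({
  kafkaHost: 'kafka1:19092', // must match LISTENER_DOCKER_INTERNAL in KAFKA_ADVERTISED_LISTENERS
  autoConnect: true,
});
const producer = new kafka.Producer(client);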
I created a server with actix_web that connects through GET to another service using the actix client and should return the body on success or the error on failure. I have been able to return the body, but I have no clue how to return the error.
This is my handler:
fn echo_client(client: web::Data<Client>) -> impl Future<Item = HttpResponse, Error = Error> {
    client
        .get("127.0.0.1:9596/echo/javier") // <- Create request builder
        .header("User-Agent", "Actix-web")
        //.finish().unwrap()
        .send() // <- Send http request
        .map_err(|_| ())
        //.map_err(Error::from)
        .and_then(|response| {
            response
                .body()
                .and_then(|body| {
                    println!("{:?}", body);
                    Ok(HttpResponse::Ok().body(body))
                })
                .map_err(|error| Err(error.error_response()))
        })
}
There are three things that may fail:
1. Failing connection.
2. Non-200 status code.
3. Abrupt stop in the body stream.
To handle 1, do not map_err to ():
.map_err(|err| match err {
    SendRequestError::Connect(error) => {
        ErrorBadGateway(format!("Unable to connect to httpbin: {}", error))
    }
    error => ErrorInternalServerError(error),
})
SendRequestError lists the errors that can occur when doing client requests.
To handle 2, make sure you use the status code from the client response:
.and_then(|response| Ok(HttpResponse::build(response.status()).streaming(response)))
actix-web handles 3 I believe.
Complete example, handling headers too:
use actix_web::client::{Client, SendRequestError};
use actix_web::error::{ErrorBadGateway, ErrorInternalServerError};
use actix_web::{web, App, Error, HttpResponse, HttpServer};
use futures::future::Future;
fn main() {
    HttpServer::new(|| App::new().data(Client::new()).route("/", web::to(handler)))
        .bind("127.0.0.1:8000")
        .expect("Cannot bind to port 8000")
        .run()
        .expect("Unable to run server");
}

fn handler(client: web::Data<Client>) -> Box<Future<Item = HttpResponse, Error = Error>> {
    Box::new(
        client
            .get("https://httpbin.org/get")
            .no_decompress()
            .send()
            .map_err(|err| match err {
                SendRequestError::Connect(error) => {
                    ErrorBadGateway(format!("Unable to connect to httpbin: {}", error))
                }
                error => ErrorInternalServerError(error),
            })
            .and_then(|response| {
                let mut result = HttpResponse::build(response.status());
                let headers = response
                    .headers()
                    .iter()
                    .filter(|(h, _)| *h != "connection" && *h != "content-length");
                for (header_name, header_value) in headers {
                    result.header(header_name.clone(), header_value.clone());
                }
                Ok(result.streaming(response))
            }),
    )
}
Which actually failed:
$ curl -v localhost:8000
* Rebuilt URL to: localhost:8000/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 502 Bad Gateway
< content-length: 50
< content-type: text/plain
< date: Sun, 07 Jul 2019 21:01:39 GMT
<
* Connection #0 to host localhost left intact
Unable to connect to httpbin: SSL is not supported
Add ssl as a feature in Cargo.toml to fix the connection error:
actix-web = { version = "1.0", features=["ssl"] }
Then try the request again:
$ curl -v localhost:8000
* Rebuilt URL to: localhost:8000/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8000 (#0)
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< transfer-encoding: chunked
< x-frame-options: DENY
< date: Sun, 07 Jul 2019 21:07:18 GMT
< content-type: application/json
< access-control-allow-origin: *
< access-control-allow-credentials: true
< server: nginx
< x-content-type-options: nosniff
< x-xss-protection: 1; mode=block
< referrer-policy: no-referrer-when-downgrade
<
{
"args": {},
"headers": {
"Date": "Sun, 07 Jul 2019 21:07:18 GMT",
"Host": "httpbin.org"
},
"origin": "212.251.175.90, 212.251.175.90",
"url": "https://httpbin.org/get"
}
I want to use Varnish as a "smart" proxy, and it almost works. The idea is that some requests should be passed through Varnish, hit the backend, and return; all other requests should return a "synth" message saying that the specific response contains no result.
This works, apart from the fact that Varnish returns a 301 redirect to the backend instead of just the response from the actual backend.
Backend and Cache are not located on the same host (or not even on the same network in this case).
Backend is ALSO running a separate Varnish instance and this request is always passed through that.
// VCL.SHOW 0 1820 input
#
# This is an example VCL file for Varnish.
#
# It does not do anything by default, delegating control to the
# builtin VCL. The builtin VCL is called when there is no explicit
# return statement.
#
# See the VCL chapters in the Users Guide at https://www.varnish-cache.org/docs/
# and http://varnish-cache.org/trac/wiki/VCLExamples for more examples.
# Marker to tell the VCL compiler that this VCL has been adapted to the
# new 4.0 format.
vcl 4.0;
### Here starts my part of the VCL
# Default backend definition. Set this to point to your content server.
backend default {
    .host = "myhost.mydomain";
    .port = "80";
}

sub vcl_recv {
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you don't need,
    # rewriting the request, etc.
    if (req.url ~ "^\/cgi-bin\/wspd_cgi\.sh/apiFlightSearch.p\?from=ARN&to=AOK&date=2017-05-20&homedate=2017-05-27")
    {
        return (hash);
    }
    return (synth(750));
}

sub vcl_synth {
    if (resp.status == 750) {
        # Set a status the client will understand
        set resp.status = 200;
        # Create our synthetic response
        set resp.http.content-type = "text/xml";
        synthetic("<flights xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'><status><status>No flights</status></status></flights>");
        return (deliver);
    }
}

sub vcl_backend_response {
    # Happens after we have read the response headers from the backend.
    #
    # Here you clean the response headers, removing silly Set-Cookie headers
    # and other mistakes your backend does.
    set beresp.ttl = 10s;
}

sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
}
### Here ends my part of the VCL. The rest I guess is built in.
// VCL.SHOW 1 5479 Builtin
/*-
* Copyright (c) 2006 Verdens Gang AS
* Copyright (c) 2006-2014 Varnish Software AS
* All rights reserved.
*
* Author: Poul-Henning Kamp <phk@phk.freebsd.dk>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
*
* The built-in (previously called default) VCL code.
*
* NB! You do NOT need to copy & paste all of these functions into your
* own vcl code, if you do not provide a definition of one of these
* functions, the compiler will automatically fall back to the default
* code from this file.
*
* This code will be prefixed with a backend declaration built from the
* -b argument.
*/
vcl 4.0;
#######################################################################
# Client side
sub vcl_recv {
if (req.method == "PRI") {
/* We do not support SPDY or HTTP/2.0 */
return (synth(405));
}
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
if (req.method != "GET" && req.method != "HEAD") {
/* We only deal with GET and HEAD by default */
return (pass);
}
if (req.http.Authorization || req.http.Cookie) {
/* Not cacheable by default */
return (pass);
}
return (hash);
}
sub vcl_pipe {
# By default Connection: close is set on all piped requests, to stop
# connection reuse from sending future requests directly to the
# (potentially) wrong backend. If you do want this to happen, you can undo
# it here.
# unset bereq.http.connection;
return (pipe);
}
sub vcl_pass {
return (fetch);
}
sub vcl_hash {
hash_data(req.url);
if (req.http.host) {
hash_data(req.http.host);
} else {
hash_data(server.ip);
}
return (lookup);
}
sub vcl_purge {
return (synth(200, "Purged"));
}
sub vcl_hit {
if (obj.ttl >= 0s) {
// A pure unadultered hit, deliver it
return (deliver);
}
if (obj.ttl + obj.grace > 0s) {
// Object is in grace, deliver it
// Automatically triggers a background fetch
return (deliver);
}
// fetch & deliver once we get the result
return (fetch);
}
sub vcl_miss {
return (fetch);
}
sub vcl_deliver {
return (deliver);
}
/*
* We can come here "invisibly" with the following errors: 413, 417 & 503
*/
sub vcl_synth {
set resp.http.Content-Type = "text/html; charset=utf-8";
set resp.http.Retry-After = "5";
synthetic( {"<!DOCTYPE html>
<html>
<head>
<title>"} + resp.status + " " + resp.reason + {"</title>
</head>
<body>
<h1>Error "} + resp.status + " " + resp.reason + {"</h1>
<p>"} + resp.reason + {"</p>
<h3>Guru Meditation:</h3>
<p>XID: "} + req.xid + {"</p>
<hr>
<p>Varnish cache server</p>
</body>
</html>
"} );
return (deliver);
}
#######################################################################
# Backend Fetch
sub vcl_backend_fetch {
return (fetch);
}
sub vcl_backend_response {
if (beresp.ttl <= 0s ||
beresp.http.Set-Cookie ||
beresp.http.Surrogate-control ~ "no-store" ||
(!beresp.http.Surrogate-Control &&
beresp.http.Cache-Control ~ "no-cache|no-store|private") ||
beresp.http.Vary == "*") {
/*
* Mark as "Hit-For-Pass" for the next 2 minutes
*/
set beresp.ttl = 120s;
set beresp.uncacheable = true;
}
return (deliver);
}
sub vcl_backend_error {
set beresp.http.Content-Type = "text/html; charset=utf-8";
set beresp.http.Retry-After = "5";
synthetic( {"<!DOCTYPE html>
<html>
<head>
<title>"} + beresp.status + " " + beresp.reason + {"</title>
</head>
<body>
<h1>Error "} + beresp.status + " " + beresp.reason + {"</h1>
<p>"} + beresp.reason + {"</p>
<h3>Guru Meditation:</h3>
<p>XID: "} + bereq.xid + {"</p>
<hr>
<p>Varnish cache server</p>
</body>
</html>
"} );
return (deliver);
}
#######################################################################
# Housekeeping
sub vcl_init {
return (ok);
}
sub vcl_fini {
return (ok);
}
Output from curl:
$ curl "thisandthatip.compute.amazonaws.com/cgi-bin/wspd_cgi.sh/apiFlightSearch.p?from=ARN&to=AOK&date=2017-05-20&homedate=2017-05-27&adults=2&triptype=return&children=0&infants=0" -i
HTTP/1.1 301 Moved Permanently
Date: Wed, 15 Mar 2017 07:14:11 GMT
Server: Apache/2.2.15 (Red Hat)
Location: http://myserver.mydomain/cgi-bin/wspd_cgi.sh/apiFlightSearch.p?from=ARN&to=AOK&date=2017-05-20&homedate=2017-05-27&adults=2&triptype=return&children=0&infants=0
Content-Length: 514
Content-Type: text/html; charset=iso-8859-1
X-Varnish: 529144137
Via: 1.1 varnish-v4
X-Varnish: 98309 11
Age: 2
Via: 1.1 varnish-v4
Connection: keep-alive
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved here.</p>
<hr>
<address>Apache/2.2.15 (Red Hat) Server at thisandthatip.eu-central-1.compute.amazonaws.com Port 80</address>
</body></html>
Backend apache access log
127.0.0.1 - - [15/Mar/2017:08:09:49 +0100] "GET /cgi-bin/wspd_cgi.sh/apiFlightSearch.p?from=arn&to=aok&date=2017-05-20&homedate=2017-05-27&adults=2&triptype=return&children=0&infants=0 HTTP/1.1" 200 994 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
Sending the request from the AWS instance to the backend renders no 301 redirection:
$ curl "myserver.mydomain/cgi-bin/wspd_cgi.sh/apiFlightSearch.p?from=arn&to=aok&date=2017-05-20&homedate=2017-05-27&adults=2&triptype=return&children=0&infants=0" -i
HTTP/1.1 200 OK
Date: Wed, 15 Mar 2017 08:54:14 GMT
Server: Apache/2.2.15 (Red Hat)
Cache-Control: max-age=1
Expires: Wed, 15 Mar 2017 08:54:15 GMT
Content-Type: text/xml
X-Varnish: 527559784
Age: 0
Via: 1.1 varnish-v4
Transfer-Encoding: chunked
Connection: keep-alive
Accept-Ranges: bytes
... Response body here ...
Complete varnishlog output of a single request
* << BeReq >> 98314
- Begin bereq 98313 fetch
- Timestamp Start: 1489568144.701450 0.000000 0.000000
- BereqMethod GET
- BereqURL /cgi-bin/wspd_cgi.sh/apiFlightSearch.p?from=ARN&to=AOK&date=2017-05-20&homedate=2017-05-27&adults=2&triptype=return&children=0&infants=0
- BereqProtocol HTTP/1.1
- BereqHeader Host: thisandthatip.compute.amazonaws.com
- BereqHeader Upgrade-Insecure-Requests: 1
- BereqHeader User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
- BereqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
- BereqHeader Accept-Language: sv-SE,sv;q=0.8,en-US;q=0.6,en;q=0.4
- BereqHeader X-Forwarded-For: ip.ip.ip.ip
- BereqHeader Accept-Encoding: gzip
- BereqHeader X-Varnish: 98314
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendClose 17 default(ip.ip.ip.ip,,80) toolate
- BackendOpen 17 default(ip.ip.ip.ip,,80) 172.31.31.195 42868
- Backend 17 default default(ip.ip.ip.ip,,80)
- Timestamp Bereq: 1489568144.730329 0.028878 0.028878
- Timestamp Beresp: 1489568144.759773 0.058322 0.029444
- BerespProtocol HTTP/1.1
- BerespStatus 301
- BerespReason Moved Permanently
- BerespHeader Date: Wed, 15 Mar 2017 08:55:44 GMT
- BerespHeader Server: Apache/2.2.15 (Red Hat)
- BerespHeader Location: http://myserver.mydomain/cgi-bin/wspd_cgi.sh/apiFlightSearch.p?from=ARN&to=AOK&date=2017-05-20&homedate=2017-05-27&adults=2&triptype=return&children=0&infants=0
- BerespHeader Content-Length: 514
- BerespHeader Content-Type: text/html; charset=iso-8859-1
- BerespHeader X-Varnish: 526644873
- BerespHeader Age: 0
- BerespHeader Via: 1.1 varnish-v4
- BerespHeader Connection: keep-alive
- TTL RFC 120 -1 -1 1489568145 1489568145 1489568144 0 0
- VCL_call BACKEND_RESPONSE
- TTL VCL 10 10 0 1489568145
- VCL_return deliver
- Storage malloc s0
- ObjProtocol HTTP/1.1
- ObjStatus 301
- ObjReason Moved Permanently
- ObjHeader Date: Wed, 15 Mar 2017 08:55:44 GMT
- ObjHeader Server: Apache/2.2.15 (Red Hat)
- ObjHeader Location: http://myserver.mydomain/cgi-bin/wspd_cgi.sh/apiFlightSearch.p?from=ARN&to=AOK&date=2017-05-20&homedate=2017-05-27&adults=2&triptype=return&children=0&infants=0
- ObjHeader Content-Length: 514
- ObjHeader Content-Type: text/html; charset=iso-8859-1
- ObjHeader X-Varnish: 526644873
- ObjHeader Via: 1.1 varnish-v4
- Fetch_Body 3 length stream
- BackendReuse 17 default(ip.ip.ip.ip,,80)
- Timestamp BerespBody: 1489568144.759849 0.058398 0.000076
- Length 514
- BereqAcct 578 0 578 415 514 929
- End
* << Request >> 98313
- Begin req 98312 rxreq
- Timestamp Start: 1489568144.701372 0.000000 0.000000
- Timestamp Req: 1489568144.701372 0.000000 0.000000
- ReqStart ip.ip.ip.ip 63485
- ReqMethod GET
- ReqURL /cgi-bin/wspd_cgi.sh/apiFlightSearch.p?from=ARN&to=AOK&date=2017-05-20&homedate=2017-05-27&adults=2&triptype=return&children=0&infants=0
- ReqProtocol HTTP/1.1
- ReqHeader Host: thisandthatip.compute.amazonaws.com
- ReqHeader Connection: keep-alive
- ReqHeader Upgrade-Insecure-Requests: 1
- ReqHeader User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36
- ReqHeader Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
- ReqHeader Accept-Encoding: gzip, deflate, sdch
- ReqHeader Accept-Language: sv-SE,sv;q=0.8,en-US;q=0.6,en;q=0.4
- ReqHeader X-Forwarded-For: ip.ip.ip.ip
- VCL_call RECV
- VCL_return hash
- ReqUnset Accept-Encoding: gzip, deflate, sdch
- ReqHeader Accept-Encoding: gzip
- VCL_call HASH
- VCL_return lookup
- Debug "XXXX MISS"
- VCL_call MISS
- VCL_return fetch
- Link bereq 98314 fetch
- Timestamp Fetch: 1489568144.759883 0.058511 0.058511
- RespProtocol HTTP/1.1
- RespStatus 301
- RespReason Moved Permanently
- RespHeader Date: Wed, 15 Mar 2017 08:55:44 GMT
- RespHeader Server: Apache/2.2.15 (Red Hat)
- RespHeader Location: http://myserver.mydomain/cgi-bin/wspd_cgi.sh/apiFlightSearch.p?from=ARN&to=AOK&date=2017-05-20&homedate=2017-05-27&adults=2&triptype=return&children=0&infants=0
- RespHeader Content-Length: 514
- RespHeader Content-Type: text/html; charset=iso-8859-1
- RespHeader X-Varnish: 526644873
- RespHeader Via: 1.1 varnish-v4
- RespHeader X-Varnish: 98313
- RespHeader Age: 0
- RespHeader Via: 1.1 varnish-v4
- VCL_call DELIVER
- VCL_return deliver
- Timestamp Process: 1489568144.759907 0.058535 0.000024
- Debug "RES_MODE 2"
- RespHeader Connection: keep-alive
- Timestamp Resp: 1489568144.759933 0.058561 0.000026
- Debug "XXX REF 2"
- ReqAcct 566 0 566 454 514 968
- End
Varnish 4.0.4 running on AWS Amazon Linux.
The 301 redirect is not done by your Varnish. It is done by an Apache server, probably your backend. It can be seen from the Server header in your curl output.
What Varnish does is proxy the request and forward it to the backend you declared, myhost.mydomain. In fact, Varnish resolves the DNS at startup and forwards the request to the IP it got.
I see two things to check here:
Ban your request from your Varnish cache (the backend may have returned a 301 at some point during your tests, which could still be cached; that does not seem to be the case here, but it is better to start from a fresh cache). See the ban example after this list.
Make a curl request to your backend to see whether you get a 301 or a 200.
If that does not work, I would restart your Varnish service to refresh the DNS resolution.
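As a rough sketch of the first point (assuming varnishadm is reachable on the cache host; the expression is just an example matching the URL above):

$ varnishadm "ban req.url ~ ^/cgi-bin/wspd_cgi\.sh/apiFlightSearch\.p"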
The Host header sent to the backend matched that of the AWS instance. That triggered the redirect in the backend, not in the Varnish cache.
Overriding the req.http.host value in Varnish solved the problem:
sub vcl_recv {
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you don't need,
    # rewriting the request, etc.

    # Set req.http.host (Host header) to the backend's own hostname,
    # otherwise a redirect will be triggered
    set req.http.host = "myserver.mydomain";

    ... More settings go here
}
I'd like to ask you a favor. I have a problem confirming a previously read D2C message using IoT Hub. I am using the REST API to pick up the message like this (I have replaced the SIG):
Request:
GET https://iot-hub-pospa.azure-devices.net/devices/18596c88-01e6-3f16-427b-10028d7305c5/messages/devicebound?api-version=2015-08-15-preview HTTP/1.1
IoTHub-MessageLockTimeout: 3600
Accept: application/json
Authorization: SharedAccessSignature sr=iot-hub-pospa.azure-devices.net&sig={sig}&se=1485558838&skn=iothubowner
Host: iot-hub-pospa.azure-devices.net
If-None-Match: "1c5006a4-2288-4a2f-b7ea-dcdf9b5bbc99"
Connection: Close
X-P2P-PeerDist: Version=1.1
X-P2P-PeerDistEx: MinContentInformation=1.0, MaxContentInformation=2.0
Accept-Encoding: peerdist
Response:
HTTP/1.1 200 OK
Content-Length: 35
ETag: "dfc78580-d251-4156-a5f6-c2a30811a504"
Server: Microsoft-HTTPAPI/2.0
iothub-messageid: 02cdb012-9749-48a9-bfb3-5812a4740675
iothub-to: /devices/18596c88-01e6-3f16-427b-10028d7305c5/messages/deviceBound
iothub-expiry:
iothub-correlationid:
iothub-ack: full
iothub-sequencenumber: 56
iothub-enqueuedtime: 2/2/2016 9:57:34 AM
iothub-deliverycount: 0
Date: Tue, 02 Feb 2016 10:21:43 GMT
Connection: close
2/2/2016 10:57:34 AM - Test message
Then, when confirming, I am getting HTTP 412, like this:
Request:
DELETE https://iot-hub-pospa.azure-devices.net/devices/18596c88-01e6-3f16-427b-10028d7305c5/messages/devicebound/02cdb012-9749-48a9-bfb3-5812a4740675?api-version=2015-08-15-preview HTTP/1.1
Accept: application/json
If-Match: "02cdb012-9749-48a9-bfb3-5812a4740675"
Authorization: SharedAccessSignature sr=iot-hub-pospa.azure-devices.net&sig={sig}&se=1485558838&skn=iothubowner
Host: iot-hub-pospa.azure-devices.net
Content-Length: 0
Connection: Close
Response:
HTTP/1.1 412 Precondition Failed
Content-Length: 330
Content-Type: application/json; charset=utf-8
Server: Microsoft-HTTPAPI/2.0
iothub-errorcode: DeviceMessageLockLost
Date: Tue, 02 Feb 2016 10:21:49 GMT
Connection: close
{"Message":"ErrorCode:DeviceMessageLockLost;Message 02cdb012-9749-48a9-bfb3-5812a4740675 lock was lost for Device 18596c88-01e6-3f16-427b-10028d7305c5\r\nTracking Id:05994074a3664933a0910b5fc70e04e5-G:GatewayWorkerRole.6-B:1-P:cffe397b-f627-4435-bd54-48f5ba79c3ca-TimeStamp:02/02/2016 10:21:49\r\nErrorCode:DeviceMessageLockLost"}
Does anybody know what I should do to successfully confirm/delete the message from IoT Hub? Thanks.
static DeviceClient _deviceClient;

_deviceClient = DeviceClient.CreateFromConnectionString(<IoTHubURI>, TransportType.Http1);

public void SendMessage(IDictionary<string, object> dictionary)
{
    Microsoft.Azure.Devices.Client.Message message = new Microsoft.Azure.Devices.Client.Message();
    try
    {
        foreach (var r in dictionary)
        {
            message.Properties[r.Key] = r.Value.ToString();
        }
        _deviceClient.SendEventAsync(message);
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
}
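For the original problem of confirming the devicebound message, a minimal sketch using the same SDK (assuming the _deviceClient instance above; CompleteAsync is the SDK counterpart of the REST DELETE call) could be:

// Sketch only: receive a devicebound message and complete (confirm) it via the SDK.
public async Task ReceiveAndCompleteAsync()
{
    Microsoft.Azure.Devices.Client.Message received = await _deviceClient.ReceiveAsync();
    if (received != null)
    {
        Debug.WriteLine(Encoding.UTF8.GetString(received.GetBytes()));
        await _deviceClient.CompleteAsync(received); // removes the message from the device queue
    }
}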
I'm seeing the following 503 error in varnish from time to time in the logs:
* << BeReq >> 213585014
- Begin bereq 213585013 fetch
- Timestamp Start: 1452675822.032332 0.000000 0.000000
- BereqMethod GET
- BereqURL /client/hedge-funds-asset-managers/
- BereqProtocol HTTP/1.1
- BereqHeader X-Real-IP: 123.125.71.28
- BereqHeader Host: XXXXXXXXXXXXXXXXXXX
- BereqHeader X-Forwarded-Proto: http
- BereqHeader User-Agent: Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)
- BereqHeader Accept-Encoding: gzip
- BereqHeader Accept-Language: zh-cn,zh-tw
- BereqHeader Accept: */*
- BereqHeader X-Forwarded-For: 172.18.210.22
- BereqHeader X-Varnish: 213585014
- VCL_call BACKEND_FETCH
- VCL_return fetch
- BackendOpen 232 reload_2016-01-12T07:28:50.cp_12 162.251.80.23 80 172.18.210.71 40019
- Timestamp Bereq: 1452675822.047840 0.015508 0.015508
- FetchError http first read error: EOF
- BackendClose 232 reload_2016-01-12T07:28:50.cp_12
- Timestamp Beresp: 1452675876.038544 54.006212 53.990704
- Timestamp Error: 1452675876.038555 54.006223 0.000010
- BerespProtocol HTTP/1.1
- BerespStatus 503
- BerespReason Service Unavailable
- BerespReason Backend fetch failed
- BerespHeader Date: Wed, 13 Jan 2016 09:04:36 GMT
- BerespHeader Server: Varnish
- VCL_call BACKEND_ERROR
- BerespHeader Content-Type: text/html; charset=utf-8
- BerespHeader Retry-After: 5
- VCL_return deliver
- Storage malloc Transient
- ObjProtocol HTTP/1.1
- ObjStatus 503
- ObjReason Backend fetch failed
- ObjHeader Date: Wed, 13 Jan 2016 09:04:36 GMT
- ObjHeader Server: Varnish
- ObjHeader Content-Type: text/html; charset=utf-8
- ObjHeader Retry-After: 5
- Length 286
- BereqAcct 350 0 350 0 0 0
- End
The issue is not with the backend connection, because a curl to the same URL from the Varnish server works fine. The version of Varnish is 4.1.0. I'm not sure what "http first read error: EOF" means, and any light on this issue is appreciated. Due to the random nature of this issue, I do not have any way to reproduce it either.
A "first read error" happens in Varnish when you try to read headers from the backend before calling vcl_fetch, and Varnish failed to get a response. TL;DR: your backend is either closing the connection before delivering a response, or it is timing out delivering the response. You could use a tool like wireshark to determine which of the two is happening.
To understand what goes on, let's do some source diving:
static int __match_proto__(vdi_gethdrs_f)
vbe_dir_gethdrs(const struct director *d, struct worker *wrk,
    struct busyobj *bo)
{
    int i, extrachance = 1;
    struct backend *bp;
    struct vbc *vbc;
    ...
    do {
        vbc = vbe_dir_getfd(wrk, bp, bo);
Not getting too much into directors, vbe_dir_gethdrs is called after Varnish has either opened a new connection, or decided it is going to reuse a connection.
if (vbc->state != VBC_STATE_STOLEN)
    extrachance = 0;
If we reuse a connection, vbc->state is set to VBC_STATE_STOLEN (Varnish-Cache/bin/varnishd/cache/cache_backend_tcp.c line 364). When we've opened a new connection, this value is not set. So far, so good.
i = V1F_SendReq(wrk, bo, &bo->acct.bereq_hdrbytes, 0);
if (vbc->state != VBC_STATE_USED)
    VBT_Wait(wrk, vbc);
assert(vbc->state == VBC_STATE_USED);
if (i == 0)
    i = V1F_FetchRespHdr(bo);
What this does is sends the request to the backend. If everything there goes well, we then call V1F_FetchRespHdr, which waits for the origin to send its protocol response and headers. If we follow the code into V1F_FetchRespHdr:
VTCP_set_read_timeout(htc->fd, htc->first_byte_timeout);
...
do {
    ...
    i = read(htc->fd, htc->rxbuf_e, i);
    if (i <= 0) {
        bo->acct.beresp_hdrbytes +=
            htc->rxbuf_e - htc->rxbuf_b;
        WS_ReleaseP(htc->ws, htc->rxbuf_b);
        VSLb(bo->vsl, SLT_FetchError, "http %sread error: EOF",
            first ? "first " : "");
        htc->doclose = SC_RX_TIMEOUT;
        return (first ? 1 : -1);
    }
Here, we see that we're setting a timeout on the socket before we do the read syscall. If this read returns an error (the < 0 case), or EOF (the == 0 case), and this is the first time we have called read, we end up logging http first read error: EOF as you are seeing in your varnishlog output.
So, if you open a new connection to the backend, and the backend times out or closes the connection after the request was sent, you get this error.
Personally, I would find it suspect if your origin was closing connections; I think timeouts are usually more likely. But connections may be closed if your backend thinks it has too many open connections, or perhaps it has received too many requests over the connection, or something like this.
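If it turns out to be a timeout, the relevant knob is the backend's first_byte_timeout (the value fed into VTCP_set_read_timeout above). A hedged sketch with a placeholder host and arbitrary timeout values:

backend default {
    .host = "backend.example.com";   # placeholder
    .port = "80";
    .first_byte_timeout = 120s;      # how long to wait for the first byte of the response
    .between_bytes_timeout = 60s;    # how long to wait between bytes once the response has started
}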
Hope that helps!