I am writing inline C in my VCL file. More specifically, I am using MaxMind's GeoIP database to geolocate a visitor's IP. I have everything installed, I have followed all the wiki examples for the GeoIP database, and everything works swimmingly.
I am now trying to do some magic with GeoIP beyond the return-country examples. I want to return the visitor's city using GeoIP_record_by_addr(), which returns a pointer (a GeoIPRecord*).
Problem: I cannot seem to correctly cast a GeoIPRecord* to a char*. I have tried for hours. I can get Varnish to compile my VCL file without any errors or notices, but the Varnish server responds with a 403.
Question: Is there any way I can debug either the inline C or the 403 that Varnish is responding with?
Generally, Firebug and varnishlog will be your best friends.
If you want to debug pure VCL, the best way is to send data into HTTP headers ([req/bereq/beresp/resp].http.[header name]) and check their values in Firebug (or varnishlog if you only have a few requests).
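For example, a minimal sketch (the X-Cache header name is just an illustration):

    sub vcl_deliver {
        # Expose the cache result as a response header, visible in Firebug
        if (obj.hits > 0) {
            set resp.http.X-Cache = "HIT";
        } else {
            set resp.http.X-Cache = "MISS";
        }
    }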
If you want to debug inline C, you can also play with headers (VRT_SetHdr()), but if your C code makes Varnish crash, you'll see why in /var/log/messages.
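For your city lookup, here's a minimal sketch in the spirit of the wiki GeoIP examples (Varnish 2/3-era inline C API; the X-Geo-City header name and the database path are assumptions). The key point: don't cast the GeoIPRecord* itself; the city is its city member, which is already a char*:

    C{
        #include <GeoIP.h>
        #include <GeoIPCity.h>
    }C

    sub vcl_recv {
        C{
            /* Opening the database per request is slow; real setups open it once */
            GeoIP *gi = GeoIP_open("/usr/share/GeoIP/GeoIPCity.dat",
                                   GEOIP_MEMORY_CACHE);
            if (gi != NULL) {
                GeoIPRecord *rec =
                    GeoIP_record_by_addr(gi, VRT_IP_string(sp, VRT_r_client_ip(sp)));
                if (rec != NULL && rec->city != NULL) {
                    /* rec->city is already a char *, so no cast is needed;
                       "\013" is the length (11) of "X-Geo-City:" */
                    VRT_SetHdr(sp, HDR_REQ, "\013X-Geo-City:", rec->city,
                               vrt_magic_string_end);
                }
                if (rec != NULL)
                    GeoIPRecord_delete(rec);
                GeoIP_delete(gi);
            }
        }C
    }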
You can also check varnishlog to see if Varnish crashes... but when Varnish crashes, you get timeouts, not 403s...
I'd have to see your VCL to understand why you get a 403, but technically it's not an "error" so much as a "status", meaning that your request has been processed by Varnish (and, unfortunately, forbidden somewhere).
I don't think Varnish would return a 403 unless you ask it to. So there's a good chance the 403 status comes from your web server (backend).
In any case, your Varnish doesn't seem to be crashing; rather, it has a behavior issue.
There is an API I'm trying to download a file from, but I get a 404 status code. Why?
I don't know what the cause is, but I've noticed several patterns.
If I use a browser and go to the download path, it always works; the file downloads.
But if I use a module (I use superagent), I get a 404 status code.
OK. I looked at the request headers; I copied all the headers from Firefox and set them on my request, but I still get a 404.
With Google Chrome it's always OK too! It downloads the file as well.
With Sphere (an anonymizing browser): 404.
Then I moved to Linux and tried again.
There, all browsers return 404, and superagent does too.
As far as I can tell it runs on nginx (the message is "404 not found. nginx.").
I'm looking for any ideas as to why this happens.
I'm using the API from bazon.cc.
There are several reasons why this might be happening: cookies, User-Agent-based filtering, HTTP Referer header checking, and so on. You should use a packet capture tool to make sure the requests are really exactly the same.
This might seem like a strange question, but I want to be able to force 500 Internal Server Errors (not errors in my code, but IIS's own 500 errors).
Is there any surefire way to do this?
I need to be able to test how other connecting systems react when the server returns a 500 Internal Server Error.
I saw this question:
How to force IIS to throw an exception
The problem with this is that I can already do that, and doing it that way gives me more control than the IIS internal errors do. Or at least I think it gives me more control.
I don't want to mimic a 500 error. I need to be able to see all the aspects of the 500 error IIS returns (content, headers, etc.).
The purpose is that I need to be able to CHANGE those things if possible, and to know that I changed them.
One example of what I need to do is set a short cache time for these internal 500 errors (without affecting other responses).
This worked for me: I configured the IIS connection limit to zero connections. Any request then gives

    HTTP/1.1 503 Service Unavailable\r\n
Not exactly what you asked for, but perhaps close enough?
We use an Apigee proxy to invoke our API. All works well when we test it within the Apigee trace tool. It also works fine with curl. But in a browser, it gives a 503. This isn't consistent, though; sometimes the browser gets a 200 as well. We tried Chrome and Firefox: same behavior.
Our API still executes correctly, though. We don't return any response body; we merely set the status. Any ideas on what we could try in order to get a 200 in the browser?
A couple of things to check:
Check whether your browser is caching DNS entries. Services like ELB sometimes change the actual IPs, so cached DNS entries may result in a 503.
Another thing you may want to check is a difference in the HTTP verb used. Browsers send GET requests, but curl commands can use any verb. So if your service specifically doesn't serve GET calls, you may get server-side errors. Also, curl sends certain headers even if you don't set them explicitly, e.g. an Accept: */* header, a User-Agent header, etc. Check whether the server behaves differently based on those headers.
You should look into using Chrome or Firefox extensions for this. There are two in particular which support a wide range of additional features for API developers.
For Chrome, try Postman.
For Firefox, try RESTClient.
I've noticed an issue on one of my sites involving my content pages, which shouldn't set any cookies, should all return "Cache-Control: public" with a max-age set, and don't require authorization.
The issue is that somehow hit-for-pass (HitPass) objects are making it into my cache, disabling caching for those pages. I need to debug this, but I'm confused about exactly how best to do so, particularly as I'm unable to replicate the issue.
I notice that Varnish gives me an ID beside the HitPass entry in the Varnish log. I assume this is the Varnish ID of the request that generated the HitPass object, and that searching back through a Varnish log would tell me exactly what was wrong with the response?
Would it be better to just remove the Set-Cookie header from pages that I want to cache? The problem is that vcl_fetch is called even if a URL is passed... Is there any way to tell in vcl_fetch whether or not the current request was passed by vcl_recv?
Set-Cookie is indeed a reason why you get hit-for-pass objects in your cache. This is an important optimization for unprepared sites: a hit-for-pass object lets Varnish go straight to the backend for each of these requests instead of stalling them and waiting for the response to the previous one.
I'm not sure exactly what you want to debug. If it's the Set-Cookie header, you should probably either remove it on the backend or make your own rules about which responses to cache and which to ignore. If you still need the Set-Cookie header and it has unique values, hit-for-pass is the best way to handle it.
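As for telling in vcl_fetch whether the request was passed: VCL has no built-in flag for this, but a common workaround is to set a marker header in vcl_recv before returning pass and check it in vcl_fetch. A minimal sketch (the X-Pass header name, and Authorization as the pass condition, are just illustrative assumptions):

    sub vcl_recv {
        if (req.http.Authorization) {
            # Mark the request so vcl_fetch can tell it was passed
            set req.http.X-Pass = "1";
            return (pass);
        }
        return (lookup);
    }

    sub vcl_fetch {
        if (req.http.X-Pass) {
            # This fetch came from a pass in vcl_recv, not a cache miss,
            # so skip any cacheability cleanup here
            return (deliver);
        }
        # ... cleanup for cacheable requests goes here ...
    }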
I'm running Varnish on a dedicated server. When I load a page, it's delivered via Apache, and on the second and subsequent hits it's delivered via Varnish Cache (i.e. I can see two IDs in the X-Varnish header).
But when I open the same page from some other computer, it's again delivered from the backend (Apache) the first time, and on further reloads it comes from Varnish.
If a page is already in the Varnish cache, isn't it supposed to be delivered via Varnish even on a new computer's first visit? I've tried simple hello-world PHP files without any database calls, with the same effect. Might something be wrong with my VCL file, or does Varnish just work this way?
Check whether you are sending session data (cookies), which then makes requests look unique to Varnish. The docs show you how to strip cookies.
Jon is right. I had a similar problem. You also need to clear your cookies and cache before testing. Check whether the first visit's response headers try to set a cookie. If so, you can do "unset beresp.http.Set-Cookie" under vcl_fetch.
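For example, a minimal VCL sketch of both sides of the cookie stripping (the unconditional stripping here is just for illustration; on a real site you'd exempt the cookies and pages that genuinely need them, or sessions will break):

    sub vcl_recv {
        # Ignore client cookies so identical pages hash to the same object
        unset req.http.Cookie;
    }

    sub vcl_fetch {
        # Drop the backend's Set-Cookie so the response stays cacheable
        # instead of becoming a hit-for-pass object
        unset beresp.http.Set-Cookie;
    }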