In ASP.NET there is the Application_EndRequest event in global.asax. In classic ASP, however, there is no equivalent event in global.asa.
Is there any other built-in way of handling the end-request event, or any way of hooking into IIS to accomplish the same thing?
We use a particularly twisted technique to execute code after the request has completed. Consider the following snippet:
Class EndRequestHandler
    Sub Class_Terminate()
        ' Handler code goes here
    End Sub
End Class

Set EndRequestHandlerInstance = New EndRequestHandler
When the request ends, ASP tears down all of the global variables, including EndRequestHandlerInstance, which causes its Class_Terminate method to be called. If you place this in an include file that is used by every page on the site, it should serve as your global end-request handler.
On IIS 6 and older (or in an IIS 7 classic-pipeline application pool) you would really need the help of an ISAPI filter to achieve the same sort of thing as a global end-request operation.
In the IIS 7 integrated pipeline you can use .NET EndRequest code even if the page being executed is classic ASP.
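For example, a managed module can subscribe to EndRequest and will fire for classic ASP requests too. A minimal sketch (the module name is illustrative; you would register it under the modules section of web.config):

using System;
using System.Web;

// Sketch of an integrated-pipeline module whose EndRequest handler runs
// for every request the application pool serves, including classic ASP pages.
public class EndRequestModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Subscribe to the end of the request pipeline.
        context.EndRequest += OnEndRequest;
    }

    private void OnEndRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        // Handler code goes here; the response for app.Context.Request
        // has already been produced at this point.
    }

    public void Dispose() { }
}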
Not to be nosy, but what is it that you are trying to do? There might be different solutions for what you want (such as rendering debug goodies in the footer of each page), or no solution at all (such as my wish that I could get the contents of the Response buffer and mess with it before farming it out to the wire).
I am writing a managed Unity3d plugin which handles some web requests: registration and downloading content from the web. I am using the .NET WebClient class and the corresponding async calls for down-/uploading content on a separate thread, to avoid a laggy UI.
I am now trying to find an easy way to retrieve the callbacks from those async calls on the "Unity3d side". Something like the following line of code does not work, obviously:
MyManagedPlugin.RequestDataAsync(Callback);
I would also like to avoid any thread-specific code on the Unity3d side (meaning in the scripts), so that all the async calls are encapsulated in the managed plugin. I am not sure if that is possible at all, since there needs to be some kind of polling mechanism on the main thread (Unity3d) to check whether a thread is done, right?
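Update: the kind of polling I have in mind would be something like the following sketch (the names are illustrative, and it assumes System.Collections.Concurrent is available in the Unity runtime): the plugin enqueues completed callbacks from its worker threads, and a small MonoBehaviour drains them in Update() on the main thread.

using System;
using System.Collections.Concurrent;
using UnityEngine;

// Sketch: the plugin enqueues callbacks from its worker threads and a
// single component drains them on Unity's main thread each frame.
public class MainThreadDispatcher : MonoBehaviour
{
    private static readonly ConcurrentQueue<Action> callbacks =
        new ConcurrentQueue<Action>();

    // Called by the plugin from any thread when an async call completes.
    public static void Enqueue(Action callback)
    {
        callbacks.Enqueue(callback);
    }

    // Update runs on the main thread, so Unity objects are safe to touch here.
    private void Update()
    {
        Action callback;
        while (callbacks.TryDequeue(out callback))
        {
            callback();
        }
    }
}

(On an older Mono profile a lock-protected Queue<Action> would serve the same purpose.)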
My site is hosted on IIS. It has a feature that calls a WCF service and then returns the result. The issue is that while the site is processing a call to the WCF service, requests to another page freeze and do not return content quickly (even though that page is just static content). I set up two Chrome instances with different iMacros scripts: one calls the page that requests the WCF service, the other calls a page of purely static content. I can see that when the first page (the one requesting the WCF service) freezes, the other page freezes too, and when the first is released the second is as well.
Do I need to reconfigure something in my Web.config, or should I do something else, to make it possible to serve static content immediately?
I think that there are two separate problems here:
Why does the page that uses the WCF service freeze?
Why does the static content page freeze?
On the page that calls the WCF service, a common problem is that the WCF client is not closed. By default there are 10 WCF connections with a timeout of 1 min. The first 10 calls go fine (say they execute in 2 secs); then the 11th call comes, there are no free WCF connections, and it must therefore wait 58 secs for a connection to time out and become available.
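A sketch of the usual close-or-abort pattern (MyServiceClient and GetData stand in for your generated proxy and its operation):

// Sketch: return the WCF connection as soon as the call is done.
// MyServiceClient is a placeholder for the generated proxy class.
var client = new MyServiceClient();
try
{
    var result = client.GetData();
    client.Close();   // returns the connection for reuse
}
catch
{
    client.Abort();   // Close() itself can throw; Abort() tears the channel down
    throw;
}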
As for why your static page freezes: it could be that your client only allows one connection to the site, so the request for the static page is not sent until the request for the page with the WCF call is complete.
You should check the IIS logs to see how much time IIS reports the requests are taking.
I would say that this is a threading issue. This MSDN KB article has some suggestions on how to tune your ASP.NET threading behavior:
http://support.microsoft.com/kb/821268
From the article: "...you can tune the following parameters in your Machine.config file to best fit your situation:"
maxWorkerThreads
minWorkerThreads
maxIoThreads
minFreeThreads
minLocalRequestFreeThreads
maxconnection
executionTimeout
To successfully resolve these problems, do the following:
Limit the number of ASP.NET requests that can execute at the same time to approximately 12 per CPU.
Permit Web service callbacks to freely use threads in the ThreadPool.
Select an appropriate value for the maxconnection parameter. Base your selection on the number of IP addresses and AppDomains that are used.
etc...
Consider this scenario: when you make a request to IIS, your app changes, deletes, or creates some file outside of the App_Data folder. This often tends to be a log file that was mistakenly put in the bin folder of the app. The file-system change leads to the AppDomain being reloaded, as ASP.NET thinks the app itself was changed, hence the delay you experience. This may or may not apply to your issue, but it is a common mistake in ASP.NET apps.
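If a log file is the culprit, writing it under App_Data (which does not trigger a restart) instead of bin is the usual fix. A minimal sketch, with an illustrative file name:

// Sketch: build the log path under App_Data rather than bin, so writes
// are not mistaken for a code change that recycles the AppDomain.
string logPath = System.IO.Path.Combine(
    System.Web.HttpRuntime.AppDomainAppPath, "App_Data", "app.log");
System.IO.File.AppendAllText(logPath,
    System.DateTime.Now + " request handled" + System.Environment.NewLine);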
Well, maybe there is no problem...
It may be just the browser's same domain simultaneous requests limit.
Until the browser has finished the request to the first page (the WCF page), it won't send the request to the second page (the static one).
Try this:
Use different browsers for each page (for example chrome/firefox).
Or open the second page in chrome in incognito window (Ctrl + Shift + N).
Or try to access each page from different computer.
You could try using AppFabric to see what is wrong with your WCF services: http://msdn.microsoft.com/en-us/windowsserver/ee695849
I have created a .NET web service, and when I try to call a method that saves the data in the database, the request is fired twice. I used a network profiler to check whether two requests are made to the server, but only one request is made. I fail to understand why the data is being entered twice in the database. I am using JSONP to call the cross-domain site.
I just found something interesting. I have two servers. When I host the web service on one and call it cross-domain, the data is entered once, whereas on the other the data is entered twice. Do we need to take care of some IIS settings too?
So, if there aren't two requests being made, then there are almost certainly two calls to the Save() method (or whatever it is called) being fired from the web service end point; see the logging sketch after the checklist below. But there may be duplicate data somewhere too.
Here are a couple of things to check:
What data is actually being transferred? Have you checked this using a tool like Charles?
Is the data being passed to your Save() method the same as the data being passed to the web service?
How is the data being written to the database? Is there duplicate SQL somewhere?
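One quick way to settle the first point is a crude trace at the top of the save method, so you can count invocations per request (a sketch; the log path is illustrative):

// Sketch: log every entry into the save method so calls can be counted.
System.IO.File.AppendAllText(@"C:\temp\save-trace.log",
    System.DateTime.UtcNow.ToString("o") + " Save() called" +
    System.Environment.NewLine);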
Hey all, I just converted my project to Visual Studio 2010 and then installed it on the server. Everything is running perfectly now. Thanks.
I am using IIS to develop some web applications. I used to believe that every application should have an entry point, but it seems a web application doesn't have one.
I have read many books and articles addressing how to build an ASP.NET application under IIS, but they are just not addressing the most obvious and basic thing that I want to know.
So could anyone tell me how a web application is started? What's the difference between a traditional desktop application and a web application in terms of their working paradigm, such as their starting and terminating logic?
Many thanks.
Update - 1 - 23:14 2011/1/4
My current understanding is:
When a request arrives, the URL contained in the request is extracted by IIS. I guess IIS must maintain some kind of internal table which maps a URL to the corresponding physical directory on disk. Let's take the following URL as an example:
http://myhost/webapp/page1.aspx
With the help of the aforementioned internal table, IIS will locate the page1.aspx file on disk. Then this file is inspected and the code-behind file is located. A proper page class instance is then constructed, and the methods defined in the code-behind file are invoked in a pre-defined order. The output of this series of method invocations is the response sent to the client.
Update - 2 - 23:32 2011/1/4
The URL is nothing but an identifier that serves as an index into the aforementioned internal table. With this index, IIS (or any other web server technology) can find the physical location of the resource. Then, with some hint (such as a file extension like .aspx), the web server knows which handler (such as the ASP.NET ISAPI handler) should be used to process that resource. That chosen handler knows how to parse and execute the resource file.
So this also explains why a web server should be extensible.
It depends what language and framework you are using, but broadly there are a number of entry points that will be bound to HTTP requests (e.g. by URL). When the server receives a request that matches one of these bindings, the bound code is executed.
There may also be various filter chains and interceptors that are executed based on other conditions of the request. There will probably also be some set-up code that the server executes when it starts up. Ultimately, there is still a single entry-point - the main() function of the server - but from the web application's perspective it is the request bindings that matter.
Edit in response to question edits
I have never used IIS, but I would assume there is no "lookup table", but instead some lookup rules. I shall talk you through the invocation of a .jsp page on an Apache server, which should be basically the same process.
The webapp is written and placed in the file system - e.g. C:/www/mywebapp
The web server is given a configuration rule telling it that the URL path /webapp/ should be mapped to C:/www/mywebapp
The web server is also configured to recognise .jsp files as being JSP servlets
The web server receives a request for /webapp/page1.jsp, this is dispatched to a worker thread
The web server uses its mapping rules to locate C:/www/mywebapp/page1.jsp
The web server wraps the code in the JSP file in a class with the method serveRequest(request, response) and compiles it (if it has not already done so)
The web server calls the serveRequest function, which is now the entry point of the user code
When the user code is finished, the web server sends the response to the client, and the worker thread terminates
This is the most basic system - resource-based servlets (i.e. .jsp or .aspx files). The binding rules become much more complicated when using technologies like MVC frameworks, but the essential concepts are the same.
Similar to what OrangeDog mentioned in his answer, there is plenty that goes on when serving these pages before you even get to your code.
Not only in asp.net mvc, but in asp.net in general there are various pieces that come into play when you're executing a request.
There is code such as modules and handlers that, again, does processing before the request gets to the code of the page. Additionally, you can map the same page to process different URLs.
The concept of a handler in ASP.NET is important, as there are various handlers responsible for processing requests that match extensions and/or HTTP verbs (GET, HEAD, POST). If you take a look at %systemroot%\Microsoft.NET\Framework64\v4.0.30319\Config\web.config, you can see the handlers section. You can also see the handlers in IIS (these can be changed per site).
For example, the HttpForbiddenHandler is one that simply rejects the request. It is configured to be called for special files, such as source files ("*.cs").
You can define your own handler, which is nothing more than a class that implements the IHttpHandler interface, so it has two methods: ProcessRequest and IsReusable. This is more similar to a CGI program, as the implementation is essentially a method that produces HTML or any other type of output based on the information in the request.
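A minimal handler could look like this (a sketch; the class name and output are illustrative):

using System.Web;

// Sketch: IHttpHandler is just these two members.
public class HelloHandler : IHttpHandler
{
    // Returning true lets ASP.NET reuse one instance across requests,
    // which requires the handler to be stateless.
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Produce output based on the request, much as a CGI program would.
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello from " + context.Request.Path);
    }
}

It would then be mapped to an extension and/or verb in the handlers section mentioned above.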
ASP.NET pages build on top of that and have plenty of extra features meant to make it easier for you to develop pages. You implement a class that inherits from Page, and there are two files associated with it (.aspx and .cs). The same can be said for ASP.NET MVC, but it is structured differently. There is much more to it; if you want to take advantage of it, you'll need to learn about it.
The downside of those abstractions is that they make some developers lose track of the context they're in and of the underlying platform. The context is still the same: you're producing an application that takes a request and produces output. The difference is that there is plenty more code in place intended to make that easier.
For a more detailed look at IIS's ASP.NET request lifecycle, there are quite a few stages in the HttpApplication pipeline. Fairly recently there was a good blog post that I thought summarized them very concisely and well: "HTTP Request Lifecycle Events in IIS Pipeline that every ASP.NET Developer Should Know" by Suprotim Agarwal.
For a more detailed explanation you should check out the MSDN article on the subject. This will also go into information on what happens before that pipeline.
I am currently working on setting up a system for my company, using a classic ASP on IIS setup, that relies on data from a 3rd-party API. The problem is that the API responds extremely slowly, and all it sends is a large XML file of all the data. A simple enough fix is to request it once a day, set up a database to store this info, and have the app use the db as its source instead of the API.
I wrote the database-filler code in an ASP file and just need to make sure it gets run every day. What is the best way to do this? I am not really familiar with best practices here: should I just have a scheduled task open Internet Explorer pointing at the URL that runs the scraper, or is there some way to do it via the command line? Mostly I am worried that I am translating how I would do this with PHP on LAMP to ASP/WISA instead of looking at this as its own issue.
I would recommend against using a scripting call to accomplish this. This would be best done with a .NET console application called from the Task Scheduler.
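Such a console application can be very small; a sketch (the default URL is illustrative):

using System;
using System.Net;

// Sketch: a tiny console app for the Task Scheduler to run once a day.
// It simply requests the ASP page that refreshes the database.
class RefreshJob
{
    static int Main(string[] args)
    {
        string url = args.Length > 0
            ? args[0]
            : "http://localhost/MyApp/loadData.asp";

        using (var client = new WebClient())
        {
            try
            {
                client.DownloadString(url);   // fires the ASP page
                return 0;
            }
            catch (WebException ex)
            {
                Console.Error.WriteLine("Refresh failed: " + ex.Message);
                return 1;   // non-zero exit code marks the task as failed
            }
        }
    }
}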
The equivalent of cron on a windows computer is "Scheduled Tasks". The how-to for Windows XP is here: http://support.microsoft.com/kb/308569. Similar steps can be followed on most versions of windows.
If you put the data-loading code in an ASP file, you can make an HTTP request to that file using the following VBScript:
Const WinHttpVersion = "5.1"

Dim objWinHttp, strURL

' Take the request URL from the 1st command-line argument. This is
' a nice option so you can use the same file to schedule any number
' of different scripts.
strURL = WScript.Arguments(0)

' Let errors flow into the Err object instead of killing the script,
' so the check at the bottom can react to them.
On Error Resume Next

' "WinHttp.WinHttpRequest.5.1" is the current ProgID; check the registry
' if you are unsure which version of the WinHttpRequest object is installed.
Set objWinHttp = CreateObject("WinHttp.WinHttpRequest." & WinHttpVersion)

If IsObject(objWinHttp) Then
    objWinHttp.Open "GET", strURL
    objWinHttp.Send

    If objWinHttp.Status <> 200 Then
        Err.Raise 1, "HttpRequester", objWinHttp.ResponseText
    End If

    Set objWinHttp = Nothing
End If

If Err.Number <> 0 Then
    ' Something has gone wrong... do something about it (log it, alert, etc.)
End If
The call to this would look something like:
HttpRequester.vbs http://localhost/MyApp/loadData.asp
You probably want to create a Windows service that'll run in the background; it's hard to do this with just an ASP page.
It is trivial to call the ASP page at scheduled intervals. Just use a Windows port of wget or curl (you can use IE if you have to) and set up a scheduled task to load the page at the interval you desire.
No scripting is required for this: just put the executable's path as the process you want to run and the URL in the parameters field (plus some other switches, depending on which program you end up going with).
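For example, with a wget port the scheduled task's command might look like this (path and URL are illustrative):

wget -q -O NUL "http://localhost/MyApp/loadData.asp"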