Umbraco throwing 404 errors on resources - IIS Express [closed]

I can run the Umbraco site and get to the umbraco.aspx page, but all of the css, js, and image files that are loaded when you log in return 404 (Not Found) errors.
The site is running with its own AppPool with identity set to: ApplicationPoolIdentity
I have verified that the folders have the correct permissions. The Umbraco site offers a few scripts to set the permissions, and I have also set some of the resource folders/files manually as a test, without success.
I can do the following:
Open a file via its folder path in the browser, e.g. c:\DirectoryName\umbraco\css\filename.css
Open any of the aspx pages in the related directories.
I cannot:
Load the URL to the file, http://localhost/umbraco/css/somefilename.css
Example error for a CSS file (the same applies to js files and images):
Request URL:http://localhost/umbraco_client/panel/style.css?cdv=1
Request Method:GET
Status Code:404 Not Found
Request Headers
Accept:text/css,*/*;q=0.1
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Cookie:UMB_UCONTEXT=0db94de2-069c-40ce-b166-bab7b0adce34; ASP.NET_SessionId=paothhfjrxiozxkysmnrqke3; UMB_UPDCHK=1; .JbfAUTH=D9053593029C68111E4718F956C1D9B3685E8265AA95C27E72CB18897F802A3887A30CAFA0A0E50AD6C6935D65429E23BAE29BC12A6619A53658A88C4E3A34AD771C65C139322DC4B433AFF3EA0DBD49261220B00935FF413128EE9567B1B4E2C4F31AD1A770EAC96399A2862D60CB906A00B328567136C2124666DB6D4E8F2A6A13E3F446F999E824390E02CCCF20381021EB129AE9AA10D8D3662B6571FD08FCC99CBBEBBFC17DDFE7131A057D0B0EDC021875B74849F858B900606A1BE62AE7C11EEA0FCB4C577AF926C16E1A056B2ACEC975D707209BB848F2B7D43ABC29A74ADF425025F0C39FA01403A77D91AE60DED766
Host:localhost
Referer:http://localhost/umbraco/dashboard.aspx
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11
Query String Parameters
cdv:1
Response Headers
Content-Length:0
Date:Wed, 25 Jul 2012 17:09:47 GMT
Server:Microsoft-IIS/7.5
X-Powered-By:ASP.NET
UPDATE
I have verified that if I let the site load through the Visual Studio 2010 Development Server, it runs without issue.
I also created a new IIS site, dropped a couple of js files into it, and tried to hit them; I get the same 404 error when browsing to them. So this might be more of an IIS (IIS Express) issue on Windows 7.

Turned out to be an issue with IIS and required reinstallation.
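For anyone who hits the same symptom, two quick checks may be worth running before a reinstall (a sketch; appcmd ships with IIS 7+, the paths assume a default install, and both commands want an elevated prompt):

%windir%\system32\inetsrv\appcmd list config /section:handlers | findstr StaticFile
dism /online /get-featureinfo /featurename:IIS-StaticContent

If the first command prints no StaticFile handler mapping, or the second reports the Static Content feature as disabled, IIS has nothing to serve css/js/image files with and will 404 them exactly as described above.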

Related

GET request to IIS returns Microsoft-HttpApi/2.0

I've got 6 identical machines running IIS and Apache. Today one of them decided to just stop serving requests. I can access all of the webapps when I try from localhost/resource, but when I try from url/resource I get a 404. I did a GET request against the machine that isn't working and I get this back:
Server: Microsoft-HTTPAPI/2.0
Connection: close
Compared to a working server:
Server: Microsoft-IIS/8.5
X-Powered-By: ASP.NET
Content-Type: text/html
I've tried searching for this problem but came up with nothing. Anyone got any ideas?
Windows has an HTTP service that manages calls to IIS and other HTTP-enabled services on a Windows machine. Either you need to configure it to handle your calls, or, in the case of WAMP or similar non-IIS-web-server-on-Windows scenarios, you may just need to turn it off.
When you see "Microsoft-HttpApi/2.0" returning error, such as 400 "bad URL" or "bad header", etc. the problem is most likely because the HTTP.sys service is intercepting your http request and terminating it because it does not meet with the minimum validation rules that are configured.
This configuration is found in the registry at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters. In my case, it was choking because I had a RESTful call that had a 400 character segment in the url which was 160 characters more than the default value of 260, so I
added the registry parameter UrlSegmentMaxLength with a DWORD value of 512,
stopped the service using net stop http
started the service using net start http
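Put together, the whole fix from an elevated command prompt looks like this (a sketch of the steps above):

reg add HKLM\System\CurrentControlSet\Services\HTTP\Parameters /v UrlSegmentMaxLength /t REG_DWORD /d 512
net stop http
net start http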
I've run into these issues before; they are easy to troubleshoot, but there is very little on the web that addresses them.
Try these links
"the underlying problem is that the client has sent a request to IIS that breaks one or more rules that HTTP.sys is enforcing"
enabling logging on HTTP.sys is described here
a list of the HTTP.sys parameters that you can control in the registry is found here.
A bit late, so put here for posterity ;-)
After trying all sorts of solutions found on the web, I almost gave up, but found this little nugget.
If the response's Server header returns Microsoft-HttpApi/2.0, it means that HTTP.sys is answering the request, not IIS.
As a result, a lot of the workarounds will not work (URLScan, etc).
This worked however:
Open regedit
Navigate to HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\
If DisableServerHeader doesn't exist, create it (DWORD 32bit) and give it a value of 2. If it does exist, and the value isn't 2, set it to 2.
Finally, restart the service by calling net stop http then net start http
src: WS/WCF: Remove Server Header
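For reference, the same change as commands, with a quick check that the header is gone after the restart (a sketch; run from an elevated prompt):

reg add HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters /v DisableServerHeader /t REG_DWORD /d 2
net stop http
net start http
wget -S -O - http://localhost/ 2>&1 | findstr /i "Server:"

After the restart, the last command should print nothing, since the Server header is no longer sent.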
Set the registry flag below to 2:
HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\DisableServerHeader
Setting this to 2 will ensure that self-hosted WCF services no longer send the SERVER header, and thus ensure we are security compliant.
Please note that this disables ALL server headers.
The default value of 0 enables the header; a value of 1 disables the server header from the driver (http.sys), but the application can still add its own.
For me I had to restart the server for the changes to take effect.
Hope this helps someone
I was working on our web app at a client's site and ran into an issue where the site root pages loaded, but the reports folder always returned a 404 for files that existed in the folder. The 404 page showed .NET version 2 when the application was set to 4, and a test of a non-existent page in the root returned a 404 page showing .NET 4.
I tried just http://localhost/reports and got back a Microsoft Reporting Services page. Not part of my application.
Be sure to check the folder's default document when an unexpected 404 comes up and the file exists.
This question and series of replies helped me get to the bottom of the related issue I was having. My issue centered around using just a subdomain to go to our server (e.g. typing "www/somepath" into the browser while on our corporate network), which had worked in the past on an older server, but no longer worked when the system was upgraded to a new server. I saw the unexpected Microsoft-HttpApi/2.0 string in the header when using the Chrome Devtools to inspect the Network traffic.
My HTTP.sys process was already logging, so I could verify that my traffic was going to that service and returning 404 NotFound status codes.
My resolution was to add a binding to the IIS site for the subdomain, making IIS respond instead of the HTTP.sys process, as described in this server fault article - https://serverfault.com/questions/479274/why-is-microsoft-httpapi-returning-404-to-my-network-switch
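In case it helps anyone scripting this, the binding can also be added from the command line (a sketch; the site name and the bare host name are placeholders for your own values):

%windir%\system32\inetsrv\appcmd set site /site.name:"Default Web Site" /+bindings.[protocol='http',bindingInformation='*:80:www']

With a binding that matches the host header in place, IIS claims the request before HTTP.sys falls back to its own 404 handling.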
In my case, running Windows 10 Pro, it was the Windows MultiPoint Service.
By executing:
net stop wms
Port 80 was released.
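If it isn't obvious which service has claimed port 80, this is one way to find out (a sketch; the PID is a placeholder):

netstat -ano | findstr /c:":80 "
tasklist /fi "PID eq 1234"
netsh http show servicestate view=requestq

The first command shows the owning PID, the second resolves it to a process name, and the third lists everything registered with HTTP.sys.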

How do I setup CORS on Lotus Domino?

I'm attempting to communicate with Domino via REST via a cross-domain request, but I'm encountering an issue. I've set up an Internet Site document with the IP address, localhost, and a server name listed as the host names. The Internet Site document is working, as a redirect rule I've set up on it is being applied. I've also set up a Web Site Rule with the following:
Now when I attempt to hit the rest.xsp page via an HTML GET request, I get this error:
XMLHttpRequest cannot load
http://192.168.1.104/testing/restService.nsf/rest.xsp/testRest?reqType=UserCanAc…TOP&startId=BA4241EC74912860ED60FD1123473BF7&returnType=ARRAYOBJECTS.
No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin
'http://127.0.0.1:8020' is therefore not allowed access.
Here are the request headers:
Accept:application/json, text/javascript, */*; q=0.01
Cache-Control:max-age=0
Origin:http://127.0.0.1:8020
Referer:http://127.0.0.1:8020/Backbone%20Playground/index.html
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36
I can't for the life of me figure out what I've missed. Can someone point me in the right direction?
The CORS header is part of the response, so you need to check if you get a CORS response header with your page. In any case, for an XPage you can get direct access to the servlet response object and set the header in your XPage:
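// Grab the servlet response for the current request and add the CORS header to it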
var externalContext = facesContext.getExternalContext();
var response = externalContext.getResponse();
response.setHeader("Access-Control-Allow-Origin","*");
You will want to replace the * with a more restrictive setting. CORS doesn't work in all browsers, so you need to check that end too.
I think your configuration is fine, and you can test it using curl. You should be able to see the custom headers by checking any URL different from the one you're using.
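For example, something along these lines, using the URL from the question (a sketch; look for an Access-Control-Allow-Origin line in the output):

curl -i -H "Origin: http://127.0.0.1:8020" "http://192.168.1.104/testing/restService.nsf/rest.xsp/testRest"

The -i flag prints the response headers above the body, so a missing Access-Control-Allow-Origin header is immediately visible.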
The problem may be due to the XPages Extension Library REST Service control you're using. I think the "HTTP response headers" are not applied for this control; I've tested this on Domino 8.5.3.
I know this is kind of an old thread, but since it hasn't been answered and there is some news, I think it's worth throwing in my own findings.
Mark Leusink dug into this and discovered that you also need to accept return code 204 for GET operations and 201 for any write (PUT/POST) operations.
There is now a way to include a fourth response header in all web site rules by means of the notes.ini parameter "HTTPAdditionalRespHeader=", see this technote.
However, I'm currently also struggling to complete a CORS task, because Domino always responds with a 401 to the preflight request (which seems logical, as the preflight comes in unauthenticated, at least in Chrome).

Is this googlebot or someone trying to impersonate googlebot?

In my ELMAH exception logs I keep getting exceptions from what appears to be Googlebot, but what I imagine is someone impersonating it while trying to download what appears to be warez and other dodgy software from my server.
Here are just a few of the attempts and the software they are trying to get.
The controller for path '/download/msjavx86.exe' was not found
/downloads/IEZawGyiGtalkfont.EXE
/downloads/alphazawgyiremover.exe
/downloads/gtalkmyanmaraddinremover.exe
/cgi-bin/irbis32r/cgiirbis_32.exe
/ticker/MBISetup.exe
The user agent and remote host are always the same
REMOTE_HOST 66.249.65.163
HTTP_USER_AGENT Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
So my question is: is this Googlebot searching for malware, or someone having a go at my server?
I would guess yes. Google does scan websites for safe-search listing, and a malware scan based on your server software is part of it.
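One way to check is the reverse-then-forward DNS lookup Google recommends for verifying its crawlers (a sketch; the crawl-* host name below is illustrative of the pattern):

nslookup 66.249.65.163
nslookup crawl-66-249-65-163.googlebot.com

A genuine Googlebot IP reverse-resolves to a *.googlebot.com name, and the forward lookup of that name returns the original IP. If either step fails, it's an impersonator.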

IIS website is sending multiple content-type headers for zip files

We have a problem with an IIS5 server.
When certain users/browsers click to download .zip files, binary gibberish text sometimes renders in the browser window. The desired behavior is for the file to either download or open with the associated zip application.
Initially, we suspected that the wrong content-type header was set on the file. The IIS tech confirmed that .zip files were being served by IIS with the mime-type "application/x-zip-compressed".
However, an inspection of the HTTP packets using Wireshark reveals that requests for zip files return two Content-Type headers.
Content-Type: text/html; charset=UTF-8
Content-Type: application/x-zip-compressed
Any idea why IIS is sending two Content-Type headers? This doesn't happen for regular HTML or image files. It does happen with ZIP and PDF.
Is there a particular place we can ask the IIS tech to look? Or is there a configuration file we can examine?
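As an aside, the duplicate headers can be seen without Wireshark by dumping them at the command line (a sketch; the host and file names are placeholders):

wget -S -O NUL "http://yourserver/somefile.zip"

The -S flag prints the raw response headers, so both Content-Type lines should show up directly in the output.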
I believe (and I may be wrong) that HTTP 1.1 allows multiple header definitions, and the most specific one takes precedence.
So in your example it is sending two: text/html and then application/x-zip-compressed, so the second one would be the most specific; if that can't be handled on the client, then the more general one (the first one, in this case) is used.
I have read through http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html and it sort of points to what you are saying; not sure if this is what is actually happening, though.
Of course, I may be totally wrong here.
Make sure that you don't have any ISAPI filters or ASP.net HTTP modules set up to rewrite the headers. If they don't check to see if the header already exists, it will be appended rather than replaced. We had issues a while ago with an in-house authentication module not correctly updating the headers so we were getting two Authorization headers, one from IIS and one from our module.
What software has been installed on the server to work with .zip files?
It looks like IIS picks up MIME translations from the registry; perhaps the zip software you use has registered the MIME type. This doesn't explain why IIS would respond with two Content-Type headers, so any ISAPI filter or other MIME table is suspect.
This may be related to this knowledge base article. It suggests that IIS may be gzipping the already-zipped file, but some browsers just pass the buck straight to a secondary application, giving you bad data (as it has been zipped twice). If you change the MIME type of the zip extension to application/octet-stream, this may not happen.
It sounds like there may be an issue with your configuration of IIS; however, it is not possible to tell from your post whether this is the case.
You can have MIME types configured at several levels in IIS. My IIS 5 knowledge is a bit rusty, but as far as I can remember this behavior is the same in IIS 6. I tried to simulate this in an IIS 6 environment, but only ever received one MIME type, depending on the Accept header.
I have set the header for zip files on the site to application/x-zip-compressed, and for the file I have explicitly set it to
tinyget -srv:dev.24.com -uri:/helloworld.zip -tbLoadSecurity
WWWConnect::Connect("server.domain.com","80")
IP = "127.0.0.1:80"
source port: 1581
REQUEST: **************
GET /helloworld.zip HTTP/1.1
Host: server.domain.com
Accept: */*
RESPONSE: **************
HTTP/1.1 200 OK
Content-Length: 155
Content-Type: text/html
Last-Modified: Wed, 29 Apr 2009 08:43:10 GMT
Accept-Ranges: bytes
ETag: "747da786a6c8c91:0"
Server: Microsoft-IIS/6.0
Date: Wed, 29 Apr 2009 10:47:10 GMT
PK??
? ? ? helloworld.txthello worldPK??¶
? ? ? ? helloworld.txtPK?? ? ? < 7 ? hello world sample
WWWConnect::Close("server.domain.com","80")
closed source port: 1581
However, I don't feel this proves much. It does raise a few questions:
What are all the MIME maps that have been set up on the server? (Ask the server admin for the metabase.xml file, so you can make sure he has not missed a setting.)
Are those clients on a network that is under your control? Probably not; I wonder what proxy server might be sitting in between your server and the clients.
What do the IIS logs look like for that request? I am specifically interested in the Accept header.
I wonder what Fiddler would show?
I've encountered a similar problem. I was testing downloads on IIS 6 and couldn't figure out why a zipped file called test.zip was displaying as text in IE8 (it was fine in other browsers, where it would download).
Then I realised that for the test I'd compressed a very small text file. My guess is that IE sniffed the file, saw the text (which was pretty much uncompressed because of the small size) and decided it was plain text.
I tried again with a larger file and the download prompt appeared OK in IE8.
May not be relevant to your case, but thought I'd mention it.
Tim

ColdFusion RDS and NTLM Integrated Authentication Problem

I can't seem to get the magic combination of enabling NTLM authentication and still having RDS work. If I leave just anonymous authentication on, RDS works fine; as soon as I enable NTLM site-wide, RDS fails (which is to be expected). Here is what I have done:
This is Windows XP SP2 and ColdFusion 8, Eclipse + Adobe plugins
In IIS Manager, right-click on the default web site and choose Properties.
On the Directory Security tab, click the Edit button for anonymous access and authentication control.
In the Authentication Methods popup window, uncheck anonymous access and check Integrated Windows authentication (all other checkboxes blank as well).
Click OK, OK, and override the settings for all child sites as well, such that the entire site is "secured" using NTLM authentication.
Back in IIS Manager, right-click on the CFIDE virtual directory and choose Properties.
On the Directory Security tab, edit the authentication methods: uncheck Integrated Windows authentication and check anonymous access. Hit OK, OK, and test:
C:\>wget -S -O - http://localhost/CFIDE/administrator/
--2009-01-21 10:11:59-- http://localhost/CFIDE/administrator/
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Server: Microsoft-IIS/5.1
Date: Wed, 21 Jan 2009 17:12:00 GMT
X-Powered-By: ASP.NET
Set-Cookie: CFID=712;expires=Fri, 14-Jan-2039 17:12:00 GMT;path=/
Set-Cookie: CFTOKEN=17139032;expires=Fri, 14-Jan-2039 17:12:00 GMT;path=/
Set-Cookie: CFAUTHORIZATION_cfadmin=;expires=Mon, 21-Jan-2008 17:12:00 GMT;path=/
Cache-Control: no-cache
Content-Type: text/html; charset=UTF-8
Length: unspecified [text/html]
Saving to: `STDOUT'
... html output follows ...
And so far so good: the CFIDE directory and at least one child directory appear to be working without NTLM authentication. So I fire up Eclipse and try to establish an RDS connection. Unfortunately, I just get an Access Denied message. Investigating a bit further, it appears that Eclipse is trying to communicate with /CFIDE/main/ide.cfm - fair enough, pull out trusty wget once again to see what IIS is doing:
C:\>wget -S -O - http://localhost/CFIDE/main/ide.cfm
--2009-01-21 10:16:56-- http://localhost/CFIDE/main/ide.cfm
Resolving localhost... 127.0.0.1
Connecting to localhost|127.0.0.1|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 401 Access Denied
Server: Microsoft-IIS/5.1
Date: Wed, 21 Jan 2009 17:16:56 GMT
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
Content-Length: 4431
Content-Type: text/html
Authorization failed.
One potential hang-up that has been documented elsewhere is that the main directory and ide.cfm page don't actually exist on disk: IIS is configured to hand off all .cfm files to JRun, and JRun is configured to map ide.cfm to the RDS servlet. In an attempt to force IIS to be a bit more sensible, I dropped a main directory and an empty ide.cfm file on disk, hoping it would solve the authentication issue, but it didn't make any difference.
What I can do as a workaround is leave the entire site on anonymous access and then enable NTLM integrated authentication just for the specific application folders, but there are quite literally hundreds of possible web applications I would have to do that for. Yuck.
Please Help!!!
There is something strange about answering your own question, but I did finally get it resolved.
NTLM integrated authentication can be enabled for the entire web site
Anonymous access must be enabled for the CFIDE virtual directory
Anonymous access must be enabled for the JRunScripts virtual directory
Once both CFIDE and JRunScripts had anonymous access enabled, RDS and debugging through Eclipse worked like a charm.
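For reference, the same settings can be scripted against the IIS metabase with adsutil.vbs (a sketch; site ID 1 and the default AdminScripts location are assumptions):

cd /d %SystemDrive%\Inetpub\AdminScripts
cscript adsutil.vbs SET W3SVC/1/ROOT/AuthNTLM True
cscript adsutil.vbs SET W3SVC/1/ROOT/AuthAnonymous False
cscript adsutil.vbs SET W3SVC/1/ROOT/CFIDE/AuthAnonymous True
cscript adsutil.vbs SET W3SVC/1/ROOT/JRunScripts/AuthAnonymous True

The first two lines secure the site root with NTLM only; the last two re-enable anonymous access on the two virtual directories RDS needs.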
