I have been trying to generate an SSL certificate for one of our projects, which runs on an Azure VM with no IP restrictions. However, the challenge file that is generated returns a 404 error and is not accessible over the web.
I have tried the following:
Moving the static file handler above the extensionless URL handler in IIS
Adding a MIME type for text/json and text/html
Neither of the above works, which is making it really hard for me to generate an SSL certificate using this service. Any idea how I can make the challenge file accessible? I have given full access to that specific App Pool identity, so permissions don't seem to be an issue in this case; it's just the way extensionless files are being handled in IIS.
Any help is appreciated.
Thanks,
Vishal
You just need to add a new MIME type in IIS for the extensionless challenge files. Then try the URL in your browser; you will see that the challenge file is served instead of a 404.
Now you can pass the Let's Encrypt authentication :)
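For example, dropping a web.config file like this into the .well-known/acme-challenge folder (a minimal sketch; text/plain is a common choice for the MIME type, and the folder path may differ in your setup) tells IIS to serve the extensionless challenge files as static content:

<configuration>
  <system.webServer>
    <staticContent>
      <!-- Map extensionless files (extension ".") to plain text so IIS will serve them -->
      <mimeMap fileExtension="." mimeType="text/plain" />
    </staticContent>
  </system.webServer>
</configuration>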
Also, if you're using a system with lots of custom routing or a framework that interferes with how URLs are handled (e.g. a CMS), ensure that you've told it to ignore /.well-known.
We often use Umbraco for public-facing sites and I keep forgetting that I need to add ~/.well-known to the umbracoReservedUrls app setting in the web.config. Hopefully next time I'm stuck, I'll come across this answer...
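In case it helps anyone else, the entry looks something like this (a sketch; umbracoReservedUrls is a comma-separated list, so append ~/.well-known to whatever values are already there rather than replacing them):

<appSettings>
  <!-- Append ~/.well-known to the existing comma-separated list of reserved URLs -->
  <add key="umbracoReservedUrls" value="~/.well-known" />
</appSettings>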
Taking inspiration from the accepted answer, I did the following:
I was using Plesk for Windows on GoDaddy.
Go to Web server settings.
Under MIME types, add the following entry and click OK:
text/plain .
Note the dot at the end of the above setting.
I have configured a web server on localhost with HTTPS using Microsoft IIS Administration. I am able to browse the directory and its files with browsers and Visual Studio using localhost prefixed with https, such as https://localhost/trial etc.
I wish to upload a file to the said directory, i.e. trial, using libcurl to test some features. Unfortunately I'm unable to do so.
Using the same libcurl example as given at Libcurl File Upload, modified for HTTPS, the console window shows the following upon running the code:
IIS 10.0 Detailed Error - 405.0 - Method Not Allowed
HTTP Error 405.0 - Method Not Allowed The page you are
looking for cannot be displayed because an invalid method (HTTP verb)
is being used.
I checked IIS Administration and saw that all authorizations are allowed. I suppose the fact that it is flagging an HTTP verb issue, rather than anything related to the HTTPS I enabled and used in the URL in the code, isn't a big deal?
Libcurl uses PUT for uploading files, so it should be an allowed verb.
I am quite new to this, so I'm not certain I did something incorrect with the setting up of the webserver, or whether there are security issues or permission issues which are causing a problem here.
As far as I know, it is impossible to use HTTP PUT or POST to upload a file to an IIS web application's folder without writing server-side code. Alternatively, configure an FTP site on your IIS installation; then you could use the ftp command to upload the file.
If you really need to use HTTP PUT or POST to upload the file, you could consider using WebDAV.
For more details about what WebDAV is and how to use it, you could refer to the articles below.
https://learn.microsoft.com/en-us/iis/install/installing-publishing-technologies/installing-and-configuring-webdav-on-iis
https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-7/what39s-new-for-webdav-and-iis-7
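For illustration, the site-level WebDAV configuration described in those articles ends up looking roughly like this (a sketch only, assuming the WebDAV role service is installed; check the linked documentation for the exact elements and where they may be set):

<system.webServer>
  <webdav>
    <!-- Enable WebDAV authoring for the site -->
    <authoring enabled="true" />
    <authoringRules>
      <!-- Grant read/write access; "administrator" is just a placeholder account -->
      <add users="administrator" path="*" access="Read, Write, Source" />
    </authoringRules>
  </webdav>
</system.webServer>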
Try hostname instead of localhost
Add a trailing slash (/) for the directory.
I want to use a reverse proxy to point one of my endpoints to a resource that's hosted elsewhere. My primary server (where everything else is hosted) is in an Azure Web App and is otherwise working perfectly.
I've been using this seemingly failproof article along with the other links mentioned at the bottom of it: https://blogs.msdn.microsoft.com/zhiliang_xus_blog/2016/01/19/build-a-google-reverse-proxy-site-on-azure-web-app-in-less-than-3-minutes/
As a baseline, I used a Web App with no additional code and confirmed that the reverse proxy works. This was done by manually creating/editing the web.config file and applicationHost.xdt file then restarting the server.
I've tried 3 separate approaches (all on clean, new web apps) all of which are failing for me:
Push my code, confirm it works, then follow the reverse proxy steps manually
Follow the steps manually, confirm reverse proxy works, then push my code
Put the reverse proxy files into my codebase and push everything at the same time
None of these 3 approaches are working. Is this a bug in Azure? How can I try to figure this out?
After the XML transformation (XDT), have you restarted the site?
I would suggest you take a look at this blog post from Ruslan:
http://ruslany.net/2014/05/using-azure-web-site-as-a-reverse-proxy/
It talks about using a site extension, which implements the reverse proxy and does the XDT transformation for you.
If the above is set up correctly and it still fails, then there is something wrong with the URL Rewrite rules. I would recommend enabling Failed Request Tracing to debug this further.
The approach you posted uses URL Rewrite to implement a reverse proxy. I tested it and it worked fine with an empty web application. After publishing a web application to the Azure Web App (for example, an ASP.NET MVC web application), URL Rewrite stopped working. The reason is that all requests to your web application are routed by the ASP.NET routing module.
To enable URL Rewrite for some URLs, we need to tell ASP.NET routing to ignore those URLs. For example, if you want to rewrite all requests of the form "product/xxx" to another site, you could add the following code to the RouteConfig.cs file (the matching rewrite rule is sketched after it).
routes.Ignore("product/{action}");
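With that route ignored, a URL Rewrite rule along these lines in web.config can forward those requests (a rough sketch; the rule name and the target host https://backend.example.com are placeholders, and rewriting to an external host also requires the ARR proxy enabled via the applicationHost.xdt step from the article):

<system.webServer>
  <rewrite>
    <rules>
      <!-- Rewrite /product/xxx to the external site -->
      <rule name="ProductReverseProxy" stopProcessing="true">
        <match url="^product/(.*)" />
        <action type="Rewrite" url="https://backend.example.com/product/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>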
The problem in this specific case was the location of my web.config file.
It needs to be in the root directory of the application which, in my case, was not site\wwwroot. My code was being generated and copied into site\wwwroot\dist. Putting the config file in that directory fixed the problem.
Additionally, there are logs that can be enabled to get some insight as to what's going on: https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-enable-diagnostic-log
We're looking to do some scraping on a specific URL that uses Cloudflare. Has anyone experienced issues using Zombie.js/user-agents while trying to crawl Cloudflare-hosted sites?
Would love some help!
I am trying to interface with an API on a client's site and I am indeed getting a 403 error. The request doesn't even reach my server.
Turning security to "essentially off" did not help. The final solution was to white-list the developer machine's IP.
The error is triggered on a single URL (a JSON-serving API) by a Java client using standards-compliant libraries.
Solution:
1. Try to set a rule to allow direct access for that URL.
2. Try setting security to weaker and weaker levels ("essentially off").
3. If both fail: try whitelisting.
4. Set up an alternate non-Cloudflare URL (direct.domain.com).
These will of course only work if you can negotiate with the site owners.
Backup solution: use an embedded browser that you can "frame" and "remote control", or a testing framework that does the same through a plugin, and extract the content from there (if you can).
Hope this helps.
You're probably triggering one of our security features by trying to scrape a site on us. The only option, really, would be to ask the site owner to whitelist your IP(s) to override the behavior.
I would appreciate any hints regarding the following issue:
The problem summary:
While using Negotiate:Kerberos in IIS 7.5, authorization works correctly right up until we set up URL rewriting (using the MS module "URL Rewrite 2.0"): any rewritten URL then returns "401.1 Unauthorized" (requests not matching any rewrite rule keep working, though).
The setup:
Windows Server 2008 R2 x64
IIS 7.5
URL Rewrite 2.0
Server is in a domain
SPN exists for HOST/hostname and HOST/hostname.domain (created by default)
Pool is using default ApplicationPoolIdentity (no custom account, not network service)
Kernel mode set to OFF
Authentication providers set to "Negotiate:Kerberos" only (no NTLM or anonymous)
URL Rewrite rule is "^(.*)/$" => "index?x={R:1}" (sketched in web.config form below)
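For reference, that rule expressed in web.config form looks roughly like this (a reconstruction from the pattern above; the rule name is made up):

<system.webServer>
  <rewrite>
    <rules>
      <!-- Rewrites e.g. /foo/ to /index?x=foo -->
      <rule name="FriendlyUrls" stopProcessing="true">
        <match url="^(.*)/$" />
        <action type="Rewrite" url="index?x={R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>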
The result:
1) When accessing any URL not matching any URL rewrite pattern, Kerberos is working correctly, i.e. Kerberos ticket is issued (verified using klist), sent (verified using netmon and HTTP headers) and accepted (verified by URL being accessible and appropriate AUTH_USER property set to my domain account name) => no problem here.
2) When accessing any URL matching URL rewrite pattern, e.g. "hostname/foo" the result is:
HTTP Error 401.1 - Unauthorized
You do not have permission to view this directory or page using the credentials that you supplied.
Module WindowsAuthenticationModule
Notification AuthenticateRequest
Error Code 0x80070055
Requested URL http://hostname/index?x=foo
Physical Path D:\wwwroot\
Logon Method Not yet determined
Logon User Not yet determined
(if we try to access the rewritten URL directly, e.g. hostname/index?x=foo, Kerberos works again normally)
The attempts to solve it so far:
After googling, we have tried several options:
turning kernel mode ON: Kerberos stopped working completely, using either the default pool identity or network service (I suppose we would need to set up an additional HTTP SPN and/or use a custom domain account with an additional SPN for that account explicitly)
turning "useAppPoolCredentials" ON: no difference
enabling "Failing Request Tracing": surprisingly these failing 401.1 requests ARE NOT generating any output into the fail logs no matter what rule we try to setup (e.g. 400-999) - the folder is just empty (while other errors, like 404 or even handshake 401.x when accessing not-rewritten URLs are generating logs - very strange)
The conclusion:
So far we have reached a dead end. It may be some weird kind of "double hop" issue requiring a custom domain account rather than the default app pool identity, but as we're in fact accessing the same resources, it seems more like a URL Rewrite issue.
Any tips, hints, pointers? Anything would be highly appreciated.
Best regards,
Marek
We face the same issue as you do. By enabling extended error logging, we were able to put our finger on the actual problem, which seems to be a bug in the rewrite module (or at least in some part of IIS related to the module):
When the URL gets rewritten, access to the new rewritten URL is checked (seemingly hardcoded) using Basic authentication and NTLM, neither of which has been configured on the website at hand. The only configured authentication provider is Kerberos. Since the client sends neither NTLM nor Basic credentials, there is no way this can work.
We (another person on the current project) are reporting the issue to Microsoft. I will let you know when I get any results.
It seems as though you have multiple issues here.
Failed-Request Tracing Logs
To fix your missing logs issue, you must make sure that the user running your site's Application Pool has read/modify rights to the folder where those logs are generated, otherwise you won't see anything. See the section labeled "Enable Failed-Request Tracing" on this page: Troubleshoot Failed Requests Using Tracing in IIS 7
What that page doesn't make clear is that the site's Application Pool Identity (found in Advanced Settings for the Application Pool) is the account that needs read/modify rights to that folder.
Once that is fixed you can load the XML logs in IE and see a much clearer picture of what is going on.
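For completeness, a tracing rule for these failing requests can be defined in web.config along these lines (a rough sketch; the areas listed are illustrative, and the Rewrite area is only available once URL Rewrite is installed):

<system.webServer>
  <tracing>
    <traceFailedRequests>
      <add path="*">
        <traceAreas>
          <!-- Capture authentication, security and rewrite activity in detail -->
          <add provider="WWW Server" areas="Authentication,Security,Module,Rewrite" verbosity="Verbose" />
        </traceAreas>
        <!-- Match the broad status range mentioned in the question -->
        <failureDefinitions statusCodes="400-999" />
      </add>
    </traceFailedRequests>
  </tracing>
</system.webServer>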
401.1 - Unauthorized Issue
A possible fix to your 401 error is to make sure unlisted file name extensions are allowed in Request Filtering. Go to IIS --> Sites --> [your site] --> Request Filtering
You have two options here:
Allow File Name Extension... and add the value "." (minus the quotes), see this answer.
Edit Feature Settings... and enable the option "Allow unlisted file name extensions"
The first option should work well; the second obviously opens up a gaping hole, but since it allows everything you should be able to get it working.
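For reference, both options end up as Request Filtering settings in web.config, roughly like this (a minimal sketch; use whichever line matches the option you picked):

<system.webServer>
  <security>
    <requestFiltering>
      <!-- allowUnlisted="true" corresponds to "Allow unlisted file name extensions" -->
      <fileExtensions allowUnlisted="true">
        <!-- Explicitly allowing the "." extension covers extensionless URLs -->
        <add fileExtension="." allowed="true" />
      </fileExtensions>
    </requestFiltering>
  </security>
</system.webServer>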
I hope that helps.
I am not asking how. I am asking if. Is it possible to bypass a 403 error on the web?
Let me explain in a bit of detail. On a web server, IIS has been set up with a directory for a project we are working on, such that it is not accessible to the outside. So if you type the path to that directory in a web browser, the web browser will say that it is not accessible and it will throw a 403 error.
Now, here is the problem. Some files with secure information are placed there. A programmer on our team has made a big deal about this and the fact that the files are placed on a server that is accessible to the outside world. On the other hand, I think this is not such a big deal, since if a user on the outside tried to go to that directory, his web browser would throw the 403 error. But other people on the team say that a hacker can still somehow access it.
So that leads me here and to my question. Is it possible to bypass a 403 error on the web? I say no. Some network guys at work say maybe. I am not asking how to do it. I am only asking if it is really possible.
I gather from your information that there is a web server with a directory set up on the web like so:
http://www.example.com/directory
Now, if you navigate to this URL you get a 403 Forbidden error? However, if you know the name of a file you can go to http://www.example.com/directory/MyImportantDocument.docx and it is possible to view the document at this location?
Unless there is a runnable script on your server that does this, it is not possible to view the directory contents via the web. However, URLs are not considered secure as they are logged in browser history, proxy and server logs and can also be leaked by browsers' referer header. I assume the files are stored here so they can be accessed by a remote application?
File names can be easily brute forced by an attacker. Tools such as dirbuster and dirb do this automatically. Therefore, if the files do not need to be readable remotely, they should be moved to an internal server, not accessible from the internet or DMZ.
If access is needed you should implement some sort of authentication. At the very least activate basic auth on IIS. This will prompt a web browser user for a username and password in order to view files, or the files can be accessed programmatically by setting the appropriate Authorization header, which is an encoded username and password.
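As an example, enabling Basic authentication for the site looks roughly like this in web.config (a sketch; the Basic Authentication feature must be installed, these sections are often locked at the server level, and this should only ever be used over HTTPS):

<system.webServer>
  <security>
    <authentication>
      <!-- Turn off anonymous access and require Basic credentials -->
      <anonymousAuthentication enabled="false" />
      <basicAuthentication enabled="true" />
    </authentication>
  </security>
</system.webServer>

A client would then send a header such as Authorization: Basic dXNlcjpwYXNzd29yZA== (the Base64 encoding of user:password).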
Better would be something with comprehensive session management, like an application pre-built for this purpose. E.g. a CMS which is kept up-to-date and securely configured.
Also you should make sure that the IIS website is only configured to be accessed via HTTPS which will protect against traffic snooping of the credentials, URL path, headers and file contents.
In some cases (e.g. back-end or web server misconfiguration) it is possible to bypass a 403. To understand those methods, have a look at this script:
https://github.com/lobuhi/byp4xx
This script contains well-known methods collected from various bug bounty communities.
So if your back-end server is not vulnerable to the methods in this script, it is probably safe.
So basically it is NOT possible if the server software itself doesn't have any bugs. But if other parts of your website are public and use a dynamic scripting language, that may raise your risk if someone is able to find a hole that allows something like reading arbitrary files from the filesystem.
In general, I would recommend NOT storing any security-relevant files on a public server if they don't need to be there!
If you can avoid it, that is always the better way.
There is a simple exploit to bypass .htaccess restrictions... Try to Google "bypass error 403" and you will find the method. As an auditor, I can confirm that it is not good practice (and if I see it I will always raise it as an issue) to store credentials (or any other sensitive information) in plain text on a web server.