I am new to IIS, so please forgive my ignorance. Here is my situation. I have a backend server at IP xx.xx.xx.175; all of my code and IIS are installed there. I created a website and can access it just fine using localhost:3000. When I go to the frontend server at xx.xx.xx.174, I cannot connect to the site using the URL.
I have updated the bindings, I have updated the firewall rules, and I have also used netsh http add iplisten, yet I still cannot connect to the site. I am not sure where else to go from here, as all of my Google searching led me to the same things.
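The commands I used were along these lines (the site name and firewall rule name below are placeholders, not my actual values):

netsh http add iplisten ipaddress=xx.xx.xx.175
netsh advfirewall firewall add rule name="IIS port 3000" dir=in action=allow protocol=TCP localport=3000
%windir%\system32\inetsrv\appcmd.exe set site /site.name:"MySite" /+bindings.[protocol='http',bindingInformation='*:3000:']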
Please make sure you have added the IP address in the hosts file:
Open your text editor in Administrator mode.
In the text editor, open C:\Windows\System32\drivers\etc\hosts.
Add the IP Address and hostname. Example: 192.10.10.5 testserver.com.
Save the changes.
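In your case, the entry on the frontend machine would look something like the line below (the hostname is a placeholder, since the question doesn't say which URL is used for the site):

xx.xx.xx.175 mybackendsite.example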
I have a handful of sites set up in the root directory of IIS. I also have them set up in the hosts file, to route to my local IP address. When I open a browser and type 127.0.0.1/example/index.html, the page opens in my browser. I need to be able to type in example.com and have it open that page, but when I type example.com in a browser, I get the following:
Unable to connect
An error occurred during a connection to example.com.
The site could be temporarily unavailable or too busy. Try again in a
few moments.
If you are unable to load any pages, check your computer’s network
connection.
If your computer or network is protected by a firewall or proxy, make
sure that Firefox is permitted to access the Web.
Also, the address in the browser's address bar changes to https://example.com. What could be preventing it from opening my local site?
Evidently it is an issue with the Firefox browser. I tried in both Edge (ew) and Chrome, and the local site came up as expected. I tried making a couple of network security changes in Firefox, but was not able to get it to work, so I'm just going to use Chrome.
I want to add a load balancer to an existing ASP.NET project using Application Request Routing. So I made myself familiar with the concepts and created a local test setup:
IIS locally running on Windows 10:
Installed Application Request Routing 3.0 with the Web Platform Installer
Created a server farm with the following servers:
<test-server-name>.de (Windows Server 2012 R2: contains the ASP.NET project)
www.google.com (just to see whether load balancing and URL rewriting work, because I don't have two test servers available)
URL-Rewriting rule:
After requesting localhost a few times in any browser, I can see that load balancing (weighted round robin) is working fine. It alternates between the first and second website.
The problem I'm facing is a 404 Error on both websites.
I already tried the following:
Installing and enabling Failed Request Tracing rules (on the local IIS): URL rewriting is working properly, I think.
Failed Request log for www.google.com: Google Drive link; unzip and open the XML in e.g. IE for a better view
Create Server Farm without automatic creation of URL Rewrite rules
(selecting No and create own URL Rewrite rule)
Change "Managed Pipeline Mode"-setting of Applcation Pool from Integrated to Classic
Healthcheck on other Websites I have absolutly no clue why it's working on Git-websites and why facebook is returning a 400 error code.
Enabling/disabling proxy (IIS-Manager -> Application Request Routing Cache -> Server Proxy Settings...)
I don't know what I could do next, so I appreciate any help. Thanks.
Answer can be found here: https://forums.iis.net/t/1238739.aspx?Why+some+sites+return+HTTP+404+some+don+t+
Some websites simply don't accept localhost as the hostname, which is why localhost can't be found (error 404) on e.g. google.com.
Detailed answer if link above is not working in future:
That is not an effective test.
What you are doing is sending the hostname of your request to the third party servers. Like Google.
So if your request is to, say, http://example.com, you are sending this to, say, www.google.com, and the Google servers will likely reject this, as you can see.
Web server admins generally don't let themselves receive traffic for domains they do not host.
If you sent a request to my server's IP with mysite.com, I too would likely reject it. (Things get complex if you have wildcard sites and you allow all traffic through.)
But simply getting that 404 page from Google means your request hit their server, so that implies ARR is working.
If you really wanted to test it this way, have a local hosts file with www.google.com resolving to your server's IP. Set up a site with www.google.com as the host header and then you should see the correct info hitting Google. But there is no accounting for what third-party admins do on their side.
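For reference, the global rewrite rule that ARR generates for a farm looks roughly like the sketch below (the farm name myFarm is a placeholder). Note that it only rewrites the URL, so the original Host header (here localhost) is forwarded to the farm servers unchanged, which is exactly why a third-party server like Google rejects the request:

<rewrite>
  <globalRules>
    <rule name="ARR_myFarm_loadbalance" patternSyntax="Wildcard" stopProcessing="true">
      <match url="*" />
      <action type="Rewrite" url="http://myFarm/{R:0}" />
    </rule>
  </globalRules>
</rewrite>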
I installed CentOS 7 on an Amazon EC2 instance, and I also installed the latest version of CWP (CentOS Web Panel). I created a new user 'myuser' and associated the domain 'myuser.com' with that user. I uploaded the website files into '/home/myuser/public_html/' and deleted the default HTTP test index.html in the same directory. Now I can access my website at http://IP_ADDRESS/~myuser
But when I point 'myuser.com' to my server IP, IP_ADDRESS, it shows the CWP HTTP test page. I even modified my hosts file to make 'myuser.com' point to IP_ADDRESS.
It just shows CWP HTTP test page.
Can someone please help me solve this issue?
Thanks in advance.
I found this just by googling for "default page cwp":
Take a look here -> http://forum.centos-webpanel.com/apache/default-page-displayed-for-all-domains/
For Google Compute Engine (maybe the same scenario applies in Amazon): in the CWP settings, don't use the same IP that is being used for the CWP admin panel. Use the default IP that is suggested just below the Shared IP field in the CWP settings.
In a Google Compute VM instance you'll find two IPs, internal and external.
For the domain's name server setting use the external IP, while in the CWP settings change the Shared IP to your internal IP.
After 10-15 minutes, use Kproxy to browse your site again; it should work then.
It's just a temporary issue. Clear your browsing data. This mainly happens in Google Chrome. Since you didn't open http://IP_ADDRESS/~myuser before creating the new user, you could see the uploaded files there; but you probably opened myuser.com before uploading the files and pointing the domain to the server IP, so you are still seeing the cached default CWP template.
We've got a webserver running IIS. We'd like to run maybe a shared blog or something to keep track of information. Because of security concerns, we'd like that part to be viewable only from localhost, so people have to remote in to use it.
So, to repeat my question, can part of a website be made viewable from localhost only?
For someone doing this in IIS 8 / Windows Server 2012:
1) In Server Manager, go to Manage, Add Roles and Features, Next, Next (get to Server Roles), scroll down to Web Server (IIS), expand that row, then expand Web Server, and finally expand Security. Make sure that IP and Domain Restrictions are installed.
2) In IIS Manager, drill down to the folder that you want to protect and left-click to select it. In the Features View of that folder select IP and Domain Restrictions. In Actions choose Edit Feature Settings. Change 'Access for unspecified clients:' to 'Deny', then OK.
3) Finally go to 'Add Allow Entry' in the Actions menu. Type in the specific IP address of your server.
Now only requests coming from your server will be allowed access, or from any machine that shares that IP address. So in a small network, an office could share one IP address between all of the PCs in that office, and all of those PCs could access that folder.
Last but not least, remember that if your network has a dynamic IP address, then if that IP changes, you will expose your blog admin folder to whoever is using that IP now. Also, everyone behind your new IP address will lose access to that folder...
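If you would rather script step 1, the IP and Domain Restrictions role service can also be installed from PowerShell on Server 2012; the feature name below should be the one listed by Get-WindowsFeature:

Install-WindowsFeature Web-IP-Security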
You can also use bindings instead of IP restrictions. If you edit the bindings for the website you want to restrict access to, you can select which IP address the site is available at. If you set the IP address to 127.0.0.1, then the site only responds on that IP address, and that IP address will of course only work locally on the machine.
I've tested this using IIS 8.5.
In IIS 6 you can bring up the properties for the website and click on the Directory Security tab. Click the button in the middle of the tab for editing the IP and domain restrictions. On this tab, set all computers as denied, then add an exception for the IPs you want to allow to access this site.
I am not sure how to configure this on IIS 7. I looked but couldn't find it; if I find it, I will edit this answer.
Edit: Configuring IIS7
Josh
Should anyone wish to do this on the command line, this appears to work on IIS 7+
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/security/ipSecurity /+"[ipAddress='0',allowed='False']" /commit:apphost
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/security/ipSecurity /+"[ipAddress='127.0.0.1',allowed='True']" /commit:apphost
Reference
I initially wanted to do this in web.config to ease distribution, and it looked like the following might work:
<security>
  <ipSecurity allowUnlisted="false">  <!-- this line blocks everybody, except those listed below -->
    <clear/>  <!-- removes all upstream restrictions -->
    <add ipAddress="127.0.0.1" allowed="true"/>  <!-- allow requests from the local machine -->
  </ipSecurity>
</security>
but as you need to unlock the section in the central IIS configuration anyway, there was no advantage over making the change directly with the commands above.
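For completeness, unlocking the ipSecurity section so it can be overridden from web.config is done with a command along these lines:

%windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/security/ipSecurity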
I agree with the recommendations to use IIS "Directory Security" to block all IP address except 127.0.0.1 (localhost).
That said, I'm wondering how this strategy of requiring users to remote in could possibly be more secure. Wouldn't it be more secure (as well as much simpler) to use standard IIS authentication mechanisms rather than have to manage Windows roles and permissions on the server machine?
As suggested in https://stackoverflow.com/a/39870955/2279059, it is possible to configure the site's bindings to listen only on the loopback interface. This makes the site inaccessible from the network without having to use IP address restrictions.
To support both IPv4 and IPv6, add two bindings, one for 127.0.0.1 and one for [::1], and set the hostname to *, so either IP address or localhost can be used to access it as shown in the screenshot:
To add a "local" site programmatically, you can use:
appcmd add site /name:MyLocalSite /bindings:http/127.0.0.1:7103:*,http/[::1]:7103:* /physicalPath:"C:\path\to\site"
It depends on exactly what you want to happen if an unauthorized user tries to visit it.
You could try to set up the specific section as a virtual directory and then deny access to anonymous users. However, they will be prompted to log in, and if they can log in then they can see it.
Judging from the options present in the IIS MMC, you can also have a virtual directory be accessible only from certain IP ranges. You could block everyone but 127.0.0.1. I have not tried this, however.
You can grant or deny access to a site or folder for certain IPs. In IIS, go into Properties for the site or folder in question.
(1) Click the "Directory Security" tab
(2) Click Edit under the "IP Address and Domain Name Restrictions" frame.
(3) Click "Denied Access" (this tells IIS to block every IP except those you list)
(4) Click "Add..."
(5) Click "Single Computer"
(6) Enter 127.0.0.1 (the IP of localhost)
Note that it is best to use an IP here (as I've described) rather than a domain name because domains can be easily forged using a hosts file.
You could simply add this .NET code to the top of the page.
// SERVER_NAME holds the host name the request was addressed to (e.g. "localhost" or "127.0.0.1")
string myWebServerName = HttpContext.Current.Request.ServerVariables["SERVER_NAME"];
if (myWebServerName == "127.0.0.1" || myWebServerName == "localhost")
{
    // the user is local
}
else
{
    // the user is NOT local
}
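A simpler alternative, if it fits your setup, is ASP.NET's built-in Request.IsLocal property, which checks the client address (REMOTE_ADDR) rather than the host name the request was sent to:

if (HttpContext.Current.Request.IsLocal)
{
    // the request came from 127.0.0.1/::1 or from the server's own address
}
else
{
    // the request came from a remote client
}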