Issue with malicious files showing up in IIS site folders

If this is covered in another thread I'm happy to read it, but I've been keeping Google hot without a result.
I have an IIS site that is getting hammered with file creations from some search engine exploit, basically trying to direct traffic to videos on YouTube by getting them indexed. I've disabled all FTP accounts to start, then went through confirming everything is updated, including things like PHP. I'm having a bear of a time figuring out where these are coming from. I have started resetting the passwords on the anonymous user credential for each website, but I don't think this is a password exploit.
The info I have:
They all seem to be created by NETWORK SERVICE, but I can't tell which site they are coming in through.
I'm using up-to-date and active AV with active malware scanning.
All Windows updates are in place.
I have attempted to use Process Monitor, and the only thing I can see is that the file creation comes from NETWORK SERVICE.
The only thing I've thought of at this point is an exploit somewhere, but without any viruses, missing updates, or malware I'm at a loss as to where to go with this.
I have tried to put auditing in place, but this server has a pretty immense load overall, so there are a number of completely safe temp files created and deleted constantly. This is compounded by the files being created at random times; I may keep a clean deck for a day or two and then get hit for a few hours. I did try auditing anyway and stopped at 15 GB of security events without a hit.
I am considering containerizing the sites into credentialed app pools, but that will be a pretty significant time and resource overhead for what is essentially a guess.
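For what it's worth, that kind of isolation can be scripted rather than set up pool by pool. Below is a minimal sketch using Microsoft.Web.Administration (it assumes the assembly is referenced and the code runs elevated on the web server); with per-site ApplicationPoolIdentity accounts, the owner of each newly created file points at the site that wrote it.

    // Sketch: give every site its own app pool running as ApplicationPoolIdentity,
    // so file ownership (IIS AppPool\<PoolName>) identifies which site wrote a file.
    // Assumes a reference to Microsoft.Web.Administration.dll and admin rights.
    using System;
    using Microsoft.Web.Administration;

    class IsolateAppPools
    {
        static void Main()
        {
            using (var manager = new ServerManager())
            {
                foreach (var site in manager.Sites)
                {
                    string poolName = site.Name + "_pool";

                    // Create a dedicated pool for the site if it doesn't exist yet.
                    var pool = manager.ApplicationPools[poolName]
                               ?? manager.ApplicationPools.Add(poolName);

                    // ApplicationPoolIdentity gives each pool its own virtual account.
                    pool.ProcessModel.IdentityType = ProcessModelIdentityType.ApplicationPoolIdentity;

                    // Point the site's root application at its dedicated pool.
                    site.Applications["/"].ApplicationPoolName = poolName;
                }
                manager.CommitChanges();
            }
        }
    }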
Any ideas or suggestions? I'm nearing the point of visiting a fortune teller down the road to see if they have any idea. :D

Related

Intermittent Microsoft Azure Web Site access failure

I have a number of small MVC apps deployed as Microsoft Windows Azure websites. This has been working for several months.
Yesterday I rolled out a new one, and the deployment was unremarkable; everything worked fine. But a couple of hours later, access to the site became unavailable. The symptom was that when the browser tried to navigate to the site's URL, it would try to load for several minutes and then just give up with a completely blank page.
I attempted to stop and restart the site, and it worked once, but the symptoms came back several minutes later. Then I tried to stop and restart, and it didn't work.
I deployed the identical app to three additional URLs. Again, they all work fine immediately after deployment; however, they fail at some point in the future. They don't all fail at once, and sometimes restarting the site fixes the problem, sometimes not.
IMPORTANT: If I wait for some period of time, the site may start to work again on its own.
However, deploying four versions of the app so that our users can go to a backup one if the primary one is not working is not optimal.
Any words of wisdom as to how I might go about debugging this?
ADDITIONAL INFO NOV 25, 2013:
When the sites are failing, the IIS logs show either 500 (Internal Server Error) or 502 (Bad Gateway) responses. Our own MVC code is never hit, not even Application_Start.
You can start by checking the logs and remote debugging
http://www.drdobbs.com/windows/azure-sdk-22-supports-visual-studio-2013/240163499
Are the apps working locally?
Might not be the same problem, but from time to time our Azure instances will get the blue question mark of death as a status.
The reason, we found out, is that Microsoft does maintenance and upgrades on instances from time to time. If you have just one instance in a cloud service/role, then during that maintenance window it will be dead.
I have confirmed this with their support.
The only way to get around this that I know of is to create two instances. Then Microsoft guarantees ~99% availability.
Of course I also confirmed with them that this means twice the cost. =/
If that's not the issue I would enable RDP and get onto the machine to see what the problem is. Microsoft has these tools to help debug problems: http://blogs.msdn.com/b/kwill/archive/2013/08/26/azuretools-the-diagnostic-utility-used-by-the-windows-azure-developer-support-team.aspx
First, you should always run multiple instances of your web role with more than 1 upgrade domain. This is configurable in the service definition (CSDEF). Without this, you don't get an SLA from Microsoft, so you can't really complain that the VMs go down.
Second, to figure out what might be going on with these boxes, you should have both logs (my preference is to roll my own with page blobs or table storage), AND you should always have RDP access to a pre-production environment (production as well if you're not too fussed about security). Once on the box, look through the event viewer for errors.
Third, when an outage occurs check out the azure service dashboard (http://www.windowsazure.com/en-us/support/service-dashboard/) for outages.
Lastly, contact Microsoft support. It may take a few hours, but they are pretty good.
Given that it is happening repeatedly and for extended periods of time (more than 5 minutes), I would bet there's something wrong with your hosted service. Again, RDP in and poke around. Good luck.
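As a rough illustration of the roll-your-own logging mentioned above, here is a minimal sketch that writes entries to Azure Table Storage with the classic Microsoft.WindowsAzure.Storage SDK (the connection string and table name are placeholders):

    // Sketch: write simple diagnostic entries to Azure Table Storage so they
    // survive instance restarts. Assumes the Microsoft.WindowsAzure.Storage package.
    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    public class LogEntry : TableEntity
    {
        public LogEntry() { }
        public LogEntry(string source, string message)
        {
            PartitionKey = source;
            RowKey = DateTime.UtcNow.Ticks.ToString("d19"); // sortable per source
            Message = message;
        }
        public string Message { get; set; }
    }

    public static class TableLogger
    {
        static readonly CloudTable Table;

        static TableLogger()
        {
            // Placeholder connection string; use your storage account's real one.
            var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
            Table = account.CreateCloudTableClient().GetTableReference("SiteLogs");
            Table.CreateIfNotExists();
        }

        public static void Write(string source, string message)
        {
            Table.Execute(TableOperation.Insert(new LogEntry(source, message)));
        }
    }

    // Usage, e.g. in Application_Start or a controller action:
    // TableLogger.Write("WebRole1", "Application started");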
To debug your sites, try enabling diagnostic logs:
http://www.windowsazure.com/en-us/develop/net/common-tasks/diagnostics-logging-and-instrumentation/
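Once application logging is switched on there, anything written through System.Diagnostics.Trace shows up in the site's log files; a minimal sketch (the controller and messages are placeholders):

    // Sketch: trace statements that surface in the Azure Web Site application log
    // once diagnostic logging is enabled. Controller/action names are made up.
    using System.Diagnostics;
    using System.Web.Mvc;

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            Trace.TraceInformation("Index requested at {0:u}", System.DateTime.UtcNow);
            try
            {
                return View();
            }
            catch (System.Exception ex)
            {
                Trace.TraceError("Index failed: {0}", ex);
                throw;
            }
        }
    }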
Another nice way to look around your site is using the debug console:
https://github.com/projectkudu/kudu/wiki/Kudu-console

Extremely uneven cloud service load-balancing with Azure

I'm utilizing Azure for hosting a cloud service, which I recently modified to be scalable across multiple instances, including a session caching worker role. My question is, why would I be seeing extreme load (upwards of 90%) on one instance, but not on other instances (15-20% across all other instances)? Should I be worried?
Before I set up load balancing, when my single instance hit upwards of 95% load it would slow to a crawl and become unusable. Is there any way to ensure that no users get that experience because they're round-robin'd onto the overloaded instance?
We found we had a similar situation when one load-balanced instance failed over: all the load transferred to the remaining instance but wouldn't balance out again. We found that turning off keep-alive for a couple of minutes let the load spread again, after which we could turn it back on.
http://technet.microsoft.com/en-us/library/cc772183(v=ws.10).aspx
Well... the Azure load balancer is round-robin based, so the distribution should be almost equal (something like 60-40 or even 70-30 is still acceptable). So just to be sure: are you sure you're not using the IIS "redirect" feature (I forget the name) that would set sticky sessions?
I must say that without further details about what your site actually does, and how, it's quite hard to advise. This behavior is strange, but it's not clear that it's the load balancer's fault.
Edit 1: I would suggest you examine further what the 90% instance is doing by tracing its activity. Maybe you're out of luck and the requests that cause heavy load are landing on that machine while the quick ones are handled by the other. Another thing that might be happening is that something is stuck (maybe an infinite loop). If you implemented a scalable architecture, I would recommend provisioning another machine and killing the one that is suffering.
Edit 2: A simple way to verify that the load balancer is working: log remotely into the service machines and replace something like an image displayed on the main page (something you can easily spot just by looking at the page). On server 1 put, let's say, a yellow image and on server 2 a red one (OK, maybe something less drastic, but you get the point). Then keep loading the page again and again.
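A less drastic variant of the same check is to render something instance-specific, such as the machine name, so repeated page loads show which box answered; a minimal sketch (the controller name is a placeholder):

    // Sketch: expose the serving instance's machine name so repeated page loads
    // reveal how the load balancer is distributing requests.
    using System;
    using System.Web.Mvc;

    public class DiagnosticsController : Controller
    {
        // GET /Diagnostics/WhoAmI
        public ContentResult WhoAmI()
        {
            return Content("Served by: " + Environment.MachineName, "text/plain");
        }
    }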

Determining Cause of Suspended Website on Windows Azure

I have a Website hosted on Windows Azure. This website is a custom ASP.NET MVC 4 site hosted as a shared web site instance. Within the past couple of days, I've started to get large spikes in CPU Time. These spikes have been sustained and have caused my web site to get suspended. However, I'm not sure how to determine the cause of these spikes. Here is what I've done so far:
I attempted to look at the diagnostics via the Windows Azure FTP drop. I did not see anything there.
I reviewed my Google Analytics to see if there was anything out of the ordinary. The site had 20 visitors yesterday. So nothing crazy.
How can I identify the culprit of the CPU spike? Once it spikes, it just sits there for hours. I'm not sure what would cause this.
Thank you
Have you tried running your site on your local box and simulating your visitor traffic, exercising all your website's features?
Testing locally is thousands of times easier and more revealing than trying to debug a site that's running live.
If you still can't find anything wrong when running locally, consider adding logging and tracing at strategic points in your site so that you can see how often your site executes complex operations and how long they take.
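A minimal sketch of that kind of timing trace (the helper and the operation name are made up); with tracing enabled, these lines end up in your logs:

    // Sketch: time a suspect operation and trace how long it took, so sustained
    // CPU spikes can be tied to a specific code path. "RebuildReport" is made up.
    using System.Diagnostics;

    public static class Timed
    {
        public static T Run<T>(string name, System.Func<T> operation)
        {
            var watch = Stopwatch.StartNew();
            try
            {
                return operation();
            }
            finally
            {
                watch.Stop();
                Trace.TraceInformation("{0} took {1} ms", name, watch.ElapsedMilliseconds);
            }
        }
    }

    // Usage: var report = Timed.Run("RebuildReport", () => reportBuilder.Rebuild());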

Keep getting signed out of Orchard

I have deployed Orchard to my online web server. It's all working quite nicely, until I log into the dashboard and try to create pages and such. I keep getting signed out; Orchard tells me access is denied. When I enter my credentials again, I log in and am able to view the page and continue. But after some clicking, I'm signed out again. Before getting signed out, there appears to be a long wait. What could cause this?
Update: I have the idea that it's a timeout of some sort, either on the cookie or some process on the server taking a long time. Overall, the performance of the site on the server isn't that great; pages take a long time (several seconds) to load. It's a shared hosting environment, so I don't have access to the server itself, unfortunately. But as a comparison, I'm also running a WordPress installation on the same box and that's just plain quick.
First hypothesis: your shared host is too weak, and has strict memory restrictions that cause the application to restart all the time. Fix: find a decent ASP.NET hosting company. That their plan works with a PHP app such as WordPress is no good indication of the quality of their ASP.NET service.
Second hypothesis: you are missing a machine key: http://docs.orchardproject.net/Documentation/Setting-up-a-machine-key

Architecture recommendation for load-balanced ASP.NET site

UPDATE 2009-05-21
I've been testing the #2 method of using a single network share. It is resulting in some issues with Windows Server 2003 under load:
http://support.microsoft.com/kb/810886
end update
I've received a proposal for an ASP.NET website that works as follows:
Hardware load-balancer -> 4 IIS6 web servers -> SQL Server DB with failover cluster
Here's the problem...
We are choosing where to store the web files (aspx, html, css, images). Two options have been proposed:
1) Create identical copies of the web files on each of the 4 IIS servers.
2) Put a single copy of the web files on a network share accessible by the 4 web servers. The webroots on the 4 IIS servers will be mapped to the single network share.
Which is the better solution?
Option 2 obviously is simpler for deployments since it requires copying files to only a single location. However, I wonder if there will be scalability issues since four web servers are all accessing a single set of files. Will IIS cache these files locally? Would it hit the network share on every client request?
Also, will access to a network share always be slower than getting a file on a local hard drive?
Does the load on the network share become substantially worse if more IIS servers are added?
To give perspective, this is for a web site that currently receives ~20 million hits per month. At recent peak, it was receiving about 200 hits per second.
Please let me know if you have particular experience with such a setup. Thanks for the input.
UPDATE 2009-03-05
To clarify my situation: the "deployments" in this system are far more frequent than for a typical web application. The web site is the front end for a back-office CMS. Each time content is published in the CMS, new pages (aspx, html, etc.) are automatically pushed to the live site. The deployments are basically "on demand", and theoretically a push could happen several times within a minute. So I'm not sure it would be practical to deploy one web server at a time. Thoughts?
I'd share the load between the 4 servers. It's not that many.
You don't want that single point of contention when deploying, nor that single point of failure in production.
When deploying, you can do them one at a time. Your deployment tools should automate this: notify the load balancer that the server shouldn't be used, deploy the code, do any pre-compilation work needed, and finally notify the load balancer that the server is ready.
We used this strategy in a 200+ web server farm and it worked nicely for deploying without service interruption.
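A rough sketch of that one-at-a-time flow; the ILoadBalancer interface and the deploy/warm-up delegates are hypothetical stand-ins for whatever your load balancer and deployment tooling actually expose:

    // Sketch of a rolling deployment: drain each server, deploy, warm up, re-enable.
    // ILoadBalancer, the delegates and the server list are hypothetical placeholders.
    using System.Collections.Generic;

    public interface ILoadBalancer
    {
        void RemoveFromRotation(string server);
        void ReturnToRotation(string server);
    }

    public static class RollingDeploy
    {
        public static void Run(ILoadBalancer balancer, IEnumerable<string> servers,
                               System.Action<string> deploy, System.Action<string> warmUp)
        {
            foreach (var server in servers)
            {
                balancer.RemoveFromRotation(server);   // stop sending traffic
                deploy(server);                        // copy files, pre-compile, etc.
                warmUp(server);                        // hit key URLs so first users aren't slow
                balancer.ReturnToRotation(server);     // resume traffic
            }
        }
    }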
If your main concern is performance, which I assume it is since you're spending all this money on hardware, then it doesn't really make sense to share a network filesystem just for convenience's sake. Even if the network drives are extremely high performing, they won't perform as well as native drives.
Deploying your web assets is automated anyway (right?), so doing it in multiples isn't really much of an inconvenience.
If it is more complicated than you're letting on, then maybe something like DeltaCopy would be useful to keep those disks in sync.
One reason the central share is bad is because it makes the NIC on the share server the bottleneck for the whole farm and creates a single point of failure.
With IIS 6 and 7, the scenario of using a single network share across N attached web/app server machines is explicitly supported. MS did a ton of perf testing to make sure this scenario works well. Yes, caching is used. With a dual-NIC server, one for the public internet and one for the private network, you'll get really good performance. The deployment is bulletproof.
It's worth taking the time to benchmark it.
You can also evaluate an ASP.NET Virtual Path Provider, which would allow you to deploy a single ZIP file for the entire app. Or, with a CMS, you could serve content right out of a content database rather than a filesystem. This presents some really nice options for versioning.
VPP For ZIP via #ZipLib.
VPP for ZIP via DotNetZip.
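For reference, a stripped-down sketch of a custom VirtualPathProvider; the in-memory dictionary here stands in for the database- or ZIP-backed store mentioned above:

    // Sketch: a VirtualPathProvider serving page content from an in-memory
    // dictionary, standing in for a database- or ZIP-backed store.
    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.IO;
    using System.Text;
    using System.Web;
    using System.Web.Caching;
    using System.Web.Hosting;

    public class CmsPathProvider : VirtualPathProvider
    {
        // Placeholder for content pulled from the CMS store.
        static readonly Dictionary<string, string> Content =
            new Dictionary<string, string> { { "/pages/about.html", "<h1>About</h1>" } };

        static string Key(string virtualPath)
        {
            return VirtualPathUtility.ToAppRelative(virtualPath).TrimStart('~');
        }

        public override bool FileExists(string virtualPath)
        {
            return Content.ContainsKey(Key(virtualPath)) || base.FileExists(virtualPath);
        }

        public override VirtualFile GetFile(string virtualPath)
        {
            string body;
            return Content.TryGetValue(Key(virtualPath), out body)
                ? new CmsFile(virtualPath, body)
                : base.GetFile(virtualPath);
        }

        // No physical file to watch, so return no cache dependency for our paths.
        public override CacheDependency GetCacheDependency(string virtualPath,
            IEnumerable virtualPathDependencies, DateTime utcStart)
        {
            return Content.ContainsKey(Key(virtualPath))
                ? null
                : base.GetCacheDependency(virtualPath, virtualPathDependencies, utcStart);
        }

        class CmsFile : VirtualFile
        {
            readonly string _body;
            public CmsFile(string virtualPath, string body) : base(virtualPath) { _body = body; }
            public override Stream Open() { return new MemoryStream(Encoding.UTF8.GetBytes(_body)); }
        }
    }

    // In Global.asax Application_Start:
    // HostingEnvironment.RegisterVirtualPathProvider(new CmsPathProvider());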
In an ideal high-availability situation, there should be no single point of failure.
That means a single box with the web pages on it is a no-no. Having done HA work for a major Telco, I would initially propose the following:
Each of the four servers has its own copy of the data.
At a quiet time, bring two of the servers off-line (i.e., modify the HA balancer to remove them).
Update the two off-line servers.
Modify the HA balancer to start using the two new servers and not the two old servers.
Test that to ensure correctness.
Update the two other servers then bring them online.
That's how you can do it without extra hardware. In the anal-retentive world of the Telco I worked for, here's what we would have done:
We would have had eight servers (at the time, we had more money than you could poke a stick at). When the time came for transition, the four offline servers would be set up with the new data.
Then the HA balancer would be modified to use the four new servers and stop using the old servers. This made switchover (and, more importantly, switchback if we stuffed up) a very fast and painless process.
Only when the new servers had been running for a while would we consider the next switchover. Up until that point, the four old servers were kept off-line but ready, just in case.
To get the same effect with less financial outlay, you could have extra disks rather than whole extra servers. Recovery wouldn't be quite as quick since you'd have to power down a server to put the old disk back in, but it would still be faster than a restore operation.
Use a deployment tool, with a process that deploys one server at a time while the rest of the system keeps working (as Mufaka said). This is a tried process that will work with both content files and any compiled piece of the application (whose deployment causes a recycle of the ASP.NET process).
Regarding the rate of updates, this is something you can control. Have the updates go through a queue, and have a single deployment process that controls when to deploy each item. Note this doesn't mean you process each update separately, as you can grab the current updates in the queue and deploy them together. Further updates will arrive in the queue and will be picked up once the current set of updates is done.
Update: About the questions in the comment. This is a custom solution based on my experience with heavy/long-running processes which need their rate of updates controlled. I haven't had the need to use this approach for deployment scenarios, as for such dynamic content I usually go with a combination of DB and cache at different levels.
The queue doesn't need to hold the full information; it just needs the appropriate info (ids/paths) that will let your process kick off the publishing with an external tool. As it is custom code, you can have it join in the information to be published, so you don't have to deal with that in the publishing process/tool.
The DB changes would be done during the publishing process; again, you just need to know where the info for the required changes is and let the publishing process/tool handle it. Regarding what to use for the queue, the main ones I have used are MSMQ and a custom implementation with the info in SQL Server. The queue is just there to control the rate of the updates, so you don't need anything specifically targeted at deployments.
Update 2: Make sure your DB changes are backwards compatible. This is really important when you are pushing changes live to different servers.
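A rough sketch of the queue-draining loop described above, using MSMQ since that's one of the options mentioned; the queue path and the publish callback are placeholders for your own tooling:

    // Sketch: drain pending publish requests from an MSMQ queue and deploy them
    // as one batch. Queue path, message body and the callback are placeholders.
    using System;
    using System.Collections.Generic;
    using System.Messaging;

    public static class DeploymentQueueWorker
    {
        public static void Run(Action<List<string>> publishBatch)
        {
            using (var queue = new MessageQueue(@".\private$\cms-deployments"))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

                while (true)
                {
                    // Block until at least one item arrives.
                    var batch = new List<string> { (string)queue.Receive().Body };

                    // Grab anything else already waiting so it ships in the same push.
                    while (true)
                    {
                        try
                        {
                            batch.Add((string)queue.Receive(TimeSpan.FromSeconds(1)).Body);
                        }
                        catch (MessageQueueException e)
                        {
                            if (e.MessageQueueErrorCode != MessageQueueErrorCode.IOTimeout) throw;
                            break; // nothing else waiting right now
                        }
                    }

                    publishBatch(batch); // hand the ids/paths to the publishing tool
                }
            }
        }
    }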
I was in charge of development for a game website that had 60 million hits a month. The way we did it was option #1. Users did have the ability to upload images and such, and those were put on a NAS that was shared between the servers. It worked out pretty well. I'm assuming that you are also doing page caching and so on, on the application side of the house. I would also deploy the new pages on demand to all servers simultaneously.
What you gain with NLB across the 4 IIS servers, you lose to the bottleneck at the app server.
For scalability, I'd recommend keeping the applications on the front-end web servers.
Here in my company we are implementing that solution: the .NET app on the front ends, an app server for SharePoint, plus a SQL Server 2008 cluster.
Hope it helps!
regards!
We have a similar situation to yours, and our solution is a publisher/subscriber model. Our CMS app stores the actual files in a database and notifies a publishing service when a file has been created or updated. The publisher then notifies all the subscribing web applications, and they go and get the file from the database and place it on their file systems.
We have the subscribers set in a config file on the publisher, but you could go the whole hog and have the web app do the subscription itself on app startup to make it even easier to manage.
You could use a UNC path for the storage; we chose a DB for convenience and for portability between our production and test environments (we simply copy the DB back and we have all the live site files as well as the data).
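A stripped-down sketch of the subscriber side of that model; the IContentStore interface, the notification entry point, and the paths are placeholders for whatever your CMS actually exposes:

    // Sketch: subscriber side of the publish/subscribe model. On a "file published"
    // notification it pulls the file body from the content store and writes it to
    // the local webroot. IContentStore and the paths are hypothetical placeholders.
    using System.IO;

    public interface IContentStore
    {
        byte[] GetFile(string relativePath); // e.g. reads from the CMS database
    }

    public class FileSubscriber
    {
        readonly IContentStore _store;
        readonly string _webRoot;

        public FileSubscriber(IContentStore store, string webRoot)
        {
            _store = store;
            _webRoot = webRoot;
        }

        // Called by the publisher's notification (web service call, queue message, etc.).
        public void OnFilePublished(string relativePath)
        {
            string target = Path.Combine(_webRoot, relativePath.TrimStart('\\', '/'));
            Directory.CreateDirectory(Path.GetDirectoryName(target));
            File.WriteAllBytes(target, _store.GetFile(relativePath));
        }
    }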
A very simple method of deploying to multiple servers (once the nodes are set up correctly) is to use robocopy.
Preferably you'd have a small staging server for testing, and then you'd 'robocopy' to all the deployment servers (instead of using a network share).
robocopy is included in the MS Resource Kit; use it with the /MIR switch.
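If you'd rather drive that push from code than a batch file, a minimal sketch that mirrors a staging folder to each node via robocopy (server names and share paths are placeholders; robocopy exit codes below 8 mean success):

    // Sketch: mirror the staging webroot to each deployment server with robocopy.
    // Server names and share paths are placeholders; exit codes >= 8 indicate failure.
    using System;
    using System.Diagnostics;

    public static class RobocopyPush
    {
        public static void MirrorToServers(string stagingPath, params string[] servers)
        {
            foreach (var server in servers)
            {
                string target = string.Format(@"\\{0}\wwwroot$", server); // placeholder share
                var robocopy = Process.Start(new ProcessStartInfo
                {
                    FileName = "robocopy",
                    Arguments = string.Format("\"{0}\" \"{1}\" /MIR /R:2 /W:5", stagingPath, target),
                    UseShellExecute = false
                });
                robocopy.WaitForExit();
                if (robocopy.ExitCode >= 8)
                    throw new Exception("robocopy failed for " + server);
            }
        }
    }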
To give you some food for thought, you could look at something like Microsoft's Live Mesh. I'm not saying it's the answer for you, but the storage model it uses may be.
With Mesh you download a small Windows service onto each Windows machine you want in your mesh and then nominate folders on your system that are part of it. When you copy a file into a Live Mesh folder, which is the exact same operation as copying to any other folder on your system, the service takes care of syncing that file to all your other participating devices.
As an example, I keep all my source code files in a Mesh folder and have them synced between work and home. I don't have to do anything at all to keep them in sync; the action of saving a file in VS.NET, Notepad, or any other app initiates the update.
If you have a web site with frequently changing files that need to go to multiple servers, and presumably multiple authors for those changes, then you could put the Mesh service on each web server, and as authors added, changed, or removed files the updates would be pushed automatically. As far as the authors go, they would just be saving their files to a normal old folder on their computer.
Assuming your IIS servers are running Windows Server 2003 R2 or better, definitely look into DFS Replication. Each server has its own copy of the files, which eliminates the shared-network bottleneck many others have warned against. Deployment is as simple as copying your changes to any one of the servers in the replication group (assuming a full mesh topology). Replication takes care of the rest automatically, including using remote differential compression to send only the deltas of files that have changed.
We're pretty happy using 4 web servers, each with a local copy of the pages, and a SQL Server failover cluster.
