Why is copying to a local UNC path so slow on my Windows 2008 R2 server? - windows-server-2008-r2

I have a Windows 2008 R2 server named Server that runs a backup each night to an encrypted USB drive attached to the server. I shared the drive on the server as shown below and configured Windows Server Backup to back up to the local shared folder.
\\Server\Backup
Everything worked well for several months, with backup speeds of about 1 GB/min (15 to 20 MB/s). At the beginning of this week, I noticed that the backup speed had dropped to 1 to 2 MB/s, which meant the backup no longer completed before the encrypted disk was automatically dismounted.
After much research and testing, I believe I have narrowed the cause down to a problem writing data to the local UNC path \\Server\Backup, as shown by comparing the following tests.
Test 1
c:> copy largefile h:\
The above DOS command quickly copies the file to the encrypted drive mounted at h:\.
Test 2
c:> copy largefile \\Server\Backup
The above DOS command copies the file to the same location but runs about 33x slower than the Test 1 command.
I reproduced the above results using the xcopy command instead of the copy command, but could not reproduce them on my Windows 7 computer, where both versions of the copy performed the same.
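For anyone trying to reproduce this, one way to quantify the gap is to run the same copy through robocopy, whose job summary reports throughput. This is only a sketch and assumes largefile sits in c:\temp:
c:> robocopy c:\temp h:\ largefile
c:> robocopy c:\temp \\Server\Backup largefile
Comparing the Speed lines in the two job summaries puts a concrete number on the roughly 33x difference.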
The following are some additional observations:
If I change the backup location to a UNC path on another server, such as \\Server2\Backup, everything works well.
After repeatedly running Windows Server Backup while making changes such as restarting the server and tweaking the NIC parameters, the backup will suddenly start running at the normal speed, but this only lasts about an hour before it drops back to the slow speed.

Related

What is Different Between How Azure File Sync Copies Files and How Regular Copy/Paste Works?

We set up Azure File Sync across 3 VMs in Azure running IIS 10 on Windows Server Datacenter 2016 in order to make a common group of files available to the application spread across these servers.
The weird thing is that when we paste files into one of the servers to initiate a sync, the web applications on the servers that receive the files through the sync process crash, but the server onto which we hand-pasted the files continues to function normally.
What is the difference between copying and pasting files by hand via RDP versus syncing the files via Azure File Sync?
I have tried pasting files by hand into each of the servers and have had zero application crashes, but every server has crashed every time it received files through the sync process.
The crash itself shows up as a 502.3 error in IIS, coming from the application, which runs .NET Core 2.1.

IIS shared config on network drive - if network is down for a bit, IIS doesn't recover

We have several servers using a shared IIS configuration stored on network storage. If access to that storage goes down for a few seconds (and then comes back), IIS stops working until you run iisreset.
The problem seems to be that the local app pool config files become corrupted. To be more precise, the error reported is "Configuration file is not well-formed XML", and if you open the app pool config, you see that instead of the actual config it contains something else entirely.
While trying to solve this, we came across the "Offline Files" feature and tried it for the shared applicationHost.config, but it would not sync (it reported that another process is using the file, which is strange, since I can easily change and save it).
The shared path starts with an IP address (like \\1.2.3.4\...). Perhaps that is the issue? I can't figure out why it would be; I am just out of ideas at this point.
Basically, I have two questions:
1) If the shared config is unavailable for a while, how can I make IIS recover instead of being left with corrupt files until an iisreset?
2) Is there any other way to prevent this situation altogether?
We did manage to get Offline Files to work. The problem was that the network drive is served over Samba and had to have oplocks enabled; otherwise it kept saying it could not sync because the file was in use by another process.
Now IIS does recover; in fact, it no longer goes down with the drive. However, since our websites are also on that drive, they are not available during a network outage (which is expected). The last strange thing is that it takes IIS about a minute to "see" them again after the drive comes back online.
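For reference, the relevant Samba share settings ended up looking roughly like the following. This is only a sketch; the share name and path are placeholders, not our actual configuration:
[iisconfig]
   path = /srv/iisconfig
   read only = no
   oplocks = yes
   level2 oplocks = yes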

What could be the reason for "svnrdump" and "svnadmin dump" producing different sized dumps?

I tried to migrate multiple repositories to a different SVN server.
I have root access to the source server, so I first tried to dump the repositories locally on the server using "svnadmin dump". This worked fine for the first couple of repositories, until I encountered a repository whose dump needed more space than the server had free disk space.
So instead I switched to using "svnrdump dump" to dump the repositories onto a remote machine. Since the root account on the source server has no SVN read access, I used my SVN user account instead; that account has full read and write access to all the repositories.
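For context, the two kinds of dump were made with commands of roughly this form (the repository path and URL are placeholders):
svnadmin dump /var/svn/reponame > reponame-svnadmin.dump
svnrdump dump https://svnserver/svn/reponame > reponame-svnrdump.dump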
To be sure, I dumped all repositories (not just the missing one) again with "svnrdump dump".
When I was done, I had some repositories that were dumped twice (once with svnadmin and once with svnrdump).
I then noticed that for one repository the dump created with "svnadmin dump" was 115 MB, while the dump created with "svnrdump dump" was only 78 MB.
The SVN server is a Unix machine running SVN 1.6.17, and the remote machine used for svnrdump is a Windows machine with TortoiseSVN 1.9.4 and SVN 1.9.4.
So, now I am unsure if my dumps made with "svnrdump" are really correct.
Could the different sizes be caused by the difference between the two accounts (root on the server on the one hand and my SVN user on the other)?
Or might it have something to do with the different versions of svn?
Regards,
Sebastian
When you use a 1.6.x client, the dump file contains the full data for every revision.
Dumps made with SVN clients 1.8.x and later contain only the delta info from revision to revision, so they are a lot smaller.
svnadmin dump has a --deltas switch to produce a dump using those deltas, which results in a smaller dump file; svnrdump does this unconditionally to reduce network traffic.
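To check this yourself, you can produce a deltified dump on the server and compare its size to the svnrdump one, and then load it into a scratch repository as a quick sanity check that the dump is usable (paths are placeholders):
svnadmin dump --deltas /var/svn/reponame > reponame-deltas.dump
svnadmin create /tmp/verify
svnadmin load /tmp/verify < reponame-deltas.dump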

Why is there activity on our FTP server while Cloudberry FTP backup says job is finished?

Here is the setup
We are testing CloudBerry for backing up files to a remote FTP server.
As a test, we are backing up files on a desktop, using CloudBerry's FTP backup, to an FTP server (FileZilla Server) located on the same desktop. The FileZilla Server in turn writes to a Synology NAS located on the same network.
The job is set to run every 24 hours
According to the CloudBerry interface, the job last ran at midnight and lasted 1 h 31 min.
There are no running jobs showing in the CloudBerry interface.
HOWEVER, it is 9 AM and FileZilla Server is still showing file uploads. FileZilla has a counter that keeps track of the number of connections; the count is currently at 1.2 million, but there are only about 70,000 files being backed up.
I deleted the job and created a new one, with the same result.
So what is going on?
Alex
Found the root cause of this issue.
By looking through the logs in %programdata%\CloudBerryLab\CloudBerry Backup\Logs, I found that a Consistency job was running every hour...
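For anyone else digging through those logs, a plain text search was enough to surface the entries. This is just a sketch; the exact log file names and extensions are an assumption on my part:
c:> findstr /s /i "consistency" "%programdata%\CloudBerryLab\CloudBerry Backup\Logs\*.*"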
No matter how many times I checked the backup job definition, this setting was never shown, as it is only displayed in the Welcome tab, not the Backup Plans tab...
I changed the Consistency job to run weekly.
Hope this helps somebody else.
Note: I am disappointed with the lack of support from CloudBerry, given that Stack Overflow is officially their support page as per http://www.cloudberrylab.com/support.aspx?page=support

Commits are extremely slow/hang after a few uploads

I've recently started to notice really annoying problems with VisualSVN (and VisualSVN Server) and/or TortoiseSVN. The problem occurs on multiple (2) machines, both running Windows 7 x64.
The VisualSVN Server machine is running Windows XP SP3.
What happens is that after, say, 1, 2, or 3 files (or a few more, but almost always at the same file) the commit just hangs on transferring data, with a speed of 0 bytes/sec.
I can't find any error logs on the server. I also requested a 45-day trial of the Enterprise Server for its logging capabilities, but there are no errors there either.
Accessing the repository disk itself is fast; I can search, copy, and paste on that disk (the SVN repository disk) just fine.
VisualSVN Server also does not use excessive amounts of memory or CPU; CPU usage stays around 0-3%.
The memory footprint of both the server and TortoiseSVN changes, which would indicate that at least "something" is happening.
Committing with Eclipse (a different project (PHP), a different repository on the same server) works great: no slowdowns, almost instant commits, whether with 1 file or 50 files. The Eclipse plugin I use is Subclipse.
I am currently quite stuck on this problem, and it is preventing us from working with SVN right now.
[edit 2011-09-08 1557]
I've noticed that it goes extremely slowly on 'large' files, for instance a 1700MB .resx (binary) or a 77KB .h source (text) file; 'small' files under 10KB go almost instantly.
[edit 2011-09-08 1608]
I've just added the code to code.google.com to see whether the problem is on my end or the server end. Adding to Google Code goes just fine, no hangs at all: 2.17 MB transferred in 2 min 37 s.
I've found and fixed the problem. It turned out to be a faulty NIC: speedtest.net showed ~1 Mbit, and swapping in a different NIC pushed this to the line maximum of 60 Mbit, solving my commit problems.
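For anyone hitting similar symptoms, a quick way to see the negotiated link speed of the active adapters on Windows is the built-in WMI command line (just a sketch, not part of my original troubleshooting):
c:> wmic nic where "NetEnabled=true" get Name,Speed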
