We started having an issue when uploading large files (> 6 MB) to an on-premises SharePoint 2016.
The whole server stops responding once the file is uploaded to SharePoint (whether through a client application or directly through SharePoint WebForms).
The only way to recover is to restart the server (hard reset)!
We are unable to capture any logs from the server, as the OS immediately freezes.
The OS is Windows Server 2012, and SharePoint is installed as a single-server deployment.
Uploading small files (less than 6 MB) works fine without any trouble.
Any clues?
Finally found out the root cause!
It was the VM storage causing all the trouble. Once we moved the VM to different storage, the issue disappeared.
I have a web app that uploads files to a server, but it does not work on a Google VM with Windows Server 2016 and IIS 10 for files larger than 1 MB. It works on other servers with IIS 8.5 and WS 2012. When I try to upload, it takes a long time, returns a success message, but the file never appears in the folder on the server.
The problem does not occur when I access my web app from the server itself or from another Google VM. It looks like a firewall problem, but I have not found anything different there.
Is there some different configuration in WS 2016 or IIS 10 that explains why this is happening?
I found a VPN that had been disabled but was still blocking the connections. I removed the VPN and everything started to work.
I have a VM with Windows 2012 R2 installed on it, and today I started experiencing file copy problems.
These are the cases:
File Copy:
If I try to copy a folder full of files to another location, the process freezes on a random file for some seconds or minutes.
This continues randomly until all the files are copied.
Unzip:
The same as above. If I try to unzip a file, the process freezes on a random file for some seconds or minutes.
Delete folder:
The same as File Copy. If I try to delete a folder, it takes a long time to start deleting, then the process freezes on a random file for some seconds or minutes.
RDC:
If I copy from the local machine to the remote machine using Remote Desktop, the upload runs to 99% and then freezes for seconds or minutes.
It started happening today; yesterday everything was working fine.
Is this a hardware problem? Could the disk be dying? Or is it a software problem?
I even started Windows Update, but after 30 minutes or so the download is still at 0 KB total.
Any ideas?
Note: there are no errors on Event Viewer.
Looks like the VM file server drive is acting up. Are you seeing this on any other VMs that you have? Silly question, but did you try restarting the VM? Maybe some service is in a deadlock.
Maybe there was a platform service issue while the problem was happening; if it is still happening, you should be able to check the status for the region your VM is in here:
https://azure.microsoft.com/en-us/status/
If it is an expendable VM, you can try reimaging it.
If it is still acting up, open a support ticket with Azure through the new portal.
To open a support request:
Go to the new portal at https://portal.azure.com
Browse to the VM that is causing the problem.
You should see New Support Request on the Settings blade of the VM.
I am currently having serious issues connecting to a FoxPro database using an ODBC connection from IIS7.5
The database is on another machine than the IIS server and is accessed via a fileshare.
When I call the webpage from IE on the IIS server everything works fine. When I call the webpage from another machine I get a '[Microsoft][ODBC Visual FoxPro Driver]Cannot open file' error.
The application pool runs as a domain user.
When I run ProcMon on the IIS server and call the page from the IIS server itself, it accesses the offending file and then a whole bunch of other FoxPro files for that database.
When I run the page from another machine, I get an ACCESS DENIED error when it tries to access the first file.
It is a CreateFile call for a file called Comp_W.DBC that fails.
I checked and it is the same user invoking these calls to the file share, so differing credentials are not causing the problem. I even went as far as making the app pool account a domain admin to see if that might sort out the issue, but the problem remains.
I cannot move the database onto the same server as IIS. I have tried running the web application on the same server as the FoxPro database, but I hit different issues because OWA runs on that server and the 32-bit ODBC driver conflicts with an OWA DLL that is loaded as a global module. I really need to run IIS on a separate server from the FoxPro database.
The servers do not seem to be using Kerberos, as the Delegation tab is not present when you administer users.
Any help would be greatly appreciated.
James :-)
I'd use the Visual FoxPro OLE DB driver instead of ODBC, because it's newer, faster and won't conflict with OWA. That would let you move it onto the IIS server.
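Not the original poster's code, but a minimal smoke test along those lines, assuming the VFP OLE DB provider (VFPOLEDB.1) is installed on the IIS box; the C:\FoxProData path and the some_table name are placeholders:

    // Hypothetical smoke test: open the database container with the VFP OLE DB provider
    // instead of the ODBC driver. The VFP provider is 32-bit only, so build this as x86,
    // and run it under the same domain account as the application pool.
    using System;
    using System.Data.OleDb;

    class VfpOleDbTest
    {
        static void Main()
        {
            // Placeholder path; point it at the real Comp_W.DBC location (local or UNC).
            var connectionString = @"Provider=VFPOLEDB.1;Data Source=C:\FoxProData\Comp_W.DBC;";

            using (var connection = new OleDbConnection(connectionString))
            {
                connection.Open();
                Console.WriteLine("Opened " + connection.DataSource);

                // Placeholder table name; use any table that exists in the DBC.
                using (var command = new OleDbCommand("SELECT COUNT(*) FROM some_table", connection))
                {
                    Console.WriteLine("Rows: " + command.ExecuteScalar());
                }
            }
        }
    }

If something like that opens and queries cleanly under the app pool account, the remaining problem is the share access rather than the driver.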
Past experience suggests that you haven't given the IIS user permission to access the folder where the DBFs live. When you run IE on the local machine, you're passing the credentials right through -- when you run it on another machine, I believe the anonymous user rules come into play. (Been a while since I had to debug this one, take it with a grain of salt.)
I got a backup of a Sharepoint 2010 site that I created from our client's production server so that I can make some new changes to it on my staging server.
I can restore the site collection from the backup without a problem but when I try to create a backup of the same site on my staging server, I always get the error "Operation is not valid due to the current state of the object".
Before the error is given, however, a small part of the backup file is created. If I try to run Backup-SPSite again, it always fails at the same point and the corrupt backup files are always the same size.
Going through the logs, it looks like the problem might be related to user permissions. I wonder if the user permissions, user data, etc. that came over from the client's production server are somehow breaking the backup process now, because the same data cannot be found on my staging server.
The same error is mentioned here: http://technet.microsoft.com/en-us/library/ee748617.aspx, but the UseSqlSnapshot parameter doesn't work in my case anyway.
I've been hitting my head against the wall with this problem and would appreciate if anyone has any advice on what might help! :)
The setup:
Windows Server 2008 R2
Sharepoint 2010 Server (no SP1 because it hasn't been installed on the client's production server)
Microsoft SQL Server Express Edition
Cheers!
The backup process started working after I checked in a file that had been checked out by a user on my client's production server.
I found out which file it was by opening the corrupted backup file and looking at the title of the last entry.
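For anyone hitting the same error, here is a rough sketch of how the checked-out files could be listed up front with the SharePoint 2010 server object model; the site URL, the class name, and the check-in comment are placeholders, not from the original post:

    // Hypothetical helper: list (and optionally check in) files that are still checked out
    // in the restored site collection before running Backup-SPSite again.
    // References Microsoft.SharePoint.dll and must run on the SharePoint server itself.
    using System;
    using Microsoft.SharePoint;

    class CheckedOutFileReport
    {
        static void Main()
        {
            using (var site = new SPSite("http://staging/sites/restored")) // placeholder URL
            {
                foreach (SPWeb web in site.AllWebs)
                {
                    using (web)
                    {
                        foreach (SPList list in web.Lists)
                        {
                            if (list.BaseType != SPBaseType.DocumentLibrary)
                                continue;

                            foreach (SPListItem item in list.Items)
                            {
                                SPFile file = item.File;
                                if (file == null || file.CheckOutType == SPFile.SPCheckOutType.None)
                                    continue;

                                Console.WriteLine("{0}/{1} is checked out to {2}",
                                    web.Url, file.Url, file.CheckedOutByUser);

                                // Uncomment to check the file in before the next backup attempt:
                                // file.CheckIn("Checked in before Backup-SPSite");
                            }
                        }
                    }
                }
            }
        }
    }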
I've started porting an existing ASP.NET web app to Windows Azure and have noticed that the development process is really slow. Each time I make a change to my code and want to view it, I have to effectively redeploy it to the local dev cloud (using Start Debugging (F5) or Start Without Debugging (Ctrl-F5)). The process itself takes over a minute, during which time Visual Studio is completely unresponsive.
Am I doing something wrong or is that simply how things are developing for Azure?
My specs:
Visual Studio 2008 9.0.30729.1 SP
5 projects running on .NET 3.5 SP1
Azure SDK 1.1 (February 2010)
Single instance of a single web role
Dual-core AMD 64 machine with 8GB RAM, 64-bit Windows 7, fully patched
The main project itself is quite large (3k files, ~200k lines) but compiles normally in 10-15 seconds
If your web role has limited functionality, you might be able to just set the Web project as the Active Project in your VS solution and run from there.
For example, my web role doesn't call into table storage, blob storage, etc... it just makes some Azure logging calls and interacts with SQL Azure. So sometimes I just set the web project to be the startup project in the VS debugger, not Azure, and run from there. I've properly written my logging calls to check if Azure is available before they write, so they don't execute in this situation.
Of course, if you're doing lots with table storage, queues, blobs, etc. then this is not for you.
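The "check if Azure is available" guard mentioned above can be as simple as branching on RoleEnvironment.IsAvailable; a minimal sketch, where the Log wrapper and the fallback behaviour are illustrative rather than the original poster's actual code:

    // Hypothetical wrapper: only rely on the Azure runtime when it is actually present
    // (dev fabric or cloud); otherwise fall back to plain tracing so the web project can
    // run directly under the Visual Studio web server without the cloud project.
    using System.Diagnostics;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public static class Log
    {
        public static void Info(string message)
        {
            if (RoleEnvironment.IsAvailable)
            {
                // Under the fabric, a listener such as DiagnosticMonitorTraceListener
                // (configured in web.config) forwards this to Azure diagnostics.
                Trace.TraceInformation(message);
            }
            else
            {
                // Plain web project run: just write to the debugger output.
                Debug.WriteLine(message);
            }
        }
    }

With a guard like that, pressing F5 on the web project alone skips the dev fabric deployment entirely, which is the point of the suggestion above.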
Normally on a development machine we just compile and run the solution. In the case of Azure development there is an additional step: the project is deployed to the dev fabric, which involves copying the complete website content to a dynamically created deployment folder. Since you have a large number of files, all of them have to be copied into a new folder every time you press F5 or Ctrl-F5, which may cause the delay you are noticing.
This scenario also highlights the inflexibility of deploying the solution to the Azure fabric: any time you change any content (static or dynamic) in the website, the complete site has to be packaged and re-uploaded to your production server.
In my case, when I changed the port from 80 to something else (under endpoints), the speed returned to normal.
Microsoft's Steve Marx has a blog post about running a website from a mounted VM in Azure. This may be a good development pattern since you simply update the contents of a VM stored in blob storage instead of having to redeploy to the fabric each time.