Getting a Performance Counter related error on Windows Azure - azure

I am facing a critical issue which might be interesting for those who are playing with the Windows Azure SDK. I have created an EXE which reads performance counter data such as CPU, memory, and ASP.NET sessions from the system, like this:
queryCollection = ExecuteWMIQuery("SELECT * FROM win32_perfformatteddata_perfdisk_physicaldisk");
and I have added this EXE as a startup task of a simple ASP.NET application which I have uploaded to Windows Azure. Now when I connect to it over RDP, I can see the following errors in my event log:
Disabled performance counter data collection from the
"ASP.NET_64_2.0.50727" service because the performance counter library
for that service has generated one or more errors. The errors that
forced this action have been written to the application event log.
Correct the errors before enabling the performance counters for this
service.
======================================================================
Windows cannot open the 64-bit extensible counter DLL
ASP.NET_64_2.0.50727 in a 32-bit environment. Contact the file vendor
to obtain a 32-bit version. Alternatively if you are running a 64-bit
native environment, you can open the 64-bit extensible counter DLL by
using the 64-bit version of Performance Monitor. To use this tool,
open the Windows folder, open the System32 folder, and then start
Perfmon.exe.
So I am thinking that my EXE is trying to fetch the performance counters as a 32-bit process (the Win32 prefix suggests that) and that this is what logs the above error.
Has anyone here come across this type of issue? Also, if my guess is correct, is there any way to implement my EXE logic in such a way that it runs smoothly in any environment (32-bit or 64-bit)?
Hope this remains an interesting question here!
Thanks In Advance
Arun.

That is correct. IIS running in Azure runs 64-bit unless you change it to run 32-bit in a startup task. You could try building your EXE with the Any CPU setting. But most likely the best way is to do something like what the Sysinternals tools do: they spawn a new process that runs in 64-bit mode when needed. That way you can handle both.
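As a rough sketch (not the asker's actual code): the EXE could check its own bitness at startup and hand off to a 64-bit build when it finds itself running as a 32-bit process on a 64-bit OS. MyPerfReader64.exe is a hypothetical name for that 64-bit build, and Environment.Is64BitProcess/Is64BitOperatingSystem require .NET 4 or later (on older frameworks you can check IntPtr.Size and the PROCESSOR_ARCHITECTURE environment variables instead).
using System;
using System.Diagnostics;

static class Program
{
    static void Main(string[] args)
    {
        // If we are a 32-bit process on a 64-bit OS, relaunch the 64-bit build
        // so the 64-bit counter DLLs (e.g. ASP.NET_64_2.0.50727) can be read.
        if (Environment.Is64BitOperatingSystem && !Environment.Is64BitProcess)
        {
            Process.Start("MyPerfReader64.exe", string.Join(" ", args));
            return;
        }

        // Bitness now matches the OS, so the perf counter / WMI queries can run here,
        // e.g. ExecuteWMIQuery("SELECT * FROM Win32_PerfFormattedData_PerfDisk_PhysicalDisk");
    }
}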

I encountered this error while migrating to an Azure VM.
I solved it by using the InstallUtil.exe located in the Framework64 folder instead of the one in the Framework folder.
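For example (the framework version folder and the service name below are placeholders; adjust them to your setup):
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe MyService.exe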

Related

DLL not downloading in WinDbg when analysing crash-dump file

After failing to get DebugDiag to analyse crash-dump files it was suggested that I try using WinDbg instead.
The crash-dump files have been created on a Windows Server 2016 box, running my ASP.Net 4.5.2 web application on IIS-10. My ASP.Net web application contains several 3rd party components, with their individual DLLs.
I have copied the crash-dump files onto my Windows 10 development machine, and am running WinDbg locally instead of on the server.
The problem is... when I run !analyze -v in WinDbg on any of the crash-dump files, it effectively hangs while "Downloading file xxx.DLL" (xxx.DLL being the name of just one of the 3rd party component DLLs), and eventually cancels itself after a period of time.
I'm running WinDbg on the same machine that I built the website on in the first place... so is there a way of telling WinDbg that it can find the DLL in a particular location on the local machine?
I obviously don't have a .pdb file for any of the 3rd party components, so I'm not bothered about it loading symbols for those DLLs... but I need to either tell it to ignore those particular DLLs, or tell it how to find them locally.
Can anybody point me in the right direction?
You don't have to analyze the dump file with !analyze -v.
If you need to load a DLL, then .load D:.... is enough.
To manually analyze a dump file:
Run .loadby sos clr to load the debugging module. If the crashed server and your machine run different versions of the .NET Framework, then you need to load sos.dll manually.
When you need to debug a .NET application in IIS, the !mex extension is recommended.
https://www.microsoft.com/en-us/download/details.aspx?id=53304
You can load mex.dll via .load c:\.....\mex.dll
!mex.aspxpages can show all requests inside the process and their status
!mex.mthreads shows the status of all threads
!mex.clrstack2 will show all exceptions and the managed call stack in a specific thread.
1. You can use ~* k to dump the full call stacks of all threads and !mex.mthreads to check their status.
Then you may find something like KERNELBASE!RaiseException in a specific thread.
2. Then switch to that thread via ~<thread id>s, for example ~12s.
3. Run !mex.clrstack2 and it will show the crash exception.
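Putting those steps together, a minimal session might look like this (the mex.dll path and the thread number 12 are placeholders):
$$ load SOS for the CLR in the dump, then the mex extension
.loadby sos clr
.load C:\tools\mex\mex.dll
$$ dump native stacks for all threads and check managed thread status
~* k
!mex.mthreads
$$ switch to the suspicious thread and show its exceptions / managed stack
~12s
!mex.clrstack2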
Basically, no, you cannot speed up the process of loading symbols for DLLs where you don't have symbols. IMHO, the only way of speeding up the symbol process would be to disable the HTTP server, so that symbols are only searched on your local disk.
See also: How to set up symbols in WinDbg if you have not done this often.
Getting an HTTP 404 for those files should not take very long. However, it tries various file endings and pointers etc. Sometimes Microsoft's servers are slow. Also, having a lot of 3rd party DLLs adds up, of course. That can be pretty annoying.
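A minimal way to do that, assuming your local copies of the DLLs and PDBs are in C:\windbg\symbols (adjust the path), is to set a local-only symbol path with no srv* entry and reload:
.sympath C:\windbg\symbols
.reload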
I'll start by saying I don't 100% understand everything I had to do, but here are the steps I took to discover where the stack overflow issue was in my application...
The majority of the information came from this blog.
On the server I added the following registry settings to create the crash dump files...
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps]
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe]
"DumpCount"=dword:00000005
"DumpFolder"=hex(2):43,00,3a,00,5c,00,43,00,72,00,61,00,73,00,68,00,44,00,75,\
00,6d,00,70,00,73,00,5c,00,00,00
(DumpCount is the number of files to store before it starts overwriting old ones; DumpFolder is where the files are to be saved - it is a REG_EXPAND_SZ and in my case represents C:\CrashDumps\)
Waited for crashes to happen
Copied the crash files into a directory on my local machine called C:\WinDbg\CrashDumps\
Created another directory called C:\WinDbg\Symbols, into which I placed...
clr.dll (from the server, taken from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\)
sos.dll (from the server, taken from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\)
all .dll and .pdb files from my local development environment, including third party component .dll files
Installed WinDbg via Windows Store on my Windows 10 development machine
Ran windbgx -y c:\windbg\symbols via Run command (for some reason it's windbgx on my machine but maybe that's because it's via the Store rather than manual download)
In the file menu Open dump file and select one of the dump files in C:\WinDbg\CrashDumps
Ran the following commands...
.symfix
.reload
.load c:\windbg\symbols\sos.dll (see note 1 below)
!clrstack (see note 2 below)
Although this didn't give me all the information I expected, what it did show was that one of my 3rd party components was 100% to blame for the stack overflow exception.
Note 1 - Lots of places I read said that .loadby sos clr should be used, but that just gave me The call to LoadLibrary(C:\ProgramData\Dbg\sym\clr.dll\5E7D1F3B9eb000\sos.dll) failed and I couldn't figure out how to fix it... so instead I've used .load c:\windbg\symbols\sos.dll.
Note 2 - The !clrstack command worked because WinDbg appeared to pre-select the thread that had the exception. The other option is to use ~*e !clrstack which will show you call stacks for ALL threads.

System crashes while using clearcase 8.0.1.x /9.0.1.x (checking out files) on windows 10 (1803) platform

After upgrading the system to Windows 10 (version 1803) we are getting the below issues while working with ClearCase 8.0.1.x/9.0.1.x:
Unable to checkin/checkout.
Not able to create views.
Not able to add any file to source control.
The system hangs & crashes while performing any ClearCase operation.
There is no error message, but I have attached a screenshot for reference.
Please let us know if there is any issue with Windows 10 version 1803, or whether any enabled security feature could cause this.
Or has ClearCase provided any fix?
We have tried 9.0.1.5 and the issue still persists.
This is what we got from the Windows event log:
The computer has rebooted from a bugcheck.
The bugcheck was:
0x000000c2 (0x0000000000000004, 0x00000000535be990, 0x000000000004efd3, 0xfffff803e01848b1)
This happens for most of those who have upgraded to Windows 10 version 1803 :( For people who are still using version 1709 it is working perfectly fine.
Then I would recommend contacting IBM support: only they can update their ClearCase 9/Windows 10 compatibility matrix and confirm whether MVFS is supported on a more recent (1803) Windows 10 edition.
We are also facing the same problem and I have raised a case with IBM. It is still not resolved. IBM said there are some limitations to running ClearCase with Windows 10 and Windows 2016.
We tried all the options except disabling Secure Boot. If possible, please disable the Secure Boot option in Windows 10 and try to check in/check out code from ClearCase.
Note: it works for snapshot views, which means the issue is related to MVFS.
I'm seconding @VonC's recommendation to open a ticket with IBM. When you do that, save a step and collect a clearbug2 and a kernel memory dump to send in as soon as the case is opened. It will save the turn-around time of us asking you for it. If the installed programs list doesn't include installed security software (DLP, privilege management software like Avecto, other endpoint security tools), please list those separately as well.
I would also love to know who at IBM told you there are "limitations" with Win10-1803.
There are a few issues with Windows 10 "version upgrades" breaking things, but they generally don't cause system crashes. Windows 10 upgrades are actually full OS installs that then (imperfectly) migrate application settings. Anything that uses custom network providers (ClearCase is one example) will find that the network providers will be broken or partially broken. Reinstalling is usually required. Again, that has not yet been reported as a cause of a BSOD.
If the upgrade/reinstall didn't fix view creation, please post a separate question on the view creation issue. There may be things we can do to the SMB 2 caches to allow view creation to work in cases where the view storage is not on the client host.
I noticed that the screen shot you posted is a Terminal Services disconnect screenshot. Does the issue only occur over a Terminal Services client connection or does it also happen on a local connection?

IIS in Classic Mode ignores Sitecore, does not load the startItem

I am using Sitecore 8.1, and our IIS crashes quite often in production - on average, 2 times a day.
Following this guide to improve Sitecore stability on 64-bit machines I have set the Enable 32-bit Applications option to True and changed the application pool's Managed Pipeline Mode to Classic.
Sitecore now displays the empty "Default Page" page, and even after deleting its file it attempts to simply list the directory content rather than loading my Sitecore application as it always did in Integrated mode.
Does anyone know how I can configure IIS in order to have Sitecore work properly in Classic Mode?
Switching your Application Pool to Classic mode is not a solution for stabilizing your application.
In Sitecore 8.1, Classic mode for IIS has been deprecated, and the httpModules and httpHandlers elements have been removed from the Web.config file.
Information about the Classic mode deprecation can be found here.
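For illustration only (the module name and type below are made up, not Sitecore's actual registrations): in Integrated mode the ASP.NET modules and handlers are registered under system.webServer, which Classic mode does not read, so a site that only registers its pipeline there will not work in Classic mode.
<!-- hypothetical integrated-mode registration; Classic mode ignores this section -->
<configuration>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true">
      <add name="ExampleModule" type="Example.Web.ExampleModule, Example.Web" />
    </modules>
  </system.webServer>
</configuration>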
It's very difficult to pinpoint your exact problem, so I suggest you open a support ticket.
What you will want to do is catch the actual crash of the IIS app pool with DebugDiag. You install it and then configure it to watch your app pool until it crashes. Once it does, DebugDiag will dump the memory of the app pool to your hard drive. You can then analyze that for the exact function and cause of the crash. You most likely have a process that is spinning off into a stack overflow in unmanaged code.
https://blogs.msdn.microsoft.com/chaun/2013/11/12/steps-to-catch-a-simple-crash-dump-of-a-crashing-process/

How to use firebird embedded on Linux with IBPP without running a service?

We're about to integrate a Firebird database into our software via IBPP. According to the Firebird documentation this should be possible.
We already managed to use the Firebird database via IBPP while the service was running. But we want to avoid running a service. On Windows we have already accomplished this, but on the Linux side there are two main differences:
Installation
On Windows it is not necessary to install anything. On Linux it seems to be, as the docs say:
Finally, you can't just ship libfbembed.so with your application and use it to connect to local databases. Under Linux, you always need a properly installed server, be it Classic or Super.
Is this true? I have found the Firebird documentation to be outdated sometimes. If this is still valid, how do we deal with this installation? Can we just run it on the customer's PC? I looked at the shell script; it starts a service. It seems to me that running this service is needed during the installation process. Anyway, this would be no problem if the service only runs during installation and is never needed afterwards, but I'm not sure about this.
IBPP
On Windows you just load the DLL via LoadLibrary: we put fbembed.dll, icuuc30.dll and icudt30.dll in any_directory, changed the passage in IBPP where the embedded DLL is loaded to LoadLibrary("any_directory\fbembed.dll"), and added any_directory to the PATH variable. Everything works now. (Aside: by doing this it is possible to call the database via a DLL we created using IBPP. This DLL can be used by every EXE we give to the customer without caring about the path the EXE is placed in.)
But on Linux I didn't find the code where this is done. From this HOWTO it seems a special directory structure is needed. Is this really necessary? Is it possible to place the .so files in any_directory and run the application from another_directory? Is it necessary to add a LoadLibrary equivalent to the Linux section in IBPP? (BTW: my problem is I can't really test things myself, because the Linux integration is being done by someone else.)

Windows Azure local development environment speed

I've started porting an existing ASP.NET web app to Windows Azure and have noticed that the development process is really slow. Each time I make a change to my code and want to view it, I have to effectively redeploy it to the local dev cloud (using Start Debugging (F5) or Start Without Debugging (Ctrl-F5)). The process itself takes over a minute, during which time Visual Studio is completely unresponsive.
Am I doing something wrong or is that simply how things are when developing for Azure?
My specs:
Visual Studio 2008 9.0.30729.1 SP
5 projects running on .NET 3.5 SP1
Azure SDK 1.1 (February 2010)
Single instance of a single web role
Dual-core AMD 64 machine with 8GB RAM, 64-bit Windows 7, fully patched
The main project itself is quite large (3k files, ~200k lines) but compiles normally in 10-15 seconds
If your web role has limited functionality, you might be able to just set the Web project as the Active Project in your VS solution and run from there.
For example, my web role doesn't call into table storage, blob storage, etc... it just makes some Azure logging calls and interacts with SQL Azure. So sometimes I just set the web project to be the startup project in the VS debugger, not Azure, and run from there. I've properly written my logging calls to check if Azure is available before they write, so they don't execute in this situation.
Of course, if you're doing lots with table storage, queues, blobs, etc. then this is not for you.
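As a rough sketch of that guard (assuming the Microsoft.WindowsAzure.ServiceRuntime assembly is referenced; the Log class name is just for illustration):
using Microsoft.WindowsAzure.ServiceRuntime;

public static class Log
{
    public static void Write(string message)
    {
        if (RoleEnvironment.IsAvailable)
        {
            // Running under the dev fabric or in the cloud: Azure diagnostics picks this up.
            System.Diagnostics.Trace.TraceInformation(message);
        }
        else
        {
            // Plain ASP.NET run outside Azure: fall back to ordinary debug output.
            System.Diagnostics.Debug.WriteLine(message);
        }
    }
}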
Normally on a development machine we just compile and run the solution. In the case of Azure development there is an additional step where the specific project is deployed to the dev fabric, which involves copying the complete web site content to a dynamically created deployment folder. Since you have a large number of files, this requires all those files to be copied into a new folder every time you press F5 or Ctrl-F5. This may cause the delay you are noticing.
This scenario also highlights the inflexibility of deploying the solution over the app fabric. Any time you change any content (static or dynamic) in the website, the complete site has to be packaged and re-uploaded to your production server.
In my case, when I changed the port from 80 to something else (under endpoints) the speed returned to normal.
Microsoft's Steve Marx has a blog post about running a website from a mounted VM in Azure. This may be a good development pattern since you simply update the contents of a VM stored in blob storage instead of having to redeploy to the fabric each time.
