Debugging Azure WebJob locally - recompile fails

I have a WebJob that I'm debugging locally as a console app, but once I stop the code from within VS2017 I'm unable to recompile the exe, as I get the following error:
Unable to copy file "obj\Debug\******.******.exe" to "bin\Debug\******.******.exe". Access to the path 'bin\Debug\******.******.exe' is denied.
When I look at the processes that are running, there's nothing that jumps out.
I've closed VS and restarted it, but that hasn't cleared the issue.
Other than restarting my machine, is there anything else I can try?

You can use the Windows Resource Monitor tool. Once it's open, navigate to the CPU tab, enter the full path of the .exe (e.g. C:\Project\bin\debug\App.exe) in the Search Handles box in the bottom-mid right of the window, and search. You should see the list of processes currently holding a handle to that resource (in this case, the .exe). Select the unwanted processes, right-click and end them. Depending on the version of Windows you are running, the experience might differ a bit, but the general idea is more or less the same.
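If you prefer the command line, the Sysinternals handle utility can run the same search; a sketch, assuming handle.exe is on your PATH and using an example path (it matches on a name fragment and lists every process holding a matching handle):
handle.exe bin\Debug\App.exe
Each hit shows the owning process name and PID, which you can then end via Task Manager or taskkill.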

Related

DLL not downloading in WinDbg when analysing crash-dump file

After failing to get DebugDiag to analyse crash-dump files it was suggested that I try using WinDbg instead.
The crash-dump files have been created on a Windows Server 2016 box, running my ASP.Net 4.5.2 web application on IIS-10. My ASP.Net web application contains several 3rd party components, with their individual DLLs.
I have copied the crash-dump files onto my Windows 10 development machine, and am running WinDbg locally instead of on the server.
The problem is... when I run !analyze -v in WinDbg on any of the crash-dump files, it effectively hangs while "Downloading file xxx.DLL" (xxx.DLL being the name of just one of the 3rd party component DLLs), and eventually cancels itself after a period of time.
I'm running WinDbg on the same machine that I built the website on in the first place... so is there a way of telling WinDbg that it can find the DLL in a particular location on the local machine?
I obviously don't have a .pdb file for any of the 3rd party components, and so I'm not bothered about it loading symbols for those DLLs... but either I somehow tell it to ignore those particular DLLs, or I tell it how to find them locally.
Can anybody point me in the right direction?
You don't have to analyze the dump file with !analyze -v.
If you need to load a DLL, then .load D:\... is enough.
To manually analyze a dump file:
Run .loadby sos clr to load the SOS debugging extension. If the crashing server and your machine run different versions of the .NET Framework, then you need to load sos.dll manually.
When you need to debug a .NET application in IIS, the !mex extension is recommended.
https://www.microsoft.com/en-us/download/details.aspx?id=53304
You can load mex.dll via .load c:\.....\mex.dll
!mex.aspxpages shows all requests inside the process and their status
!mex.mthreads shows the status of all threads
!mex.clrstack2 shows all exceptions and the managed call stack in a specific thread
1. You can use ~* k to load the full call stack in all threads and !mex.mthreads to check thread status.
Then you may find something like KERNELBASE!RaiseException in a specific thread.
2. Switch to that thread via ~<threadid>s, e.g. ~12s.
3. Run !mex.clrstack2 and it will show the crash exception.
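Put together, a minimal session built from the steps above might look like this (thread 12 is just the example id from step 2):
~* k
!mex.mthreads
~12s
!mex.clrstack2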
Basically, no, you cannot speed up the process of loading symbols for DLLs where you don't have symbols. IMHO, the only way of speeding up the symbol process would be to disable the HTTP server, so that symbols are only searched on your local disk.
See also: How to set up symbols in WinDbg if you have not done this often.
Getting an HTTP 404 for those files should not take very long. However, WinDbg tries various file endings and pointer files etc., sometimes the Microsoft servers are slow, and having a lot of 3rd party DLLs adds up, of course. That can be pretty annoying.
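For example, restricting the search to a local folder so that no HTTP requests are made at all (the folder path is an example; running .symfix later would re-add the Microsoft symbol server):
.sympath c:\windbg\symbols
.reload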
I'll start by saying I don't 100% understand everything I had to do, but here are the steps I took to discover where the stack overflow issue was in my application...
The majority of the information came from this blog.
On the server I added the following registry settings to create the crash dump files...
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps]
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe]
"DumpCount"=dword:00000005
"DumpFolder"=hex(2):43,00,3a,00,5c,00,43,00,72,00,61,00,73,00,68,00,44,00,75,\
00,6d,00,70,00,73,00,5c,00,00,00
(DumpCount is the number of dump files to keep before it starts overwriting old ones. DumpFolder is where the files are saved; it is a REG_EXPAND_SZ and in my case represents C:\CrashDumps\. A reg.exe equivalent is sketched after the notes below.)
Waited for crashes to happen
Copied the crash files into a directory on my local machine called C:\WinDbg\CrashDumps\
Create another directory called C:\WinDbg\Symbols, into which I placed...
clr.dll (from the server, taken from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\)
sos.dll (from the server, taken from C:\Windows\Microsoft.NET\Framework64\v4.0.30319\)
all .dll and .pdb files from my local development environment, including third party component .dll files
Installed WinDbg via Windows Store on my Windows 10 development machine
Ran windbgx -y c:\windbg\symbols via Run command (for some reason it's windbgx on my machine but maybe that's because it's via the Store rather than manual download)
In the file menu Open dump file and select one of the dump files in C:\WinDbg\CrashDumps
Ran the following commands...
.symfix
.reload
.load c:\windbg\symbols\sos.dll (see note 1 below)
!clrstack (see note 2 below)
Although this didn't give me all the information I expected, what it did show was that one of my 3rd party components was 100% to blame for the stack overflow exception.
Note 1 - Lots of places I read said that .loadby sos clr should be used, but that just gave me The call to LoadLibrary(C:\ProgramData\Dbg\sym\clr.dll\5E7D1F3B9eb000\sos.dll) failed and I couldn't figure out how to fix it... so instead I've used .load c:\windbg\symbols\sos.dll.
Note 2 - The !clrstack command worked because WinDbg appeared to pre-select the thread that had the exception. The other option is to use ~*e !clrstack which will show you call stacks for ALL threads.
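As mentioned above, the LocalDumps values can also be set from an elevated command prompt with reg.exe rather than a .reg file; a sketch matching the export shown earlier (the folder is given without a trailing backslash, since a trailing backslash would escape the closing quote):
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpCount /t REG_DWORD /d 5 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps\w3wp.exe" /v DumpFolder /t REG_EXPAND_SZ /d "C:\CrashDumps" /f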

Unable to debug in Visual Studio because process can not access file

When I try to debug in Visual Studio I get the error message:
Unable to copy file "C:\Users\Name\Dropbox\Company Name\Development\Product Name 4 - Release Candidate\packages\MahApps.Metro.1.1.2.0\lib\net45\MahApps.Metro.dll" to "bin\Debug\MahApps.Metro.dll". The process cannot access the file 'bin\Debug\MahApps.Metro.dll' because it is being used by another process. Product Name 4 - Release Candidate
How can I fix this error?
This happens all the time in Dropbox. Dropbox does some occasional (very brief) locking of files as it is indexing them, and if you happen to open a file handle with the write attribute set at the same moment, the program will receive a file I/O exception (this can happen to your own code as well, so if you regularly work in Dropbox, be sure to handle that gracefully; see the sketch below).
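The graceful handling mentioned above can be as simple as retrying the write a few times; a minimal C# sketch (the class name, attempt count and delay are illustrative, not from the original answer):
using System.IO;
using System.Threading;

static class SafeIo
{
    // Retry a write a few times to ride out transient locks (e.g. Dropbox indexing)
    public static void WriteWithRetry(string path, string contents, int attempts = 5)
    {
        for (int i = 0; i < attempts; i++)
        {
            try
            {
                File.WriteAllText(path, contents);
                return;
            }
            catch (IOException) when (i < attempts - 1)
            {
                Thread.Sleep(200); // brief back-off, then retry
            }
        }
    }
}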
Try compiling/running it again and see if the problem goes away. If not, then you likely still have an instance of your application running in the background. This can occur if your program ever forks: VS will terminate the original process, but often not the processes forked from it. Check Task Manager to be sure; it will be listed as a background process in Windows 8/8.1.
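A quick way to check from the command line (the image name is an example):
tasklist /FI "IMAGENAME eq MyApp.exe"
taskkill /IM MyApp.exe /F
The first command lists any surviving instances; the second force-kills them so the next build can overwrite the locked DLL.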

How do I debug a Worker Role using Remote Desktop with Windows Azure?

I now have my Windows Azure environment set up so that I can access my Worker Role with Remote Desktop. However, I'm not sure how to proceed at the moment. After much digging I found a web site that was offline but in Google's cache there was mention of attaching to the Worker Role running in the Azure Cloud from the Visual Studio debugger. But I only have Visual Developer (not studio) 2010 and I have searched all over and as far as I can see there is no such option to attach to a remote server. I am able to publish my project to the Azure Cloud without error and I have a "healthy" instance of my Worker Role showing as active and running.
I did connect with RDP through the Azure Management portal. The login worked fine and up came the remote desktop window. I searched through much of what I could find and was unable to find my Worker Role. I must have the wrong impression of RDP, because I had hoped to see the Worker Role's main display form when I logged in, just like I do when I debug it locally in the Cloud Emulator. But instead all I saw was a blank desktop with some base level server inspection and management routines. I even checked the Event Viewer for Application related messages and saw none.
So now I'm stuck wondering if my Worker Role is actually running or not, despite the seemingly positive status messages from the Management Portal, and I still want to attach to my Worker Role for debugging through Visual Developer, if it's possible, but I am unable to figure out how.
Anyone with experience in this area that can give me some solid tips on what to do next, please respond.
UPDATE: I believe my worker role may be running because I opened a command window, ran Netstat, and saw it listening on the correct port. However, that may just be my Worker Role shell class that starts the custom EXE it launches as a spawned process. I still haven't confirmed whether my custom EXE is running yet.
UPDATE-2: Just ran TaskList from a command window and the custom EXE is listed.
UPDATE-3: Everything is working as I just ran a remote test of the service so that's not a problem. Still want to know how to attach to the Worker Role from Visual Developer 2010 for remote debugging, and if it's possible to see the custom EXE's display form like I do when doing local debugging in the Cloud Emulator.
-- roschler
There is a set of articles here which goes in length on how to set up for remote debugging in Azure:
http://blogs.u2u.be/peter/post/2011/06/21/Remote-debugging-an-Azure-Worker-role-using-Azure-Connect-Remote-desktop-and-the-remote-debugger.aspx
http://blogs.u2u.be/peter/post/2011/06/24/Remote-debugging-an-Azure-worker-role-using-Azure-Connect-remote-desktop-and-remote-debugger-part-2.aspx
http://blogs.u2u.be/peter/post/2011/06/26/Remote-debugging-a-Windows-Azure-Worker-Role-using-Azure-Connect-Remote-desktop-and-the-remote-debugger-part-3.aspx
The key takeaway is that you don't need to actually install Visual Studio on Azure, you only need to copy the Remote Debugger bits and then use Azure Connect to add your developer machine to the Virtual Network.
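If you go that route, starting the copied remote debugger on the role instance typically looks something like this (flags per the msvsmon documentation; disabling authentication is only sensible on a private, locked-down network):
msvsmon.exe /noauth /anyuser /nosecuritywarn /timeout:36000
You then attach from your local Visual Studio via Debug > Attach to Process, pointing the qualifier at the instance's address.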
You can set up Remote Debugging with Visual Studio 2012:
http://code.msdn.microsoft.com/Remote-Debugging-Windows-dedaaec9
When you say:
But instead all I saw was a blank desktop with some base level server inspection and management routines.
this is exactly what you get with an Azure VM. It's a basic OS install, plus the bare minimum of Azure stuff it needs to run and the code you've uploaded. There's no fancy monitoring or health checks available on the machine by default, you're expected to have provided those yourself to have them available without having to RDP into the machine to check on it.
RDP is very good for tracking down certain problems, like checking that a startup task will run, checking which directories items are installed in and just generally being nosey. If you need extra tools to track down a problem, you can just install them while you're connected to the server. For example I have RDPed into a server and installed the Microsoft Debugging Tools, to track down a memory issue.
I suppose you could remote into your VM, install Visual Studio there, and debug the process...
I also suppose it might be possible to enable remote debugging (not sure what's involved there, but such a thing exists, and it works over TCP) and debug from a local instance of Visual Studio.
To my knowledge, neither is commonly done.
Based on other answers, you would be better off writing a log file to local storage (a minimal sketch follows at the end of this answer). You can read the file over RDP if you really have to. Keep in mind, debugging on Azure isn't really simple, and rightly so.
What I was thinking, though, was that maybe you could run the process using the user's credentials. I can't verify at the moment, but you'd have a better shot of seeing the UI when you RDP.
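A minimal C# sketch of the log-to-local-storage idea, to be called from the role's Run loop, assuming a local storage resource named "Logs" has been declared in the service definition (the resource and file names are illustrative):
using System;
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

LocalResource logs = RoleEnvironment.GetLocalResource("Logs");
string logFile = Path.Combine(logs.RootPath, "worker.log");
// Append a heartbeat line each pass of the worker loop so an RDP session can confirm the role is alive
File.AppendAllText(logFile, DateTime.UtcNow.ToString("o") + " heartbeat" + Environment.NewLine);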

IIS executable not executing

I have been looking at an issue for a week straight and have been unable to figure it out and I am desperate for the fix.
On a client site, we have two environments: UAT and PROD. UAT works perfectly (please keep this in mind). We are now trying to deploy the solution to PROD, but certain parts of the solution are not working.
We have developed an ASP.NET application that we provide to clients to allow them to invoke SSIS packages (there are a couple of drop-downs they first select from, then they click a button named "Invoke").
When the user clicks the Invoke button, a batch file named InvokeSSIS.bat is called that assembles a command-line call to dtexec with the appropriate parameters.
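For reference, the assembled command presumably looks something along these lines (the package path and variable are placeholders, not taken from the question):
dtexec /File "H:\Packages\GenerateSpreadsheet.dtsx" /Set \Package.Variables[User::ClientId].Properties[Value];42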
I'm having a problem with a particular package that is responsible for calling an executable which generates a spreadsheet that I will be importing into my system.
The executable is on a mapped H:\ drive.
I have modified the InvokeSSIS.bat batch file to capture the command the batch file is generating. If I execute this command from the command line, it works perfectly. From the webapp invoker, it executes the package, but the task responsible for calling the executable doesn't execute: the entire package takes only 1 second to complete (whereas it should take about a minute).
The executable DOES have a GUI, but it is NOT interactive. This is because when you call the GUI with specific parameters, it automatically runs in batch mode and executes a macro used to generate the desired spreadsheet.
I know this is ok because it works on the UAT server AND it works from the command line!
I have checked the permissions on the executable (by right-clicking the executable and clicking Properties). I have granted Full Control on the executable to the same user specified on the Identity tab of the application pool I am using.
Can someone please help me? As I said I am dying over here!
Please let me know if you have any ideas or what other info you need.
Environment (both UAT and PROD)
OS: Windows Server 2003
IIS 6
asp.net 2.0
SQL Server 2008
Thanks!
Steve
You can't use a mapped drive with IIS.
You must use the \\servername syntax to reach files on other systems.
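For example, if the batch file currently launches H:\Tools\Generate.exe, point it at the share behind that mapping instead (server and share names here are placeholders):
\\fileserver\tools\Generate.exe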
I agree with user544284 that this is at least in part a mapping issue. I'll ignore for a minute the complete insanity of having a web application call a batch file to start an executable that's on a remote network drive through a drive letter mapping.
Most likely the UAT box has something set up that maps that drive letter for you which Prod is missing.
The only other possibility is that a security violation is occurring. Running .exes from a network drive is generally frowned upon. Do the two environments have the exact same version of Windows? Are they configured the same with regard to UAC? Any differences here are going to be important.
Which brings up an interesting thought. I wonder if someone logged in to the UAT server using the same account credentials the app pool is using and added the IP address of the machine where the exe lives to the list of "Local Intranet" sites... Or if they installed SSIS on the UAT server itself.
Just because YOU can log in to the server and run it from the command line means nothing. You have to find out whether the drive letter is mapped at all for the user the web app runs under, whether that user has the required security bits, and whether the local OS will allow it regardless.
Okay, I can't ignore it: harebrained is the nicest adjective I can come up with for this "architecture". Do yourself a favor and go back to the drawing board on this one. It has the word "brittle" written all over it, as you have already found. Instead of building out a batch file to call dtexec, just invoke the package directly, along the lines of the sketch below.
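One direct option is the SSIS managed object model; a minimal C# sketch, assuming a reference to Microsoft.SqlServer.ManagedDTS and with the package path as a placeholder:
using Microsoft.SqlServer.Dts.Runtime;

Application app = new Application();
// Load the package from the file system; LoadFromSqlServer is the alternative for server-stored packages
Package package = app.LoadPackage(@"C:\Packages\GenerateSpreadsheet.dtsx", null);
DTSExecResult result = package.Execute();
if (result != DTSExecResult.Success)
{
    // package.Errors holds the failure details
}
This keeps execution in-process under the app pool identity, so the same permission questions apply, but failures surface as inspectable errors instead of a silently short-circuiting batch file.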

Is it possible for a team to use Eclipse installed on a shared network drive?

Our lead programmer likes to install tools on a shared network drive to minimize effort when updating. He recently installed Eclipse to the network drive, but when I run it, I get a window that says Workspace in use or cannot be created, choose a different one. After clicking OK, I get a window with a drop-down menu containing only one item, the workspace on his machine. I can then browse to the workspace on my machine, click OK, and Eclipse continues to start up and run just fine. There's a check box in that second window that says Use this workspace as the default, which I've checked after browsing to and selecting my workspace, but the next time I start Eclipse, it reverts to the lead's workspace.
Are we violating some assumption that Eclipse makes about the install? We're on a Linux network, if it makes a difference.
Set up the shared Eclipse such that it cannot be modified by the users accessing it. This should (if I recall correctly) force Eclipse into a "Shared User, Hands Off" mode and default to storing settings per user account.
Do not share Workspaces (or Projects) -- this will only break things horribly -- use a different strategy such as a proper revision control system.
Perhaps this documentation will be helpful.
"""The set up for this [shared] scenario requires making the install area read-only for regular users. When users start Eclipse, this causes the configuration area to automatically default to a directory under the user home dir. If this measure is not taken, all users will end up using the same location for their configuration area, which is not supported."""
I would try to run Eclipse locally as well as over the network. Using a shared network drive may make Eclipse more painful than it sometimes is. A development environment should work for the developer, even at the expense of a slightly more complicated setup.
Eclipse stores a lot of settings, including the workspace list, in its installation directory (especially the "configuration" directory). It's hard to say how well sharing the installation will work, but I wouldn't be surprised if there were a number of issues caused by "fighting" between Eclipse instances running on different developers' workstations.
To fix the particular issue you're having, you could set up a separate startup script that passes your workspace as a command-line argument to Eclipse, bypassing the workspace selection dialog you're seeing.
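A wrapper script along those lines might look like this (the install path is an example; -data is Eclipse's standard workspace switch):
#!/bin/sh
exec /opt/eclipse/eclipse -data "$HOME/workspace" "$@"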
