Azure web job keeps restarting - azure

I've created a console application that I run as a continuous WebJob. It does not use the WebJobs SDK, but it checks for the file specified in the WEBJOBS_SHUTDOWN_FILE environment variable.
I have turned the 'Always On' option on, and I'm running the job on the Shared plan as a singleton using the settings.job file.
The web job keeps being stopped via the WEBJOBS_SHUTDOWN_FILE mechanism and is then restarted.
I've been looking through the Kudu sources, but I can't find why my web job is restarted.
Does anybody have an idea why this happens?
This is the code that initializes the FileSystemWatcher:
private void SetupExitWatcher()
{
    // Watch the directory containing the shutdown file that Kudu creates on stop.
    var file = Environment.GetEnvironmentVariable("WEBJOBS_SHUTDOWN_FILE");
    var dir = Path.GetDirectoryName(file);
    var fileSystemWatcher = new FileSystemWatcher(dir);

    FileSystemEventHandler changed = (o, e) =>
    {
        if (e.FullPath.IndexOf(Path.GetFileName(file), StringComparison.OrdinalIgnoreCase) >= 0)
        {
            this.Exit();
        }
    };

    fileSystemWatcher.Created += changed;
    fileSystemWatcher.NotifyFilter = NotifyFilters.CreationTime | NotifyFilters.FileName | NotifyFilters.LastWrite;
    fileSystemWatcher.IncludeSubdirectories = false;
    fileSystemWatcher.EnableRaisingEvents = true;
}
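As an aside, a common alternative to a FileSystemWatcher is simply polling for the shutdown file. A minimal sketch of that approach (not the code actually deployed; the one-second interval is an arbitrary choice, and Exit() is the same method used above):

private System.Threading.Timer _shutdownTimer;

private void SetupExitPolling()
{
    // Kudu creates the file at this path when it wants the job to stop.
    var file = Environment.GetEnvironmentVariable("WEBJOBS_SHUTDOWN_FILE");
    if (string.IsNullOrEmpty(file))
        return; // not running under Kudu, nothing to watch

    // Keep a reference to the timer so it is not garbage collected.
    _shutdownTimer = new System.Threading.Timer(_ =>
    {
        if (System.IO.File.Exists(file))
        {
            this.Exit();
        }
    }, null, TimeSpan.Zero, TimeSpan.FromSeconds(1));
}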

Related

How to reduce Azure web app temp file utilization

I have a web app developed in ASP.Net MVC 5 hosted in Azure. I am using a shared app service, not VMs. Recently Azure has started showing warnings that I need to reduce my app's usage of temporary files on workers.
Temp file utilization
After restarting the app, the problem went away; it seems the temporary files were cleared by the restart.
How can I detect and prevent unexpected growth of temporary file usage? I am not sure what generated 20 GB of temporary files. What should I look for in order to reduce the app's temp file usage? I am not explicitly storing anything in temporary files in code, and data is stored in the database, so I'm not sure where to look.
What are the best practices to keep temp file usage at a healthy level and prevent unexpected growth?
Note: I have multiple virtual paths mapped to the same physical path in my Web App.
Virtual path
try
{
    if (file != null && file.ContentLength > 0)
    {
        var fileName = uniqefilename;
        CloudStorageAccount storageAccount = AzureBlobStorageModel.GetConnectionString();
        if (storageAccount != null)
        {
            CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
            string containerName = "storagecontainer";
            CloudBlobContainer container = blobClient.GetContainerReference(containerName);
            bool isContainerCreated = container.CreateIfNotExists(BlobContainerPublicAccessType.Blob);
            CloudBlobDirectory folder = container.GetDirectoryReference("employee");
            CloudBlockBlob blockBlob = folder.GetBlockBlobReference(fileName);

            UploadDirectory = String.Format("~/upload/{0}/", "blobfloder");
            physicalPath = HttpContext.Server.MapPath(UploadDirectory + fileName);
            file.SaveAs(physicalPath);

            isValid = IsFileValid(ext, physicalPath);
            if (isValid)
            {
                using (var fileStream = System.IO.File.OpenRead(physicalPath))
                {
                    blockBlob.Properties.ContentType = file.ContentType;
                    blockBlob.UploadFromFile(physicalPath);
                    if (blockBlob.Properties.Length >= 0)
                    {
                        docURL = blockBlob.SnapshotQualifiedUri.ToString();
                        IsExternalStorage = true;
                        System.Threading.Tasks.Task T = new System.Threading.Tasks.Task(() => deletefile(physicalPath));
                        T.Start();
                    }
                }
            }
        }
    }
}
catch (Exception ex)
{
}

//Delete File
public void deletefile(string filepath)
{
    try
    {
        if (!string.IsNullOrWhiteSpace(filepath))
        {
            System.GC.Collect();
            System.GC.WaitForPendingFinalizers();
            System.IO.File.Delete(filepath);
        }
    }
    catch (Exception e) { }
}
Your problem may be caused by using temporary files to process uploads or downloads. The solution would be either to process files in memory (a MemoryStream rather than a FileStream) or to delete the temporary files once you are finished processing them. This SO exchange has some relevant suggestions:
Azure Web App Temp file cleaning responsibility
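If the round trip through the local file system is not strictly required, one option (a sketch against the classic WindowsAzure.Storage client, reusing the blockBlob, file, docURL and IsExternalStorage names from your snippet) is to upload the posted file's stream straight to blob storage, so no temp file is ever written:

// Upload directly from the request stream instead of saving to a temp file first.
blockBlob.Properties.ContentType = file.ContentType;
file.InputStream.Position = 0; // rewind in case the stream was already read for validation
blockBlob.UploadFromStream(file.InputStream);
docURL = blockBlob.SnapshotQualifiedUri.ToString();
IsExternalStorage = true;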
Given your update, it looks like your file upload code lets temp files accumulate because you are not waiting for the async call that deletes the file to finish before you exit. I assume this code block is tucked inside an MVC controller action, which means that, as soon as the block finishes, it abandons the un-awaited task, leaving you with an undeleted temp file.
Consider updating your code to await the task. You may also want to switch to Task.Run. For example:
var t = await Task.Run(async delegate
{
    //perform your deletion in here
    return some-value-if-you-want;
});
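Applied to your snippet, the action would then wait for the temp file to be removed before returning, for example (assuming the surrounding controller action is already async):

// Wait for the temp file to be deleted before the action completes,
// so temp files cannot accumulate between requests.
await System.Threading.Tasks.Task.Run(() => deletefile(physicalPath));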

Calling WCF endpoint from Azure Function

I have a v2 Azure Function written in .NET Core. I want to call into a legacy WCF endpoint like this:
basicHttpBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
var factory = new ChannelFactory<IMyWcfService>(basicHttpBinding, null);
factory.Credentials.UserName.UserName = "foo";
factory.Credentials.UserName.Password = "bar";

IMyWcfService myWcfService = factory.CreateChannel(new EndpointAddress(new Uri(endpointUri)));
ICommunicationObject communicationObject = myWcfService as ICommunicationObject;
if (communicationObject != null)
    communicationObject.Open();

try
{
    return await myWcfService.GetValueAsync();
}
finally
{
    if (communicationObject != null)
        communicationObject.Close();
    factory.Close();
}
This works when I call it from a unit test, so I know the code runs fine on Windows 10. However, when I debug in the Functions host, I get missing-dependency exceptions. I copied two DLLs, System.Private.Runtime and System.Runtime.WindowsRuntime, into the bin directory, but have hit a dead end with the exception
Could not load type 'System.Threading.Tasks.AsyncCausalityTracer' from assembly 'mscorlib'
This seems like a runtime rabbit hole that I don't want to go down. Has anyone successfully called a WCF service from an Azure Function? Which dependencies should I add to get the call to work?

Web browser with no title / windows bars

I've been looking into this for a while now, as I have created a client that I would love to run in a separate window (similar in design to the Blizzard launcher or the old Ijji Reactor). I was wondering if this is possible. Last week I created a web browser within Visual Basic, but I was not happy with the final result, as the bars were still stationed around the window. Any helpful tips or advice would be appreciated!
You didn't specify a language, so you get it in C#. This might work; it starts Chrome in app mode. Here is the argument list:
http://peter.sh/experiments/chromium-command-line-switches/
url = "--app=http://google.com";
Process[] pname = Process.GetProcessesByName("chrome");
if (pname.Length == 0)
{
chrome = false;
}
else // if chrome is running
{
if (!chrome)
{
Process process = new Process();
process.StartInfo.FileName = "chrome";
process.StartInfo.Arguments = url;
process.StartInfo.WindowStyle = ProcessWindowStyle.Normal;
process.Start();
//Process.Start("chrome", url);
}
chrome = true;
}

Issues for taking screenshots for some urls using WebBrowser control (WinForms)

My code (below) uses a foreach loop over multiple URLs and saves a screenshot of each one. The code works perfectly in my dev environment (Visual Studio 2010, Windows 7 64-bit): I can take a screenshot of each URL and save it in a folder. But when I deploy the same code to a server (Windows Server 2003, IIS 6.0), it does not save screenshots for some URLs. I captured the logs and they show System.Threading.ThreadAbortException: Thread was being aborted. at System.Threading.Thread.JoinInternal().
Does this work with IIS 6.0, or are there other settings that need to change on the deployment server? It works perfectly in my local environment. Any help is appreciated.
public void Capture(string url, string path)
{
    _path = path;
    var thread = new Thread(() =>
    {
        using (var browser = new WebBrowser())
        {
            browser.ScrollBarsEnabled = false;
            browser.AllowNavigation = true;
            browser.Navigate(url);
            browser.Width = 1600;
            browser.Height = 1600;
            browser.ScriptErrorsSuppressed = true;
            browser.DocumentCompleted += DocumentCompleted;
            while (browser.ReadyState != WebBrowserReadyState.Complete)
            {
                Application.DoEvents();
            }
        }
    });
    thread.SetApartmentState(ApartmentState.STA);
    thread.Start();
    thread.Join();
}
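The DocumentCompleted handler isn't shown in the question; a typical implementation renders the page into a bitmap and saves it to the path captured earlier, along these lines (a sketch only, assuming _path is the field set in Capture):

private void DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    var browser = (WebBrowser)sender;

    // Render the current page into a bitmap and save it to disk.
    using (var bitmap = new Bitmap(browser.Width, browser.Height))
    {
        browser.DrawToBitmap(bitmap, new Rectangle(0, 0, browser.Width, browser.Height));
        bitmap.Save(_path, System.Drawing.Imaging.ImageFormat.Png);
    }
}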

How can I cleanup files & directories in Azure Local Storage after a file begins streaming to the browser?

BACKGROUND: I'm making use of Azure Local Storage, which is supposed to be treated as "volatile" storage. First of all, how long do the files and directories that I create persist on the Web Role instances (there are 2, in my case)? Do I need to worry about running out of storage if I don't clean up those files/directories after each user is done with them? What I'm doing is pulling multiple files from a separate service, storing them in Azure Local Storage, compressing them into a zip file, storing that zip file, and then finally streaming that zip file to the browser.
THE PROBLEM: This all works beautifully except for one minor hiccup. The file seems to stream to the browser asynchronously, so an exception gets thrown when I try to delete the zipped file from Azure Local Storage afterward, since it is still in the process of streaming to the browser. What would be the best approach to forcing the deletion to happen AFTER the file has completely streamed to the browser?
Here is my code:
using (Service.Company.ServiceProvider CONNECT = new eZ.Service.CompanyConnect.ServiceProvider())
{
    // Iterate through all of the files chosen
    foreach (Uri fileId in fileIds)
    {
        // Get the int file id value from the uri
        System.Text.RegularExpressions.Regex rex = new System.Text.RegularExpressions.Regex(@"e[B|b]://[^\/]*/\d*/(\d*)");
        string id_str = rex.Match(fileId.ToString()).Groups[1].Value;
        int id = int.Parse(id_str);

        // Get the file object from eB service from the file id passed in
        eZ.Data.File f = new eZ.Data.File(CONNECT.eZSession, id);
        f.Retrieve("Header; Repositories");
        string _fileName = f.Name;

        try
        {
            using (MemoryStream stream = new MemoryStream())
            {
                f.ContentData = new eZ.ContentData.File(f, stream);

                // After the ContentData is created, hook into the event
                f.ContentData.TransferProgressed += (sender, e) => { Console.WriteLine(e.Percentage); };

                // Now do the transfer, the event will fire as blocks of data are read
                int bytesRead;
                f.ContentData.OpenRead();

                // Open the Azure Local Storage file stream
                using (azure_file_stream = File.OpenWrite(curr_user_path + _fileName))
                {
                    while ((bytesRead = f.ContentData.Read()) > 0)
                    {
                        // Write the chunk to azure local storage
                        byte[] buffer = stream.GetBuffer();
                        azure_file_stream.Write(buffer, 0, bytesRead);
                        stream.Position = 0;
                    }
                }
            }
        }
        catch (Exception e)
        {
            throw e;
            //Console.WriteLine("The following error occurred: " + e);
        }
        finally
        {
            f.ContentData.Close();
        }
    } // end of foreach block
} // end of eB using block

string sevenZipDllPath = Path.Combine(Utilities.GetCurrentAssemblyPath(), "7z.dll");
Global.logger.Info(string.Format("sevenZipDllPath: {0}", sevenZipDllPath));
SevenZipCompressor.SetLibraryPath(sevenZipDllPath);

var compressor = new SevenZipCompressor
{
    ArchiveFormat = OutArchiveFormat.Zip,
    CompressionLevel = CompressionLevel.Fast
};

// Compress the user directory
compressor.CompressDirectory(webRoleAzureStorage.RootPath + curr_user_directory, curr_user_package_path + "Package.zip");

// stream Package.zip to the browser
httpResponse.BufferOutput = false;
httpResponse.ContentType = Utilities.GetMIMEType("BigStuff3.mp4");
httpResponse.AppendHeader("content-disposition", "attachment; filename=Package.zip");
azure_file_stream = File.OpenRead(curr_user_package_path + "Package.zip");
azure_file_stream.CopyTo(httpResponse.OutputStream);
httpResponse.End();

// Azure Local Storage cleanup
foreach (FileInfo file in user_directory.GetFiles())
{
    file.Delete();
}
foreach (FileInfo file in package_directory.GetFiles())
{
    file.Delete();
}
user_directory.Delete();
package_directory.Delete();
}
Can you simply run a job on the machine that cleans up files, say, a day after their creation? This could be as simple as a batch file in the Task Scheduler or a separate thread started from WebRole.cs.
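A rough sketch of such a cleanup thread started from WebRole.cs (localStoragePath is a placeholder for the local resource root, and the one-day retention and hourly scan are arbitrary choices):

// Background thread that periodically deletes local-storage files older than a day.
var cleanupThread = new Thread(() =>
{
    while (true)
    {
        foreach (var file in new DirectoryInfo(localStoragePath).GetFiles("*", SearchOption.AllDirectories))
        {
            if (file.CreationTimeUtc < DateTime.UtcNow.AddDays(-1))
            {
                try { file.Delete(); } catch (IOException) { /* file still in use; retry next pass */ }
            }
        }
        Thread.Sleep(TimeSpan.FromHours(1));
    }
});
cleanupThread.IsBackground = true;
cleanupThread.Start();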
You can even use AzureWatch to automatically re-image your instance if the local disk space drops below a certain threshold.
Could you place the files (especially the final compressed one that users download) in Windows Azure blob storage? The file could be made public, or you could create a Shared Access Signature so that only the people you give the URL to could download it. Placing the files in blob storage for download could also relieve some pressure on the web server.
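If you go the blob route, a Shared Access Signature can be generated with the classic storage client along these lines (a sketch; container is assumed to be a CloudBlobContainer reference, and the one-hour expiry is an arbitrary choice):

// Build a time-limited, read-only URL for the package instead of streaming it
// through the web role.
CloudBlockBlob packageBlob = container.GetBlockBlobReference("Package.zip");
string sasToken = packageBlob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
});
string downloadUrl = packageBlob.Uri + sasToken;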
