I'd like to execute a
appcmd set config /commit:WEBROOT /section:sessionState /mode:StateServer /stateConnectionString:tcpip=loopback:42424 /stateNetworkTimeout:120 /useHostingIdentity:True
command on the box belonging to an Azure Web App. The console is a "sandbox environment", so I don't necessarily expect to have enough privileges, but appcmd is not recognized as a command.
The same thing happened with the Kudu console, which looked more promising in terms of the privileges available to carry out the task, with both the CMD and the PowerShell console.
My main goal is to start IIS's State Server. How to start ASP.Net State Service in Azure mentions a "startup task", but I couldn't figure out how to do that. https://technet.microsoft.com/en-us/library/cc732412(v=ws.10).aspx mentions appcmd.
Per the link in your question: startup tasks are for web/worker roles in Cloud Services, not Web Apps (these are completely different things; web/worker role instances are Windows Server instances, not a sandboxed environment).
You cannot enable IIS's State Server on Web Apps. You'll need to store your session state in something like the Redis Cache service, which runs independently of Azure Web Apps. Really, you can use any cache (or storage) you want that's external to the Web App sandbox, as long as you have the proper drivers/providers for whatever you choose.
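For illustration, the Redis route can be wired up either through the ASP.NET Redis session state provider (configured in web.config) or by talking to the cache directly from your code. Here's a rough sketch of the direct approach using StackExchange.Redis; the endpoint, key names and values are just placeholders, not anything specific to your app:

```csharp
using System;
using StackExchange.Redis;

class ExternalStateExample
{
    static void Main()
    {
        // Placeholder endpoint/access key - in practice these come from the
        // Azure Redis Cache blade and should live in app settings, not code.
        var redis = ConnectionMultiplexer.Connect(
            "yourcache.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");
        IDatabase cache = redis.GetDatabase();

        // Store per-user state outside the Web App sandbox, with an expiry
        // that plays the role of a session timeout.
        cache.StringSet("session:42:cart", "widget,gadget", TimeSpan.FromMinutes(20));

        // Any instance of the app can read it back later.
        string cart = cache.StringGet("session:42:cart");
        Console.WriteLine(cart);
    }
}
```

The point is simply that the state lives in a service outside the sandbox, so it survives app restarts and is visible to every instance.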
I am new to Azure.
I was able to use Microsoft Bot Framework Composer to publish my first chatbot to an Azure Windows web app. I also logged into the Azure portal with the Edge browser to open up the Kudu console.
I also opened up the corresponding shell. I can use the shell (either CMD or PowerShell) to navigate toward the nested wwwroot, but the nested wwwroot is hidden from the Kudu console. Why does the Azure Windows web app keep the nested wwwroot hidden from the Kudu console? In other words, the Kudu console only displays the top-level wwwroot while keeping the sub-level wwwroot hidden from being discovered.
If I have understood you correctly, it's due to the sandbox restrictions.
All Azure Web Apps run in a secure environment called a sandbox. Each app runs inside its own sandbox, isolating its execution from other instances on the same machine as well as providing an additional degree of security and privacy which would otherwise not be available.
The sandbox generally aims to restrict access to shared components of Windows.
Applications are highly restricted in terms of their access of the file system.
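To make that boundary concrete, here's a small illustrative C# sketch of where an app can read and write inside its own sandbox; the HOME environment variable points at the app's own d:\home area, and the App_Data subfolder and file name below are just placeholders:

```csharp
using System;
using System.IO;

class SandboxPaths
{
    static void Main()
    {
        // Inside an App Service sandbox, HOME resolves to the app's own d:\home.
        // Shared OS locations outside of it are restricted.
        string home = Environment.GetEnvironmentVariable("HOME") ?? @"D:\home";
        string dataDir = Path.Combine(home, "site", "wwwroot", "App_Data");

        Directory.CreateDirectory(dataDir);
        string marker = Path.Combine(dataDir, "sandbox-check.txt");
        File.WriteAllText(marker, "written at " + DateTime.UtcNow.ToString("o"));

        Console.WriteLine(File.ReadAllText(marker));
    }
}
```

Each app only ever sees its own d:\home tree; anything outside of it is hidden by the sandbox, and the Kudu console runs inside that same sandbox.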
I suggest you check these docs for more information on this:
Azure Web App sandbox
Accessing the kudu service
Operating system functionality on Azure App Service
I've written a Node app that essentially serves as a Task Scheduler (or cron) to run batch processes on set time intervals using node-schedule. When I run this program locally or on a VM, the process will run continuously and execute my jobs until the process is forcibly killed. When I deploy this app to Azure as an Azure App Service, the process is treated more as a "Web App", and after a period of inactivity on the site (ie no web traffic), Azure kills the process. If I access the "site" via a browser, it kicks it back up again.
It seems as though Azure treats the Node app as an Express-based "web app", and as far as I can tell there's no way to deploy my command-line app in a reliable manner. Am I missing something, or is there a better way to deploy this application in Azure, either via a Web App or another offering? I would really like to avoid having to maintain a VM just for this purpose.
For your immediate problem of idle timeout there is a simple configuration setting called Always On. Take a look at the link here - https://learn.microsoft.com/en-us/azure/app-service/web-sites-configure.
Always On. By default, web apps are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the app loaded all the time. If your app runs continuous WebJobs or runs WebJobs triggered using a CRON expression, you should enable Always On, or the web jobs may not run reliably.
Also look at the cost implications discussed here - Does the Azure Websites "Always On" option have any implication on price?
Now, whether App Service is the best solution for your task-scheduling problem is a more subjective and longer discussion, where you need to evaluate the multiple offerings Azure has against your requirements, priorities, etc.
Azure has its own task scheduling service - https://azure.microsoft.com/en-us/services/scheduler/
Scheduler Jobs are very simple to configure from the Azure Portal. You can:
Make calls to http/https endpoints (which implicitly gives you multiple ways to solve your problem). Authentication can be done using basic, certificate, or Azure AD OAuth client credentials.
Send messages to a Storage queue or a Service Bus queue/topic, which can then be processed appropriately by other processes (a rough sketch of the queue route follows).
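Here's that queue route sketched with the Azure.Storage.Queues SDK; the connection string variable and queue name are placeholders, and in practice the producer and consumer would be separate processes:

```csharp
using System;
using Azure.Storage.Queues;

class QueueExample
{
    static void Main()
    {
        // Placeholder connection string and queue name.
        string connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");
        var queue = new QueueClient(connectionString, "batch-jobs");
        queue.CreateIfNotExists();

        // Producer side: enqueue a message describing the work to run.
        queue.SendMessage("run-nightly-batch");

        // Consumer side (normally a separate worker process): dequeue and process.
        foreach (var message in queue.ReceiveMessages(maxMessages: 10).Value)
        {
            Console.WriteLine("Processing: " + message.MessageText);
            queue.DeleteMessage(message.MessageId, message.PopReceipt);
        }
    }
}
```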
If those Azure Scheduler capabilities aren't enough and you need something more involved, here is some guidance on the best practices documentation on background jobs - https://learn.microsoft.com/en-us/azure/architecture/best-practices/background-jobs#schedule-driven-triggers
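If you'd rather keep the scheduling inside the Web App itself (with Always On enabled), a CRON-triggered WebJob can play the role your node-schedule loop plays today. Below is a minimal, illustrative sketch assuming the WebJobs SDK 3.x with the timer extension; the function name and schedule are placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class Functions
{
    // Placeholder job: the CRON format is "second minute hour day month day-of-week",
    // so this fires every 15 minutes.
    public static void RunBatch([TimerTrigger("0 */15 * * * *")] TimerInfo timer, ILogger logger)
    {
        logger.LogInformation("Batch job executed at {time}", DateTime.UtcNow);
    }
}

public class Program
{
    public static async Task Main()
    {
        var host = new HostBuilder()
            .ConfigureWebJobs(b =>
            {
                b.AddAzureStorageCoreServices(); // the timer trigger keeps its schedule state in Storage
                b.AddTimers();
            })
            .ConfigureLogging(b => b.AddConsole())
            .Build();

        using (host)
        {
            await host.RunAsync();
        }
    }
}
```

Deployed as a continuous WebJob, and with Always On turned on, this keeps running without depending on web traffic to the site.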
(Note that I'm using the new "blade" Azure Portal exclusively and use the new terminology, so please avoid terms like "Azure Website", as they do not apply here).
In the Portal I created two Azure App Services, "foo-production" and "foo-staging" - both exist in the same Subscription and Resource Group, and share the same App Service Plan. These App Services represent the production and staging deployments of a straightforward ASP.NET web application, which runs as a normal website.
The App Service Plan is "Basic: 1 Small".
My understanding is that when you use Azure App Services with a Basic or higher App Service Plan, the Plan represents a VM on which I'm able to host as many IIS websites as I want - these IIS websites are represented in Azure as Azure App Services.
Given this, one would assume when I access the filesystem of the VM in Kudu ( https://yourwebsite.scm.azurewebsites.net/DebugConsole ) that I would be able to see each website's files under some common root directory.
However when I access the Kudu console for the foo-production website, I see that its files are in D:\home\site\wwwroot and files for foo-staging are not to be found.
If I'm understanding this correctly, it means that Azure actually created a whole new VM just for each website, that websites cannot share a filesystem, and that I cannot have a more advanced Azure-managed IIS configuration - I'd have to create my own self-managed Windows Server VM.
I can understand the motivation behind a separate VM for each website; it just seems wasteful - Windows Server requires at least a gigabyte of memory for each VM, yet my website is largely just static files (and I can't use a Shared App Service Plan because I need some of the more advanced functionality). That can't be economical for Microsoft, then.
How can I have multiple Azure App Services in an Azure-managed environment share the same VM? Or am I thinking about it incorrectly?
To avoid an X/Y problem: I'll state that my primary concern is the persistence of files. The web-application I'm deploying stores uploaded files to a subdirectory of the webroot and those files should be there permanently. There is ambiguous information out there: some people suggest websites (and all their files) are actively destroyed and recycled and that Azure Storage Blobs should be used. I would like to use Azure File Shares, unfortunately I get ACCESS_DENIED errors when using WNetAddConnection2 and some users report that Azure File Shares cannot be used from within Azure App Services - though I cannot find anything authoritative from Microsoft about this.
If they are in the same App Service Plan, they are running in the same VM. Try typing hostname in Kudu Console for each and you'll see the same machine name.
But note that they each run in a different sandbox, which prevents them from seeing each other's files. Folders like d:\home are virtualized and actually point to network shares, so you can't use that to draw conclusions as to whether the machines are the same.
As I answered here, all app services within a plan run in the same set of VMs, sharing all compute resources.
You assumed each app service within a plan shares files with all the other app services. This is incorrect: each app service has its own set of files under its own d:\home. If you need to share files, you'll need to use something external to App Services, like the Azure File service (an SMB share). The Azure File service is separate from the space created for you on a per-app-service basis.
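If it helps, here's a rough sketch of sharing files through Azure Files with the Azure.Storage.Files.Shares SDK rather than mapping a drive; the share name, folder and file below are placeholders, and because the SDK talks to the service over HTTPS it doesn't depend on SMB working from inside the sandbox (which is where your WNetAddConnection2 attempt ran into trouble):

```csharp
using System;
using System.IO;
using System.Text;
using Azure;
using Azure.Storage.Files.Shares;

class SharedFilesExample
{
    static void Main()
    {
        // Placeholder storage connection string and share name.
        string connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");
        var share = new ShareClient(connectionString, "shared-uploads");
        share.CreateIfNotExists();

        var directory = share.GetDirectoryClient("uploads");
        directory.CreateIfNotExists();

        // Write a file from one app service...
        var file = directory.GetFileClient("report.txt");
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes("hello from foo-production")))
        {
            file.Create(stream.Length);
            file.UploadRange(new HttpRange(0, stream.Length), stream);
        }

        // ...and any other app service with the same connection string can read it back.
        using (var download = file.Download().Value.Content)
        using (var reader = new StreamReader(download))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```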
An Azure "App Service" is analogous to a "Container" (Docker terminology). Although it's based on a VM, it's much lighter weight than a VM itself. For example, you cannot RDP into it.
An Azure "VM" is a full-fledged virtual machine. The OS can be Windows or any of several different flavors of Linux.
You can get more information here:
Azure App Service, Cloud Services, Virtual Machines, and Service Fabric comparison
Here is an excellent article that compares Web Sites (one example of an App Service), Cloud Services, and VMs:
http://www.c-sharpcorner.com/UploadFile/42ddd2/azure-websites-vs-cloud-service-vs-virtual-machines/
Azure Websites
Azure Websites gives you very little responsibility to carry and relatively little control. It is the best choice for most web apps; deployment and management are integrated directly into the platform.
Azure Cloud Services
If you want a more web-server-like environment, you might want to go with Azure Cloud Services. You can remote into your cloud service instances and configure startup tasks. Cloud Services provide you more ease of management and agility than Azure Websites.
Azure Virtual Machines
Virtual Machines provide a rich set of features; however, correctly configuring, securing and maintaining VMs requires much more time and more IT expertise compared to Azure Cloud Services and Azure Websites.
I have several web and worker roles in my solution, but I also have a non-Azure application running on a Azure hosted VM. That application connects to Azure storage for various things like reading and writing blobs and queues, and that works fine.
I'd like to use Azure diagnostics from within that same application (a .NET app running on a VM hosted in Azure). However, if I try to initialize diagnostics I get an exception that:
System.InvalidOperationException: Not running in a hosted service or the Development Fabric.
This makes sense, but I'm wondering if it's possible to use the diagnostics in some way without being a hosted service. In particular, I'm using Azure diagnostics to gather logging information written out by System.Diagnostics.Trace, and that's all hidden away from the application code, so if there were some other API I'd have a place where I could probably slot it in.
Any ideas?
Thanks,
JC
Unfortunately, no. At least not today. The agent has some hard-coded checks for the RoleEnvironment stuff, and when it is not there, it fails. This is also why you cannot use the agent on the IaaS stuff today either.
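As a workaround for the logging part, since your tracing already goes through System.Diagnostics.Trace, you can guard the diagnostics initialization and fall back to an ordinary trace listener when the role environment isn't there. A rough sketch, assuming the classic Microsoft.WindowsAzure.Diagnostics/ServiceRuntime assemblies; the log file path is a placeholder:

```csharp
using System.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

static class TraceSetup
{
    public static void Initialize()
    {
        if (RoleEnvironment.IsAvailable)
        {
            // Running as a hosted service: start the diagnostic monitor as usual.
            var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
            DiagnosticMonitor.Start(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
        }
        else
        {
            // Plain VM: no agent available, so send Trace output to a local file
            // (placeholder path) instead.
            Trace.Listeners.Add(new TextWriterTraceListener(@"D:\logs\app-trace.log"));
            Trace.AutoFlush = true;
        }

        Trace.TraceInformation("Tracing initialized.");
    }
}
```

It doesn't get you the diagnostics agent on the VM, but it keeps the rest of your code writing to Trace unchanged.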
Hi, I have created a new Azure project with a worker role and have noticed a class WorkerRole.cs. I tried researching what its purpose is, but I couldn't find a straight answer.
The goal of the WorkerRole.cs is to give you an entry point to execute "something" when the Worker Role instance starts. This could be a command line application you're starting, or a WCF service, or MySQL, ... anything you want.
Note that you'll need to write code to keep the instance 'alive'. If you don't do this, it will restart when the Run method completes. Take a look at the following question for more information: Using Thread.Sleep or Timer in Azure worker role in .NET?
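For reference, here's a minimal sketch of what keeping the instance alive looks like in WorkerRole.cs; the classic template simply loops inside Run(), and the work and sleep interval here are placeholders:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // One-time initialization before Run() is called.
        return base.OnStart();
    }

    public override void Run()
    {
        // If Run() ever returns, the role instance is restarted,
        // so keep looping for as long as the role should stay up.
        while (true)
        {
            Trace.TraceInformation("Doing background work...");
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}
```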
Here is something you can use to understand Windows Azure Worker Role:
1. Windows Azure is Platform as a Service, and you get to run your application in a dedicated virtual machine (except with the extra-small instance).
2. The application architecture for a Windows Azure application supports 3 different kinds of applications, called Web Role, Worker Role and VM Role:
2.1. A Web Role is an application for which IIS is pre-configured and ready. In most cases it is a web-based application, but it doesn't have to be; IIS will always be there. With IIS you can run either an ASP.NET application or a node.js application - it is your choice what kind of application you want.
2.2. A Worker Role is an application which does not need IIS, and it is up to you what you run on it: C#, Java, PHP, Python or anything else. It is mainly used for open-source web applications, or for applications that do back-end work and do not need a web front end.
2.3. VM Role is in BETA and is used to run a custom VHD deployed by the user. It is not considered in the explanation below.
3. All of these roles are actually libraries, meaning they are compiled as DLLs, and when they run on Windows Azure they need a host process. For a Web Role the host process is WaWebHost.exe or WaIISHost.exe; for a Worker Role the host process is WaWorkerHost.exe.
4. When these host processes start in Windows Azure, they look for a file called E:\__entrypoint.txt which provides the role DLL name and location so the host process can find and load it.
5. The Web and Worker Role classes derive from the RoleEntryPoint base class, which exposes all of the functions required for a web or worker role to run in the Windows Azure environment.
6. When you create a web or worker role using an Azure SDK template, you get a base code file where the Web or Worker Role class can implement the required functions. For a Worker Role the class WorkerRole is defined in WorkerRole.cs, and for a Web Role it is WebRole.cs.
7. If you need to add code specific to the Windows Azure runtime, i.e. configuration or some setting, you add it here, because when the role starts via the host process, the code you have added in WebRole.cs or WorkerRole.cs executes in the Windows Azure runtime context.
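To make that last point concrete, here's a small illustrative sketch of runtime-specific code in WorkerRole.cs, reading a setting from the service configuration in OnStart(); the setting name "QueueName" is purely hypothetical:

```csharp
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    private string queueName;

    public override bool OnStart()
    {
        // This executes in the Windows Azure runtime context (inside the host
        // process), so the ServiceRuntime configuration API is available.
        // "QueueName" is a hypothetical setting from the service configuration (.cscfg).
        queueName = RoleEnvironment.GetConfigurationSettingValue("QueueName");
        Trace.TraceInformation("Worker configured to use queue: " + queueName);

        return base.OnStart();
    }
}
```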