I have published a cloud service with my worker role. It is meant to poll for messages from a queue and upload files to blob storage. The messages are being sent to the queue, but I cannot see that the files are being uploaded. From the portal, I can see that the worker role is live. I don't know what I am missing. Am I supposed to add something to my code so that it runs automatically? When run on the virtual machine, the code seems to work fine: it polls for messages and uploads files. Furthermore, I am not sure how to debug the service once it is deployed, and I am open to any suggestions.
I am using Python to develop the whole service.
This is my code:
# message_bus_service is an azure.servicebus.ServiceBusService created elsewhere
from time import sleep

if __name__ == '__main__':
    while True:
        message = message_bus_service.receive_subscription_message('jsonpayload-topic', 'sMessage')
        guid = message.body
        try:
            message.delete()
        except:
            # delete() raises when no message was received
            print "No messages"
        # bunch of code that does things with the guid and uploads
        sleep(10)
This is in the .csdef file:
<Runtime>
  <Environment>
    <Variable name="EMULATED">
      <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
    </Variable>
  </Environment>
  <EntryPoint>
    <ProgramEntryPoint commandLine="bin\ps.cmd LaunchWorker.ps1" setReadyOnProcessStart="true" />
  </EntryPoint>
</Runtime>
As you can see, setReadyOnProcessStart is set to "true".
There's an example of configuring automatic start here that you could check through: http://www.dexterposh.com/2015/07/powershell-azure-custom-settings.html
Also, have you considered configuring remote access so you can log on and troubleshoot directly (i.e. check that your code is actually running, etc.)?
Configuring Remote Desktop for Worker role in the new portal
I'm relatively new to Azure development and need some help overcoming the following predicament:
I have an executable that I need to run as part of my Azure service startup. The executable needs access to one of the service's application settings.
So I added the following to my csdef (the batch script just runs the executable with output redirected to a file):
<Startup>
  <Task commandLine="StartupTask.cmd" executionContext="elevated" taskType="background">
    <Environment>
      <Variable name="Var">
        <RoleInstanceValue
          xpath="/RoleEnvironment/CurrentInstance/ConfigurationSettings/ConfigurationSetting[@name='SomeAppSetting']/@value" />
      </Variable>
    </Environment>
  </Task>
</Startup>
Adding the task caused the deployment to fail. After much hair tearing, I realized it was because the SomeAppSetting value was too long (see http://blogs.msdn.com/b/cie/archive/2013/07/30/windows-azure-role-recycling-due-to-setting-more-than-256-character-in-environmental-variable-through-azure-start-up-task.aspx), and now I'm at a loss as to what to do.
Are the following possible:
1. Accessing the role environment from inside the executable somehow?
2. Passing the setting value to the script as a parameter?
Thanks in advance for any tips!
One option would be to move the app setting from the service configuration to blob storage, from where it is accessible to both the startup task and the running service.
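For illustration, here is a minimal sketch of the startup executable reading the value from blob storage. The "settings" container, the "SomeAppSetting.txt" blob, and the StorageConnectionString environment variable are all hypothetical names, and the WindowsAzure.Storage client library is assumed:

using System;
using Microsoft.WindowsAzure.Storage;

class StartupSettings
{
    static void Main()
    {
        // The connection string is short enough to survive the 256-character
        // environment variable limit that broke the original approach.
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("StorageConnectionString"));

        // Hypothetical container/blob names; the deployment process would
        // upload the long setting value here instead of into the .cscfg.
        var blob = account.CreateCloudBlobClient()
            .GetContainerReference("settings")
            .GetBlockBlobReference("SomeAppSetting.txt");

        string someAppSetting = blob.DownloadText();
        Console.WriteLine(someAppSetting);
    }
}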
You can load the RoleEnvironment information in a PowerShell script (launched from the startup task), which will let you access your ServiceConfiguration settings:
[Reflection.Assembly]::LoadWithPartialName("Microsoft.WindowsAzure.ServiceRuntime")
$mySetting = [Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::GetConfigurationSettingValue("MySetting")
if($mySetting -eq "True"){ .....}
In my ServiceConfiguration (.cscfg) I have a setting called MySetting which is True/False.
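The same call is also available from inside the executable itself (the asker's option 1), which sidesteps the environment variable length limit entirely; a minimal C# sketch, referencing the Microsoft.WindowsAzure.ServiceRuntime assembly:

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

class Program
{
    static void Main()
    {
        // Guard against running outside a role instance (e.g. locally).
        if (RoleEnvironment.IsAvailable)
        {
            string value =
                RoleEnvironment.GetConfigurationSettingValue("SomeAppSetting");
            Console.WriteLine(value);
        }
    }
}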
I just started checking out Windows Azure, and I am having trouble getting any access logs from IIS for my test web role. The web role itself works fine, but I would like to see a log of accesses (both successful and failed).
As far as I can see, the default configuration files for a web role contain instructions to send those logs to a blob container named "wad-iis-logfiles", but that container is never even created (it doesn't exist in my blob storage).
My diagnostics.wadcfg for the web role currently is:
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
  <DiagnosticInfrastructureLogs bufferQuotaInMB="512" scheduledTransferPeriod="PT5M" />
  <Directories bufferQuotaInMB="512" scheduledTransferPeriod="PT5M">
    <IISLogs container="wad-iis-logfiles" />
    <CrashDumps container="wad-crash-dumps" />
  </Directories>
  <Logs bufferQuotaInMB="512" scheduledTransferPeriod="PT5M" scheduledTransferLogLevelFilter="Information" />
  <PerformanceCounters bufferQuotaInMB="512">
    (... snip...)
  </PerformanceCounters>
  <WindowsEventLog bufferQuotaInMB="512" scheduledTransferPeriod="PT1M" scheduledTransferLogLevelFilter="Error">
    <DataSource name="Application!*" />
  </WindowsEventLog>
</DiagnosticMonitorConfiguration>
Question 1: is this configuration file correct?
Question 2: are there other things that need to be set before I can get the IIS log files?
With the help of the commenters I was able to solve the issue.
There are several interacting things causing the issue.
As commenter @kwill mentioned, an existing configuration blob in wad-control-container overrides any other configuration, and that configuration is not replaced during an in-place update. I was using in-place updates to put my modified diagnostics.wadcfg in place, which explains why my attempts to change settings that way didn't work. Note that editing the properties of the web test role (in the "Roles" branch of the Azure Cloud Services project) works by editing that same file, so that didn't work either. More information on how wad-control-container overrides settings can be found at http://msdn.microsoft.com/en-us/library/windowsazure/dn205146.aspx .
The reason that blob already existed may have been that I had been changing some other performance measurement settings in the Azure management window earlier.
I managed to "fix" the situation by editing the blob found in wad-control-container for my instance, using the tool mentioned by commenter @Gaurav Mantri, "Azure Explorer". As mentioned, without that tool you can download the blob and edit it, but you can never put it back properly, since the '/' characters in the blob's name get translated to '%2F', and those are not translated back on upload.
Note that the XML schema is not the same as the schema for diagnostics.wadcfg, but some similarities exist. I changed the "Directories" element toward the bottom of the blob to read:
<Directories>
  <BufferQuotaInMB>512</BufferQuotaInMB>
  <ScheduledTransferPeriodInMinutes>2</ScheduledTransferPeriodInMinutes>
  <Subscriptions>
    <DirectoryConfiguration>
      <Path>C:\Resources\directory\8091b0be14e54213ac12fcbd5f9c8e1b.WebTestRole.DiagnosticStore\CrashDumps</Path>
      <Container>wad-crash-dumps</Container>
      <DirectoryQuotaInMB>0</DirectoryQuotaInMB>
    </DirectoryConfiguration>
    <DirectoryConfiguration>
      <Path>C:\Resources\directory\8091b0be14e54213ac12fcbd5f9c8e1b.WebTestRole.DiagnosticStore\LogFiles</Path>
      <Container>wad-iis-logfiles</Container>
      <DirectoryQuotaInMB>16</DirectoryQuotaInMB>
    </DirectoryConfiguration>
  </Subscriptions>
</Directories>
In the original version, the "BufferQuotaInMB" and "DirectoryQuotaInMB" fields were 0.
Note that after uploading the blob again, the effect is not immediate. It takes a while for the changed configuration to get picked up, and then another while before the IIS log files are copied for the first time.
Last note: it may be obvious, but I don't think editing that blob is a recommendable solution. It is good to know the option exists, though.
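For completeness, the same change can be made programmatically instead of by hand-editing the blob. This is only a hedged sketch against the 1.x-era Microsoft.WindowsAzure.Diagnostics.Management API; the deployment ID, role name, and instance ID are placeholders, and the exact RoleInstanceDiagnosticManager constructor overloads vary by SDK version, so verify against yours:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics.Management;

class UpdateDiagnostics
{
    static void Main()
    {
        // Placeholders: storage connection string, deployment ID,
        // role name, and role instance ID. Loop over instances to
        // update a whole role rather than a single instance.
        var manager = new RoleInstanceDiagnosticManager(
            "<storage connection string>",
            "<deploymentId>", "WebTestRole", "WebTestRole_IN_0");

        DiagnosticMonitorConfiguration config = manager.GetCurrentConfiguration();

        // The zero quotas were what suppressed the transfers.
        config.Directories.BufferQuotaInMB = 512;
        config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(2);
        foreach (DirectoryConfiguration dir in config.Directories.DataSources)
            dir.DirectoryQuotaInMB = 16;

        // Writes the updated configuration back to wad-control-container.
        manager.SetCurrentConfiguration(config);
    }
}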
I have a WCF service that is connected to a Service Bus queue, ready to receive messages. This works great, but I would like to be able to mark a message as a dead letter if I have an issue processing it. Currently, if my code throws an exception, the message still gets removed from the queue; I want to be able to specify in configuration that it should not be deleted from the queue but marked as a dead letter instead. I've done some searching and can't figure out how to do that. I am currently running the service as a Windows service.
Uri baseAddress = ServiceBusEnvironment.CreateServiceUri("sb", "namespace", "servicequeue");
_serviceHost = new ServiceHost(typeof(PaperlessImportServiceOneWay), baseAddress);
_serviceHost.Open();
config:
<services>
  <service name="Enrollment.ServiceOneWay">
    <endpoint name="ServiceOneWay"
              address="sb://namespace.servicebus.windows.net/servicequeue"
              binding="netMessagingBinding"
              bindingConfiguration="messagingBinding"
              contract="IServiceOneWaySoap"
              behaviorConfiguration="sbTokenProvider" />
  </service>
</services>
<netMessagingBinding>
  <binding name="messagingBinding" closeTimeout="00:03:00" openTimeout="00:03:00"
           receiveTimeout="00:03:00" sendTimeout="00:03:00" sessionIdleTimeout="00:01:00"
           prefetchCount="-1">
    <transportSettings batchFlushInterval="00:00:01" />
  </binding>
</netMessagingBinding>
<behavior name="sbTokenProvider">
  <transportClientEndpointBehavior>
    <tokenProvider>
      <sharedSecret issuerName="owner" issuerSecret="XXXXXXXXXXXXXXXXXXXXXXXX" />
    </tokenProvider>
  </transportClientEndpointBehavior>
</behavior>
In your interface, add this to the operation contract:
[ReceiveContextEnabled(ManualControl = true)]
Then you can commit or abandon the message manually.
Found it in this link:
http://msdn.microsoft.com/en-us/library/windowsazure/hh532034.aspx
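For reference, a minimal sketch of the service side with manual receive-context control. The contract name comes from the question's configuration; the operation signature and the timeout value are assumptions:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

public class PaperlessImportServiceOneWay : IServiceOneWaySoap
{
    public void ProcessMessage(string payload)
    {
        var properties = OperationContext.Current.IncomingMessageProperties;
        ReceiveContext receiveContext;
        if (!ReceiveContext.TryGet(properties, out receiveContext))
            throw new InvalidOperationException("No receive context available.");

        try
        {
            // ... process the message ...

            // Success: removes the message from the queue.
            receiveContext.Complete(TimeSpan.FromSeconds(10));
        }
        catch (Exception)
        {
            // Failure: leaves the message on the queue; once MaxDeliveryCount
            // is exceeded, Service Bus moves it to the dead-letter queue.
            receiveContext.Abandon(TimeSpan.FromSeconds(10));
            throw;
        }
    }
}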
I tried the new Azure preview that came with the new SDK on my computer.
I set up a worker role with the Caching Preview and configured a co-located cache with 30% cache size.
On my controller I put this code:
[OutputCache(Duration=int.MaxValue, VaryByParam="none")]
public ActionResult Index()
{
    ViewBag.Message = "Welcome to ASP.NET MVC!";
    ViewBag.Id = Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.CurrentRoleInstance.Id;
    return View();
}
Now I ran the worker role via the emulator with 4 instances. The result was that I saw a different id every time, which means the output cache never worked across all 4 instances (to be clear, I configured the output cache to work with the Caching Preview).
Only when I added an extra cache worker role as a dedicated role did everything start to work like it should.
My questions are:
1. Do I need the extra worker role to actually make the Caching Preview work correctly? That would mean the trade-off for not working with Azure AppFabric Cache is an extra machine.
2. Did I do something wrong, or should it work with the web roles as co-located caches?
Thanks
Edit: this is another section of my web.config:
<dataCacheClients>
  <tracing sinkType="DiagnosticSink" traceLevel="Error" />
  <dataCacheClient name="default">
    <autoDiscover isEnabled="true" identifier="NugetTest" />
    <!--<localCache isEnabled="true" sync="TimeoutBased" objectCount="100000" ttlValue="300" />-->
  </dataCacheClient>
</dataCacheClients>
If my identifier is NugetTest (which is my web role, of which I have 4 instances), I get a different cache every time I switch machines. If I change the identifier to my worker role, I get the expected result.
Can you add the applicationName tag to the provider configuration in the web.config of your app? If this is not added, instances will not share the cache. Please note the applicationName tag.
This should be added to the web.config of the web role in both the dedicated and co-located cache scenarios.
Please reply if this solves your issue.
<caching>
  <outputCache defaultProvider="DistributedCache">
    <providers>
      <add name="DistributedCache"
           type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache"
           cacheName="<cacheName>"
           applicationName="<anyName>"
           dataCacheClientName="<dataCacheClientName>" />
    </providers>
  </outputCache>
</caching>
I'm unable to reproduce this issue. I always see the same instance, and I'm using Ctrl+F5 in the browser (thus ruling out the browser cache). Please make sure you've configured the output cache provider as described at http://www.windowsazure.com/en-us/develop/net/how-to-guides/cache/.
<!-- If output cache content needs to be saved in a Windows Azure
cache, add the following to web.config inside system.web. -->
<caching>
  <outputCache defaultProvider="DistributedCache">
    <providers>
      <add name="DistributedCache"
           type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache"
           cacheName="default"
           dataCacheClientName="default" />
    </providers>
  </outputCache>
</caching>
Best Regards,
Ming Xu.
I'm running a command-line program (it happens to be Redis) inside a Windows Azure worker role using ProgramEntryPoint, as follows:
<WorkerRole name="Worker" vmsize="Small">
<Runtime executionContext="limited">
<Environment>
<Variable name="ADDRESS">
<RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/Endpoints/Endpoint[#name='Redis']/#address" />
</Variable>
<Variable name="PORT">
<RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/Endpoints/Endpoint[#name='Redis']/#port" />
</Variable>
</Environment>
<EntryPoint>
<ProgramEntryPoint commandLine="redis-server.exe" setReadyOnProcessStart="true" />
</EntryPoint>
</Runtime>
<Endpoints>
<InternalEndpoint name="Redis" protocol="tcp" port="6379" />
</Endpoints>
</WorkerRole>
So far, so good (it works).
I now want to run a slave instance of the server in another worker role:
<WorkerRole name="SlaveWorker" vmsize="Small">
<Runtime executionContext="limited">
<EntryPoint>
<ProgramEntryPoint commandLine="echo slaveof %ADDRESS% %PORT% | redis-server.exe -" setReadyOnProcessStart="true" />
</EntryPoint>
</Runtime>
<Imports>
<Import moduleName="Diagnostics" />
<Import moduleName="RemoteAccess" />
</Imports>
<Endpoints>
<InternalEndpoint name="Redis" protocol="tcp" port="6379" />
</Endpoints>
</WorkerRole>
You can see that I need to tell the slave server where its master is, using an IP address and port: something I don't know until Azure has allocated the network resources for that role. I've seen @smarx do something along these lines.
However, I think there may be a couple of things wrong with this in my case:
1. I'm setting environment variables in one role and hoping to use them in another, which is not going to work.
2. Even if the right data were available, the command I need to pass to redis-server.exe, with the echo at the beginning, is not recognized as a valid entry point.
Is the only way to know the runtime IP address and port of another worker role via code, or is there a syntax I'm missing in the config file?
If I manage to get the IP and port, is the only way to make my command line work to push it into a PowerShell script or batch file?
Thanks for your thoughts.
The only way one instance will know another's IP address is if a.) it programmatically grabs it, or b.) the other instance publishes it to a well-known location (e.g. table storage). In your case, it might be easiest to have the slave role run a startup task that accesses the RoleEnvironment (via PowerShell, perhaps) and sets an environment variable with the IP address of the master. If you make this a 'simple' task type, I believe it will run (blocking) before your ProgramEntryPoint does, and you can just use the environment variable in your command line there.
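For illustration, the "programmatically grabs it" option looks roughly like this in C#, assuming the master is the "Worker" role with the "Redis" internal endpoint from the csdef above (the same RoleEnvironment call can be made from a PowerShell startup script by loading the ServiceRuntime assembly, as shown earlier on this page):

using System.Linq;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

static class MasterLocator
{
    public static IPEndPoint GetMasterEndpoint()
    {
        // Internal endpoints of all instances of the "Worker" role are
        // visible to other roles in the same deployment.
        RoleInstance master = RoleEnvironment.Roles["Worker"].Instances.First();
        return master.InstanceEndpoints["Redis"].IPEndpoint;
    }
}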
A couple of thoughts here, however:
How are you handling multiple instances within a role? Are you planning on running only a single instance?
Do you need two different roles? Why not a single role with two instances that decide via election which is the master? (A sketch of one simple election scheme follows.)
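If you go the single-role route, one simple election scheme is "lowest instance ID wins". A sketch, adequate for a static pair of instances but not a general leader-election protocol:

using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

static class Election
{
    public static bool IsMaster()
    {
        // Instance IDs are stable and ordered (e.g. ..._IN_0, ..._IN_1),
        // so the instance with the lowest ID elects itself master.
        string lowestId = RoleEnvironment.CurrentRoleInstance.Role.Instances
            .Min(i => i.Id);
        return RoleEnvironment.CurrentRoleInstance.Id == lowestId;
    }
}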