Azure DNS issues after destroying/recreating VM - azure

Like many people, we have Azure VMs that we want to destroy when not in use so that we don't have to pay for their core usage. All of the VMs in question are on the same domain and the DC/DNS server is never destroyed/recreated and has a static IP. After successfully using a combination of Export/Remove/Import-AzureVM, however, all of the IP settings for the network adapter (DNS is my primary concern) are gone because a new network adapter is created each time I reconstruct the VM using Import-AzureVM.
I initially tried using NETSH to set my DNS entry at startup, but it depends on knowing the name of the adapter and the adapter name changes daily (since we're taking the machines down for the evening and recreating them in the morning). My next not-so-brilliant idea was to include a VBScript that renamed the adapter to the same name on startup so that NETSH would always have the same adapter name to deal with. However, it was at that point that I discovered that all of the old adapters still exist, but are simply hidden and not in use, rendering my plan moot.
Here are the test NETSH command and VBScript I was attempting to use, just for the sake of reference:
'This script was modified from one I got from the Scripting Guys
Const NETWORK_CONNECTIONS = &H31&

Set objShell = CreateObject("Shell.Application")
Set objFolder = objShell.Namespace(NETWORK_CONNECTIONS)
Set colItems = objFolder.Items

For Each objItem In colItems
    'Only one adapter is ever returned by this query, but it didn't seem like
    'a bad idea to leave the loop alone just in case
    objItem.Name = "testlan"
    WScript.Echo objItem.Name
Next
And the NETSH command:
netsh interface ip add dns name="testlan" 10.0.0.4
I know I can't be the only person trying to solve this issue, but I've been unable to find the solution through a significant amount of Googling and trial and error on my part. Many thanks!
Ben

@Nathan's comment is incorrect. When a VM is "Stopped" it is still being billed. If it is "Stopped (Deallocated)", however, then the billing stops. From Azure's Pricing Details FAQ:
To ensure that you are not billed, stop the VM from the management
portal. You can also stop the VM through Powershell by calling
ShutdownRoleOperation with 'PostShutdownAction' equal to
"StoppedDeallocated". However, you will continue to be billed if you
shut down a VM from inside (e.g. using power options in Windows) or
through PowerShell by calling ShutdownRoleOperation with
'PostShutdownAction' equal to "Stopped".
Instead of destroying the VM, you can get to the deallocated state using the Azure management portal, or use the Azure PowerShell cmdlets to force-stop the VM. This will deallocate it and you won't have the networking problems. Unfortunately this currently can't be done with the REST API.
I use the following in an app to stop the service:
RunPowerShellScript(@"Stop-AzureVM -ServiceName " + cloudServiceName + " -Name " + vmName + " -Force");
Use that line on a button click, or use the REST API to query your cloud services and then call the following function to run your PowerShell. Be sure to run through the Azure PowerShell getting-started steps first.
// Requires a reference to System.Management.Automation and the following usings:
// using System.Collections.ObjectModel;
// using System.Management.Automation;
// using System.Management.Automation.Runspaces;
// using System.Text;
private string RunPowerShellScript(string scriptText)
{
    // create a PowerShell runspace and open it
    Runspace runspace = RunspaceFactory.CreateRunspace();
    runspace.Open();

    // create a pipeline and feed it the script text
    Pipeline pipeline = runspace.CreatePipeline();
    pipeline.Commands.AddScript(scriptText);

    // add an extra command to transform the script output objects into
    // nicely formatted strings; remove this line to get the actual objects
    // that the script returns (for example, "Get-Process" returns a
    // collection of System.Diagnostics.Process instances)
    pipeline.Commands.Add("Out-String");

    // execute the script
    Collection<PSObject> results = pipeline.Invoke();

    // close the runspace
    runspace.Close();

    // convert the script result into a single string
    StringBuilder stringBuilder = new StringBuilder();
    foreach (PSObject obj in results)
    {
        stringBuilder.AppendLine(obj.ToString());
    }
    return stringBuilder.ToString();
}
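For context, here is a minimal sketch of how the helper might be wired up end to end. This is not the poster's actual app code: the service and VM names are placeholders, and it assumes the classic Azure PowerShell module is installed and your subscription has already been imported.
// Hypothetical usage sketch; cloudServiceName and vmName are placeholders.
string cloudServiceName = "myCloudService";
string vmName = "myVm";

// Force-stop the VM so it lands in the Stopped (Deallocated) state.
RunPowerShellScript(@"Stop-AzureVM -ServiceName " + cloudServiceName +
                    " -Name " + vmName + " -Force");

// Optionally confirm the state afterwards.
string status = RunPowerShellScript(@"(Get-AzureVM -ServiceName " + cloudServiceName +
                                    " -Name " + vmName + ").InstanceStatus");
Console.WriteLine(status); // should report something like "StoppedDeallocated" once deallocation completes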

Try this...
Set-ExecutionPolicy Unrestricted
$wmi = Get-WmiObject win32_networkadapterconfiguration -filter "ipenabled = 'true'"
$wmi.SetDNSServerSearchOrder("10.0.2.6")
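If you would rather do the same thing from a .NET startup task instead of PowerShell, a rough C# equivalent using System.Management might look like the sketch below. The DNS address is a placeholder, and filtering on IPEnabled sidesteps the changing adapter names.
using System.Management;

// Sketch: set the DNS server search order on every IP-enabled adapter,
// mirroring the Win32_NetworkAdapterConfiguration call above.
var searcher = new ManagementObjectSearcher(
    "SELECT * FROM Win32_NetworkAdapterConfiguration WHERE IPEnabled = TRUE");

foreach (ManagementObject adapter in searcher.Get())
{
    ManagementBaseObject args = adapter.GetMethodParameters("SetDNSServerSearchOrder");
    args["DNSServerSearchOrder"] = new[] { "10.0.2.6" }; // placeholder DNS server IP
    adapter.InvokeMethod("SetDNSServerSearchOrder", args, null);
}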

Related

Serilog not working in Service Fabric

I am using Serilog to write to a file and am trying to get more information about an error that is occurring in my production cluster...
In my local dev cluster the log files are created fine, but they are not created on the VMs in my production cluster. I think this may be security related.
Has anyone ever had this?
My production cluster has 5 nodes, each with a Windows 2016 VM.
Even stranger is that this works on a single-node cluster in Azure:
public static ILogger ConfigureLogging(string appName, string appVersion)
{
    AppDomain.CurrentDomain.ProcessExit += (sender, args) => Log.CloseAndFlush();

    var configPackage = FabricRuntime.GetActivationContext().GetConfigurationPackageObject("Config");
    var environmentName = configPackage.GetSetting("appSettings", "Inspired.TradingPlatform:EnvironmentName");

    var loggerConfiguration = new LoggerConfiguration()
        .WriteTo.File(@"D:\SvcFab\applog-" + appName + ".txt", shared: true, rollingInterval: RollingInterval.Day)
        .Enrich.WithProperty("AppName", appName)
        .Enrich.WithProperty("AppVersion", appVersion)
        .Enrich.WithProperty("EnvName", environmentName);

    var log = loggerConfiguration.CreateLogger();
    log.Information("Starting {AppName} v{AppVersion} application", appName, appVersion);

    return Log.Logger = log;
}
Paul
I wouldn't recommend logging to local files in Service Fabric, since your service may be moved to another VM at any time and you won't have access to those files. Consider using a sink that writes to an external system (a database, a message bus, or a logging service like Loggly).
It is likely a permission issue. Your service might be trying to log to a folder where it does not have permission.
By default, your services run under the same user as the Fabric.exe process, which runs as NetworkService; you can find more information about this on this link.
I would not recommend this approach, for many reasons; a few of them are:
Your services might be moved around the cluster, so your files will be incomplete
You have to log on to multiple machines to find the logs
The node might be gone along with the files (scale up/down, failure, disk error)
Multiple instances on the same node might try to access the same file
and so on...
On Service Fabric, the recommended way is to use EventSource (or ETW) + EventFlow + Application Insights. They run smoothly together and bring you many features.
If you want to stay on Serilog, I would recommend Serilog + the Application Insights sink instead; it will give you more flexibility in your monitoring. Take a look at the Application Insights sink for Serilog here; a rough example follows.
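As a sketch only (the exact API shape depends on the Serilog.Sinks.ApplicationInsights package version, and the instrumentation key is a placeholder), wiring Serilog to Application Insights looks roughly like this:
// Assumes the Serilog and Serilog.Sinks.ApplicationInsights NuGet packages.
var log = new LoggerConfiguration()
    .WriteTo.ApplicationInsights("<your-instrumentation-key>", TelemetryConverter.Traces)
    .Enrich.WithProperty("AppName", appName)
    .Enrich.WithProperty("AppVersion", appVersion)
    .CreateLogger();

log.Information("Starting {AppName} v{AppVersion} application", appName, appVersion);
Log.Logger = log;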
This was actually user error! I was connecting to a different cluster of VMs than the one my service fabric was connected to! Whoops!

Azure subscription and webjob questions

So I'm trying to get a small project of mine going that I want to host on Azure. It's a web app, which works fine, and I've recently found WebJobs, which I now want to use to run a task that does data gathering and updating, which I have a console app for.
My problem is that I can't set a schedule, since it is published to the web app, which doesn't support scheduling. So I tried using the Azure WebJobs SDK and a timer, but it won't run without an AzureWebJobsStorage connection string, which I cannot get since my Azure account is a DreamSpark account and I cannot create an Azure Storage account with it.
So I was wondering if there is some way to get this WebJob to run on a timer somehow (every hour or so). Otherwise, if I just upgraded my account to "Pay-As-You-Go", would I still retain my free features, namely SQL Server?
I'm not sure if this is the right place to ask, but I tried googling for it without success.
Update: Decided to just make the console app run in an infinite loop and I'll monitor it through the portal; the code below is what I am using to make that loop.
// Uses System.Timers.Timer (add "using System.Timers;")
class Program
{
    static void Main()
    {
        var time = 1000 * 60 * 30; // 30 minutes in milliseconds
        Timer myTimer = new Timer(time);
        myTimer.Elapsed += new ElapsedEventHandler(myTimer_Elapsed);
        myTimer.Start();
        Console.ReadLine(); // keep the process alive
    }

    public static void myTimer_Elapsed(object sender, ElapsedEventArgs e)
    {
        Functions.PullAndUpdateDatabase();
    }
}
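As an aside, Console.ReadLine() may return immediately when no console input is attached (which can be the case for a continuous WebJob), so a plain loop is arguably a safer way to keep the process alive. A minimal sketch of the same idea:
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Run the update every 30 minutes; the loop itself keeps the process alive.
        while (true)
        {
            Functions.PullAndUpdateDatabase();
            Thread.Sleep(TimeSpan.FromMinutes(30));
        }
    }
}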
The simplest way to get your Web Job on a schedule is detailed in Amit Apple's blog titled "How to add a schedule to a triggered WebJob".
It's as simple as adding a JSON file called settings.job to your console application and in it describing the schedule you want as a cron expression like so:
{"schedule": "the schedule as a cron expression"}
For example, to run your job every 30 minutes you'd have this in your settings.job file:
{"schedule": "0 0,30 * * * *"}
Amit's blog also goes into details on how to write a cron expression.
Caveat: The scheduling mechanism used in this method is hosted on the instance where your web application is running. If your web application is not configured as Always On and is not in constant use it might be unloaded and the scheduler will then stop running.
To prevent this you will either need to set your web application to Always On or choose an alternative scheduling option - based on the Azure Scheduler service, as described in a blog post titled "Hooking up a scheduler job to a WebJob" written by David Ebbo.

Access Denied - get-wmiobject win32_service (Powershell)

11/13/2013 11:35:37 TRCW1 using local computer
11/13/2013 11:35:37 TRCE1 System.Management.ManagementException: Access denied
   at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
   at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
   at Microsoft.PowerShell.Commands.GetWmiObjectCommand.BeginProcessing()
Code (inside a loop of server names):
$error.Clear() # clear any prior errors, otherwise the same error may repeat over and over in the trace

if ($LocalServerName -eq $line.ServerName)
{
    # see if not using -ComputerName on the local computer avoids the "service not found" error
    Add-Content $TraceFilename "$myDate TRCW1 using local computer "
    $Service = (Get-WmiObject win32_service -Filter "name = '$($line.ServiceName)'")
}
else
{
    Add-Content $TraceFilename "$myDate TRCW2 using remote computer $($line.ServerName) not eq $LocalServerName"
    $Service = (Get-WmiObject win32_service -ComputerName $line.ServerName -Filter "name = '$($line.ServiceName)'")
}

if ($error -ne $null)
{
    Write-Host "----> $($error[0].Exception) "
    Add-Content $TraceFilename "$myDate TRCE1 $($error[0].Exception)"
}
I'm reading a CSV of server names. I finally added the exception logic, only to find I'm getting an "Access denied", and it was only happening on the local server. It seems almost backwards: the local server fails, whereas the remote servers work fine. I even changed the logic to test whether it was the local server and, if so, leave off the -ComputerName parameter on the WMI call (as shown in the code above), and I'm still getting the error.
So far, my research shows the answer may lie with
set-item trustedhosts
But my main question is whether trustedhosts is applicable to local servers, or only remote servers. Wouldn't a computer always trust itself? Does it still use remoting to talk to itself?
This server apparently was part of a cluster a long time before I got here, and now it's not. I'm also suspicious of that.
When I run interactively the script works fine, it's only when I schedule it and run it under a service account that it fails with the access denied. The Service Account is local Admin on that box.
I'm using get-wmiobject win32_service instead of get-service because it returns extra info I need to lookup the process, and date/time the service was started using another WMI call.
Running on Win 2008/R2.
Update 11/13/2013 5:27 PM:
I have just verified that the problem happens on more than one server. [I took the scripts and ran them on another server.] My CSV input includes a list of servers to monitor. The queries against servers other than my own always return results. The ones against my own server, which omit -ComputerName, fail. (I have tried with and without the -ComputerName parameter for the local server.)
Are you running the script "as administrator" (UAC)? When your token is calculated for a local connection, if UAC is enabled and you didn't run "as administrator", the local administrator security token is removed. Connecting to a different machine over the network (a) completely bypasses UAC, and (b) when the target evaluates your token, your group memberships are fully evaluated, so you get "administrator" access.
Probably unrelated, but I've just run across two 2008 R2 servers (out of 10 on my system) that reject the FIRST performance counter I collect, but only when the script runs as a scheduled task; if I run it interactively it works at least 95% of the time. I'm collecting Disk Seconds/Read and Seconds/Write, so it's the reads that don't show, for these two servers only. I flipped the order and, what do you know, the writes don't report. I just added one drive's Seconds/Transfer as a sacrificial lamb at the start of my counter list, and voila, now I don't get ACCESS DENIED on the reads and writes.
$counterlist = @("\\$server\PhysicalDisk(0*)\Avg. Disk sec/Transfer",
                 "\\$server\PhysicalDisk(*)\Avg. Disk sec/Read",
                 "\\$server\PhysicalDisk(*)\Avg. Disk sec/Write")
$counters = $counterlist | Get-Counter

How to configure Quartz.net to use an Azure SQL database to store ADOJobStore details

I am using quartz.net as a scheduler in a Microsoft Azure Web Role. I can get Quartz.net to work just fine if I use the RamDataStore. However, I want to break this into two components: the first will allow scheduling of jobs through a web interface and the second will execute the jobs through a worker role. To have this distributed processing, I will need to use an ADOJobStore.
Everything works fine with the RamDataStore but it breaks when I try to switch over to the ADOJobStore. So this leads me to believe that there is something in my properties that I'm missing. I am using Azure SQL database and while this is similar to SQL Server, there are some gotchas that sometimes cause problems.
I am using Quartz.net 2.0 (from nuGet) in VS2010, the database is Azure SQL.
When I call .GetScheduler(), I get the following exception:
{"JobStore type 'Quartz.Impl.AdoJobStore.JobStoreTX' props could not
be configured."}
with the details:
{"Could not parse property 'default.connectionString' into correct
data type: No writable property 'Default.connectionString' found"}
My connection code (including programmatically set properties):
NameValueCollection properties = new NameValueCollection();
properties["quartz.scheduler.instanceName"] = "SchedulingServer";
properties["quartz.threadPool.type"] = "Quartz.Simpl.ZeroSizeThreadPool, Quartz";
properties["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz";
properties["quartz.jobStore.tablePrefix"] = "QRTZ_";
properties["quartz.jobStore.clustered"] = "false";
properties["quartz.jobStore.driverDelegateType"] = "Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz";
properties["quartz.jobStore.dataSource"] = "default";
properties["quartz.jobStore.default.connectionString"] = "Server=tcp:serverName.database.windows.net;Database=scheduler;User ID=scheduler#serverName;Password=***;Trusted_Connection=False;Encrypt=True;";
properties["quartz.jobStore.default.provider"] = "SqlServer-20";
properties["quartz.jobStore.useProperties"] = "true";
ISchedulerFactory sf = new StdSchedulerFactory(properties);
_scheduler = sf.GetScheduler();
Any help or suggestions would be appreciated.
You have a small but subtle error in your data source property naming; it should read:
properties["quartz.dataSource.default.connectionString"] = "Server=tcp:serverName.database.windows.net;Database=scheduler;User ID=scheduler@serverName;Password=***;Trusted_Connection=False;Encrypt=True;";
There is also a connectionStringName property if you want to reference a connection string from the configuration file's connection strings section.
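For clarity, a corrected data source block would look something like the sketch below (the connection string values are placeholders). If I recall the Quartz.NET configuration scheme correctly, the provider key also belongs under quartz.dataSource.*, while the job store only references the data source by name.
// Data source settings are named quartz.dataSource.<name>.*; the job store
// only points at the data source via quartz.jobStore.dataSource.
properties["quartz.jobStore.dataSource"] = "default";
properties["quartz.dataSource.default.connectionString"] =
    "Server=tcp:serverName.database.windows.net;Database=scheduler;User ID=scheduler@serverName;" +
    "Password=***;Trusted_Connection=False;Encrypt=True;";
properties["quartz.dataSource.default.provider"] = "SqlServer-20";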

Azure: Using System.Diagnostics.PerformanceCounter

I'm aware of the Microsoft.WindowsAzure.Diagnostics performance monitoring. I'm looking for something more real-time, though, like using System.Diagnostics.PerformanceCounter.
The idea is that the real-time information will be sent in response to an AJAX request.
Using the performance counters available in Azure: http://msdn.microsoft.com/en-us/library/windowsazure/hh411520
The following code works (or at least in the Azure Compute Emulator, I haven't tried it in a deployment to Azure):
protected PerformanceCounter FDiagCPU = new PerformanceCounter("Processor", "% Processor Time", "_Total");
protected PerformanceCounter FDiagRam = new PerformanceCounter("Memory", "Available MBytes");
protected PerformanceCounter FDiagTcpConnections = new PerformanceCounter("TCPv4", "Connections Established");
Further down in the MSDN page is another counter I would like to use:
Network Interface(*)\Bytes Received/sec
I tried creating the performance counter:
protected PerformanceCounter FDiagNetSent = new PerformanceCounter("Network Interface", "Bytes Received/sec", "*");
But then I receive an exception saying that "*" isn't a valid instance name.
This also doesn't work:
protected PerformanceCounter FDiagNetSent = new PerformanceCounter("Network Interface(*)", "Bytes Received/sec");
Is using performance counters directly in Azure frowned upon?
The issue you're having here isn't related to Windows Azure, but to performance counters in general. As the name implies, Network Interface(*)\Bytes Received/sec is a performance counter for a specific network interface.
To create the performance counter, you'll need to initialize it with the name of the instance (the network interface) you want to get the metrics from:
var counter = new PerformanceCounter("Network Interface",
"Bytes Received/sec", "Intel[R] WiFi Link 1000 BGN");
As you can see from the code, I'm specifying the name of the network interface. In Windows Azure you don't control the server configuration (the hardware, the Hyper-V virtual network card, ...), so I wouldn't advise hard-coding the name of the network interface.
That's why it might be safer to enumerate the instance names to initialize the counter(s):
var category = new PerformanceCounterCategory("Network Interface");
foreach (var instance in category.GetInstanceNames())
{
    var counter = new PerformanceCounter("Network Interface",
        "Bytes Received/sec", instance);
    ...
}
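One usage note: rate counters like this return 0 from the first NextValue() call, so you normally prime them, wait a sampling interval, and then read. A minimal sketch (not Azure-specific; the one-second interval is arbitrary) that sums the rate across all interfaces:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;

// Prime each counter, wait one sampling interval, then read the real values.
var category = new PerformanceCounterCategory("Network Interface");
var counters = new List<PerformanceCounter>();
foreach (var instance in category.GetInstanceNames())
{
    counters.Add(new PerformanceCounter("Network Interface", "Bytes Received/sec", instance));
}

foreach (var counter in counters)
{
    counter.NextValue(); // the first sample of a rate counter is always 0
}
Thread.Sleep(1000);      // arbitrary sampling interval

float totalBytesReceivedPerSec = 0;
foreach (var counter in counters)
{
    totalBytesReceivedPerSec += counter.NextValue();
}
Console.WriteLine("Bytes Received/sec (all interfaces): " + totalBytesReceivedPerSec);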
