A single cloud project for each environment? - azure

Previously I have always had separate cloud projects for each environment.
This poses some problems:
Maintaining multiple ServiceDefinition.csdef files
When building to a common output path, which ServiceDefinition.csdef is copied?
I am proposing using a single cloud project with multiple ServiceConfiguration files (one per environment) and multiple publish profiles.
Pros:
Fewer maintenance issues (one project and one ServiceDefinition.csdef)
A single ServiceDefinition.csdef is copied to the output folder
The problem I have now is that all environments need to have the same instance size, as this is defined in the ServiceDefinition.csdef.
Is there any way I can get around this problem?

Yes, we create multiple packages (one for each environment). We have different PowerShell scripts to patch things like the VM size. One example:
param(
    [Parameter(Mandatory = $true)]
    [string]$fileToPatch,
    [Parameter(Mandatory = $true)]
    [string]$roleName,
    [Parameter(Mandatory = $true)]
    [ValidateSet("ExtraSmall", "Small", "Medium", "Large", "ExtraLarge", "A5", "A6", "A7")]
    [string]$vmsize = 'Small'
)
# MAIN
$xml = New-Object System.Xml.XmlDocument
$xml.Load($fileToPatch)
$namespaceMgr = New-Object System.Xml.XmlNamespaceManager $xml.NameTable
$namespace = $xml.DocumentElement.NamespaceURI
$namespaceMgr.AddNamespace("ns", $namespace)
$xpathWorkerRoles = "/ns:ServiceDefinition/ns:WorkerRole"
$xpathWebRoles = "/ns:ServiceDefinition/ns:WebRole"
$Roles = $xml.SelectNodes($xpathWebRoles, $namespaceMgr) + $xml.SelectNodes($xpathWorkerRoles, $namespaceMgr)
$Roles | Where-Object { $_.name -eq $RoleName} | % { $_.vmsize = $vmsize; Write-Host 'Patched vmsize to' $vmsize 'for' $_.name }
$xml.Save($fileToPatch)
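An invocation from a build or release step might look like this (a sketch; the script file name Patch-VmSize.ps1, the output path, and the role name are placeholders):
# Patch the copied ServiceDefinition.csdef before packaging the Test environment
.\Patch-VmSize.ps1 -fileToPatch ".\obj\Test\ServiceDefinition.csdef" -roleName "MyWebRole" -vmsize "Medium"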

Related

How to remove azure file share old data from the azure storage account?

I have 3 months of old data stored in an Azure storage account. Now I want to remove the data if it is >= 30 days old.
The following script recursively lists the files and directories in a file share and deletes the files older than 14 days. You can substitute whatever day limit you need.
Referred from a thread here.
Refer to this for the best practices before running a delete activity.
# Storage account details (placeholders)
$accountName = "<storageAccountName>"
$key = "<storageAccountKey>"
$shareName = "<shareName>"
$ctx = New-AzStorageContext -StorageAccountName $accountName -StorageAccountKey $key
$DirIndex = 0
$dirsToList = New-Object System.Collections.Generic.List[System.Object]
# Get share root Dir
$shareroot = Get-AzStorageFile -ShareName $shareName -Path . -context $ctx
$dirsToList += $shareroot
# List files recursively and remove file older than 14 days
While ($dirsToList.Count -gt $DirIndex)
{
    $dir = $dirsToList[$DirIndex]
    $DirIndex ++
    $fileListItems = $dir | Get-AzStorageFile
    $dirsListOut = $fileListItems | Where-Object { $_.GetType().Name -eq "AzureStorageFileDirectory" }
    $dirsToList += $dirsListOut
    $files = $fileListItems | Where-Object { $_.GetType().Name -eq "AzureStorageFile" }
    foreach ($file in $files)
    {
        # Fetch the attributes of each file
        $task = $file.CloudFile.FetchAttributesAsync()
        $task.Wait()
        # Remove the file if it's older than 14 days
        if ($file.CloudFile.Properties.LastModified -lt (Get-Date).AddDays(-14))
        {
            ## Print the file's last modified time
            # $file | Select-Object @{ Name = "Uri"; Expression = { $_.CloudFile.SnapshotQualifiedUri } }, @{ Name = "LastModified"; Expression = { $_.CloudFile.Properties.LastModified } }
            # Remove the file
            $file | Remove-AzStorageFile
        }
    }
    # Debug log
    # Write-Host $DirIndex $dirsToList.Count $dir.CloudFileDirectory.SnapshotQualifiedUri.ToString()
}
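To use the 30-day limit from the question instead of the hard-coded 14 days, a variable can be substituted into the age check, for example:
$dayLimit = 30   # retention period in days
# ...and in the loop above, compare against it instead of the fixed 14:
if ($file.CloudFile.Properties.LastModified -lt (Get-Date).AddDays(-$dayLimit)) { $file | Remove-AzStorageFile }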
Alternatively, a Delete activity can be configured in Azure Data Factory (ADF) by following the steps below. This requires linking your Azure storage account to ADF by giving the account name, the file share name, and a path if needed.
Deploy an ADF instance if one is not already configured, then open ADF and create a pipeline with any name.
The process is: get the metadata of the files in the selected storage account file share > loop through them > configure a Delete activity for the files inside that are older than 30 days (or some x days).
1) Search for Get Metadata in the activities pane, drag it onto the canvas, and name it.
2) Navigate to Dataset (this dataset points to the files in the storage account) and select New. Choose Azure File Storage and name it.
3) Select the format as CSV.
4) Link your account by setting up the properties and give the file path.
To get files older than or equal to 30 days, you can configure the end time as @adddays(utcnow(),-30).
Now we have to use a ForEach loop to iterate through the array of files. Drag and drop a ForEach activity from the activities pane and connect the arrowed line from Get Metadata to it.
In Settings, tick the Sequential box to iterate over the files in sequential order.
childItems is the property of the Get Metadata output JSON that holds the array of objects, so select dynamic content for Items, configure it as @activity('Get old files').output.childItems, and click Finish. (Here "Get old files" is the name of the Get Metadata activity created previously.)
In the ForEach activity, edit the activities and set up a Delete activity.
Go to the dataset where we previously linked our file storage account.
Create a file name parameter and link it to the Delete activity by adding dynamic content for the file path, setting it to @dataset().FileName.
Then go to the Delete activity in the pipeline and add the file name.
You may also add a logging setting that links a storage account so you can see the activity after the pipeline is debugged.
Other reference: clean-up-files-by-built-in-delete-activity-in-azure-data-factory/

Get Cosmosdb Container Collection items using powershell

Team, I have created a new Cosmos DB account in the Azure portal with a container that contains a list of collection items. I am able to access the container details in a PowerShell script.
How do I list the collection items, or show a specific collection item using the partition key, in a PowerShell script?
PowerShell script:
Get-AzResource -ResourceType "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers" -ApiVersion "2020-03-01" -ResourceGroupName "testRG" -Name "cosmosaccount1/database1/containercollection1"
You would need to use something like a third-party module for this. Azure Resource Manager doesn't support that, so you need to talk to Cosmos DB directly.
https://github.com/PlagueHO/CosmosDB
The Cosmos DB repo has a set of examples to use Powershell: https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/PowerShellRestApi
Particularly to read Items: https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/PowerShellRestApi/PowerShellScripts/ReadItem.ps1
They all use the REST API; in this case, it is an authenticated GET to https://{databaseaccount}.documents.azure.com/dbs/{db-id}/colls/{coll-id}/docs/{id} (where databaseaccount is your account name, db-id is the id of your database, coll-id is the id of your collection/container, and id is your document id). It also sets the x-ms-documentdb-partitionkey header for the partition key.
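A minimal PowerShell sketch of that request, assuming placeholder values for $accountName, $masterKey, $databaseId, $collectionId, $documentId, and $partitionKeyValue (the signature follows the documented master-key authorization format):
# Build the master-key authorization token for a point read
$verb         = "GET"
$resourceType = "docs"
$resourceLink = "dbs/$databaseId/colls/$collectionId/docs/$documentId"
$date         = [DateTime]::UtcNow.ToString("r")
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($masterKey)
$payload   = "{0}`n{1}`n{2}`n{3}`n`n" -f $verb.ToLowerInvariant(), $resourceType.ToLowerInvariant(), $resourceLink, $date.ToLowerInvariant()
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($payload)))
$authHeader = [Uri]::EscapeDataString("type=master&ver=1.0&sig=$signature")
$headers = @{
    "Authorization"                = $authHeader
    "x-ms-date"                    = $date
    "x-ms-version"                 = "2018-12-31"
    "x-ms-documentdb-partitionkey" = "[""$partitionKeyValue""]"   # JSON array containing the partition key value
}
Invoke-RestMethod -Method Get -Headers $headers -Uri "https://$accountName.documents.azure.com/$resourceLink"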
As @4c74356b41 has indicated, you can use the CosmosDB module, which is published to the PowerShell Gallery and works alongside the official Az module.
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery -Force
Install-Module -Name CosmosDB -Scope CurrentUser -Repository PSGallery -Force
You can see the available commands with Get-Command:
Import-Module Az
Import-Module -Name CosmosDB
Get-Command -Module CosmosDB
Get all items in a collection
In order to get all entries inside a container, we use the Get-CosmosDbDocument command:
$subscription = "SubscriptionName"
$resourceGroupName = "ResourceGroupName"
$accountName = "AzureCosmosDBAccount"
$databaseName = "DatabaseName"
$cosmosContainer = "TargetCosmosDBContainer"
Set-AzContext $subscription
$backOffPolicy = New-CosmosDbBackoffPolicy -MaxRetries 5 -Method Additive -Delay 1000
$cosmosDbContext = New-CosmosDbContext -Account $accountName -Database $databaseName -ResourceGroup $resourceGroupName -BackoffPolicy $backOffPolicy
$documentsPerRequest = 100
$continuationToken = $null
$documents = $null
do {
    $responseHeader = $null
    $getCosmosDbDocumentParameters = @{
        Context        = $cosmosDbContext
        CollectionId   = $cosmosContainer
        MaxItemCount   = $documentsPerRequest
        ResponseHeader = ([ref] $responseHeader)
    }
    if ($continuationToken) {
        $getCosmosDbDocumentParameters.ContinuationToken = $continuationToken
    }
    $documents += Get-CosmosDbDocument @getCosmosDbDocumentParameters
    $continuationToken = Get-CosmosDbContinuationToken -ResponseHeader $responseHeader
} while (-not [System.String]::IsNullOrEmpty($continuationToken))
Note: There is no apparent limitation on the number of documents that can be retrieved with this command, but it stands to reason that the command is subject to the API's response size limit, which is 4 MB (as documented here). The value here ($documentsPerRequest = 100) could prove to be either too big or too small, depending on the size of each document. I usually don't use this parameter, but I've mentioned it here in case someone needs it.
List specific collection item
To get a specific entry or group of entries from a container, we use the same Get-CosmosDbDocument command, in a slightly different way:
$query = "SELECT * FROM c WHERE c.property = 'propertyValue'"
$documents = Get-CosmosDbDocument -Context $cosmosDbContext -CollectionId $cosmosContainer -Query $query -QueryEnableCrossPartition $true
Note: For brevity I haven't gone through the process of getting a continuation token here, but if the query returns a result larger than 4 MB, we will only receive the first part of the response. To make sure this does not happen, we should add Query and QueryEnableCrossPartition to the $getCosmosDbDocumentParameters hashtable used earlier, as sketched below.
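A minimal sketch of that paged query, reusing the context, container, and paging variables from the previous example:
$query = "SELECT * FROM c WHERE c.property = 'propertyValue'"
$continuationToken = $null
$documents = $null
do {
    $responseHeader = $null
    $getCosmosDbDocumentParameters = @{
        Context                   = $cosmosDbContext
        CollectionId              = $cosmosContainer
        Query                     = $query
        QueryEnableCrossPartition = $true
        MaxItemCount              = $documentsPerRequest
        ResponseHeader            = ([ref] $responseHeader)
    }
    if ($continuationToken) {
        $getCosmosDbDocumentParameters.ContinuationToken = $continuationToken
    }
    $documents += Get-CosmosDbDocument @getCosmosDbDocumentParameters
    $continuationToken = Get-CosmosDbContinuationToken -ResponseHeader $responseHeader
} while (-not [string]::IsNullOrEmpty($continuationToken))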

Powershell getting same values when using if in foreach

I'm trying to get metrics from Azure into Zabbix.
The issue is that the metric for a VM consists of two words, Percentage CPU, and Zabbix doesn't allow item keys to consist of two words. I also tried Percentage%20CPU but got errors in Zabbix, so I created the Zabbix key percentage_cpu.
So I decided, prior to sending data from Zabbix to Azure, to "translate" percentage_cpu to Percentage%20CPU. This works great if only that key is present, but the issue starts when I add another key (in this example an SQL metric).
For the SQL metric all values are in one word, so there is no need to change anything, but then the metric for the VM is also assigned to SQL. I'm trying to avoid writing a separate file for every service.
$host_items = Get-ZabbixHostItems -url $zabbix_url -auth $zabbix_auth -zabbix_host $host_name
foreach ($host_item in $host_items)
{
#$host_item_details = select-string -InputObject $host_item.key_ -Pattern '^(azure\.sql)\.(.*)\.(.*)\[\"(.*)\"\]$';
$host_item_details = select-string -InputObject $host_item.key_ -Pattern '^(azure\.\w{2,})\.(.*)\.(.*)\[\"(.*)\"\,(.*)]$';
#$host_item_details = select-string -InputObject $host_item.key_ -Pattern '^(azure)\.(.*)\.(.*)\.(.*)\[\"(.*)\"\,(.*)]$';
$host_item_provider = $host_item_details.Matches.groups[1];
$host_item_metric = $host_item_details.Matches.groups[2];
$host_item_timegrain = $host_item_details.Matches.groups[3];
$host_item_resource = $host_item_details.Matches.groups[4];
$host_item_suffix = $host_item_details.Matches.groups[5];
if ($host_item_metric='percentage_cpu')
{$host_item_metric='Percentage%20CPU'}
else
{ $host_item_metric = $host_item_details.Matches.groups[2];}
#}
$uri = "https://management.azure.com{0}/providers/microsoft.insights/metrics?api-version={1}&interval={2}&timespan={3}&metric={4}" -f `
$host_item_resource, `
"2017-05-01-preview", `
$host_item_timegrain.ToString().ToUpper(), `
$($(get-date).ToUniversalTime().addminutes(-15).tostring("yyyy-MM-ddTHH:mm:ssZ") + "/" + $(get-date).ToUniversalTime().addminutes(-2).tostring("yyyy-MM-ddTHH:mm:ssZ")), `
$host_item_metric;
write-host $uri;
}
Output of $host_items (the key_ values):
azure.sql.dtu_consumption_percent.pt1m["/subscriptions/111-222/resourceGroups/rg/providers/Microsoft.Sql/servers/mojsql/databases/test",common]
azure.vm.percentage_cpu.pt1m["/subscriptions/111-222/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/test",common]
When I run the code above, I get these URIs:
https://management.azure.com/subscriptions/111-222/resourceGroups/rg/providers/Microsoft.Sql/servers/mojsql/databases/test/providers/microsoft.insights/metrics?api-version=2017-05-01-preview&interval=PT1M&timespan=2018-08-11T07:38:05Z/2018-08-11T07:51:05Z&metric=Percentage%20CPU
https://management.azure.com/subscriptions/111-222/resourceGroups/rg/providers/Microsoft.Compute/virtualMachines/test/providers/microsoft.insights/metrics?api-version=2017-05-01-preview&interval=PT1M&timespan=2018-08-11T07:38:05Z/2018-08-11T07:51:05Z&metric=Percentage%20CPU
For the first link (SQL) the metric should be dtu_consumption_percent, but I'm getting the same metric for both links.
Second attempt:
if ($host_item_metric -eq 'percentage_cpu')
{$host_item_metric='Percentage%20CPU';}
else
{ $host_item_metric = $host_item_details.Matches.groups[2];}
write-host $host_item_metric
}
output: (original values)
dtu_consumption_percent
percentage_cpu
I had to use -like:
if ($host_item_metric -like 'percentage_cpu')
{$host_item_metric='Percentage%20CPU';}
else
{ $host_item_metric = $host_item_details.Matches.groups[2]}
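A likely explanation, offered as an assumption: $host_item_details.Matches.groups[2] is a [System.Text.RegularExpressions.Group] object rather than a string, so -eq compares the object rather than its text, while -like coerces the left-hand side to a string first. Comparing against the captured text directly also works:
# Use the captured text (.Value) so ordinary string comparison behaves as expected
$host_item_metric = $host_item_details.Matches.Groups[2].Value
if ($host_item_metric -eq 'percentage_cpu') {
    $host_item_metric = 'Percentage%20CPU'
}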

Powershell runspace output behaves differently depending on how returning custom object is defined

I am experimenting with Powershell runspaces and have noticed a difference in how output is written to the console depending on where I create my custom object. If I create the custom object directly in my script block, the output is written to the console in a table format. However, the table appears to be held open while the runspace pool still has open threads, i.e. it creates a table but I can see the results from finished jobs being appended dynamically to the table. This is the desired behavior. I'll refer to this as behavior 1.
The discrepancy occurs when I add a custom module to the runspace pool and then call a function contained in that module, which then creates a custom object. This object is printed to the screen in a list format for each returned object. This is not the desired behavior. I'll call this behavior 2
I have tried piping the output from behavior 2 to Format-Table but this just creates a new table for each returned object. I can achieve the desired effect somewhat by using Write-Host to print a line of the object values but I don't think this is appropriate considering it seems there is a built in behavior that can achieve my desired result if I can understand it.
My thoughts on the matter are that it has something to do with the asynchronous behavior of the runspace. I'm new to PowerShell, but perhaps when the custom object comes directly from the script block there is a hidden method or type declaration telling PowerShell to hold the table open and wait for results? This would be overridden when using the second technique because it's coming from my custom function?
I would like to understand why this is occurring and how I can achieve behavior 1 while being able to use the custom module, which will eventually be very large. I'm open to a different technique as well, so long as it's possible to essentially see the table of outputs grow as jobs finish. The code used is below.
$ISS = [InitialSessionState]::CreateDefault()
[void]$ISS.ImportPSModule(".\Modules\Test-Item.psm1")
$Pool = [RunspaceFactory]::CreateRunspacePool(1, 5, $ISS, $Host)
$Pool.Open()
$Runspaces = @()
# Script block to run code in
$ScriptBlock = {
Param ( [string]$Server, [int]$Count )
Test-Server -Server $Server -Count $Count
# Uncomment the three lines below and comment out the two
# lines above to test behavior 1.
#[int] $SleepTime = Get-Random -Maximum 4 -Minimum 1
#Start-Sleep -Seconds $SleepTime
#[pscustomobject]@{Server=$Server; Count=$Count;}
}
# Create runspaces and assign to runspace pool
1..10 | ForEach-Object {
$ParamList = @{ Server = "Server A"; Count = $_ }
$Runspace = [PowerShell]::Create()
[void]$Runspace.AddScript($ScriptBlock)
[void]$Runspace.AddParameters($ParamList)
$Runspace.RunspacePool = $Pool
$Runspaces += [PSCustomObject]@{
Id = $_
Pipe = $Runspace
Handle = $Runspace.BeginInvoke()
Object = $Object
}
}
# Check for things to be finished
while ($Runspaces.Handle -ne $null)
{
$Completed = $Runspaces | Where-Object { $_.Handle.IsCompleted -eq $true }
foreach ($Runspace in $Completed)
{
$Runspace.Pipe.EndInvoke($Runspace.Handle)
$Runspace.Handle = $null
}
Start-Sleep -Milliseconds 100
}
$Pool.Close()
$Pool.Dispose()
The custom module I'm using is as follows.
function Test-Server {
Param ([string]$Server, [int]$Count )
[int] $SleepTime = Get-Random -Maximum 4 -Minimum 1
Start-Sleep -Seconds $SleepTime
[pscustomobject]@{Server = $Server; Item = $Count}
}
What you have described sounds completely normal to me. That is how PowerShell is designed: it shares the burden of display, and if the user has not specified how to display the output, PowerShell decides how to.
I couldn't reproduce your issue with the code provided but I think this will solve your problem.
$FinalTable = foreach ($Runspace in $Completed)
{
$Runspace.Pipe.EndInvoke($Runspace.Handle)
$Runspace.Handle = $null
}
$FinalTable will now have the table format you expect.
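In context, that collection could wrap the original polling loop from the question, for example (a sketch based on the question's code):
# Capture everything the runspaces emit, then format it once at the end
$FinalTable = while ($Runspaces.Handle -ne $null)
{
    $Completed = $Runspaces | Where-Object { $_.Handle.IsCompleted -eq $true }
    foreach ($Runspace in $Completed)
    {
        $Runspace.Pipe.EndInvoke($Runspace.Handle)   # output flows into $FinalTable
        $Runspace.Handle = $null
    }
    Start-Sleep -Milliseconds 100
}
$FinalTable | Format-Table -AutoSize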
It appears that my primary issue, aside from errors in my code, was a lack of understanding of PowerShell's default object formatting. PowerShell displays an object's output as a table when it has four or fewer properties and as a list when it has five or more.
The custom object returned by my test module had more than four properties, while the custom object I returned directly only had two. This resulted in what I thought was odd behavior. I compounded the issue by removing some key-value pairs in my posted code to shorten it and then didn't test it (sorry).
This Stack Overflow post has a lengthy answer explaining the behavior and providing examples for changing the default output.
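A quick way to see that rule in action (a sketch):
# Four properties -> rendered with the default table view
[pscustomobject]@{ A = 1; B = 2; C = 3; D = 4 }
# Five properties -> rendered with the default list view
[pscustomobject]@{ A = 1; B = 2; C = 3; D = 4; E = 5 }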

List of all subscriptions

How do I check all reports' subscriptions on SharePoint 2010?
I only know how to check the subscriptions on a specific report.
Unfortunately, there is no way for you to do this from the GUI. You are going to have to break out PowerShell to get this information.
NOTE: I haven't tested this code (I'm writing it from the hip), but the gist should help you out:
$spWeb = Get-SPWeb "<URL of the site the reports are contained in>"
$spRepList = $spWeb.Lists["<name of the list containing the reports>"]
# Get all files
$spFileList = $spRepList.Files
foreach ($spFile in $spFileList)
{
    # Determine whether the file is a report or a regular document
    if ($spFile.URL -like "*.rdl")
    {
        $reportServiceProxy = New-WebServiceProxy -URI "<URL of the reporting web service>" -Namespace "<namespace of the service>" -UseDefaultCredentials
        # $site is the URL of the SharePoint site the reports live in (e.g. $spWeb.Url)
        $subscriptionList += $reportServiceProxy.ListSubscriptions($site)
        # From here you can write to a file or write to the screen. I will let you decide.
        $subscriptionList | select Path, report, Description, Owner, SubscriptionID, lastexecuted, Status | where { $_.path -eq $spFile.URL }
    }
}
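For example, to collect the results into a CSV file instead of writing them to the screen (a sketch, reusing $subscriptionList from above; the output path is a placeholder):
$subscriptionList |
    select Path, report, Description, Owner, SubscriptionID, lastexecuted, Status |
    Export-Csv -Path "C:\temp\ReportSubscriptions.csv" -NoTypeInformation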
Hope this helps.
Dave
