Does Get-AzureRmHdInsightJobOutput return an object?

I am currently working on a pre-existing Kettle job that calls a PowerShell script that submits a Pig job to Azure HDInsight. That script uses Get-AzureHdInsightJobOutput, which is now deprecated, so I am replacing it with Get-AzureRmHdInsightJobOutput. However, the new cmdlet has two parameter sets, one for display and one for download. I need an object to be returned in order to avoid making changes to the Kettle job.
I'm hoping to find out whether the display parameter set returns an object or just prints out the results.

Yes, the Get-AzureRmHdInsightJobOutput cmdlet returns a string object when used with the Display parameter set, regardless of the DisplayOutputType.
You can refer to the source code of this cmdlet below, in the Azure PowerShell GitHub repository:
GetAzureHDInsightJobOutputCommand.cs
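A minimal sketch of capturing that string (the cluster, job, storage and credential values below are placeholder assumptions, not from the question):

# Placeholder values; substitute your own cluster, job ID, storage account and credentials.
$creds = Get-Credential          # cluster HTTP (Ambari) credentials
$storageKey = "<storage-key>"
$jobId = "<job-id>"              # e.g. the JobId of the submitted Pig job

$output = Get-AzureRmHdInsightJobOutput `
    -ClusterName "my-hdinsight-cluster" `
    -JobId $jobId `
    -DefaultContainer "my-container" `
    -DefaultStorageAccountName "mystorageaccount" `
    -DefaultStorageAccountKey $storageKey `
    -HttpCredential $creds `
    -DisplayOutputType StandardOutput

# The Display parameter set returns the job output as a string object,
# so it can be captured in a variable rather than only written to the host.
$output.GetType().FullName       # System.String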
Hope this helps!

Related

Execute task on Azure VM with condition

I need to take down Azure VMs in a controlled way, first stopping them and then removing them. The "stop" part needs to be executed only if the VM exists; otherwise the task creates one in a stopped state and then removes it. I tried variations on when: state == "present" but without success. Where can I find an example or documentation about how to use that? Or maybe the solution is to have a previous task retrieve the VM info and act on that?
TIA!
If you haven't tried it already, have the first task use the azure_rm_virtualmachine_info module and save its result to a variable with register. Then have the second task use a when condition that checks whether the state recorded in that registered variable is "present" or not.
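A minimal sketch of that pattern (the resource group and VM names are placeholders, and the exact shape of the returned facts may vary by collection version):

- name: Gather facts about the VM
  azure_rm_virtualmachine_info:
    resource_group: myResourceGroup
    name: myVM
  register: vm_info

- name: Stop (deallocate) the VM only if it exists
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: myVM
    allocated: no
  when: vm_info.vms | length > 0

- name: Remove the VM only if it exists
  azure_rm_virtualmachine:
    resource_group: myResourceGroup
    name: myVM
    state: absent
  when: vm_info.vms | length > 0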

Parametrization using Azure Data Factory

I have a pipeline job in Azure Data Factory which I want to run while passing through all files for a specific month, for example.
I have a folder called 2020/01; inside this folder are numerous files with different names.
The question is: Can one pass a parameter through to only extract and load the files for 2020/01/01 and 2020/01/02 if that makes sense?
Excellent, thanks Jay, it worked and I can now run my pipeline jobs passing through the month or even day level.
Really appreciate your response, have a fantastic day.
Regards
Rayno
The question is: Can one pass a parameter through to only extract and load the files for 2020/01/01 and 2020/01/02 if that makes sense?
You didn't mention which connector you are using in the pipeline job, but you did mention a folder in your question. As far as I know, most folder paths can be parametrized in the ADF copy activity configuration.
You could create a parameter on the pipeline and then apply it in the wildcard folder path of the copy activity source (a sketch follows below).
Even if your files' names share the same prefix, you could apply 01*.json to the wildcard file name property.
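A minimal sketch of how that might look in the copy activity source JSON (the parameter name folderPath and the source/store types are assumptions for illustration):

{
    "source": {
        "type": "JsonSource",
        "storeSettings": {
            "type": "AzureBlobStorageReadSettings",
            "recursive": true,
            "wildcardFolderPath": {
                "value": "@pipeline().parameters.folderPath",
                "type": "Expression"
            },
            "wildcardFileName": "01*.json"
        }
    }
}

You would then trigger the pipeline with folderPath set to, for example, 2020/01.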

Azure DevOps logging commands in release pipeline

I am trying to customize the output of my release pipeline by setting some environment variables in a task.
I found the following link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/scripts/logging-commands?view=azure-devops&tabs=powershell
which, however, does not seem to work.
What I am doing is simply creating a pipeline with a single task (either Bash or PowerShell) and declaring the commands specified in the link through the inline version of the task.
Has anyone already successfully managed to make these commands work?
Do I do something wrong and/or incomplete?
Does anyone have a better way to customise the pipeline with relevant information from a task? E.g. through the release name, or the description and/or tag of the specific release?
Edit:
Write-Host "##vso[task.setvariable variable=sauce;]crushed tomatoes"
Write-Host "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"
Write-Host "Non-secrets automatically mapped in, sauce is $env:SAUCE"
Write-Host "Secrets are not automatically mapped in, secretSauce is $env:SECRETSAUCE"
Write-Host "You can use macro replacement to get secrets, and they'll be masked in the log: $(secretSauce)"
This is the code, copied and pasted. I also tried it as a script file, and it does not work either.
I use a hosted Windows agent.
When you set a new variable with the logging command, the variable is available only in the subsequent tasks, not in the same task.
So, split your script into two tasks; put the last three lines in the second task and you will see that the first task works:
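A minimal sketch of that split, using the same inline PowerShell from the question:

# Task 1 (inline PowerShell): set the variables with logging commands.
Write-Host "##vso[task.setvariable variable=sauce;]crushed tomatoes"
Write-Host "##vso[task.setvariable variable=secretSauce;issecret=true]crushed tomatoes with garlic"

# Task 2 (a separate inline PowerShell task): the variables set above are now available.
Write-Host "Non-secrets automatically mapped in, sauce is $env:SAUCE"
Write-Host "Secrets are not automatically mapped in, secretSauce is $env:SECRETSAUCE"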
This also puzzled me for a while. In the end I found out that if you want to modify $env:Path, you can call the special task.prependpath command using the same logging command syntax, like "##vso[task.prependpath]local directory path". You can find more of these special commands in their source:
https://github.com/microsoft/azure-pipelines-tasks/blob/master/docs/authoring/commands.md
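For example, a single line in an inline PowerShell task (the directory path is a placeholder):

# Subsequent tasks in the job will see this directory at the front of PATH.
Write-Host "##vso[task.prependpath]C:\tools\mybin"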

How to find child pipeline run id of execute pipeline

I want to find the expression that returns the run ID of the child pipeline of an Execute Pipeline activity. Currently I am using:
@activity('Execute Pipeline1').Output.pipeline.RunId
which is giving an error. How can I access it?
I figured out that the expression @activity('Execute Pipeline1').output gives JSON which has been converted to a string, and hence anything beyond the output value is not being parsed as JSON.
One way to get past this is by using a custom activity chained to the Execute Pipeline activity and parsing the output using .NET code.
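If you take the custom activity route, the parsing itself is only a few lines once the stringified output reaches your code; a minimal sketch (the property path mirrors the question and is an assumption about the actual JSON shape):

using Newtonsoft.Json.Linq;

class ChildRunIdParser
{
    // outputJson: the stringified Execute Pipeline output passed to the custom activity.
    public static string GetChildRunId(string outputJson)
    {
        JObject output = JObject.Parse(outputJson);
        // The property path is an assumption; adjust it to the shape of your output.
        return (string)output.SelectToken("pipeline.RunId");
    }
}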
Hope this helps.

Executing a script file from an Azure blob and writing its results to a file

I'll explain the task requested of me:
I have two containers in Azure, one called "data" and one called "script". In the "data" container there's a txt file with data, and in the "script" container there's a script file.
Now, I need to programmatically (with a WorkerRole) execute the script file, with the content of the data file as its parameters (for example: a script file that accepts a string s and prints "Hello, s", where s is the string given in the data file), and save the result of the run into another file, which needs to be stored in a third container called "result".
How do I do all this? I've already uploaded the files and created the blobs programmatically, but I can't seem to understand how to execute the file or how to save its result to another file.
Can I please have some help?
Thanks in advance
Here are the steps in pseudo code:
1. Retrieve the script from the blob (using DownloadToStream()).
2. Compile the script (I will leave this to you, as I have no idea what format your script is in).
3. Load the parameters from the blob (same as step 1).
4. Execute the script with those parameters.
If your scripts can be written as lambda expressions, then this becomes a lot easier, as you can turn them into Actions.
Edit based on your questions:
DownloadText() is no longer included in Azure Storage 2.0; you only have access to DownloadToStream(). Even if you are using an older version (say 1.7), I would recommend using DownloadToStream() in case you ever upgrade in the future. This will prevent having to refactor your code.
In terms of executing your script, it depends on what type of script it is. If it is C# code, you can use this example: Is it possible to dynamically compile and execute C# code fragments?. If you need to execute a different type of script, you would need to run it using Process.Start, and you can look at this example: http://www.dotnetperls.com/process-start
I do not have much experience with point number 2, but those are the processes I have heard of and seen used.
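A rough sketch of steps 1, 3 and 4 for the simpler case where the script is already an executable, so no compilation is needed (the container and blob names and the connection string are placeholders, using the Storage 2.0 client library):

using System.Diagnostics;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage;

class ScriptRunner
{
    public static void Run(string connectionString)
    {
        var client = CloudStorageAccount.Parse(connectionString).CreateCloudBlobClient();

        // Step 1: retrieve the script from the "script" container via DownloadToStream().
        string scriptPath = Path.Combine(Path.GetTempPath(), "script.exe");
        using (var fs = File.Create(scriptPath))
            client.GetContainerReference("script").GetBlockBlobReference("script.exe").DownloadToStream(fs);

        // Step 3: load the parameters from the "data" container the same way.
        string args;
        using (var ms = new MemoryStream())
        {
            client.GetContainerReference("data").GetBlockBlobReference("data.txt").DownloadToStream(ms);
            args = Encoding.UTF8.GetString(ms.ToArray());
        }

        // Step 4: execute the script with those parameters, capturing its standard output.
        var psi = new ProcessStartInfo(scriptPath, args)
        {
            UseShellExecute = false,
            RedirectStandardOutput = true
        };
        string result;
        using (var process = Process.Start(psi))
        {
            result = process.StandardOutput.ReadToEnd();
            process.WaitForExit();
        }

        // Save the run's output into the "result" container.
        using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(result)))
            client.GetContainerReference("result").GetBlockBlobReference("result.txt").UploadFromStream(ms);
    }
}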
