I'm developing an asset discovery tool that uses ping. I need to ping multiple networks simultaneously so that discovery is faster. To do that, I define a hash table containing the destination networks, as follows. How can I ping multiple networks simultaneously?
Please take a look at a snippet of my code:
$Hosts = @{}
$Time = Get-Date
$Networks = @{
    Network_X = 1..254 | ForEach-Object { "192.168.50.$_" }
    Network_Y = 1..254 | ForEach-Object { "192.168.63.$_" }
    Network_Z = 1..254 | ForEach-Object { "192.168.65.$_" }
}
My attempts using "ForEach-Object" and "foreach" have failed.
You can use Ping.SendPingAsync() to initiate a ping asynchronously:
$PingTasks = foreach ($network in $Networks.GetEnumerator()) {
    foreach ($ip in $network.Value) {
        # For each IP in each network, create a new object with both details + an async ping task
        [pscustomobject]@{
            Network = $network.Name
            IP      = $ip
            Request = [System.Net.NetworkInformation.Ping]::new().SendPingAsync($ip)
        }
    }
}
# Wait for all tasks to finish
[System.Threading.Tasks.Task]::WaitAll($PingTasks.Request)
# Gather results
foreach ($task in $PingTasks) {
    if ($task.Request.Result.Status -eq 'Success') {
        # Extract Network + IP from the hosts that responded to our ping
        $Hosts[$task.IP] = $task | Select-Object Network, IP
    }
}
$Hosts will now contain an entry for each successfully pinged IP, with the network name attached.
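As a quick usage sketch (assuming the $Hosts table built above), you can then summarize what responded, for example:
# Count responding hosts per network
$Hosts.Values | Group-Object Network | ForEach-Object {
    '{0}: {1} hosts up' -f $_.Name, $_.Count
}
# Or list every responding host
$Hosts.Values | Sort-Object Network, IP | Format-Table Network, IP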
I have two PowerShell Azure functions. The first one collects data and the second one pushes data to an Azure Storage Table. I can see my first function is calling the second function, but the second function isn't getting the data. Here is my first function:
$body = @{"partitionKey"="01";"rowKey"="02";"userId"="00001"}
$dataJson = $body | ConvertTo-Json
$functionUri = 'https://functionNumber2'
Invoke-WebRequest -Uri $functionUri -Method POST -Body $dataJson
And my second function looks like this:
using namespace System.Net
# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)
$tableData = $Request.Body.Address
# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."
# Interact with query parameters or the body of the request.
$name = $Request.Query.Name
if (-not $name) {
    $name = $Request.Body.Name
}
Write-Host "Name $name"
$body = "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
if ($name) {
    $body = "Hello, $name. This HTTP triggered function executed successfully."
}
# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body = $body
})
Push-OutputBinding -Name tableName -Value $tableData -Clobber
The error I'm getting is:
ERROR: Cannot bind argument to parameter 'Value' because it is null. Exception : Type : System.Management.Automation.ParameterBindingValidationException Message : Cannot bind argument to parameter 'Value' because it is null.
And if I print out $tableData, there is nothing there. I must be doing something wrong when passing the data.
Can anyone help me with this?
After reproducing this on my end, I observed that you are receiving this error because of the line below.
$tableData = $Request.Body.Address
I was able to make this work by using the line below.
$tableData = $Request.Body | ConvertFrom-Json
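Putting the two functions together, a minimal sketch of the working flow (the -ContentType parameter is my assumption, not shown in the question; it marks the payload as JSON so the body arrives as a parseable string):
# First function: post the hashtable as JSON
$body = @{ "partitionKey" = "01"; "rowKey" = "02"; "userId" = "00001" }
$dataJson = $body | ConvertTo-Json
# -ContentType is an assumption; it ensures the receiver sees a JSON payload
Invoke-WebRequest -Uri $functionUri -Method POST -Body $dataJson -ContentType 'application/json'

# Second function: parse the raw JSON body before binding it to the table output
$tableData = $Request.Body | ConvertFrom-Json
Push-OutputBinding -Name tableName -Value $tableData -Clobber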
I want to create an instance in OpenStack with only a pre-defined network interface attached. I have access to OpenStack, and I know the network interface ID/name.
After creating an instance I could simply attach the interface, but then the instance first gets a randomly assigned IP from the pool and only afterwards gets the network interface attached. That's not what I want.
As stated at the beginning, I want to attach the interface while I build the instance.
Edit - Example code:
Host Creation:
resource "openstack_compute_instance_v2" "example_host" {
count = 1
name = example-host
image_name = var.centos_7_name
flavor_id = "2"
key_pair = "key"
}
Interface attaching:
resource "openstack_compute_interface_attach_v2" "example_interface_attach" {
instance_id = openstack_compute_instance_v2.example_host[0].id
port_id = "bd858b4c-d6de-4739-b125-314f1e7041ed"
}
This won't work. Terraform returns an error:
Error: Error creating OpenStack server: Expected HTTP response code []
when accessing [POST servers], but got 409 instead
{"conflictingRequest": {"message": "Multiple possible networks found,
use a Network ID to be more specific.", "code": 409}}
Back to my initial query: I want to deploy a new host and attach a network interface. The result should be a host with only one IP address, the one I've attached to it.
The error seems to be generated by the instance launch. OpenStack (not Terraform) insists on a network if more than one network is available. From an OpenStack perspective, you have several solutions. Off the cuff, I see three:
Since microversion 2.37, the Nova API allows you to specify "none" as a network, in which case the instance runs, but is not connected after the launch.
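For illustration, the corresponding CLI call would be (an assumption on my part that your client and cloud expose microversion 2.37):
openstack server create ... --nic none ...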
Or launch the instance on a port instead of a network, after putting an IP address on the port. Using the openstack client:
openstack port create --network <network> --fixed-ip subnet=<subnet>,ip-address=<ip-address>
openstack server create ... --port <port-id> ...
I consider that the best solution.
Another solution would be specifying a network and a fixed IP address while launching the instance. CLI:
openstack server create ... --nic NET-UUID,v4-fixed-ip=172.16.7.8 ...
Unfortunately, I can't tell whether Terraform supports any of these solutions. I would try adding the port_id to the resource "openstack_compute_instance_v2" "example_host" block.
I've found the solution, and it's incredibly simple: you can plainly add the port ID to the network block. I had tried this before and it failed; chances are I provided the wrong ID.
## Create hosts
resource "openstack_compute_instance_v2" "test_host" {
  count      = 1
  name       = format("test-host-%02d", count.index + 1)
  image_name = var.centos_7_name
  flavor_id  = "2"
  key_pair   = "key"

  network {
    port = "<port-id>"
  }
}
Here's an additional solution that removes the risk of providing a wrong ID.
## Create hosts
resource "openstack_compute_instance_v2" "test_host" {
  count      = 1
  name       = format("test-host-%02d", count.index + 1)
  image_name = var.centos_7_name
  flavor_id  = "2"
  key_pair   = "key"

  network {
    port = data.openstack_networking_port_v2.port_1.id
  }
}

data "openstack_networking_port_v2" "port_1" {
  name = "switch-port-208.37"
}
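If you want to verify that the data source resolves to the port you expect before tying it to the instance, you could expose it as an output (a small optional sketch; the output name is arbitrary):
# Optional: surface the looked-up port ID so a wrong port name fails visibly at plan/apply time
output "port_1_id" {
  value = data.openstack_networking_port_v2.port_1.id
}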
I am working on running runbooks (PowerShell and graphical) from ADF. One of the ways I found to accomplish this is to use webhooks. I will have runbooks running in parallel and in series (if a dependency exists on a previous runbook).
Overall,
If a flat file is dropped in Azure Blob Storage, it triggers the pipeline that contains the respective runbook(s). This part is working.
The webhooks of the runbook(s) are used in an ADF Webhook activity. This is where I am facing the problem: I am unsure about what should be in the body of the Webhook activity.
After some research I found something about a callback URI that needs to be added (or somehow generated) in the body of the webhook. How can I get this callback URI? If I don't add a proper callback URI, the activity runs until timeout. I believe the Webhook activity should complete when the runbook it runs has executed successfully, so we can move on to the next Webhook activity in the pipeline. I have tried the Web activity as well, but it's the same issue.
The body I am using right now is just the JSON below.
{"body":{"myMessage":"Sample"}}
I have referenced:
https://vanishedgradient.com/2019/04/25/webhooks-with-azure-data-factory/
https://mrpaulandrew.com/2019/06/18/azure-data-factory-web-hook-vs-web-activity/
https://social.msdn.microsoft.com/Forums/en-US/2effcefb-e65b-4d5c-8b01-138c95126b79/in-azure-data-factory-v2-how-to-process-azure-analysis-service-cube?forum=AzureDataFactory
Thanks for the links, they are useful sources. I've managed to get this working for a pipeline that calls a runbook to resize Azure Analysis Services. Having the runbook return failure and success information was not well documented.
Here's some code to assist a little, which I've taken from several places, but a lot from the open issue (https://github.com/MicrosoftDocs/azure-docs/issues/43897) on this Microsoft page: https://learn.microsoft.com/en-us/azure/data-factory/control-flow-webhook-activity
The Data Factory Webhook activity passes in some headers: SourceHost, which is @pipeline().DataFactory, and SourceProcess, which is @pipeline().Pipeline. This is so we can do some checking to confirm that the runbook is being run by acceptable processes.
The Body of the call is then other variables we required:
@json(concat('{"AnalysisServer":"', pipeline().parameters.AASName, '", "MinimumSKU":"', pipeline().parameters.SKU,'"}') )
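For example, with hypothetical parameter values AASName = "myaas" and SKU = "S1", that expression resolves to the request body {"AnalysisServer":"myaas","MinimumSKU":"S1"}; the Webhook activity then appends the callBackUri property to this body on its own (as noted further down), so you never construct it yourself.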
Your runbook needs the WebhookData parameter
param
(
[Parameter (Mandatory=$false)]
[object] $WebhookData
)
You can then grab all the bits you need, including checking if a callbackuri was provided:
if ($WebhookData)
{
    # Split apart the WebhookData
    $WebhookName     = $WebhookData.WebhookName
    $WebhookHeaders  = $WebhookData.RequestHeader
    $WebhookBody     = $WebhookData.RequestBody | ConvertFrom-Json
    $WebhookADF      = $WebhookHeaders.SourceHost
    $WebhookPipeline = $WebhookHeaders.SourceProcess
    Write-Output -InputObject ('Runbook started through webhook {0} called by {1} on {2}.' -f $WebhookName, $WebhookPipeline, $WebhookADF)
    # If there's a callBackURI then we've been called by something that is waiting for a response
    If ($WebhookBody.callBackUri)
    {
        $WebhookCallbackURI = $WebhookBody.callBackUri
    }
    ...
}
The variable $WebHookHeaders: @{Connection=Keep-Alive; Expect=100-continue; Host=sXXevents.azure-automation.net; SourceHost=**MYDATAFACTORYNAME**; SourceProcess=**MYPIPELINENAME**; x-ms-request-id=**UNIQUEIDENTIFIER**}
You can then grab information out of your json body: $AzureAnalysisServerName = $WebHookBody.AnalysisServer
Passing an error/failure back from your runbook is relatively easy. Note that I put my success/update message into $OutputMessage and only have content in $ErrorMessage if there's been an error:
$ErrorMessage = "Failed to do stuff I wanted"
if ($ErrorMessage)
{
$Output = [ordered]#{ output= #{
AzureAnalysisServerResize = "Failed" }
error = #{
ErrorCode = "ResizeError"
Message = $ErrorMessage
}
statusCode = "500"
}
} else {
$Output = [ordered]#{
output= #{
"AzureAnalysisServerResize" = "Success"
"message" = $Outputmessage
}
statusCode = "200"
}
}
$OutputJson = $Output | ConvertTo-Json -Depth 10
# If we have a callbackuri let the ADF Webhook activity know that the script is complete
# Otherwise it waits until its timeout
If ($WebhookCallBackURI)
{
    $WebhookCallbackHeaders = @{
        "Content-Type" = "application/json"
    }
    Invoke-WebRequest -UseBasicParsing -Uri $WebhookCallBackURI -Method Post -Body $OutputJson -Header $WebhookCallbackHeaders
}
I then end the if ($WebhookData) { block with an else, to say the runbook shouldn't be running if it wasn't called from a webhook:
} else {
Write-Error -Message 'Runbook was not started from Webhook' -ErrorAction stop
}
Passing back an error message was quite easy; passing back a success message has been traumatic, but the above seems to work, and in my Data Factory pipeline I can access the results.
Output
{
"message": "Analysis Server MYSERVERNAME which is SKU XX is already at or above required SKU XX.",
"AzureAnalysisServerResize": "Success"
}
Note that with Invoke-WebRequest, some examples online don't specify -UseBasicParsing, but we had to, as the runbook complained: Invoke-WebRequest : The response content cannot be parsed because the Internet Explorer engine is not available, or Internet Explorer's first-launch configuration is not complete.
I'm not sure if this is best practice, but I have something that is working in a PowerShell Workflow runbook.
If the runbook has a webhook defined then you use the WebhookData parameter. Your request body needs to be in JSON format, and the $WebhookData parameter picks it up. For example, suppose the Body in your Webhook activity looks like this:
{"MyParam":1, "MyOtherParam":"Hello"}
In your runbook you pick up the parameters this way:
Param([object]$WebhookData)

if ($WebhookData) {
    $parameters = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)
    if ($parameters.MyParam)      { $ParamOne = $parameters.MyParam }
    if ($parameters.MyOtherParam) { $ParamTwo = $parameters.MyOtherParam }
}
The variables in your runbook, $ParamOne and $ParamTwo, are populated from the parsed JSON Body string. Data Factory automatically appends the callBackUri to the Body string; you don't need to create it.
You have to use the $WebhookData name. It's a defined property.
I hope this helps.
Apologies for the delay. I found the complete solution a few months back; thanks to Nick and Sara for adding the pieces. I used similar code as the return code. We were using graphical runbooks with limited changes allowed, so I just added the return code (PowerShell) at the end of the runbook to have little to no impact. I plugged in the code below:
if ($WebhookData)
{
    Write-Output $WebhookData
    $parameters = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)
    if ($parameters.callBackUri)
    {
        $callbackuri = $parameters.callBackUri
    }
}
if ($callbackuri)
{
    Invoke-WebRequest -Uri $callbackuri -UseBasicParsing -Method POST
}
Write-Output $callbackuri
After this I added an input parameter using the "Input and Output" button available in the runbook. I named the input parameter "WebhookData" with type "Object". The name of the input parameter is case-sensitive and must match the parameter used in the PowerShell code.
This resolved my issue. The runbook started when called from the ADF pipeline, and the pipeline moved on to the next activity only when the underlying runbook called by the webhook had completed.
I have a Laravel project on my localhost and I want to configure Elasticsearch for it, but when I configure my host this error appears: "host is not a valid parameter".
I have tried it several ways, like:
$params = array('hosts' => array('host' => 'localhost'));
or
$params = array('hosts' => array('host' => '127.0.0.1', 'port' => 8080));
and some other ways.
The error comes from this check in the client:
foreach ($params as $key => $value) {
    if (array_search($key, $whitelist) === false) {
        throw new UnexpectedValueException($key . ' is not a valid parameter');
    }
}
The hosts array is not supposed to be keyed itself - that's basically what the error message is saying: the keys host (and port) are not expected.
In your code, try it without the host key, just like this:
$params = array('hosts' => array('localhost:8080'));
The default is localhost:9200. For further details, read the official documentation.
Like matpop said, the "hosts" array is not keyed itself. Here's an example from the ES documentation:
$params = array();
$params['hosts'] = array(
    '192.168.1.1:9200',         // IP + Port
    '192.168.1.2',              // Just IP
    'mydomain.server.com:9201', // Domain + Port
    'mydomain2.server.com',     // Just Domain
    'https://localhost',        // SSL to localhost
    'https://192.168.1.3:9200'  // SSL to IP + Port
);
$client = new Elasticsearch\Client($params);
I'm trying to scale my Azure SQL DB with PHP. All the other SQL statements work fine, but when I'm sending
ALTER DATABASE db1_abcd_efgh MODIFY (EDITION = 'Web', MAXSIZE=5GB);
I get an error like this:
User must be in the master database.
My database URL is
xaz25jze9d.database.windows.net
and the database is named like this:
db1_abcd_efgh
function skale_a_m() {
    $host = "tcp:xaz25jze9d.database.windows.net,1433\sqlexpress";
    $user = "db_user";
    $pwd  = "xxxxx?!";
    $db   = "master"; // I have tried out db1_abcd_efgh at this point
    try {
        $conn = new PDO("sqlsrv:Server= $host ; Database = $db ", $user, $pwd);
        $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    } catch (Exception $e) {
    }
    $string = 'use master; ALTER DATABASE db1_a_m MODIFY (EDITION =\'Web\', MAXSIZE=5GB)';
    $stmt = $conn->query($string);
}
Now I have modified my function like this:
function skale_a_m() {
    $serverName = "tcp:yq6ipq11b4.database.windows.net,1433";
    $userName = 'db_user@yq6ipq11b4';
    $userPassword = 'xxxxx?!';
    $connectionInfo = array("UID" => $userName, "PWD" => $userPassword, "MultipleActiveResultSets" => true);
    $conn = sqlsrv_connect($serverName, $connectionInfo);
    if ($conn === false) {
        echo "Failed to connect...";
    }
    $string = "ALTER DATABASE master MODIFY (EDITION ='Web', MAXSIZE=5GB)";
    $stmt = sqlsrv_query($conn, $string);
}
Now I get no errors, but the DB did not scale.
According to ALTER DATABASE (Windows Azure SQL Database), the ALTER DATABASE statement has to be issued when connected to the master database.
With PDO, this can be achieved by a connection string such as:
"sqlsrv:server=tcp:{$server}.database.windows.net,1433; Database=master"
Sample code:
<?php
function scale_database($server, $username, $password, $database, $maxsize) {
    try {
        $conn = new PDO("sqlsrv:server=tcp:{$server}.database.windows.net,1433; Database=master", $username, $password);
        $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
        $conn->setAttribute(constant('PDO::SQLSRV_ATTR_DIRECT_QUERY'), true);
        $conn->exec("ALTER DATABASE {$database} MODIFY (MAXSIZE={$maxsize}GB)");
        $conn = null;
    }
    catch (Exception $e) {
        die(print_r($e));
    }
}

scale_database("yourserver", "youruser", "yourpassword", "yourdatabase", "5");
?>
Note: It's not necessary to set the edition; it will be set according to the max size.
To test the sample code, configure it with your details (server name, login, password and database to be scaled) and execute it with PHP configured with the Microsoft Drivers 3.0 for PHP for SQL Server.
After that, refresh (Ctrl+F5) the Windows Azure Management Portal and you should see the new max size reflected on the Scale tab of the database.
You can also verify that it worked by using a tool to connect to the scaled database (not to the master database) and issuing this command:
SELECT CONVERT(BIGINT,DATABASEPROPERTYEX ('yourdatabase', 'MAXSIZEINBYTES'))/1024/1024/1024 AS 'MAXSIZE IN GB'
$string = 'use master; ALTER DATABASE db1_a_m MODIFY (EDITION =\'Web\', MAXSIZE=5GB)'
I'm pretty sure SQL Azure does not support switching databases using the USE command.
Try connecting directly to the master DB in your connection, and remove the USE master statement from the start of your query.
$host = "tcp:xaz25jze9d.database.windows.net,1433\sqlexpress";
That also looks wrong to me; you shouldn't have a named instance called sqlexpress at the end of your Azure SQL server address, afaik.
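Putting both fixes together, the connection from the question would look roughly like this (a sketch, reusing the credentials from the question):
// Connect straight to master, with no named-instance suffix on the server address
$host = "tcp:xaz25jze9d.database.windows.net,1433";
$conn = new PDO("sqlsrv:Server=$host; Database=master", $user, $pwd);
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
// No 'use master;' prefix needed; the connection is already scoped to master
$conn->exec("ALTER DATABASE db1_abcd_efgh MODIFY (EDITION='Web', MAXSIZE=5GB)");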