How do I get disk space statistics for a clustered disk - c#-4.0

I have a working program that retrieves disk information such as FreeSpace and TotalSpace from a remote server. However, I cannot get the same statistics for the Clustered Disks configured on the server; the query only returns information for the local disks (Logical Disk).
I am able to get sizes for the local disk (C:) as below:
public List<Disk> GetEnvironmentStatistics()
{
    var serverIP = Convert.ToString(System.Web.HttpContext.Current.Session["ServerIP"]);
    List<Disk> diskinfo = new List<Disk>();
    //Add System.Management to access these utilities
    ConnectionOptions options = new ConnectionOptions
    {
        Username = Convert.ToString(System.Web.HttpContext.Current.Session["Username"]),
        Password = Convert.ToString(System.Web.HttpContext.Current.Session["Password"]),
        Authority = Convert.ToString(System.Web.HttpContext.Current.Session["Authority"]),
    };
    //root - root of the tree, cimv2 - version
    ManagementScope scope = new ManagementScope("\\\\" + serverIP + "\\root\\CIMV2", options);
    scope.Connect();
    SelectQuery query = new SelectQuery("Select * from Win32_LogicalDisk");
    ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);
    ManagementObjectCollection queryCollection = searcher.Get();
    foreach (ManagementObject mo in queryCollection)
    {
        Disk disk = new Disk();
        disk.DiskName = mo["Name"].ToString();
        disk.DeviceId = mo["DeviceID"].ToString();
        disk.SystemName = mo["SystemName"].ToString();
        disk.FreeSpace = Convert.ToDecimal(mo["FreeSpace"]);
        var formattedFreeSpace = Helpers.DiskSpaceInGigabytes(disk.FreeSpace ?? 0);
        disk.FreeSpace = Decimal.Truncate(formattedFreeSpace);
        disk.TotalSpace = Convert.ToDecimal(mo["Size"]);
        var formattedTotalSpace = Helpers.DiskSpaceInGigabytes(disk.TotalSpace ?? 0);
        disk.TotalSpace = Decimal.Truncate(formattedTotalSpace);
        disk.UsedSpace = disk.TotalSpace - disk.FreeSpace;
        var HDPercentageUsed = 100 - (100 * disk.FreeSpace / disk.TotalSpace);
        disk.PercentageUsed = Convert.ToInt32(HDPercentageUsed);
        diskinfo.Add(disk);
    }
    return diskinfo;
}
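One caveat in the loop above: Win32_LogicalDisk can report a Size of 0 (an empty card reader, for example), and dividing a decimal by zero throws a DivideByZeroException. A small guard avoids that, e.g.:
//Sketch: guard the percentage computation against zero-size volumes
var HDPercentageUsed = disk.TotalSpace > 0
    ? 100 - (100 * disk.FreeSpace / disk.TotalSpace)
    : 0;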
I have logged into the server and noted that the other disks appear as Clustered Disks.
I have researched Clustered Disks and Cluster Shared Volumes a bit, but the only guides I have found use PowerShell scripting, like this one: ClusteredSharedVolume Disk Space.
My question: how can I write a WMI query that also retrieves information for Clustered Disks? Specifically, how can I adapt the query from that link to my needs:
$resources = Get-WmiObject -Namespace root\MSCluster MSCluster_Resource -Filter "Type='Physical Disk'"
$resources | foreach {
    $res = $_
    $disks = $res.GetRelated("MSCluster_Disk")
    $disks | foreach {
        $_.GetRelated("MSCluster_DiskPartition") |
            select @{N="Name"; E={$res.Name}}, @{N="Status"; E={$res.State}}, Path, VolumeLabel, TotalSize, FreeSpace
    }
} | ft

The solution requires invoking a PowerShell script from C#:
1. Add a System.Management.Automation reference
This DLL gives access to the PowerShell hosting APIs. The packages available on NuGet seem to target newer versions and were not being recognized, so I had to add the reference manually via References >> Add Reference >> Browse >> C:\Windows\assembly\GAC_MSIL\System.Management.Automation\1.0.0.0__31bf3856ad364e35
Using PowerShell, I can get disk information for all clustered disks with the command:
Get-WmiObject Win32_LogicalDisk -ComputerName (ComputerName or IPAddress here)
2. Invoke this command in code
I added a helper method that checks whether the remote host has any clustered disks. If it does, use the PowerShell script; if not, use the normal WMI query:
public List<Disk> GetEnvironmentStatistics()
{
    List<Disk> diskinfo = new List<Disk>();
    var serverIP = Convert.ToString(System.Web.HttpContext.Current.Session["ServerIP"]);
    var clusterDisksStatus = CheckForClusteredDisks(serverIP);
    if (Helpers.HasClusteredDisks(clusterDisksStatus))
    {
        string getClusterSharedVolumesStatistics = "Get-WmiObject Win32_LogicalDisk -ComputerName " + serverIP;
        PowerShell ps = PowerShell.Create();
        ps.AddScript(getClusterSharedVolumesStatistics);
        var results = ps.Invoke();
        foreach (var psobject in results)
        {
            if (psobject != null)
            {
                Disk clusteredDisk = new Disk();
                clusteredDisk.DiskName = Convert.ToString(psobject.Members["DeviceID"].Value);
                clusteredDisk.FreeSpace = Convert.ToDecimal(psobject.Members["FreeSpace"].Value);
                var formattedFreeSpace = Helpers.DiskSpaceInGigabytes(clusteredDisk.FreeSpace ?? 0);
                clusteredDisk.FreeSpace = Decimal.Truncate(formattedFreeSpace);
                clusteredDisk.TotalSpace = Convert.ToDecimal(psobject.Members["Size"].Value);
                var formattedTotalSpace = Helpers.DiskSpaceInGigabytes(clusteredDisk.TotalSpace ?? 0);
                clusteredDisk.TotalSpace = Decimal.Truncate(formattedTotalSpace);
                clusteredDisk.UsedSpace = clusteredDisk.TotalSpace - clusteredDisk.FreeSpace;
                clusteredDisk.VolumeName = Convert.ToString(psobject.Members["VolumeName"].Value);
                diskinfo.Add(clusteredDisk);
            }
        }
    }
    else
    {
        //Add System.Management to access these utilities
        ConnectionOptions options = new ConnectionOptions
        {
            Username = Convert.ToString(System.Web.HttpContext.Current.Session["Username"]),
            Password = Convert.ToString(System.Web.HttpContext.Current.Session["Password"]),
            Authority = Convert.ToString(System.Web.HttpContext.Current.Session["Authority"]),
        };
        //root - root of the tree, cimv2 - version
        ManagementScope scope = new ManagementScope("\\\\" + serverIP + "\\root\\CIMV2", options);
        scope.Connect();
        SelectQuery query = new SelectQuery("Select * from Win32_LogicalDisk");
        ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);
        ManagementObjectCollection queryCollection = searcher.Get();
        try
        {
            foreach (ManagementObject mo in queryCollection)
            {
                Disk disk = new Disk();
                disk.DiskName = mo["Name"].ToString();
                disk.DeviceId = mo["DeviceID"].ToString();
                disk.SystemName = mo["SystemName"].ToString();
                disk.FreeSpace = Convert.ToDecimal(mo["FreeSpace"]);
                var formattedFreeSpace = Helpers.DiskSpaceInGigabytes(disk.FreeSpace ?? 0);
                disk.FreeSpace = Decimal.Truncate(formattedFreeSpace);
                disk.TotalSpace = Convert.ToDecimal(mo["Size"]);
                var formattedTotalSpace = Helpers.DiskSpaceInGigabytes(disk.TotalSpace ?? 0);
                disk.TotalSpace = Decimal.Truncate(formattedTotalSpace);
                disk.UsedSpace = disk.TotalSpace - disk.FreeSpace;
                var HDPercentageUsed = 100 - (100 * disk.FreeSpace / disk.TotalSpace);
                disk.PercentageUsed = Convert.ToInt32(HDPercentageUsed);
                diskinfo.Add(disk);
            }
        }
        catch (DivideByZeroException ex)
        {
            ExceptionLogger.SendErrorToText(ex);
        }
    }
    return diskinfo;
}
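One caveat with this approach: PowerShell implements IDisposable, so in production code the PowerShell.Create() call above is better wrapped in a using block so the runspace is released.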
And the helper method used for the check:
public static bool HasClusteredDisks(int status)
{
    const int hasClusteredDisks = 1;
    return status == hasClusteredDisks;
}
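The CheckForClusteredDisks call referenced above is not shown in the original. Here is a minimal sketch of what it could look like, assuming it probes the cluster WMI namespace and returns 1 when Physical Disk cluster resources are present (the name and the int return convention come from the calling code; the body is my own assumption):
public int CheckForClusteredDisks(string serverIP)
{
    //Sketch only: probe root\MSCluster on the remote host; credentials are
    //omitted for brevity - pass the same ConnectionOptions as above if needed.
    try
    {
        var scope = new ManagementScope("\\\\" + serverIP + "\\root\\MSCluster");
        scope.Connect();
        var query = new SelectQuery("SELECT * FROM MSCluster_Resource WHERE Type = 'Physical Disk'");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            //Count > 0 means the host owns at least one clustered physical disk
            return searcher.Get().Count > 0 ? 1 : 0;
        }
    }
    catch (ManagementException)
    {
        //The namespace does not exist, so the host is not a cluster node
        return 0;
    }
}
If root\MSCluster is reachable this way, the MSCluster_DiskPartition sizes from the question's PowerShell snippet can also be read with the same searcher pattern, which would avoid the PowerShell hosting dependency entirely.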

Related

Defining Log Analytics data sources via C#

How can I add Windows Performance Counters to a Log Analytics workspace via a C# application?
I found out this can be done via the OperationalInsightsManagementClient. The code below will add a performance counter data source.
void AddWorkspaceDatasources(string resourceGroupName, string objectName, string counterName)
{
    var client = new OperationalInsightsManagementClient(GetCredentials()) { SubscriptionId = subscriptionId };
    var existing = client.DataSources.ListByWorkspace(
        new ODataQuery<DataSourceFilter> { Filter = "kind eq 'WindowsPerformanceCounter'" },
        resourceGroupName,
        resourceGroupName);
    if (!existing.Any(c => (c.Properties as JObject)["objectName"].ToString() == objectName
                        && (c.Properties as JObject)["counterName"].ToString() == counterName))
    {
        var properties = new JObject();
        properties["counterName"] = counterName;
        properties["instanceName"] = "*";
        properties["intervalSeconds"] = 10;
        properties["objectName"] = objectName;
        properties["collectorType"] = "Default";
        client.DataSources.CreateOrUpdate(
            resourceGroupName,
            resourceGroupName,
            Regex.Replace(objectName, "[^a-zA-Z0-9]", "") + Regex.Replace(counterName, "[^a-zA-Z0-9]", ""),
            new DataSource
            {
                Kind = "WindowsPerformanceCounter",
                Properties = properties
            });
    }
}
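Note that the snippet passes resourceGroupName as both the resource-group and the workspace-name argument, so it assumes the workspace is named after its resource group. A call might look like this (object and counter names are illustrative, written as they appear in perfmon):
//Hypothetical invocation
AddWorkspaceDatasources("my-resource-group", "Processor", "% Processor Time");
AddWorkspaceDatasources("my-resource-group", "Memory", "Available MBytes");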

How to get Logic App Metrics?

I'm trying to get Logic Apps metrics like BillableExecutions, Latency, etc. in my console application.
Currently I'm able to list the logic app runs, triggers, and versions using the .NET client Microsoft.Azure.Management, but it doesn't seem to expose the monitoring APIs.
Code excerpt:
private static void Main(string[] args)
{
    var token = GetTokenCredentials();
    var client = new LogicManagementClient(token, new HttpClientHandler())
    {
        SubscriptionId = new AzureSubscription().SubscriptionId
    };
    var dataQuery = new ODataQuery<WorkflowFilter>
    {
        Top = 50
    };
    using (client)
    {
        var logicAppsWorkFlows = client.Workflows.ListBySubscription(dataQuery);
        foreach (var logicAppsWorkFlow in logicAppsWorkFlows)
        {
            var runs = GetWorkflowRuns(client, logicAppsWorkFlow.Id.Split('/')[4], logicAppsWorkFlow.Name);
            Console.WriteLine(runs.Count);
        }
        Console.WriteLine(logicAppsWorkFlows.Count());
    }
}
Can someone tell me how to access Logic Apps metrics? Is there a client similar to Microsoft.Azure.Management for accessing metrics data?
Update 2
I have found a client DLL, still in prerelease, that is used to get metrics. Below is my current code:
var token = GetTokenCredentials();
var insightsClient = new InsightsClient(token, new HttpClientHandler())
{
    SubscriptionId = new AzureSubscription().SubscriptionId
};
var logicManagementClient = new LogicManagementClient(token, new HttpClientHandler())
{
    SubscriptionId = new AzureSubscription().SubscriptionId
};
var dataQuery = new ODataQuery<WorkflowFilter>
{
    Top = 50
};
using (logicManagementClient)
{
    var logicAppsWorkFlows = logicManagementClient.Workflows.ListBySubscription(dataQuery);
    foreach (var logicAppsWorkFlow in logicAppsWorkFlows)
    {
        using (insightsClient)
        {
            var metricsDataQuery = new ODataQuery<Metric>
            {
                Filter = "name.value eq 'ActionLatency' and startTime ge '2014-07-16'"
            };
            IEnumerable<Metric> metricsList = null;
            try
            {
                metricsList = insightsClient.Metrics.List(logicAppsWorkFlow.Id, metricsDataQuery);
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
            }
            if (metricsList == null) continue;
            foreach (var metric in metricsList)
            {
                foreach (var metricValue in metric.Data)
                {
                    Console.WriteLine(metric.Name.Value + " = " + metricValue.Total);
                }
            }
        }
    }
}
I'm getting an exception saying the filter string is not valid. I'm following the filter string structure described here:
https://learn.microsoft.com/en-us/rest/api/monitor/filter-syntax
Can someone tell me what I'm doing wrong here?
Thanks
It looks like ge is not allowed on the Logic Apps startTime field for some reason. I had to change the code as below to make it work:
using (logicManagementClient)
{
    var logicAppsWorkFlows = logicManagementClient.Workflows.ListBySubscription(dataQuery);
    foreach (var logicAppsWorkFlow in logicAppsWorkFlows)
    {
        using (insightsClient)
        {
            var metricsDataQuery = new ODataQuery<Metric>
            {
                Filter = "startTime eq " + DateTime.Now.AddDays(-1).ToString("yyyy-MM-dd") + " and name.value eq 'BillableTriggerExecutions' and endTime eq " + DateTime.Now.ToString("yyyy-MM-dd")
            };
            var query = metricsDataQuery.GetQueryString();
            Console.WriteLine(query);
            IEnumerable<Metric> metricsList = null;
            try
            {
                //Throws an exception if there is no metrics data
                //TODO: Check whether the logic app ran at least one time
                metricsList = insightsClient.Metrics.List(logicAppsWorkFlow.Id, metricsDataQuery);
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
            }
            if (metricsList == null) continue;
            foreach (var metric in metricsList)
            {
                foreach (var metricValue in metric.Data)
                {
                    Console.WriteLine(metric.Name.Value + " = " + metricValue.Total);
                }
            }
        }
    }
}
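For reference, the Filter built above renders to a string of this shape (dates are examples): startTime eq 2017-08-15 and name.value eq 'BillableTriggerExecutions' and endTime eq 2017-08-16.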

How do you add diagnostics to an Azure cloud service using just the Microsoft.WindowsAzure.Management libraries?

I don't want PowerShell involved at all. I can create the cloud service just fine, and I have my diagnostics config file in the root of the worker role. How do you turn on the extension, though?
Found out myself:
var etcs = cloudClient.HostedServices.ListAvailableExtensions();
var et = etcs.FirstOrDefault(p => p.Type == "PaaSDiagnostics");
cloudClient.HostedServices.AddExtension("agent1", new Microsoft.WindowsAzure.Management.Compute.Models.HostedServiceAddExtensionParameters()
{
    Type = et.Type,
    ProviderNamespace = et.ProviderNameSpace,
    Id = "testext",
    Version = et.Version,
    PublicConfiguration = File.ReadAllText(@"PubConfig.xml"),
    PrivateConfiguration = "<?xml version=\"1.0\" encoding=\"utf-8\"?><PrivateConfig xmlns=\"http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration\"><StorageAccount name=\"store\" key=\"" + ks.SecondaryKey + "\"></StorageAccount></PrivateConfig>"
});
var id = cloudClient.Deployments.Create("agent1", Microsoft.WindowsAzure.Management.Compute.Models.DeploymentSlot.Production, new Microsoft.WindowsAzure.Management.Compute.Models.DeploymentCreateParameters()
{
    Name = "test",
    Configuration = File.ReadAllText(@"ServiceConfiguration.Cloud.cscfg"),
    PackageUri = blob.Uri,
    Label = "Test",
    StartDeployment = true,
    ExtensionConfiguration = new Microsoft.WindowsAzure.Management.Compute.Models.ExtensionConfiguration()
    {
        AllRoles = new[] { new Microsoft.WindowsAzure.Management.Compute.Models.ExtensionConfiguration.Extension()
        {
            Id = "testext",
            State = "Enable"
        }}
    }
});
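For context, PubConfig.xml here is presumably the <PublicConfig> section of the role's diagnostics configuration (the .wadcfgx file), while the PrivateConfiguration string supplies the storage account key inline, as shown.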

Azure server not letting me use a NuGet package

I have a website hosted on Azure that includes a Web API, which I'm using to develop an Android app. I'm trying to upload a media file to the server, where it's encoded by a media encoder and saved to a path. The encoder library is called "Media Toolkit", which I found here: https://www.nuget.org/packages/MediaToolkit/1.0.0.3
My server-side code looks like this:
[HttpPost]
[Route("upload")]
public async Task<HttpResponseMessage> Upload(uploadFileModel model)
{
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    if (ModelState.IsValid)
    {
        string thumbname = "";
        string resizedthumbname = Guid.NewGuid() + "_yt.jpg";
        string FfmpegPath = Encoding_Settings.FFMPEGPATH;
        string tempFilePath = Path.Combine(HttpContext.Current.Server.MapPath("~/video"), model.fileName);
        string pathToFiles = HttpContext.Current.Server.MapPath("~/video");
        string pathToThumbs = HttpContext.Current.Server.MapPath("~/contents/member/" + model.username + "/thumbs");
        string finalPath = HttpContext.Current.Server.MapPath("~/contents/member/" + model.username + "/flv");
        string resizedthumb = Path.Combine(pathToThumbs, resizedthumbname);
        var outputPathVid = new MediaFile { Filename = Path.Combine(finalPath, model.fileName) };
        var inputPathVid = new MediaFile { Filename = Path.Combine(pathToFiles, model.fileName) };
        int maxWidth = 380;
        int maxHeight = 360;
        var namewithoutext = Path.GetFileNameWithoutExtension(Path.Combine(pathToFiles, model.fileName));
        thumbname = model.VideoThumbName;
        string oldthumbpath = Path.Combine(pathToThumbs, thumbname);
        var fileName = model.fileName;
        try
        {
            File.WriteAllBytes(tempFilePath, model.data);
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message);
        }
        using (var engine = new Engine())
        {
            engine.GetMetadata(inputPathVid);
            //Grab a frame from the video (here at 0 seconds) to use as the thumbnail
            var outputPathThumb = new MediaFile { Filename = Path.Combine(pathToThumbs, thumbname + ".jpg") };
            var options = new ConversionOptions { Seek = TimeSpan.FromSeconds(0), CustomHeight = 360, CustomWidth = 380 };
            engine.GetThumbnail(inputPathVid, outputPathThumb, options);
        }
        Image image = Image.FromFile(Path.Combine(pathToThumbs, thumbname + ".jpg"));
        //var ratioX = (double)maxWidth / image.Width;
        //var ratioY = (double)maxHeight / image.Height;
        //var ratio = Math.Min(ratioX, ratioY);
        var newWidth = (int)(maxWidth);
        var newHeight = (int)(maxHeight);
        var newImage = new Bitmap(newWidth, newHeight);
        Graphics.FromImage(newImage).DrawImage(image, 0, 0, newWidth, newHeight);
        Bitmap bmp = new Bitmap(newImage);
        bmp.Save(Path.Combine(pathToThumbs, thumbname + "_resized.jpg"));
        //File.Delete(Path.Combine(pathToThumbs, thumbname));
        using (var engine = new Engine())
        {
            var conversionOptions = new ConversionOptions
            {
                VideoSize = VideoSize.Hd720,
                AudioSampleRate = AudioSampleRate.Hz44100,
                VideoAspectRatio = VideoAspectRatio.Default
            };
            engine.GetMetadata(inputPathVid);
            engine.Convert(inputPathVid, outputPathVid, conversionOptions);
        }
        File.Delete(tempFilePath);
        Video_Struct vd = new Video_Struct();
        vd.CategoryID = 0; //store category name or term instead of category id
        vd.Categories = "";
        vd.UserName = model.username;
        vd.Title = "";
        vd.Description = "";
        vd.Tags = "";
        vd.Duration = inputPathVid.Metadata.Duration.ToString();
        vd.Duration_Sec = Convert.ToInt32(inputPathVid.Metadata.Duration.Seconds.ToString());
        vd.OriginalVideoFileName = model.fileName;
        vd.VideoFileName = model.fileName;
        vd.ThumbFileName = thumbname + "_resized.jpg";
        vd.isPrivate = 0;
        vd.AuthKey = "";
        vd.isEnabled = 1;
        vd.Response_VideoID = 0; //video responses
        vd.isResponse = 0;
        vd.isPublished = 1;
        vd.isReviewed = 1;
        vd.Thumb_Url = "none";
        //vd.FLV_Url = flv_url;
        vd.Embed_Script = "";
        vd.isExternal = 0; //website own video, 1: embed video
        vd.Type = 0;
        vd.YoutubeID = "";
        vd.isTagsreViewed = 1;
        vd.Mode = 0; //filter videos based on website sections
        //vd.ContentLength = f_contentlength;
        vd.GalleryID = 0;
        long videoid = VideoBLL.Process_Info(vd, false);
        return result;
    }
    else
    {
        throw new HttpResponseException(Request.CreateResponse(HttpStatusCode.NotAcceptable, "This request is not properly formatted"));
    }
}
When the debugger hits the line using (var engine = new Engine()), a 500 Internal Server Error is thrown. I don't get this error when testing on my local IIS server. Since it works fine locally and not on the Azure-hosted server, I figured it had to do with the Azure service rather than an error in my code. If that is the case, how can I get around this issue? I don't want to use Azure Blob Storage, as it would require a lot of changes to my code. Does anyone have any idea what might be the issue?
Any helpful suggestions are appreciated.
Server.MapPath works differently on Azure Web Apps - change to:
string pathToFiles = HttpContext.Current.Server.MapPath("~//video");
Also, see this SO post for another approach.

Drupal removing a node reference from a node

OK, I'm trying to process a script, both PHP and JavaScript, where I move a particular content-type NODE from one reference to another. This is the structure:
I have a PROJECT
Inside each PROJECT are PAGES
Inside each PAGE are CALLOUTS
and inside each CALLOUT are PRODUCTS.
What I want to do is take a PRODUCT from one CALLOUT to another CALLOUT. I am able to merge these, but now I want to delete the first instance. An example:
I have PRODUCT AAG-794200 on PAGE 6, CALLOUT A. I am merging that PRODUCT into PAGE 6, CALLOUT B.
I can get the product to merge, but now I need to remove it from CALLOUT A. Here is my code:
$merge = explode(',', $merge); //Merge SKUs
$mpages = explode(',', $mpages); //Merge Pages
$mcallouts = explode(',', $mcallouts); //Merge Callouts
$mcallout_nid = explode(',', $mcallout_nid); //Merge Current callout
$length = count($merge);
$e = 0;
while ($e < $length) {
    //Where is the SKU going to?
    $to_callout_letter = strtoupper($mcallouts[$e]);
    $to_page_num = $mpages[$e];
    $sku = $merge[$e];
    $from_callout = $mcallout_nid[$e];
    //Where is the SKU coming from?
    $other_callout = node_load($from_callout);
    //Need page ID of current callout for project purposes
    $page_nid = $other_callout->field_page[0]['nid'];
    $page = node_load($page_nid);
    //Need the project NID
    $project_nid = $page->field_project[0]['nid'];
    //We need to get the NID of the page we are going to
    $page_nid = db_query('SELECT * FROM content_type_page WHERE field_page_order_value = "%d" and field_project_nid = "%d" ORDER BY vid DESC LIMIT 1', $to_page_num, $project_nid);
    $page_nid_res = db_fetch_array($page_nid);
    $to_page_nid = $page_nid_res['nid'];
    //We need to get the NID of the callout here
    $co_nid = db_query('SELECT * FROM content_type_callout WHERE field_identifier_value = "%s" and field_page_nid = "%d"', $to_callout_letter, $to_page_nid);
    $co_nid_res = db_fetch_array($co_nid);
    $to_callout_letter_nid = $co_nid_res['nid'];
    //Load the present callout the SKU resides on
    $f_callout = node_load($from_callout);
    $callout = node_load($to_callout_letter_nid);
    $long = count($f_callout->field_skus);
    $deletecallout = array();
    foreach ($f_callout->field_skus as $skus) {
        $s = 0;
        while ($s < $long) {
            if ($skus['nid'] == $sku) {
                $callout->field_skus[] = $skus;
                $s++;
            } else {
                $deletecallout[] = $skus;
                $s++;
            }
        }
    }
    foreach ($other_callout->field_images as $old_image) {
        $callout->field_images[] = $old_image;
    }
    foreach ($other_callout->field_line_art as $old_image) {
        $callout->field_line_art[] = $old_image;
    }
    foreach ($other_callout->field_swatches as $old_image) {
        $callout->field_swatches[] = $old_image;
    }
    $callout->field_copy_text[0]['value'] .= $other_callout->field_copy_text[0]['value'];
    $callout->field_notes[0]['value'] .= $other_callout->field_notes[0]['value'];
    $callout->field_image_notes[0]['value'] .= $other_callout->field_image_notes[0]['value'];
    $callout->field_status[0]['value'] = 'In Process';
    node_save($callout);
This causes the PRODUCTS to merge, but it does not delete the original.
Thanks for any help. I know it's something simple, and it will be a palm-to-face moment.
I was actually able to solve this myself. @Chris - the brace ended after node_save($callout); I must have missed it when I copied and pasted. However, here is the code I ended up using:
$merge = explode(',', $merge); //Merge SKUs
$mpages = explode(',', $mpages); //Merge Pages
$mcallouts = explode(',', $mcallouts); //Merge Callouts
$mcallout_nid = explode(',', $mcallout_nid); //Merge Current callout
if ($merge[0] !== '0') {
    //Store NIDs of Old Callouts to the proper SKU
    $oc_sku = array();
    $oc_sku_e = count($merge);
    $oc_sku_ee = 0;
    while ($oc_sku_ee < $oc_sku_e) {
        $curr_sku = $merge[$oc_sku_ee];
        $curr_oldco = $mcallout_nid[$oc_sku_ee];
        $oc_sku[$curr_sku] = $curr_oldco;
        $oc_sku_ee++;
    }
    //Convert page numbers to page_nids
    $pc = count($mpages); //How many pages are we getting
    $pc_e = 0;
    while ($pc_e < $pc) {
        $nid = $mpages[$pc_e];
        $sql = db_query('SELECT * FROM content_type_page WHERE field_page_order_value = "%d" AND field_project_nid = "%d" ORDER BY vid DESC LIMIT 1', $nid, $project_nid);
        $res = db_fetch_array($sql);
        if ($res) {
            $npage_arr[] = $res['nid'];
        } else { //If there is no page, we need to create it here.
            $node = new StdClass();
            $node->type = 'page';
            $node->title = 'Page ' . $nid . ' of ' . $project->title;
            $node->field_project[0]['nid'] = $project_nid;
            $node->field_page_order[0]['value'] = $nid;
            $node = node_submit($node);
            node_save($node);
            $npage_arr[] = $node->nid;
        }
        $pc_e++;
    }
    //Convert callout letters to callout_nids
    $coc = count($mcallouts);
    $coc_e = 0;
    while ($coc_e < $coc) {
        $cnid = strtoupper($mcallouts[$coc_e]);
        $pnid = $npage_arr[$coc_e];
        $page_node = node_load($pnid);
        $sql = db_query('SELECT * FROM content_type_callout WHERE field_identifier_value = "%s" AND field_page_nid = "%d" ORDER BY vid DESC LIMIT 1', $cnid, $pnid);
        $res = db_fetch_array($sql);
        if ($res) {
            $cpage_arr[] = $res['nid'];
        } else { //If there is no callout that exists, we need to make it here.
            $callout_node = new stdClass();
            $callout_node->type = 'callout';
            $callout_node->field_page[0]['nid'] = $pnid;
            $callout_node->field_identifier[0]['value'] = $cnid;
            $callout_node->field_sequence[0]['value'] = 0;
            $callout_node->title = "Callout " . $callout . " on page " . $page_node->field_page_order[0]['value'];
            $callout_node->field_project[0]['nid'] = $project->nid;
            $callout_node->field_wholesaler[0]['value'] = $project->field_wholesaler[0]['value'];
            $callout_node->field_skus = array();
            $callout_node->status = 1;
            $callout_node->uid = 1;
            $callout_node->revision = true;
            $callout_node = node_submit($callout_node);
            node_save($callout_node);
            $cpage_arr[] = $callout_node->nid;
        }
        $coc_e++;
    }
    //Now we need to assign the skus to the appropriate callout for processing
    $coc2 = count($cpage_arr);
    $coc_e2 = 0;
    while ($coc_e2 < $coc2) {
        $co = $cpage_arr[$coc_e2];
        if ($co !== '0') {
            $sku = $merge[$coc_e2];
            $m_arr[$co][] = $sku;
        }
        $coc_e2++;
    }
    //We need a way to centrally store all NIDs of SKUs to the callouts they belong to
    $oc_arr = array();
    $oc = count($mcallout_nid);
    $oc_e = 0;
    while ($oc_e < $oc) {
        $f_callout = $mcallout_nid[$oc_e];
        $former_callout = node_load($f_callout);
        foreach ($former_callout->field_skus as $key => $skus) {
            $oc_arr[] = $skus;
        }
        $oc_e++;
    }
    //Now we are processing the Pages/Callouts/SKUs to save
    $pc_e2 = 0;
    foreach ($m_arr as $key => $values) {
        $callout = node_load($key);
        foreach ($values as $value) {
            $oc = count($oc_arr);
            $oc_e = 0;
            while ($oc_e < $oc) {
                $skus = $oc_arr[$oc_e];
                if ($value == $skus['nid']) {
                    $callout->field_skus[] = $skus;
                    //$nid = $oc_sku[$value];
                    $old_callout_info[] = $oc_sku[$value];
                    $oc_e = $oc;
                } else {
                    $oc_e++;
                }
            }
        }
        foreach ($old_callout_info as $nid) {
            /* $nid = $oc_sku[$value]; */
            $former_callout = node_load($nid);
            foreach ($former_callout->field_images as $old_image) {
                $callout->field_images[] = $old_image;
            }
            foreach ($former_callout->field_line_art as $old_image) {
                $callout->field_line_art[] = $old_image;
            }
            foreach ($former_callout->field_swatches as $old_image) {
                $callout->field_swatches[] = $old_image;
            }
            $callout->field_copy_text[0]['value'] .= $former_callout->field_copy_text[0]['value'];
        }
        $callout->field_notes[0]['value'] .= $former_callout->field_notes[0]['value'];
        $callout->field_image_notes[0]['value'] .= $former_callout->field_image_notes[0]['value'];
        $callout->field_logos = $former_callout->field_logos;
        $callout->field_affiliations = $former_callout->field_affiliations;
        $callout->field_graphics = $former_callout->field_graphics;
        $callout->revision = 1;
        $callout->field_status[0]['value'] = 'inprocess';
        node_save($callout);
        $pc_e2++;
    }
}
I realize this can probably be simplified, but for now it works perfectly for what I'm trying to do. No complaints from the client so far. Thanks for taking a look, Drupal community.
