At first I used a Frame to navigate between my Pages, but I found that memory increases with each navigation (I have set NavigationCacheMode to Required).
The managed memory stays the same, but the unmanaged memory grows.
I then used a Dictionary to cache all my pages and a ContentPresenter as the page container, but when I set a page as the ContentPresenter's Content, the memory still increases.
private readonly Dictionary<Type, Page> navigationPages = new Dictionary<Type, Page>();

public void Navigate(Type pageType, string param)
{
    // Reuse the cached page if we have one; otherwise create and cache it.
    if (!navigationPages.TryGetValue(pageType, out var page))
    {
        page = Activator.CreateInstance(pageType) as Page;
        navigationPages[pageType] = page;
    }
    this.contentPresenter.Content = page;
}
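For reference, I navigate like this (the page types here are just placeholders for my real pages):

// Hypothetical page types; substitute your own.
Navigate(typeof(HomePage), null);
Navigate(typeof(SettingsPage), "fromHome");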
Even though PerformanceCounter is supported in .NET Core, it is not supported on Ubuntu, so is there any way to get the overall system CPU and memory usage in a .NET Core application (like Task Manager shows on Windows)?
After some searching, I got it working with the code below (parts of it come from search results). Just FYI.
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Threading;

internal static class CpuMemoryMetrics4LinuxUtils
{
    private const int DigitsInResult = 2;

    private static long totalMemoryInKb;

    /// <summary>
    /// Gets the overall system CPU usage percentage.
    /// </summary>
    /// <returns>The percentage value without the '%' sign, e.g. if the usage is
    /// 30.1234%, it returns 30.12.</returns>
    public static double GetOverallCpuUsagePercentage()
    {
        // refer to https://stackoverflow.com/questions/59465212/net-core-cpu-usage-for-machine
        var startTime = DateTime.UtcNow;
        var startCpuUsage = Process.GetProcesses().Sum(p => p.TotalProcessorTime.TotalMilliseconds);

        Thread.Sleep(500);

        var endTime = DateTime.UtcNow;
        var endCpuUsage = Process.GetProcesses().Sum(p => p.TotalProcessorTime.TotalMilliseconds);

        var cpuUsedMs = endCpuUsage - startCpuUsage;
        var totalMsPassed = (endTime - startTime).TotalMilliseconds;
        var cpuUsageTotal = cpuUsedMs / (Environment.ProcessorCount * totalMsPassed);

        return Math.Round(cpuUsageTotal * 100, DigitsInResult);
    }

    /// <summary>
    /// Gets the overall system memory usage percentage.
    /// </summary>
    /// <returns>The percentage value without the '%' sign, e.g. if the usage is
    /// 30.1234%, it returns 30.12.</returns>
    public static double GetOccupiedMemoryPercentage()
    {
        var totalMemory = GetTotalMemoryInKb();
        var usedMemory = GetUsedMemoryForAllProcessesInKb();
        var percentage = (usedMemory * 100) / totalMemory;
        return Math.Round(percentage, DigitsInResult);
    }

    private static double GetUsedMemoryForAllProcessesInKb()
    {
        var totalAllocatedMemoryInBytes = Process.GetProcesses().Sum(p => p.PrivateMemorySize64);
        return totalAllocatedMemoryInBytes / 1024.0;
    }

    private static long GetTotalMemoryInKb()
    {
        // Only parse the file once; the total physical memory does not change.
        if (totalMemoryInKb > 0)
        {
            return totalMemoryInKb;
        }

        string path = "/proc/meminfo";
        if (!File.Exists(path))
        {
            throw new FileNotFoundException($"File not found: {path}");
        }

        using (var reader = new StreamReader(path))
        {
            string line;
            while (!string.IsNullOrWhiteSpace(line = reader.ReadLine()))
            {
                if (line.Contains("MemTotal", StringComparison.OrdinalIgnoreCase))
                {
                    // e.g. MemTotal: 16370152 kB
                    var parts = line.Split(':');
                    var valuePart = parts[1].Trim();
                    var numberString = valuePart.Split(' ')[0].Trim();

                    // InvalidDataException is used instead of FileFormatException,
                    // which lives in WindowsBase and is not available cross-platform.
                    var result = long.TryParse(numberString, out totalMemoryInKb);
                    return result
                        ? totalMemoryInKb
                        : throw new InvalidDataException($"Cannot parse 'MemTotal' value from the file {path}.");
                }
            }

            throw new InvalidDataException($"Cannot find the 'MemTotal' property in the file {path}.");
        }
    }
}
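A quick usage sketch; note that the 500 ms sampling window in GetOverallCpuUsagePercentage means each call blocks briefly:

// Minimal usage example; output formatting is illustrative.
var cpu = CpuMemoryMetrics4LinuxUtils.GetOverallCpuUsagePercentage();
var mem = CpuMemoryMetrics4LinuxUtils.GetOccupiedMemoryPercentage();
Console.WriteLine($"CPU: {cpu}%  Memory: {mem}%");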
You have to rely on the OS-specific utilities that provide CPU and memory information.
Run the command from your application and read/parse the output returned.
I found an article that looks in line with what you are trying to achieve:
Reading Windows and Linux memory metrics with .NET Core
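As a minimal sketch of that approach, assuming the Linux free utility is on the PATH (the method name and tuple shape here are my own, and column order can vary between distributions, so treat the parsing as illustrative):

// Minimal sketch: shell out to `free -m` and parse the "Mem:" row.
private static (long totalMb, long usedMb) GetMemoryFromFree()
{
    var psi = new ProcessStartInfo("free", "-m")
    {
        RedirectStandardOutput = true,
        UseShellExecute = false
    };
    using (var process = Process.Start(psi))
    {
        string output = process.StandardOutput.ReadToEnd();
        process.WaitForExit();

        foreach (var line in output.Split('\n'))
        {
            // e.g. "Mem:  15995  4260  9833 ..."
            if (line.StartsWith("Mem:"))
            {
                var parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                return (long.Parse(parts[1]), long.Parse(parts[2]));
            }
        }
    }
    throw new InvalidOperationException("Could not parse the output of 'free -m'.");
}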
I am trying to write a processing screen that allocates stock to sales orders in FIFO order. The processing screen lists all the sales orders for a period for allocation.
I have gone through the LSSOLine code and am not able to figure out where the allocation is done. Does anybody know how to do it?
Update
I have tried the following code and it is working. Is there any better way to do it?
private static void DoStockAllocation(SOLine row, SOOrderEntry grp)
{
try
{
grp.Document.Current = PXSelect<
SOOrder,
Where<SOOrder.orderType, Equal<Required<SOOrder.orderType>>,
And<SOOrder.orderNbr, Equal<Required<SOOrder.orderNbr>>>>>
.Select(grp, row.OrderType, row.OrderNbr);
if (grp.Document.Current != null && grp.Document.Current.Status == SOOrderStatus.Open)
{
grp.Transactions.Current = row;
PXSelectBase<INLocationStatus> cmd = new PXSelectReadonly2<INLocationStatus,
InnerJoin<INLocation, On<INLocation.locationID, Equal<INLocationStatus.locationID>>,
LeftJoin<INSiteStatus, On<INSiteStatus.inventoryID, Equal<INLocationStatus.inventoryID>,
And<INSiteStatus.subItemID, Equal<INLocationStatus.subItemID>,
And<INSiteStatus.siteID, Equal<INLocationStatus.siteID>>>>>>,
Where<INLocationStatus.inventoryID, Equal<Required<INLocationStatus.inventoryID>>,
And<INLocationStatus.subItemID, Equal<Required<INLocationStatus.subItemID>>,
And<INLocationStatus.siteID, Equal<Required<INLocationStatus.siteID>>,
And<INLocation.salesValid, Equal<boolTrue>,
And<INLocation.inclQtyAvail, Equal<boolTrue>,
And<INLocationStatus.qtyOnHand, Greater<decimal0>>>>>>>>(grp);
foreach (PXResult<INLocationStatus, INLocation, INSiteStatus> ln in cmd.Select(row.InventoryID,row.SubItemID,row.SiteID))
{
INLocationStatus locationStatus = ln;
INSiteStatus siteStatus = ln;
SiteStatus accumsiteavail = new SiteStatus();
PXCache<INSiteStatus>.RestoreCopy(accumsiteavail, siteStatus);
accumsiteavail = (SiteStatus)grp.Caches[typeof(SiteStatus)].Insert(accumsiteavail);
decimal? AvailableQty = 0m;
decimal? SiteAvailableQty = locationStatus.QtyHardAvail;//siteStatus.QtyHardAvail + accumsiteavail.QtyHardAvail;
AvailableQty = SiteAvailableQty;
if (AvailableQty <= 0m)
{
continue;
}
if (row.LocationID == null)
{
row.LocationID = locationStatus.LocationID;
grp.Transactions.Update(row);
}
SOLineSplit split = new SOLineSplit();
if ( grp.splits.Select().Count > 0)
{
split = grp.splits.Select(row.OrderType, row.OrderNbr, row.LineNbr);
}
else
{
split = new SOLineSplit();
split = grp.splits.Insert(split);
split.InventoryID = row.InventoryID;
split.SiteID = row.SiteID;
split.OrderType = row.OrderType;
split.OrderNbr = row.OrderNbr;
split.LineNbr = row.LineNbr;
split.UOM = row.UOM;
split = PXCache<SOLineSplit>.CreateCopy(grp.splits.Update(split));
}
//split.LocationID = locationStatus.LocationID;
split.Qty = (AvailableQty < row.OrderQty) ? AvailableQty : row.OrderQty;
split.IsAllocated = true;
grp.splits.Update(split);
break;
}
grp.Save.Press();
}
}
catch (Exception ex)
{
// Swallowing exceptions here hides allocation failures; at minimum log them.
PXTrace.WriteError(ex);
}
}
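For what it's worth, a processing screen would typically run this from its process delegate. A minimal sketch, assuming a PXProcessing<SOLine> view named Lines on a hypothetical processing graph:

public PXProcessing<SOLine> Lines;

public AllocationProcess()
{
    // Reuse a single SOOrderEntry instance across all selected lines.
    Lines.SetProcessDelegate(list =>
    {
        var orderEntry = PXGraph.CreateInstance<SOOrderEntry>();
        foreach (SOLine line in list)
        {
            DoStockAllocation(line, orderEntry);
        }
    });
}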
You will need to reference the combination of LSSOLine and SOLineSplitPlanID on SOLineSplit.PlanID in your processing page. Alternatively, you might be able to use an instance of SOOrderEntry to perform the updates and mark the allocation.
The following snippets are copied from the SOOrderEntry graph and are, from what I can tell, the two components that drive the allocation logic. From there you just need to mark the split lines that should be allocated, and you should be good, or at least have a start. The problem you might run into is anything that looks for the current SOOrder; you may have to set the current order before marking SOLine splits as allocated (assuming I understand your question correctly).
Manage the allocation records...
public LSSOLine lsselect;
Append the use of SOLineSplitPlanID which drives the INItemPlan records...
[PXMergeAttributes(Method = MergeMethod.Append)]
[SOLineSplitPlanID(typeof(SOOrder.noteID), typeof(SOOrder.hold), typeof(SOOrder.orderDate))]
protected virtual void SOLineSplit_PlanID_CacheAttached(PXCache sender)
{
}
I have a tight loop which runs through a load of carts, each of which contains around 10 event objects, and writes them to disk as JSON via an intermediate repository (jOliver's CommonDomain rewired with GetEventStore.com):
// create ~200,000 carts, each with ~5 events
List<Cart> testData = TestData.GenerateFrom(products);
foreach (var cart in testData)
{
count = count + (cart as IAggregate).GetUncommittedEvents().Count;
repository.Save(cart);
}
I see the disk is at 100%, but the throughput is 'low' (15 MB/sec, ~5,000 events per second). Why is this? Things I can think of are:
Since this is single-threaded, does the 25% CPU usage actually mean 100% of the one core I am on (is there any way to show which specific core my app is running on in Visual Studio)?
Am I constrained by I/O or by CPU? Can I expect better performance if I create my own thread pool, one thread per CPU?
How come I can copy a file at ~120 MB/sec, but I can only get a throughput of 15 MB/sec in my app? Is this due to writing lots of smaller packets?
Anything else I have missed?
The code I am using is from the geteventstore docs/blog:
public class GetEventStoreRepository : IRepository
{
private const string EventClrTypeHeader = "EventClrTypeName";
private const string AggregateClrTypeHeader = "AggregateClrTypeName";
private const string CommitIdHeader = "CommitId";
private const int WritePageSize = 500;
private const int ReadPageSize = 500;
private readonly IStreamNamingConvention streamNamingConvention;
private readonly IEventStoreConnection connection;
private static readonly JsonSerializerSettings serializerSettings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.None };
public GetEventStoreRepository(IEventStoreConnection eventStoreConnection, IStreamNamingConvention namingConvention)
{
this.connection = eventStoreConnection;
this.streamNamingConvention = namingConvention;
}
public void Save(IAggregate aggregate)
{
this.Save(aggregate, Guid.NewGuid(), d => { });
}
public void Save(IAggregate aggregate, Guid commitId, Action<IDictionary<string, object>> updateHeaders)
{
var commitHeaders = new Dictionary<string, object>
{
{CommitIdHeader, commitId},
{AggregateClrTypeHeader, aggregate.GetType().AssemblyQualifiedName}
};
updateHeaders(commitHeaders);
var streamName = this.streamNamingConvention.GetStreamName(aggregate.GetType(), aggregate.Identity);
var newEvents = aggregate.GetUncommittedEvents().Cast<object>().ToList();
var originalVersion = aggregate.Version - newEvents.Count;
var expectedVersion = originalVersion == 0 ? ExpectedVersion.NoStream : originalVersion - 1;
var eventsToSave = newEvents.Select(e => ToEventData(Guid.NewGuid(), e, commitHeaders)).ToList();
if (eventsToSave.Count < WritePageSize)
{
this.connection.AppendToStreamAsync(streamName, expectedVersion, eventsToSave).Wait();
}
else
{
var startTransactionTask = this.connection.StartTransactionAsync(streamName, expectedVersion);
startTransactionTask.Wait();
var transaction = startTransactionTask.Result;
var position = 0;
while (position < eventsToSave.Count)
{
var pageEvents = eventsToSave.Skip(position).Take(WritePageSize);
var writeTask = transaction.WriteAsync(pageEvents);
writeTask.Wait();
position += WritePageSize;
}
var commitTask = transaction.CommitAsync();
commitTask.Wait();
}
aggregate.ClearUncommittedEvents();
}
private static EventData ToEventData(Guid eventId, object evnt, IDictionary<string, object> headers)
{
var data = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(evnt, serializerSettings));
var eventHeaders = new Dictionary<string, object>(headers)
{
{
EventClrTypeHeader, evnt.GetType().AssemblyQualifiedName
}
};
var metadata = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(eventHeaders, serializerSettings));
var typeName = evnt.GetType().Name;
return new EventData(eventId, typeName, true, data, metadata);
}
}
It was partially mentioned in the comments, but to expand on that: the code shown is effectively single-threaded (though you use async calls, you just wait on them, so you are effectively working synchronously), so you are suffering from latency and the overhead of context switching and the EventStore protocol round trips. Either genuinely go the async route, avoiding waiting on the async tasks and parallelizing instead (EventStore likes parallelization because it can batch multiple writes), or do the batching yourself and send, for example, 20 events at a time.
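For illustration, a rough sketch of the batched route, wrapping the existing synchronous Save calls in parallel tasks (the batch size of 20 is arbitrary, and this assumes it runs inside an async method):

// Rough sketch: save carts in bounded parallel batches so EventStore can
// coalesce the writes, instead of waiting on each append sequentially.
const int batchSize = 20;
for (int i = 0; i < testData.Count; i += batchSize)
{
    var saveTasks = testData
        .Skip(i)
        .Take(batchSize)
        .Select(cart => Task.Run(() => repository.Save(cart)))
        .ToArray();
    await Task.WhenAll(saveTasks);
}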
I tried this:
<awe:WebControl x:Name="webBrowser" Cursor="None" Source="http://example.com/"/>
but the cursor still shows.
I figured that I could alter the CSS of the page by adding the following rule:
*{
cursor: none;
}
But is there a solution for when I don't have access to the actual page that I'm showing?
You can use a ResourceInterceptor and manipulate the page on the fly to insert custom CSS.
EDIT:
The following implementation should do the job. (It assumes the page loads a test.css file.)
class ManipulatingResourceInterceptor : IResourceInterceptor
{
    public ResourceResponse OnRequest(ResourceRequest request)
    {
        // Intercept the stylesheet and append our own CSS on the fly.
        if (request.Url.ToString() == "http://your.web.url/test.css")
        {
            // Fetch the original stylesheet ourselves.
            WebRequest myRequest = WebRequest.Create(request.Url);
            string webStreamContent;
            using (Stream webStream = myRequest.GetResponse().GetResponseStream())
            using (StreamReader webStreamReader = new StreamReader(webStream))
            {
                webStreamContent = webStreamReader.ReadToEnd();
            }

            // Hide the cursor everywhere.
            string extraContent = "*{cursor: none;}";
            webStreamContent += extraContent;

            byte[] responseBuffer = Encoding.UTF8.GetBytes(webStreamContent);

            // Initialize unmanaged memory to hold the array.
            int responseSize = Marshal.SizeOf(responseBuffer[0]) * responseBuffer.Length;
            IntPtr pointer = Marshal.AllocHGlobal(responseSize);
            try
            {
                // Copy the array to unmanaged memory.
                Marshal.Copy(responseBuffer, 0, pointer, responseBuffer.Length);
                return ResourceResponse.Create((uint)responseBuffer.Length, pointer, "text/css");
            }
            finally
            {
                // Data is not owned by the ResourceResponse: a copy is made
                // of the supplied buffer, so we can safely free the unmanaged memory.
                Marshal.FreeHGlobal(pointer);
            }
        }
        return null;
    }

    public bool OnFilterNavigation(NavigationRequest request)
    {
        return false;
    }
}
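If I recall the Awesomium.NET API correctly, the interceptor is registered once, globally, on the WebCore before any navigation happens; treat the exact property name as an assumption on my part:

// Register the interceptor at startup, before the WebControl loads anything.
WebCore.ResourceInterceptor = new ManipulatingResourceInterceptor();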
I'm trying to configure the Quick Launch menu to display only the ancestor and descendant nodes of the currently selected node. The menu also needs to display all the children of the root node. More simply:
Given a site map of:
RootSite
---SubSite1 = navigation set at "Display the current site, the navigation items below the current site, and the current site's siblings"
-----Heading1 = navigation set at "Display the same navigation items as the parent site"
-------Page1 = navigation set at "Display the same navigation items as the parent site"
-------Page2 = navigation set at "Display the same navigation items as the parent site"
-----Heading2 = navigation set at "Display the same navigation items as the parent site"
---SubSite2 = navigation set at "Display the current site, the navigation items below the current site, and the current site's siblings"
-----Heading1 = navigation set at "Display the same navigation items as the parent site"
SiteMapProvider configuration:
<PublishingNavigation:PortalSiteMapDataSource ID="SiteMapDS" Runat="server"
SiteMapProvider="CurrentNavSiteMapProvider" EnableViewState="true"
StartFromCurrentNode="true" ShowStartingNode="false"/>
The expected and actual behavior of the Quick Launch menu displayed at SubSite1 is:
---SubSite1
-----Heading1
-------Page1
-------Page2
-----Heading2
---SubSite2
The expected behavior of the menu after navigating to Heading1 of SubSite2:
---SubSite1
---SubSite2
-----Heading1
What I actually see after navigating to Heading1 of SubSite2:
---SubSite1
-----Heading1
-------Page1
-------Page2
-----Heading2
---SubSite2
-----Heading1
This does not match what I expect to see when Heading1's navigation is set to "Display the same navigation items as the parent site" and SubSite2 is set to "Display the current site, the navigation items below the current site, and the current site's siblings". I expect Heading1 to inherit the navigation items of SubSite2, with the SubSite1 items collapsed from view. I've also played with the various Trim... attributes without success. Any help will be greatly appreciated!
I followed @Nat's guidance into the murky world of SharePoint web parts to achieve the behavior I described above. My approach was to roll my own version of the MossMenu web part that Microsoft has released through the ECM Team Blog, which is based on the native AspMenu control. I used this control to "intercept" the native SiteMapDataSource injected through the DataSourceId attribute in the markup and to build a new XML data source that exhibits the desired behavior. I've included the final source code at the end of this wordy answer. Here are the bits from the master page markup:
<%@ Register TagPrefix="myCustom" Namespace="YourCompany.CustomWebParts"
Assembly="YourCompany.CustomWebParts, Version=1.0.0.0, Culture=neutral,
PublicKeyToken=9f4da00116c38ec5" %>
...
<myCustom:MossMenu ID="CurrentNav" runat="server" datasourceID="SiteMapDS"
orientation="Vertical" UseCompactMenus="true" StaticDisplayLevels="6"
MaximumDynamicDisplayLevels="0" StaticSubMenuIndent="5" ItemWrap="false"
AccessKey="3" CssClass="leftNav"
SkipLinkText="<%$Resources:cms,masterpages_skiplinktext%>">
<LevelMenuItemStyles>
<asp:MenuItemStyle CssClass="Nav" />
<asp:MenuItemStyle CssClass="SecNav" />
</LevelMenuItemStyles>
<StaticHoverStyle CssClass="leftNavHover"/>
<StaticSelectedStyle CssClass="leftNavSelected"/>
<DynamicMenuStyle CssClass="leftNavFlyOuts" />
<DynamicMenuItemStyle CssClass="leftNavFlyOutsItem"/>
<DynamicHoverStyle CssClass="leftNavFlyOutsHover"/>
</myCustom:MossMenu>
<PublishingNavigation:PortalSiteMapDataSource ID="SiteMapDS" Runat="server"
SiteMapProvider="CurrentNavSiteMapProvider" EnableViewState="true"
StartFromCurrentNode="true" ShowStartingNode="false"/>
...
I followed the excellent step-by-step instructions to create my custom web part in the comments section of the MossMenu web part at "Wednesday, September 19, 2007 7:20 AM by Roel". In my googling, I also found something to configure a SharePoint site to display exceptions in the same lovely way that ASP.NET does by making the web.config changes here.
I decided to call my custom behavior a "compact menu", so I created a UseCompactMenus property on the control. If you don't set this attribute to true in the markup, the control will behave identically to an AspMenu control.
My application has the user always starting from the home page at the site map root, so I can have the custom control store the initial (complete) site map when the root page is displayed. This is stored in a static string for use by the customizing behavior. If your application doesn't follow this assumption, the control will not work as expected.
On the initial application page, only the direct children of the root page are displayed in the menu. Clicking one of these menu nodes opens all the child nodes under it but keeps the sibling nodes "closed". If you click one of the other sibling nodes, it collapses the current node and opens the newly selected node. That's it, enjoy!
using System;
using System.Text;
using System.ComponentModel;
using System.Collections.Generic;
using System.Security.Permissions;
using System.Xml;
using System.Xml.Serialization;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.Design.WebControls;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;
using Microsoft.SharePoint.Security;
namespace YourCompany.CustomWebParts
{
[AspNetHostingPermission(SecurityAction.LinkDemand, Level = AspNetHostingPermissionLevel.Minimal)]
[AspNetHostingPermission(SecurityAction.InheritanceDemand, Level = AspNetHostingPermissionLevel.Minimal)]
[SharePointPermission(SecurityAction.LinkDemand, ObjectModel = true)]
[SharePointPermission(SecurityAction.InheritanceDemand, ObjectModel = true)]
[Designer(typeof(MossMenuDesigner))]
[ToolboxData("<{0}:MossMenu runat=\"server\" />")]
public class MossMenu : System.Web.UI.WebControls.Menu
{
private string idPrefix;
// a url->menuItem dictionary
private Dictionary<string, System.Web.UI.WebControls.MenuItem> menuItemDictionary =
new Dictionary<string, System.Web.UI.WebControls.MenuItem>(StringComparer.OrdinalIgnoreCase);
private bool customSelectionEnabled = true;
private bool selectStaticItemsOnly = true;
private bool performTargetBinding = true;
//** Variables used for compact menu behavior **//
private bool useCompactMenus = false;
private static bool showStartingNode;
private static string originalSiteMap;
/// <summary>
/// Controls whether or not the control compacts the site map to display only the ancestor and child nodes of the selected node plus the first-level children of the root.
/// </summary>
[Category("Behavior")]
public bool UseCompactMenus
{
get
{
return this.useCompactMenus;
}
set
{
this.useCompactMenus = value;
}
}
/// <summary>
/// Controls whether or not the control performs custom selection/highlighting.
/// </summary>
[Category("Behavior")]
public bool CustomSelectionEnabled
{
get
{
return this.customSelectionEnabled;
}
set
{
this.customSelectionEnabled = value;
}
}
/// <summary>
/// Controls whether only static items may be selected or if
/// dynamic (fly-out) items may be selected too.
/// </summary>
[Category("Behavior")]
public bool SelectStaticItemsOnly
{
get
{
return this.selectStaticItemsOnly;
}
set
{
this.selectStaticItemsOnly = value;
}
}
/// <summary>
/// Controls whether or not to bind the Target property of any menu
/// items to the Target property in the SiteMapNode's Attributes
/// collection.
/// </summary>
[Category("Behavior")]
public bool PerformTargetBinding
{
get
{
return this.performTargetBinding;
}
set
{
this.performTargetBinding = value;
}
}
/// <summary>
/// Gets the ClientID of this control.
/// </summary>
public override string ClientID
{
[SharePointPermission(SecurityAction.Demand, ObjectModel = true)]
get
{
if (this.idPrefix == null)
{
this.idPrefix = SPUtility.GetNewIdPrefix(this.Context);
}
return SPUtility.GetShortId(this.idPrefix, this);
}
}
[SharePointPermission(SecurityAction.Demand, ObjectModel = true)]
protected override void OnMenuItemDataBound(MenuEventArgs e)
{
base.OnMenuItemDataBound(e);
if (this.customSelectionEnabled)
{
// store in the url->item dictionary
this.menuItemDictionary[e.Item.NavigateUrl] = e.Item;
}
if (this.performTargetBinding)
{
// try to bind to the Target property if the data item is a SiteMapNode
SiteMapNode smn = e.Item.DataItem as SiteMapNode;
if (smn != null)
{
string target = smn["Target"];
if (!string.IsNullOrEmpty(target))
{
e.Item.Target = target;
}
}
}
}
/// <id guid="08e034e7-5872-4a31-a771-84cac1dcd53d" />
/// <owner alias="MarkWal">
/// </owner>
[SharePointPermission(SecurityAction.Demand, ObjectModel = true)]
protected override void OnPreRender(System.EventArgs e)
{
SiteMapDataSource dataSource = this.GetDataSource() as SiteMapDataSource;
SiteMapProvider provider = (dataSource != null) ? dataSource.Provider : null;
if (useCompactMenus && dataSource != null && provider != null)
{
showStartingNode = dataSource.ShowStartingNode;
SiteMapNodeCollection rootChildNodes = provider.RootNode.ChildNodes;
if (provider.CurrentNode.Equals(provider.RootNode))
{
//** Store original site map for future use in compacting menus **//
if (originalSiteMap == null)
{
//Store original SiteMapXML for future adjustments:
XmlDocument newSiteMapDoc = new XmlDocument();
newSiteMapDoc.LoadXml("<?xml version='1.0' ?>"
+ "<siteMapNode title='" + provider.RootNode.Title
+ "' url='" + provider.RootNode.Url
+ "' />");
foreach (SiteMapNode node in rootChildNodes)
{
XmlNode newNode = GetXmlSiteMapNode(newSiteMapDoc.DocumentElement, node);
newSiteMapDoc.DocumentElement.AppendChild(newNode);
//Create XML for all the child nodes for selected menu item:
NavigateSiteMap(newNode, node);
}
originalSiteMap = newSiteMapDoc.OuterXml;
}
//This is set to only display the child nodes of the root node on first view:
this.StaticDisplayLevels = 1;
}
else
{
//
//Adjust site map for this page
//
XmlDocument newSiteMapDoc = InitializeNewSiteMapXml(provider, rootChildNodes);
//Clear the current default site map:
this.DataSourceID = null;
//Create the new site map data source
XmlDataSource newSiteMap = new XmlDataSource();
newSiteMap.ID = "XmlDataSource1";
newSiteMap.EnableCaching = false; //Required to prevent redisplay of the previous menu
//Add bindings for dynamic site map:
MenuItemBindingCollection bindings = this.DataBindings;
bindings.Clear();
MenuItemBinding binding = new MenuItemBinding();
binding.DataMember = "siteMapNode";
binding.TextField = "title";
binding.Text = "title";
binding.NavigateUrlField = "url";
binding.NavigateUrl = "url";
binding.ValueField = "url";
binding.Value = "url";
bindings.Add(binding);
//Bind menu to new site map:
this.DataSource = newSiteMap;
//Assign the newly created dynamic site map:
((XmlDataSource)this.DataSource).Data = newSiteMapDoc.OuterXml;
/** this expression removes the root if initialized: **/
if (!showStartingNode)
((XmlDataSource)this.DataSource).XPath = "/siteMapNode/siteMapNode";
/** Re-initialize menu data source with new site map: **/
this.DataBind();
/** Find depth of current node: **/
int depth = 0;
SiteMapNode currNode = provider.CurrentNode;
do
{
depth++;
currNode = currNode.ParentNode;
}
while (currNode != null);
//Set the StaticDisplayLevels to match the current depth:
if (depth >= this.StaticDisplayLevels)
this.StaticDisplayLevels = depth;
}
}
base.OnPreRender(e);
// output some script to override the default menu flyout behaviour; this helps to avoid
// intermittent "Operation Aborted" errors
Page.ClientScript.RegisterStartupScript(
typeof(MossMenu),
"overrideMenu_HoverStatic",
"if (typeof(overrideMenu_HoverStatic) == 'function' && typeof(Menu_HoverStatic) == 'function')\n" +
"{\n" +
"_spBodyOnLoadFunctionNames.push('enableFlyoutsAfterDelay');\n" +
"Menu_HoverStatic = overrideMenu_HoverStatic;\n" +
"}\n",
true);
// output some script to avoid a known issue with SSL Termination and the ASP.NET
// Menu implementation. http://support.microsoft.com/?id=910444
Page.ClientScript.RegisterStartupScript(
typeof(MossMenu),
"MenuHttpsWorkaround_" + this.ClientID,
this.ClientID + "_Data.iframeUrl='/_layouts/images/blank.gif';",
true);
// adjust the fly-out indicator arrow direction for locale if not already set
if (this.Orientation == System.Web.UI.WebControls.Orientation.Vertical &&
((string.IsNullOrEmpty(this.StaticPopOutImageUrl) && this.StaticEnableDefaultPopOutImage) ||
(string.IsNullOrEmpty(this.DynamicPopOutImageUrl) && this.DynamicEnableDefaultPopOutImage)))
{
SPWeb currentWeb = SPContext.Current.Web;
if (currentWeb != null)
{
uint localeId = currentWeb.Language;
bool isBidiWeb = SPUtility.IsRightToLeft(currentWeb, currentWeb.Language);
string arrowUrl = "/_layouts/images/" + (isBidiWeb ? "largearrowleft.gif" : "largearrowright.gif");
if (string.IsNullOrEmpty(this.StaticPopOutImageUrl) && this.StaticEnableDefaultPopOutImage)
{
this.StaticPopOutImageUrl = arrowUrl;
}
if (string.IsNullOrEmpty(this.DynamicPopOutImageUrl) && this.DynamicEnableDefaultPopOutImage)
{
this.DynamicPopOutImageUrl = arrowUrl;
}
}
}
if (provider == null)
{
// if we're not attached to a SiteMapDataSource we'll just leave everything alone
return;
}
else if (this.customSelectionEnabled)
{
MenuItem selectedMenuItem = this.SelectedItem;
SiteMapNode currentNode = provider.CurrentNode;
// if no menu item is presently selected, we need to work our way up from the current
// node until we can find a node in the menu item dictionary
while (selectedMenuItem == null && currentNode != null)
{
this.menuItemDictionary.TryGetValue(currentNode.Url, out selectedMenuItem);
currentNode = currentNode.ParentNode;
}
if (this.selectStaticItemsOnly)
{
// only static items may be selected, keep moving up until we find an item
// that falls within the static range
while (selectedMenuItem != null && selectedMenuItem.Depth >= this.StaticDisplayLevels)
{
selectedMenuItem = selectedMenuItem.Parent;
}
// if we found an item to select, go ahead and select (highlight) it
if (selectedMenuItem != null && selectedMenuItem.Selectable)
{
selectedMenuItem.Selected = true;
}
}
}
}
private XmlDocument InitializeNewSiteMapXml(SiteMapProvider provider, SiteMapNodeCollection rootChildNodes)
{
/** Find the level 1 ancestor node of the current node: **/
SiteMapNode levelOneAncestorOfSelectedNode = null;
SiteMapNode currNode = provider.CurrentNode;
do
{
levelOneAncestorOfSelectedNode = (currNode.ParentNode == null ? levelOneAncestorOfSelectedNode : currNode);
currNode = currNode.ParentNode;
}
while (currNode != null);
/** Initialize base SiteMapXML **/
XmlDocument newSiteMapDoc = new XmlDocument();
newSiteMapDoc.LoadXml(originalSiteMap);
/** Prune out the child nodes that shouldn't display: **/
currNode = provider.CurrentNode;
do
{
if (currNode.ParentNode != null)
{
SiteMapNodeCollection currNodeSiblings = currNode.ParentNode.ChildNodes;
foreach (SiteMapNode siblingNode in currNodeSiblings)
{
if (siblingNode.HasChildNodes)
{
if (provider.CurrentNode.Equals(siblingNode))
{
//Remove all of the children's child nodes from display:
SiteMapNodeCollection currNodesChildren = siblingNode.ChildNodes;
foreach (SiteMapNode childNode in currNodesChildren)
{
XmlNode currentXmNode = GetCurrentXmlNode(newSiteMapDoc, childNode);
DeleteChildNodes(currentXmNode);
}
}
else if (!provider.CurrentNode.IsDescendantOf(siblingNode)
&& !levelOneAncestorOfSelectedNode.Equals(siblingNode))
{
XmlNode currentXmNode = GetCurrentXmlNode(newSiteMapDoc, siblingNode);
DeleteChildNodes(currentXmNode);
}
}
}
}
currNode = currNode.ParentNode;
}
while (currNode != null);
return newSiteMapDoc;
}
private XmlNode GetCurrentXmlNode(XmlDocument newSiteMapDoc, SiteMapNode node)
{
//Find this node in the original site map:
XmlNode currentXmNode = newSiteMapDoc.DocumentElement.SelectSingleNode(
"//siteMapNode[#url='"
+ node.Url
+ "']");
return currentXmNode;
}
private void DeleteChildNodes(XmlNode currentXmNode)
{
if (currentXmNode != null && currentXmNode.HasChildNodes)
{
//Remove child nodes:
XmlNodeList xmlNodes = currentXmNode.ChildNodes;
int lastNodeIndex = xmlNodes.Count - 1;
for (int i = lastNodeIndex; i >= 0; i--)
{
currentXmNode.RemoveChild(xmlNodes[i]);
}
}
}
private XmlNode GetXmlSiteMapNode(XmlNode currentDocumentNode, SiteMapNode currentNode)
{
XmlElement newNode = currentDocumentNode.OwnerDocument.CreateElement("siteMapNode");
XmlAttribute newAttr = currentDocumentNode.OwnerDocument.CreateAttribute("title");
newAttr.InnerText = currentNode.Title;
newNode.Attributes.Append(newAttr);
newAttr = currentDocumentNode.OwnerDocument.CreateAttribute("url");
newAttr.InnerText = currentNode.Url;
newNode.Attributes.Append(newAttr);
return newNode;
}
private void NavigateSiteMap(XmlNode currentDocumentNode, SiteMapNode currentNode)
{
foreach (SiteMapNode node in currentNode.ChildNodes)
{
//Add this node to structure:
XmlNode newNode = GetXmlSiteMapNode(currentDocumentNode, node);
currentDocumentNode.AppendChild(newNode);
if (node.HasChildNodes)
{
//Make a recursive call to add any child nodes:
NavigateSiteMap(newNode, node);
}
}
}
}
[PermissionSet(SecurityAction.LinkDemand, Name = "FullTrust")]
[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Security", "CA2117:AptcaTypesShouldOnlyExtendAptcaBaseTypes")]
public sealed class MossMenuDesigner : MenuDesigner
{
[PermissionSet(SecurityAction.Demand, Name = "FullTrust")]
protected override void DataBind(BaseDataBoundControl dataBoundControl)
{
try
{
dataBoundControl.DataBind();
}
catch
{
base.DataBind(dataBoundControl);
}
}
[PermissionSet(SecurityAction.Demand, Name = "FullTrust")]
public override string GetDesignTimeHtml()
{
System.Web.UI.WebControls.Menu menu = (System.Web.UI.WebControls.Menu)ViewControl;
int oldDisplayLevels = menu.MaximumDynamicDisplayLevels;
string designTimeHtml = string.Empty;
try
{
menu.MaximumDynamicDisplayLevels = 0;
// ASP.NET MenuDesigner has some dynamic/static item trick in design time
// to show dynamic item in design time. We only want to show preview without
// dynamic menu items.
designTimeHtml = base.GetDesignTimeHtml();
}
catch (Exception e)
{
designTimeHtml = GetErrorDesignTimeHtml(e);
}
finally
{
menu.MaximumDynamicDisplayLevels = oldDisplayLevels;
}
return designTimeHtml;
}
}
}
I personally don't like the HTML that the default menu produces (a table-based layout).
Fortunately the SharePoint team has released the code for that control.
What we have done is include that code in a project and override the Render method to do whatever we want (see the sketch after this answer). This gives you the flexibility to define exactly which parent/child relationships are displayed, as well as to set the styles on any divs you create.
On the downside, you are now coding, not configuring, and a change needs to be made to the master page you are using in order to use the control.
Worth it in my opinion. This is now a standard change we make for any site.
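A minimal sketch of such a Render override, assuming a subclass of the standard ASP.NET Menu control; the <ul>/<li> markup emitted here is illustrative, not the exact code from the released control:

using System.Web.UI;
using System.Web.UI.WebControls;

public class DivBasedMenu : Menu
{
    // Emit nested <ul>/<li> markup instead of the default table layout.
    protected override void Render(HtmlTextWriter writer)
    {
        writer.AddAttribute(HtmlTextWriterAttribute.Class, this.CssClass);
        writer.RenderBeginTag(HtmlTextWriterTag.Ul);
        foreach (MenuItem item in this.Items)
        {
            RenderItem(writer, item);
        }
        writer.RenderEndTag(); // </ul>
    }

    private void RenderItem(HtmlTextWriter writer, MenuItem item)
    {
        writer.RenderBeginTag(HtmlTextWriterTag.Li);

        writer.AddAttribute(HtmlTextWriterAttribute.Href, item.NavigateUrl);
        writer.RenderBeginTag(HtmlTextWriterTag.A);
        writer.WriteEncodedText(item.Text);
        writer.RenderEndTag(); // </a>

        // Recurse into child items so the nesting mirrors the site map.
        if (item.ChildItems.Count > 0)
        {
            writer.RenderBeginTag(HtmlTextWriterTag.Ul);
            foreach (MenuItem child in item.ChildItems)
            {
                RenderItem(writer, child);
            }
            writer.RenderEndTag(); // </ul>
        }

        writer.RenderEndTag(); // </li>
    }
}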
The approach we used to accomplish the effect you are looking for was the CSS Friendly Control Adapters. The adapters change the HTML that is rendered without changing the controls you use on your pages. You may need to tweak the menu adapter a little to get the layout you want; it only took a few lines of code for us. Once you get that working, you can use CSS to obtain the behavior you describe.