MQQueue Properties (C# API to WebSphere MQ)

I found this code to enumerate a list of queues for a QueueManager.
It works, but I see a lot of system queues, and even channel names, in the list it provides. Is there some property I can test to see if it is a "normal" user-defined queue?
ObjectType, QueueType, and Usage seemed to always give the same values for every queue name.
// GET QueueNames - this worked on 07/19/2012 - but returned a lot of system queues,
// and it was unclear how to separate user queues from system queues.
PCFMessageAgent agent = new PCFMessageAgent(mqQMgr);

// Build the query request.
PCFMessage requestMessage = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q_NAMES);
requestMessage.AddParameter(MQC.MQCA_Q_NAME, "*");

// Send the request and retrieve the response.
PCFMessage[] responses = agent.Send(requestMessage);

// Retrieve the values requested from the response.
string[] queueNames = responses[0].GetStringListParameterValue(CMQCFC.MQCACF_Q_NAMES);
//string[] objType = responses[0].GetStringListParameterValue(CMQCFC.MQIACF_OBJECT_TYPE);

int loopCounter = 0;
foreach (string queueName in queueNames)
{
    loopCounter++;
    Console.WriteLine("QueueName=" + queueName);
    try
    {
        MQQueue mqQueue = mqQMgr.AccessQueue(
            queueName,
            MQC.MQOO_OUTPUT                 // open queue for output
            + MQC.MQOO_INQUIRE              // inquire required to get CurrentDepth
            + MQC.MQOO_FAIL_IF_QUIESCING);  // but not if MQM stopping
        Console.WriteLine("QueueName=" + queueName +
            " CurrentDepth=" + mqQueue.CurrentDepth +
            " MaxDepth=" + mqQueue.MaximumDepth +
            " QueueType=" + mqQueue.QueueType +
            " Usage=" + mqQueue.Usage);
    }
    catch (MQException mex)
    {
        Console.WriteLine(mex.Message);
    }
}

For me, your sample code lists only queues, no other objects, but yes, it lists all of them. You can add another filter, requestMessage.AddParameter(MQC.MQIA_Q_TYPE, MQC.MQQT_MODEL);, to list only model queues. Other values available for MQC.MQIA_Q_TYPE are MQC.MQQT_LOCAL, MQC.MQQT_ALIAS, MQC.MQQT_CLUSTER, and MQC.MQQT_REMOTE.
All system or predefined queue names begin with SYSTEM. So you could probably use that string to filter out predefined queues after listing. Also, if you look at a queue definition, there is a DEFTYPE attribute; system-defined queues have the value PREDEFINED. But I could not add a third parameter to filter queue names by DEFTYPE; I got reason code 3014.
HTH

As Shashi noted, you will only see queue names from that PCF command.
If you only want queue names that begin with PAYROLL, then change:
requestMessage.AddParameter(MQC.MQCA_Q_NAME, "*");
to
requestMessage.AddParameter(MQC.MQCA_Q_NAME, "PAYROLL.*");
Or add an if statement to exclude the queue names you do not want to see:
if (!queueName.StartsWith("SYSTEM."))
{
// do something
}
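Putting the two answers together, a minimal sketch might look like this (same PCF classes as the question; the TrimEnd call is an assumption that the returned names are blank-padded):
// Inquire local queue names only, then drop the predefined SYSTEM.* queues.
PCFMessageAgent agent = new PCFMessageAgent(mqQMgr);
PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q_NAMES);
request.AddParameter(MQC.MQCA_Q_NAME, "*");
request.AddParameter(MQC.MQIA_Q_TYPE, MQC.MQQT_LOCAL); // local queues only
PCFMessage[] responses = agent.Send(request);
string[] queueNames = responses[0].GetStringListParameterValue(CMQCFC.MQCACF_Q_NAMES);
foreach (string rawName in queueNames)
{
    string name = rawName.TrimEnd(); // names may be blank-padded
    if (name.StartsWith("SYSTEM.")) continue; // skip predefined queues
    Console.WriteLine("User queue: " + name);
}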

Related

Acumatica return to function with PXLongOperation

I'm creating an integration for Acumatica that loads data from another application to synchronize inventory items. It uses an API call to get a list (of up to 5000 items) and then uses PXLongOperation to insert or update these items. I can't run it without this method, as the large batches (e.g. inserting 5000 stock items) will time out and crash.
The processing form is a custom table/form that retrieves this information, then parses the JSON list of items and calls a custom function on the InventoryItemMaint graph. All of that works perfectly, but it never returns to the calling function. I'd love to be able to write information to a record noting whether it was a success or failure. I've tried PXLongOperation.WaitCompletion, but that doesn't seem to change anything. I'm sure I'm not using the asynchronous nature of this correctly, but am wondering if there is a reasonable workaround.
// This is the list of items from SI
List<TEKDTools.TEKdtoolModels.Product> theItems;
if (Guid.TryParse(Convert.ToString(theRow.DtoolsID), out theCatID))
{
    // Get the list of items from dtools.
    theItems = TEKDTools.TEKdtoolsCommon.ReadOneCatalog(theCatID);
    // Start the long operation
    PXLongOperation.StartOperation(this, delegate ()
    {
        // Create the graph to make a new Stock Item
        InventoryItemMaint itemMaint = PXGraph.CreateInstance<InventoryItemMaint>();
        var itemMaintExt = itemMaint.GetExtension<InventoryItemMaintTEKExt>();
        foreach (TEKDTools.TEKdtoolModels.Product theItem in theItems)
        {
            itemMaint.Clear();
            itemMaintExt.CreateUpdateDToolsItem(theItem, true);
            PXLongOperation.WaitCompletion(itemMaint.UID);
        }
    });
}
stopWatch.Stop(); // Just using this to figure out how long things were taking.
// For fun I tried the WaitCompletion here too
PXLongOperation.WaitCompletion(this.UID);
theRow = MasterView.Current;
// Tried some random static values to see if it was writing
theRow.RowsCreated = 10;
theRow.RowsUpdated = 11;
theRow.Data2 = "Elapsed Milliseconds: " + stopWatch.ElapsedMilliseconds.ToString();
theRow.RunStart = startTime;
theRow.RunEnd = DateTime.Now;
// This never gets the record updated.
Caches[typeof(TCDtoolsBatch)].Update(theRow);
One possible solution would be to use the PXLongOperation.SetCustomInfo method. Usually this is used to update the UI thread after the long operation has finished. In this class you can subscribe to events which you can use to update rows. The definition of the class is as follows:
public class UpdateUICustomInfo : IPXCustomInfo
{
    public void Complete(PXLongRunStatus status, PXGraph graph)
    {
        // Set Code Here
    }
}
The WaitCompletion method you are using is generally meant to wait for another long operation to finish, by passing the key of that operation.
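For illustration, here is a hedged sketch of how the pieces might fit together; the body of Complete and the wiring comments are assumptions for your scenario, not tested code:
public class UpdateUICustomInfo : IPXCustomInfo
{
    public void Complete(PXLongRunStatus status, PXGraph graph)
    {
        // 'status' reports whether the long operation completed or aborted,
        // so success/failure can be recorded on the batch row here.
        if (status == PXLongRunStatus.Completed)
        {
            // e.g. look up the batch row via 'graph', set the counts, and persist.
        }
    }
}

// Inside the delegate that performs the long-running work:
// PXLongOperation.StartOperation(this, delegate ()
// {
//     PXLongOperation.SetCustomInfo(new UpdateUICustomInfo());
//     // ... insert/update the stock items as in the delegate above ...
// });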

Cosmos DB .NET SDK V3 Query With Paging example needed

I'm struggling to find a code example from MS for the V3 SDK for queries with paging; they provide examples for V2, but that SDK is a completely different code base using the "CreateDocumentQuery" method.
I've tried searching through GitHub here: https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs
I believe I'm looking for a method example using continuation tokens, on the assumption that if I cache the previously used continuation tokens in my web app, then I can page backwards as well as forwards?
I'm also not quite understanding MS's explanation that MaxItemCount doesn't actually mean it will only try to return X items, but simply limits the number of items in each search across each partition. Confusing!
Can anyone point me to the right place for a code example, please? I also tried searching through https://learn.microsoft.com/en-us/azure/cosmos-db/sql-query-pagination but it appears to lead to the older SDK (V2, I believe).
UPDATE (following comments from Gaurav below)
public async Task<(List<T>, string)> QueryWithPagingAsync(string query, int pageSize, string continuationToken)
{
    try
    {
        Container container = GetContainer();
        List<T> entities = new(); // Create a local list of type <T> objects.
        QueryDefinition queryDefinition = new QueryDefinition(query);
        using FeedIterator<T> resultSetIterator = container.GetItemQueryIterator<T>(
            queryDefinition,    // SQL query passed to this method.
            continuationToken,  // Value is always null for the first run.
            requestOptions: new QueryRequestOptions()
            {
                // Optional if we already know the partition key value.
                // Not relevant here because we're passing <T>, which could
                // be any model class passed to the generic method.
                //PartitionKey = new PartitionKey("MyPartitionKeyValue"),

                // This does not actually limit how many documents are returned if
                // what we're querying resides across multiple partitions.
                // If we set the value to 1, then control the number of times
                // the loop below performs ReadNextAsync, we can control
                // the number of items we return from this method. I'm not sure
                // whether this is the best way to go; it seems we'd be calling
                // the API X times, once per item to return?
                MaxItemCount = 1
            });

        // Set var i to zero; we'll use this to control the number of iterations
        // in the loop, and once i is equal to pageSize we exit the loop.
        // This allows us to limit the number of documents returned.
        var i = 0;
        while (resultSetIterator.HasMoreResults && i < pageSize)
        {
            FeedResponse<T> response = await resultSetIterator.ReadNextAsync();
            entities.AddRange(response);
            continuationToken = response.ContinuationToken;
            i++; // Add 1 to var i in each iteration.
        }
        return (entities, continuationToken);
    }
    catch (CosmosException ex)
    {
        //Log.Error($"Entities were not retrieved successfully - error details: {ex.Message}");
        if (ex.StatusCode == HttpStatusCode.NotFound)
        {
            return (null, null);
        }
        else { throw; }
    }
}
The above method is my latest attempt, and whilst I'm able to use and return continuation tokens, the next challenge is how to control the number of items returned from Cosmos. In my environment, the method lives in a repo where we're passing in model classes from different calling methods, so hard-coding the partition key is not practical, and I'm struggling with configuring the number of items returned. The method does control the number of items I return to the calling method further up the chain, but I'm worried that my approach results in multiple calls to Cosmos, i.e. if I set the page size to 1000 items, am I making an HTTP call to Cosmos 1000 times?
I was looking at a thread here https://stackoverflow.com/questions/54140814/maxitemcount-feed-options-property-in-cosmos-db-doesnt-work but I'm not sure the answer in that thread is a solution, and given I'm using the V3 SDK, there does not seem to be a "PageSize" parameter available in the request options.
However, I also found an official Cosmos code sample here: https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/Queries/Program.cs#L154-L186 (see the example method "QueryItemsInPartitionAsStreams", line 171), and it looks like they used a similar pattern, i.e. setting MaxItemCount to 1 and then controlling the number of items returned in the loop before exiting. I'd just like to understand better what impact, if any, this has on the RUs and API calls to Cosmos.
Please try the following code. It fetches all documents from a container with a maximum of 100 documents in a single request.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

namespace CosmosDbSQLAPISamples
{
    class Program
    {
        private static string connectionString =
            "AccountEndpoint=https://account-name.documents.azure.com:443/;AccountKey=account-key==;";
        private static string databaseName = "database-name";
        private static string containerName = "container-name";

        static async Task Main(string[] args)
        {
            CosmosClient client = new CosmosClient(connectionString);
            Container container = client.GetContainer(databaseName, containerName);
            string query = "Select * From Root r";
            string continuationToken = null;
            int pageSize = 100;
            do
            {
                var (entities, token) = await GetDataPage(container, query, continuationToken, pageSize);
                continuationToken = token;
                Console.WriteLine($"Total entities fetched: {entities.Count}; More entities available: {!string.IsNullOrWhiteSpace(continuationToken)}");
            } while (continuationToken != null);
        }

        private static async Task<(List<dynamic>, string)> GetDataPage(Container container, string query, string continuationToken, int pageSize)
        {
            List<dynamic> entities = new(); // Create a local list of dynamic objects.
            QueryDefinition queryDefinition = new QueryDefinition(query);
            QueryRequestOptions requestOptions = new QueryRequestOptions()
            {
                MaxItemCount = pageSize
            };
            FeedIterator<dynamic> resultSetIterator = container.GetItemQueryIterator<dynamic>(queryDefinition, continuationToken, requestOptions);
            FeedResponse<dynamic> response = await resultSetIterator.ReadNextAsync();
            entities.AddRange(response);
            continuationToken = response.ContinuationToken;
            return (entities, continuationToken);
        }
    }
}
UPDATE
I think I understand your concerns now. Essentially there are two things you would need to consider:
MaxItemCount - This is the maximum number of documents that will be returned by Cosmos DB in a single request. Note that you can get anywhere from 0 up to the value specified for this parameter; for example, if you specify 100 as MaxItemCount, you can get anywhere from 0 to 100 documents in a single request.
FeedIterator - This keeps track of the continuation token internally. Based on the response received, it sets HasMoreResults to true or false depending on whether a continuation token was returned. The default value of HasMoreResults is true.
Now coming to your code, when you do something like:
while (resultSetIterator.HasMoreResults)
{
    //some code here...
}
Because FeedIterator keeps track of the continuation token, this loop will return all the documents that match the query. If you notice, my code does not use this logic; I simply send the request once and then return the result.
I think setting MaxItemCount to 1 is a bad idea. If you want to fetch, say, 100 documents, you're making a minimum of 100 requests to your Cosmos DB account. If you have a hard requirement to get exactly 100 (or any fixed number of) documents from your API, you can implement your own pagination logic. For example, see the code below. It fetches a total of 1000 documents, with a maximum of 100 documents in a single request.
static async Task Main(string[] args)
{
    CosmosClient client = new CosmosClient(connectionString);
    Container container = client.GetContainer(databaseName, containerName);
    string query = "Select * From Root r";
    string continuationToken = null;
    int pageSize = 100;
    int maxDocumentsToFetch = 1000;
    List<dynamic> documents = new List<dynamic>();
    do
    {
        var numberOfDocumentsToFetch = Math.Min(pageSize, maxDocumentsToFetch);
        var (entities, token) = await GetDataPage(container, query, continuationToken, numberOfDocumentsToFetch);
        continuationToken = token;
        Console.WriteLine($"Total entities fetched: {entities.Count}; More entities available: {!string.IsNullOrWhiteSpace(continuationToken)}");
        maxDocumentsToFetch -= entities.Count;
        documents.AddRange(entities);
    } while (maxDocumentsToFetch > 0 && continuationToken != null);
}
The solution:
Summary: from the concerns raised in my question, and taking note of Gaurav Mantri's comments, if we fetch items from Cosmos in a loop, MaxItemCount does not limit the total number of results returned; it only limits the number of results per request. If we keep fetching in the loop, we end up with more results than the user may want to retrieve.
In my case, the reason for paging is to present the items back to the web app in a razor list view, where we want to set the maximum number of results returned per page.
The solution below checks the count of the items returned on each iteration of the loop; once we have received up to the MaxItemCount value (one full request's worth, or fewer if that is all there is), we break from the loop with our maximum number of items and the continuation token to use on the next method run.
I have tested the method with continuation tokens and am able to effectively page backwards and forwards. The key difference from the code in my original question is that we only call Cosmos DB once to get the desired number of results back, instead of limiting each request to one item and issuing multiple requests.
public async Task<(List<T>, string)> QueryWithPagingAsync(string query, int pageSize, string continuationToken)
{
    string unescapedContinuationToken = null;
    if (!String.IsNullOrEmpty(continuationToken)) // Check if null before unescaping.
    {
        unescapedContinuationToken = Regex.Unescape(continuationToken); // Needed in my case...
    }
    try
    {
        Container container = GetContainer();
        List<T> entities = new(); // Create a local list of type <T> objects.
        QueryDefinition queryDefinition = new(query); // Create the query definition.
        using FeedIterator<T> resultSetIterator = container.GetItemQueryIterator<T>(
            queryDefinition,             // SQL query passed to this method.
            unescapedContinuationToken,  // Value is always null for the first run.
            requestOptions: new QueryRequestOptions()
            {
                // MaxItemCount does not actually limit how many documents are
                // returned from Cosmos if what we're querying resides across
                // multiple partitions. However, this parameter does control the
                // maximum number of items returned on each request to Cosmos.
                // In the loop below, we check the count of items returned on
                // each iteration, and once we have received up to the
                // MaxItemCount value we break from the loop with our maximum
                // number of items and the continuationToken to use on the next
                // method run.
                // 'pageSize' is the max number of items we want per page in our list view.
                MaxItemCount = pageSize,
            });
        while (resultSetIterator.HasMoreResults)
        {
            FeedResponse<T> response = await resultSetIterator.ReadNextAsync();
            entities.AddRange(response);
            continuationToken = response.ContinuationToken;
            // After the first iteration we have either exactly MaxItemCount
            // items, or fewer if the query had fewer results. Either way,
            // after the first request we have at most the number of items
            // we want to return, so we break from the loop.
            if (response.Count <= pageSize) { break; }
        }
        return (entities, continuationToken);
    }
    catch (CosmosException ex)
    {
        //Log.Error($"Entities were not retrieved successfully - error details: {ex.Message}");
        if (ex.StatusCode == HttpStatusCode.NotFound)
        {
            return (null, null);
        }
        else { throw; }
    }
}
In code:
var sqlQueryText = $"SELECT * FROM c OFFSET {offset} LIMIT {limit}";
but this is more expensive (more RU/s) than using a continuation token.
When using OFFSET/LIMIT, a continuation token will still be used in the background by the Azure Cosmos SDK to get all the results.
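For reference, here is a small sketch of the OFFSET/LIMIT approach (the container variable and the page values are illustrative):
// Page 3 with a page size of 100: skip 200 documents, take the next 100.
// Note the RU charge grows with the offset, because Cosmos still has to
// read and skip the preceding documents server-side.
int offset = 200;
int limit = 100;
QueryDefinition queryDefinition = new QueryDefinition(
        "SELECT * FROM c OFFSET @offset LIMIT @limit")
    .WithParameter("@offset", offset)
    .WithParameter("@limit", limit);
using FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(queryDefinition);
List<dynamic> page = new();
while (iterator.HasMoreResults)
{
    page.AddRange(await iterator.ReadNextAsync());
}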

Spring Integration: Aggregator to expire message on timeout

I am using SI's aggregator pattern to hold events and wait for the completion events, storing them in a JDBC message store. I have created the tables INT_MESSAGE, INT_MESSAGE_GROUP and INT_GROUP_TO_MESSAGE.
Sometimes the completion event may not arrive, and in that case I want to complete and discard the event and remove it from the tables. I don't want the tables to grow big unnecessarily.
I have specified the below config in the pipeline
.expireGroupsUponCompletion(true)
.expireGroupsUponTimeout(true)
.groupTimeout(groupMessageTimeOut)
.sendPartialResultOnExpiry(false)
Would this ensure that if the completion event doesn't arrive in x minutes, the message group is expired, discarded to the null channel, and removed from the tables?
Please suggest.
Your summary is correct. Both .expireGroupsUponCompletion(true) and .expireGroupsUponTimeout(true) remove a group from the store.
The sendPartialResultOnExpiry(false) does exactly what you are asking:
if (this.sendPartialResultOnExpiry) {
    if (this.logger.isDebugEnabled()) {
        this.logger.debug("Prematurely releasing partially complete group with key ["
                + correlationKey + "] to: " + getOutputChannel());
    }
    completeGroup(correlationKey, group, lock);
}
else {
    if (this.logger.isDebugEnabled()) {
        this.logger.debug("Discarding messages of partially complete group with key ["
                + correlationKey + "] to: "
                + (this.discardChannelName != null ? this.discardChannelName : this.discardChannel));
    }
    if (this.releaseLockBeforeSend) {
        lock.unlock();
    }
    group.getMessages()
            .forEach(this::discardMessage);
}
Tell us, please, what made you confused about that configuration?

Can't confirm any actors are being created

In Service Fabric I am trying to call an ActorService and get a list of all actors. I'm not getting any errors, but no actors are returned. It's always zero.
This is how I add actors:
ActorProxy.Create<IUserActor>(
    new ActorId(uniqueName),
    "fabric:/ECommerce/UserActorService");
And this is how I try to get a list of all actors:
var proxy = ActorServiceProxy.Create(new Uri("fabric:/ECommerce/UserActorService"), 0);
ContinuationToken continuationToken = null;
CancellationToken cancellationToken = new CancellationTokenSource().Token;
List<ActorInformation> activeActors = new List<ActorInformation>();
do
{
    PagedResult<ActorInformation> page = await proxy.GetActorsAsync(continuationToken, cancellationToken);
    activeActors.AddRange(page.Items.Where(x => x.IsActive));
    continuationToken = page.ContinuationToken;
}
while (continuationToken != null);
But no matter how many users I've added, the page object will always have zero items. What am I missing?
The second argument (int) in ActorServiceProxy.Create(Uri, int, string) is the partition key (you can find out more about actor partitioning here).
The issue here is that your code checks only one partition (partitionKey = 0).
So the solution is quite simple: you have to iterate over all partitions of your service. Here is an answer with a code sample showing how to get the partitions and iterate over them.
UPDATE 2019.07.01
I didn't spot this the first time, but the reason why you aren't getting any actors returned is that you aren't creating any actors - you are creating proxies!
The reason for the confusion is that Service Fabric actors are virtual, i.e. from the user's point of view an actor always exists, while in reality Service Fabric manages the actor object's lifetime automatically, persisting and restoring its state as needed.
Here is a quote from the documentation:
An actor is automatically activated (causing an actor object to be constructed) the first time a message is sent to its actor ID. After some period of time, the actor object is garbage collected. In the future, using the actor ID again, causes a new actor object to be constructed. An actor's state outlives the object's lifetime when stored in the state manager.
In your example you never send any messages to actors!
Here is a code example I wrote in Program.cs of newly created Actor project:
// Please don't forget to replace "fabric:/Application16/Actor1ActorService" with your actor service name.
ActorRuntime.RegisterActorAsync<Actor1> (
(context, actorType) =>
new ActorService(context, actorType)).GetAwaiter().GetResult();
var actor = ActorProxy.Create<IActor1>(
ActorId.CreateRandom(),
new Uri("fabric:/Application16/Actor1ActorService"));
_ = actor.GetCountAsync(default).GetAwaiter().GetResult();
ContinuationToken continuationToken = null;
var activeActors = new List<ActorInformation>();
var serviceName = new Uri("fabric:/Application16/Actor1ActorService");
using (var client = new FabricClient())
{
var partitions = client.QueryManager.GetPartitionListAsync(serviceName).GetAwaiter().GetResult();;
foreach (var partition in partitions)
{
var pi = (Int64RangePartitionInformation) partition.PartitionInformation;
var proxy = ActorServiceProxy.Create(new Uri("fabric:/Application16/Actor1ActorService"), pi.LowKey);
var page = proxy.GetActorsAsync(continuationToken, default).GetAwaiter().GetResult();
activeActors.AddRange(page.Items);
continuationToken = page.ContinuationToken;
}
}
Thread.Sleep(Timeout.Infinite);
Pay special attention to the line:
_ = actor.GetCountAsync(default).GetAwaiter().GetResult();
This is where the first message to the actor is sent.
Hope this helps.

Sending Event date value from the phone Calendar to database

I am designing a J2ME application prototype that requires reading the user's phone calendar in order to retrieve their schedule information. I use the JSR 75 PIM API. I can actually read the date values, but while sending the values to the database, it only saves the first date. I can't seem to figure out the real problem behind this. Help please....
I use J2ME for the client side, PHP for the server, and MySQL for the database.
I tried to adapt the code of the PIM example from the Sun Wireless Toolkit, in its ItemSelectionScreen class. I modified the code like this:
String getDisplayedField(PIMItem item) throws PIMException {
    int fieldCode = Event.REVISION;
    if (item.countValues(fieldCode) != 0) {
        long b = item.getDate(fieldCode, 0);
        cal = Calendar.getInstance();
        cal.setTimeZone(TimeZone.getTimeZone("GMT"));
        cal.set(Calendar.HOUR, 12);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.AM_PM, Calendar.AM);
        Date d = new Date(b);
        cal.setTime(d);
        Date t = new Date(cal.getTime().getTime());
        a = t.toString().substring(0, 10);
        c = t.toString().substring(23, 28);
        f = t.toString().substring(10, 19);
        //fieldValue1 = a.concat(c);
        fieldValue = a.concat(c).concat(f);
        System.out.println(fieldValue);
        //fieldValue = d.toString();
        //fieldValue = d.toString().substring(0, 9);
    }
    return fieldValue;
}
My thought was that since fieldValue is a string, after getting the value I could split it on the server side and extract only the required info, but that's not the case here. So, my question is: how can I send each date value separately to the server and store it in the database?
I'm not sure what your code snippet is supposed to do. I don't see anywhere that it actually fetches anything from the calendar.
If you want to fetch all events from the calendar, you do this:
private EventList events;

try {
    events = (EventList) PIM.getInstance().openPIMList(PIM.EVENT_LIST, PIM.READ_ONLY);
} catch (PIMException e) {
    System.out.println("Can't open EventList");
    return;
}
Now you have opened your calendar and are ready to fetch all the events into the events variable and loop through them.
Enumeration all;
Event event;
try {
    all = events.items(); // Puts all events into this variable
    while (all != null && all.hasMoreElements()) { // Loop through them
        event = (Event) all.nextElement();
        System.out.println("Event found: " + event.getString(Event.SUMMARY, 0));
        // Add code here, to send this event to PHP.
        // You'll need to serialize the event.
        // For example:
        // myHTTPConnection.call("http://www.example.com/receiveEvent.php?summary=" + event.getString(Event.SUMMARY, 0) + "&start=" + event.getString(Event.START, 0));
    }
} catch (Exception e) {
    System.out.println("Error while looping through events");
}
Just to be clear: the myHTTPConnection is pseudo code. You need to add your own code there that sends the data to your site.
