getMap() + subsequent changes to sync map - hazelcast

I have an IMap with the event journal enabled.
Using a client (Hazelcast or Jet), I would like to get the full map and then receive all subsequent updates to enrich it.
How can I achieve this?
If I do a .getMap() and then call getJournalMap() or .addEntryListener(), I am concerned about the possibility of missing updates between the getMap() and addEntryListener() calls.
Is there a more intuitive way to get the full map plus updates?
Thanks

What you are looking for is the Continuous Query Cache feature of Hazelcast: it gives you an initial snapshot of the map and then keeps that local view updated with subsequent events, so you don't have to coordinate getMap() and addEntryListener() yourself. Please see https://docs.hazelcast.org/docs/3.11/manual/html-single/index.html#continuous-query-cache
Below is a sample usage from a client:
HazelcastInstance instance = Hazelcast.newHazelcastInstance(); // embedded member, just for the demo
QueryCacheConfig queryCacheConfig = new QueryCacheConfig("cache");
// A predicate that accepts every entry, so the query cache mirrors the whole map
PredicateConfig predicateConfig = new PredicateConfig().setImplementation((Predicate) entry -> true);
queryCacheConfig.setPredicateConfig(predicateConfig);
ClientConfig clientConfig = new ClientConfig();
clientConfig.addQueryCacheConfig("map", queryCacheConfig); // query cache "cache" for the map named "map"
HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
IMap<Object, Object> map = client.getMap("map");
QueryCache<Object, Object> cache = map.getQueryCache("cache");

Currently, Hazelcast IMDG does not expose a public API for reading the event journal. The event journal can be used to stream event data to Hazelcast Jet, so it should be used in conjunction with Hazelcast Jet. You can see some examples here: https://github.com/hazelcast/hazelcast-jet-code-samples/tree/0.7-maintenance/event-journal
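For illustration, here is a minimal sketch of that approach (assuming the Jet 4.x Pipeline API, which is newer than the 0.7 samples linked above, and a map named "map" with the journal enabled cluster-side; note that START_FROM_OLDEST only replays events still retained in the journal, not necessarily the map's full history):
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.JournalInitialPosition;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

public class MapJournalReader {
    public static void main(String[] args) {
        // Stream the event journal of the IMap named "map" from the oldest retained event.
        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(Sources.mapJournal("map", JournalInitialPosition.START_FROM_OLDEST))
                .withoutTimestamps()
                .writeTo(Sinks.logger()); // replace with your own enrichment/sink stage

        // Submit the job from a Jet client; the job itself runs on the cluster.
        JetInstance jet = Jet.newJetClient();
        jet.newJob(pipeline).join();
    }
}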

Related

How to use CosmosClient.CreateAndInitializeAsync() with CosmosClientBuilder.Build()

This YouTube video (at 27:20) talks about populating the cache with routing info to avoid latency during a cold start.
You can either try to get a document you know doesn't exist, or you can use CosmosClient.CreateAndInitializeAsync().
I already have this code set up:
private async Task<Container> CreateContainerAsync(string endpoint, string authKey)
{
    var cosmosClientBuilder = new CosmosClientBuilder(
            accountEndpoint: endpoint,
            authKeyOrResourceToken: authKey)
        .WithConnectionModeDirect(portReuseMode: PortReuseMode.PrivatePortPool, idleTcpConnectionTimeout: TimeSpan.FromHours(1))
        .WithApplicationName(UserAgentSuffix)
        .WithConsistencyLevel(ConsistencyLevel.Session)
        .WithApplicationRegion(Regions.AustraliaEast)
        .WithRequestTimeout(TimeSpan.FromSeconds(DatabaseRequestTimeoutInSeconds))
        .WithThrottlingRetryOptions(TimeSpan.FromSeconds(DatabaseMaxRetryWaitTimeInSeconds), DatabaseMaxRetryAttemptsOnThrottledRequests);

    var client = cosmosClientBuilder.Build();
    var databaseResponse = await CreateDatabaseIfNotExistsAsync(client).ConfigureAwait(false);
    var containerResponse = await CreateContainerIfNotExistsAsync(databaseResponse.Database).ConfigureAwait(false);
    return containerResponse;
}
Is there any way to incorporate CosmosClient.CreateAndInitializeAsync() with it to populate the cache?
If not, is it ok to do this to populate the cache?
public class CosmosClientWrapper
{
    public CosmosClientWrapper(IKeyVaultFacade keyVaultFacade)
    {
        var container = CreateContainerAsync(endpoint, authenticationKey).GetAwaiter().GetResult();
        // Get a document that doesn't exist to populate the routing info:
        container.ReadItemAsync<object>(Guid.NewGuid().ToString(), PartitionKey.None).GetAwaiter().GetResult();
    }
}
The point of CreateAndInitialize or BuildAndInitialize is to pre-establish the connections required to perform data plane operations against the desired containers (reference: https://learn.microsoft.com/azure/cosmos-db/nosql/sdk-connection-modes#routing).
If the containers do not exist, it makes no sense to use CreateAndInitialize or BuildAndInitialize: there are no connections to pre-establish or warm up, because there are no target backend endpoints to connect to. That is why the container/database information is required; the only benefit is warming up the connections to the backend machines that serve those containers.
Please see CosmosClientBuilder.BuildAndInitializeAsync, which creates the Cosmos client and initializes the provided containers. I believe this is what you are looking for.
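As a minimal sketch (not a drop-in replacement for the code above: the database and container names are placeholders, the helper name is invented, and the database/container must already exist so there is something to warm up):
// Assumes Microsoft.Azure.Cosmos / Microsoft.Azure.Cosmos.Fluent (SDK v3),
// plus System.Collections.Generic and System.Threading.Tasks.
private async Task<Container> BuildAndWarmUpAsync(string endpoint, string authKey)
{
    var builder = new CosmosClientBuilder(endpoint, authKey)
        .WithConsistencyLevel(ConsistencyLevel.Session);

    // Builds the client and pre-establishes routing info and connections
    // for the listed containers (they must already exist).
    CosmosClient client = await builder.BuildAndInitializeAsync(
        new List<(string databaseId, string containerId)>
        {
            ("MyDatabase", "MyContainer")
        }).ConfigureAwait(false);

    return client.GetContainer("MyDatabase", "MyContainer");
}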

Handling Acumatica timeout on API Invoke action

I have code in a standalone application that invokes an Acumatica action to generate reports; I am running into timeouts on large documents while the action completes.
What is the best method to handle these timeouts? I need to wait for the action to complete in order to retrieve the files I've generated.
Standalone application code:
public SalesOrder GenerateAcumaticaLabels(string orderNbr, string reportType)
{
    SalesOrder salesOrder = null;
    using (ISoapClientProvider clientProvider = soapClientFactory.Create())
    {
        try
        {
            SalesOrder salesOrderToFind = new SalesOrder
            {
                OrderType = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).First() },
                OrderNbr = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).Last() },
                ReturnBehavior = ReturnBehavior.OnlySpecified,
            };
            salesOrder = clientProvider.Client.Get(salesOrderToFind) as SalesOrder;

            InvokeResult invokeResult = clientProvider.Client.Invoke(salesOrder, new exportSFPReport());
            ProcessResult processResult = clientProvider.Client.GetProcessStatus(invokeResult);

            // Wait for the update to complete before we attempt to retrieve the files
            while (processResult.Status == ProcessStatus.InProcess)
            {
                Thread.Sleep(1000); // pause for 1 second
                processResult = clientProvider.Client.GetProcessStatus(invokeResult);
            }
        }
        // ... (catch block, file retrieval, and the rest of the method not shown)
And the action in Acumatica:
public PXAction<SOOrder> ExportSFPReport;
[PXButton]
[PXUIField(DisplayName = "Generate Robot SFP PDF")]
protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    // Report parameters
    Dictionary<String, String> parameters = new Dictionary<String, String>();
    parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
    parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;

    IEnumerable reportFileInfo = ExportReport(adapter, "IN619217", parameters);
    exportTrayLabelReport(adapter, "SFP");
    return reportFileInfo;
}
The problem here is that your action is synchronous, so it is trying to complete within the Invoke call (which is not a good thing for long processes). You have to explicitly make your operation long-running by using PXLongOperation.StartOperation inside your handler, and then your client code should work properly, as it already handles the waiting and checking.
I believe the reason you encounter the time-out is that there is no TCP communication between the time you send the request and the time you receive the response. With the TCP KeepAlive flag set to true, the client would periodically ping the server to reset the time-out period.
That would be the best way. However, Acumatica connections are rather high level, so I don't think you'll be able to easily access that flag. What I would try first, in a scenario that doesn't involve an external application, is to wrap your action event-handler code in a PXLongOperation block, which has to do something similar to keep the connection alive under the hood:
PXLongOperation.StartOperation(this, delegate
{
    // your long-running code here (pass Base instead of this from inside a graph extension)
});
When I do encounter time-outs in Acumatica that can't be solved with PXLongOperation, I go for the simplest method, which is increasing the IIS timeout in the Web.Config file. I'm not sure if your use case with an external application will go well with an async PXLongOperation: the handler would return prematurely, and the client might not be able to retrieve the async payload.
So you might have to increase the time-out instead. As far as I know, there's no real practical drawback to doing this unless your website is under threat of DoS attacks.
You can locate and edit the Web.Config file of your Acumatica instance using the inetmgr program if you are self-hosting Acumatica. Otherwise, talk to your SaaS contact to see if that's an option.
I'm pretty sure you are hitting the IIS time-out. A tell-tale sign would be the connection being lost after exactly 5 minutes, which is the default 300-second value. You can edit the Web.Config file to increase the executionTimeout value. It's not a bad idea to increase maxRequestLength too if you are requesting a large amount of data from the Acumatica API, as this is a common cause of failures that are missed in testing but occur in real-life scenarios:
<httpRuntime executionTimeout="300" requestValidationMode="2.0" maxRequestLength="1048576" />

Convert QueueClient.Create to MessagingFactory.CreateQueueClient

I am trying to convert an implementation that uses the .NET library from QueueClient.Create to MessagingFactory.CreateQueueClient, both to better control the BatchFlushInterval and to allow the use of multiple factories over multiple connections to increase send throughput, but I am running into roadblocks.
Right now we are creating QueueClients (they are maintained throughout the app) like this:
QueueClient.CreateFromConnectionString(address, queueName, ReceiveMode.PeekLock); // address is the connection string from the azure portal in the form of Endpoint=sb....
I am trying to change it to create a MessagingFactory in the class constructor that will then be used to create the QueueClients:
messagingFactory = MessagingFactory.Create(address.Replace("Endpoint=",""),mfs);
// later on in another part of the class
messagingFactory.CreateQueueClient(queueName, ReceiveMode.PeekLock);
// error: Endpoint not found.
This throws the error "Endpoint not found." If I don't remove the "Endpoint=" prefix, it won't even create the MessagingFactory. What is the proper way to handle this?
Notes:
address = Endpoint=sb://pmg-bus-mybus.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=somekey
As an aside, we have a process that is trying to push as many messages as possible to a queue, with other processes reading from it. The readers easily keep up with the sender, and I'm trying to maximize the send rate.
The address is the base address of the namespace (sb://yournamespace.servicebus.windows.net/) you are connecting to. For more information, please refer to MessagingFactory. The following is demo code:
var Address = "sb://yournamespace.servicebus.windows.net/"; // base address of the namespace you are connecting to
MessagingFactorySettings MsgFactorySettings = new MessagingFactorySettings
{
    NetMessagingTransportSettings = new NetMessagingTransportSettings
    {
        BatchFlushInterval = TimeSpan.FromSeconds(2)
    },
    TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("RootManageSharedAccessKey", "balabala..."),
    OperationTimeout = TimeSpan.FromSeconds(30) // specify operation timeout (optional)
};
MessagingFactory messagingFactory = MessagingFactory.Create(Address, MsgFactorySettings);
var queue = messagingFactory.CreateQueueClient("queueName", ReceiveMode.PeekLock);
var message = queue.Receive(TimeSpan.Zero);
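As an aside on the throughput goal mentioned in the question: each MessagingFactory maintains its own connection, so one way to spread sends over multiple connections is to create several factories and round-robin across their senders. This is a rough sketch only; the factory count, queue name, and messagesToSend collection are illustrative, not part of the original code:
// Illustrative only (requires System.Linq): four factories, each with its own connection and sender.
var factories = Enumerable.Range(0, 4)
    .Select(_ => MessagingFactory.Create(Address, MsgFactorySettings))
    .ToList();
var senders = factories
    .Select(f => f.CreateQueueClient("queueName", ReceiveMode.PeekLock))
    .ToList();

// Round-robin outgoing messages across the senders.
int i = 0;
foreach (BrokeredMessage msg in messagesToSend) // messagesToSend: your own IEnumerable<BrokeredMessage>
{
    senders[i++ % senders.Count].Send(msg);
}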

Datastax recommended approach for session and connection

We are using the DataStax graph database. We are creating a bean for the GraphTraversalSource like this:
@Bean
GraphTraversalSource graphTraversalSource() {
    DseCluster.Builder dseBuilder = DseCluster.builder();
    dseBuilder.addContactPoints(contactPoints);
    dseBuilder.withGraphOptions(new GraphOptions().setGraphName(dseGraph));
    DseSession dseSession = dseBuilder.build().connect();
    return DseGraph.traversal(dseSession);
}
The GraphTraversalSource is injected into our repository layer, and the DseSession is reused for all our queries. We are seeing a performance impact: each query takes 2 seconds to execute.
We want to understand: should we handle the session connection and closing ourselves, or should we create a GraphTraversalSource for each query?
Is GraphTraversalSource stateless?

Is it possible to use the Distributed Object Events for Hazelcast Client?

As stated in the docs: http://docs.hazelcast.org/docs/latest-development/manual/html/Distributed_Events/Event_Listeners_for_Clients.html
The client does not support Distributed Object Events. Is there any way to detect the events for a given map on the server and refresh the delta using the client instance? I have a centralized HZ distributed cache; each time something changes on the server side, I want the client to be notified so it can fetch the changes/delta.
The client supports: http://docs.hazelcast.org/docs/latest-development/manual/html/Distributed_Events/Cluster_Events/Listening_for_Distributed_Object_Events.html
I want to know if it also supports Map/Distributed Map events:
http://docs.hazelcast.org/docs/latest-development/manual/html/Distributed_Events/Distributed_Object_Events/Listening_for_Map_Events.html
You can use MapListeners on the client side; there are several types of events you can observe:
EntryAddedListener
EntryRemovedListener
EntryUpdatedListener
MapClearedListener
EntryEvictedListener
...
Look into MapListener JavaDoc for more details.
Usage is simple:
HazelcastInstance client = HazelcastClient.newHazelcastClient();
client.getMap("test").addEntryListener(new EntryAddedListener<String, String>() {
    @Override
    public void entryAdded(EntryEvent<String, String> event) {
        System.out.println("Added: " + event.getKey() + "=" + event.getValue());
    }
}, true);
