I couldn't find any way to construct a Source from a MultiMap.
Why is such a Source not provided in the standard API?
https://docs.hazelcast.org/docs/jet/3.2/manual/#overview-of-sources-and-sinks
While there's (currently, in Jet 3.2) no Sources.multiMap() factory, Jet provides a way to create your own custom Sources and Sinks via a Builder API.
Please check the relevant documentation: https://docs.hazelcast.org/docs/jet/3.2/manual/#source-sink-builder
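For example, a custom batch source draining a MultiMap could be sketched with SourceBuilder along these lines (a minimal sketch; the map name "my-multimap" and the String key/value types are assumptions):

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.BatchSource;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.SourceBuilder;
import java.util.Map;

public class MultiMapSource {
    public static void main(String[] args) {
        // One-shot batch source that emits every entry of the MultiMap.
        BatchSource<Map.Entry<String, String>> multiMapSource = SourceBuilder
                .batch("multimap-source", ctx ->
                        ctx.jetInstance().getHazelcastInstance()
                           .<String, String>getMultiMap("my-multimap"))
                .<Map.Entry<String, String>>fillBufferFn((multiMap, buf) -> {
                    multiMap.entrySet().forEach(buf::add);
                    buf.close(); // signal that the batch is done
                })
                .build();

        Pipeline p = Pipeline.create();
        p.drawFrom(multiMapSource)
         .drainTo(Sinks.logger());

        JetInstance jet = Jet.newJetInstance();
        try {
            jet.newJob(p).join();
        } finally {
            Jet.shutdownAll();
        }
    }
}

Note that a hand-rolled source like this reads the whole MultiMap through a single member and is neither distributed nor fault-tolerant.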
According to the integration guide for Contacts OData the Sap-Cuan-SequenceId header is mandatory when updating a ContactOriginData record. When updating in singleton mode I am able to set this header as follows and it works without issue:
service
.updateContactOriginData(contact)
.withHeader("Sap-Cuan-SequenceId", "PatchUpdate")
.executeRequest(destination);
However, there is no option to set this header when performing the same update in batch mode:
service
.batch()
.beginChangeSet()
.updateContactOriginData(contact)
.withHeader(...) // this option does not exist
.endChangeSet()
.executeRequest(destination);
When I run the batch request, my SAP Import Monitor shows the error:
Invalid content in field Sap-Cuan-SequenceId
Is it possible to set this header in batch mode and I'm just not seeing how? I am using version 3.39.0 of the SDK. Any help would be greatly appreciated!
Thanks!
This clearly looks like an implementation shortcoming. The SDK has a new API for OData BATCH in the OData v4 client which shouldn't have this issue. The mentioned service exposes OData v2 only, and the OData v2 BATCH implementation has historically been different. For compatibility reasons, it has to be kept like this. We plan to provide a parallel implementation to align it with OData v4 and fix many minor and major inconsistencies.
If this is super urgent, we can try to provide a workaround using the SDK's generic OData client; otherwise, create an issue in this GitHub repository and the SDK team will update you when the fix for adding headers is released.
Does the ADX C# Ingestion SDK (Kusto.Ingest) support the ingestion of zip files, similar to the capability that the LightIngest tool has?
If yes, I would love to see a code snippet that demonstrates such a scenario.
Yes, it's fully supported.
A sample project can be found here: https://github.com/Azure/azure-kusto-samples-dotnet/tree/master/client/QueuedIngestFromStorageExample
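Queued ingestion of a zipped CSV file might look roughly like this (a minimal sketch; the cluster URI, database, table, and file path are placeholders, and the service infers the compression from the .zip file extension):

using System.Threading.Tasks;
using Kusto.Data;
using Kusto.Data.Common;
using Kusto.Ingest;

class Program
{
    static async Task Main()
    {
        // Connection string for the ingestion endpoint (note the "ingest-" prefix).
        var kcsb = new KustoConnectionStringBuilder("https://ingest-mycluster.westus.kusto.windows.net")
            .WithAadUserPromptAuthentication();

        using (var ingestClient = KustoIngestFactory.CreateQueuedIngestClient(kcsb))
        {
            var properties = new KustoIngestionProperties("MyDatabase", "MyTable")
            {
                Format = DataSourceFormat.csv
            };

            // The .zip extension tells the service the payload is compressed.
            await ingestClient.IngestFromStorageAsync(@"C:\data\sample.csv.zip", properties);
        }
    }
}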
I have written a sample project in which I created an index using SolrNet (a .NET wrapper for the Java-based Solr).
I want to remove the dependency on Java, so I am trying Lucene.NET.
Now, is it possible to re-use the same indexed data (created with SolrNet and Solr) and perform searches/updates on that index using Lucene.NET?
Environment: VS2013, C#, .NET Framework 4.0, WinForms
The Lucene codec format evolves over time, and most alternative Lucene implementations are only compatible with a specific range of versions. So the answer is "it depends, but probably not". You'd have to try to read the segment files present in your Solr installation with Lucene.NET instead; a quick probe is sketched below.
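Such a probe could simply attempt to open the Solr data directory (a minimal sketch against Lucene.Net 3.0.3, which targets .NET Framework 4.0; the index path is a placeholder):

using System;
using System.IO;
using Lucene.Net.Index;
using Lucene.Net.Store;

class IndexProbe
{
    static void Main()
    {
        // Path to the Solr core's index directory (placeholder).
        var indexPath = @"C:\solr\mycore\data\index";

        try
        {
            using (var dir = FSDirectory.Open(new DirectoryInfo(indexPath)))
            using (var reader = IndexReader.Open(dir, true)) // true = read-only
            {
                Console.WriteLine("Opened index with " + reader.NumDocs() + " documents.");
            }
        }
        catch (Exception ex)
        {
            // A corrupt-index or unsupported-format exception here usually
            // means the codec versions don't line up.
            Console.WriteLine("Could not read index: " + ex.Message);
        }
    }
}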
Remember that this moves Solr from being a distributed dependency (running as a separate server) to an in-process dependency instead - requiring you to write your own service on top of Lucene.NET if you want to keep it distributed.
As you've just written a sample project, drop everything you've indexed and re-index with your own code for Lucene.NET instead.
Does anyone have experience, information, or code examples for establishing a connection between the Progress AppServer and node.js? The aim is to create REST services on top of the database that can be accessed from the web, e.g. by an Angular app.
Thanks for any advice
Christian
Starting with 11.2 (and enhanced in later versions) you can create REST-based applications utilizing the AppServer as a platform. ProDatasets are used as output (they convert easily to XML and/or JSON).
This is all explained in the Web Services part of the documentation. I'm providing a link below.
Basic steps
You need to consult the manual for all these steps...
Create an ABL program with input parameters (a simple parameter, a temp-table or a dataset) and a single output parameter (a temp-table, a dataset, or a single character or longchar parameter).
Add ABL-specific REST annotations to the program
Map the parameters in OpenEdge Studio
Setup REST agents with the restman utility
Export a "WAR-file" and deploy your webservice.
Calling the web service from node.js should be no greater problem than calling any other REST-based web service; see the sketch below.
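For example, with nothing but node's built-in http module (host, port, and URI path are placeholders for wherever you deployed the WAR file):

var http = require('http');

// Adjust host/port/path to match your deployed REST web service.
var options = {
  host: 'localhost',
  port: 8980,
  path: '/MyService/rest/MyServiceService/GetCustomers?custNum=1',
  method: 'GET'
};

http.request(options, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // The AppServer returns the ProDataset serialized as JSON.
    console.log(JSON.parse(body));
  });
}).end();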
In versions prior to 11.2 you can "fake it and make it" utilizing WebSpeed: create a WebSpeed program that reads parameters from the query string (using get-field()) and then writes a response to the "webstream". Use either the WRITE-XML or WRITE-JSON method on a temp-table or a dataset for writing the result, and don't forget to set a proper MIME type. This might not be as robust and customizable, but it will work. A skeleton is sketched below.
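Such a WebSpeed program might look like this (a minimal sketch; it assumes an 11.x WebSpeed environment where the {&OUT-LONG} preprocessor is available for longchar output, and the temp-table and field names are made up):

/* customers.p - a minimal WebSpeed CGI wrapper returning JSON */
{src/web/method/cgidefs.i}

DEFINE TEMP-TABLE ttCustomer NO-UNDO
    FIELD custNum  AS INTEGER
    FIELD custName AS CHARACTER.

DEFINE VARIABLE lcResult AS LONGCHAR NO-UNDO.
DEFINE VARIABLE iCust    AS INTEGER  NO-UNDO.

/* Read a parameter from the query string */
iCust = INTEGER(get-field("custNum":U)) NO-ERROR.

/* ... populate ttCustomer from the database here ... */

/* Set a proper MIME type before writing the body */
output-content-type("application/json":U).

/* Serialize the temp-table to JSON and write it to the webstream */
TEMP-TABLE ttCustomer:WRITE-JSON("LONGCHAR", lcResult, TRUE).
{&OUT-LONG} lcResult.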
OE 11.4 Product Documentation - Web Services (see chapter "II Creating OpenEdge REST Web Services")
These might also be useful:
OE 11.4 Product Documentation - Working with XML
OE 11.4 Product Documentation - Working with JSON
What's the difference between these two assemblies and when should I use each? I find that there are class name collisions between them so I imagine that I should only use one.
Example
Microsoft.WindowsAzure.Storage has Microsoft.WindowsAzure.Storage.Table.CloudTableClient
Microsoft.WindowsAzure.StorageClient has Microsoft.WindowsAzure.StorageClient.CloudTableClient
This seems very confusing. I can't imagine that Microsoft intends these to both be used in the same project.
Microsoft.WindowsAzure.Storage is version 2.0 of the storage client library, while Microsoft.WindowsAzure.StorageClient is the older 1.7 version. There have been many changes in version 2.0 of the library (some of them breaking). If you're starting new, I would recommend using 2.0 of the library, as I found it more intuitive and easier to use than the older version. If you have an application which makes use of version 1.7 of the library, before you decide to upgrade I would recommend reading the following blog posts by the Windows Azure Storage Team:
http://blogs.msdn.com/b/windowsazurestorage/archive/2012/10/29/introducing-windows-azure-storage-client-library-2-0-for-net-and-windows-runtime.aspx
http://blogs.msdn.com/b/windowsazurestorage/archive/2012/10/29/windows-azure-storage-client-library-2-0-breaking-changes-amp-migration-guide.aspx
http://blogs.msdn.com/b/windowsazurestorage/archive/2012/11/06/windows-azure-storage-client-library-2-0-tables-deep-dive.aspx
However, please note that there are still some components your application might be using which have a dependency on storage client library 1.7; Windows Azure Diagnostics is one of them. So for some time you will need to use both versions. The good thing is that you can use both versions simultaneously in your project.
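For instance, basic table access with the 2.0 library looks like this (a minimal sketch; the connection string and table name are placeholders):

using Microsoft.WindowsAzure.Storage;       // 2.0 library
using Microsoft.WindowsAzure.Storage.Table;

class TableExample
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");

        // This CloudTableClient comes from Microsoft.WindowsAzure.Storage.Table,
        // not from the older Microsoft.WindowsAzure.StorageClient namespace,
        // so the two assemblies can coexist as long as you qualify the names.
        CloudTableClient tableClient = account.CreateCloudTableClient();
        CloudTable table = tableClient.GetTableReference("mytable");
        table.CreateIfNotExists();
    }
}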
Hope this helps.
EDIT:
I also wrote a few blog posts about migrating code from storage client library 1.7 to 2.0 where I covered some basic scenarios. You can read those posts here:
Migrating blob storage code: http://gauravmantri.com/2012/11/28/storage-client-library-2-0-migrating-blob-storage-code/
Migrating queue code: http://gauravmantri.com/2012/11/24/storage-client-library-2-0-migrating-queue-storage-code/
Migrating table storage code: http://gauravmantri.com/2012/11/17/storage-client-library-2-0-migrating-table-storage-code/