I'm looking to create an error-handling flow and need to capture the name of the failing processor at particular points only. An UpdateAttribute processor would be a last resort, as it would clutter up the templates. Ideally I'm looking for a script or similar, but I'm open to suggestions from NiFi experts.
You can use the Data Provenance feature for this, via manual inspection or the REST API, but by design ("flow-based programming") components in Apache NiFi are black boxes, independent of and unaware of their predecessors and successors.
If you need a programmatic way to access the error messages, look at SiteToSiteBulletinReportingTask. With this component, you can send the bulletins back to the same (or a different) NiFi instance via Site-to-Site and ingest and process them like any other data.
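As an illustration, here is a minimal sketch of pulling the failing component's name out of ingested bulletin records. The field names (bulletinLevel, bulletinSourceName, bulletinMessage) and the sample record are assumptions, so verify them against the records your reporting task actually emits:

// Minimal sketch: keep only ERROR-level bulletins and extract the name of the
// component that produced each one. Field names and the sample record below are
// assumptions to verify against the actual output of SiteToSiteBulletinReportingTask.
function failingProcessors(bulletins) {
  return bulletins
    .filter(b => b.bulletinLevel === "ERROR")
    .map(b => ({ processor: b.bulletinSourceName, message: b.bulletinMessage }));
}

// Made-up bulletin record for illustration:
console.log(failingProcessors([
  { bulletinLevel: "ERROR", bulletinSourceName: "PutDatabaseRecord", bulletinMessage: "Failed to process FlowFile" }
]));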
I have done some research related to the question below, but couldn't find the right information.
I have a scenario where a user creates some data using a create REST API and saves it in the backend. The user then retrieves the saved data using a GET API to validate what was saved by the create API.
Now, can creating the data in the backend and retrieving it be combined into a single feature? Or should there be two features, one for creating the data and the other for retrieving it? If it can be done both ways, what are the advantages of one over the other?
There is no specific rule of thumb for how to group business logic into features. However, there are some technical details that make your code behave differently depending on how you group features. Here is some advice:
Background is defined once per feature, so if your tests require different backgrounds it probably makes sense to put them in different features (testing GET would likely require inserting some data before the test, which is not necessary for testing PUT).
If you're not "gluing" your files explicitly, they are picked up based on the position of your runner classes within the package structure. This lets you play with different configurations not only at the Gherkin level but also at the level of the particular test framework (JUnit or TestNG). This is much like the previous point, but using the capabilities provided by the underlying unit-test framework.
If you need to run your tests in parallel, the way you group things into features can also matter: Cucumber with JUnit 4 runs feature files in parallel, but runs the scenarios inside a single feature sequentially.
You might also need to tag some tests in a special way. If there are a lot of such tests, it makes sense to put them in a separate feature file and apply the tag to the entire feature rather than tagging each test individually (see the sketch after this list).
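For that last point, a tag placed on the Feature line applies to every scenario in the file. A minimal Gherkin sketch, with made-up feature, tag, and step names:

# Hypothetical example: the @slow tag on the Feature line applies to every scenario below.
@slow
Feature: Report exports

  Scenario: Export as CSV
    Given a report exists
    When I export it as CSV
    Then I receive a CSV file

  Scenario: Export as PDF
    Given a report exists
    When I export it as PDF
    Then I receive a PDF file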
I would suggest having two separate scenarios, one to validate the POST and one to validate the GET. That way you have better visibility of the two separate APIs, and during later runs you can tell from the scenario titles which API works and which one is broken (if any), without digging into the step definitions to check whether the POST scenario also validates the GET or whether that is a separate scenario.
So, one scenario to validate the POST and whether it returns 201 Created. And another scenario to validate the GET.
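A hedged Gherkin sketch of that split; the endpoint and step wording are made up:

Feature: Order API

  Scenario: Create an order via POST
    Given the orders service is available
    When I POST a new order to /orders
    Then the response status is 201 Created
    And the response contains the new order id

  Scenario: Retrieve a saved order via GET
    Given an order has already been created via POST /orders
    When I GET the order by its id from /orders/{id}
    Then the response status is 200 OK
    And the returned order matches the data that was saved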
I'm coding an application in Node.js that parses APIs to collect data and organize it. However, the need has arisen for systematic logging and the display of differential logs. The application needs to show users what changed with each consecutive state change, or within a specified time span. Is there any existing tool that would help me achieve that?
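For illustration, a minimal hand-rolled sketch of the kind of differential log described, comparing two consecutive state snapshots and recording which keys changed (the state shape is made up):

// Minimal sketch: shallow diff of two consecutive state snapshots.
// The state shape below is made up for illustration.
function diffStates(prev, next) {
  const changes = [];
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  for (const key of keys) {
    if (prev[key] !== next[key]) {
      changes.push({ key, from: prev[key], to: next[key] });
    }
  }
  return changes;
}

console.log(diffStates(
  { status: "open", price: 10 },
  { status: "closed", price: 10 }
));
// -> [ { key: 'status', from: 'open', to: 'closed' } ]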
We are building a work order management integration layer on top of base Maximo, communicating via the provided REST/OSLC API, but we are stuck on finding all the possible statuses a given work order could transition to.
Is there a REST/OSLC API, or some other way to expose externally (e.g. some kind of one-time config export), the possible status transitions for a given work order?
This should take into account all the customizations we've made to Maximo, including additional statuses, extra conditions, etc. We are targeting version 7.6.1.
IBM seems to have dropped some things from the new NextGen REST/JSON API documentation. There is almost no mention of the "getlist" action anymore, something I have really enjoyed using for domain-controlled fields. This should give you exactly what you are looking for: a list of the possible statuses that a given work order could go into. I was unable to verify this call today, but I remember it working as desired when I last used it (many months ago).
<hostname>/maximo/oslc/os/mxwo/<href_value_of_a_specific_wo>?action=getlist&attribute=status
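If you want to issue the same request programmatically, here is a hedged sketch from Node.js, assuming native MAXAUTH (Base64 user:password) authentication and that the call is a plain GET as written above; the hostname and href stay placeholders, and the credentials are made up:

// Hedged sketch: calling the getlist action from Node.js.
// MAXAUTH authentication and the plain GET are assumptions; verify against your
// Maximo security configuration and the REST API guide.
const creds = Buffer.from("user:password").toString("base64"); // placeholder credentials

fetch("https://<hostname>/maximo/oslc/os/mxwo/<href_value_of_a_specific_wo>?action=getlist&attribute=status", {
  headers: { maxauth: creds, Accept: "application/json" },
})
  .then(r => r.json())
  .then(console.log);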
The method you're looking for is psdi.mbo.StatefulMbo.getValidStatusList
See details here:
https://developer.ibm.com/assetmanagement/7609-maximo-javadoc/
Now, you want to expose the result through a REST API. You could create an automation script that, given the WONUM, returns the allowed status list. You can leverage the new REST API to achieve that quite easily.
See how you can call an automation script with a REST call here:
https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html#_automation_scripts
Last part: you will need to build the script's response from the MboSet returned by getValidStatusList.
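A hedged sketch of such an automation script, written in JavaScript (Nashorn) with no launch point and invoked through the REST API script endpoint, e.g. GET /maximo/oslc/script/WOSTATUS?wonum=1001 (the script name and wonum are made up). The implicit request/responseBody variables, the no-argument call to getValidStatusList, and the serialization at the end are assumptions to check against the Javadoc and REST API guide linked above:

// Hedged sketch of a Maximo automation script (JavaScript / Nashorn).
// Assumptions: the implicit 'request' and 'responseBody' variables behave as described
// in the REST API guide, and getValidStatusList() can be called without arguments;
// check its real signature and return type in the Javadoc before relying on this.
var MXServer = Java.type("psdi.server.MXServer");

var wonum = request.getQueryParam("wonum");
var woSet = MXServer.getMXServer().getMboSet("WORKORDER",
    MXServer.getMXServer().getSystemUserInfo()); // for illustration; use the caller's user context in a real script
woSet.setWhere("wonum = '" + wonum + "'");       // illustration only; sanitize input in real code

var result = { wonum: wonum, validStatuses: null };
var wo = woSet.moveFirst();
if (wo != null) {
    // Placeholder serialization of whatever getValidStatusList returns (the answer
    // above says it is an MboSet); adapt once the concrete type is known.
    result.validStatuses = String(wo.getValidStatusList());
}
responseBody = JSON.stringify(result);
woSet.close();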
I would like to know whether there is any API in the Vespa platform that I can use to create a search definition (SD) at runtime.
This is a requirement because the documents I will index depend on user input in my front-end application.
No, there is no such API available. The idea of deploying an immutable application package (including the SD) is a conscious design choice, to ensure appropriate management of multiple search clusters in multiple locations over time, as well as to enable source control of the configuration.
If needed, one could build what you describe "on top" of Vespa: a web service that lets you mutate an existing SD and, on submit, creates the updated application package and deploys it to your Vespa cluster. Vespa will (in most cases) handle schema changes without impacting serving.
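A hedged sketch of the deployment step of such a service, assuming a config server reachable on localhost:19071 and the prepareandactivate deploy endpoint; check the Vespa deploy API documentation for your version:

// Hedged sketch: deploying a regenerated application package (zipped) to a Vespa
// config server. Host, port, tenant and the prepareandactivate endpoint are
// assumptions to verify against the Vespa deploy API documentation.
const fs = require("fs");

const zip = fs.readFileSync("application.zip"); // package containing the updated SD

fetch("http://localhost:19071/application/v2/tenant/default/prepareandactivate", {
  method: "POST",
  headers: { "Content-Type": "application/zip" },
  body: zip,
})
  .then(r => r.json())
  .then(console.log);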
I'm working on an IoT project that involves a sensor transmitting its values to an IoT platform. One of the platforms I'm currently testing is Thingsboard; it is open source and I find it quite easy to manage.
My sensor transmits active energy indexes to Thingsboard. Using these values, I would like to calculate and show on a widget the active power, P = k * (ActiveEnergy(n) - ActiveEnergy(n-1)) / (Time(n) - Time(n-1)). This basically means that I want to access historical data, use it to generate new data, and inject the result back into my device.
Thingsboard uses a Cassandra database to store historical values.
One alternative could be to find a way to communicate with the database, via a Web API for example, do the processing there, and send the active power back to my device over MQTT or HTTP using its access token.
Is this possible?
Is there a better alternative to my problem?
There are several options for how to achieve this, depending on the layer or component of the system:
1) Visualization layer only. Probably the simplest one. There is an option to apply a post-processing function to the widget's data. The function has the following signature:
function(time, value, prevValue)
Please note that prevTime is missing, but we may add this in future releases.
(screenshot: post-processing function configuration)
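A hedged sketch of such a post-processing function; since prevTime is not available, a fixed reporting interval is assumed, and both constants are made up:

// Hedged sketch of a widget post-processing function (ThingsBoard supplies time,
// value and prevValue). Since the previous timestamp is not available, a fixed
// reporting interval is assumed; K and INTERVAL_SECONDS are made-up values.
function postProcess(time, value, prevValue) {
  var K = 1;                  // scaling coefficient k from the formula in the question
  var INTERVAL_SECONDS = 60;  // assumed fixed reporting interval of the sensor
  if (prevValue === undefined || prevValue === null) {
    return 0;                 // no previous sample yet
  }
  return K * (value - prevValue) / INTERVAL_SECONDS;
}

// Example: the energy index went from 1200 to 1205 over one (assumed) 60 s interval.
console.log(postProcess(Date.now(), 1205, 1200)); // -> ~0.083 (units depend on k)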
2) Data processing layer. Use advanced analytics frameworks like Apache Spark to post-process your data, for example using a sliding time window.
See our integration article about this.