I have done some research on the question below, but couldn't find the right information.
I have a scenario where a user creates some data using a create REST API and saves it in the backend. The user later retrieves the saved data using a GET API to validate what was saved in the backend by the create API.
Now, can creating the data in the backend and retrieving it be combined into a single feature, or should there be two features, one for creating the data and the other for retrieving it? If it can be done both ways, what are the advantages of one over the other?
There is no specific rule of thumb for how to group business logic into features. However, there are some technical details that make your code behave differently depending on how you group features. Here is some advice:
A Background is defined once per feature, so if your tests require different backgrounds it probably makes sense to put them into different features (testing the GET probably implies you have to insert some data before the test, which is not necessary when testing the create call).
If you're not "gluing" your files explicitly, they are picked up based on the position of your runner classes within the package structure, so you can play with different configurations not only at the Gherkin level but also at the level of the particular test framework (such as JUnit or TestNG); see the runner sketch below. This is much like the previous point, only using the capabilities of the underlying unit test framework.
If you need to run your tests in parallel, the way you group scenarios into features also matters: when you run Cucumber-JUnit4 in parallel, it runs feature files in parallel but executes all the scenarios inside a single feature sequentially.
You might also need to tag some tests in a special way. If there are a lot of such tests, it makes sense to put them in a separate file and apply the tag to the entire feature rather than tag each test individually.
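For illustration, here is a minimal JUnit 4 runner sketch (the package, feature path, glue package and tag name are all placeholders, and it assumes a recent cucumber-junit version) showing where the glue, the feature location and the tags are configured:

package com.example.tests.creation; // placeholder package; its position matters when glue/features are not set explicitly

import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "classpath:features/create_data.feature", // placeholder feature file
        glue = "com.example.tests.steps",                     // placeholder step-definition package
        tags = "@create"                                      // placeholder tag applied to the whole feature
)
public class CreateDataTest {
}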
I would suggest having two separate scenarios to validate the POST and the GET. That way, you have better visibility into the two separate APIs. During later runs you will also be able to tell from the scenario title which API works and which one is broken (if any), without digging into the step definitions to check whether the scenario for the POST API also validates the GET or whether that's a separate scenario.
So: one scenario to validate that the POST returns 201 Created, and another scenario to validate the GET.
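For illustration, a hedged Java step-definition sketch (assuming cucumber-java and the JDK 11+ HttpClient; the endpoint, payload and step wording are made up) that keeps the POST check and the GET check in separate steps, so each scenario reports on its own API:

package com.example.tests.steps; // placeholder package

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class DataApiSteps {
    private final HttpClient client = HttpClient.newHttpClient();
    private HttpResponse<String> response;

    @When("I create the data")
    public void iCreateTheData() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.test/api/data")) // placeholder URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"sample\"}")) // placeholder payload
                .build();
        response = client.send(request, HttpResponse.BodyHandlers.ofString());
    }

    @Then("the create API returns 201 Created")
    public void createApiReturns201() {
        assertEquals(201, response.statusCode());
    }

    @When("I fetch the saved data")
    public void iFetchTheSavedData() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.test/api/data/1")).GET().build(); // placeholder URL
        response = client.send(request, HttpResponse.BodyHandlers.ofString());
    }

    @Then("the saved data is returned")
    public void savedDataIsReturned() {
        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("sample")); // placeholder validation
    }
}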
We are building a work order management integration layer on top of base Maximo, communicating via the provided REST/OSLC API, but we are stuck on finding all possible statuses a given work order could transition to.
Is there a REST/OSLC API, or some other way to expose externally (e.g. some kind of one-time config export), the possible status transitions for a given work order?
This should consider all the customizations we've made to Maximo including additional statuses, extra conditions, etc. We are targeting version 7.6.1.
IBM seems to have dropped some things from the new NextGen REST/JSON API documentation. There is almost no mention of the "getlist" action anymore, something I have really enjoyed using for domain-controlled fields. This should give you exactly what you are looking for: a list of the possible statuses that a given work order could go into. I was unable to verify this call today, but I remember it working as desired when I last used it (many months ago).
<hostname>/maximo/oslc/os/mxwo/<href_value_of_a_specific_wo>?action=getlist&attribute=status
The method you're looking for is psdi.mbo.StatefulMbo.getValidStatusList
See details here:
https://developer.ibm.com/assetmanagement/7609-maximo-javadoc/
Now, you want to expose the result through a REST API. You could create an automation script that, given the WONUM, returns the allowed status list. You can leverage the new REST API to achieve that quite easily.
See how you can call an automation script with a REST call here:
https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html#_automation_scripts
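For example, assuming you named the script WOSTATUSLIST (a hypothetical name) and it reads the WONUM from the request parameters, the call could look something like:
<hostname>/maximo/oslc/script/WOSTATUSLIST?wonum=<wonum_of_a_specific_wo>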
Last part: you will need to build the request response from the MboSet returned by getValidStatusList.
I was wondering whether the namespaces themselves can be grouped?
Our REST server project has a highly decentralized structure (along the lines of a Redux fractal pattern) and every feature has its own namespace. This predictably has led to many namespaces, and the swagger page is getting rather full now.
If this is not achievable, I guess we can live with it, or consider emitting only the swagger json to be consumed by the official Swagger UI that we can run in a separate server. But I'd much prefer a restplus-y solution, since that represents the least amount of code friction.
The underlying OpenAPI Specification has a concept of tags. The namespace feature in Flask-RESTPlus assigns these names as tags for path definitions, which is how you get the grouping in a Swagger UI. The specification does not offer any hierarchical grouping mechanism, so Flask-RESTPlus doesn't offer such a feature either.
You could consider a different strategy for assigning namespaces/tags to create more manageable groupings, split the API across multiple Swagger UI pages/sites, etc. Sounds like there is no way around your Swagger UI needing to render a very large number of API methods, so making it more understandable via general content structuring may be your best approach.
I have a question about implementing DDD and the repository pattern.
Should I modify an entity inside a repository?
Let's say I have an Order and want to mark that order as finished.
As I see it, I have two choices.
1.
var order = _orderRepository.GetById(1);
order.Finish();
_orderRepository.Update(order);
...where the change is persisted to the database in the Update call.
2.
var order = _orderRepository.GetById(1);
var finishedOrder = _orderRepository.Finish(order);
...where the change is persisted to the database in the Finish call.
Is there an advantage to using one method over the other? What is the DDD way of doing this?
You should not modify it in the repository.
The reason is that the repository is responsible for abstracting away persistence (i.e. reading from and writing to the data storage).
If you also make it responsible for some business logic, you are violating the Single Responsibility Principle.
If you are doing automated testing, it also means that you have to write integration tests to be sure that the database communication/mapping works, and then unit tests to verify the business logic that you introduced into it.
It can seem trivial, but it's only trivial the first time you violate the principle. One violation usually leads to another and another, and finally to an application that isn't so easy to maintain :)
An application where classes have mixed responsibilities is also harder to navigate. Each time you want to update a feature you have to go through all the layers to find where the actual logic is done.
Use the application layer to coordinate behavior across one or more domain objects; the domain objects should execute all state changes, and lastly the repository should persist those changes to the database or wherever you are storing the domain's state.
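To illustrate the shape this takes (a minimal sketch with hypothetical names, written in Java here even though the question's snippets are C#-styled), option 1 combined with an application service looks roughly like this:

// Repository: persistence only, no business logic.
interface OrderRepository {
    Order getById(int id);
    void update(Order order);
}

// Domain object: owns its own state changes.
class Order {
    private boolean finished;

    void finish() {
        this.finished = true; // the business rule lives here, not in the repository
    }
}

// Application service: coordinates the domain object and the repository.
class OrderService {
    private final OrderRepository orderRepository;

    OrderService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    void finishOrder(int orderId) {
        Order order = orderRepository.getById(orderId); // load
        order.finish();                                  // change state on the domain object
        orderRepository.update(order);                   // persist
    }
}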
I have a number of XPages design elements that I use in many different databases. If I read the wiki correctly, the single store is an all-or-nothing situation.
So I want to create a unique design in a database but use the set of reusable XPages elements from a single store location. The wiki says:
Apart from the "dummy or blank XPage with the same name of the default XPage" in each instance application, does it matter if an 'instance' contains XPage design elements?
No. If SCXD is set on an application all XPages design elements are ignored on the database and the application uses the design elements on the SCXD database.
If this is the case, then I have to create databases where probably 75% of the code is reusable, but I would have to repeat it (and maintain it) in dozens of separate databases. Pity!
XPages and related elements (Custom Controls, SSJS libraries, Java code) can be inherited from a specific template like other design elements. So I would set up a database called, perhaps, "Core Components" (.ntf or .nsf) with a template name of "CoreComponents". Then, on the individual elements in the target DB, you would set inheritance to come specifically from the "CoreComponents" template. The elements that are unique to each database do not inherit from any template. You can then use File - Application - Refresh Design to update the elements with specific inheritance, and the ones which are unique to that database will not get overwritten.
You do need to do a clean build after the refresh, so I recommend keeping the Core Components database locally or on a different server than the others, so that the daily design task will not update them, which can result in corrupted XSP elements.
IBM's preferred model for reusing XPage artifacts across multiple applications is to create OSGi plugins that leverage the XPages Extensibility API.
NotesIn9 episode 64 demonstrates how to make an existing Custom Control design element a library component, which can then be used in any app that has the library available, instead of having to copy the design element to each app separately. Any subsequent changes to that component are then applied immediately to any apps that use it when a new version of the library is deployed.
If you truly have "dozens" of apps that all share certain features, but the entire design should not be identical across all of them, then the OSGi model is definitely the way to go.
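As a rough idea of what the plugin side involves (a minimal sketch only; the IDs and file names are hypothetical, and it assumes the Extensibility API's AbstractXspLibrary base class, with the class registered in plugin.xml as a com.ibm.commons.Extension of type com.ibm.xsp.Library):

package com.example.xsp; // hypothetical package

import com.ibm.xsp.library.AbstractXspLibrary;

public class CoreComponentsLibrary extends AbstractXspLibrary {

    @Override
    public String getLibraryId() {
        return "com.example.corecomponents.library"; // hypothetical id the applications will select
    }

    @Override
    public String getPluginId() {
        return "com.example.corecomponents"; // should match the OSGi bundle symbolic name
    }

    @Override
    public String[] getDependencies() {
        // standard XPages runtime libraries this shared library builds on
        return new String[] { "com.ibm.xsp.core.library", "com.ibm.xsp.extsn.library" };
    }

    @Override
    public String[] getXspConfigFiles() {
        // xsp-config files describing the reusable controls contributed by this library
        return new String[] { "META-INF/corecomponents.xsp-config" }; // hypothetical path
    }
}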
But why not flip the entire model on its head? Traditionally, we've always put the code and the data in the same place (e.g. same NSF) because it was a pain to access -- and, especially, visually represent -- data in one NSF via code in another NSF. That's not true anymore. Why have dozens of apps just because the data lives in dozens of places? Any data source in XPages can be told where the data lives... you can link a central user interface to any number of "remote" data stores (either different NSFs on the same server, or even databases on other servers).
Red Pill, for instance, takes this to its logical extreme: they deploy one NSF, which acts as a portal to all your data, no matter where that data lives. The ACLs of the various NSFs (and Readers fields) still ensure that users don't pry into data they haven't been granted access to, and they have complex analytics algorithms for determining which data the users will actually care about. But if you have 500 NSFs in the domain, you're not maintaining 500 different code templates... it's literally just 1; but that one user interface is how users find, and interact with, all their data.
You certainly don't have to take this premise to that extreme, but perhaps you could identify, say, 5 apps where the UI and / or business logic is similar (or even identical), but the data just lives in multiple places. Create one central app for interacting with all of that data. Create a "homepage" that gives users a way to select which "app" they're trying to access (or, if they should only have access to one to begin with, compute which one that is), and then once they navigate in to the specific "app", just bind the data sources to the relevant NSF instead of assuming each view or document lives in the same NSF that the code does.
It's still a good idea to be aware of the Extensibility API, not only for the sake of code reusability, but also to understand just how much of the behavior of the platform truly is within our control now -- provided, of course, that we're willing to occasionally write some custom Java code. But if you shift away from the one-to-one mapping between code and data that we've habitually maintained in Domino for so long, I can practically guarantee that you'll prefer this approach... both for the ease of implementation and maintenance, and for the comparative simplicity it offers to end users.
You can combine the template technique and the all-code-in-one-database approach:
Divide the application design into two parts: a data part and a code part.
The data part contains all Notes views. If it's a classic Notes application, it would also contain all the design elements for the Notes client, like forms, subforms, frames and so on.
The code part contains all XPages, Custom Controls, CSS, client/server JavaScript libraries, Themes, images, jars and so on.
Put your 75% common code into masterData.ntf and masterCode.ntf.
The application code databases appCodeX.ntf inherit all design elements from masterCode.ntf and contain the additional application-specific design elements.
The code from all application templates is combined in allCode.ntf: it inherits everything from masterCode.ntf and the additional pieces of code from the application templates.
Based on that you create an allCode.nsf.
On the data side you use the classic template approach.
From here you have two possibilities:
You use Single Copy XPage Design - connect every appData database with allCode.nsf
You connect your XPages in allCode.nsf with appData databases
I prefer the latter. You can define in allCode.nsf where all the application data databases are located, e.g. in property documents.
With the approach outlined above you're still able to separate applications easily, e.g. in case you want to sell them: you already have a separate template for every single application.
Do you do automated testing on a complex workflow system like K2?
We are building a system with extensive integration between SharePoint 2007 and K2. I can't even imagine where to start with automated testing, as the workflow involves multiple users interacting with SharePoint, K2 workflows and custom web pages.
Has anyone done automated testing on a workflow server like K2? Is it more effort than it's worth?
I'm having a similar problem testing a workflow-heavy MOSS-based application. Workflows in our case are based on Windows Workflow Foundation.
My idea is to mock pretty much everything that you can't control from unit tests - document storage, authentication, user rights and actions, and the SharePoint-specific parts of the workflows (these mocks should be thoroughly tested to mirror the behavior of the real components).
Use inversion of control so the code chooses at runtime which component to use - real or mock.
Then you can write system-wide tests of workflow behavior - setting up your own environment and checking how the workflow engine reacts. These tests are too big to call unit tests, but it is still automated testing.
This approach seems to work on trivial cases, but I still have to prove it is worth using in real-world workflows.
Here's the solution I use. It is a simple wrapper around the runtime that allows executing a single activity, simplifies passing parameters, blocks the invoking thread until the workflow or activity is done, and translates/rethrows exceptions if any. Since my workflow only sends or waits for messages through a custom workflow service, I can mock out the service to expect certain messages from the workflow and post certain messages to it, and with that I have real unit tests for my WF! The credit for the technique goes to Michael Kennedy.
If you are going to do unit testing, Typemock Isolator is the only tool that can currently mock SharePoint objects.
And by the way, Richard Fennell is working on a workflow mocking solution here.
Just today we've written an application that monitors our K2 worklist, picks up certain tasks from it, fills in some data and submits the tasks for completion. This is allowing us to perform automated testing, find regressions, and run through many different paths of the workflow in a fraction of the time it would take people to do it. I'd imagine a similar program could be written to pretend to be SharePoint.
As for unit testing the workflow items themselves, we have a DLL referenced from K2 which contains all of our line rules and processing logic. We don't have any code in the K2 workflows themselves; it is all referenced from these DLLs. This allows us to easily write unit tests covering all of the individual line rules.
I've done automated integration testing on K2 workflows using the K2ROM API (probably SourceCode.Workflow.Client if you're using K2 blackpearl).
Basically you start a process on a test server with a known folio (I generate a GUID), then use the management API to delete it afterwards. I wrote helper methods like AssertAtClientActivity (basically calls ProvideWorkItem with criteria).
Use the IsSynchronous parameter to StartProcessInstance, WorklistItem.Finish, etc. so that relevant method calls will not return until the process instance has reached a stable state.
Expect tests to be slow and to occasionally fail. These are not unit tests.
If you want to write unit tests against other systems, you'll probably want to wrap the K2 API behind your own interface (see the sketch after these points).
Consider looking at Windows Workflow 4 and the new workflow features in SharePoint 2010. You may not need K2.
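For what it's worth, the wrapping idea can be sketched generically (shown in Java purely as an illustration; the real K2 client API is .NET, and all names here are made up): put the vendor calls behind a small interface you own, so unit tests can substitute a fake.

// Thin wrapper owned by your code base; only its implementation touches the vendor API.
interface WorkflowClient {
    String startProcess(String processName, String folio);
    void finishWorkItem(String serialNumber);
}

// Hand-rolled fake for unit tests: records calls instead of hitting a server.
class FakeWorkflowClient implements WorkflowClient {
    final java.util.List<String> startedFolios = new java.util.ArrayList<>();
    final java.util.List<String> finishedItems = new java.util.ArrayList<>();

    @Override
    public String startProcess(String processName, String folio) {
        startedFolios.add(folio);
        return "fake-instance-id"; // placeholder instance id
    }

    @Override
    public void finishWorkItem(String serialNumber) {
        finishedItems.add(serialNumber);
    }
}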