Asynchronous Programming with OOP in TypeScript - Node.js

I am having a problem writing clean OOP code, say in TypeScript, when some of my objects contain async methods: what ends up happening is that I write static methods, and whichever object or method uses these static methods is 'contaminated' and needs to be converted into a sort of promise itself... I am sure I am doing something wrong - is there some architecture trick that I am missing?
Let's take a concrete example: I am writing a Node.js app with a model object for my MongoDB document. Nothing fancy. However, when I use the object's methods in my app, no matter in which class, every method that uses them has to be async. And then every method that calls one of those methods has to be async as well... etc.
Is there a way to use the MongoDB async operations in such a way as to at least keep up the façade of normal OOP architecture, or is there an entirely new sort of logic I need to learn to write async OOP apps?
Hope I made my question clear,

Async methods don't have to be static, and there really isn't any reason that a program using async operations can't have the same overall structure as one that doesn't.
Async operations are contagious, however. That's known as the red/blue function problem: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/
It's not ideal, but it's the best that can be done without requiring some very special capabilities from the language/system used to implement JS. You can either have threads, which come with their own problems, or you can have some mechanism for copying call stacks around, as in Go and the upcoming Java Project Loom.
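To the original question: nothing forces async methods to be static. A minimal sketch (the UserRepository/GreetingService names and the User shape are invented for illustration, using the official MongoDB Node.js driver):

import { Collection, ObjectId } from "mongodb";

interface User { name: string; }

// An ordinary instance class; one method just happens to be async.
class UserRepository {
    constructor(private users: Collection) {}

    async findById(id: string): Promise<User | null> {
        // the driver call is async, so this method is async
        return this.users.findOne<User>({ _id: new ObjectId(id) });
    }
}

class GreetingService {
    constructor(private repo: UserRepository) {}

    // any method that awaits an async method becomes async itself;
    // that is the 'contamination' from the question, and it is expected
    async greet(id: string): Promise<string> {
        const user = await this.repo.findById(id);
        return user ? `Hello, ${user.name}` : "Hello, stranger";
    }
}

The class structure, encapsulation, and dependency injection are plain OOP; the async keyword propagating up the call chain is the only trace of the red/blue split, and it is something to accept rather than design away.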

Related

CodeIgniter4 - Difference between Libraries, Helpers and Models

I am taking my first steps in CodeIgniter4.
Now I am asking myself: what are the big differences between a "Model", where I initially do all database-related things, a "Helper", where I define a set of functions, and a "Library"?
In which cases should I create my own library, helper, or model?
The CI4 docs don't give me an answer to that, so I hope someone can explain it for me (and for others).
The documentation is pretty straightforward when it comes to Models; there are really no caveats there. A Model is a class that represents a single table in the database, and it provides a wide variety of related functionality: built-in CRUD methods, the ability to save Entities, transparent use of Query Builder methods, etc.
In general, you would typically have one Model class per database table that you're working with. That being said, you do not necessarily need Models in order to work with a database; however, if you expect to need the functionality a Model provides, using one can save you the work of doing it yourself.
The documentation is indeed far more opaque on the differences between Libraries and Helpers. As such, I've found the most objective measure of difference to be in how CodeIgniter itself utilizes them, and the distinction is rather subtle:
Libraries provide their functionality through methods that exist within the namespace of the class they're defined in.
Helpers provide their functionality through functions that exist within the namespace of the importing class.
(NB: In PHP, a method is simply the name for a function that's defined within a class)
In other words, Libraries are typically non-static classes that (like all non-static classes) must be 'constructed' before use. The methods defined within that class reside within the namespace of the class itself, not the class they're called from.
For example, to gain access to the current Session object, I retrieve an instance of the Session Library class: $session = session(); Using $session, I can then invoke methods provided by that Session class, like $session->destroy().
On the other hand, Helpers are typically just a collection of functions that are imported into the namespace of the importing class itself. They are called in the current context and their use is not predicated upon the use of an object.
For example, if I loaded the Array Helper (helper('array');), it would grant me access to a handful of functions that I could call directly from the current context (e.g. $result = array_deep_search(...)). I didn't have to create an object first, that function was imported directly into my current namespace.
This distinction could matter for a couple of reasons, the biggest of which is probably naming conflicts. If you have two Helpers, each with an identically-named function, you can't import both of those functions at the same time. On the other hand, you could have one hundred different classes with the destroy() method defined, because each of those method definitions resides within the namespace of the defining class itself and is invoked through an instance of that specific class.
You may notice that all of CI's Helper functions are prefixed with the name of the Helper itself, e.g. 'array' or 'form'; this is to prevent the aforementioned naming conflict.
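To make the namespacing point concrete in TypeScript terms (this page opened with a TypeScript question; the module names below are made up, and each helper module is assumed to export a deepSearch function):

// helper-style: free functions land in the importing scope,
// so two same-named imports collide unless one is aliased
import { deepSearch } from "./array_helper";
import { deepSearch as formDeepSearch } from "./form_helper"; // alias required

// library-style: each destroy() lives in its own class's namespace,
// so any number of classes can define one without conflict
class Session {
    destroy(): void { /* tear down the session */ }
}
class Cache {
    destroy(): void { /* flush the cache */ }
}

The mechanics differ between PHP and TypeScript, but the trade-off is the same: class members are isolated per class, while imported free functions share one flat scope.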
This probably doesn't answer when you want to use one or the other, and in truth that really comes down to your opinion. In the end, it's all just namespacing. You can put things wherever you want, as long as the code that needs it knows where to look for it. I personally try to follow CI's lead on things, so I tend to follow their distinction between the two.

ViewModel and multithreading best practice

I want to understand some best practices regarding using MVVM and multithreading. Let us assume I have a ViewModel and it has an ObservableCollection. Also, let us assume I pass this collection to another service class which does some calculation and then updates my collection.
At some point I realize that I want to make this a multithreaded call. When I make the call to the service class using threads or tasks, what results is a cross-thread operation. The reason is quite obvious: the service class updates the collection on a background thread, which in turn tries to update the UI from that background thread.
In such scenarios, what is the best practice? Should we always write our service class in such a way that it first clones the input and then updates the cloned copy? Or should the view model always assume that service calls might be multithreaded and send a cloned copy?
What would be the recommended way to solve this?
Thanks
Jithu
A solution that might avoid the cross-thread exception is to implement OnPropertyChanged in the base class of all ViewModels so that it switches to the correct thread/synchronization context; that way, all handlers in the View that are bound to the changing property will be called on the correct thread. See: Avoid calling BeginInvoke() from ViewModel objects in multi-threaded c# MVVM application
If/when you create copies, you are postponing the synchronization and, in many cases, making it harder than it needs to be.
A web service will always return new objects; how you, or a framework, update the model using these objects is up to you. A lot depends on the number of checks and updates coming in. There is no single recommended way; use whatever fits the application's requirements.
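The WPF specifics aside, the underlying rule is that only the thread that owns the collection should mutate it, and background work should post results back to the owner. As a rough analogy in Node.js terms (a sketch using the built-in worker_threads module; the doubling 'calculation' is made up):

import { Worker, isMainThread, parentPort, workerData } from "node:worker_threads";

if (isMainThread) {
    const items: number[] = [1, 2, 3]; // the 'bound collection', owned by this thread
    const worker = new Worker(__filename, { workerData: items });
    worker.on("message", (updated: number[]) => {
        // only the owning thread applies the results
        items.length = 0;
        items.push(...updated);
    });
} else {
    // the 'service': workerData was structured-cloned on the way in,
    // so this is already a copy and the original is never touched
    const copy = (workerData as number[]).map((n) => n * 2);
    parentPort!.postMessage(copy);
}

Note that workerData is cloned automatically, which mirrors the 'send a cloned copy' option from the question: the worker computes on its copy, and the main thread remains the collection's only writer.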

Can I skip operations in nested contexts?

Before nested contexts, my life was easy: I declared some common methods on my NSManagedObject subclasses, such as:
awakeFromFetch
awakeFromInsert
validateX
didChangeValueForKey
Through trial and error (call it life lessons), I learned to be more careful with those methods when using inheritance with my entities.
Now, jumping to the nested-context part of Core Data, I am getting used to seeing the very same NSManagedObject existing in several contexts. I am trying to grasp a key part of this: the above-mentioned methods might be called several times from different contexts, but things like validation, value observation, and initialization are only relevant once. Therefore I normally check whether self.managedObjectContext corresponds to the main context I work with. If they are the same, I run the whole method; if not, I just call [super] and return.
Is that the way I am supposed to deal with nested contexts?
Doing this has saved me several times. I was surprised to see awakeFromInsert called more than once for the same object, but even if I abort the initialization part of that method in all contexts but the main one, the information is persisted back to the store. I would like some confirmation that I am dealing with this topic the right way.
UPDATE:
Trying to make my ideas clear, I wrote a post about this topic on my blog. What I wrote there has saved me several times from invalid entity layouts and other errors. However, I am not sure whether there are better approaches than mine.
http://www.digital-lumberjack.com/blog/2012/11/surviving-core-data/

Partial-mocking considered bad practice? (Mockito)

I'm unit-testing a business object using Mockito. The business object uses a DAO which normally gets data from a DB. To test the business object, I realized that it was easier to use a separate in-memory DAO (which keeps data in a HashMap) than to write all the
when(...).thenReturn(...)
statements. To create such a DAO, I started by partial-mocking my DAO interface like so:
when(daoMock.getById(anyInt())).then(new Answer<Object>() {
    @Override
    public Object answer(InvocationOnMock invocation) throws Throwable {
        // look the requested id up in the backing HashMap
        // ('map' is a field of the test class)
        int id = (Integer) invocation.getArguments()[0];
        return map.get(id);
    }
});
but it occurred to me that it would be easier to just write a whole new DAO implementation myself (using an in-memory HashMap) without using Mockito at all (no need to get arguments out of that InvocationOnMock object) and make the tested business object use this new DAO.
Additionally, I've read that partial mocking is considered bad practice. My question is: is what I'm doing bad practice in my case? What are the downsides? To me this seems OK, and I'm wondering what the potential problems could be.
I'm wondering why you need your fake DAO to be backed by a HashMap. I'm wondering whether your tests are too complex. I'm a big fan of having very simple test methods that each test one aspect of your SUT's behaviour. In principle, this is "one assertion per test", although sometimes I end up with a small handful of actual assert or verify lines, for example, if I'm asserting the correctness of a complex object. Please read http://blog.astrumfutura.com/2009/02/unit-testing-one-test-one-assertion-why-it-works/ or http://blog.jayfields.com/2007/06/testing-one-assertion-per-test.html to learn more about this principle.
So for each test method, you shouldn't be using your fake DAO over and over. Probably just once, or twice at the very most. Therefore, having a big HashMap full of data would seem to me to be EITHER redundant, OR an indication that your test is doing WAY more than it should. For each test method, you should really only need one or two items of data. If you set these up using a Mockito mock of your DAO interface, and put your when ... thenReturn in the test method itself, each test will be simple and readable, because the data that the particular test uses will be immediately visible.
You may also want to read up on the "arrange, act, assert" pattern, (http://www.arrangeactassert.com/why-and-what-is-arrange-act-assert/ and http://www.telerik.com/help/justmock/basic-usage-arrange-act-assert.html) and be careful about implementing this pattern INSIDE each test method, rather than having different parts of it scattered across your test class.
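For illustration, here is the same per-test stubbing idea in a TypeScript/Jest flavored sketch rather than Java/Mockito (the Dao interface and greet function are invented); the arrange-act-assert structure sits entirely inside the test method, and the test stubs only the one item it needs:

interface Dao {
    getById(id: number): string | undefined;
}

function greet(dao: Dao, id: number): string {
    const name = dao.getById(id);
    return name ? `Hello, ${name}` : "Who?";
}

test("greets a known user", () => {
    // arrange: stub exactly the one value this test needs
    const dao: Dao = { getById: jest.fn().mockReturnValue("Ada") };
    // act
    const result = greet(dao, 1);
    // assert
    expect(result).toBe("Hello, Ada");
});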
Without seeing more of your actual test code, it's difficult to know what other advice to give you. Mockito is supposed to make mocking easier, not harder; so if you've got a test where that's not happening for you, then it's certainly worth asking whether you're doing something non-standard. What you're doing is not "partial mocking", but it certainly seems like a bit of a testing smell to me. Not least because it couples lots of your test methods together - ask yourself what would happen if you had to change some of the data in the HashMap.
You may find https://softwareengineering.stackexchange.com/questions/158397/do-large-test-methods-indicate-a-code-smell useful too.
When testing my classes, I often use a combination of Mockito-made mocks and also fakes, which are very much what you are describing. In your situation I agree that a fake implementation sounds better.
There's nothing particularly wrong with partial mocks, but it makes it a little harder to determine when you're calling the real object and when you're calling your mocked method--especially because Mockito silently fails to mock final methods. Innocent-looking changes to the original class may change the implementation of the partial mock, causing your test to stop working.
If you have the flexibility, I recommend extracting an interface that exposes the method you need to call, which will make it easier whether you choose a mock or a fake.
To write a fake, implement that small interface without Mockito using a simple class (nested in your test, if you'd like). This will make it very easy to see what is happening; the downside is that if you write a very complicated Fake you may find you need to test the Fake too. If you have a lot of tests that could make use of a good Fake implementation, this may be worth the extra code.
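As a sketch of such a fake (in TypeScript rather than Java, to match the rest of this page; the Dao shape is invented), it is just a tiny class with real behavior and toy storage, no mocking library involved:

interface Dao<T> {
    getById(id: number): T | undefined;
    save(id: number, value: T): void;
}

class InMemoryDao<T> implements Dao<T> {
    private store = new Map<number, T>();
    getById(id: number): T | undefined { return this.store.get(id); }
    save(id: number, value: T): void { this.store.set(id, value); }
}

// in a test, seed only what that test needs:
const dao = new InMemoryDao<string>();
dao.save(1, "Ada");

Because the fake has genuine logic, keep it small; as noted above, a complicated fake eventually needs tests of its own.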
I highly recommend "Mocks Aren't Stubs", an article by Martin Fowler (famous for his book Refactoring). He goes over the names of the different types of test doubles and the differences between them.

DDD and Node.js: Good practice to have a repository with async callbacks?

Here's the thing:
a client shouldn't be aware of the persistence mechanism used by a repository
When, say, implementing a repository with MongoDB as the persistence store, retrieving a value is done through async callbacks
a sync call can still be implemented using a callback; an async call MUST be implemented using a callback. (Or futures, etc., but I don't want to go there)
To me this means that repositories (at least for fetches) should have a callback defined in the interface, even though the implementation may happen to be synchronous, e.g.:
var repo = {
    cache: {},
    getById: function (id, callback) {
        // synchronous lookup, but still reported through the callback
        callback(null, this.cache[id]);
    }
};
Since I might (will) change this soon to something like:
var repo = {
    getById: function (id, callback) {
        // genuinely asynchronous; note this assumes a Mongoose model
        // (e.g. Item), since mongoose itself has no findOne method
        Item.findOne({ _id: id }, callback);
    }
};
IMHO, for C/U/D the same need arises, primarily for error reporting from the persistence layer.
In short: do you consider it best practice to define repositories (in Node.js, although that's not really relevant) with async callbacks?
I would consider it a good practice if the remainder of the surrounding application code is also async. It certainly is a good practice technically, because repositories are IO-bound and there is no sense in blocking calling threads. The problem is that async callbacks result in a continuation-passing style, which incurs a learning curve and a degree of friction in most languages.
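Today the same uniform-interface argument is usually made with Promises and async/await instead of raw callbacks. A minimal sketch (the Item shape and repository names are assumptions, using the MongoDB Node.js driver for the second implementation):

import { Collection, ObjectId } from "mongodb";

interface Item { _id: ObjectId; name: string; }

interface ItemRepository {
    // async contract, regardless of the backing store
    getById(id: string): Promise<Item | null>;
}

// in-memory implementation: synchronous inside, async at the boundary
class CacheItemRepository implements ItemRepository {
    private cache = new Map<string, Item>();
    async getById(id: string): Promise<Item | null> {
        return this.cache.get(id) ?? null;
    }
}

// MongoDB implementation: same contract, genuinely async inside
class MongoItemRepository implements ItemRepository {
    constructor(private items: Collection<Item>) {}
    async getById(id: string): Promise<Item | null> {
        return this.items.findOne({ _id: new ObjectId(id) });
    }
}

Callers depend only on ItemRepository, so swapping the cache for MongoDB (the exact migration the question anticipates) changes no client code.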
