I know how to set up a json_decode function in Twig, but why is there no native support for decoding? I can easily call json_encode without registering a Twig filter, but that is not the case for json_decode.
It seems logical to have it as a native function. Am I missing the rationale behind not having it? Maybe it's computationally expensive?
json_encode makes sense as a built-in; json_decode, however, does not really.
It adds a non-trivial dependency on the assumption that the data passed in is JSON.
Filters are there to transform data, not to craft it. Computations that are not mere transformations should be done ahead of time.
One could also argue that json_encode should be done ahead of time, but considering how frequent it is to return/send JSON, it seems fair enough to do it in the template.
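To illustrate the "do it ahead of time" argument in a language-neutral way, here is a small Python sketch (the format string stands in for a Twig template; the variable names and sample JSON are made up):

```python
import json

# Raw JSON arrives from somewhere external (an API response, a DB column, ...).
raw = '{"title": "Hello", "tags": ["twig", "json"]}'

# Decode in the controller, before rendering, so the template never has to
# know (or assume) that the underlying data was JSON.
data = json.loads(raw)

# The template then only deals with ready-to-use values.
template = "{title} [{tags}]"
rendered = template.format(title=data["title"], tags=", ".join(data["tags"]))
print(rendered)  # Hello [twig, json]
```

The template stays a pure transformation of already-structured data, which is exactly the separation the answer argues for.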
PS:
This seems to be a primarily opinion-based question (unless there is an official answer to it).
I am building an app in which I have a Room entity, one of whose columns is supposed to hold a List.
What is the best approach for doing this in an app that uses Flow, Coroutines and Room?
I tried serializing with Jackson (turning the List into a long JSON String and bringing it back to a List when fetched), but I am not sure whether this is the correct approach.
Thank you,
What is the best approach for doing this in an app that uses Flow, Coroutines and Room?
This is very much open to opinion.
From a database perspective the approach would be to store each list as its own table, thus:
reducing the JSON bloat and thus improving efficiency,
reducing duplication and thus being more likely to conform to normalisation,
avoiding potential complexities and even greater inefficiencies (e.g., not mentioned in the answer below, but a LIKE pattern with a wildcard as the first character forces a full table scan),
perhaps consider this question and answer matching multiple title in single query using like keyword where, if the table-per-list approach were taken, a simple SELECT * FROM task WHERE task_tags IN(:taglist) could do the same
From a coding point of view, embedding JSON is at first simpler, because the complex code lives inside the JSON libraries.
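The JSON-embedding approach the question describes boils down to this round trip, sketched here in Python rather than Kotlin/Jackson (the values are made up, only the idea matters):

```python
import json

# Writing: serialize the list into one string, which is what would be stored
# in the single TEXT column of the entity.
tags = ["work", "urgent", "home"]
stored = json.dumps(tags)

# Reading: deserialize the column back into a list when the row is fetched.
restored = json.loads(stored)
assert restored == tags

# The trade-off: the database only ever sees an opaque string, so it cannot
# index, join on, or efficiently query the individual list elements.
```

This is the simplicity the answer concedes to the JSON approach, and the final comment is the efficiency cost it argues against.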
My question may seem dumb to the experienced, but hey, I am just trying to learn. In a react-redux-thunk setup, or for that matter any similar setup, should I use complex joins on the backend, or return normalized values to the front end as much as possible and use something like Redux selectors to perform something similar to joins?
The second approach, it feels, will let me keep the state light, but at the same time, without proper algorithms things can get messy, like running three nested loops and increasing time complexity.
Any thoughts or pointers to articles on best practices in this regard?
Should I use complex joins at the backend?
Yes, in case you have complex logic or data structures and need more computational power to do calculations/mutations on the data.
Try to avoid too much computation in the UI, for a better user experience.
Server-side languages (Java, C#, etc.) shine for this use case.
(or)
Return normalized values to the front end as much as possible and use something like Redux selectors to perform something similar to joins?
Yes, in case you have plain data structures and you are not performing too many manipulations of nested structures in the UI.
Check this for ways to normalise your Redux store data.
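A "join in a selector" over a normalized store can be sketched like this (Python dicts stand in for the Redux state; all the shapes and names here are made up for illustration):

```python
# Normalized state: entities keyed by id, no nesting and no duplication.
state = {
    "users": {1: {"id": 1, "name": "Ada"}, 2: {"id": 2, "name": "Lin"}},
    "posts": {10: {"id": 10, "authorId": 1, "title": "Hello"},
              11: {"id": 11, "authorId": 2, "title": "World"}},
}

def select_posts_with_authors(state):
    """A selector that 'joins' posts to their authors with one dict lookup
    per post, i.e. O(number of posts), not nested loops."""
    users = state["users"]
    return [dict(post, author=users[post["authorId"]]["name"])
            for post in state["posts"].values()]

result = select_posts_with_authors(state)
```

Because lookups by id are constant time, this kind of selector avoids the nested-loop blowup the question worries about.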
As a normal user, I am OK with waiting a fraction of a second more for a server response, whereas any lag while using the application after it has loaded (clicking, navigating to other tabs, etc.) is a worse experience.
To answer your question in one word: it depends.
Remember, only the user experience matters in the end.
I might have just missed them, but I can't seem to find any mention of immutable data structures in Pharo. Coming from functional languages, I've found the immutable map and set useful on various occasions. Even though Pharo has a particular bias towards using mutation, I'd be surprised if nobody got around to implementing them yet.
The code at http://source.lukas-renggli.ch/container/ implements a modern container and iterator library; with mutable and immutable lists; unmodifiable views; and sorted, ordered and unordered set and map data structures. It also supports efficient lazy iteration over all containers using common filtering, mapping, flattening, partitioning, ... operations.
I am not claiming the library has a perfect design or is more performant than the standard collection library, but it is certainly a good starting point for further exploration.
It is completely possible that someone implemented something like that. And maybe there will be immutable collections as a part of the main library in the future. However, for now, there is nothing like that and it is for a very simple reason: what for? When I started to learn Pharo I was fascinated by the null-propagation idea of Objective-C (if you have null, and you send a message to a null you get null back, etc...) So the first thing that I did was to implement null propagation in Pharo. It was fun, it was educational, and it was completely useless. It was useless because no one uses Pharo in that way, it was a wrong approach for that context. I strongly encourage you to make your own immutable collections in Pharo.
But while you do this, think about what should be immutable and why. Is it about shrinking or growing a collection? Arrays are like that — they are fixed size. Is it about not being able to add/remove/swap elements? But what if you get an element and modify it? Finally, consider this example:
array := #('a' 'b' 'c').
array first become: 'd'.
array = #('d' 'b' 'c')
I don't use any setters, and still I end up with a different array in the end.
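The same "no setters, yet the contents changed" pitfall exists wherever immutability is shallow; here is the analogous situation sketched in Python:

```python
# A tuple is immutable, but only shallowly: its elements can still mutate.
frozen = (["a"], ["b"], ["c"])
frozen[0].append("d")  # no assignment to the tuple itself is performed...
assert frozen == (["a", "d"], ["b"], ["c"])  # ...yet its contents changed
```

So "immutable collection" is only a meaningful promise once you decide whether it covers the structure, the elements, or both, which is exactly the question the answer raises.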
The Pharo community cares about transparency and good design. It is known that you shouldn't modify the contents of collections directly, that you shouldn't interact with the internal state of objects from outside, etc. On the other hand, no one will punch you in the face if you want to do that. I mean, what if you are prototyping? What if you are hacking? What if there is literally no other way? You are always able to choose; the question is how we can help people learn about the better choices.
P.S. My answer may sound like immutability is not important. That's not the case. There were even prototypes of read-only objects that can be used to ensure a certain degree of security. It's just not that simple to come up with a single concept that will work for everything.
I discovered Helm (http://helm-engine.org/) the other day and I've been playing a bit with it. I like Elm, so Helm has been great to use so far.
Simply put, the update function gets called every tick, gets passed the model, and has to return an updated version of that model. Another function then gets called to render that model on screen.
For small games with not much in the model it seems ideal, but I've been thinking about something a bit bigger, for which a HashMap (or a lot of them) would be ideal, and I'm wondering about the performances of that.
I'm no expert, but I believe using Data.HashTable.IO would modify the hashtable in RAM instead of creating a new copy on change; that seems complicated to interface with Helm, though. It would mean using a Cmd for each lookup and each change, returning that to Helm, and then getting passed the result in a new call to update: a nightmare to use if you have more than one or two things to look up, I think.
Data.HashMap.Strict (or Lazy?) would probably work better, but I imagine each change would create a new copy, and the GC would free up the old one at some future point. Is that correct?
That would potentially mean hundreds of copy then free per frame, unless the whole thing is smart enough to realise I'm not using the old hashtable again after the change and just not copy it.
So how does this work in practice? (I'm thinking of HashMap because it seems like the easier solution, but I guess this applies to regular lists too.)
I support the comments about avoiding premature optimization and benchmarking, instead of guessing, to determine if performance is acceptable. That said, you had some specific questions too.
Data.HashMap.Strict (or Lazy?) would probably work better, but I imagine each change would create a new copy, and the GC would free up the old one at some future point. Is that correct?
Yes, the path to the modified node will consist of new nodes. Modulo balancing, the subtrees to the left and right of the path will all be shared (not copied) by the old and new tree structures.
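The "new path, shared subtrees" behaviour can be demonstrated with a toy path-copying update on nested tuples (a Python sketch only; real HAMTs like Data.HashMap are far more elaborate, and the function name here is made up):

```python
def update(tree, path, value):
    """Return a new tree with `value` at `path`, copying only the nodes on
    the path and sharing every untouched subtree with the original."""
    if not path:
        return value
    head, *rest = path
    # Rebuild only this node; every sibling subtree is reused as-is.
    return tuple(update(child, rest, value) if i == head else child
                 for i, child in enumerate(tree))

old = ((1, 2), (3, 4))
new = update(old, [1, 0], 99)   # replace the 3

assert new == ((1, 2), (99, 4))
assert old == ((1, 2), (3, 4))  # the old version is untouched
assert new[0] is old[0]         # the left subtree is shared, not copied
```

Only O(depth) nodes are allocated per update, which is why "hundreds of updates per frame" does not mean hundreds of full copies.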
That would potentially mean hundreds of copy then free per frame
I'm not sure where you get "hundreds" from. Are you saying there are hundreds of updates? For some structures there are rewrite rules that allow many of the intermediate values to be used in a mutating manner. See, for example, this small examination of vector.
So how does this work in practice?
In practice, people implement what they want and rework the parts that are too slow. I might reach for HashMap early, instead of assuming that containers' Data.Map will suffice, but I don't go beyond that without evidence.
I need an array-like data structure with the fastest possible functional update. I've seen a few different implementations of flexible arrays that provide this property (Braun trees, random access lists), but I'm wondering whether there is an implementation specifically optimized for the case where we are not interested in append or prepend, just updates.
Jean-Christophe Filliâtre has a very nice implementation of persistent arrays, described in the paper linked on the same page (which is about persistent union-find, of which persistent arrays are a core component). The code is directly available there.
The idea is that "the last version" of the array is represented as an ordinary array, with O(1) access and update operations, and previous versions are represented as this last version plus a list of differences. If you try to access a previous version of the structure, the array is "rerooted" to apply the list of differences and present you the efficient representation again.
This will of course not be O(1) under all workflows (if you constantly access and modify unrelated versions of the structure, you will pay rerooting costs frequently), but for the common workflow of mainly working with one version, and occasionally backtracking to an older version that becomes the "last version" again and receives the updates, it is very efficient. A very nice use of mutability hidden under an observationally pure interface.
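The rerooting scheme can be sketched in a few lines of Python (the class and method names are mine, not Filliâtre's; his OCaml code is the reference implementation):

```python
class PArray:
    """Persistent array sketch: the newest version holds a flat list; each
    older version holds a single (index, old_value) diff pointing toward it."""

    def __init__(self, state):
        # state is ("arr", list) or ("diff", index, value, other_parray)
        self.state = state

    @classmethod
    def make(cls, n, fill=0):
        return cls(("arr", [fill] * n))

    def _reroot(self):
        """Make self the flat version, reversing the diffs along the way."""
        if self.state[0] == "arr":
            return self.state[1]
        _, i, v, parent = self.state
        arr = parent._reroot()
        old = arr[i]
        arr[i] = v                             # apply this version's diff
        parent.state = ("diff", i, old, self)  # parent now diffs against us
        self.state = ("arr", arr)
        return arr

    def get(self, i):
        return self._reroot()[i]

    def set(self, i, v):
        """Cheap when called on the newest version: mutate the flat list
        and turn self into a diff against the new version."""
        arr = self._reroot()
        old = arr[i]
        arr[i] = v
        new = PArray(("arr", arr))
        self.state = ("diff", i, old, new)
        return new

a0 = PArray.make(3)   # logically [0, 0, 0]
a1 = a0.set(0, 7)     # logically [7, 0, 0]
a2 = a1.set(1, 8)     # logically [7, 8, 0]
assert (a2.get(0), a2.get(1)) == (7, 8)
assert a0.get(0) == 0  # old versions stay valid (this triggers rerooting)
```

Reading an old version mutates the internal representation but never the observable contents of any version, which is the "mutability hidden under an observationally pure interface" point.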
I have a very good experience with repa (nice demo). Very good performance, automatic parallelism, multidimensional, polymorphic. Recommended to try.
Which language are you using? In Haskell you can use mutable arrays with the state monad, and in Mercury you can use mutable arrays by threading the IO state. OCaml also has an array module, which unfortunately does not maintain referential transparency, if that is what you are after.
I also needed functional arrays and stumbled upon this SO question some days ago. I was not satisfied with the solution proposed by Gasche, as creating a new array is a costly operation and I need to access older versions of the array quite frequently (I plan to use this for an AI alpha/beta implementation playing on an array).
(When I say costly, I guess it is O(n*h), where h is the history size, because in the worst case only one cell was updated repeatedly and the whole update list has to be traversed for each cell. I also expect most of the cells not to have been updated when I need to reroot the array.)
This is why I propose another approach; maybe I can get some feedback here. My idea is to store the array like in a B-tree, except that, as it is not mutable, I can access and update any value by index quite easily.
I wrote a small introduction in the project's repository: https://github.com/shepard8/ocaml-ptarray. The order is chosen so as to balance the depth and the order of the tree, so I get nice complexities for get/set operations that depend only on the order, namely O(k^2).
With k = 10 I can store up to 10^10 values. Actually, my arrays should not contain more than 200 values, but this shows how robust my solution is intended to be.
Any advice welcome!