Coding convention: Matching backend with frontend labels - frontend

In my backend, I have data attributes labeled in camelCase:
customerStats: {
  ownedProducts: 100,
  usedProducts: 50,
},
My UI code is set up in a way that an array of ["label", data] pairs works best most of the time, i.e. it is most convenient for frontend coding. In my frontend, I need these labels in proper English spelling so they can be used as-is in the UI:
customerStats: [
  ["Owned products", 100],
  ["Used products", 50],
],
My question is about best practices or standards in web development. I have been inconsistent in my past projects where I would convert the data at random places, sometimes client-side, sometimes right on the backend, sometimes converting it one way and then back again because I needed the JSON data structure.
Is there a coding convention how the data should be supplied to the frontend?
Right now all my data is transferred to the frontend as JSON. Is it best practice to convert the data to the form needed on the frontend or on the backend? And what if I need the JSON attributes for further calculations right on the client?
Technologies I am using:
Frontend: JavaScript / React
Backend: JavaScript / Node.js + Java / Java Spring

Is there a coding convention for how to transfer data to the front end?
If your front end is JavaScript based, then JSON (JavaScript Object Notation) is the simplest format to consume: it is a stringified version of the objects in memory. See this healthy discussion for more information on JSON.
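As a tiny illustration of that point (payload shape borrowed from the question above):

// Serializing on the server and parsing on the client is all JSON involves.
const payload = JSON.stringify({ ownedProducts: 100, usedProducts: 50 }); // server side
const stats = JSON.parse(payload); // client side: back to a plain object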
Given that the most popular front-end development language these days is JavaScript (see the latest SO Survey on technology), it is very common and widely accepted to use the JSON format to transfer data between the back and front end of solutions. The decision to use JSON in non-JavaScript-based solutions is influenced by the development and deployment tools that you use; since more developers are using JavaScript, most of our tools are engineered to support JavaScript in some capacity.
It is however equally acceptable to use other structured formats, like XML.
JSON is generally more light-weight than XML as there is less provision made to transfer meta-data about the schema. For in-house data streams, it can be redundant to transfer a fully specced XML schema with each data transmission, so JSON is a good choice where the structure of the data is not in question between the sender and receiver.
XML is a more formal format for data transmission; it can include full schema documentation that allows receivers to utilize the information with little or no additional documentation.
CSV or other custom formats can reduce the bytes sent across the wire, but make it hard to visually inspect the data when you need to, and there is an overhead at both the sending and receiving end to format and parse the data.
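For a concrete sense of the size difference, here is roughly the same customerStats payload from the question in JSON and in XML (schema declarations omitted; the XML element names are just one possible mapping):

{ "customerStats": { "ownedProducts": 100, "usedProducts": 50 } }

<customerStats>
  <ownedProducts>100</ownedProducts>
  <usedProducts>50</usedProducts>
</customerStats>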
Is it best practice to convert the data to the form that is needed on the frontend or backend?
The best practice is to reduce the number of times that a data element needs to be converted. Ideally you never have to convert between a label and the data property name... This is also the primary reason for using JSON as the data transfer format.
Because JSON can be natively interpreted in a JavaScript front end, we can essentially reduce conversion to just the server-side boundary where data is serialized/deserialized. There is in effect no conversion in the front end at all.
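A minimal sketch of that boundary, assuming a hypothetical /api/customer-stats endpoint that returns the question's JSON:

// The only conversion is deserialization at the transport boundary;
// the parsed object is used directly by the front-end code.
async function loadCustomerStats() {
  const response = await fetch("/api/customer-stats");
  return response.json(); // { ownedProducts: 100, usedProducts: 50 }
}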
How to refer to data by the property name and not the visual label
The generally accepted convention in this space is to separate the concerns between the data model and the user experience, the view. Importantly, the view is the layer closest to the user; it represents a given 'point of view' of the data model.
It is hard to tailor a code solution for OP without any language or code provided for context. In an abstract sense, applying this concept means not having the data model carry the final information about how the data should be displayed; instead, another piece of code provides the information needed to present the data.
In different technologies and platforms we refer to this in different ways but the core concept of separating the Model from the View or Presentation is consistently represented through these design patterns:
Exploring the MVC, MVP, and MVVM design patterns
MVP vs MVC vs MVVM vs VIPER
For OP's specific scenario, this might involve a mapping structure like the following:
customerStatsLabels: {
  ownedProducts: "Owned products",
  usedProducts: "Used products",
}
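As a minimal sketch (the helper name is hypothetical), the view layer can then combine that map with the raw data to produce the ["label", value] pairs the UI wants:

function toLabelledPairs(stats, labels) {
  // Keep the order of the label map and look each value up by property name.
  return Object.keys(labels).map(function (key) {
    return [labels[key], stats[key]];
  });
}

// toLabelledPairs({ ownedProducts: 100, usedProducts: 50 }, customerStatsLabels)
// -> [["Owned products", 100], ["Used products", 50]]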
If this question is updated with some code around how the UI is constructed I will update this response with something more specific.
NOTE:
In JavaScript, an object converts readily to an array of [key, value] pairs and back (for example with Object.entries and Object.fromEntries), so it is very easy to tweak existing code that is based on arrays into code based on objects and vice versa.
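For example (Object.fromEntries requires ES2019+):

const stats = { ownedProducts: 100, usedProducts: 50 };
const pairs = Object.entries(stats);          // [["ownedProducts", 100], ["usedProducts", 50]]
const backToObject = Object.fromEntries(pairs); // { ownedProducts: 100, usedProducts: 50 }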

Related

TipTap: Should I use JSON or HTML for backend storage

The TipTap editor and its progeny quasar-tiptap can export user-created content in the browser in both HTML and JSON formats. If I plan on allowing round trips of the user data to my server, what are the pros and cons of using either format for storage?
I would assume HTML has a greater likelihood of XSS attacks, and indeed such a vulnerability has been found (and rectified) in the past. And using JSON would be easier for backend parsing should it ever be required.
Beyond this, are there any major benefits of using either format? Preserving fidelity of user input is important. Size is important (any differences in image storage?). Editor performance is important. Scripting attack vulnerability is extremely important.
Which to choose?
I am sure that 12 months later you have made your choice. The usual caveats apply that this question is not really answerable and can only really elicit opinions. However, this is the best I can offer (and I am definitely no expert):
User-entered text is a risk (cross-site scripting, SQL injection, potentially creating a false alert dialog with disinformation for the user reading another user's content) regardless of how the rich-text data is sent to the server and/or stored. This is because the content will eventually (a) very likely be stored in a database and (b) be sent to and interpreted by a browser as HTML. Whether it goes through an intermediary JSON format is largely irrelevant. Stripping out all unwanted tags, attributes and SQL commands is going to be an extremely important server responsibility (known as sanitising), whatever the format. Many more well-tested sanitising libraries exist for HTML, in many more languages/server-side technologies, than for the relatively niche ProseMirror JSON format used by TipTap.
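As a rough Node.js sketch of that server responsibility (assuming the sanitize-html npm package and a whitelist that you would tune for your own content):

const sanitizeHtml = require("sanitize-html");

function sanitizeUserContent(dirtyHtml) {
  // Strip everything down to an explicit whitelist before the content is
  // stored or echoed back to other users' browsers.
  return sanitizeHtml(dirtyHtml, {
    allowedTags: ["p", "br", "strong", "em", "ul", "ol", "li", "a", "img"],
    allowedAttributes: {
      a: ["href"],
      img: ["src", "alt"],
    },
    allowedSchemes: ["http", "https", "data"], // e.g. keep Base64 data URIs for images
  });
}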
The TipTap documentation talks about the advantages of the 2 formats here
The ProseMirror JSON format is reasonably verbose, and inspecting both the JSON and HTML side by side, the JSON format seems to require more characters. Obviously it will depend on the rich text itself (how many formatting marks, etc.), but I would not worry about this in terms of server storage. Given you are likely using Vue or React with TipTap, you can probably set up an editor and have 2 separate divs displaying the output side by side. You can see the 2 formats for the same text by scrolling to the bottom of this page.
Editor performance - if you store the data as HTML you can pass it straight to the client. If you store it as JSON you will have to parse it and create the HTML. If you do this client side, the client will have to download and execute a library to perform this task - on NPM this is currently 14kB, so not a big issue.
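If you do end up converting the stored JSON to HTML on the client, a minimal sketch assuming a TipTap v2 setup with the @tiptap/html helper and StarterKit extensions (the 14kB package mentioned above may well be a different library):

import { generateHTML } from "@tiptap/html";
import StarterKit from "@tiptap/starter-kit";

// ProseMirror JSON as stored on the server.
const storedJson = {
  type: "doc",
  content: [
    { type: "paragraph", content: [{ type: "text", text: "Hello TipTap" }] },
  ],
};

const html = generateHTML(storedJson, [StarterKit]); // "<p>Hello TipTap</p>"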
Both formats will almost certainly send image data to the server as Base64 strings, so no bandwidth will be saved with either format for image files. Base64 is less efficient for DB storage as compared to saving as binary objects, particularly if compression is used, so you could strip these out at the server, but this will have a cost in time spent developing and testing the backend which may well be better spent on other things
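If you did decide to strip images out server-side, a minimal Node.js sketch (helper name hypothetical) of turning a Base64 data URI into binary for storage might look like this:

function dataUriToBuffer(dataUri) {
  // e.g. "data:image/png;base64,iVBORw0KGgo..."
  const match = /^data:(.+?);base64,(.*)$/.exec(dataUri);
  if (!match) throw new Error("Not a Base64 data URI");
  return { mimeType: match[1], data: Buffer.from(match[2], "base64") };
}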

Storing data obtained from Information Extraction

I have some experience with Java and I am a student doing my final-year project.
I need to work on a project in natural language processing. I am currently trying to work with the stanford-nlp libraries (but I am not locked to them; I can change my tool), so answers can be for any tool appropriate for my problem.
I have planned to work on Information Extraction (IE), and have seen some pages/PDFs that explain how it works with various NLP techniques. The data will be processed with NLP, and I need to perform Information Retrieval (IR) on the processed data.
My problem now is: what data structure or storage medium should I use to store the data I have retrieved using NLP techniques?
That data store must have the capacity to support queries.
XML and JSON do not look like ideal candidates (I could be wrong); if they can be, then some help/guidance on the best way to do it would be helpful.
My current view is to convert/store the parse tree into a data format that can be read directly for querying. (Parse tree: a diagrammatic representation of the parsed structure of a sentence or string.)
A sample of the type of data that needs to be stored: for the text "My project is based on NLP." the dependencies would be as below
root(ROOT-0, based-4)
poss(project-2, My-1)
nsubjpass(based-4, project-2)
auxpass(based-4, is-3)
prep(based-4, on-5)
pobj(on-5, NLP-6)
Have you already extracted the information or are you trying to store the parse tree? If the former, this is still an open question in NLP. See, for example, the book by Jurafsky and Martin, which discusses many ways to do this.
Basically, we can't answer until we know what you're trying to store. If it's super simple information, you might be able to get away with a simple relational database.
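If it is the dependency output shown in the question, it flattens naturally into simple relation records that a relational store could index. A minimal JavaScript sketch (the record shape is a hypothetical choice, not something stanford-nlp prescribes):

// Turn a line like "nsubjpass(based-4, project-2)" into a query-friendly record.
function parseDependency(line) {
  const match = /^(\w+)\((.+)-(\d+), (.+)-(\d+)\)$/.exec(line);
  if (!match) return null;
  return {
    relation: match[1],
    head: match[2],
    headIndex: Number(match[3]),
    dependent: match[4],
    dependentIndex: Number(match[5]),
  };
}

// parseDependency("nsubjpass(based-4, project-2)")
// -> { relation: "nsubjpass", head: "based", headIndex: 4,
//      dependent: "project", dependentIndex: 2 }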

Best approach to data filtering

I'm building a little library application to have a visual catalogue of my programming ebooks.
So far, I've added some of my ebooks' info into a ko.observableArray in my BooksViewModel.js file.
Later, I'll be implementing a NodeJS application with all the data saved in MongoDB (via Mongoose) and accessed from there, but for now I'm just experimenting directly with Knockout.js.
By default, my library shows all the books I added, disorganized, so I'm looking to implement "categories" by language. Every book object contains a language attribute.
I want to filter the books shown by language, but I'm a little bit confused about the best way to do this.
The books in the array are not organized; they are all just dropped there: some talk about JavaScript, others about C, and so on.
At first I thought about creating a separate array for each language, and then implementing a method in the ViewModel to select the corresponding array for the requested language.
Later, I would implement a NodeJS API to get them by language, let's say:
GET /languages/C // will get the JSON for all the books that talk about C
The ViewModel could contain a method:
self.findByLanguage = function(lang) {
  self.books = // GET /languages/:lang
};
But that would query the database every time. I guess it is better to load the whole books JSON first, save all of them to an array on the client side, and then filter them. That way only one request would be made.
I could have a global array containing all the books, and then implement the filter with ko.utils.arrayFilter.
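A minimal sketch of that client-side approach (observable and property names are hypothetical):

function BooksViewModel() {
  var self = this;
  self.books = ko.observableArray([]);         // filled once, e.g. from GET /books
  self.selectedLanguage = ko.observable(null); // null means "show everything"

  // Recomputed automatically whenever the books or the selected language change.
  self.filteredBooks = ko.computed(function () {
    var lang = self.selectedLanguage();
    if (!lang) { return self.books(); }
    return ko.utils.arrayFilter(self.books(), function (book) {
      return book.language === lang;
    });
  });
}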
What do you guys think will be the best approach? Maybe there is a better way.
Thanks in advance!
If "my programming ebooks" means this application is for you only, there's a trivial difference between querying all and only the selected few books as the database load will generally be close to zero in either of these cases. The number of books would be a few hundred perhaps.
But wait, what's the actual benefits from loading them all at once?
Upsides of storing the whole list client-side
If you are always looking at most of the categories, it will save you some milliseconds of database load and all the bandwidth involved in changing categories.
Downsides
Bandwidth usage is awful, and initial page loading is slower, giving you plenty of books you don't want or need.
The database system you're using has speed as an important optimization factor. Add an index on language and querying should be done in no time anyhow. For as long as you're using arrays as the data source, this might not show in comparison to 'just sending the whole array'.
Opening the page in multiple windows/browsers/on multiple PCs will require you to synchronize all changes to all clients. If you don't do this, you'll have stale objects until you reload the page, which is exactly what you should avoid if you keep the list client-side.
If you're planning to run this on your local computer or within your local network, speed should be a trivial issue, so why not let the database do the work? If you're not and speed is an issue, I would personally value "I can load category X pretty fast" over "Initial page loading is slow, but it's fast once everything's loaded".

Code generation against Sprocs?

I'm trying to understand choices for code generation tools/ORM tools and discover what solution will best meet the requirements that I have and the limitations present.
I'm creating a foundational solution to be used for new projects. It consists of ASP.NET MVC 3.0 and layers for business logic and data access. The data access layer will need to go against Oracle for now, and then switch to SQL Server this year once the db migration is finished.
From a DTO standpoint, mapping to custom types in the solution, what ORM/code generation tool will work for creating the code I need but can ONLY access stored procs in Oracle and SQL Server?
Meaning, I need to generate the custom objects that are the artifacts coming from, and being pushed to, the stored procedures as parameters. I don't need to generate the sprocs themselves; they already exist. I'm looking for the representation of what the sproc needs and gives back to be generated into DTOs. In some cases I can go against views and generate DTOs; I'm assuming most tools already do this. But 90% of the time, I don't have direct access to any tables or views, only stored procs.
Does this make sense?
ORMs are best at mapping objects to tables (and/or views), not mapping objects to sprocs.
Very few tools can do automated code generation against whatever output a sproc may generate, depending on the complexity of the sproc. It's much more straight-forward to code generate the input to a sproc as that is generally well defined and clear.
I would say if you are stuck with sprocs, your options for using third party code to help reduce your development and maintenance time are severely limited.
I believe either LinqToSql or EntityFramework (or both?) are capable of some magic with regard to SQL Server to try to mostly automatically figure out what a sproc may be returning. I don't think it works all the time; it's just sophisticated guesswork, and I seriously doubt it would work with Oracle. I am not aware of anything else software-wise that even attempts to figure out what a sproc may return.
A sproc can return multiple diverse record sets that can be built dynamically by the sproc depending on the input and data in the database. A technical solution to automatically anticipating sproc output seems like it would require the following:
A static set of underlying data in the database
The ability to pass all possible inputs to the sproc and execute the sproc without any negative impact or side effects
That would give you a static set of possible outputs for any given valid input. A small change in the data in the database could invalidate everything.
If I recall correctly, the magic Microsoft did was something like calling the sproc passing NULL for all input parameters and assuming the output is always exactly the first recordset that comes back from the database. That is clearly an incomplete solution to the problem, but in simple cases it appears to be magic because it can work very well some of the time.

Concerns about Core Data

I'm getting ready to dive into my first Core Data adventure. While evaluating the framework, two questions came up that really got me thinking about whether to use Core Data at all for this project or to stick with SQLite.
My app will heavily rely upon importing data from an external source. I'm aware that one can import into Core Data but handling complex relationships seems complicated and tedious. Is there an easy way to accomplish complex imports?
The app has to be able to execute complex queries spanning multiple tables or having multiple conditions. Building these predicates and expressions simply scares me...
Is it worth taking the plunge and using Core Data, or should I stick with SQLite?
As I and others have said before, Core Data is really an object-graph management framework. It manages the relationships between model objects, including constraints on their cardinality, and manages cascading deletes etc. It also manages constraints on individual attributes. Core Data just happens to also be able to persist that object graph to disk. It can do this in a number of formats, including XML, binary, and via SQLite. Thus, Core Data is really orthogonal to SQLite. If your task is dealing with an embedded SQL-compatible database, go with SQLite. If your task is managing the model layer of an MVC app, go with Core Data. In specific answers to your questions:
There is no magic that can automatically import complex data into any model. That said, it is relatively easy in Core Data. Taking a multi-pass approach and using the SQLite backend can help with memory consumption by allowing you to keep only a subset of the data in memory at a time. If the data sets can be kept in memory, you can write a custom persistent store format that reads/writes directly to your legacy data format from within Core Data (see the Atomic Store Programming Guide).
Building a complex NSPredicate declaratively is somewhat verbose but shouldn't scare you. The Predicate Programming Guide is a good place to start. You can, of course, also write predicates using a string format, much like a string-formatted SQL statement. It's worth noting that, as described above, the predicates in Core Data are on the objects and object graph, not on the SQL tables. If you really want to think at the level of tables, stick with SQLite and write your own wrapper.
I can't really speak to your first point.
However, regarding your second point, using Core Data means you don't have to really worry about complex queries since you can just pretend that all the relationships are properly established in memory already (Apple's implementation details aside). It doesn't matter how complex a join it might be in a database environment because you really aren't in a database environment. If you need to get the fourth child of the grandparent of your current object and then find that child's pet's name and breed, all you do is traverse up the object tree in code using a series of messages or properties. No worries about joins or anything. The only problem is it might be really slow depending on your objects' relationships, but I can't really speak accurately to that since I haven't actually implemented anything using Core Data (I've just read about it extensively on Apple's and others' websites).
If the data importer from an external source is written based on the same Core Data model (for the targeted/destination side of the import), nothing will be conceptually different compared to using/updating the same data through the Core Data stack of your actual application.
If you create the data importer without using the Core Data stack, make sure you learn the DB schema that will be generated/expected by the Core Data based model. There is nothing magic there; just make sure you follow how the cross-entity relationships are implemented and how entity hierarchies are stored.
I recently had to create a data importer from an Access database into a Core Data based SQLite store as a .NET app. Once my destination Core Data model was defined, I created a small app that populated the SQLite store with randomly generated entities (including all the expected relationships). Then I reverse engineered how Core Data actually created the SQLite store for the model and how it handles the relationships, by learning from the generated and persisted data. Then I implemented the .NET based importer/data-transformer according to my observations. In the end, I got a perfectly Core Data friendly data store that could be opened and modified from the application using the Core Data stack on Mac OS X.
