Blockly Support for Class Types

I am using Blockly in my application to provide Scratch-style programming, and I have a requirement to create complex types and assign them to variables as they are created.
I have seen this Blockly pull request, but the feature is not yet available.

Related

Implementation patterns for multiple programming languages in a single web application [closed]

I've only created web applications with a single programming language (like Python or JS).
I'm aware that multiple programming languages are used to build advanced services, but I don't know exactly how they work together, or what the different patterns for implementing this are.
Here's a scenario: we have a Node.js application that accepts hundreds of key-value pairs (say, as JSON) from a user, and we need to process that data with Haskell, which is compiled to a native binary.
I have a hierarchy of data: say, a set of people and their managers, along with some performance metrics and their points. I want to pass this to a program written in Haskell to compute some values based on their roles, etc.
What methods could be used to pass the data into the program?
Should I be running a server that accepts the values as JSON (via HTTP) and parses them inside Haskell?
Or can I link it with my Node.js application in some other way? In that case, how can I pass the data from the Node.js application to Haskell?
I'm also concerned about latency: it's a real-time computation that happens every time it is requested.
For instance, Facebook uses Haskell for spam filtering, and an engineer states they use C++ and Haskell in that service: C++ accepts the input and passes it to Haskell, which returns the result. How might the interfacing work there?
What methods can be used to pass the data into the program? Should the binary services run as daemons?
The right approach depends on the exact requirements at hand and the software components you plan to use.
If you are looking for interworking between different languages, there are various options.
One method is based on Addons (dynamically linked shared objects written in C++), which provide an interface between JavaScript and C/C++ libraries. A Foreign Function Interface (FFI) combined with dynamic libraries (.so/.dylib) allows a function written in another language (e.g. Rust) to be called from the host language (e.g. Node.js). This relies on the require() function, which loads an Addon as an ordinary Node.js module.
For example, the node-ffi addon can be used to create bindings to native libraries in pure JavaScript, loading and calling dynamic libraries without writing any C++ code. The FFI-based approach can be used to dynamically load and call exported Go functions as well.
If you would like to call Go functions from Python, you can use the ctypes foreign function library to call the exported Go functions.
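As a minimal sketch of the ctypes approach: the shape is the same whether the shared library was produced by Go (go build -buildmode=c-shared), Rust, or C. Building such a library is out of scope here, so this example loads the C standard library and calls its abs function as a stand-in for any exported native function.

```python
import ctypes
import ctypes.util

# Load a shared library. For a Go library this would be e.g.
# ctypes.CDLL("./libpeople.so"); here we use libc as a stand-in.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the signature so ctypes marshals arguments correctly.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # -> 42
```

Declaring argtypes/restype is optional for simple int-based calls but essential once pointers, strings, or structs cross the boundary, since ctypes cannot infer those types on its own.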
If you are looking for a design pattern for an architecture that accommodates modules and services built in various languages, it depends on your exact application and performance requirements.
In general, if you would like a loosely coupled solution that takes advantage of emerging technologies (various languages and frameworks), a microservices-based architecture can be beneficial. It brings more independence, since a change in one module/service does not drastically impact the other services. If your application is large or complex, you may need a microservices decomposition pattern such as "Decompose by business capability" or "Decompose by subdomain". There are many related patterns: the "Database per Service" pattern, where each service has its own database; the "API gateway" pattern, which concerns how services are accessed by clients (with the "Client-side Discovery" and "Server-side Discovery" variants); and other variants you can deploy based on your requirements.
The approach in turn also depends on the messaging mechanism (synchronous/asynchronous) and the message formats between microservices, as dictated by the solution requirements.
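One of the simplest synchronous mechanisms for the scenario in the question is piping JSON over stdin/stdout to the compiled program: serialize the request, spawn the binary, parse its reply. A hedged sketch follows; the "worker" command here is a hypothetical stand-in for a compiled Haskell binary (any executable with the same stdin/stdout contract behaves identically).

```python
import json
import subprocess
import sys

# Stand-in for a compiled binary that reads JSON on stdin and
# writes a JSON result on stdout (here: sums everyone's points).
worker = [sys.executable, "-c",
          "import json,sys; d=json.load(sys.stdin); "
          "print(json.dumps({'total': sum(p['points'] for p in d['people'])}))"]

payload = {"people": [{"name": "a", "points": 10}, {"name": "b", "points": 5}]}

# Synchronous request/response over pipes.
result = subprocess.run(worker, input=json.dumps(payload),
                        capture_output=True, text=True, check=True)
reply = json.loads(result.stdout)
print(reply)  # -> {'total': 15}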
For a near-optimal design, you may need to do some prototyping plus performance testing, load testing, and profiling of your components (both software and hardware) with the chosen approach, check whether the various system requirements and performance metrics are met, and decide accordingly.
Use Microservices Architecture.
Microservice architecture is an architecture where the application is divided into various components, each serving a particular purpose. Collectively, these components are called microservices. They are no longer dependent on the application as a whole; each component is literally and physically independent. Because of this separation, you can have a dedicated database for each microservice, deploy each one to separate hosts/servers, and moreover use a specific programming language for each microservice.

NestJS - Class library

I am using NestJS to build several small applications. In doing so, I would like a class library that all projects can reference, to pass the classes from one project to another if they'd like to, or at least to standardise the objects to a contract they must adhere to.
Is there a way to approach this where I have a single standalone collection of simple domain classes not bound to a specific NestJS project, without specific domain logic involved: in other words, describing the structure of how something should look rather than what it does?
This sounds like a perfect use case for a NestJS monorepo with a library, or for something like Nx with a library there. In both cases, you're creating a reusable set of code (usually interfaces or classes) to be used across different parts of your servers/applications.

Grouping namespaces

I was wondering whether the namespaces themselves can be grouped?
Our REST server project has a highly decentralized structure (along the lines of a Redux fractal pattern), and every feature has its own namespace. This has predictably led to many namespaces, and the Swagger page is getting rather full now.
If this is not achievable, I guess we can live with it, or consider emitting only the swagger json to be consumed by the official Swagger UI that we can run in a separate server. But I'd much prefer a restplus-y solution, since that represents the least amount of code friction.
The underlying OpenAPI Specification has a concept of tags. The namespace feature in Flask-RESTPlus assigns these names as tags on the path definitions, which is how you get the grouping in a Swagger UI. The specification does not offer any hierarchical grouping mechanism, so Flask-RESTPlus doesn't offer any such feature either.
You could consider a different strategy for assigning namespaces/tags to create more manageable groupings, split the API across multiple Swagger UI pages/sites, etc. Sounds like there is no way around your Swagger UI needing to render a very large number of API methods, so making it more understandable via general content structuring may be your best approach.
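To make the tag mechanism concrete, here is a minimal sketch in plain Python of the structure involved (the paths and tag names are invented for illustration; Flask-RESTPlus emits the same shape, with one tag per Namespace). Each operation carries a flat list of tags, and Swagger UI simply buckets operations by tag:

```python
from collections import defaultdict

# A minimal OpenAPI fragment: each operation lists the tags it belongs to.
spec = {
    "paths": {
        "/users":      {"get":    {"tags": ["users"]}},
        "/users/{id}": {"delete": {"tags": ["users"]}},
        "/invoices":   {"get":    {"tags": ["billing"]}},
    }
}

def operations_by_tag(spec):
    """Group (path, method) pairs under their tag, as Swagger UI does."""
    groups = defaultdict(list)
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            for tag in op.get("tags", []):
                groups[tag].append((path, method))
    return dict(groups)

print(operations_by_tag(spec))
```

Since the tag list is flat, the usual workaround for a pseudo-hierarchy is to encode it in the tag names themselves (e.g. "billing/invoices"), which at least keeps related groups adjacent when tags are sorted.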

WebAssembly: Standardized Interfaces

The way WebAssembly interfaces with the external world is quite elegant and secure. Adding a function interface is easy, but not yet standardized.
Have calling conventions been established already for Javascript environments (mostly for accessing the DOM in the Browser or the filesystem in Node)?
Conventions for manipulating DOM nodes or using external APIs have not been created yet, but a couple of the WebAssembly proposals / future features will support this.
The first is the reference types proposal, which extends the type system with a new anyref type that allows modules to hold references to objects provided by the host environment, i.e. you can pass a JS object to your wasm module.
The second is the host bindings proposal, which allows WebAssembly modules to create, pass around, call, and manipulate JavaScript / DOM objects. It adds a new host bindings section containing annotations that describe the binding mechanism / interface to be constructed.
Rust already has a tool, wasm-bindgen, that is very similar in purpose and closely aligns with this proposal. With wasm-bindgen you can pass objects such as strings across the wasm / JS boundary with ease. The tool adds the binding metadata to the wasm module, and generates the required JS glue code.

Choice of technical solution to handling and processing data for a Liferay Project

I am researching to start a new project based on Liferay.
It relies on a system that will require its own data model and a certain agility and flexibility in data management as well as its visualization.
These are my options:
Using Liferay Expando fields to define our own data models. I must build the entire view layer myself.
Using Liferay ECMS, adding patches and creating structures and hooks that allow me to define master-detail data models. It makes the view side much easier (Velocity templates), but it is perhaps the "dirtiest" way.
Generating the data layer and service access with Hibernate and Spring (using a service factory, for example).
Using Liferay Service Builder, which would be similar to building the platform with Hibernate and Spring.
CRUD generation systems such as OpenXava or XMLPortletFactory.
And now my question, what is your advice? What advantages or disadvantages do you think would provide one or another option?
Thanks in advance.
I can't speak for the other CRUD generation systems but I can tell you about the Liferay approaches.
I would take a hybrid approach.
First, I would create the required data models as well as I can with the current requirements in Liferay Service Builder, and maintain them there as much as possible. This requires that you rebuild and redeploy your plugin every time the data model changes, but it greatly improves performance compared to all the other Liferay approaches you've mentioned. Service Builder is much more rigid in that regard and cannot be changed via the GUI.
However, in the event that for some reason you cannot use Service Builder to redefine your data models, and you need certain aspects of them to be changed via the GUI, you can also use Expandos to extend the models you've created with Service Builder. So it is the best of both worlds.
As for the other option: using the ECMS would be a specialized case, and I would only take this approach if there is a particular requirement it satisfies (like integration with the ECMS).
With that said, Liferay provides you many different ways to create your application. It ultimately depends on how you're going to use your application.
