Is there HTTP health monitoring in Crate?

Is there an interface in Crate like the one in ES, e.g. /_cluster/health?pretty=true?
I know I can enable the ES one; I'm just wondering whether there is one more specific to Crate.

The admin UI that is included in Crate uses some SQL queries to determine the health; that's explained in this issue on GitHub. Other than that, there is no dedicated interface (apart from using the ES API, as you mentioned).
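As a rough sketch of the SQL-based approach, the kind of health query the admin UI runs can be sent to CrateDB's HTTP endpoint (`POST /_sql` on port 4200, per CrateDB's documentation) as a JSON body. The `sys.health` table and its columns are taken from the CrateDB docs and may differ between versions; treat the names below as assumptions to verify against your installation.

```rust
// Sketch: build the JSON payload for CrateDB's assumed HTTP SQL endpoint
// (POST http://localhost:4200/_sql). The sys.health table name and its
// columns come from CrateDB's documentation and may vary by version.
fn health_check_payload() -> String {
    let stmt = "SELECT health, severity, missing_shards FROM sys.health ORDER BY severity DESC";
    // The /_sql endpoint expects a JSON body of the form {"stmt": "..."}.
    format!("{{\"stmt\": \"{}\"}}", stmt)
}

fn main() {
    let payload = health_check_payload();
    println!("{}", payload);
    // Send it with any HTTP client, e.g.:
    //   curl -XPOST localhost:4200/_sql -H 'Content-Type: application/json' -d '<payload>'
}
```

Any external monitoring system that can issue an HTTP POST and parse JSON could poll this way instead of the ES health endpoint.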


How to create shared language resources with i18next in multi-app node/express & react monorepo?

I just started to use i18next in my monorepo (I'm new to it), which has several server-side microservices and a couple of frontends, including several custom library components. Many of the language strings are shared, and some are app-specific. I could not decide on a logical way to approach the problem.
Tooling: Typescript - Node/Express - React/Vite - Electron/React (Desktop)
Firstly, questions:
Where should I keep the language resources during development? In another library? App in monorepo? Under each library module?
From where should I serve them? Something like lang.mydomain.com? Re-dividing them under each app during build (e.g. with Vite)?
All the examples/tutorials I could find use a single app and include i18next.js/ts at the app level. I think I need to wrap it in a library module for my purposes. How can I do that without losing access to its capabilities/types/methods, etc.? By dynamically creating instances in a higher-order module (the library is extensive, and I'm nearly lost)?
My initial thoughts:
As many translations will be shared, translating each one separately would be illogical; they should be shared.
As there can be many languages, it seems logical to use i18next-http-backend for the web and to embed resources with i18next-fs-backend for the desktop app.
Dividing the resources into common/graphs/tables/ui, etc. seems logical (these will be divided across a library module hierarchy, though).
A logical approach could be to include a module's language resources in the module itself, but that would not help translators; in that respect, the resources should all live in one place at the top level.
PS: I used to use react-intl-universal, which is really easy, but its release schedule is falling behind.
Thank you in advance...

Building HTTP client / server pairs in Rust

There are many high-quality HTTP clients and web application (micro-) frameworks available in Rust.
Are there sensible strategies for deriving clients from server specifications (or vice versa) or for building both in parallel while keeping all possible contract constraints (methods, paths, headers, bodies & their serializations) typed and in sync?
The use case is a rather large API surface of an SPA with client and backend both written in Rust. Other clients (also written in Rust) are planned.
The serialization is relatively easy to keep in sync if you stay within the Rust world: create dedicated serialization structs in a crate that both server and client refer to.
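A minimal sketch of that shared-crate idea follows. In a real workspace these types would live in their own crate (the name `api_types` and the fields are invented here) and would carry serde's `Serialize`/`Deserialize` derives; those are left as comments so the sketch compiles without external dependencies.

```rust
// Sketch of a shared "contract" crate that both server and client depend on.
// In a real project this module would be its own crate (e.g. `api-types`),
// and the structs would have #[derive(Serialize, Deserialize)] from serde.
mod api_types {
    // #[derive(Serialize, Deserialize)]
    #[derive(Debug, Clone, PartialEq)]
    pub struct CreateUserRequest {
        pub name: String,
        pub email: String,
    }

    // #[derive(Serialize, Deserialize)]
    #[derive(Debug, Clone, PartialEq)]
    pub struct CreateUserResponse {
        pub id: u64,
        pub name: String,
    }
}

fn main() {
    // Server and client both construct/consume the same types, so any change
    // to the contract becomes a compile error on both sides.
    let req = api_types::CreateUserRequest {
        name: "alice".into(),
        email: "alice@example.com".into(),
    };
    let resp = api_types::CreateUserResponse { id: 1, name: req.name.clone() };
    println!("{:?}", resp);
}
```

This keeps the body shapes in lockstep; it does not, by itself, keep methods, paths, or headers in sync, which is where the OpenAPI-based tooling comes in.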
As for the entire API, the best effort I'm aware of is OpenAPI. There are a bunch of crates that aim to work with it: okapi, paperclip, openapiv3, and probably more if you search around. I haven't used any of them, though.

Supporting multiple versions of models for different REST API versions

Are there any best practices for the implementation of API versioning? I'm interested in the following points:
Controller, service - e.g. do we use a different controller class for each version of the API? Does a newer controller class inherit the older controller?
Model - if the API versions carry different versions of the same model - how do we handle conversions? E.g. if v1 of the API uses v1 of the model, and v2 of the API uses v2 of the model, and we want to support both (for backward-compatibility) - how do we do the conversions?
Are there existing frameworks/libraries that I can use for these purposes in Java and JavaScript?
Thanks!
I always recommend a distinct controller class per API version. It keeps things clean and clear to maintainers. The next version can usually be started by copying and pasting the previous version. You should define a clear versioning policy, for example supporting only the latest N-2 versions. By doing so, you end up with three side-by-side implementations rather than the explosion some people expect. Refactoring business logic and other components that are not specific to an HTTP API version out of controllers can help reduce code duplication.
In my strong opinion, a controller should absolutely not inherit from another controller, save for a base controller with version-neutral functionality (but no APIs). HTTP is the API. HTTP has methods, not verbs; think of it as Http.get(). A controller in another language such as Java or C# is a facade over HTTP, and an impedance mismatch with it. HTTP does not support inheritance, so using inheritance in the implementation is only likely to exacerbate that mismatch. There are other practical challenges too. For example, you cannot un-inherit a method, which complicates sunsetting an API in inherited controllers (not all versions are additive). Debugging can also be confusing, because you have to find the correct implementation to set a breakpoint. Putting some thought into a versioning policy and factoring responsibilities out to other components will all but negate the need for inheritance, in my experience.
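The controller-per-version layout can be sketched as follows, with all names invented and framework routing omitted: each version gets its own thin handler, and the version-neutral business logic lives in a shared function rather than a base class.

```rust
// Sketch: one handler per API version, both delegating to a version-neutral
// service function. Names are illustrative, not from any specific framework.
fn find_user_name(id: u64) -> Option<String> {
    // Version-neutral business logic shared by all API versions.
    if id == 1 { Some("alice".to_string()) } else { None }
}

// v1 returned a bare string body.
fn get_user_v1(id: u64) -> String {
    find_user_name(id).unwrap_or_else(|| "not found".to_string())
}

// v2 returns a JSON object instead; only the HTTP-facing shape changed.
fn get_user_v2(id: u64) -> String {
    match find_user_name(id) {
        Some(name) => format!("{{\"id\": {}, \"name\": \"{}\"}}", id, name),
        None => "{\"error\": \"not found\"}".to_string(),
    }
}

fn main() {
    println!("{}", get_user_v1(1)); // alice
    println!("{}", get_user_v2(1)); // {"id": 1, "name": "alice"}
}
```

Sunsetting v1 is then a matter of deleting `get_user_v1` and its route; nothing else depends on it.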
Model conversion is an implementation detail. It is solely up to the server. Supporting conversions is very situational. Conversions can be bidirectional (v1<->v2) or unidirectional (v2->v1). A Mapper is a fairly common way to convert one form to another. Additive attribute scenarios often just require a default value for new attributes in storage for older API versions. Ultimately, there is no single answer to this problem for all scenarios.
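A unidirectional (v2 -> v1) mapper of the kind described above can be sketched like this; the model names and fields are made up for illustration, and Rust's `From` trait stands in for whatever mapper convention your framework uses.

```rust
// Sketch: a unidirectional mapper (v2 -> v1) via the From trait.
// Models and fields are invented for illustration.
#[derive(Debug, PartialEq)]
struct UserV1 {
    name: String, // v1 stored the full name as a single field
}

#[derive(Debug)]
struct UserV2 {
    first_name: String,
    last_name: String,
}

impl From<&UserV2> for UserV1 {
    fn from(u: &UserV2) -> Self {
        // Downgrade mapping: recombine the split fields for v1 clients.
        UserV1 { name: format!("{} {}", u.first_name, u.last_name) }
    }
}

fn main() {
    let v2 = UserV2 { first_name: "Ada".into(), last_name: "Lovelace".into() };
    let v1: UserV1 = (&v2).into();
    println!("{:?}", v1); // UserV1 { name: "Ada Lovelace" }
}
```

Note the asymmetry: the v2 -> v1 direction is lossless here, but v1 -> v2 would need a rule for splitting `name`, which is exactly why conversions are situational.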
It should be noted that backward compatibility is a misnomer in HTTP; there really is no such thing. The API version is a contract that includes the model. The convenience or ease with which a new version of a model can be converted to/from an old version should be considered just that: convenience. It's easy to think that an additive change is backward-compatible, but a server cannot guarantee that it is for all clients. Striking the notion of backward compatibility in the context of HTTP will help you fall into the pit of success.
Using OpenAPI (formerly known as Swagger) is likely the best option for integrating clients in any language. There are tools that can use the document to generate clients in your preferred programming language. I don't have a specific recommendation for a Java library/framework on the server side, but there are several options.

What is major difference when we want to build IoT solution if we use middleware or libraries or custom development?

Let's imagine there are many street lights, cameras for illegal parking, or other sensors, and we should build a solution to integrate them. What I found is that they use different protocols (TCP, serial) and data types (binary, XML, text). A colleague recommended approaches like middleware or libraries, but I'm not sure whether they are efficient to maintain.
Middleware is a strong tool for IoT solutions, providing the connection between different layers. It is easy to use, but multiple adjustments might be needed to match the middleware's requirements.
You can use a library as a joint. If you have a suitable library, you can easily connect components with minimal extra programming. You might have to use multiple libraries, and additional libraries could be needed when new, unsupported components are added.
Custom development is the traditional way. It is a time-consuming job, but if you code everything yourself, you don't need any other help.
I have heard that declarative backend software like Interactor might be another solution: you can construct connections and implement your own logic with fewer resources.
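Whichever of these routes you take, the maintenance question usually comes down to having one normalization layer between device formats and your application. A minimal sketch of that adapter idea, with all device names and wire formats invented:

```rust
// Sketch: a small adapter layer that normalizes devices speaking different
// wire formats into one common reading type. Devices/formats are invented.
#[derive(Debug, PartialEq)]
struct Reading {
    device: String,
    value: f64,
}

trait DeviceAdapter {
    fn read(&self, raw: &str) -> Option<Reading>;
}

// A device that reports a plain-text number, e.g. "23.5".
struct TextSensor { name: String }
impl DeviceAdapter for TextSensor {
    fn read(&self, raw: &str) -> Option<Reading> {
        raw.trim().parse::<f64>().ok()
            .map(|value| Reading { device: self.name.clone(), value })
    }
}

// A device that reports key=value pairs, e.g. "temp=19.0".
struct KeyValueSensor { name: String }
impl DeviceAdapter for KeyValueSensor {
    fn read(&self, raw: &str) -> Option<Reading> {
        let value = raw.split('=').nth(1)?.trim().parse::<f64>().ok()?;
        Some(Reading { device: self.name.clone(), value })
    }
}

fn main() {
    let inputs: Vec<(Box<dyn DeviceAdapter>, &str)> = vec![
        (Box::new(TextSensor { name: "street-light-1".into() }), "23.5"),
        (Box::new(KeyValueSensor { name: "camera-7".into() }), "temp=19.0"),
    ];
    for (adapter, raw) in &inputs {
        println!("{:?}", adapter.read(raw));
    }
}
```

Supporting a new, unsupported component then means writing one more adapter, regardless of whether the layer sits in middleware, a library, or custom code.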

Use external cache provider for RazorEngine

Currently, RazorEngine caches the template in memory.
Is there any way to use an external caching provider?
We have 10 web servers in a web farm, and each of them now needs to cache the template separately. It would be great if we could implement our own caching system and use something like Memcached.
Yes, you should now be able to do that in 3.5.0 (currently in beta). You can provide your own ICachingProvider implementation that fits your needs. Documentation and an example implementation can be found here. What you want to do is save the compiled assemblies, then load the assemblies and the template types when needed.
Disclaimer: I contributed that API to RazorEngine.
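RazorEngine's actual ICachingProvider is a C# interface (see its documentation for the real signatures); purely to illustrate the pluggable-cache pattern being described, here is the same shape sketched as a Rust trait, with a Memcached-style backend left as a comment:

```rust
use std::collections::HashMap;

// Sketch of the pluggable-cache pattern: callers depend on the trait, so an
// in-memory cache can be swapped for an external one without other changes.
trait CachingProvider {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&mut self, key: &str, value: &str);
}

// Default in-memory cache (analogous to what RazorEngine does out of the box).
struct InMemoryCache {
    store: HashMap<String, String>,
}

impl CachingProvider for InMemoryCache {
    fn get(&self, key: &str) -> Option<String> {
        self.store.get(key).cloned()
    }
    fn set(&mut self, key: &str, value: &str) {
        self.store.insert(key.to_string(), value.to_string());
    }
}

// A Memcached-backed implementation would implement the same trait but talk
// to the external cache over the network, so all 10 web servers share it.

fn main() {
    let mut cache = InMemoryCache { store: HashMap::new() };
    cache.set("template:home", "<compiled template>");
    println!("{:?}", cache.get("template:home"));
}
```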
