Quarkus-grpc: How to configure an automatic OpenAPI view?

In Quarkus, is there a way to add a simple annotation to expose the underlying gRPC implementation as REST/JSON as well? I.e. two views over one implementation.
Spring Boot seems to have ProtobufJsonFormatHttpMessageConverter:
https://medium.com/@thinhda/build-service-that-provides-http-and-grpc-api-with-spring-9e7cff7aa17a
I believe the proto syntax allows an annotation for a REST endpoint:
syntax = "proto3";

package pn.api;

//import "google/protobuf/timestamp.proto";
//import "google/api/annotations.proto";

option java_package = "pn.api.protobuf";
option java_outer_classname = "Proto";

service SearchService {
  rpc search(SearchRequest) returns (SearchResponse) {
    // option (google.api.http) = { get: "/v1/search/{queryObj}" };
  }
}

No.
There is no way to do that. If you think it would be beneficial, feel free to open an issue in GitHub Issues (https://github.com/quarkusio/quarkus/issues) and we can discuss it there.
In the issue, please focus on what the benefit of adding such a feature would be.

Related

NestJS versioning configuration when using Fastify

For performance reasons in a relatively complex NestJS application, we have chosen to use Fastify as our HTTP provider.
We are at a stage where we need to version our API and are running into problems after following the instructions in the standard NestJS guide:
const app = await NestFactory.create<NestFastifyApplication>(
  AppModule,
  new FastifyAdapter(fastifyInstance),
  {},
);
app.enableVersioning();
The error received is:
Property 'enableVersioning' does not exist on type 'NestFastifyApplication'.ts(2339)
I haven't been able to find a solution anywhere and thought I'd ask and see if anyone else has had the same problem and found a solution.
Looks like you need to upgrade @nestjs/common, because enableVersioning does exist on NestFastifyApplication:
https://github.com/nestjs/nest/blob/d5b6e489209090544a4f39c4f4a716b9800ca6a8/packages/platform-fastify/interfaces/nest-fastify-application.interface.ts#L16
https://github.com/nestjs/nest/blob/d5b6e489209090544a4f39c4f4a716b9800ca6a8/packages/common/interfaces/nest-application.interface.ts#L47
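For reference, once @nestjs/common is upgraded, here is a minimal sketch of enabling URI-based versioning with Fastify; the module name and port are assumptions, not taken from the question:
import { VersioningType } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { FastifyAdapter, NestFastifyApplication } from '@nestjs/platform-fastify';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create<NestFastifyApplication>(
    AppModule,
    new FastifyAdapter(),
  );
  // URI versioning prefixes routes with /v1, /v2, ... based on @Version() metadata
  app.enableVersioning({ type: VersioningType.URI });
  await app.listen(3000);
}
bootstrap();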

Environment-specific configuration of datasources in a LoopBack 4 application

I have just started my first LoopBack project and chose the LoopBack 4 version for the application. It is purely a server application which will interact with databases (Redis and MongoDB) and will call external API services due to a micro-service architecture.
Now, I have 3 datasources in my application, i.e. MongoDB, Redis, and a REST-based datasource to call external services. I am facing 2 problems going forward.
1. Environment-specific configuration of datasources: I need to maintain configuration for all three datasources according to the NODE_ENV environment variable. For LB3 I found this solution:
https://loopback.io/doc/en/lb3/Environment-specific-configuration.html#data-source-configuration
which does not work in LB4. One solution is to add configuration files named mongodb.staging.json and mongodb.production.json (and the same for the Redis and REST datasources) in the directory src/datasources, load the config according to the NODE_ENV variable using an if condition, and pass it to the datasource constructor. It works, but it does not seem clean, as this should not be the application's responsibility.
Can somebody suggest an LB4 equivalent of the LB3 solution above?
2. Calling external APIs via a datasource: in LB4, to call external services it is recommended to have a separate REST-based datasource and a service that calls it via a controller. Now, in the REST datasource config, one has to define a template of all the API calls that will be made to the external service: https://loopback.io/doc/en/lb4/REST-connector.html#defining-a-custom-method-using-a-template.
My application calls the external service heavily, with a relatively large number of request parameters. It becomes really messy to declare each API call with its request params and to maintain this in the datasource config, which will be environment-specific.
Can somebody suggest a more robust and cleaner alternative to the above?
Thanks in advance!!
Using environment variables in datasource configs
The datasource config is simply a JSON file that is imported into *.datasource.ts. Hence, you can replace that JSON file with a TypeScript file and import it accordingly. LoopBack 4 does not provide any custom variable-substitution mechanism; instead, it is recommended to use process.env.
Recent CLI versions replace the JSON config in favour of a single TypeScript file:
import {inject} from '@loopback/core';
import {juggler} from '@loopback/repository';

const config = {
  name: 'db',
  connector: 'memory',
};

export class DbDataSource extends juggler.DataSource {
  static dataSourceName = 'db';
  static readonly defaultConfig = config;

  constructor(
    @inject('datasources.config.db', {optional: true})
    dsConfig: object = config,
  ) {
    super(dsConfig);
  }
}
The dependency injection in the constructor allows you to override the config programmatically via the IoC container of the application.
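For example, here is a minimal sketch of the config object driven by process.env; the variable name and fallback URL are assumptions, not LoopBack APIs:
// mongodb.datasource.ts (config object only)
const config = {
  name: 'mongodb',
  connector: 'mongodb',
  // Pick the connection string per environment, with a local fallback.
  url: process.env.MONGODB_URL ?? 'mongodb://localhost:27017/dev',
};
The object can then be passed to the datasource class exactly as in the snippet above, so no NODE_ENV if/else is needed in the application code.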
Further reading
https://loopback.io/doc/en/lb4/DataSources.html
Calling external APIs without REST connector
The REST connector enforces a well-defined interface for querying external APIs so that validation can be done before sending out the request.
If this is not desirable, it is possible to create a new Service as a wrapper around the HTTP queries. From there, you can expose your own functions to handle requests to an external API. As Services do not need to follow a rigid structure, they can be customized to your use case.
It is also possible to create a request directly inside the controller using either built-in or external libraries.
Overall, there isn't a 100% right or wrong way of doing things in LoopBack 4, which is why the framework provides numerous ways to tackle the same issue.
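As an illustration, here is a minimal sketch of such a wrapper Service using Node's built-in fetch (Node 18+); the class name, endpoint, and environment variable are assumptions:
// Hypothetical wrapper around an external search API.
export class ExternalSearchService {
  private readonly baseUrl = process.env.SEARCH_API_URL ?? 'http://localhost:8080';

  // One plain async method per external call, instead of a REST-connector template.
  async search(params: Record<string, string>): Promise<unknown> {
    const query = new URLSearchParams(params).toString();
    const response = await fetch(`${this.baseUrl}/v1/search?${query}`);
    if (!response.ok) {
      throw new Error(`Search API responded with ${response.status}`);
    }
    return response.json();
  }
}
Because it is an ordinary class, it can be bound to the application and injected into controllers like any other LoopBack Service.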

Autofac Dependency Injection in Azure Function

I am trying to implement DI using the Autofac IoC container in an Azure Function.
I need to build the container, but I am not sure where to put the code that builds it.
I did write a blog entry about doing dependency injection with Autofac in Azure Functions. Have a look here:
Azure Function Dependency Injection with AutoFac: Autofac on Functions
It follows a similar approach to the one by Boris Wilhelms.
Another implementation based on Boris' approach can be found on GitHub: autofac dependency injection
--- update ---
With Azure Functions v2 it is possible to create NuGet packages based on .NET Standard. Have a look at:
Azure Functions Dependency Injection with Autofac: Autofac on Functions nuget Package
I think for now you would need to do something ugly like:
using Autofac;

public static class MyFunction
{
    private static IService MyService = null;

    public static string MyAwesomeFunction(string message)
    {
        if (MyService == null)
        {
            var container = Initialize();
            MyService = container.Resolve<IService>();
        }
        return MyService.Hello(message);
    }

    private static IContainer Initialize()
    {
        // Do your IoC magic here, e.g. register a (placeholder) implementation:
        var builder = new ContainerBuilder();
        builder.RegisterType<Service>().As<IService>();
        return builder.Build();
    }
}
While Azure Functions does not support DI out of the box, it is possible to add this via the new Extension API. You can register the container using an IExtensionConfigProvider implementation. You can find a full example DI solution in Azure here https://blog.wille-zone.de/post/azure-functions-proper-dependency-injection/.
Azure Functions doesn't support dependency injection yet. Follow this issue for the feature request
https://github.com/Azure/Azure-Functions/issues/299
I've written a different answer to the main question, with a different solution that is directly tied to the question.
The previous solutions either initialize the DI container manually or rely on decorators. My idea was to tie the DI to the functions host builder, the same way we do in ASP.NET, without decorators.
You can do it using a custom [inject] attribute. See example here https://blog.wille-zone.de/post/azure-functions-proper-dependency-injection/

How should I use Swagger with Hapi?

I have a working ordinary Hapi application that I'm planning to migrate to Swagger. I installed swagger-node using the official instructions, and chose Hapi when executing 'swagger project create'. However, I'm now confused because there seem to be several libraries for integrating swagger-node and hapi:
hapi-swagger: the most popular one
hapi-swaggered: somewhat popular
swagger-hapi: unpopular and not that active but used by the official Swagger Node.js library (i.e. swagger-node) as default for Hapi projects
I thought swagger-hapi was the "official" approach, until I tried to find information on how to do various configurations on Hapi routes (e.g. authorization, scoping, etc.). It also seems that the approaches are fundamentally different: swagger-hapi takes a Swagger definition as input and generates the routes automatically, whereas hapi-swagger and hapi-swaggered take a similar approach to each other, only generating Swagger API documentation from plain old Hapi route definitions.
Considering the amount of contributors and the number of downloads, hapi-swagger seems to be the way to go, but I'm unsure on how to proceed. Is there an "official" Swagger way to set up Hapi, and if there is, how do I set up authentication (preferably by using hapi-auth-jwt2, or other similar JWT solution) and authorization?
EDIT: I also found swaggerize-hapi, which seems to be maintained by PayPal's open source kraken.js team, which indicates that it might have some kind of corporate backing (always a good thing). swaggerize-hapi seems to be very similar to hapi-swagger, although the latter seems to provide more out-of-the-box functionality (mainly Swagger Editor).
Edit: Point 3 from your question, and understanding what swagger-hapi actually does, is very important. It does not directly serve the swagger-ui HTML. It is not intended to, but it enables the whole swagger idea (which the projects in points 1 and 2 actually somewhat reverse). Please see below.
It turns out that when you are using swagger-node and swagger-hapi you do not need any of the other packages you mentioned, except swagger-ui itself (which all the others use anyway; they wrap it in their dependencies).
I want to share my understanding of this hapi/swagger puzzle so far; I hope the 8 hours I spent can help others as well.
Libraries like hapi-swaggered, hapi-swaggered-ui, and also hapi-swagger all follow the same approach, which might be described as:
You document your API while you are defining your routes.
They sit somewhat apart from the main idea of swagger-node and the boilerplate hello_world project created with swagger-cli, which you mentioned you use.
swagger-node and swagger-hapi (NOTE that this is different from hapi-swagger), on the other hand, say:
You define all your API documentation and routes in a single centralized place: swagger.yaml,
and then you just focus on writing the controller logic. The boilerplate project provided with swagger-cli already exposes this centralized swagger.yaml as JSON through the /swagger endpoint.
Now, because the swagger-ui project, which all the above packages use for showing the UI, is just a bunch of static HTML, you have two options for using it:
1) self-host this static HTML from within your app, or
2) host it on a separate web app, or even load the index.html directly from the file system.
In both cases you just need to feed swagger-ui with your swagger JSON, which, as said above, is already exposed by the /swagger endpoint.
The only caveat with option 2) is that you need to enable CORS for that endpoint, which happens to be very easy: just change your default.yaml to also make use of the cors bagpipe. Please see this thread for how to do this.
As @Kitanotori said above, I also don't see the point of documenting the code programmatically. The idea of describing everything in one place and making both the code and the documentation engine understand it is great.
We have used Inert, Vision, hapi-swagger.
server.ts
import * as Inert from '@hapi/inert';
import * as Vision from '@hapi/vision';
import Swagger from './plugins/swagger';
...
...
// hapi server setup
...
const plugins: any[] = [Inert, Vision, Swagger];
await server.register(plugins);
...
// other setup
./plugins/swagger
import * as HapiSwagger from 'hapi-swagger';
import * as Package from '../../package.json';

const swaggerOptions: HapiSwagger.RegisterOptions = {
  info: {
    title: 'Some title',
    version: Package.version,
  },
};

export default {
  plugin: HapiSwagger,
  options: swaggerOptions,
};
We are using Inert, Vision and hapi-swagger to build and host swagger documentation.
We load those plugins in exactly this order, do not configure Inert or Vision and only set basic properties like title in the hapi-swagger config.

Mocking Repository but Then Swapping Out for Real Implementation in Node.js

I'm building a Repository layer with a higher-level API for the abstractions above it to make calls to the database persistence. But since JavaScript doesn't have the concept of interfaces like a language such as C# or Java does, how do you swap out the mock for the real implementation?
I prefer creating custom mocks, i.e. node repository modules with high-level data-persistence methods in them, over Sinon.js or something like that.
If I'm creating node modules, then how? I could send in a mock representation of the repository, where I mock out what the repository methods are doing, but then the actual node modules using those repository modules would need to use the real repository implementation that calls the real database. How is this done in Node? I want to just inject via a property; I don't want some gigantic IoC injection framework either.
Since there's no concept of an interface, what do you do in Node/JS? I have to create a data layer below the repository (whether it's a custom set of modules making real query calls to Postgres, or Mongoose, or whatever it may be, I need a set of data-layer modules that the repository calls for its real DB calls under the hood).
And let's say I do choose to use some framework like Sinon.js: what's the common interface for the module you're mocking that can be shared by the mocking framework and the real module?
There's more than one way to do it. If you come from a different background, it may take some time to get used to Node.
You can do this:
module.exports = function (db) {
  return {
    myQuery: function (n, cb) {
      db.query(n, cb);
    }
  };
};
Then in config.js:
exports.db = require('./mydb');
Then:
var config = require('./config.js');
var db = require('./db')(config.db);
There are lots of variations possible. You could do a dynamic require somewhere based on a string, or use classes or init functions. Most of them will probably end up being similar.
The proxyquire module could be helpful. So can Sinon.js.
Since there really isn't any type checking, people generally verify the contract with their tests at runtime. If you are really doing TDD, it might not make a huge difference.
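As a small sketch of that "shared shape" idea: in plain JavaScript the contract is just a convention checked by tests, while TypeScript can make it explicit with an interface. All names below are hypothetical:
// The contract that both the mock and the real module must satisfy.
interface UserRepository {
  findById(id: string): Promise<{ id: string; name: string } | null>;
}

// The consumer receives the repository from outside (simple injection).
class UserService {
  constructor(private repo: UserRepository) {}

  async greet(id: string): Promise<string> {
    const user = await this.repo.findById(id);
    return user ? `Hello, ${user.name}` : 'Unknown user';
  }
}

// In tests: a hand-rolled mock with the same shape.
const mockRepo: UserRepository = {
  findById: async (id) => ({ id, name: 'Test' }),
};
const service = new UserService(mockRepo);

// In production: swap in the real implementation, e.g.
// new UserService(new PostgresUserRepository(pool));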
