Foxx/ArangoDB: Can you create a response that adheres to the JSON API specification?

I am currently writing some microservices with Foxx to be consumed by Ember.js. Ember Data plays very nicely with JSON API (http://jsonapi.org) responses, so I tried to serialize the Foxx responses with json-api-serializer (https://www.npmjs.com/package/json-api-serializer), but with no luck. I only found the forClient method, but this only allows me to operate on the JSON representation of single objects, not the whole response. So my question: is it possible to implement JSON API with Foxx/ArangoDB?

You can return arbitrary responses from Foxx routes, so it's entirely possible to generate JSON responses that conform to JSON API.
However, there's no built-in way to do this automatically.
I don't see anything in json-api-serializer that shouldn't work in Foxx, so I'm not sure what problems you are encountering. You should be able to simply return the output object with res.json(outputFromSerializer) and set the content type with res.set('content-type', 'application/vnd.api+json').
If everything else fails you can just write your own helper functions to generate the boilerplate and metadata JSON API expects.
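For illustration, here is a minimal sketch of that approach. It is untested and assumes json-api-serializer is bundled in the service's node_modules and that an articles collection exists:

'use strict';
const createRouter = require('@arangodb/foxx/router');
const JSONAPISerializer = require('json-api-serializer'); // assumed to be bundled with the service
const router = createRouter();
module.context.use(router);

const serializer = new JSONAPISerializer();
serializer.register('article', { id: '_key' }); // map ArangoDB's _key to the JSON API id

router.get('/articles', (req, res) => {
  const articles = module.context.collection('articles').all().toArray();
  res.set('content-type', 'application/vnd.api+json');
  res.json(serializer.serialize('article', articles)); // full JSON API document: data, links, etc.
});

Note that the serializer runs inside the route handler itself, so the forClient hook is never involved.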

Related

AWS lambda function proxies requests of fetching binary blob(PDF) from service layer and then returns to the client

I've created a Lambda function so that I can use it for validation purposes and then proxy the request to the service layer. The service layer response contains a binary blob (PDF), which goes through the Lambda function and then the API gateway before it finally reaches the client.
The first problem we ran into was that the PDF got transformed or corrupted and came back blank. Then I found this post, which did not make any sense to me at first, until I saw this AWS doc. It turns out you are required to encode the binary data into base64 and then set the indicator 'isBase64Encoded' to true. The gateway eventually converts the response back to the binary blob.
TBH, I am new to AWS and I don't really understand why it works this way. What's wrong with passing through the original binary blob, and why are those conversion steps necessary?
Here is the list of things I had to do (a sketch of the resulting handler follows the list):
Configured */* as a Binary Media Type on the gateway. (I tried to use application/pdf, but it did not work?)
Made sure the response body from the service layer is not transformed into a string (I am using request, which gives me a string by default), by sending encoding: null along with the request.
When I get the Buffer data from the service layer, I use Buffer to convert the response body into base64 encoding.
In the Lambda output, I set isBase64Encoded to true.
Finally, I get the unaltered PDF...
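For illustration, a sketch of how those steps combine in one handler. It uses Node's built-in fetch (available in the Node 18+ Lambda runtimes) rather than the request package mentioned above, and the service URL is made up:

exports.handler = async () => {
  // Fetch the PDF as raw bytes; the body must never be decoded into a string.
  const response = await fetch('https://service.example.com/report.pdf'); // hypothetical service-layer endpoint
  const pdfBytes = Buffer.from(await response.arrayBuffer());

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/pdf' },
    body: pdfBytes.toString('base64'), // base64-encode the Buffer for the gateway
    isBase64Encoded: true, // API Gateway converts the body back to binary
  };
};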
I am wondering if someone can confirm I am doing this in the expected way? Or maybe there is a better way?
Also, when we set the binary support media type to */*, doesn't this mean it accepts all media types? I only want PDF to be supported.
This doc (https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings.html) should answer your question. There are two things to note:
You can pass the original binary file (blob) as well as a base64-encoded binary file through API Gateway.
Ref: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-content-encodings-examples-image-lambda.html
*/* works in your case, but it means API Gateway will treat every payload as binary data, which breaks payloads with text data, for example JSON payloads. So, ideally, application/pdf should be used as the Binary Media Type.
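If the gateway happens to be defined in code rather than through the console (an assumption; the poster may well be using the console), the narrower media type is a one-line setting. A sketch with the AWS CDK:

const cdk = require('aws-cdk-lib');
const apigateway = require('aws-cdk-lib/aws-apigateway');

const app = new cdk.App();
const stack = new cdk.Stack(app, 'PdfProxyStack');

const api = new apigateway.RestApi(stack, 'PdfApi', {
  binaryMediaTypes: ['application/pdf'], // narrower than */*, so JSON responses stay text
});
api.root.addMethod('ANY', new apigateway.MockIntegration()); // placeholder so the stack synthesizes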

Generate openApi DTO's with NestJS without a Controller

I am writing a NestJS service that provides a REST API and it publishes some messages to NATS. We are using the NestJS support to generate OpenAPI docs, and from the OpenAPI docs we generate an SDK that we import into our clients. This all works great, but only the REST API of our code is in the SDK.
What we'd like to also do is to have NestJS include the DTO's for the content for the messages we publish to NATS. Then our SDK will also include interfaces for these DTO's, and then our clients can cast the message content to the correct interface (based on the message subject). This way, the publisher of an event defines the content of the event, and users of it don't have to replicate the interface, yet they get strongly-typed code.
I've tried adding the @Api decorators to the DTO, but it appears that unless the DTO is used in the definition of a @Controller, it is not included in the resultant OpenAPI docs.
I was hoping for a way to decorate a "random" DTO in my code so it will then be included in the swagger docs, and in-turn included in a generated SDK. Is something like that possible?
You can also pass an extraModels array as part of SwaggerDocumentOptions:
SwaggerModule.createDocument(app, config, {
  extraModels: [.......]
});
https://github.com/nestjs/swagger/issues/241
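A hedged sketch of the wiring, where NatsEventDto stands in for one of the message DTOs (a hypothetical name; any class decorated with @ApiProperty works):

const { NestFactory } = require('@nestjs/core');
const { DocumentBuilder, SwaggerModule } = require('@nestjs/swagger');
const { AppModule } = require('./app.module'); // assumed root module
const { NatsEventDto } = require('./nats-event.dto'); // hypothetical message DTO

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  const config = new DocumentBuilder().setTitle('my-service').setVersion('1.0').build();
  const document = SwaggerModule.createDocument(app, config, {
    extraModels: [NatsEventDto], // included in the schemas even though no controller references it
  });
  SwaggerModule.setup('api', app, document);
  await app.listen(3000);
}
bootstrap();

The DTO then shows up under components/schemas in the generated document, so an SDK generator will emit an interface for it.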

apollo graphql query an uploaded file

Apollo Server 2.0 has the ability to receive file uploads as described in this blog post.
However, all the tutorials and blog posts I found only showed how to upload a file. Nobody demonstrated how to actually retrieve the file back to display it onscreen.
Does anybody know how to properly query the file contents for display onscreen?
Also, there's the possibility that there is no way of querying a file and you have to build a separate REST endpoint to retrieve the contents?
Some thoughts:
I imagine the query to be something like
query {
  fetchImage(id: "someid")
}
with the respective server-side definition
type Query {
  fetchImage(id: ID!): Upload # maybe also a custom type, but how do I include the actual file contents?
}
Hint: Upload is a scalar type that apollo-server automatically adds to your type definitions. It is used for the upload, so I imagine it also being usable for the download/query. Please read the blog post mentioned above for more information.
The response from a GraphQL service is always serialized as a JSON object. Technically, a format other than JSON could be used in serialization but in practice only JSON is used because it meets the serialization requirements in the spec. So, the only way to send a file through GraphQL would be to convert the file into some format that's JSON-compatible. For example, you could convert a Buffer to a byte array and send that as an array of integers. You would also have to send the appropriate mime type. It would be up to the client to convert the byte array back into a usable format on receiving the response.
If you go this route, you'd have to use your own scalar or object type -- the Upload scalar does not support serialization so it will throw if you try to use it as an output type (and it's not really suitable for this sort of thing anyway).
However, while doing this is technically possible, it's also inadvisable. Serializing a larger file could cause you to run out of memory since there's no way to stream data through GraphQL (the entire response has to be in memory before it can be sent). It's much better to serve the file statically (ideally using nginx instead of Node). If your API needs to refer to the file, it can then just return the file's path.
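To make the trade-off concrete, here is an untested sketch of the serialization approach with a custom File object type carrying base64 content (the fetchImage field, the uploads path, and the JPEG mime type are all assumptions):

const { ApolloServer, gql } = require('apollo-server');
const fs = require('fs').promises;

const typeDefs = gql`
  type File {
    mimetype: String!
    content: String! # base64-encoded file contents; the client must decode them
  }
  type Query {
    fetchImage(id: ID!): File
  }
`;

const resolvers = {
  Query: {
    fetchImage: async (_, { id }) => {
      // The whole file is read into memory before the response is serialized,
      // which is exactly the scaling problem described above.
      const buffer = await fs.readFile(`./uploads/${id}.jpg`);
      return { mimetype: 'image/jpeg', content: buffer.toString('base64') };
    },
  },
};

new ApolloServer({ typeDefs, resolvers }).listen().then(({ url }) => console.log(url));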
You can do this by installing Express alongside Apollo Server:
apollo-server-express
Install the above package and instantiate an Express app with Apollo Server as explained in the package docs.
Then set the static folder using Express like this:
app.use("/uploads", express.static("uploads")); // Serve static files over HTTP
uploads is my static folder, and /uploads will serve GET requests to that path.
// Now I can access static files like this
http://localhost:4000/uploads/test.jpg
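Putting it together, a minimal sketch in the Apollo Server 2.x style (the schema here is a stand-in; the real one would include the upload and query fields discussed above):

const express = require('express');
const { ApolloServer, gql } = require('apollo-server-express');

const typeDefs = gql`
  type Query {
    hello: String
  }
`;
const resolvers = { Query: { hello: () => 'world' } };

const app = express();
app.use('/uploads', express.static('uploads')); // serve uploaded files over plain HTTP

const server = new ApolloServer({ typeDefs, resolvers });
server.applyMiddleware({ app });

app.listen(4000, () => {
  console.log('GraphQL at http://localhost:4000' + server.graphqlPath);
});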

How to create Index with custom analyzers from json file in Azure Search .NET SDK?

I had read that the Azure Search .NET SDK uses Newtonsoft.Json to convert its models to/from JSON in its underlying REST API calls, so I've been doing the same in my own app.
I have a simple app which creates a new Index using the .NET SDK. To do this, I was defining my Index in a JSON file, using the format outlined here (https://learn.microsoft.com/en-us/rest/api/searchservice/create-index), and then converting it to a Microsoft.Azure.Search.Models.Index object using Newtonsoft:
var index = JsonConvert.DeserializeObject<Microsoft.Azure.Search.Models.Index>(System.IO.File.ReadAllText("config.json"));
This was working fine before I configured custom analyzers, but now that I have custom analyzers in my config, the Analyzers, Tokenizers, and TokenFilters are not being resolved into the correct types. I.e. my custom analyzer is being deserialized as a Microsoft.Azure.Search.Models.Analyzer instead of a Microsoft.Azure.Search.Models.CustomAnalyzer, and the same goes for the Tokenizers and TokenFilters: they are being deserialized into the base types.
Is there an easy way I can create an Index like this in the .NET SDK from a json file?
Unfortunately this is not an officially supported scenario. While it works for simple index definitions, we're working to understand what we need to do to be able to support all cases.
Please post your feature request on our User Voice page to help us prioritize: https://feedback.azure.com/forums/263029-azure-search
In the meantime, you might be able to get it working yourself by adapting the JsonSerializerSettings initialization code at the bottom of this file.

Create a new content type to consume JSON

I need to create a new content type in Orchard CMS that will only have a field indicating a JSON URL to be consumed.
My question is how to consume the JSON in the content type and return the consumed data.
Thanks
You need to issue a web request using e.g. RestSharp (or the built-in WebClient class) and return the deserialized data. RestSharp already has a JSON deserializer built in, but if you need a more robust solution you can use JSON.NET.
The request would best be done in one of the content part handler events (e.g. OnLoaded): just set the value of some property on your custom part to the data returned from there.
