I need to create something similar to a GraphQL server, but fully contained within a Node process rather than an actual server: essentially a JavaScript function that you call with a query or mutation as a string, and that returns an object or string with the response based on your resolvers.
It's a weird requirement, I know. We need to mock a GraphQL server at my company, and due to some limitations in our build pipeline we can't run an actual server.
Apologies that this is quite an open question, but I don't know where to start. What package contains the core functionality for GraphQL? If I were making a GraphQL server I'd use Apollo Server or GraphQL Yoga, but it's hard to Google what I need as it's such an unusual requirement.
You just need the vanilla GraphQL.js package (graphql) to execute a query against a schema. The package exports a graphql function with the following signature:
graphql(
  schema: GraphQLSchema,
  requestString: string,
  rootValue?: ?any,
  contextValue?: ?any,
  variableValues?: ?{ [key: string]: any },
  operationName?: ?string
): Promise<GraphQLResult>
From the docs:
The graphql function lexes, parses, validates and executes a GraphQL request. It requires a schema and a requestString. Optional arguments include a rootValue, which will get passed as the root value to the executor, a contextValue, which will get passed to all resolve functions, variableValues, which will get passed to the executor to provide values for any variables in requestString, and operationName, which allows the caller to specify which operation in requestString will be run, in cases where requestString contains multiple top-level operations.
So given a schema, you can just do:
import { graphql } from 'graphql'

const request = `
  query MyQuery {
    someField
  }
`

// graphql() returns a Promise, so await it (inside an async function)
const { data, errors } = await graphql(schema, request)
Note: If you have typeDefs and resolvers that you'd normally pass to an ApolloServer config, you can create a GraphQLSchema object by passing them to graphql-tools' makeExecutableSchema instead (which is what apollo-server and graphql-yoga use under the hood).
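For example, here is a minimal sketch of a fully in-process "server" built that way; the typeDefs and resolvers are illustrative placeholders:

import { graphql } from 'graphql';
import { makeExecutableSchema } from 'graphql-tools';

// illustrative schema; swap in your real typeDefs and resolvers
const typeDefs = `
  type Query {
    someField: String
  }
`;

const resolvers = {
  Query: {
    someField: () => 'hello',
  },
};

const schema = makeExecutableSchema({ typeDefs, resolvers });

// the whole "server": an async function from query string to result object
export async function execute(request: string, variables?: Record<string, any>) {
  return graphql(schema, request, null, null, variables);
}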
Related
I am building an API with NestJS. For E2E tests I am using Jest; in beforeAll() I instantiate a complete NestJS application using createNestApplication from the AppModule:
const module = await Test.createTestingModule({
  imports: [AppModule],
}).compile();

app = module.createNestApplication<NestFastifyApplication>(
  new FastifyAdapter(),
);
await app.init();
Tests and validations are carried out with the app object. For some validations, and to delete records from the database after testing, I am looking for a way to get the repositories of my entities from the app object, or at least the TypeORM instance that holds the connection to the database.
Reviewing the object and its documentation, I found that NestJS provides some methods to extract modules or providers from it, but I haven't seen anything to extract the DB connection instance. I tried to extract the TypeOrmModule using the select method like this:
const typeOrmModule = app.select(TypeOrmModule);
But that causes the error:
Nest could not select the given module (it does not exist in current context)
I don't want to have to generate a new connection to the DB knowing that the app object already has one. So I wonder if there is a way to extract that instance of the connection or the repositories that the object has inside. Thanks in advance
Jay McDoniel's comment guided me to find the solution. The method getRepositoryToken(Entity) from @nestjs/typeorm returns the injection token for the entity's repository, and with it I could get the repo I needed:
const repo = app.get<Repository<Entity>>(getRepositoryToken(Entity));
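For completeness, here is a minimal sketch of how that token can be used for post-test cleanup, reusing the app object from the question; the User entity and its import path are hypothetical:

import { Repository } from 'typeorm';
import { getRepositoryToken } from '@nestjs/typeorm';
import { User } from '../src/user/user.entity'; // hypothetical entity and path

afterAll(async () => {
  const userRepo = app.get<Repository<User>>(getRepositoryToken(User));
  await userRepo.clear(); // truncates the table, removing the test records
  await app.close();
});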
I have a GraphQL server I've created using ApolloServer and TypeGraphQL.
I have two types defined on my GraphQL server:
User {
  id: string;
  name: string;
  email: string;
  ...
}

UserPrefrences {
  userId: string;
  theme: string;
  color: string;
  ...
}
The data for the User type is stored in a database that I access through a different GraphQL server, by forwarding the request I get from the client.
The data for the UserPrefrences type is stored in a different database, which I access directly from my GraphQL server.
I don't want my client side to have to know about these two separate types or to run two separate queries.
I'm looking for a way to let my client run the following query against my GraphQL server:
query UserData($userId: String!) {
  id
  name
  email
  theme
  color
}
But if I forward this request to the GraphQL server that I'm querying, I will get a response saying the fields 'theme' and 'color' are unknown to it.
I'm trying to find a way to forward only the relevant fields to that GraphQL server and resolve the rest within my own server. But I receive the query as a string, which makes it a pain to try to extract only the fields I'm interested in with a regex.
I'd be more than happy for any ideas on how to solve this issue.
The only way I found is using a GraphQL client in Node.js.
I'm using the npm library called graphql-request:
https://www.npmjs.com/package/graphql-request
import { GraphQLClient } from 'graphql-request';

const client = new GraphQLClient('http://localhost:4000/graphql');

const query = `
  {
    yourQuery {
      id
      name
    }
  }
`;

// request() returns a Promise, so await the response
const response = await client.request(query);
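Tying it back to the question, here is a minimal sketch of a resolver that forwards only the user fields to the remote server and resolves theme/color locally; the user(id:) field, the userData query, and the getUserPrefrences helper are all assumptions, not part of the original setup:

import { GraphQLClient } from 'graphql-request';

const remote = new GraphQLClient('http://localhost:4000/graphql');

// hypothetical local lookup for the UserPrefrences database
declare function getUserPrefrences(userId: string): Promise<{ theme: string; color: string }>;

const resolvers = {
  Query: {
    userData: async (_parent: unknown, { userId }: { userId: string }) => {
      // forward only the fields the remote server knows about
      const remoteQuery = `
        query ($id: String!) {
          user(id: $id) { id name email }
        }
      `;
      const { user } = await remote.request(remoteQuery, { id: userId });

      // resolve the remaining fields from the local database
      const prefs = await getUserPrefrences(userId);
      return { ...user, ...prefs };
    },
  },
};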
I'm trying to take advantage of db connection reuse in Lambda, by keeping the code outside of the handler.
For example - something like:
import dbconnection from './connection'

const handler = (event, context, callback) => {
  // use dbconnection
}
The issue is that I don't decide which database to connect to until I do a lookup to see where the customer should be connecting. In my specific case I have 'customer=foo' in a query param, and from that I can look up that foo should connect to database1.
So what I need to do is something like this:
const dbconnection = require('./connection')('database1')
The way it is now, I would need to do this in every handler method, which is expensive.
Is there some way I can pull the query parameter, look up my database and set it / switch it globally within the Lambda execution context?
I've tried this:
import dbconnection from './connection'

const handler = (event, context, callback) => {
  const client = dbconnection.setDatabase('database1')
}

....
./connection.js

setDatabase(database) {
  if (this.currentDatabase !== database) {
    // connect to a different database
    this.currentDatabase = database;
  }
}
Everything works locally with sls offline, but it doesn't work in the actual AWS Lambda execution context. Thoughts?
Either you can hardcode the database name (or provide it via an environment variable) or you can't. If you can, pull the connection out of the handler and it will not be re-created on every invocation. If you can't, then, as you have mentioned, what you are trying to do is make the Lambda stateful. Lambda was designed to be stateless, and AWS intentionally doesn't expose specifics about the underlying containers, precisely so that you don't start doing what you are attempting now: introducing state.
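A minimal sketch of the first option, assuming ./connection exports a factory that takes a database name (as in the question) and that the name is supplied via a DB_NAME environment variable:

import connectionFactory from './connection'; // the factory from the question

// created once per container, then reused across warm invocations
const dbconnection = connectionFactory(process.env.DB_NAME);

export const handler = async (event: unknown) => {
  // use dbconnection here
};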
I am using the Firebase Realtime Database and I was wondering which is the better pattern regarding
firebase.database()
Is it considered bad practice to have multiple instances of this? Is it better to have a single instance of the database that is exported within the Node app, or is it basically the same to create a new instance in every single action creator file?
import * as firebase from 'firebase';
firebase.initializeApp(config);
export const provider = new firebase.auth.GoogleAuthProvider();
export const auth = firebase.auth();
export default firebase;
I have this approach for the Firebase app instance, and I am unsure whether a similar pattern is needed for the database instance as well. I couldn't find anything about this in the Firebase docs.
Every time you call one of the product methods on the firebase object that you get from the import, it will give you exactly the same object in return. So, every time you call firebase.auth(), you'll get the same thing back, and every time you call firebase.database(), you'll get the same thing. How you want to manage those instances is completely your preference.
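So exporting a single database instance, mirroring the auth export in the question, is purely a convenience; a minimal sketch:

import * as firebase from 'firebase';

firebase.initializeApp(config); // assumes config is defined as above

// calling firebase.database() anywhere returns this same instance;
// exporting it once is a convenience, not a requirement
export const database = firebase.database();
export default firebase;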
I am new to MongoDB and Node.js. So far I have been able to create a new MongoDB database and access it via Node.js. However, I want to write a generic set of methods for accessing collections (CRUD), as my list of collections will grow. For example, I have collections containing books and authors:
var books = db.collection('books');
var authors = db.collection('authors');

exports.getBooks = function(callback) {
  books.find(function(e, list) {
    list.toArray(function(res, array) {
      if (array) callback(null, array);
      else callback(e, "Error !");
    });
  });
};
I have a similar method for getting authors. Now this is getting too repetitive, as I want to add methods for the other CRUD operations as well. Is there a way to have common/generic CRUD methods for all my collections?
You should take a look at Mongoose; it makes it easy to work with MongoDB from Node.js. Mongoose has a schema-based solution, where each schema maps to a MongoDB collection, and you get a set of methods to manipulate these collections via models, which are obtained by compiling the schemas. I was in exactly the same place a couple of months ago and found that Mongoose is good enough for all your needs.
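A minimal sketch of that schema/model approach, with an illustrative book schema:

import mongoose from 'mongoose';

// illustrative schema; each schema maps to a collection ('books' here)
const bookSchema = new mongoose.Schema({
  title: String,
  author: String,
});

// the compiled model already provides generic CRUD methods:
// find, create, updateOne, deleteOne, ...
const Book = mongoose.model('Book', bookSchema);

export async function getBooks() {
  return Book.find().exec();
}

Defining one model per collection this way gives every collection the same find/create/update/delete surface without repeating any boilerplate.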
@Dilpa - not sure if you have looked at or are utilizing Mongoose, but it can be helpful with implementing CRUD.
I wrote my own service to handle very simple CRUD operations on MongoDB documents. Mongoose is excellent, but it imposes structure on the documents (which IMO goes against the purpose of MongoDB: if you're going to have a schema, why not just use a relational DB?).
https://github.com/stupid-genius/MongoCRUD
This service also has the advantage of being implemented as a REST API, so it can be consumed by a Node.js app or anything else. If you send a GET request to the root path of the server, you'll get a screen that shows the syntax for all the CRUD operations (the GUI isn't implemented yet). My basic approach was to specify the db and collection in the URL path (host/db/collection) and pass the doc in the POST body. The route handlers then just pass the doc on to the appropriate MongoDB function; my service exposes those methods in a fairly raw state (it does require authentication, though).
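A minimal sketch of that host/db/collection routing idea, assuming Express and the official mongodb driver, with authentication omitted:

import express from 'express';
import { MongoClient } from 'mongodb';

const app = express();
app.use(express.json());

const client = new MongoClient('mongodb://localhost:27017');
await client.connect(); // top-level await, or wrap in an async main()

// POST /:db/:collection inserts the request body as a document
app.post('/:db/:collection', async (req, res) => {
  const collection = client.db(req.params.db).collection(req.params.collection);
  const result = await collection.insertOne(req.body);
  res.json(result);
});

app.listen(3000);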