I am quite desperate right now and looking for any kind of help.
I am trying to set up a caching mechanism in my project using GraphQL and Redis.
This is how I configure GraphQLModule:
GraphQLModule.forRoot({
  cache: new BaseRedisCache({
    client: new Redis({
      host: 'localhost',
      port: 6379,
      password: 'Zaq1xsw#',
    }),
    cacheControl: {
      defaultMaxAge: 10000,
    },
  }),
  plugins: [
    responseCachePlugin(),
  ],
  autoSchemaFile: path.resolve(__dirname, `../generated/schema.graphql`),
  installSubscriptionHandlers: true,
}),
This is how I’ve created queries and mutations:
@Resolver()
export class AuthResolver {
  constructor(
    private readonly prismaService: PrismaService,
    private readonly authService: AuthService,
  ) {}

  @Query(returns => String)
  async testowe(@Args('input') input: string, @Info() info: any) {
    info.cacheControl.setCacheHint({ maxAge: 5000, scope: 'PUBLIC' });
    return 'test';
  }
}
When I use GraphQL Playground and try this query, I get the response, and the headers look like this:
HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=utf-8
cache-control: max-age=5000, public
Content-Length: 28
ETag: W/"1c-2Df/lONPXcLzs1yVERHhOmONyns"
Date: Tue, 28 Dec 2021 21:35:11 GMT
Connection: keep-alive
Keep-Alive: timeout=5
As you can see, there is a "cache-control" header.
My problem is that I cannot see any keys or values stored in Redis. I am connected to the Redis server with the redis-cli tool and I've tried the KEYS '*' command. There is nothing stored in Redis.
I also have a problem with more complex queries: I do not even get the "cache-control" header.
Do you have any idea what I am doing wrong here? Should I be able to see stored values in Redis with this approach?
Thank you in advance for any advice.
From what I can see, you don't tell your resolver to store its result in Redis. The Apollo Server docs are not super clear about this.
I did a research project around caching and GraphQL, so feel free to read my Medium post about it: https://medium.com/@niels.onderbeke.no/research-project-which-is-the-best-caching-strategy-with-graphql-for-a-big-relational-database-56fedb773b97
But to answer your question, I've implemented Redis with GraphQL this way:
Create a function that handles the caching, like so:
export const globalTTL: number = 90;

export const checkCache = async (
  redisClient: Redis,
  key: string,
  callback: Function,
  maxAge: number = globalTTL
): Promise<Object | Array<any> | number> => {
  return new Promise(async (resolve, reject) => {
    redisClient.get(key, async (err, data) => {
      if (err) return reject(err);
      if (data != null) {
        // logger.info("read from cache");
        return resolve(JSON.parse(data));
      } else {
        // logger.info("read from db");
        let newData = await callback();
        if (!newData) newData = null;
        redisClient.setex(key, maxAge, JSON.stringify(newData));
        resolve(newData);
      }
    });
  });
};
Then in your resolver, you can call this function like so:
@Query(() => [Post])
async PostsAll(@Ctx() ctx: any, @Info() info: any) {
  const posts = await checkCache(ctx.redisClient, "allposts", async () => {
    return await this.postService.all();
  });
  return posts;
}
You have to pass your Redis client into the GraphQL context; that way you can access the client inside your resolver using ctx.redisClient ...
This is how I've passed it:
const apolloServer = new ApolloServer({
  schema,
  context: ({ req, res }) => ({
    req,
    res,
    redisClient: new Redis({
      host: "redis",
      password: process.env.REDIS_PASSWORD,
    }),
  }),
});
This way you should be able to store your data in your Redis cache.
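As an aside, ioredis also exposes a promise-based API, so a helper like checkCache above can be written without wrapping a callback in a Promise. A minimal sketch, reusing the same client type and globalTTL from above (checkCacheAsync is just an illustrative name):

export const checkCacheAsync = async (
  redisClient: Redis,
  key: string,
  callback: () => Promise<any>,
  maxAge: number = globalTTL
): Promise<any> => {
  // Try the cache first; ioredis resolves to null on a miss
  const cached = await redisClient.get(key);
  if (cached != null) return JSON.parse(cached);

  // Cache miss: fetch fresh data and store it with a TTL (in seconds)
  const newData = (await callback()) ?? null;
  await redisClient.setex(key, maxAge, JSON.stringify(newData));
  return newData;
};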
The info.cacheControl.setCacheHint({ maxAge: 5000, scope: 'PUBLIC' }); approach you are trying belongs to a different caching strategy within Apollo Server. Apollo can calculate the cache-control response header from this information, but only if you register this plugin:
const apolloServer = new ApolloServer({
  schema,
  plugins: [
    ApolloServerPluginCacheControl({
      // Cache everything for 1 hour by default.
      defaultMaxAge: 3600,
      // Send the `cache-control` response header.
      calculateHttpHeaders: true,
    }),
  ],
});
Note: you can set the default max-age to a value that suits your needs. Keep in mind that maxAge is expressed in seconds, so the maxAge: 5000 in your resolver caches for almost an hour and a half.
Hope this solves your problem!
You can find my implementation of it at my research repo: https://github.com/OnderbekeNiels/research-project-3mct/tree/redis-server-cache
I faced the same problem. Try the following:
GraphQLModule.forRoot({
  plugins: [
    responseCachePlugin({
      cache: new BaseRedisCache({
        client: new Redis({
          host: 'localhost',
          port: 6379,
          password: 'Zaq1xsw#',
        }),
      }),
    }),
  ],
  autoSchemaFile: path.resolve(__dirname, `../generated/schema.graphql`),
  installSubscriptionHandlers: true,
}),
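With this configuration the response cache plugin writes its entries straight to Redis, so after running a query you should see keys appear with KEYS '*' in redis-cli (the plugin stores full query responses under an fqc: prefix).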
I'm mocking the next/router dependency in my Jest + React Testing Library tests as I always have:
import * as nextRouter from 'next/router';

export const routerData = {
  pathname: '/users/create',
  route: '/users/create',
  query: { },
  asPath: '/users/create',
  isFallback: false,
  basePath: '',
  isReady: true,
  isPreview: false,
  isLocaleDomain: false,
  events: {},
};

// mock router
jest.mock('next/router');
nextRouter.useRouter.mockImplementation(() => (routerData));

describe('a component that requires next/router', () => ... );
This had been working correctly, but after updating to Next.js 12.2.0 I get this warning:
No router instance found.
You should only use "next/router" on the client side of your app.
This warning makes all my tests with the mocked router fail.
Ideas to fix this?
Well, it appears that this is not related to 12.2.0. Somehow my last version of Next, 12.0.0, wasn't throwing this error, but other older versions did.
Thanks to bistacos for the response here.
const useRouter = jest.spyOn(require('next/router'), 'useRouter');
useRouter.mockImplementation(() => ({
  pathname: '/',
  ...moreRouterData
}));
I'm migrating from version 0.2.* to 0.3.6 of TypeORM, and I'm not sure how to handle multi-tenant connections with the new DataSource.
My previous implementation was based on the connectionManager and looked something like this:
{
  const connectionManager = getConnectionManager();

  // Check if tenant connection exists
  if (connectionManager.has(tenant.name)) {
    const connection = connectionManager.get(tenant.name);
    return Promise.resolve(
      connection.isConnected ? connection : connection.connect()
    );
  }

  // Create new tenant connection
  return createConnection({
    type: 'postgres',
    name: tenant.name,
    host: tenant.host,
    port: tenant.port,
    username: tenant.username,
    password: tenant.password,
    database: tenant.database,
    entities: [...TenantModule.entities],
  });
}
The connection manager is now deprecated, and maintaining my own array of connections doesn't sound right to me.
Any ideas on how this should be handled correctly?
With typeorm 0.3.6, getConnection and createConnection, among others, are deprecated. You can find the migration guide here.
To create a new connection, you will have to use DataSource as follows:
import { DataSource, DataSourceOptions } from 'typeorm';

const dataSourceOptions: DataSourceOptions = {
  type: 'mysql',
  host,
  port,
  username,
  password,
  database,
  synchronize: false,
  logging: false,
  entities: ['src/database/entity/*.ts'],
  migrations: ['src/database/migration/*.ts'],
};

export const AppDataSource = new DataSource(dataSourceOptions);
where new DataSource is the equivalent of new Connection, and an existing dataSource instance takes the place of getConnection.
To check that you are connected to the database, you will have to initialize AppDataSource:
AppDataSource.initialize()
  .then(() => {
    // db initialized
  })
  .catch((err: Error) => {
    throw new Error(`Database connection error: ${err}`);
  });
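For the multi-tenant scenario from the question, one approach is to keep a module-level map of DataSource instances keyed by tenant name; it takes over the role the deprecated connection manager used to play. A minimal sketch, assuming a tenant object shaped like the one in the question (the Tenant interface and getTenantDataSource helper are illustrative, not TypeORM APIs):

import { DataSource } from 'typeorm';

// Illustrative tenant shape, mirroring the config used in the question
interface Tenant {
  name: string;
  host: string;
  port: number;
  username: string;
  password: string;
  database: string;
}

// Module-level registry keyed by tenant name, replacing getConnectionManager()
const dataSources = new Map<string, DataSource>();

export const getTenantDataSource = async (tenant: Tenant): Promise<DataSource> => {
  const existing = dataSources.get(tenant.name);
  if (existing) {
    // Reuse the cached instance; initialize it again if it was destroyed
    return existing.isInitialized ? existing : existing.initialize();
  }

  // Create and initialize a new tenant DataSource
  const dataSource = new DataSource({
    type: 'postgres',
    host: tenant.host,
    port: tenant.port,
    username: tenant.username,
    password: tenant.password,
    database: tenant.database,
    entities: [...TenantModule.entities], // as in the question
  });
  dataSources.set(tenant.name, dataSource);
  return dataSource.initialize();
};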
I am trying to use Sequelize (v5.21.13) to connect to my SQL Server database in my Express.js app.
dbConfig.js
var dbConfig = {
  server: process.env.DB_HOST,
  authentication: {
    type: 'default',
    options: {
      userName: process.env.DB_USERNAME,
      password: process.env.DB_PASSWORD
    }
  },
  options: {
    database: process.env.DB_NAME
  }
};

module.exports = dbConfig;
index.js:
const dbConfig = require('./dbConfig');
const Sequelize = require('sequelize');
const connection = new Sequelize(
  dbConfig.options.database,
  dbConfig.authentication.options.userName,
  dbConfig.authentication.options.password,
  {
    host: dbConfig.server,
    dialect: 'mssql',
  }
);

connection.sync().then(() => {
  console.log('Connected!');
}).catch((e) => {
  console.log('Error:\n', e);
});
Now the thing is that each time I run the server, I get this error:
AccessDeniedError [SequelizeAccessDeniedError]: Login failed for user 'master'.
I have also tried adding additional properties to the new Sequelize() call, like the following, with no luck:
dialectOptions: {
  instanceName: 'instance',
  options: {
    encrypt: true,
    trustServerCertificate: true,
    requestTimeout: 30000
  }
}
I even tried changing the password to a very simple one with no special characters; after the change, connecting with DataGrip works, but Sequelize still fails.
Everything on the dbconfig object is correct so I don't see what the issue might be.
Solved it. I was putting the DB instance ID as the database name; I realized that the database name was different. Changed it and I'm now connected through Sequelize.
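As a side note, sequelize.authenticate() is a lighter way to verify credentials than sync(), since it only runs a trivial validation query instead of syncing tables; a small sketch against the connection from above:

// Verify credentials without creating or altering tables
connection.authenticate().then(() => {
  console.log('Connected!');
}).catch((e) => {
  console.log('Error:\n', e);
});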
I am using the Serverless Framework for Lambda deployment, and I am facing a really weird problem. I use Sequelize to connect to an RDS Aurora MySQL DB. The deployment is successful, but when I invoke the APIs I see SequelizeConnectionError: connect ETIMEDOUT. The APIs work fine when I run offline, but the deployed APIs don't. They start working as soon as I make any small change in the console and save it, like changing the timeout from 30 to 31, but when I redeploy I face the same problem, and I just can't figure out why.
Error:
SequelizeConnectionError: connect ETIMEDOUT
Edits:
Yes, this is Aurora Serverless with the Data API enabled. The Lambda function runs in the same VPC as the DB. The security group and subnets are also the same. Here is my DB config snippet:
DB_PORT: 3306
DB_POOL_MAX: 10
DB_POOL_MIN: 2
DB_POOL_ACQUIRE: 30000
DB_POOL_IDLE: 10000
This is my db.js:
const sequelize = new Sequelize(process.env.DB_NAME, process.env.DB_USER, process.env.DB_PASSWORD, {
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  dialect: 'mysql',
  pool: {
    max: process.env.DB_POOL_MAX,
    min: process.env.DB_POOL_MIN,
    acquire: process.env.DB_POOL_ACQUIRE,
    idle: process.env.DB_POOL_IDLE
  }
});
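One thing worth ruling out in a setup like this: process.env values are always strings, so the pool options above arrive as strings rather than numbers. A sketch of the same db.js with explicit coercion:

const sequelize = new Sequelize(process.env.DB_NAME, process.env.DB_USER, process.env.DB_PASSWORD, {
  host: process.env.DB_HOST,
  port: Number(process.env.DB_PORT),
  dialect: 'mysql',
  pool: {
    // process.env values are strings; Sequelize expects numbers here
    max: Number(process.env.DB_POOL_MAX),
    min: Number(process.env.DB_POOL_MIN),
    acquire: Number(process.env.DB_POOL_ACQUIRE),
    idle: Number(process.env.DB_POOL_IDLE)
  }
});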
My handler file is really long with over 33 APIs. Below is one of them:
context.callbackWaitsForEmptyEventLoop = false;

if (event.source === 'serverless-plugin-warmup') {
  try {
    const { Datalogger } = await connectToDatabase();
  } catch (err) { }
  return {
    statusCode: 200,
    headers: { 'Access-Control-Allow-Origin': '*' },
    body: 'warm-up'
  };
}

try {
  const { Datalogger } = await connectToDatabase();
  var datalogger;
  if (!event.headers.dsn && !event.headers.sftp)
    throw new HTTPError(404, `Datalogger serial number and SFTP root directory is mandatory`);
  datalogger = await Datalogger.findOne({
    where: {
      dsn: event.headers.dsn,
      sftpRootDir: event.headers.sftp,
    }
  });
  if (!datalogger)
    throw new HTTPError(404, `Datalogger with this input was not found`);
  console.log('datalogger', datalogger);
  await datalogger.destroy();
  return {
    statusCode: 200,
    headers: { 'Access-Control-Allow-Origin': '*' },
    body: JSON.stringify(datalogger)
  };
} catch (err) {
  console.log('Error in destroy: ', JSON.stringify(err));
  return {
    statusCode: err.statusCode || 500,
    headers: { 'Content-Type': 'text/plain', 'Access-Control-Allow-Origin': '*' },
    body: err.message || 'Could not destroy the datalogger.'
  };
}
I reached out to AWS support about this issue, and they suspect the error occurs due to a Node.js version upgrade. I was previously using Node 8 and it was working; after upgrading to 10, it results in intermittent timeouts: a few calls are successful, then one fails. Now even if I go back to version 8, it's the same issue.
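For what it's worth, the Sequelize docs describe a connection-reuse pattern for AWS Lambda aimed at exactly this kind of intermittent timeout: keep the instance in a module-level variable and restart its pool on each invocation. A sketch along those lines (loadSequelize stands in for the db.js setup above; worth checking the pattern against your Sequelize version):

let sequelize = null;

async function connectToDatabase() {
  if (!sequelize) {
    // First invocation in this container: create the instance (as in db.js)
    sequelize = await loadSequelize();
  } else {
    // Restart the pool so connections are not reused across invocations
    sequelize.connectionManager.initPools();
    // Restore getConnection() if it was overwritten by a previous close()
    if (sequelize.connectionManager.hasOwnProperty('getConnection')) {
      delete sequelize.connectionManager.getConnection;
    }
  }
  return sequelize;
}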
I have a simple Express API where I use MySQL to retrieve my data. I use Webpack 4 to bundle it with a very simple configuration:
'use strict';

const path = require('path');

module.exports = {
  entry: './src/main.js',
  target: 'node',
  output: {
    filename: 'gept_api.js',
    path: path.resolve(__dirname, 'dist'),
  },
  node: {
    __dirname: true,
  },
};
When I use webpack --config webpack.config.js -d for development, everything works just fine.
However, when I run webpack --config webpack.config.js -p for production, it suddenly doesn't work anymore and throws an error when getting a connection from the pool.
TypeError: Cannot read property 'query' of undefined
at Object.getItem (C:\Users\freek\Dropbox\Code\Apps\GEPT\GEPTv2_API\dist\gept_api.js:1:154359)
at t.db_pool.getConnection (C:\Users\freek\Dropbox\Code\Apps\GEPT\GEPTv2_API\dist\gept_api.js:1:154841)
at c._callback (C:\Users\freek\Dropbox\Code\Apps\GEPT\GEPTv2_API\dist\gept_api.js:1:68269)
at c.end (C:\Users\freek\Dropbox\Code\Apps\GEPT\GEPTv2_API\dist\gept_api.js:1:8397)
at C:\Users\freek\Dropbox\Code\Apps\GEPT\GEPTv2_API\dist\gept_api.js:1:322509
at Array.forEach (<anonymous>)
at C:\Users\freek\Dropbox\Code\Apps\GEPT\GEPTv2_API\dist\gept_api.js:1:322487
at process._tickCallback (internal/process/next_tick.js:112:11)
So somehow this is broken by using production mode in Webpack 4: the connection object is undefined, while it isn't in development mode.
I have no idea how to fix this, since I'm a noob at using Webpack. I tried searching on Google, but couldn't find anything relevant.
How I create my pool:
'use strict';

var mysql = require('mysql');
var secret = require('./db-secret');

module.exports = {
  name: 'gept_api',
  hostname: 'https://api.toxsickproductions.com/gept',
  version: '1.3.0',
  port: process.env.PORT || 1910,
  db_pool: mysql.createPool({
    host: secret.host,
    port: secret.port,
    user: secret.user,
    password: secret.password,
    database: secret.database,
    ca: secret.ca,
  }),
};
How I consume the connection:
pool.getConnection((err, connection) => {
  PlayerRepository.getPlayer(req.params.username, connection, (statusCode, player) => {
    connection.release();
    res.status(statusCode);
    res.send(player);
    return next();
  });
});
and
/** Get the player, and log to HiscoreSearch if it exists.
 *
 * Has callback with statusCode and player. Status code can be 200, 404 or 500.
 * @param {string} username The player's username.
 * @param {connection} connection The mysql connection object.
 * @param {(statusCode: number, player: { username: string, playerType: string }) => void} callback Callback with statusCode and the player if found.
 */
function getPlayer(username, connection, callback) {
  const query = 'SELECT p.*, pt.type FROM Player p JOIN PlayerType pt ON p.playerType = pt.id WHERE username = ?';
  connection.query(query, [username.toLowerCase()], (outerError, results, fields) => {
    if (outerError) callback(500);
    else if (results && results.length > 0) {
      logHiscoreSearch(results[0].id, connection, innerError => {
        if (innerError) callback(500);
        else callback(200, {
          username: results[0].username,
          playerType: results[0].type,
          deIroned: results[0].deIroned,
          dead: results[0].dead,
          lastChecked: results[0].lastChecked,
        });
      });
    } else callback(404);
  });
}
I found what was causing the issue. Apparently the mysql package relies on Function.prototype.name, because setting keep_fnames: true fixed the production build. (https://github.com/mishoo/UglifyJS2/tree/harmony#mangle-options)
I disabled the Webpack 4 standard minification and used custom UglifyJsPlugin settings:
'use strict';

const path = require('path');
const UglifyJsPlugin = require('uglifyjs-webpack-plugin');

module.exports = {
  entry: './src/main.js',
  target: 'node',
  output: {
    filename: 'gept_api.js',
    path: path.resolve(__dirname, 'dist'),
  },
  node: {
    __dirname: true,
  },
  optimization: {
    minimize: false,
  },
  plugins: [
    new UglifyJsPlugin({
      parallel: true,
      uglifyOptions: {
        ecma: 6,
        mangle: {
          keep_fnames: true,
        },
      },
    }),
  ],
};
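For what it's worth, the same fix should also be possible without disabling minification entirely, by overriding optimization.minimizer instead. A sketch assuming terser-webpack-plugin (which later Webpack 4 releases bundle as the default minifier):

'use strict';

const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  // ...same entry/target/output/node settings as above
  optimization: {
    minimizer: [
      new TerserPlugin({
        parallel: true,
        terserOptions: {
          // Keep function names so code relying on Function.prototype.name
          // (like the mysql package here) survives minification
          keep_fnames: true,
        },
      }),
    ],
  },
};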