I am using Next.js, and on the API side I have written some dummy code to fetch rows from my database using the serverless-mysql library.
I followed the example in the documentation, and on my computer it works fine: I can connect to the database and get the rows. The database is on my VPS, not on localhost.
But when I deploy the code to Vercel and try to access /api/hello, I see this error in my Vercel log:
[GET] /api/hello
{
error: Error: Error: getaddrinfo ENOTFOUND "**.***.**.**"
at connect (/var/task/node_modules/serverless-mysql/index.js:80:15)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Object.query (/var/task/node_modules/serverless-mysql/index.js:182:5)
at async excuteQuery (/var/task/.next/server/pages/api/hello.js:33:25)
at async handler (/var/task/.next/server/pages/api/hello.js:59:24)
at async Object.apiResolver (/var/task/node_modules/next/dist/server/api-utils.js:102:9)
at async Server.handleApiRequest (/var/task/node_modules/next/dist/server/next-server.js:1064:9)
at async Object.fn (/var/task/node_modules/next/dist/server/next-server.js:951:37)
at async Router.execute (/var/task/node_modules/next/dist/server/router.js:222:32)
at async Server.run (/var/task/node_modules/next/dist/server/next-server.js:1135:29)
}
(I have replaced the real IP shown in the message with "**.***.**.**".)
The database accepts connections from outside, since I can reach it from my computer.
I have also configured the environment variables correctly in the project settings.
Thank you very much for your help.
You will need to set the environment variables both in your Vercel dashboard and in your Next.js app.
In your .env file:
NEXT_PUBLIC_VERCEL_URL="http://localhost:3000"
In your code, reference the variable:
export const getBaseUrl = () => {
  if (process.env.NODE_ENV === "development") {
    return "http://localhost:3000";
  }
  return process.env.NEXT_PUBLIC_VERCEL_URL;
};
You can then call this utility function anywhere in your code.
On Vercel, set the environment variable to VERCEL_URL.
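The same approach applies to the database credentials the question is about. Here is a minimal sketch of an API route that reads them from the environment with serverless-mysql; the MYSQL_* variable names and the table name are assumptions, not values from the question:

// pages/api/hello.js (sketch only; the MYSQL_* variable names are assumptions)
import serverlessMysql from 'serverless-mysql';

const db = serverlessMysql({
  config: {
    host: process.env.MYSQL_HOST,         // set this in .env locally AND in the Vercel dashboard
    database: process.env.MYSQL_DATABASE,
    user: process.env.MYSQL_USER,
    password: process.env.MYSQL_PASSWORD,
  },
});

export default async function handler(req, res) {
  // 'my_table' is a placeholder table name
  const rows = await db.query('SELECT * FROM my_table LIMIT 10');
  await db.end(); // release the connection so the serverless function can exit cleanly
  res.status(200).json(rows);
}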
I am trying to create a dapp with Node.js that lets me upload a file to a local IPFS node, but I'm having some trouble.
I've tried the following:
submit_change: document.getElementById('upload_form').addEventListener('submit', async function(event) {
  event.preventDefault();
  // create comes from ipfs-http-client: import { create } from 'ipfs-http-client'
  const ipfs = create({ host: "127.0.0.1", port: 8081, protocol: "http" });
  await ipfs.add("Hello world!", (error, res) => {
    console.log(res);
    if (error) {
      console.log(error);
    }
  });
}),
But I get this:
POST http://127.0.0.1:8081/api/v0/add?stream-channels=true&progress=false 404 (Not Found)
fetch # http.js:147
Client.fetch # core.js:148
post # http.js:189
addAll # add-all.js:23
await in addAll (async)
last # index.js:13
add # add.js:25
eval # index.js:98
core.js:67 Uncaught (in promise) HTTPError: 404 page not found
at Object.errorHandler [as handleError] (core.js:67:15)
at async Client.fetch (http.js:155:9)
at async addAll (add-all.js:23:17)
at async last (index.js:13:20)
at async HTMLFormElement.eval (index.js:98:5)
errorHandler # core.js:67
In theory, IPFS is running on port 8081 and the app on port 8080. If I add something from my Linux terminal with the command ipfs add, it works fine, and I can see what I've uploaded at the URL http://127.0.0.1:8081/ipfs/hash.
What am I doing wrong in my dapp?
You can check out the Moralis IPFS API, which makes this process a lot easier!
Here is a guide on how to use it: https://docs.moralis.io/web3-data-api/evm/how-to-upload-a-folder-to-ipfs.
Let me know if you need any help!
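As a rough sketch of what the linked guide covers (assuming the Moralis v2 Node SDK; the API key and the file path below are placeholders, not values from the question):

const Moralis = require("moralis").default;

async function uploadToIpfs() {
  await Moralis.start({ apiKey: "YOUR_MORALIS_API_KEY" }); // placeholder key

  // Each entry is a path plus base64-encoded content.
  const abi = [
    { path: "myDapp/hello.txt", content: Buffer.from("Hello world!").toString("base64") },
  ];

  const response = await Moralis.EvmApi.ipfs.uploadFolder({ abi });
  console.log(response.toJSON()); // IPFS paths of the uploaded files
}

uploadToIpfs();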
I created some functions in Firebase Cloud Functions and all of them work. But I have a new function that does not work properly. I don't know why, because I think it follows the same pattern as the others.
This is my code:
const functions = require('firebase-functions');
const express = require('express');
const cors = require('cors')({ origin: true });

const appVideo = express();
appVideo.use(cors);

appVideo.get('/update-video', async (req, res) => {
  console.log('updateStatusVideo idCourse', req.query.idCourse, ' idMateri: ', req.query.idMateri, ' idVideo:', req.query.idVideo);
  res.status(200).send('Oke');
});

exports.video = functions.https.onRequest(appVideo);
I usually do a partial deploy, like firebase deploy --only functions:video. But when I call the HTTPS function from the browser, it often returns
Request failed with status code 404
Another weird thing is that when I inspect the browser console, I find
Failed to load resource: the server responded with a status of 500 ()
This is the URL of the function in Firebase:
https://us-central1-my-apps.cloudfunctions.net/video [modified for confidentiality]
Please help.
When you export this line:
exports.video = functions.https.onRequest(appVideo);
you define a Cloud Function called video that is deployed at https://us-central1-PROJECT_ID.cloudfunctions.net/video, where PROJECT_ID is your Firebase project ID.
Because you use an Express application for this exported function, any URL it handles must start with https://us-central1-PROJECT_ID.cloudfunctions.net/video (the BASE_URL).
This line:
appVideo.get('/update-video', ...)
attaches a listener to BASE_URL/update-video, which becomes https://us-central1-PROJECT_ID.cloudfunctions.net/video/update-video.
If you want to use just https://us-central1-PROJECT_ID.cloudfunctions.net/video as-is, you'll need to change to using
appVideo.get('/', ...)
If you want to use just https://us-central1-PROJECT_ID.cloudfunctions.net/update-video, you'll need to change to using
appVideo.get('/', ...)
and
exports.update = {};
exports.update.video = functions.https.onRequest(appVideo);
// "update.video" is deployed as "update-video"
Note: This last part abuses deploying groups to get the desired URL
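For example, with the first option the handler is registered on the root path, so the function answers at its base URL (a sketch reusing the appVideo app from the question):

// Responds at https://us-central1-PROJECT_ID.cloudfunctions.net/video
appVideo.get('/', async (req, res) => {
  res.status(200).send('Oke');
});

exports.video = functions.https.onRequest(appVideo);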
I tried to deploy the server side of my Angular Universal SSR app to Firebase Functions, but ran into the error Upload Error: HTTP Error: 400, Unknown Error.
From what I understand, this error often happens when the deployment is huge (in my case it's 438 MB). The reason it's so big is that I'm deploying localized versions of my website, so dist/browser and dist/server both have en, de, and fr directories with pretty much the same content. How can I solve this issue?
console output
=== Deploying to 'PROJECT_NAME'...
i deploying functions
Running command: npm --prefix "$RESOURCE_DIR" run lint
+ functions: Finished running predeploy script.
i functions: ensuring required API cloudfunctions.googleapis.com is enabled...
i functions: ensuring required API cloudbuild.googleapis.com is enabled...
+ functions: required API cloudfunctions.googleapis.com is enabled
i functions: preparing dist directory for uploading...
i functions: packaged dist (438.04 MB) for uploading
! functions: Upload Error: HTTP Error: 400, Unknown Error
Error: HTTP Error: 400, Unknown Error
index.js
const functions = require('firebase-functions');
// Increase readability in Cloud Logging
require("firebase-functions/lib/logger/compat");
const expressApp = require('./server/proxy').app();
exports.ssr = functions
  .region('us-central1')
  .runWith({})
  .https
  .onRequest(expressApp);
proxy.ts (which gets compiled to js and put into the dist/server folder)
import * as express from 'express';
import * as cookieParser from 'cookie-parser';
import { join } from 'path';

export function app() {
  const server = express();
  server.use(cookieParser());

  const languages = ['en', 'de', 'fr'];
  languages.forEach((locale) => {
    const appServerModule = require(join(__dirname, locale, 'main.js'));
    server.use(`/${locale}`, appServerModule.app(locale));
  });

  server.get('/(:locale(en|fr|de)/)?*', (req, res, next) => {
    const { locale } = req.params;
    let userLocale = (req.headers['accept-language'] || '').substring(0, 2);
    if (!languages.includes(userLocale)) {
      userLocale = 'en';
    }
    if (locale !== userLocale) {
      res.redirect(userLocale + req.url);
    }
  });

  return server;
}

function run() {
  app().listen(4200, () => {
    console.log(`Node Express server listening on http://localhost:4200`);
  });
}

run();
I had the same problem with Next.js and Firebase Functions. Here is my solution:
Remove node_modules and run npm i.
Remove cache files and any out or public builds.
Check the Functions section of the Firebase console (there you can check your function's status and logs).
I hope this helps someone.
I decreased the size of my deployment, and it immediately fixed the issue. I would say you just have to optimize the files you have, or think about storing some files somewhere other than the Firebase Functions codebase (perhaps Firebase Storage).
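One concrete way to trim the upload is the ignore list in firebase.json, which excludes files from the functions package. A rough sketch, assuming the functions source is the dist directory as in the console output; which extra entries (such as browser assets already served by Hosting) are safe to exclude depends on what the SSR bundle actually reads at runtime:

{
  "functions": {
    "source": "dist",
    "ignore": [
      "node_modules",
      ".git",
      "firebase-debug.log",
      "firebase-debug.*.log",
      "browser/**/assets/**"
    ]
  }
}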
I had this issue and was just able to fix it. In my case I had deployed to Firebase a number of times, and with each deploy the upload directory got bigger and bigger. I ended up changing my Hosting settings to keep only the last 3 deploys, which dropped my package size from around 170 MB to 15 MB once the previous deploys were deleted. For some reason each new upload pulled in every previous deploy, which doesn't make sense to me.
I have an Express application that runs a blog in a NextJS app, very similar to the example in their repo
I have set it up so that my app runs a query to fetch a blog article, and if the result is empty it throws a NotFoundException.
I catch this exception in my NextJS _error.js file, which is similar to a React error boundary, where I route the user to my 404 page. This part works fine.
The problem I'm having is that this exception is logged to the Node console even though I'm not logging it when catching the exception. This pollutes our company's logging software with all of our 404s.
Is there some Node/Express setting I'm missing that prevents the logging of exceptions? Here's my Express process error handler:
process.on('unhandledRejection', (reason, promise) =>
  console.error(`Unhandled Rejection at: ${promise}.\nreason: ${reason.stack || reason}`));
I know there is a log there, but the format of the one I want to eliminate is different to this, so I'm confident this is not the source.
I won't pretend to know what's going on, but my best guess is that next.js is logging the error somewhere. I did some digging and it appears there's an error logger in the server code that will log on errors unless a quiet property is set on the server:
https://github.com/zeit/next.js/blob/canary/packages/next-server/server/next-server.ts#L105:
return this.run(req, res, parsedUrl)
  .catch((err) => {
    this.logError(err)
    res.statusCode = 500
    res.end('Internal Server Error')
  })
Here's the sig and body for the logError function:
private logError(...args: any): void {
if (this.quiet) return
// tslint:disable-next-line
console.error(...args)
}
If you look at the documentation for using the next API with a custom server, it notes the following options object properties that can be passed to the constructor:
The next API is as follows:
next(opts: object)
Supported options:
dev (bool) whether to launch Next.js in dev mode - default false
dir (string) where the Next project is located - default '.'
quiet (bool) Hide error messages containing server information - default false
conf (object) the same object you would use in next.config.js - default {}
When constructing the next object, try passing quiet as true to see if it resolves your issue:
const express = require('express')
const next = require('next')
const port = parseInt(process.env.PORT, 10) || 3000
const dev = process.env.NODE_ENV !== 'production'
const app = next({ dev, quiet: true })
const handle = app.getRequestHandler()
The docs also mention that errors are logged in non-production environments (identified by process.env.NODE_ENV !== 'production'), so I would also check that you're setting NODE_ENV to 'production' when starting your application:
NODE_ENV=production node server.js
I hope this helps!
In Express you can set up error-handling middleware.
After all your route declarations, put:
server.use(function(req, res, next) {
  handler(req, res).catch(e => {
    // use rejected promise to forward error to next express middleware
    next(e)
  })
});
This way, when you reject a Promise, next(e) sends your error to the next middleware. I usually set up a middleware that receives the error and manages all errors in one single function (based on the error's statusCode, etc.), as sketched below.
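A minimal sketch of that final error-handling middleware; the statusCode property is an assumption about how the thrown exception (e.g. the NotFoundException) is shaped:

// Registered last, after all other routes and middleware.
server.use((err, req, res, next) => {
  const status = err.statusCode || 500;
  // Skip logging expected 404s so they don't pollute the logging software.
  if (status !== 404) {
    console.error(err);
  }
  res.status(status).send(status === 404 ? 'Not Found' : 'Internal Server Error');
});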
I'm running Apollo/React with Express and I'm trying to get server side rendering to work. The Apollo docs suggest the following initialisation code for connecting to the API server:
app.use((req, res) => {
  match({ routes, location: req.originalUrl }, (error, redirectLocation, renderProps) => {
    const client = new ApolloClient({
      ssrMode: true,
      networkInterface: createNetworkInterface({
        uri: 'http://localhost:3000', // Instead of 3010
        opts: {
          credentials: 'same-origin',
          headers: {
            cookie: req.header('Cookie'),
          },
        },
      }),
    });

    const app = (
      <ApolloProvider client={client}>
        <RouterContext {...renderProps} />
      </ApolloProvider>
    );

    getDataFromTree(app).then(() => {
      const content = ReactDOM.renderToString(app);
      const initialState = { [client.reduxRootKey]: client.getInitialState() };
      const html = <Html content={content} state={initialState} />;
      res.status(200);
      res.send(`<!doctype html>\n${ReactDOM.renderToStaticMarkup(html)}`);
      res.end();
    });
  });
});
which uses the match() function from React Router v3 (as evidenced by package.json in the "GitHunt" example linked from the docs). I'm using React Router v4 from which match() is absent, so I refactored the code as follows, using renderRoutes() from the react-router-config package.
app.use((req, res) => {
  const client = new ApolloClient(/* Same as above */)
  const context = {}

  const app = (
    <ApolloProvider client={client}>
      <StaticRouter context={context} location={req.originalUrl}>
        { renderRoutes(routes) }
      </StaticRouter>
    </ApolloProvider>
  )

  getDataFromTree(app).then(/* Same as above */)
})
My understanding is that <StaticRouter> obviates the need for match(). However, react-router-config provides a matchRoutes() function which seems to offer similar functionality (albeit without the callback) if needed.
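For reference, this is roughly how I understand matchRoutes would be used (illustration only; routes and req are the same objects as in the code above):

import { matchRoutes } from 'react-router-config';

// Returns an array of { route, match } objects for the requested URL, with no callback.
const branch = matchRoutes(routes, req.originalUrl);
branch.forEach(({ route, match }) => {
  console.log(route.path, match.params);
});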
When I visit http://localhost:3000, the page loads as expected and I can follow links to subdirectories (e.g. http://localhost:3000/folder). When I try to directly load a subdirectory by typing in the name in the address bar, my browser keeps waiting for the server to respond. After about six seconds, Terminal shows one of the following errors (not sure what causes the error to change on subsequent tries):
(node:1938) UnhandledPromiseRejectionWarning: Unhandled promise
rejection (rejection id: 1): Error: Network error: request to
http://localhost:3000 failed, reason: getaddrinfo ENOTFOUND localhost
localhost:3000
or
(node:8691) UnhandledPromiseRejectionWarning: Unhandled promise
rejection (rejection id: 1): Error: Network error: request to
http://localhost:3000 failed, reason: socket hang up
I've been struggling with this problem for a few hours now, but can't seem to figure it out. The solution to a similar problem seems unrelated to this case. Any help will be much appreciated!
Further information
If I don't kill the nodemon server, after some time I get thousands of the following errors:
POST / - - ms - -
(node:1938) UnhandledPromiseRejectionWarning:
Unhandled promise rejection (rejection id: 4443): Error: Network
error: request to http://localhost:3000 failed, reason: socket hang up
If I do kill the server, however, I immediately get this error instead:
/Users/.../node_modules/duplexer/index.js:31
writer.on("drain", function() {
^
TypeError: Cannot read property 'on' of undefined
at duplex (/Users/.../node_modules/duplexer/index.js:31:11)
at Object.module.exports (/Users/.../node_modules/stream-combiner/index.js:8:17)
at childrenOfPid (/Users/.../node_modules/ps-tree/index.js:50:6)
at kill (/Users/.../node_modules/nodemon/lib/monitor/run.js:271:7)
at Bus.onQuit (/Users/.../node_modules/nodemon/lib/monitor/run.js:327:5)
at emitNone (events.js:91:20)
at Bus.emit (events.js:188:7)
at process. (/Users/.../node_modules/nodemon/lib/monitor/run.js:349:9)
at Object.onceWrapper (events.js:293:19)
at emitNone (events.js:86:13)
Also, port 3000 is correct. If I change the number, I get a different error instead:
(node:2056) UnhandledPromiseRejectionWarning: Unhandled promise
rejection (rejection id: 1): Error: Network error: request to
http://localhost:3010 failed, reason: connect ECONNREFUSED
127.0.0.1:3010
Are you running your server/project inside a container (or containers)? I had the same issue and ended up doing the following to fix it:
const networkInterface = createNetworkInterface({
  uri: process.browser
    ? 'http://0.0.0.0:1111/graphql'
    : 'http://my-docker-container-name:8080/graphql'
})
I have an internal Docker network created for my containers in docker-compose.yml, which allows the containers to communicate with each other. However, the browser talks to the GraphQL server on a different URL, which causes the getaddrinfo ENOTFOUND issue you described during SSR: although it was working client side, on the server it would fail.
I am using the Next.js framework, which gives me the ability to detect whether code is running in the browser or during SSR; I'm sure you could do the same outside of Next.js.
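Outside Next.js, one common way to make the same browser-versus-server distinction is to check for the window global (a sketch; both URLs are placeholders):

// true on the server during SSR, false in the browser
const isServer = typeof window === 'undefined';

const networkInterface = createNetworkInterface({
  uri: isServer
    ? 'http://my-docker-container-name:8080/graphql' // internal Docker hostname (placeholder)
    : 'http://localhost:3000/graphql',               // URL the browser can reach (placeholder)
});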
I finally found out that the error was due to renderToString and renderToStaticMarkup not being available through the ReactDOM import.
The import statement import ReactDOM from 'react-dom' had to be replaced by import ReactDOMServer from 'react-dom/server'.
Also, uri had to point to http://localhost:3000/graphql instead of http://localhost:3000.
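In other words, only these pieces of the server code above change (shown as a fragment, not a complete file):

import ReactDOMServer from 'react-dom/server';

// ...inside the network interface configuration:
uri: 'http://localhost:3000/graphql', // the GraphQL endpoint, not the site root

// ...inside getDataFromTree(app).then(() => { ... }):
const content = ReactDOMServer.renderToString(app);
res.send(`<!doctype html>\n${ReactDOMServer.renderToStaticMarkup(html)}`);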