No idea what happened... It was working and then it wasn't.
I am currently building a web app and decided to take some time off from the product side and build a landing page.
For some reason, I decided to build the landing page on a separate GitHub branch. So I checked out a new branch, deleted everything, and started working on the landing page.
I soon realized this was a terrible idea and created a new repo to store my landing page.
I checked out my master branch again and spun my Node server up, but for some reason everything is now timing out. I opened Postman and tried hitting some of my endpoints, but after about 3 minutes of loading it tells me that it could not get any response and that there was an error connecting to localhost:3001/api/posts.
In my terminal, all I see is this when I hit the route:
GET /api/posts - - ms - -
This has never happened to me before and I am completely clueless as to WTH happened.
I tried deleting my local copy, re-cloning the repo, and reinstalling my dependencies, but to no avail...
Would love to know if someone has an idea on what's going on.
First, check whether this isn't caused by another process already listening on that port (but using resources which were deleted or not properly updated).
Closing applications, or even rebooting, can help you determine whether the issue is permanent or just linked to your current session.
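For example, to check what's already bound to that port (assuming macOS/Linux; 3001 is the port from the question):

# list any process currently listening on port 3001
lsof -i :3001
# Windows equivalent: netstat -ano | findstr :3001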
The OP (Syn) points out in the comments that the ~/.env file was missing.
.env files allow you to put your environment variables inside a file.
You just create a new file called .env in your project and slap your variables in there, one per line.
To read these values, there are a couple of options, but the easiest is to use the dotenv package from npm.
npm install dotenv --save
Note: the .env file is generally not committed to version control, as it includes potentially sensitive data.
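A minimal sketch of how this fits together (PORT and DB_URI are just example variable names, not anything your project necessarily uses):

# .env - one KEY=value per line; keep this file out of version control
PORT=3001
DB_URI=mongodb://localhost/myapp

// index.js - load .env into process.env before reading any variables
require('dotenv').config();

const port = process.env.PORT || 3001; // fall back if the variable is missing
console.log(`Listening on port ${port}`);

If the server reads something like a port or database URI from process.env at startup and the .env file is gone, requests can easily hang or time out the way the question describes.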
Related
I have a Next.js project that has been a real delight to work on until recently when changes stopped showing up in the browser. Normally the browser hot reloads, but now even hitting refresh won't show changes to the code—I have to shut down the dev server and run npm run dev again to get the changes to show up. This doesn't happen in all my Next.js projects—just one of them.
I've tried deleting the .next/ directory, but that didn't fix the problem. Any other ideas of where I could look to get this back to normal?
Next 12.1.0
Node 14.18.1
macOS 12.3
I had the same issue. In my case it was case sensitivity.
It turned out I renamed one of my components where the new name only had a letter changed from uppercase to lowercase (e.g. MyCOmponent.tsx -> MyComponent.tsx).
I made all the changes everywhere but missed one, the import path of the component in one of the pages.
I had: import MyComponent from '../../MyCOmponent.tsx'
Everything still worked when restarting the server, but hot reload or even browser refresh wouldn't, same as OP. Fixing the typo fixed everything.
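For clarity, the fixed import looked like this (same hypothetical path, with the file name's casing corrected):

import MyComponent from '../../MyComponent.tsx'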
I faced the same issue. I created a new file with a different name and copied over all the content from the file whose changes weren't showing up. Then the changes started showing in the browser.
So I'm getting a blue screen of death whenever I have "npm start" running for a reactjs app. It's an intermittent crash, i.e. it doesn't happen every time I run it nor are there any exact steps to duplicate the crash, but I'll try to explain below under what circumstances it happens.
Create a reactjs app using create-react-app npm module.
Start the app using npm start. A Chrome window opens, and webpack listens for changes I make to the source files.
Change any source file and save it. NPM recompiles it, the Chrome page refreshes, and I can see my changes.
The above things work as expected "normally", but once in a while, right after I save a file, the system crashes with a BSOD saying DRIVER_IRQL_NOT_LESS_OR_EQUAL (NETIO.SYS). There is no definite "step" or action other than saving the file or refreshing Chrome that causes this to happen, and it also doesn't happen every single time.
Here are the steps I took to find out/eliminate the root cause of this issue:
Disabled my AV (Symantec Endpoint Protection).
Used a different browser (Mozilla, hell, even IE).
Changed the system (used a different laptop, although the same type - a Microsoft Surface on Windows 10).
Updated all drivers, etc. (verified by my organization's admins).
Closed all other programs that might potentially be interfering (Atom IDE, Eclipse, etc.).
The necessary conditions for the crash to happen are:
npm start must be running (webpack server on localhost:3000)
A browser window must be open and connected to localhost:3000 (if no browser is connected, it doesn't crash even if you change and save the file 200 times - I checked). It also doesn't matter which browser (checked with Mozilla/Edge/Chrome).
I believe the crash happens when NPM is recompiling the files and serving them to the browser (asking it to refresh over a websocket), but I'm not an expert on Node.js/NPM so I'm not sure.
I've been stuck on this issue for more than 2 weeks now. Any help would be really appreciated. Kindly let me know if more information is needed.
The issue was with Symantec DLP (Data Loss Prevention), which was also installed on all our systems. The issue resolved itself after the admins added application exceptions for Node.js, npm, and my reactjs project workspace paths.
Just posting this so that in case someone has a similar issue they can try this or remove Symantec DLP altogether.
I'm making an app in the Cloud9 IDE using Node.js with the Express.js framework. Something very odd is happening to a specific .ejs file where if I try to update it (like typing some mumbo jumbo in an h1 tag and then saving and restarting the server), it NEVER gets reflected in the browser no matter what I do. For example, if I delete my jumbotron, save, restart the server, and then refresh the browser, I still see the same page with the jumbotron. I also tried deleting this entire file and then restarting the server and I still see the page and it doesn't break my application which is bizarre. All other .ejs files are fine and I can see the changes that I make.
I've spent about 4 hours trying to figure this out and no one else seems to have my specific issue. I tried clearing my browser cache, using different browsers, logging in/out of Cloud9, creating a new database, going back to older versions of my code, etc., and nothing seems to be working. I'm not even sure what code to post on here since my entire app is about 2000 lines of code so far. Does anyone have any suggestions? This is really frustrating.
Our current deploy process goes something like this:
Use grunt to create production assets.
Create a datestamp and point files at our CDN (e.g. /scripts/20140324142354/app.min.js).
Sidenote: I've heard this process called "versioning" before but I'm not sure if it's the proper term.
Commit build to github.
Run git pull on the web servers to retrieve the new code from github.
This is a node.js site and we are using forever -w to watch for file changes and update the site accordingly.
We have a route setup in our app to serve the latest version of the app via /scripts/*/app.min.js.
The reason we version like this is because our CDN is set to cache JavaScript files indefinitely and this purposely creates a cache miss so that the code is updated on the CDN (and also in our users' browsers).
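To illustrate, such a route can simply ignore the datestamp segment and always serve the current build. A sketch assuming Express, with a made-up dist path:

const express = require('express');
const path = require('path');
const app = express();

// Whatever datestamp the CDN requests, serve whatever build is on disk.
// The :version segment exists only to bust the CDN and browser caches.
app.get('/scripts/:version/app.min.js', (req, res) => {
  res.sendFile(path.join(__dirname, 'dist', 'app.min.js'));
});

app.listen(3000);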
This works fine most of the time. But where it breaks down is if one of the servers lags a bit in checking out the new code.
Sometimes a client hits the page while a deploy is in progress and tries to retrieve the new JavaScript code from the CDN. The CDN tries to retrieve it but hits a server that isn't finished checking out the new code yet and caches an old or partially downloaded file causing all sorts of problems.
This problem is exacerbated by the fact that our CDN has many edge locations and so the problem isn't always immediately visible to us from our office. Some edge locations may have pulled down old/bad code while others may have pulled down new/good code.
Is there a better way to do these deployments that will avoid this issue?
As a general rule of thumb:
Don't do live upgrades (unless the language supports it, but even then, think twice).
Pulling code using git pull and then waiting for the app to notice changes to files sounds a lot like the '90s: uploading PHP files to an Apache web server using FTP (or SFTP if you are cool) and waiting for Apache to notice that they were updated. It can't happen atomically, so of course there is a race condition. Some users WILL get a half-built and broken site.
I recommend only upgrading your live and running application while no one is using it. Hopefully you have a pool of servers behind a load balancer of some sort, which will allow you to remove them one at a time and upgrade them.
This will mean that users will be able to use both the old and the new site at the same time depending on how and when they access it, but that is much better than not being able to access it at all.
Ideally you would be able to spin up copies of each of the web servers that you have running with the new version of the site. Check that the new version works, and then atomically update the load balancer so that everyone gets bumped to the new site at the same time. Only once everything is verified to be working perfectly are the old machines shut down and decommissioned, or reused.
Step 4 in your procedure should be:
# export the new code into a fresh timestamped directory
git archive --remote $yourgithubrepo --prefix=$timestamp/ | tar -xf -
stop-server
# repoint the 'current' symlink at the new checkout
# (-n replaces the symlink itself instead of creating a link inside its target)
ln -sfn $timestamp current
start-server
Your server would use the current directory (well, a symlink) at all times. No matter how long the deploy takes, your application is in a consistent state.
I'll go ahead and post our far-from-ideal monkey-patch that we're using right now.
We deploy once, which may or may not go as planned. Once we're sure the code is deployed on all the servers, we do another build where the only thing that changes is the version number.
Then we deploy again server by server.
The race condition still exists, but because the application code is the same between the two versions, this masks the issue: no matter which server the CDN hits, it gets the "latest" code.
I've got hgweb up and running on IIS 7 (on Windows Server 2008). The web interface works, and I can view, pull, and clone the repositories there. But I cannot push; doing so gives me a 502 error right after "searching for changes". Using --debug shows the last few lines as:
sending unbundle command
sending 622 bytes
HTTP Error: 502 (Bad Gateway)
I am using TortoiseHg to push, but the result is the same when using the Mercurial command line.
I had followed the tutorial here: http://www.sjmdev.com/blog/post/2011/03/30/setting-mercurial-18-server-iis7-windows-server-2008-r2.aspx to setup hgweb.
Looks like an old question, but someone is bound to come across it again. I was close to drawing a black circle on a wall and ... anyhow, the issue for us was the way the central repository was created. We had cloned it from Bitbucket while remotely connected to the machine as the local administrator.
The issue was in the [Repository]\.hg folder. You need to set correct permissions on it. Try adding Everyone -> Full Control for testing purposes. Please make sure you change this to a dedicated network login or an appropriate local account afterwards.
I was seeing the exact same behaviour - even push worked fine, except for the Bad Gateway at the end of it all. After correct permissions were set, the issue was gone.
Thinking about it now, probably the best solution is to add each network login that uses the repo to the machine's users, and then grant those local users access permissions on the .hg folder.
Hope it helps someone.
Try using the ISAPI module method instead of the CGI method that executes python.exe, as documented here. There's also another related, and possibly duplicate, question here as well.
Take a look at the push_ssl setting in your hgweb.config file.
I was getting the same error (I had mine set to '*'), and was able to resolve it by removing the line entirely. Granted, this makes Mercurial somewhat less secure, but it lets me get past the configuration issue (for now) while I investigate properly configuring SSL on the server.
You may also have to review the allow_push setting in order to get past further errors (or take another look at your authorization setup).
NOTE: At least in my case, having 'push_ssl = false' wasn't enough, as that resulted in further errors (authorization failed).
(Again this is simply a temporary solution until the server can be properly secured.)
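For reference, the relevant section of hgweb.config would look something like this (these are the insecure, temporary values described above):

[web]
# temporary, insecure settings for testing only - lock these down afterwards
push_ssl = false
allow_push = *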
This can happen for different reasons. To get more details about the error, run:
hg push --config ui.usehttp2=true --config ui.http2debuglevel=info
For example, the problem may occur because of a proxy server, or simply because the Mercurial web server "forgets" about the repositories it needs to serve. If you are using the TortoiseHg Workbench, go to the Workbench UI, Repository -> Start Web Server, and make sure that your repository is in the list of served repos.
Try using https instead of http in .hg/hgrc; that resolved this problem for me with code.google.com.
I had this issue, and the problem ended up being the server running out of disk space.