I tried setting up a do-nothing Node app, and it failed.
I developed some Node.js code offline in containers, and now I want to try deploying it on DreamHost. I am doing it incrementally, adding features one by one, starting with “Hello World” and going from there.
I set up a new subdomain and enabled Passenger. I was able to serve up an index.html file. I followed https://help.dreamhost.com/hc/en-us/articles/360029083351-Installing-a-custom-version-of-NVM-and-Node-js and installed nvm and Node (using the versions recommended in that article). I then installed a few packages I plan to use (most notably Express; the rest won't come into play until later).
Even the bare Hello World app failed; the error message is below. I checked all the relevant files and they all have global read and execute permissions, so I'm wondering if it is something else. I tried multiple Hello World examples for app.js, copied directly from different tutorials, none of which worked (though they all work locally). My more complex code also does not work, but that is the next step.
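For reference, the kind of app.js I was testing looks like this (a minimal Express sketch of my own; the 3000 fallback is only for running it locally, since Passenger supplies the port itself):

// app.js - minimal Express "Hello World"
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello World');
});

// Passenger sets the PORT environment variable; 3000 is a local fallback.
app.listen(process.env.PORT || 3000);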
What am I missing? I followed the directions exactly. What other landmines do I have to look forward to? I really don't want to spend time wrestling with infrastructure; ideally, I want it to “just work”.
An error occurred while starting the web application. It exited before signalling successful startup back to Phusion Passenger. Please read this article for more information about this problem.
Raw process output:
*** ERROR ***: Cannot execute /home/<user name>/.nvm/versions/node/v12.16.3: Permission denied (13)
Unclear what solved the issue.
Ran through changing the permissions on the files, as would seem obvious. Also changed '/home/<user name>/.nvm/versions/node/v12.16.3' to '/home/<user name>/.nvm/versions/node/v12.16.3/bin/node' in the .htaccess file, since the error shows Passenger trying to execute the version directory rather than the node binary inside it. Neither change seemed to solve it on its own.
Repeated the process again later, followed it with `touch <webapp directory>/tmp/restart.txt`, and it started working. I had been editing files in the web app's directory, so it isn't clear what touching that file did. (Passenger watches tmp/restart.txt and restarts the app when that file's timestamp changes, so it may be that the earlier fixes simply hadn't been picked up until then.)
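For anyone else who lands here, the .htaccess I ended up with looked roughly like this (a sketch from memory; substitute your own username and Node version, and note that PassengerNodejs must point at the node binary, not the version directory):

PassengerEnabled on
PassengerNodejs /home/<user name>/.nvm/versions/node/v12.16.3/bin/node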
Related
No idea what happened... It was working and then it wasn't.
I am currently building a web app and decided to take some time off from the product side and build a landing page.
For some reason, I decided to build the landing page on a separate GitHub branch. So I checked out a new branch, deleted everything, and started working on the landing page.
I soon realized this was a terrible idea and created a new repo to store my landing page.
I checked out my master branch again and spun my Node server up, but for some reason everything is now timing out. I opened Postman and tried hitting some of my endpoints, but after about 3 minutes of loading it told me that it could not get any response and that there was an error connecting to localhost:3001/api/posts.
In my terminal, all I see is this when I hit the route:
GET /api/posts - - ms - -
This has never happened to me before, and I am completely clueless about WTH happened.
I tried deleting my local stuff and re-cloning the repo and installing my dependencies but to no avail...
Would love to know if someone has an idea on what's going on.
Check first whether this isn't caused by another process already listening on that port (but using resources which were deleted or not properly updated).
Closing applications or even rebooting can help you determine whether the issue is permanent or just linked to your current session.
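For example, on a Unix-like system you could check the port from the question like this (port 3001 and the stale Node process are assumptions based on the post):

lsof -i :3001    # show which process is bound to port 3001
kill <pid>       # stop it if it turns out to be a stale Node process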
The OP, Syn, points out in the comments that the ~/.env file was missing.
.env files allow you to put your environment variables inside a file.
You just create a new file called .env in your project and slap your variables in there on different lines.
To read these values, there are a couple of options, but the easiest is to use the dotenv package from npm.
npm install dotenv --save
Note: it is generally not versioned, as it includes potentially sensitive data.
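A minimal sketch of how the pieces fit together (the file contents and variable name are just examples):

# .env
DB_PASSWORD=secret

// index.js - load .env before anything reads process.env
require('dotenv').config();
console.log(process.env.DB_PASSWORD); // prints 'secret'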
I just started using Laravel's Lumen and managed to make it work both locally and on a server. When I was about to start exploring it, my index.php consisted of just:
$app = require __DIR__."/../lumenTest/bootstrap/app.php";
$app->run($app->make('request'));
echo $myundefinedvariable; // undefined variable; referencing it triggers the ErrorException below
This displays an ErrorException: Undefined variable: myundefinedvariable, but inside the "...at Application->Laravel\Lumen\Concerns{closure}" window I can see a giant wall of text with entries like:
... 'APP_KEY' => 'fake0BqKgHeC72EmT7039B6pDCsJ90key' , ..., 'DB_PASSWORD' => 'secret', ...
My first thought was that maybe it was because I'm running it locally with XAMPP or something, so I went and tried it on the server, and the same thing happened.
Is it normal that sensitive data from my .env file gets shown to everyone whenever any PHP error occurs?
Is there a way to avoid this happening (other than not having any PHP errors, because I tend to have a lot of them)?
Additional info:
PHP version 7.1.12
Lumen (5.6.1) (Laravel Components 5.6.*)
The directory "lumenTest" is one level above my www or public and there is where the .env is located, the site is on a Linux server shared host
No, that's not normal. Many professional developers consider this amateurish behavior, and it's exactly why some companies won't even consider using Laravel.
Many people (including me) have already told them that this is really not done, but the developers don't seem to care. In fact, it's the only framework I know of that considers it OK to print critical information on a debug page. A visitor should never see stack traces, SQL queries, or pieces of code, and environment variables are confidential and should never end up in an HTTP response.
The best advice I have is to use a professional MVC framework like ASP.NET, CodeIgniter, or Yii, since there's no telling what else the Laravel devs think is OK to do...
If on the other hand you do decide to use Laravel anyway, there's a package that counters this: https://github.com/GlaivePro/Hidevara
It's really easy to set up; just make sure you don't forget the app->extend instruction.
On a production server you must not run "composer install" but instead "composer install --no-dev". This way filp/whoops will (should, hopefully) not be installed and cannot be triggered.
For professional development, I certainly recommend not using Laravel, since the bar for what they consider acceptable seems to be very low.
As a side note: the developers claim that nothing can go wrong when APP_DEBUG=false, but past incidents have shown that the whoops handler can still be triggered when debug mode is disabled. https://www.google.com/amp/s/blog.hacken.io/dangers-of-laravel-debug-mode-enabled%3fhs_amp=true
Yes, if you have debug mode enabled, any sort of data relating to an error can be displayed. This certainly would include sensitive data that would be useful when debugging.
For production, you want all errors to be privately logged, not publicly displayed. For this reason, you will want APP_DEBUG=false in your .env file.
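In the .env file that looks something like this (a sketch; the values are illustrative):

APP_ENV=production
APP_DEBUG=false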
If this is happening while debug mode is already set to false, you will want to configure the hiding/logging of errors at the server level.
I am getting the following error even though I have verified that the PassengerAgent exists at that location. The application I am trying to run is a Node.js application on a cPanel Apache server. I have root access to the server.
An error occurred while starting the web application. It exited before signalling successful startup back to Phusion Passenger. Please read this article for more information about this problem.
Raw process output:
SpawnPreparerShell: /usr/local/share/gems/gems/passenger-5.0.28/buildout/support-binaries/PassengerAgent: No such file or directory
I have finally figured out what my problem was. I had CageFS installed and activated. Even though the permissions and ownership of PassengerAgent were correct, somehow CageFS was still blocking the user's access to it. I still need to figure out how to stop CageFS from blocking it, but at least I know what the problem is. I will probably move the installation to a new directory. I will post another reply when I have discovered how to let CageFS and Phusion Passenger work together.
I've got hgweb up and running on IIS 7 (on Windows Server 2008). The web interface works, and I can view, pull, and clone the repositories there. But I cannot push; doing so gives me a 502 error right after "searching for changes". Using --debug shows the last few lines as:
sending unbundle command
sending 622 bytes
HTTP Error: 502 (Bad Gateway)
I am using TortoiseHg to push, but the result is the same when using the Mercurial command line.
I had followed the tutorial here: http://www.sjmdev.com/blog/post/2011/03/30/setting-mercurial-18-server-iis7-windows-server-2008-r2.aspx to setup hgweb.
Looks like an old question, but someone is bound to come across it again. I was close to drawing a black circle on a wall and... anyhow, the issue for us was the way the central repository was created. We had cloned it from Bitbucket while connected to the machine over Remote Desktop as the local administrator.
The issue was in the [Repository]\.hg folder. You need to set correct permissions on it. For testing purposes, try adding Everyone -> Full Control. Please make sure you change this to a dedicated network login or an appropriate local account afterwards.
I was seeing the exact same behaviour: even the push itself worked fine, except for getting a Bad Gateway at the end every time. Once the correct permissions were set, the issue was gone.
Thinking about it now, probably the best solution is to add each network login that uses the repo to the machine's users and then grant those local users access permissions on the .hg folder.
Hope it helps someone.
Try using the ISAPI module method instead of the CGI method that executes python.exe, as documented here. There's also another related, and possibly duplicate, question here as well.
Take a look at the 'push_ssl' setting in your hgweb.config file.
I was getting the same error (I had mine set to '*'), and was able to resolve it by removing the line entirely. Granted, this makes Mercurial somewhat less secure, but it lets me get past the configuration issue (for now) while I investigate properly configuring SSL on the server.
You may also have to review the 'Allow_push' setting in order to get past further errors (or take another look at your authorization).
NOTE: At least in my case, having 'push_ssl = false' wasn't enough as that resulted in further errors (authorization failed).
(Again this is simply a temporary solution until the server can be properly secured.)
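For reference, the relevant hgweb.config section would look something like this (a temporary, insecure sketch for testing only; tighten both settings once SSL is set up):

[web]
push_ssl = false
allow_push = *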
This can happen for different reasons; to get more details about the error, run
hg push --config ui.usehttp2=true --config ui.http2debuglevel=info
For example, the problem may occur because of a proxy server, or simply because the Mercurial web server "forgets" about repositories it needs to serve: if you are using TortoiseHg Workbench, go to the Workbench UI, Repository -> Start Web Server, and make sure that your repository is in the list of served repos.
Try using https instead of http in .hg/hgrc; that is how I resolved this problem for code.google.com.
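That is, in .hg/hgrc, something like this (the URL is just a placeholder):

[paths]
default = https://example.com/repo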
I had this issue, and the problem ended up being the server running out of disk space.
I'm trying to deploy a WinForms application with IIS and ClickOnce. I can access the publish.htm page, and the install even starts when I click the provided link.
However I get this error during the installation process:
Downloading http://MyWebSiteUrl/.../Interop.SHDocVw.dll did not succeed.
The remote server returned an error: (500) Internal Server Error.
Can anybody help me out on this?
Thanks,
Bruno
I found out that I needed to check "use .deploy file extension" (under Properties > Publish > Options > Deployment).
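For context, that option renames every published file with a .deploy extension, so IIS only has to serve one unfamiliar file type. If IIS still returns errors for the files, a MIME mapping along these lines in web.config is the usual companion fix (a sketch, not from the original answer):

<system.webServer>
  <staticContent>
    <mimeMap fileExtension=".deploy" mimeType="application/octet-stream" />
  </staticContent>
</system.webServer>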
[Answering this old question because it comes up as the best match in my case and the accepted answer was of no use to me].
Background, in an IIS hosted ClickOnce scenario, the downloadable components are itemized in a manifest file at the root of the deployment (that's how you can specify a single download link and deploy all the supporting components).
I was converting a tested application from a WiX installation to a lightweight version with ClickOnce and received the HTTP 500 error without anything else in the logs. Naturally, I failed to think it through and instead found myself getting dragged down the rabbit hole on the internets, with instructions for detailed logging, magic spells, etc.
Upon more sober reflection, the problem was simple and I should have been able to tell immediately from the IIS log: a 500 followed by a 0 is shorthand for 'you're an idiot, the content isn't where you said it was' and it had almost nothing to do with ClickOnce.
I had copy/paste/edited an existing download-link template in MVC that was in use for simple apps, and it happened to cater to only two levels of subfolders in the manifest. When I ported a more complex project structure, I ended up leaving items in a Resources sub-sub-subfolder that looked fine in the manifest, but the path was being truncated in MVC so that the related item could not be found.
Moral of the story - if you get a 500 error always check first to make sure your non-functioning appliance is plugged into a working outlet...