I'm running a docpad instance, and it has been working just fine. Suddenly, when I run docpad watch, the server starts alright and there are no error messages, but when I load http://localhost:9778, the site is not available. No errors appear in the browser console or at the command line. Anyone have any ideas about what might be going wrong?
I ran into this recently and was able to get things rolling by adding watchFile to the preferredMethods in the docpad config - like so:
# git diff
--- a/docpad.coffee
+++ b/docpad.coffee
@@ -23,5 +23,6 @@ docpadConfig =
templateData: fetchConfig()
watchOptions:
catchupDelay: 0
+ preferredMethods: ['watchFile','watch']
It's mentioned in the DocPad Troubleshooting docs.
Hope this helps someone else.
UPDATE: I've now seen this on a co-worker's machine, and this did not solve the issue. It seems the server is just not responding. Running under debug mode, all looks OK, but when I try to hit it (with curl) I get
Recv failure: Connection reset by peer.
UPDATE 2: After a bunch of tries (reinstalling docpad and restarting things), this same fix seemed to work. What we found was that watch would appear to run and would see files change, but wasn't actually updating anything in the out directory. Adding watchFile to preferredMethods made things a bit less flaky.
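The way we checked whether regeneration was actually happening was roughly this (paths assume DocPad's default layout, with src/documents rendering into out/; adjust for your project):

touch src/documents/index.html.md    # touch any tracked source file
ls -l out/index.html                 # the rendered file's timestamp should update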
It's weird too, because the original config was working for a while (a week of development) with no issues. But today it started being flaky on 2 separate dev environments.
The solution that I have run with here is simply to use docpad run, which I think is the best practice. See this discussion for more information.
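For anyone unsure of the distinction: as I understand it, docpad run combines generating, watching, and serving in one process, whereas watch alone only regenerates:

docpad run    # generate + watch + serve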
Ubuntu 20.04
Apache 2.4.41
phpmyadmin 4.9.5deb2
I can log in to phpmyadmin and see the main page where the databases are listed in the left-hand panel. But I get an error message. Using F12, I see the following error:
Failed to load resource: the server responded with a status of 403 (Forbidden) config.js:1
There is a config.js file under /usr/share/phpmyadmin/js. Other js files in that folder load fine and show up in the browser's sources list (again, using F12).
I confirmed that the config.js file has read permissions set for everyone. All the js files in that folder have the same permissions set.
I also confirmed that /etc/apache2/conf-enabled/phpmyadmin.conf links to /etc/apache2/conf-available/phpmyadmin.conf and that links to /etc/phpmyadmin/apache.conf. Neither this apache.conf nor /etc/apache2/apache2.conf denies access to config.js.
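(In case a rule is hiding somewhere I haven't checked, these two commands list every config file Apache actually loads and search the likely directories for anything naming config.js; apache2ctl is the Debian/Ubuntu name for the tool:)

apache2ctl -t -D DUMP_INCLUDES
grep -rn "config.js" /etc/apache2 /etc/phpmyadmin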
When I delete config.js, I do not get the error, BUT I get other errors about functions defined in that config.js (like "Uncaught ReferenceError: isStorageSupported is not defined"). This tells me the forbidden file is indeed this file. I renamed it and created a new, empty config.js; I still get the same forbidden error (fwiw).
We've been using phpmyadmin on this server since April. The error has cropped up in the last couple of weeks. When I ssh'd into the server this week, it displayed a message that it needed to be restarted to apply updates. I did not know there were automatic updates going on. I restarted, which did not help. Then I updated and upgraded everything, but still no help. I uninstalled (remove and purge) and reinstalled phpmyadmin, but still no help.
My next step would be to upgrade phpmyadmin to the latest stable using a different repository (the debian repo is a little behind). But I thought I'd ask here because I have a feeling that might not help either.
I'm thinking that something changed in the server stack that isn't allowing the config.js file to be read; I just have no idea what that would be. Has anyone else seen this or something similar?
I implemented a band-aid: I copied the /usr/share/phpmyadmin/js/config.js file to config.1.js. I edited the code in /usr/share/phpmyadmin/libraries/classes/Header.php to use config.1.js instead of config.js.
No error now!
But I don't like this answer because it doesn't tell me what the original source of the problem is and how to keep it from happening again.
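For anyone wanting to reproduce the band-aid, the edit amounts to something like the following sketch (hypothetical; the exact addFile call and where it sits in Header.php vary by phpMyAdmin version):

--- a/libraries/classes/Header.php
+++ b/libraries/classes/Header.php
-        $this->_scripts->addFile('config.js');
+        $this->_scripts->addFile('config.1.js');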
I am getting an error on a form submit in localhost, but no errors on my live website.
The error is: Error: Can't set headers after they are sent.
I ran git diff to see the differences between my remote and local repos, and there were some, since I had made a couple of changes: adding a custom Bootstrap configuration (I wanted to change the base font from 14px to 18px) and uninstalling New Relic, an npm package for analytics. I am not sure how any of that would have caused the error; I am just trying to explain thoroughly.
My remote git repository is the code behind my live site, which is not getting this error, so I decided to make my local files match the remote and ran:
git fetch origin                # update the remote-tracking branches
git reset --hard origin/master  # discard local commits and uncommitted changes
git clean -f                    # delete untracked files
and then ran git diff again, and this time there were no differences, yet the error persists. That is what is puzzling to me: there should be no difference between the two. Thanks in advance for any tips.
As a result of some earlier testing I had created duplicate usernames, and this error only occurs when logged in as one of the duplicates, which exist only in my local DB. That explains why I was getting the error in localhost and not on my live site. Presumably the lookup matched both rows and the handler ended up responding once per match; the second response is what triggers "Can't set headers after they are sent."
OK, I have a test setup on a local server that is running like a champ.
I would like to reimplement this on my VPS. The config file only differs with regard to the mail server section, as the VPS has mail enabled and my local server does not.
The most apparent issue (perhaps there are others) is that when I hit my domain:9080 it redirects to the login page, but loses the port information. My local install does not.
For the life of me, I cannot figure out what I need to change to fix this.
To get an idea of what I mean, if the above was unclear, you can go to shadow.schotty.com:9080 and that works perfectly (well, obviously not the new-user part, as the email isn't set up). schotty.com:9080 has the redirection issue.
As for the obvious questions for me:
Here are the docker publish ports, copied from my start script:
--publish 9443:443 --publish 9080:80 --publish 9022:22 \
No, I did not copy over any existing part of the install from the local host, as I wanted to document what the hell I did, and since I am using a newer version I wanted none of the issues that crop up with incompatible config files.
I did copy my startup script, and modified it appropriately for the volume directories.
The only modifications to any configuration files are the mail server section entries.
Thanks to anyone who can toss an idea my way.
Andrew.
OK, I figured a few things out here that should be of help to others.
First off, something had changed since I did the install on shadow. But now both behave the same, since both are on the exact same revision.
To fix the web port across the board, you need to pick a port that the rest of the software suite does not use, and obviously one that other containers/daemons on the host don't use either. 8080 is indeed taken, so I chose to stick with 9080.
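(Two standard commands for checking what is already taken on the host:)

ss -ltn                                       # TCP ports currently listening
docker ps --format '{{.Names}}\t{{.Ports}}'   # ports each container publishes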
There are 2 places where this matters, and each has to be done in a specific way. The first is the config, where you need to set the variable as follows:
external_url 'http://host.domain.tld:9080'
I am sure many tried stopping there and failed (I sure as heck did). The second spot is the docker container initialization. For some reason it used to work, but does not anymore. The simple fix is to map the external port 1:1 to the internal one (docker's --publish takes host:container). In my case I am using 9080, so the following publish must be used:
--publish 443:443 --publish 9080:9080 --publish 22:22 \
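Put together, the start command looks something like this (assuming the stock gitlab/gitlab-ce omnibus image, which is what this setup appears to be; the container name and volume paths are examples, so use whatever your script already has):

docker run --detach \
  --hostname host.domain.tld \
  --publish 443:443 --publish 9080:9080 --publish 22:22 \
  --name gitlab \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest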
This fixes everything.
Now off to other issues :D
I'm trying to deploy a node.js app on Heroku for the first time. As instructed, I installed the Heroku Toolbelt, but something is wrong.
My machine runs Windows 10, using the Heroku Toolbelt.
Issue:
C:\Users\(username)\Downloads\Heroku [master]> heroku login
heroku-cli: Installing Toolbelt v4...
 !    Heroku client internal error.
 !    Search for help at: https://help.heroku.com
 !    Or report a bug at: https://github.com/heroku/heroku/issues/new

    Error:    getaddrinfo: No such host is known. (SocketError) (Excon::Errors::SocketError)
    Command:  heroku login
    Version:  heroku/toolbelt/3.42.45 (i386-mingw32) ruby/2.1.7
    Error ID: c47aaf8f037042beb25fdceb46cfe78f

More information in C:/Users/courtaks/.heroku/error.log
--------------------------------------------------------------------------------
Solved:
As instructed by the Heroku team in this issue: https://github.com/heroku/heroku/issues/1939
it appears to be a temporary DNS issue.
It's working now.
I didn't do anything except wait for the problem to solve itself. Also, try changing your DNS server; maybe that will work.
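(If you want to confirm it really is DNS while it's happening, resolving the API host by hand is a quick test; api.heroku.com is one of the hosts the toolbelt talks to, and nslookup works the same way on Windows:)

nslookup api.heroku.com            # ask your default resolver
nslookup api.heroku.com 8.8.8.8    # compare against a public resolver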
--------------------------------------------------------------------------------
Almost 6 months later, I got the exact same issue on my first interaction with Heroku. Not a good start.
I used the command to verify the installation:
heroku --version
It probably needs to download some additional data on the first start, to install/prepare the heroku-cli.
I am behind a corporate firewall and git + npm are working perfectly fine (after one-time proxy configuration).
The issue resolved itself after some time, along with the initial frustration and time lost. Just sharing my experience for the next user landing here, until the issue is actually fixed.
Log File extract:
Heroku client internal error. getaddrinfo: No such host is known. (SocketError)
...
Heroku client internal error. getaddrinfo: This is usually a temporary error during hostname resolution and means that the local server did not receive a response from an authoritative server. (SocketError)
[cross-posted partially on the given heroku git link, same as the original question and answer]
I use capistrano to deploy new versions of a website to servers running nginx and php-fpm, and sometimes php-fpm seems to get a bit confused after a deployment and expects old files to exist, generating the "No input file specified" error. I thought it could have something to do with APC, which I uninstalled, but I realize the process doesn't get as far as checking anything with APC.
Is there a permission-friendly way to tell php-fpm that after deployment it needs to flush its memory (or similar)? I don't think I want to do sudo restarts.
rlimit_files isn't set in php-fpm.conf, and ulimit -n is 250000.
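(Edit: one lead I'm considering: php-fpm's master process treats SIGUSR2 as a graceful reload, so if the deploy user is allowed to signal it, something like the following might do; the pid file path varies by distro and is just an example here.)

kill -USR2 "$(cat /var/run/php-fpm.pid)"    # graceful reload, respawns workers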
Nginx has its own rather aggressive file cache. It's worse when NFS is involved, since that has its own cache as well. Tell capistrano to restart nginx after deployment.
It can also be an issue with your configuration, as Mohammad suggests, but then a restart wouldn't fix it, so you can tell the two apart.
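Another approach that comes up with capistrano's symlinked releases (assuming the usual current symlink layout): have nginx resolve the symlink before handing the path to php-fpm, so workers never hold on to a vanished release path. In the php location block:

fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT   $realpath_root;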