OK, I have a test setup on a local server that is running like a champ.
I would like to reimplement this on my VPS. The config file only differs in the mail server section, as the VPS has that enabled and my local server does not.
The most apparent issue (there may be more) is that when I hit my domain:9080 it redirects to the login page, but loses that port information. My local install does not.
For the life of me, I cannot figure out what I need to change to fix this issue.
To get an idea of what I mean, if the above was unclear, you can go to shadow.schotty.com:9080 and that works perfectly (well, obviously not the new user part, as the email isn't set up). schotty.com:9080 has that redirection issue.
As for the obvious questions:
Here are the docker publish ports copied from my start script:
--publish 9443:443 --publish 9080:80 --publish 9022:22 \
No, I did not copy over any existing part of the install from the local host, as I wanted to document what the hell I did, and since I am using a newer version I wanted none of the potential issues that crop up with incompatible config files.
I did copy my startup script, and modified it appropriately for the volume directories.
The only modifications to any configuration files are the mail server section entries.
Thanks to anyone who can toss an idea my way.
Andrew.
OK, I figured a few things out here that should be of help to others.
First off, something had changed somewhat since I did the install on shadow. But now both are behaving the same, since both are on the exact same revision.
To fix the web port across the board, you will need to pick a port that the rest of the software suite does not use, and obviously one not already taken by other containers/daemons on the host. 8080 is indeed used, so I chose to stick with 9080.
There are two places where this matters, and it has to be done in a very specific way. The first is in the config -- you will need to set up the variable as follows:
external_url 'http://host.domain.tld:9080'
I am sure many tried stopping there and failed (I sure as heck did). The second spot is in the docker container initialization. For some reason it used to work without this, but does not anymore. The simple fix is to map the external port 1:1 to the internal one. In my case I am using 9080, so the following publish must be used:
--publish 443:443 --publish 9080:9080 --publish 22:22 \
This fixes everything.
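For reference, here is roughly what the matching docker run line ends up looking like with that 1:1 mapping (a sketch only; the hostname, image name, and volume paths are placeholders, not my actual values):
docker run --detach \
  --hostname host.domain.tld \
  --publish 443:443 --publish 9080:9080 --publish 22:22 \
  --restart always \
  --volume /srv/app/config:/etc/app \
  --volume /srv/app/data:/var/opt/app \
  your/image:tag
The key point is that the port in external_url and the published container port are the same number on both sides of the colon.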
Now off to other issues :D
I set up a Debian 10 server to host my containers running on Docker version 19.03.2.
It currently hosts 3 DNS containers (pi-hole => bind9 => dnscrypt-proxy) which means my Debian 10 server acts as a DNS server for my LAN.
I want to add a new container. However, I can't build it because it fails when it comes to RUN apt-get update. I checked the content of the /etc/resolv.conf of the container, and the content seems right (nameserver 1.1.1.1 and nameserver 9.9.9.9, which matches what I wrote in /etc/docker/daemon.json).
If I'm correct, the build step uses - by default - the DNS of the host, unless you specify DNS servers in /etc/default/docker or /etc/docker/daemon.json.
If the DNS servers in /etc/resolv.conf seem correct, and if the container has Internet access (I tried a RUN ping 8.8.8.8 -c1 and it works), shouldn't the build succeed?
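For reference, the DNS part of my /etc/docker/daemon.json just pins those two resolvers. Recreated here as a sketch, written with a heredoc (note that this overwrites the whole file, and the Docker daemon needs a restart afterwards):
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "dns": ["1.1.1.1", "9.9.9.9"]
}
EOF
sudo systemctl restart docker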
I tried several things, like overwriting the content of /etc/resolv.conf with other DNS, I also rebooted the server, restarted Docker, pruned downloaded images, used the --no-cache option... I also reinstalled Docker. Nothing seems to work.
I guess it must somehow be related to my DNS containers.
Below is the content of the /etc/resolv.conf of the host (the first entry is the host itself, as it redirects to Pi-hole).
Do you have any lead on how to solve this issue?
I can provide the docker-compose file of my DNS containers and the Dockerfile of my new container if you need them.
Thanking you in advance,
I have found this fix:
RUN chmod o+r /etc/resolv.conf && apt-get [....]
It works when I change the permissions.
I do not really understand why it behaves like this; if you have any lead I would be glad to know more!
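In case it helps anyone digging into the why, one thing worth comparing is the mode on /etc/resolv.conf as seen from a throwaway container (just a diagnostic sketch, using a stock Debian 10 image as an example):
docker run --rm debian:buster ls -l /etc/resolv.conf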
I'm trying to connect to an Azure file share from my Mac running High Sierra 10.13.6 using the following command:
mount_smbfs -d 0777 -f 0777 //dolphins:PASSWORDHERE@dolphins.file.core.windows.net/models /Users/b3020111/Azure
However I keep getting the error:
mount_smbfs: server connection failed: No route to host
I have turned off packet signing in /etc/nsmb.conf:
[default]
signing_required=no
After looking around the web I seem to be at a loss as to where to go; any help is appreciated.
I got it working with the Azure-provided connection example.
mount_smbfs -d 777 -f 777 //user:key@storageurl/folder ~/mountfolder
A folder in the file share is needed after the URL, and the mount folder must exist.
But the main reason for "No route to host" was that the access key had a forward slash in it! I regenerated key1 until I got a key without a forward slash.
BUT! Be aware that rebuilding the key will kill all mounts and connections to that storage account.
Came across this issue myself today. Do double check that your ISP does not block SMB port 445. In my case, AT&T does actually block this port. I found this in their guide http://about.att.com/sites/broadband/network
The solution for me was to connect with a VPN which I'm already hosting on Azure. Additionally, as others have mentioned in this thread, escape any / with %2f. Also, add the share name to the connection URL. For example, if your share name is my-data then the connection URL should contain xxx.file.core.windows.net/my-data.
This is omitted for some reason in the Azure docs/UI and was required for successful connection on OSX.
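Putting that together, a working connection string with an escaped key and the share name in the URL would look something like this (the account name, key, and share name here are made up for illustration):
mkdir -p ~/azure-share
mount_smbfs //mystorage:abc123%2fXYZ456@mystorage.file.core.windows.net/my-data ~/azure-share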
It was the "/" after all. I had to regenerate the key over ten times till I got a key that doesn't have the "/" character, and then it worked fine through the terminal.
It should work using the following syntax:
mount_smbfs //<storage-account-name>@<storage-account-name>.file.core.windows.net/<share-name> <desired-mount-point>
Without adding the permissions.
Via Finder, the same share can be mounted with Go > Connect to Server and the corresponding smb:// URL.
"mount(2) system call failed no route to host "
while mounting azure file share on linux vm we can have this error.
In my case One package was missing which is - cifs-utils
So, I have used below command
"sudo yum install cifs-utils -y" to resolv the issue.
It is important to allow port 445 (TCP) for SMB communication. If you can't access it, your firewall is blocking it! Please enable it and try again.
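A quick way to check whether port 445 is reachable at all from your machine (the storage account name is a placeholder):
nc -zv mystorage.file.core.windows.net 445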
I ran into this same problem, and while I was never able to get it working through the terminal I did manage to get it resolved in finder.
Essentially the same instructions as @Adam Smith-MSFT, however with one key difference.
I created a directory via Azure's web interface, and after that I was able to connect by adding /<directory-name> to the connection string. Without a directory this would not work at all.
I'm running a docpad instance, and it has been working just fine. Suddenly, now when I run docpad watch, the server starts alright and there are no error messages, but when I load http://localhost:9778, the site is not available. No errors appear in the console either, or at the command line. Anyone have any ideas about what might be going wrong?
I ran into this recently and was able to get things rolling by adding watchFile to the preferredMethods in the docpad config - like so:
# git diff
--- a/docpad.coffee
+++ b/docpad.coffee
@@ -23,5 +23,6 @@ docpadConfig =
     templateData: fetchConfig()
     watchOptions:
         catchupDelay: 0
+        preferredMethods: ['watchFile','watch']
It's mentioned in the Docpad Troubleshooting documentation.
Hope this helps someone else.
UPDATE: I've now seen this on a co-worker's machine and this did not solve the issue. It seems the server is just not responding. Running under debug mode, all looks ok, but when I try to hit it (with curl) I get
Recv failure: Connection reset by peer.
UPDATE2: After a bunch of tries (reinstalling docpad and restarting things), this same fix seemed to work. What we found was that watch would appear to run and would see files change, but wasn't actually updating things in the out directory. By adding watchFile to preferredMethods, things seemed a bit less flaky.
It's weird too, because the original config was working for a while (a week of development) with no issues. But today it started being flaky in 2 separate dev environments.
The solution that I have run with here is simply to use docpad run, which I think is the best practice. See this discussion for more information.
I use capistrano to deploy new versions of a website to servers that run nginx and php-fpm, and sometimes it seems like php-fpm gets a bit confused after deployment and expects old files to exist, generating the "No input file specified" error. I thought it could have had something to do with APC, which I uninstalled, but I realize the process doesn't get as far as checking stuff with APC.
Is there a permission-friendly way that I could use to tell php-fpm that after deployment it needs to flush its memory (or similar)? I don't think I want to do sudo restarts.
rlimit_files isn't set in php-fpm.conf, and ulimit -n is 250000.
Nginx has its own rather aggressive file cache. It's worse when NFS is involved, since that has its own cache as well. Tell capistrano to restart nginx after deployment.
It can also be an issue with your configuration as Mohammad suggests, but then a restart shouldn't fix the issue, so you can tell the two apart.
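If a reload after deployment does turn out to be the answer, a deploy hook could run something along these lines (a sketch assuming a systemd host and the default php-fpm pid path; adjust to whatever your servers actually use):
# make nginx drop its cached file handles for the old release
sudo systemctl reload nginx
# php-fpm reloads gracefully when its master process receives USR2
sudo kill -USR2 "$(cat /var/run/php-fpm.pid)"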
I've been using git for a few months and have never run into problems. I met my match today. I have a system running Ubuntu 10.10 (new system). I put my keys in place to access the server, and can ssh in just fine. I cloned my repos just fine. I can push added / deleted files just fine. However, when I try to push modified files, the push doesn't finish. It hangs on the last line (starts with "Total").
If I wait 15 minutes or so it gives me these errors:
Write failed: Broken pipe
Fatal: The remote host hung up unexpectedly
I've tried pushing as both a regular user and a sudo user. When I add a verbose flag to the push, I get nothing.
I think this is an SSH error, but it is completely puzzling me. Can anyone help?
I'm just going to run a list of ideas here.
Is this plain SSH or are you using e.g. -o ProxyCommand or another tunnel of sorts?
I'd check the version of the client, since you report being able to do the same correctly from other machines.
I'd also try creating a bundle from the client to eliminate the transport from the analysis (see the sketch after this list).
I'd check file permissions (and out-of-space/quota/temp space for the user) on the server. Are you using the same user that works for other clients?
You could look at a problem in the garbage collection step on the server (use git config to make sure it doesn't run automatically; see the sketch after this list).
Did you try other protocols (git-daemon or the smart HTTP server)?
Could something be up locally (like the repository being on synced NFS, or Dropbox, or ...)?
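A rough sketch of the two testable ideas above, the bundle transfer and disabling automatic gc on the server (file paths and repo names are placeholders):
# on the client: package the whole repo into one file, bypassing the SSH transport
git bundle create /tmp/repo.bundle --all
# copy it to the server however you like (scp, USB, ...), then verify and clone it there
git bundle verify /tmp/repo.bundle
git clone /tmp/repo.bundle repo-from-bundle
# in the server-side repo: stop git from running gc automatically when it receives a push
git config receive.autogc false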