We have a problem on only one of our servers hosted at Amazon (the development server).
The problem occurs when making a curl request to a specific domain, by running this:
> curl https://api.plivo.com
Results in:
curl: (51) SSL: no alternative certificate subject name matches target host name 'api.plivo.com'
I did some research and found that it might be a problem with the server's certificate. However, if I try this from any other server it works fine, and the same goes for my local machine.
So I'm thinking this might be some kind of caching issue with curl? I tried reinstalling it and updating it, but no dice.
I'm close to creating a new dev machine because of this, since it's blocking us from using this service.
To summarize from the comments:
The good system and the bad system were actually reaching different servers, which were configured with different certificates. That's why the request failed on one system but not on the other.
The reason for this difference was that the bad system had a stale entry in /etc/hosts which was used instead of asking the DNS server.
The problem was found by comparing the output of curl -v on both systems and noticing that the target IP address shown was different.
The problem was fixed by removing the old entry from /etc/hosts, so that the system now queries the DNS server and gets the correct IP address of the server.
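For anyone hitting a similar mismatch, a quick way to compare what each box actually resolves; this is a diagnostic sketch, with api.plivo.com standing in for whichever host misbehaves:

getent hosts api.plivo.com     # honours /etc/hosts first, like curl does
dig +short api.plivo.com       # asks the DNS server directly, bypassing /etc/hosts
curl -v https://api.plivo.com 2>&1 | grep -i 'Trying'   # shows the IP curl actually connects to

If getent and dig disagree, a stale /etc/hosts entry is the likely culprit.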
I'm trying to set up both Confluence and PostgreSQL in Docker. I've got them both up and running on my fully up to date CentOS 6 machine, with volume-mapping to the host file system so I can back them up easily. I can connect to PostgreSQL using pgAdmin from another machine just fine, and I can get into Confluence from a browser from that same machine. So, basically, both apps seem to be running as expected inside their respective containers and are accessible to the outside world, which of course eliminates a whole bunch of possibilities for my issue.
And that issue is that Confluence can't talk to PostgreSQL during initial setup, which is necessary for it to function. I'm getting connection failed errors (to be specific: "Can't reach database server or port : SQLState - 08001 org.postgresql.util.PSQLException: The connection attempt failed").
PostgreSQL is using the default port, 5432, which of course is exposed; otherwise I wouldn't be able to connect to it via pgAdmin. I also know the ID/password I'm trying is correct for the same reason (and besides, if it were an auth problem I wouldn't expect to see this error message). When I try to configure the database connection during Confluence's initial setup, I specify the IP address of the host machine, just like from pgAdmin on the other machine, but that doesn't work. I also tried some things that I basically knew wouldn't work: 0.0.0.0, 127.0.0.1 and localhost.
I'm not sure what I need to do to make this work. Is there maybe some special method to specify the IP to a container from the same host machine, some nomenclature I'm not aware of?
At this point, I'm "okay" with Docker in terms of basic operations, but I'm far from an expert, so I'm a bit lost. I'm also not a big-time *nix user generally, though I can usually fumble my way through most things... but any hints would be greatly appreciated because I'm at a loss right now otherwise.
Thanks,
Frank
EDIT 1: As requested by someone below, here's my pg_hba.conf file, minus comments:
local   all           all                  trust
host    all           all    127.0.0.1/32  trust
host    all           all    ::1/128       trust
local   replication   all                  trust
host    replication   all    127.0.0.1/32  trust
host    replication   all    ::1/128       trust
host    all           all    all           md5
Try changing the second line of the pg_hba.conf file to the following:
host all all 0.0.0.0/0 trust
(Note the /0 mask; 0.0.0.0/32 would match only the literal address 0.0.0.0.) This will cause PostgreSQL to start accepting connections from any source address. Since a Docker container is technically not operating on localhost but on its own IP, the current configuration causes PostgreSQL to reject any connections from it.
Also check whether Confluence is looking for the database on localhost. If that is the case, change it to the IP of the host machine within the Docker network.
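For reference, one way to find that address; 172.17.0.1 is the common default gateway for Docker's bridge network, but verify on your own system:

ip addr show docker0 | grep 'inet '            # the host's address on the default bridge
docker network inspect bridge | grep Gateway   # the same information from Docker's side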
Success! The solution was to create a custom network and then use the container name in the connection string from the Confluence container to the PostgreSQL container. In other words, I ran this:
docker network create -d bridge docker-net
Then, on both of the docker run commands for the PostgreSQL and Confluence containers, I added:
--network=docker-net
That way, when the Confluence configuration wizard asked for the hostname of the PostgreSQL server, I used postgres (the name I gave the container) rather than an IP address or actual hostname. Docker makes that work thanks to the custom network. This also leaves the containers available via the IP of the host machine, so for example I can still connect to PostgreSQL via 192.168.123.12:5432, and of course I can launch Confluence in the browser via 192.168.123.12:8080.
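For anyone repeating this, the full setup looked roughly like the following; the volume paths and host ports here are illustrative rather than my exact commands:

docker network create -d bridge docker-net

docker run -d --name postgres --network=docker-net \
  -e POSTGRES_PASSWORD=changeme \
  -v /opt/postgres-data:/var/lib/postgresql/data \
  -p 5432:5432 postgres

docker run -d --name confluence --network=docker-net \
  -v /opt/confluence-data:/var/atlassian/application-data/confluence \
  -p 8080:8090 atlassian/confluence-server

With both containers on docker-net, Docker's embedded DNS resolves the name postgres from inside the confluence container.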
FYI, I didn't even have to alter the pg_hba.conf file, I just used the official PostgreSQL image (latest) as it was, which is ideal.
Thanks very much to RSloeserwij for the suggestions... while none of them proved to be the solution I needed, they did put me on the right track in the Docker docs, which, after some reading, led me to understand a few things I didn't before and figure out the config magic I needed.
I am using Node.js with the express, bcrypt and body-parser packages on an Ubuntu Linux system.
Everything works fine so far.
However, I've been wondering if there is an easier way to connect to my website.
Until now, I have to type in my IP address, e.g. https://XXX.XXX.XXX:3000, to actually see the content.
I've already tried avahi-daemon but did not get it to work. Whenever I try 'hostname.local' I get the same error: Firefox is unable to connect to the server.
However, using the IP address https://XXX.XXX.XXX:3000 works.
I would like to access my Node.js server with something similar to computername.local.
FYI: I just want to use it in my local network at home.
Does anybody have any idea how to get this to work?
You have a couple of choices. The easiest, if available, is probably to set up your home router to always assign the server the same IP address (how to do that will vary based on your router). If your router has it available, you could also set a host name for it there in the DNS settings.
If your router doesn't have DNS settings available, then you can add a line to the /etc/hosts file on each of your home computers (on Windows it lives at C:\Windows\System32\drivers\etc\hosts). Let's say the IP you give the server is 172.16.1.11; your hosts entry would be
172.16.1.11 computername.local
You could also set up your own DNS server in your house, possibly even on the same machine that has your Node app, and then configure it to handle the one address before forwarding DNS requests for everything else to your ISP, but that seems like overkill if you have just one app.
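If you do go that route, dnsmasq is one common lightweight choice. A minimal sketch of its config, assuming the 172.16.1.11 address from above (note that .local names can clash with mDNS, so a different suffix may behave better):

# /etc/dnsmasq.conf
address=/computername.local/172.16.1.11   # answer this name locally
server=8.8.8.8                            # forward everything else upstream

You would then point your other machines' DNS at the box running dnsmasq.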
I run a high-volume website and since yesterday it hasn't been working.
My server (for example) 100.0.0.1 is working fine: I can access WHM, root SSH, etc. with no problem.
Yet none of the domains are working; they say the page cannot be found.
I have my name servers set up at GoDaddy using domainname.com,
pointing to 3 IPs: 100.0.0.1, 100.0.0.2, 100.0.0.3.
All my domains then have ns1.domainname.com, ns2.domainname.com and ns3.domainname.com
as their nameserver entries.
This was working fine yesterday; now... nothing.
Any ideas on what I can do to troubleshoot?
Thank you. I am losing a lot of trade as I run a high-traffic eCommerce website, so I would like to get this fixed as soon as possible.
Have you tried using the IP of the server instead of the domain name? That could rule out a DNS issue. You said you can gain root access fine.
Have you checked your firewall to ensure the correct ports are still open?
You can also run a netstat command to check what type of traffic is occurring: run watch -n 1 netstat -nat, then try to access the domain or IP and see whether the connection is established and the service is listening.
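A few concrete checks along those lines; substitute your real domain and IPs, and note that dig ships in the dnsutils/bind-utils package:

dig +short NS domainname.com           # are the NS records still published?
dig @100.0.0.1 domainname.com A        # does your own name server still answer?
curl -sI http://100.0.0.1/             # is the web server itself responding?
watch -n 1 'netstat -nat | grep :80'   # watch connections while you test

If the first two queries return nothing, the problem is on the DNS side (registrar or name server) rather than the web server.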
At my organisation we've set up a linux server which runs one of our sites. It's been working fine and I have been able to SSH through into it (using Terminal on OSX) no problem.
As of earlier today, when I tried to ssh root@123.123.123.123 (not my real IP) I was rejected with: ssh_exchange_identification: Connection closed by remote host
Having a look at the /etc/hosts.deny file I can see: sshd: 123.123.123.123 in the list.
This means the IP which I have been using for months with no problem has suddenly appeared in the list. I removed it and was able to SSH in fine, ONCE; then on my second try I was rejected, and looking at the list again I can see we have been added to it once more!
I have added our IP to the hosts.allow file, but no luck - still no access.
Why do IPs appear in the hosts.deny file?
How can I stop our IP appearing there?
As mentioned, it's probably fail2ban or something similar (look for DenyHosts too; it's another popular one).
The usual fix is to append your IP address to /etc/hosts.allow
This works for denyhosts at least
You may have a system like fail2ban installed which adds you to the hosts.deny file if you enter your password incorrectly a few times.
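A rough way to check which tool is responsible and to whitelist yourself; service names and log locations vary by distribution, so treat this as a sketch:

sudo systemctl status denyhosts fail2ban        # which one is actually running?
sudo grep 123.123.123.123 /var/log/auth.log     # find the failed logins that triggered the ban
echo 'sshd: 123.123.123.123' | sudo tee -a /etc/hosts.allow   # DenyHosts honours hosts.allow

Note that DenyHosts also keeps its own records, so simply deleting the line from hosts.deny tends to get undone, which matches the behaviour you're seeing.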
I have installed the latest hg package available for Fedora Linux. However, hg clone reports an error.
hg clone http://localmachine001:8000/ repository
reports:
"abort: error: Name or service not known"
localmachine001 is a computer within the local network. I can ping it from my Linux box without any problems. I can also use the same http address and browse the existing code. However, hg clone does not work.
If I execute the same command from my Macintosh machine, I can easily clone the repository.
Some Internet resources recommend editing the .hgrc file and adding a proxy to it:
[http_proxy]
host=proxy:8080
I have tried that without any success. Also, I assume that a proxy is not needed in this case, since the hg server machine is on my local network.
Can anyone recommend what I should do, or how I could track down the problem?
Thank you in advance.
Running hg as root solved the problem for me, but I still don't know why.
The problem is (likely) that your HTTP proxy is not able to:
Resolve localmachine001
Reach your (or localmachine001's) local IP, even if it could resolve the name to the correct local address.
First, make sure nothing on your side (iptables / NAT / firewall) is preventing egress or ingress on the proxy port. If it works when you're root, that's the problem; work backwards from there.
It's also conceivable that your proxy is mangling the responses from the remote hg server enough to confuse Mercurial but not your browser. In either case, it's best to just go around the proxy if the hg server is on the local (localhost/LAN) network.
Fortunately, the [http_proxy] section supports bypassing the proxy for certain host names, which is ideal for dealing with hosts on the same side of a NAT, or hosts that only exist on one machine (e.g. resolved via /etc/hosts). This saves the pain of having to edit .hgrc every time you need to change the behavior.
See the documentation, or simply make your .hgrc look something like this:
[http_proxy]
host=proxy:8080
no=localmachine,192.168.1.123,192.168.1.234,...,...
The operative directive, of course, being no. I'm not sure if you can use wildcards when specifying the hosts (I don't use the proxy feature, so I have no way of testing that, and it's not specified in the documentation). You might try experimenting with that, e.g. 192.168.1.*, and let us know if that works as well.
Anyway, for the terminally lazy (or people in a rather big hurry), the relevant section of the documentation linked above:
http_proxy
  Used to access web-based Mercurial repositories through an HTTP proxy.
  host
    Host name and (optional) port of the proxy server, for example "myproxy:8000".
  no
    Optional. Comma-separated list of host names that should bypass the proxy.
  passwd
    Optional. Password to authenticate with at the proxy server.
  user
    Optional. User name to authenticate with at the proxy server.
This happened to me when my DNS name server settings weren't configured. Try to ping the remote repository host and try to ping some well-known host e.g. google.com. If it doesn't work, fix your DNS settings.
The error is because of DNS; add an entry to /etc/resolv.conf (note the spelling: resolv.conf, not resolve.conf):
$ cat /etc/resolv.conf
search xyz.com abc.com       # search domains
nameserver 10.192.160.12     # primary DNS server
nameserver 10.193.180.23     # secondary DNS server
This lets the machine resolve local hosts by name, and hence the clone will succeed.
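To verify the fix, a quick sketch (getent follows the normal resolver order, while dig queries the nameservers directly):

getent hosts localmachine001          # resolves via /etc/nsswitch.conf order (hosts file, then DNS)
dig +search +short localmachine001    # queries the servers in /etc/resolv.conf, applying the search list

If both return the right address, hg clone should work without any proxy settings.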
Clone and the web interface use exactly the same mechanism, so it's very odd that you can see the repo at http://localmachine001:8000/ but can't clone from it.
What about trying to reach that machine by IP address? Something like hg clone http://192.168.0.4:8000 and see what happens?
Any more detail with --debug?
I faced the same issue. To resolve it, I just disconnected my laptop from the internet and reconnected again.
Thank you, and happy coding ;)