I have the following problem:
My webpage example.eu works perfectly over an SSL connection certified by certbot, with Apache on an Ubuntu VPS. If I now enter www.example.eu, Firefox says "Your connection is not secure" and "The certificate is only valid for example.eu". The domain is hosted with a different service provider than the VPS.
My first attempt was to point the A record of the subdomain www.example.eu to the same IP (the VPS) as example.eu. Same problem.
My next attempt was to create a CNAME record for www.example.eu that points to example.eu. Same problem.
Now I am out of ideas. What can I do?
Thanks in advance and best regards from Germany, Joachim
OK, I got it now. The problem was with apache/certbot on the VPS:
I checked the virtual host:
sudo nano /etc/apache2/sites-available/example.eu.conf
it showed
ServerName example.eu
ServerAlias www.example.eu
So no issue here. But certbot seemed to be configured only for example.eu, which I checked by listing the issued certificates:
ls /etc/letsencrypt/live
So I had to
sudo certbot --apache -d www.example.eu
and point the A record of the www subdomain to the IP of the VPS, and: it works :)
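As an aside, a single certificate can also cover both names if you pass them together; a sketch using certbot's --expand flag (domains as in the question):
sudo certbot --apache --expand -d example.eu -d www.example.eu
This reissues the existing example.eu certificate so it is valid for both hostnames, instead of keeping two separate certificates under /etc/letsencrypt/live.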
I am trying to install a certificate using certbot from Let's Encrypt on a Raspberry Pi. I have installed Apache2 and created a webserver at http://subdomain.mydomain.com on the Raspberry Pi. The certbot command validates the domain by writing a challenge file under http://subdomain.mydomain.com/.well-known/acme-challenge/<etc.>
Background info: I am doing this because I need a local server to address IoT devices, and my Ajax calls are failing because I am not allowed to mix http with https. The IoT devices are incapable of hosting a webserver with SSL; they use a simple http://192.168.1.xx/<string> format.
I don't want to create a DNS entry at my registrar/ISP because I am trying to create a scalable solution and creating hundreds (perhaps thousands if we do well) of subdomain entries there is impractical. Creating my own DNS server is a possibility, but I would rather just do it all on the Pi - my bash installation script will take care of everything (once I get it to work!).
I tried first to create an entry into the local hosts (/etc/hosts) file which looks like this:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.1.1 SubDomain
192.168.1.111 subdomain.mydomain.com
This works for commands like ping, but not for nslookup or dig, and definitely not for certbot. The certbot command finds my main server instead; DNS is configured with a wildcard record sending all unknown subdomains to my public IP:
A * xx.xx.xx.xx //My public IP address
So then I installed dnsmasq (see: "When using proxy_pass, can /etc/hosts be used to resolve domain names instead of 'resolver'?") and followed the configuration options shown here: "How to Setup a Raspberry Pi DNS Server".
However, that doesn't work either: certbot still looks at my main (external) DNS and finds my public (wildcard) IP. Here's a summary of the changes made in /etc/dnsmasq.conf:
domain-needed ## enabled
bogus-priv ## enabled
no-resolv ## enabled
server=8.8.8.8 ## added (#server=/localnet/192.168.0.1 left as is)
server=8.8.4.4 ## added
cache-size=1500 ##increased from 150
How can I force certbot to find and use my local/private IP 192.168.1.111? Any alternative solutions using scripts/redirection?
Create a wildcard certificate using Let's Encrypt DNS validation. You will then have to renew the certificate manually. Otherwise, your server must be on the public Internet with correct DNS settings.
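A sketch of the manual DNS-01 flow (domain name illustrative); certbot prompts you to publish a TXT record, and each renewal repeats the prompt unless you automate it with a DNS plugin:
sudo certbot certonly --manual --preferred-challenges dns \
  -d "mydomain.com" -d "*.mydomain.com"
Because validation happens through DNS rather than HTTP, the machine requesting the certificate never needs to be reachable from the public Internet.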
I finally solved my problem, but I abandoned Let's Encrypt entirely. The answer was not in DNS but in approaching it from a completely different angle. This was pretty much 95% of the solution.
Important! This only works if you have control over the browser. We do, since it is for our kiosk application which runs in a browser.
Step 1: Become your own CA
Step 2: Sign your SSL certificate as a CA
Step 3: Import the signed CA (.pem file) into the browser (under Authorities)
Step 4: Point your Apache conf file to the local SSL (the process generates .key and .crt files for this as well).
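A minimal sketch of steps 1, 2 and 4 with openssl (file names and paths are illustrative; modern browsers also expect a subjectAltName extension, which you can supply via -extfile):
# Step 1: create a CA key and a self-signed root certificate
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -nodes -key myCA.key -sha256 -days 825 -out myCA.pem
# Step 2: create a server key and CSR, then sign the CSR with your CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr
openssl x509 -req -in server.csr -CA myCA.pem -CAkey myCA.key \
  -CAcreateserial -out server.crt -days 825 -sha256
# Step 4: point Apache at the results, e.g.
#   SSLCertificateFile    /path/to/server.crt
#   SSLCertificateKeyFile /path/to/server.key
The myCA.pem root is what gets imported into the browser in step 3.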
I am new to SSL encryption and need help! (Using certbot.)
I recently activated SSL on a website that runs on Apache and Linux on port 80. So the current website looks like:
http://example.com --> https://example.com (done)
However, I have a backend running on port 4000 and want to encrypt that as well to avoid the "Mixed Content" page error:
http://example.com:4000 --> https://example.com:4000 (Not done yet)
This is exactly what I need and no workaround would help. Please guide.
Thanks in advance! :-)
You can create a new subdomain subdomain.example.com that points at the same server and is proxied to port 4000, then request a new SSL certificate from Let's Encrypt; you can specify multiple (sub)domains when requesting a certificate with certbot:
certbot certonly --webroot -w /var/www/example/ -d www.example.com -d example.com -w /var/www/other -d other.example.net -d another.other.example.net
When you have the certificate and key, add them to your webserver config.
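For example, a sketch of an Apache vhost that terminates TLS for the subdomain and reverse-proxies to the backend on port 4000 (the subdomain and certificate paths are assumptions; mod_ssl and mod_proxy_http must be enabled):
<VirtualHost *:443>
    ServerName subdomain.example.com
    SSLEngine on
    # paths assume certbot's default live directory for this certificate
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
    # forward HTTPS traffic to the backend on port 4000
    ProxyPreserveHost On
    ProxyPass / http://localhost:4000/
    ProxyPassReverse / http://localhost:4000/
</VirtualHost>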
Check out the official certbot documentation here
I have two Tomcat servers running on my server and I wanted to do virtual host routing. Initially I tried it with one Tomcat, which runs on port 8081 with the AJP port set to 8011 in Tomcat's server.xml file.
My conf file in /etc/apache2/sites-available/mydomain_name.com.conf looks like this:
<VirtualHost *:80>
ProxyRequests off
ProxyPreserveHost On
ServerName mydomain_name.com
ServerAdmin ubuntu@mydomain_name.com
ProxyPass / ajp://localhost:8011/
ProxyPassReverse / ajp://localhost:8011/
</VirtualHost>
Then I did
sudo a2ensite mydomain_name.com.conf
sudo service apache2 reload
Everything went fine, no issues. I also ensured that port 8011 is listening.
But when I try to access the server from my personal laptop, the request is blocked by Google Chrome.
I have enabled these configurations on the server too:
sudo a2enmod proxy
sudo a2enmod proxy_ajp
sudo a2enmod proxy_http
sudo service apache2 restart
Has anyone come across this issue? Shedding some light would be really helpful, because I did something similar a year back and this issue did not occur, and I'm only trying to reach the Tomcat home page, which is a bare minimal page.
After several frustrating hours I found the issue. Hope this might help if anyone comes across this same issue.
Although port 80 was opened via the AWS Management Console security groups, internally the ports were firewalled by iptables. By removing the iptables entry blocking port 80 I was able to make the virtual host work.
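For anyone hitting the same wall, a sketch of inspecting and removing the offending rule (the rule number is whatever the listing shows on your machine):
# list INPUT rules with their numbers
sudo iptables -L INPUT --line-numbers -n
# delete the rule blocking port 80 by its number
sudo iptables -D INPUT <rule-number>
Note that iptables changes made this way do not survive a reboot unless you persist them.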
My cPanel server is resolving a URL wrong. The website example.com is hosted on my cPanel server at ip 1.0.0.1. In a script I am attempting a cURL command to cp.example.com which is hosted on another server at 2.0.0.2. My server is resolving cp.example.com to the IP of 1.0.0.1. Any help will be greatly appreciated!
It seems like your DNS settings for cp.example.com are not visible on the host where you are running your script. You should check the DNS settings for cp.example.com. You may also want to contact cPanel support.
When you make a cURL request from a script hosted on your cPanel server, the IP for the domain is first resolved locally; if it's not found in your server's DNS zones, it is resolved using the configuration in /etc/resolv.conf.
You can test which IP your server resolves this to by logging in via SSH and pinging it.
Executed from your cPanel server:
ping cp.example.com
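You can also query the resolver directly; dig prints just the resolved address:
dig +short cp.example.com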
I can think of two workarounds for this issue:
If example.com's DNS zone is hosted in your cPanel account
Go to cPanel -> Zone Editor
Open the DNS zone for example.com
Find the A record for cp.example.com
Change it to 2.0.0.2
If you have root, edit your WHM / cPanel server's /etc/hosts file:
root@server #: vim /etc/hosts
# add this line:
2.0.0.2 cp.example.com
I'm having a hard time trying to set up an SSL certificate (it's a Comodo PositiveSSL purchased from NameCheap) on my EC2 micro instance (I'm using Amazon Linux AMI 2012.3, which is based on CentOS if I'm not mistaken).
Here's what I did:
I installed mod_ssl & OpenSSL
I enabled port 443 on my EC2's instance security group
I CHMODed the *.key & *.crt files to 777 as Comodo suggested
I'm certain the IP address & file paths are correct (I put a bunch of 0s in the example, but they are correct in my ssl.conf)
I added this VirtualHost entry to ssl.conf
<VirtualHost 00.000.000.00:443>
############# I tried both with & without this section ##############
ServerName www.mydomain.com:443
ServerAlias www.mydomain.com
DocumentRoot /var/www
ServerAdmin webmaster@mydomain.com
######################################################################
SSLEngine on
SSLCertificateKeyFile /etc/ssl/mydomain_com.key
SSLCertificateFile /etc/ssl/mydomain_com.crt
SSLCertificateChainFile /etc/ssl/mydomain_com.ca-bundle
</VirtualHost>
Then I restarted Apache... but I still cannot access https://www.mydomain.com/ !!!
I checked with ssltool.com, and it says:
The Common Name on the certificate is: ip-00-00-00-000
The certificate chain consists of:
SomeOrganization, ip-00-00-00-000. Expires on: Apr 10 13:39:41 2013 GMT - that's 363 days from today.
The site tested mydomain.com is NOT the same as the Subject CN ip-00-00-00-000!.
I even went and copied the VirtualHost to httpd.conf instead of ssl.conf and restarted Apache, all in vain.
I've been banging my head against the wall for days now. I'm pretty sure I'm missing a tiny something to make this work, I just don't know what exactly.
I'd be infinitely grateful if someone can suggest something to make this work!
Sometimes this section
<VirtualHost _default_:443>
prevents your real SSL certificate from being used. If this is the case, either comment out the _default_ VirtualHost or move the SSLCertificate* directives into it, i.e.:
<VirtualHost _default_:443>
SSLCertificateKeyFile /etc/ssl/mydomain_com.key
SSLCertificateFile /etc/ssl/mydomain_com.crt
SSLCertificateChainFile /etc/ssl/mydomain_com.ca-bundle
</VirtualHost>
Make sure you restart apache after that.
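For example (service name assumed for Amazon Linux, where Apache runs as httpd):
# validate the configuration first, then restart Apache
sudo apachectl configtest
sudo service httpd restart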
Amazon now provides a certificate manager! (For free.)
If you use Elastic Beanstalk, this is the new way to do it.
It's free, you avoid configuration errors, and it's a better choice from a performance point of view:
Because ELB supports SSL offload, deploying a certificate to a load
balancer (rather than to the EC2 instances behind it) will reduce the
amount of encryption and decryption work that the instances need to
handle.
from the doc:
The new AWS Certificate Manager (ACM) is designed to simplify and
automate many of the tasks traditionally associated with management of
SSL/TLS certificates. ACM takes care of the complexity surrounding the
provisioning, deployment, and renewal of digital certificates!
Certificates provided by ACM are verified by Amazon’s certificate
authority (CA), Amazon Trust Services (ATS).
Even better, you can do all of this at no extra cost. SSL/TLS
certificates provisioned through AWS Certificate Manager are free!
ACM will allow you to start using SSL in a matter of minutes. After
you request a certificate, you can deploy it to your Elastic Load
Balancers and your Amazon CloudFront distributions with a couple of
clicks. After that, ACM can take care of the periodic renewals without
any action on your part.
the doc:
https://aws.amazon.com/fr/blogs/aws/new-aws-certificate-manager-deploy-ssltls-based-apps-on-aws/
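If you prefer the CLI, a sketch of requesting a certificate (domain name illustrative; the command returns the certificate ARN that you then attach to your load balancer or CloudFront distribution):
aws acm request-certificate \
    --domain-name www.mydomain.com \
    --validation-method DNS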
Looking at your list, it appears that you forgot to enable SSL and your virtual host configuration:
sudo a2enmod ssl
sudo a2ensite default-ssl
sudo /etc/init.d/apache2 restart
There is a complete guide on how to install an SSL certificate on your EC2 instance here: https://medium.com/@adnanxteam/how-to-add-ssl-certificate-to-laravel-on-ec2-aws-18104cc036d1