Bitnami Drupal HTTPS and Let's Encrypt

I've tried multiple ways to deliver HTTPS certificates to my website and haven't been able to come up with a solution. I'm attempting to use Let's Encrypt to get the certs working, and I think I've isolated the issue, but I need some help with an Apache redirect.
Here is what I believe is the issue. This is the code I used to set up access on my Debian server:
apt-get update
apt-get install sudo
apt-get install curl
apt-get install wget
cd /tmp
curl -Ls https://api.github.com/repos/xenolf/lego/releases/latest | grep browser_download_url | grep linux_amd64 | cut -d '"' -f 4 | wget -i -
tar xf lego_v4.9.1_linux_amd64.tar.gz
sudo mkdir -p /opt/bitnami/letsencrypt
sudo mv lego /opt/bitnami/letsencrypt/lego
sudo /opt/bitnami/letsencrypt/lego --tls --email="[EMAILACCOUNT]" --domains="[DOMAINNAME]" --domains="www.[DOMAINNAME]" --path="/opt/bitnami/letsencrypt" --dns="azure" run
This resolves the IP address, so the DNS side is working.
Then I encounter the following error:
urn:ietf:params:acme:error:connection :: [IPADDRESS]: Error getting validation data
Upon investigation I have found that my Drupal site can't serve the validation URL for the website.
On a side note, I found this site extremely useful for troubleshooting my website:
https://check-your-website.server-daten.de/?q=[DOMAIN]
Here is the ACME validation information:
Using that address, it assumes the path required for ACME validation is served by the standard Drupal hosting, which is clearly not the case.
So I've been attempting edits in the following file:
/opt/bitnami/apache/conf/vhosts/drupal-vhost.conf
RewriteCond %{REQUEST_URI} "!/.well-known/acme-challenge/"
RewriteRule ^/*(.*)$ https://%{HTTP_HOST}/$1 [NE,L,R=301]
But I haven't had any luck. Am I on the right track? Does anyone know how I can stop Drupal from taking control of this path so I can finish the certificate validation for Let's Encrypt?
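For anyone comparing notes, here is a minimal sketch of the shape that exemption could take in drupal-vhost.conf. The challenge directory /opt/bitnami/apache/htdocs/.well-known/acme-challenge/ is a hypothetical choice and would need to match wherever the validation files actually get written:
# Serve ACME challenge files statically, before Drupal's front controller sees the request
Alias "/.well-known/acme-challenge/" "/opt/bitnami/apache/htdocs/.well-known/acme-challenge/"
<Directory "/opt/bitnami/apache/htdocs/.well-known/acme-challenge/">
    Require all granted
</Directory>
# Redirect everything else to HTTPS
RewriteEngine On
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge/
RewriteRule ^/*(.*)$ https://%{HTTP_HOST}/$1 [NE,L,R=301]
If that direction works, lego's HTTP-01 mode (--http with --http.webroot pointed at that htdocs directory) may be a simpler fit than --tls, though that is an assumption about the setup rather than a confirmed fix.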

Related

Ubuntu 18, proxy not working on terminal but works on browser

(related, and perhaps a simpler problem to solve: proxy authentication by MSCHAPv2)
Summary: I am using Ubuntu 18; the proxy works with the web browser but not with terminal applications (wget, curl or apt update). Any clues? The problem seems to be interpreting the proxy's "PAC file"... Is it? How do I translate it into Linux's proxy variables? ... Or is the problem simpler: was my proxy config (see the step-by-step procedure below) wrong?
Details:
Running env | grep -i proxy in a terminal, we obtain:
https_proxy=http://user:pass#pac._ProxyDomain_/proxy.pac:8080
http_proxy=http://user:pass#pac._ProxyDomain_/proxy.pac:8080
no_proxy=localhost,127.0.0.0/8,::1
NO_PROXY=localhost,127.0.0.0/8,::1
ftp_proxy=http://user:pass#pac._ProxyDomain_/proxy.pac:8080
and the browser (Firefox) works fine for any URL, but:
wget http://google.com says Resolving pac._ProxyDomain_ (pac._ProxyDomain_)... etc.etc.0.26 Connecting to pac._ProxyDomain_ (pac._ProxyDomain_)|etc.etc.0.26|:80... connected.
Proxy request sent, awaiting response... 403 Forbidden
2019-07-25 12:52:19 ERROR 403: Forbidden.
curl http://google.com says "curl: (5) Could not resolve proxy: pac._ProxyDomain_/proxy.pac"
Notes
(recent news: purging the exported proxy variables changed something, and I have not re-tested everything again...)
The proxy configuration procedures that I used (is there a plug-and-play PAC file generator? Do I need a PAC file?)
Config procedures used
The whole machine was running with a direct, non-proxied internet connection... Then the machine moved to the LAN with the proxy.
Added "export *_proxy" lines (http, https and ftp) to my ~/.profile. The URL definitions are in the form http_proxy="http://user:pwd#etc" (supposing that is correct, because it was tested before with the user:pwd#http://pac.domain/proxy.pac syntax and Firefox prompted a proxy login). (If the current proxy password uses the # character, does it need to change?)
Added "export *_proxy" lines to ~root/.profile. (Is that needed?)
(Can reboot and test with echo $http_proxy.)
visudo procedure described here
Rebooted and browsed with Firefox directly, without needing to log in (good, it's working!). Testing env | grep -i proxy shows all the correct values as expected.
Testing wget and curl as at the beginning of this report: proxy bug.
Testing sudo apt update: bug.
... after that, one more step: supposing no such file exists for apt, I created one with sudo nano /etc/apt/apt.conf.d/80proxy and added three Acquire::*::proxy "value"; lines, with the value http://user:pass#pac._ProxyDomain_/proxy.pac:8080, where pass is etc%23etc, URL-encoded.
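For reference, a minimal sketch of what that 80proxy file could look like with a direct host:port endpoint; apt does not evaluate PAC scripts, so proxy.example.com:8080 below is a hypothetical stand-in for whatever endpoint the PAC file resolves to:
// /etc/apt/apt.conf.d/80proxy - direct proxy endpoint, not a proxy.pac URL
Acquire::http::Proxy "http://user:pass@proxy.example.com:8080/";
Acquire::https::Proxy "http://user:pass@proxy.example.com:8080/";
Acquire::ftp::Proxy "http://user:pass@proxy.example.com:8080/";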
Summary of tests performed
CONTEXT-1.1
(this was a problem, but I'm now ignoring it to focus on the more relevant one)
After the (proxied) cable connection and proxy configuration in the system (see the section "Config procedures used" above). Proxy password with a special character.
curl http://google.com says "curl: (5) Could not resolve proxy..."
When I changed all the .profile entries from %23 to #, the error from wget changed, but curl's did not. wget's changed to "Error parsing proxy URL http://user:pass#pac._ProxyDomain_/proxy.pac:8080: Bad port number"
PS: when the password contained $, the system confused it with a variable (something in the export http_proxy command, or a later use of http_proxy).
CONTEXT-1.2
Same as context-1.1 above, but with a password containing no special characters. A good, clean proxy password.
curl http://google.com says "curl: (5) Could not resolve proxy..."
CONTEXT-2
After the (proxied) cable connection, with no proxy configuration in the system (but confirmed that the connection works in the browser after the automatic popup login form):
curl -x 192.168.0.1:8080 http://google.com says "curl: (7) Failed to connect..."
curl --verbose -x "http://user:pass#pac._proxyDomain_/proxy.pac" http://google.com says "curl: (5) Could not resolve proxy..."
Other configs in use
As @Roadowl suggested, I checked:
the files ~/.netrc and ~root/.netrc do not exist
the file /etc/wgetrc exists, but everything is commented out except passive_ftp = on
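One note for whoever digs into this later: the *_proxy environment variables expect a direct proxy endpoint, not a PAC URL, because nothing on the shell side evaluates the PAC script. A sketch of the form they would take, with proxy.example.com:8080 as a hypothetical stand-in for the endpoint the PAC file points at:
# ~/.profile - direct endpoint, not the proxy.pac URL; percent-encode special characters in pass
export http_proxy="http://user:pass@proxy.example.com:8080"
export https_proxy="$http_proxy"
export ftp_proxy="$http_proxy"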

SELinux blocking php-fpm and nginx working together?

I'm having some issues with SELinux.
When trying to visit my website I get 403 Forbidden from nginx, and the server pops up with an error telling me to use grep NGINX /var/log/audit/audit.log | audit2allow -M mypol, which I did. However,
when trying to load the page it now says Access Denied and asks me to use the command grep PHP-FPM /var/log/audit/audit.log | audit2allow -M mypol, and when I do this it reverts back to 403 Forbidden and asks me to use the first command again.
It's as if the grep NGINX module overwrites the php-fpm one and vice versa. How would I solve this without disabling SELinux?
I have access to the GNOME desktop on my server, and the SELinux security alert tells me to use these commands to solve the issue. The first command does solve it but then throws up another issue, and when I use the second command it overwrites the first, putting me back at square one. I know that if I disable SELinux it will work, but that's unsafe and puts the server at risk.
Thanks.
Figured it out. For anyone else with the same issue (403 Forbidden plus an SELinux security error), use this command on your server's web root:
restorecon -r /srv/www/domain.com
Fixed it for me and now everything is running as it should.
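A note on why the loop happened: each audit2allow -M mypol run regenerates and replaces the policy module named mypol, so the second command discards the rules collected by the first. A sketch that avoids the collision by building one module for both services (the module name webstack is arbitrary), plus a way to inspect the labels restorecon resets:
# Collect denials from both services into a single policy module, then install it
grep -E 'nginx|php-fpm' /var/log/audit/audit.log | audit2allow -M webstack
semodule -i webstack.pp
# Show the SELinux file contexts that restorecon repairs
ls -Z /srv/www/domain.com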

Installation of a CakePHP website on a Linux server

I've developed a website in CakePHP and it is running successfully on localhost on my Windows operating system. Now I need to make it run on a Linux server with a static IP. I also need to know what software has to be installed, the implementation procedure for uploading it, and where to upload it. Any help would be greatly appreciated.
You have to research a bit more on the net; there are plenty of answers guiding you through it. Stack Overflow is more for specific coding questions. I personally prefer using Amazon EC2 for uploading my CakePHP applications.
There are lots of tutorials all over the net on how to set up a free-tier Linux server instance on EC2. Here's a great one:
http://www.comtechies.com/2013/01/how-to-host-dynamic-php-website-on.html
Once you have your instance set up, this is what you have to do:
In Apache your public folder will be /var/www/, so anything you put in there will be directly accessible to people by URL. Use PuTTY to connect to your server.
sudo service apache2 stop
This will stop your Apache server for security reasons while you upload, etc.
Copy your project to /var/www/cakephp such that your webroot lies in /var/www/cakephp/app/webroot.
Type the following to edit the file that defines the location of CakePHP:
nano /var/www/cakephp/app/webroot/index.php
Go to the line starting with define('CAKE_CORE_INCLUDE_PATH' and make it define('CAKE_CORE_INCLUDE_PATH', DS . 'var' . DS . 'www' . DS . 'cakephp' . DS . 'lib'); - assuming cakephp/lib is to be found in /var/www/cakephp/lib. Note the quotes around 'cakephp' and 'lib'; without them PHP would treat them as undefined constants.
Next, set the new document root:
sudo nano /etc/apache2/sites-available/default
and wherever you see /var/www change it to /var/www/cakephp/app/webroot.
Also, in the same file, change AllowOverride None to AllowOverride All for the first two occurrences from the top of the document.
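After those edits, the relevant part of the default site file would look roughly like this (a sketch of an Apache 2.2-era config, matching the sites-available/default layout this answer assumes):
<VirtualHost *:80>
    DocumentRoot /var/www/cakephp/app/webroot
    <Directory /var/www/cakephp/app/webroot>
        Options Indexes FollowSymLinks
        # AllowOverride All lets CakePHP's .htaccess rewrite rules take effect
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>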
To allow Apache to access your files and write to the cache, execute the following commands:
sudo chown www-data:www-data /var/www/cakephp -R
sudo chmod 777 /var/www/cakephp/tmp -R
To allow CSS to be applied properly:
sudo a2enmod rewrite
Restart apache:
sudo service apache2 start
Now everything should be working according to plan. If you have any further questions do hit me back!

How to access local apt repository that requires HTTP authentication?

I have a couple of local apt repositories which are setup using reprepro.
I want to make them available on the local network, and I've got nearly everything set up.
However, the repos are sitting behind HTTPS (this is out of my control), so when I try to pull from them from another server the request just hangs - I think because it is waiting for the username/password to be supplied.
I'm not sure how to supply these. Do they go in the sources.list file on the pulling server? What would the format be?
Cheers
Instead of hardcoding the password in sources.list, it's better to create an auth.conf file and supply your credentials there, as documented on the Debian page:
/etc/apt/auth.conf:
machine example.org
login apt
password debian
for an entry like the one below in sources.list
deb https://example.org/debian buster main
For more info, refer to:
Debian Page Reference
In order to supply a password to a Debian-style repository over https, add this line to /etc/apt/sources.list:
deb https://user:password@repo.server.com/debian ./
If you want per-user authentication, you can pass a custom auth.conf per apt call:
trap "rm ./auth.conf.tmp" EXIT
cat << EOF > ./auth.conf.tmp
machine example.org
login apt
password debian
EOF
# overrule /etc/apt/auth.conf with Dir::Etc::netrc config
# see all options with
# zcat /usr/share/doc/apt/examples/configure-index.gz
sudo apt -o Dir::Etc::netrc=./auth.conf.tmp update
Sources:
https://manpages.debian.org/bullseye/apt/apt_auth.conf.5.en.html#FILES
https://manpages.ubuntu.com/manpages/xenial/man8/apt-get.8.html

Web browser not displaying the index.html in the htdocs directory - Apache

The installation path of my Apache web server is /usr/local/apache2.
I start the server using the apachectl start command, and when I type localhost in my web browser it displays an "Apache 2 Test Page powered by CentOS" and not the index.html in /usr/local/apache2/htdocs. Does anyone know the reason for this?
Also, there are two conf directories on my system: one in /etc/httpd/conf and the other in /usr/local/apache2/conf (where I installed Apache). Any reason for this? Please help.
Firstly, I'm guessing you have built Apache from source - was there a specific reason for doing this? I usually find systems are a lot more manageable if you use the standard distribution packages, or packages from other repos if you need later versions.
If you don't have a specific need for a locally-built Apache, I'd recommend removing it and then installing Apache from the normal CentOS repositories.
Next (or first, if you are staying with the locally-built Apache), run: httpd -V
For example, one of my systems returns:
[me@here ~]# httpd -V
Server version: Apache/2.2.3
Server built: Jun 6 2012 10:00:36
Server's Module Magic Number: 20051115:3
Server loaded: APR 1.2.7, APR-Util 1.2.7
Compiled using: APR 1.2.7, APR-Util 1.2.7
Architecture: 32-bit
Server MPM: Prefork
threaded: no
forked: yes (variable process count)
Server compiled with....
-D APACHE_MPM_DIR="server/mpm/prefork"
-D APR_HAS_SENDFILE
-D APR_HAS_MMAP
-D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
-D APR_USE_SYSVSEM_SERIALIZE
-D APR_USE_PTHREAD_SERIALIZE
-D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
-D APR_HAS_OTHER_CHILD
-D AP_HAVE_RELIABLE_PIPED_LOGS
-D DYNAMIC_MODULE_LIMIT=128
-D HTTPD_ROOT="/etc/httpd"
-D SUEXEC_BIN="/usr/sbin/suexec"
-D DEFAULT_PIDLOG="run/httpd.pid"
-D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
-D DEFAULT_LOCKFILE="logs/accept.lock"
-D DEFAULT_ERRORLOG="logs/error_log"
-D AP_TYPES_CONFIG_FILE="conf/mime.types"
-D SERVER_CONFIG_FILE="conf/httpd.conf"
The output will tell you where its true config file is, in this case /etc/httpd/conf/httpd.conf - that way you'll know which config is the one actually being used.
Once you know which config files are being used, you can check them to see where the document root is - it might be in /var/www/html/ instead of /usr/local/apache2/htdocs or just about anywhere.
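For example, assuming the config file reported above, something like this would show the active document root:
grep -i '^[[:space:]]*DocumentRoot' /etc/httpd/conf/httpd.conf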
When you know where the document root is, then check and make sure the files and directories are readable by apache (or whatever user apache is running as - the first column from ps aux | grep httpd will tell you that)
Next check the logfiles, typically /var/log/httpd/error_log and also the system logs in /var/log/messages and /var/log/secure
Lastly, if you are running a recent CentOS which has SELinux enabled, and you have built apache yourself you'll almost certainly be in a world of pain. You can try getenforce to see if SELinux is active, and setenforce 0 to disable it (for testing).
