For various reasons (for example, phpMyAdmin stops working), it is not practical (for those higher up) to disable eval by adding:
[suhosin]
suhosin.executor.disable_eval = On
in php.ini.
Can eval be disabled with .htaccess, and if it can, how? What are the prerequisites, and what would I need to add to .htaccess in order to make it work?
The hosting is dedicated, runs on PHP 5.4.16, with phpMyAdmin 4.0.10.1. Updates of everything involved (PHP, phpMyAdmin, MariaDB, the webapps running, PHP extensions, etc.) cannot be done at the moment (or in any foreseeable future), so what I'm looking for is a patch-up.
I just started using Laravel's Lumen and managed to make it work both locally and on a server. When I was about to start exploring it, my index.php consisted of just:
$app = require __DIR__."/../lumenTest/bootstrap/app.php";
$app->run($app->make('request'));
echo $myundefinedvariable;
Which displays an ErrorException: Undefined variable: myundefinedvariable, but inside the "...at Application->Laravel\Lumen\Concerns{closure}" window I can see a giant wall of text with stuff like:
... 'APP_KEY' => 'fake0BqKgHeC72EmT7039B6pDCsJ90key' , ..., 'DB_PASSWORD' => 'secret', ...
And my first thought was that maybe it happens because I'm running it locally with XAMPP or something, so I went and tried it on the server, and the same thing happened.
Is it normal that sensitive data from my .env file gets shown to everyone whenever any PHP error occurs?
Is there a way to avoid this happening? (Other than not having any PHP errors, because I tend to have them a lot.)
Additional info:
PHP version 7.1.12
Lumen (5.6.1) (Laravel Components 5.6.*)
The directory "lumenTest" is one level above my www or public and there is where the .env is located, the site is on a Linux server shared host
No, that's not normal. Professional developers consider this amateurish behavior. It's exactly why some companies don't even consider using Laravel.
Many people (including me) have already notified them that this is really not acceptable, but the developers don't really seem to care. In fact, it's the only framework in the world that thinks it's OK to print critical information on a debug page. Surely a visitor should never see stack traces, SQL queries, or pieces of code... but environment variables are confidential and should never end up in an HTTP response.
The best advice I have is to use a professional MVC framework like ASP.NET, CodeIgniter, or Yii, since there's no telling what else the Laravel devs think is OK to do...
If on the other hand you do decide to use Laravel anyway, there's a package that counters this: https://github.com/GlaivePro/Hidevara
It's really easy to set up; just make sure you don't forget the app->extend instruction.
On a production server you must not run "composer install" but instead "composer install --no-dev". This way filp/whoops will (should, hopefully) not be installed and cannot be triggered.
For professional development, I certainly recommend not using Laravel, since the bar of what they consider acceptable seems to be very low.
As a sidenote: the developers claim that nothing can go wrong when APP_DEBUG=false, but incidents in the past have shown that the whoops handler can be triggered when debug mode is disabled. https://www.google.com/amp/s/blog.hacken.io/dangers-of-laravel-debug-mode-enabled%3fhs_amp=true
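If you do stay on Lumen, another belt-and-braces option (independent of any package) is to force a generic response from the framework's own exception handler, so environment values never reach the client even if a detailed error page is about to be rendered. This is only a minimal sketch, assuming the stock app/Exceptions/Handler.php that the Lumen 5.x skeleton generates:

<?php

namespace App\Exceptions;

use Exception;
use Laravel\Lumen\Exceptions\Handler as ExceptionHandler;

class Handler extends ExceptionHandler
{
    public function render($request, Exception $e)
    {
        // When debug mode is off (or unset), never let the framework render
        // a detailed error page; return a bare 500 instead.
        if (env('APP_DEBUG', false) !== true) {
            return response('Something went wrong.', 500);
        }

        return parent::render($request, $e);
    }
}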
Yes, if you have debug mode enabled, any sort of data relating to an error can be displayed. This certainly would include sensitive data that would be useful when debugging.
For production, you want all errors to be privately logged, not publicly displayed. For this reason, you will want APP_DEBUG=false in your .env file.
If this is happening while debug mode is already set to false, you will want to configure the hiding/logging of errors at the server level.
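As a rough illustration of what that hiding/logging amounts to at the PHP level: the same effect can be approximated with PHP's own error directives, either in php.ini (display_errors = Off, log_errors = On, error_log = ...) or, as a sketch, near the top of the application's entry script. The log path below is only a placeholder:

// Fallback sketch: stop PHP from printing errors into the response and
// route them to a log file instead (placeholder path).
ini_set('display_errors', '0');
ini_set('display_startup_errors', '0');
ini_set('log_errors', '1');
ini_set('error_log', '/var/log/php/app-errors.log');
error_reporting(E_ALL); // still report everything, just not to the browser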
I am in a situation where the current web server is a production environment and there is no development environment. It is running Joomla on an IIS Web Server and is an Intranet site with all of the security, IP restrictions, Certificates, and whatever else required to run an enterprise level Intranet site.
I am wondering what I can do to set up a development environment to work within (preferably using some type of version control).
I have free rein over the IIS server, and I have had a co-worker set up a VM clone of the current system to work with; however, the security is making it difficult to work with and set up.
I would like to not use Visual Studio as I don't believe I have a license for it; however I can get it if need be. I would like to stick with Notepad++ if at all possible.
Thank you.
If you want to literally take the site content out and be able to edit and work without any of the security restrictions of the production environment, there are a couple of ways you could do it. However, it's going to depend on which database the system is running on.
Joomla, regardless of what web platform it is running on, is coded in PHP, so you don't have to worry about getting Visual Studio. You can use Notepad++ as normal.
Option 1 - IIS Clone
If you can take a SQL backup of the database, you can build a from-scratch box with IIS. You'd need to add the PHP drivers to IIS to do this; go to Microsoft's site for more info:
PHP for IIS
Option 2 - Apache Port
If you're using a Windows machine, you can set up an Apache box using WAMP. PHP is PHP on any platform, so the site should work without modification.
The tricky bit will be the database, depending on your situation. If the database is MySQL, you can import your database backup and be good to go, after changing the config files for the Joomla site.
If the site uses MSSQL, it's a little trickier: you'll need to install a SQL Server driver for PHP (for example, Microsoft's sqlsrv extension) to get this method to work. There are plenty of instructions online on how to do this; it's a case of finding the right one for your implementation.
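With either option, the database settings the site reads live in Joomla's configuration.php in the web root. A hedged sketch of the values you would typically adjust after importing the backup (every value below is a placeholder, and the exact dbtype strings depend on your Joomla version and installed drivers):

<?php
// configuration.php (excerpt) -- placeholder values only.
class JConfig
{
    public $dbtype   = 'mysqli';      // or e.g. 'sqlsrv' if the site runs on MSSQL
    public $host     = 'localhost';   // database host of the new dev box
    public $user     = 'joomla_dev';
    public $password = 'change-me';
    public $db       = 'joomla_dev_db';
    public $dbprefix = 'jos_';        // keep whatever prefix the backup uses
}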
My company's Drupal installation leaves me unable to configure permissions from the admin panel. The key problem arises on the "Administer > By Module" page, where clicking on any of the "configure permissions" links results in a Page Not Found / HTTP 500 error. A number of the settings pages are also broken, so I cannot change the search_config module's settings either.
I've checked Drupal's dblog messages, and there's no mention of the HTTP 500 errors there. I also wandered into the host's root (I'm on a shared hosting service) and checked out Apache's error logs. No dice. Many errors from the other sites on the server, but only 2 old notifications of RSA certificate issues on my domain.
I've been working at this for about a week now, and I'm deeply perplexed as to what can cause this. I've tried turning off clean URLs, and manually entering what I believe to be the URLs for the settings pages, with the same result. This is developing into quite an issue for me, with permissions configuration offline, and also the search_config settings unavailable. Search_config is a big deal also, as I need to exclude some development nodes from the search index, as they are crashing cron's search indexing, preventing it from being up-to-date. Any light that the brilliant minds here can cast on this would be greatly appreciated. Thanks in advance!
Edit: I'd also like to add that I'd looked into PHP settings, and that the PHP timeout is set at 120 seconds, with the PHP memory limit set at 96M. (Just to be complete! ;) )
Currently running: Drupal 6.22, MySQL 5.1.61, PHP 5.2.17, on a shared Apache 2.2.22 server.
Alrighty. It turns out that, despite having already increased the allocated memory from 64M to 96M, a further increase from 96M to 128M for PHP execution was what did the trick. The menus are all online and functioning correctly. I guess the sheer number of installed modules represents a lot of overhead for the server. Thanks to everybody for looking!
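For anyone hitting the same wall: the limit can usually be raised per site in Drupal's settings.php (or, if the host allows it, in php.ini with memory_limit = 128M, or in .htaccess with php_value memory_limit 128M). A minimal sketch, assuming the default sites/default/settings.php location, added to the existing file:

// sites/default/settings.php (Drupal 6) -- raise the PHP memory limit for
// this site only. Some shared hosts ignore ini_set(); fall back to php.ini
// or .htaccess in that case.
ini_set('memory_limit', '128M');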
I was working on my home server remotely and wanted to make some changes to my .htaccess. I could not see the file using my FTP client (FileZilla) and thought there was none there. I decided to upload one I had on my computer to the server's public_html, and although the upload was successful according to FileZilla, the file is not listed anywhere, even when I physically access the server.
It looks like it is being hidden. The main problem is that after this I get the following error message and cannot access my test site:
You don't have permission to access / on this server.
If I access my server and DISABLE SELINUX or make it PERMISSIVE, my pages start working as normal. If I make it ENFORCING my webpage becomes unavailable and I see the error listed above.
Questions:
First of all, how can I make this .htaccess visible in a CentOS 5.6 system?
What is the difference between ENFORCING and PERMISSIVE?
Will I run into Security Risks if I leave my server setup as PERMISSIVE?
Thank you all,
Heh. No one has answered this in 4 months because it's hard to find an answer that is direct & specific (per the guidelines) and won't start a discussion. But I'll give it a try.
FileZilla can show hidden files, the method is different for different versions. Try the View or Server menu, or look for "hidden" in the built-in help.
ENFORCING means that SELinux is running and prevents actions that violate its active policies. PERMISSIVE means that SELinux is running and logs (but does not prevent) actions that violate its active policies.
Yes. Specifically, in ENFORCING mode, a hostile entity would have to both upload a file with malicious code and set the SELinux context for the file in order to run it. In PERMISSIVE mode, they just need to upload the file. This is the most likely explanation for your experience: you uploaded a new .htaccess file, but did not set its SELinux context.
Please note that my intention here is not malicious; it stems from contractual terms between me and a client that I am working to enforce.
Is there anything I can do - via PHP, .htaccess, MySQL or otherwise - that will ensure (to a decent extent) that a WordPress site would be difficult to migrate to a different host?
I completely understand that someone extremely well-versed in PHP, MySQL and WordPress might be able to find a workaround, but I need an easy solution that will ensure a client cannot zip up his WordPress app via FTP, export the database, and migrate it to a new host.
Restricting access to the MySQL admin and the root FTP is not an option.
Thanks for all your help!
Really securely - no. As you say, anyone a bit versed in PHP can work around most restrictions.
Halfway decently, impossible for a not-well-versed person to work around - kind of. You'd need to identify some parameter that is going to change if servers are switched - for example, the directory structure, or the server's IP, which is in the $_SERVER["SERVER_ADDR"] variable.
There are other variables and parameters as well - do a phpinfo() to get an overview.
You'll need to know the value of $_ENV['HOSTNAME'] on your current server, but once you know it (and can verify that it's not going to change, obviously) you could edit the wp-blog-header.php file with the following on line 7:
// Hypothetical example value: replace 'your-host-name' with the actual
// $_ENV['HOSTNAME'] value from the current server.
if ($_ENV['HOSTNAME'] != 'your-host-name') {
    wp_die('Message to Display', 'Title of Error');
}
The reason I suggest doing this in wp-blog-header.php is that, if I were trying to find and remove such a check, I would look in index.php or the theme files first.
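For completeness, here is a variant of the same check keyed to the server IP mentioned earlier rather than the hostname. $_SERVER['SERVER_ADDR'] is not populated under every server setup (and $_ENV['HOSTNAME'] depends on the variables_order setting), so verify with phpinfo() which one is reliable on the current host; the IP below is a placeholder:

// Placeholder IP -- use the SERVER_ADDR value phpinfo() reports on the
// current server.
$expected_ip = '203.0.113.10';

if (!isset($_SERVER['SERVER_ADDR']) || $_SERVER['SERVER_ADDR'] !== $expected_ip) {
    wp_die('This site is not licensed to run on this server.', 'License check failed');
}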