Piece of code to ensure site cannot be migrated from host? - security

Please note, my intention here is not malicious; it stems from contractual issues between me and a client that I am working to enforce.
Is there anything I can do - via PHP, .htaccess, MySQL or otherwise - that will ensure (to a decent extent) that a WordPress site would be difficult to migrate to a different host?
I completely understand that someone extremely well-versed in PHP, MySQL and WordPress might be able to find a workaround, but I need an easy solution that will ensure a client cannot zip up his WordPress app via FTP, export the database, and migrate it to a new host.
Restricting access to the MySQL admin and the root FTP is not an option.
Thanks for all your help!

Really securely - no. As you say, anyone a bit versed in PHP can work around most restrictions.
Halfway decently - hard for a not-well-versed person to work around - kind of. You'd need to identify some parameter that is going to change if servers are switched - for example, the directory structure, or the server's IP, which is in the $_SERVER["SERVER_ADDR"] variable.
There are other variables and parameters as well - do a phpinfo() to get an overview.
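For example, here's a minimal sketch of an IP-based check (203.0.113.10 is a placeholder - substitute whatever $_SERVER["SERVER_ADDR"] reports on the current host):
// Placeholder IP - use the current server's actual address.
$expected_ip = '203.0.113.10';
if (!isset($_SERVER['SERVER_ADDR']) || $_SERVER['SERVER_ADDR'] !== $expected_ip) {
    // Fail in a way that doesn't advertise what tripped.
    header('HTTP/1.1 500 Internal Server Error');
    exit;
}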

You'll need to know the value of $_ENV['HOSTNAME'] on your current server, but once you know this (and can verify it's not going to change, obviously) you could edit the wp-blog-header.php file with the following on line 7:
if ($_ENV['HOSTNAME'] != 'your-host-name') {
    // Stop WordPress from loading anywhere but the original host.
    wp_die('Message to Display', 'Title of Error');
}
The reason I suggest doing this in wp-blog-header.php is that, if I were looking for how to remove such a check, I would start with index.php or the theme files.
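One caveat: $_ENV['HOSTNAME'] is only populated when PHP's variables_order setting includes "E", so verify it actually has a value on your host. If it comes up empty, here's a sketch of a fallback (gethostname() requires PHP 5.3+; php_uname('n') works on older versions):
// Fallback if $_ENV['HOSTNAME'] is empty on this server:
$host = function_exists('gethostname') ? gethostname() : php_uname('n');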


Variables from my .env are shown on error

I just started using Laravel's Lumen and managed to make it work both locally and on a server. When I was about to start exploring it, my index.php consisted of just:
$app = require __DIR__."/../lumenTest/bootstrap/app.php";
$app->run($app->make('request'));
echo $myundefinedvariable;
Which displays an ErrorException: Undefined variable: myundefinedvariable, but inside the "...at Application->Laravel\Lumen\Concerns{closure}" window I can see a giant wall of text with stuff like:
... 'APP_KEY' => 'fake0BqKgHeC72EmT7039B6pDCsJ90key' , ..., 'DB_PASSWORD' => 'secret', ...
And my first thought was that maybe it was because I'm running it locally with XAMPP or something, so I went and tried it on the server, and the same thing happened.
Is it normal that sensitive data from my .env file gets shown to everyone after doing any php error?
Is there a way to avoid this happening? (Other than not having any PHP errors, because I tend to have them a lot.)
Additional info:
PHP version 7.1.12
Lumen (5.6.1) (Laravel Components 5.6.*)
The directory "lumenTest" is one level above my www/public directory, and that is where the .env is located; the site is on a shared Linux host.
No, that's not normal. Professional developers consider this amateurish behavior; it's exactly why some companies won't even consider using Laravel.
Many people (including me) have already notified them that this is really not done, but the developers don't seem to care. In fact, it's the only framework I know of that thinks it's OK to print critical information on a debug page. Sure, a visitor should never see stack traces, SQL queries, or pieces of code - but environment variables are confidential and should never end up in an HTTP response.
The best advice I have is to use a professional MVC framework like ASP.NET, CodeIgniter, or Yii, since there's no telling what else the Laravel devs think is OK to do...
If on the other hand you do decide to use Laravel anyway, there's a package that counters this: https://github.com/GlaivePro/Hidevara
It's real easy to setup, just make sure you don't forget the app->extend instruction.
On a production server you must not run "composer install" but rather "composer install --no-dev". This way filp/whoops will (should, hopefully) not be installed and cannot be triggered.
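For example, a typical production install step might look like this (the --optimize-autoloader flag is optional, but common in production):
composer install --no-dev --optimize-autoloader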
For professional development, I really recommend not using Laravel, since the bar of what they consider acceptable seems to be very low.
As a side note: the developers claim that nothing can go wrong when APP_DEBUG=false, but incidents in the past have shown that the whoops handler can be triggered even when debug mode is disabled. https://www.google.com/amp/s/blog.hacken.io/dangers-of-laravel-debug-mode-enabled%3fhs_amp=true
Yes, if you have debug mode enabled, any sort of data relating to an error can be displayed. This certainly would include sensitive data that would be useful when debugging.
For production, you want all errors to be privately logged, not publicly displayed. For this reason, you will want APP_DEBUG=false in your .env file.
If this is happening while debug mode is already set to false, you will want to configure the hiding/logging of errors at the server level.
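For reference, a minimal sketch of the relevant production .env lines (APP_ENV and APP_DEBUG are standard Laravel/Lumen keys; everything else stays as you have it):
APP_ENV=production
APP_DEBUG=false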

IIS application subdomain

At work we have IIS 7, with applications organized like this:
[Server]/[Site-with-irrelevant-name]/[Application]
To access the applications from another computer, the URL format is simply:
http://[Server]/[Application]
My colleague affirms that no DNS server is involved in the process.
I would like to access some applications using a URL of the form:
http://[Application].[Server]
I thought that I should use Bindings to solve this but so far I've found no working solution.
There's also nothing relevant in the computers' hosts files.
This doesn't look like the kind of configuration I'm used to working with.
Any idea?
Edit: In the comment below I meant semi-relative (i.e. root-relative, "/...") paths.
Solution found: my initial problem was that I had developed the application on a local computer using semi-relative paths in multiple places.
So another solution is to access the application through
http://[Server]:[A different defined port]/
It's not exactly the solution to the question I wrote, so I won't mark this as the answer.
Solutions that would allow me to use a real subdomain are still welcome.
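For the record, the usual way to get http://[Application].[Server] working is to create a separate IIS site whose physical path is the application's folder, with a host-header binding - and, since no DNS server is involved, a hosts-file entry on each client machine. A rough sketch with appcmd (names, paths and the IP are placeholders):
%windir%\system32\inetsrv\appcmd add site /name:"Application" /physicalPath:"C:\inetpub\wwwroot\Site\Application" /bindings:http/*:80:application.server
Then on each client, a line like this in C:\Windows\System32\drivers\etc\hosts:
192.0.2.10  application.server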

Drupal menu items and blog entries disappeared for anonymous users

I've been struggling with a problem for a few hours now and I cannot find any answers or anyone with the same problem -
Some menu items are missing on my site www.namhost.com (Drupal 6.22) and when viewing the blog it shows "No blog entries have been created". When I log in as admin everything works fine, so this problem only occurs for anonymous/guest users.
I've changed nothing on the site which may have caused this problem, and here comes the really strange part - when viewing a copy of the site locally everything works 100%, even for anonymous/guest users.
I've tried:
flushing caches
rebuilding permissions
checked if the "anonymous" user is present in the database
viewing on different browsers
None of these yielded any results.
Because the problem doesn't occur locally, I'm starting to believe this could be a problem on the server the site is hosted on (Linux with PHP 5.2), but the admins had a look and couldn't find anything.
Any help/insight would be highly appreciated.
FIXED:
I am not allowed to answer my own question and it was suggested that I edit the question to include my answer so here goes:
Firstly, thanks for all the responses.
I disabled the "ACL" module (http://drupal.org/project/acl) and the problem was solved. It was previously used for our forum which was also disabled a few months back, so it's not needed any more.
I still have no idea why this module caused the site to work locally but not on the server. I will be in contact with the server admins to find out if they changed/updated anything on the server which may have caused this module to cause a malfunction.
Any insight could still be helpful to prevent this from happening again.
Check your Drupal config:
Are you using node_access, content_access, or any other permissions-related add-on modules? Disable them and see if the problem persists. If that doesn't work, disable all non-core modules and re-enable them one at a time until you find the offender.
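If your host lets you run drush, it makes that elimination round much faster. A rough sketch (the module names are examples - substitute what's actually installed; exact command names vary a little between drush versions):
drush pm-disable acl content_access -y
drush cache-clear all
drush php-eval 'node_access_rebuild();'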
Compare your hosting configs:
If it's not related to Drupal, compare the local and remote server configurations. Do both use the same versions of php, apache, apc, cgi, etc.? A phpinfo(); on both servers should give you the most important details for comparison. Do a similar comparison of the MySQL setup and content. Finally, check for differences in your .htaccess files (if any) between the two locations.
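For the phpinfo() comparison, a throwaway file like this in both document roots is enough (delete it when you're done, since it leaks configuration details):
<?php phpinfo();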
Test another hosting environment:
Download a virtual appliance like QuickStart which is already configured to host Drupal sites for development and non-production purposes, and see if the site works correctly in that. If it does, you could do an additional validation by porting to a new host who offers a trial/money-back-guarantee and see if it works correctly there.
If your site works fine elsewhere, give your current host a good thrashing for making you go through all of this to figure out the problem lies on their end.

CentOS 5.6: Apache access permission after .htaccess upload

I was working on my home server remotely and wanted to make some changes to my .htaccess. I could not see this file using my FTP client (FileZilla) and thought there was none there. I decided to upload one I had on my computer to public_html on the server, and although the upload was successful according to FileZilla, the file is not listed anywhere, even when I physically access the server.
It looks like it is being hidden. The main problem is that after this, I now get the following error message and cannot access my test site:
You don't have permission to access / on this server.
If I access my server and DISABLE SELINUX or make it PERMISSIVE, my pages start working as normal. If I make it ENFORCING my webpage becomes unavailable and I see the error listed above.
Questions:
First of all, how can I make this .htaccess visible in a CentOS 5.6 system?
What is the difference between ENFORCING and PERMISSIVE?
Will I run into Security Risks if I leave my server setup as PERMISSIVE?
Thank you all,
Heh. No one has answered this in 4 months because it's hard to find an answer that is direct & specific (per the guidelines) and won't start a discussion. But I'll give it a try.
FileZilla can show hidden files; the method differs between versions - in current versions it's under the Server menu as "Force showing hidden files", in older ones check the View menu, or search for "hidden" in the built-in help.
ENFORCING means that SELinux is running and prevents actions that violate its active policies. PERMISSIVE means that SELinux is running and logs (but does not prevent) actions that violate its active policies.
Yes. Specifically, in ENFORCING mode, a hostile entity would have to both upload a file with malicious code and set the SELinux context for the file in order to run it. In PERMISSIVE mode, they just need to upload the file. This is the most likely explanation for your experience: you uploaded a new .htaccess file, but did not set its SELinux context.
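A rough sketch of the commands involved (the path is a placeholder - use your actual document root):
getenforce                                        # show the current mode
ls -Z /home/user/public_html/.htaccess            # inspect the file's SELinux context
restorecon -v /home/user/public_html/.htaccess    # reset it to the policy default
setenforce 1                                      # back to ENFORCING once the context is fixed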

How to work collaboratively on a website

I'm working on a website with some other people. Usually when we want to modify something, we make the change on our own machines and just upload the new version with FTP, hope it works (or that nobody notices it doesn't before we correct it), and that's it.
That's already not the best way to work alone, and it's even worse for working collaboratively, so I'm asking for advice.
I think a solution like SVN/Git/Mercurial could help. I found Bitbucket, which offers free private Mercurial repositories. But even then, how do I upload my changes over FTP and make sure the version on my computer is the same as the one on the server?
We are all doing this in our free time (unpaid) and people come and go every year, so I'm looking for something free, easy to use (explaining to everyone why we should use a DVCS is already hard), and which doesn't rely on a specific person.
The server we are using to host the website is a cheap one and doesn't allow the use of SSH, SVN, etc.
Thank you
Version control will not help with the issue you are describing - namely, uploading untested changes to a production site.
What you (and your team) need is better quality control procedures - you need a test website and a tester (QA) person. The process would be:
Make a change
Update the test website
Have the update and the whole website signed off by QA
Update the production/live site
What you will gain by using version control (CVS, SVN, Git or anything else) is recoverability - you will be able to go back to a version before any breaking change. It will still not solve the issue of "the new code broke the site".
You want scheduled releases.
Commit and update code regularly
Code freeze, or develop in a branch and merge to trunk
Test on a staging environment
If you find a bug, go back to step 1
Release
You need to understand that your latest correct working build is represented not by what's on the server but by what's in your source repository, whether that's SVN or just the file system - anything, as long as it isn't the live server! Make sure everything works locally as expected; then, unless the site is huge (I guess not, given your situation), deploy it in its entirety as a single version.
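If you do adopt version control, here's a minimal sketch of that release flow with git (branch and tag names are just conventions):
git checkout -b release-june master    # code freeze: cut a release branch
# ...test on staging, commit fixes to the branch...
git checkout master
git merge --no-ff release-june         # merge the tested release back
git tag v1.2                           # tag exactly what went live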
