CentOS 5.6: Apache access permission after .htaccess upload - security

I was working on my home server remotely and wanted to make some changes to my .htaccess. I could not see this file using my FTP client (FileZilla) and thought there was none there. I decided to upload one I had on my computer to public_html on my server, and although the upload was successful according to FileZilla, the file is not listed anywhere, even when I physically access the server.
It looks like it is being hidden. The main problem is that after this I now get the following error message and cannot access my test site:
You don't have permission to access / on this server.
If I access my server and DISABLE SELINUX or make it PERMISSIVE, my pages start working as normal. If I make it ENFORCING my webpage becomes unavailable and I see the error listed above.
Questions:
First of all, how can I make this .htaccess visible in a CentOS 5.6 system?
What is the difference between ENFORCING and PERMISSIVE?
Will I run into Security Risks if I leave my server setup as PERMISSIVE?
Thank you all,

Heh. No one has answered this in 4 months because it's hard to find an answer that is direct & specific (per the guidelines) and won't start a discussion. But I'll give it a try.
FileZilla can show hidden files; the method differs between versions. Try the View or Server menu, or search for "hidden" in the built-in help.
ENFORCING means that SELinux is running and prevents actions that violate its active policies. PERMISSIVE means that SELinux is running and logs (but does not prevent) actions that violate its active policies.
Yes. Specifically, in ENFORCING mode, a hostile entity would have to both upload a file with malicious code and set the SELinux context for the file in order to run it. In PERMISSIVE mode, they just need to upload the file. This is also the most likely explanation for your experience: you uploaded a new .htaccess file but did not set its SELinux context.
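If it helps, here is a minimal sketch of checking and repairing this from a shell on CentOS 5.x. The path is only an example (use wherever your .htaccess actually lives), and httpd_sys_content_t is the usual type Apache expects on content it serves:

    # Show the current SELinux mode (Enforcing / Permissive / Disabled)
    getenforce

    # Show the SELinux context of the uploaded file
    ls -Z /var/www/html/.htaccess

    # Reset the file to the default context defined by the policy
    restorecon -v /var/www/html/.htaccess

    # Re-enable enforcing mode once the context is fixed
    setenforce 1

If restorecon does not pick a sensible default for your directory layout, chcon -t httpd_sys_content_t <file> sets the type explicitly.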

Related

IIS is serving but not executing classic asp script

I wrote a classic ASP script (.asp) for a customer a while back. It was running on IIS v6.1 on Windows 2003. The customer contacted me and said they had a catastrophic server failure and restored from backup, but my script isn't running now. I logged onto their server to check it out, and IIS is serving the file (I am prompted to save when I browse to the script) but not executing the script.
Several people's hands were in the server before they called me. I think this is probably a simple config setting someone tried before they figured out how to enable the "ASP" web server role feature, but for the life of me I can't figure out how they did it; this is obviously not the default behavior. If I were trying to get this behavior I would add the .asp extension to the MIME types, but I checked and it isn't there.
What could cause IIS to serve the source of the ASP script without executing it?
Based on your question I am assuming the restored server is also Windows Server 2003. In that case you would go to the file/folder permissions in IIS and enable execute permission so a server-side script processor can handle the request. It's been almost a decade since I touched a 2003 server, so I can't give you the exact steps, but you want to enable script permissions on that folder (I think; I don't remember if it's granular enough to drill down to a single file). Also, why on earth are they still running Server 2003? Is that version even supported anymore?
If it's IIS 7, first make sure your application pool is using the Classic managed pipeline mode. Then go to the site's Handler Mappings section, click Edit, and configure the .asp handler there.
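If you prefer the command line, here is a rough sketch using appcmd on IIS 7.x; the app pool and site names are placeholders, and this assumes the ASP feature is already installed:

    REM Switch the application pool to the Classic managed pipeline
    %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /managedPipelineMode:Classic

    REM List the handler mappings for the site and confirm *.asp is mapped to the ASP module
    %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site" /section:handlers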

IIS Shared config - applicationHost.config Error: Cannot write configuration file due to insufficient permissions

I've set up a UNC share for IIS shared configuration using a specific AD service account with FULL CONTROL on the share. I've exported the config from one IIS server and set up an additional IIS server to point to the share. When I open the applicationHost.config on the UNC share and, for example, remove an application pool manually, I can see the entry removed on both IIS servers.
So I know:
1) I can export to the share with the specific service account
2) Both IIS servers can read the config when I edit manually
3) However, when I remove an app pool from one of the IIS servers through IIS Manager, I get the above error.
I've tried using the Process Monitor utility to see what account is being used to write to the config, and it seems it is my own AD user account rather than the shared service account. I know IIS Manager shows my username, e.g. ROOT\MYNAME, logged on, but I wouldn't have thought it would use this account to write changes to the shared config. Surely it would use the service account?
Does anyone know how to prevent this error? Why does the shared config and tied service account not come into play when making changes on one of the servers?
So, IMHO, this error is a red herring. I was publishing to a server and got a message saying I was out of space. So I logged in and realized there was a bit of cruft: extra apps published in IIS that we didn't need. I right-clicked and tried to remove one, and got the same error as you.
Having made some manual changes to applicationHost.config, I thought it "might be me", but it seemed very odd that editing this file would cause such a thing. However, I had recently learned that Windows does some funky 32- vs 64-bit machinations with this file (google it).
Deciding I had better things to do, I asked our IT to add space to the VM, and guess what? I am now able to remove these apps. My guess is that I was at the end of the line on space, and the back-end management of these special files was not completing and threw this not-so-helpful exception.
I'm not 100% sure about this. For full disclosure, I will add that updates had been applied recently, but I'm pretty confident that this is a possible solution.

Does the process have permission to view files on Azure?

I have recently deployed a FubuMVC application to Windows Azure. Everything works except when the pipeline tries to find the view to render. This all works correctly on my local machine.
So I am wondering: does the process on the Azure box have rights to read/scan files on disk?
Any suggestions to fix it are welcome though.
EDIT:
As part of the deployment there is a stage Azure goes through called "Preparing files for deployment". I checked the log and my view was not in there.
So I set the view's Copy to Output setting to true and it worked.
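For reference, this is roughly what that setting looks like in the web project's .csproj; the view path and extension here are made up, so substitute whatever your project actually contains:

    <!-- Mark the view as Content and copy it to the build output so the
         "Preparing files for deployment" step packages it -->
    <ItemGroup>
      <Content Include="Views\Home\Home.spark">
        <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      </Content>
    </ItemGroup>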
It depends a bit on where you are trying to read and how you have configured your roles. By default, the code will run as a very low privilege user that only has R/W to the code directory (and any LocalResource(s) defined by the user). However, you can run your code as SYSTEM, in which case you can R/W anywhere (you might still have to take ownership, but you are all powerful as SYSTEM).
If your views are defined as part of your package and uploaded, the code should have permission to read them. I am curious as to why you think this is a permission issue right now. Do you see an error that indicates that, or are you guessing? If I had to guess, my first thought would be that your views didn't get packaged correctly and are not on the VM. You can confirm they are there either by RDP or by cracking open the package and snooping around.
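If it did turn out to be a permission problem, here is a sketch of the elevated option mentioned above, in ServiceDefinition.csdef (the role name is a placeholder). Note that this elevates the role entry point; running the web site itself under a different identity generally needs a startup task on top of this:

    <WebRole name="MyWebRole">
      <!-- Run the role entry point (OnStart etc.) with elevated privileges -->
      <Runtime executionContext="elevated" />
      <!-- ... Sites, Endpoints etc. as generated by the Azure tooling ... -->
    </WebRole>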

How to do remote staging in liferay 6.1.1 GA2?

I have a site where local staging worked fine, but when I tried to connect it to a remote server it doesn't work; it gives an error that the connection can't be established. Has anyone tried this?
This is the configuration with the error message:
This blog post (disclaimer: my own) explains how to do it with https - you can omit long parts of it if you don't want encryption. It also covers 6.0, but the general principle is still the same.
You want to pay special attention to the paragraph "Allow access to webservices" in that article and check whether your publishing server (the "stage") has access to the live server. In general, if this is not on localhost, it requires configuration as mentioned in that article.
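For reference, these are the kind of portal-ext.properties entries on the live server that the article is about; the IP address is a placeholder for your staging server, and the exact property set can vary between Liferay versions, so treat this as a sketch:

    # Allow the staging server to reach the live server's web services and tunnel servlet
    axis.servlet.hosts.allowed=127.0.0.1,SERVER_IP,192.168.1.10
    tunnel.servlet.hosts.allowed=127.0.0.1,SERVER_IP,192.168.1.10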
As you indicate that you can't connect to your production server from your staging server, check this by opening a browser on the staging server and connecting it to the production server: go to http://production-server-name:8080/api/axis and verify that you can connect. (Note: you only get an authoritative result for this test when you are not accessing localhost as the production system, so do run the browser on the staging system.) With this test you can rule out the first possibility, that your remote system is being disallowed. Once this succeeds, you'll need credentials for the production server to be entered on the staging server; the account you use needs permission to change all the data it needs to change when publishing content (and pages etc.).
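The same check can be scripted from the staging server's command line; the hostname is the placeholder from the example above:

    # Run on the staging server: if this answers at all, the live server is reachable
    # and access to the web services is not being blocked for this host
    curl -I http://production-server-name:8080/api/axis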
The error message you give in the added screenshot can appear when the current user on staging does not have access to the production system with the credentials used. Verify that the same user account you are using on your staging system (the one that gets the error message from the screenshot) also exists in your production system, and synchronize the passwords of the two.
In your comment you mention that you're using different versions for the staging and the production environment. I don't expect that to work, so this might be the root cause. Test with both systems at the same version.
A couple important points to keep in mind with remote publishing:
If you're not on LDAP (or you have different LDAPs for different environments), you should validate that your user account is exactly the same in both source and target environments. So, if you're on the QA site and you want to remote publish to production, your screen name, email address, and password should all be the same.
Email address is uber important. Depending on which distribution (version) of Liferay you are on, the remote publish code uses your email address irrespective of whether or not you have portal-ext.properties configured to use screen names.
You should have the Administrator role on both sides. It may not be required in every scenario, but giving that role to users who do remote publishing has saved me time and effort debugging why someone's remote publish didn't work. Debugging this process takes a very long time.
If remote publishing is causing you problems (and it probably is or you wouldn't be here), try doing LAR file exports/imports. This is important since remote publish failures are not exactly helpful in telling you what failed; they just tell you that they failed. Surprisingly, there are often problems in the export process, and you can sometimes pinpoint some bad documents or a funky development thing you did using Global scope and portlet preferences that caused your remote publish to fail. I generally use this order in this situation: a) documents and media [exclude thumbnails or your LAR file will likely double in size; also exclude ranks if you're not using them] from the wrench icon in the control panel, b) web content from the wrench icon in the control panel, c) public pages [include data > web content display, but remove all the other data check boxes], include permissions, include categories, d) private pages [same options as public pages].
If you already have the Administrator role and it's still saying you don't have permission to remote publish to the remote site, set up your user on the target environment with the "Site Administrator" or "Site Owner" role.
A little late for first and foremost, but anytime you have something that's not working (remote publishing or otherwise), check the logs before you do anything else. The Liferay code base doesn't include a lot of helpful logging, but you do occasionally get a nugget of information that helps you piece together enough to do root cause analysis.
Cheers! HTH

Drupal menu items and blog entries disappeared for anonymous users

I've been struggling with a problem for a few hours now and I cannot find any answers or anyone with the same problem.
Some menu items are missing on my site www.namhost.com (Drupal 6.22) and when viewing the blog it shows "No blog entries have been created". When I log in as admin everything works fine, so this problem only occurs for anonymous/guest users.
I've changed nothing on the site which may have caused this problem and here comes the really strange part - When viewing a copy of the site locally everything works 100% even for anonymous/guest users.
I've tried:
flushing caches
rebuilding permissions
checked if the "anonymous" user is present in the database
viewing on different browsers
None of these yielded any results.
Because the problem doesn't occur locally I'm starting to believe this could be a problem on the server the site is hosted on (Linux with PHP5.2), but the admins had a look and couldn't find anything.
Any help/insight would be highly appreciated.
================ FIXED ================
I am not allowed to answer my own question and it was suggested that I edit the question to include my answer so here goes:
Firstly, thanks for all the responses.
I disabled the "ACL" module (http://drupal.org/project/acl) and the problem was solved. It was previously used for our forum which was also disabled a few months back, so it's not needed any more.
I still have no idea why this module caused the site to work locally but not on the server. I will be in contact with the server admins to find out if they changed/updated anything on the server which may have caused this module to cause a malfunction.
Any insight could still be helpful to prevent this from happening again.
Check your Drupal config:
Are you using node_access, content_access, or any other permissions-related addon mods? Disable them and see if the problem persists. If that doesn't work, disable all non-core mods and re-enable them one-at-a-time until you find the offender.
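If you have drush available on the server, the disable/re-enable cycle is quicker from the command line; the module name below is just one of the examples mentioned above:

    # Disable a suspect permissions-related module, clear caches, then retest as an anonymous user
    drush pm-disable content_access -y
    drush cache-clear all

    # Re-enable it once it is ruled out, and move on to the next module
    drush pm-enable content_access -y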
Compare your hosting configs:
If it's not related to Drupal, compare the local and remote server configurations. Do both use the same versions of php, apache, apc, cgi, etc.? A phpinfo(); on both servers should give you the most important details for comparison. Do a similar comparison of the MySQL setup and content. Finally, check for differences in your .htaccess files (if any) between the two locations.
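For the PHP comparison, a minimal throwaway script you can drop on both hosts works fine (the filename is arbitrary; remember to delete it afterwards):

    <?php
    // info.php - dump the PHP configuration so the local and remote
    // environments can be compared side by side
    phpinfo();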
Test another hosting environment:
Download a virtual appliance like QuickStart which is already configured to host Drupal sites for development and non-production purposes, and see if the site works correctly in that. If it does, you could do an additional validation by porting to a new host who offers a trial/money-back-guarantee and see if it works correctly there.
If your site works fine elsewhere, give your current host a good thrashing for making you go through all of this to figure out the problem lies on their end.
