Is it possible to get the content of a file through a symlink?
I can create the file with content as long as fullchain.pem isn't a symlink.
My fileserver configuration:
[shared_files]
    path /etc/puppetlabs/shared_files
    allow *
Then I try to pass the content to another server:
file { '/etc/ssl/fullchain.pem':
  ensure             => file,
  mode               => '0664',
  owner              => 'root',
  group              => 'root',
  links              => follow,
  source_permissions => ignore,
  source             => 'puppet:///shared_files/fullchain.pem',
}
Thanks in advance.
I think you're asking about what effect the given file resource has if /etc/puppetlabs/shared_files/fullchain.pem is a symbolic link on the master. The basic answer is that Puppet's built-in fileserver follows symbolic links. This is not documented in the places you would be most likely to look, but the documentation for the fileserver configuration file states it clearly in the following warning:
CAUTION: Always restrict write access to mounted directories. The file server follows any symlinks in a file server mount, including links to files that agent nodes should not access (like SSL keys). When following symlinks, the file server can access any files readable by Puppet Server’s user account.
Note that this has nothing to do with the links parameter of the File resource. That affects what Puppet does when the specified path on the target node identifies a symbolic link. Specifically, if links is set to follow, as in your example, and the local path identifies a symlink, then Puppet will manage the file to which the link points. Otherwise (if links is set to manage, the default) the specified path itself is always the one managed. In that case, if the path initially identified a symlink, then Puppet would replace it with a regular file (supposing the example is otherwise unmodified).
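To make the distinction concrete, here is a sketch of the two variants (they are alternatives, not meant to appear in the same catalog; the path is just the one from the question):

# Default behavior: the path itself is managed, so a pre-existing
# symlink at /etc/ssl/fullchain.pem is replaced by a regular file.
file { '/etc/ssl/fullchain.pem':
  ensure => file,
  links  => manage,   # the default
  source => 'puppet:///shared_files/fullchain.pem',
}

# With follow: if the path is a symlink, Puppet manages the file
# it points to rather than replacing the link.
file { '/etc/ssl/fullchain.pem':
  ensure => file,
  links  => follow,
  source => 'puppet:///shared_files/fullchain.pem',
}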
Related
Is there a way to deny outside access to my upload directory? I don't want users to access my upload directory: www.example.com/uploads
I used a .htaccess file in the root of my upload folder; however, all the links were broken.
In my .htaccess:
deny from all
Any solution?
If you wish to disable directory listing, simply place 'Options -Indexes' in your .htaccess.
You've applied a 'deny from all', which essentially stops ANYONE from accessing files in the directory to which it applies.
Also make sure that 'AllowOverride All' is specified in the vhost definition, otherwise you are unable to override settings via the htaccess file. That is my understanding anyway.
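For example, in the vhost definition (the path here is a placeholder):

<Directory "/var/www/example.com/uploads">
    AllowOverride All
</Directory>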
If you wish to disable access to the upload directory and control which files specific users can access, I'd recommend going through a script written in a language such as PHP. A user requests a file from the script, and the script checks whether they're allowed to view the file. If they are, the file is served; if they aren't, it is not.
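A bare-bones sketch of that idea in PHP; the uploads path is a placeholder, and is_allowed() is a hypothetical stand-in for your own permission logic:

<?php
// download.php -- serves a file only if the current user may see it.

// Hypothetical stand-in: replace with your own check (session + DB, etc.).
function is_allowed(string $file): bool {
    return false;
}

$file = basename($_GET['file'] ?? '');   // basename() strips any path components
$path = '/var/www/uploads/' . $file;     // ideally a directory outside the web root

if ($file === '' || !is_file($path) || !is_allowed($file)) {
    http_response_code(403);
    exit('Forbidden');
}

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $file . '"');
readfile($path);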
References
http://www.thesitewizard.com/apache/prevent-directory-listing-htaccess.shtml
http://mathiasbynens.be/notes/apache-allowoverride-all
Let's say there's a website www.example.com/user/john. Accessing that link takes you to www.example.com/user/john/index.html.
There are files like www.example.com/user/john/picture.png and www.example.com/user/john/document.html. These are also accessible to the public, but there's no link to these from index.html.
Is there a systematic way to discover these files? I'm asking because I'm going to set up my website, and I also want to put up a few files that I don't necessarily want everyone to see, only people I give the link to. So I'm wondering how easy or hard it is to find out that those files exist in my directory.
Most importantly, you have to switch off directory browsing. Every server has its own way to disable it. Then you can rely on the proposed "security through obscurity" approach.
Another way is to have a specific folder whose access is restricted by HTTP basic authentication. This can be configured in the .htaccess file that you put in the root of the directory you want to share only with specific people.
Just google ".htaccess" and "basic authentication".
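A typical .htaccess for that looks like the following (the AuthUserFile path is a placeholder; the password file is created with the htpasswd utility, e.g. htpasswd -c /path/to/.htpasswd alice):

AuthType Basic
AuthName "Restricted"
AuthUserFile /path/to/.htpasswd
Require valid-user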
HTTP does not provide a directory listing method as such. Most web servers can be set up to provide an HTML-formatted directory listing if an index file such as index.html does not exist. If you provide an index file so that autoindexing does not happen (or if you disable autoindex in the web server configuration), and if your "hidden" file names are hard to guess, they should be pretty hard to find. I sometimes publish such files in a directory with a random gibberish name.
"Share links" used by Dropbox, Picasa and other services do the same, they just use longer random file/directory names or random parameters in the URL.
To provide real security you'll want to set up https (SSL/TLS) so that any eavesdroppers on the network cannot easily look at the requested URLs, and authentication such as HTTP Basic Authentication with username/password. For less sensitive stuff, http to a random hidden directory will often do fine.
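If you do rely on a hidden random name, generate it from a cryptographic source rather than typing "gibberish" by hand; a one-off PHP snippet (PHP 7+) would be:

<?php
// 16 random bytes -> a 32-character hex name for the hidden directory
echo bin2hex(random_bytes(16)), PHP_EOL;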
I have read many tutorials about file permissions, but all they say is, for example, "if you don't want others to write to your files, set it to xxx..."
But on a web host, who is who, really?
There is just a web server (Apache), PHP, MySQL, and other programs; there are no "other users". The tutorials said that Apache is considered "public". But I have a PHP script which takes an uploaded file and puts it in a "downloads" directory. I set that directory's permissions to 744, which means group and public should only be able to "read" while the owner has full access.
I expected my uploaded file not to be transferred to that directory, because "public" has no "write" permission, but the file was there. Even more confusing: when I tried to download the file, I got a "forbidden" error. I expected to be able to download it, because "public" had the "read" permission.
The user in this case is the web server itself. Apache usually runs as the user "apache" or "www-data" when it reads and writes files on the server filesystem. Static content should be readable by the server; upload locations must be writable by whatever identity PHP runs under. Depending on the other users on the system, you may consider the web server to be the "other" user and the webmaster account the actual file owner.
That likely explains both of your surprises. On many shared hosts PHP runs as the account that owns the files (via suPHP/suEXEC), so your upload script could write to the 744 directory as its owner. The download, on the other hand, is served by Apache as "other", and a directory needs the execute (search) bit, not just read, before files inside it can be opened; 744 gives "other" read but not execute, hence the "forbidden" error. Setting the directory to 755 would let Apache serve the files.
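As a rough illustration (the account and group names webmaster/www-data and the paths are assumptions and vary per host; on a suPHP/suEXEC setup you would often not need group access at all):

# Webmaster account owns the tree; Apache's group may read it.
chown -R webmaster:www-data /var/www/site/downloads

# A directory needs the execute (search) bit before files inside it can be opened:
chmod 755 /var/www/site/downloads      # owner rwx, group/other r-x
chmod 644 /var/www/site/downloads/*    # owner rw-, group/other r--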
An .htaccess file is uploaded to a directory via FTP; the owner and group of that file are then generally the FTP user and/or root.
If that directory had its permissions set to 0777, would it be possible for a remote script to overwrite the .htaccess file, or would every attempt be blocked because the owner and group of the .htaccess file is the FTP user (and/or root), while the attacker (depending on which port they were attempting to enter through) will not be logged into the server as the FTP user (and hopefully not as the root user either)?
The reason I ask is that I need a directory to have permissions 0777, and I am concerned that the .htaccess file (which prevents scripts from running in that directory) could simply be overwritten, meaning the server would be vulnerable to attack.
Thanks,
John
Personally, I wouldn't set 0777 permissions on a directory containing a .htaccess file. In that situation I would probably advise moving the files requiring 0777 permissions into a subdirectory.
You're going to be vulnerable to an attack if a script has write access to that folder regardless: write permission on a directory lets any process delete and recreate files in it, whatever the individual file's owner is (unless the sticky bit is set). Here's an example from a true story on a friend's server:
Old version of TimThumb allowed files to be uploaded maliciously
The file uploaded was Syrian Shell, a script used to decrypt user permissions and begin creating new files
Access was given to the intruder and the server was effectively turned into a host for a phishing site.
I highly recommend you take a look at your structure. Move write access to a subdirectory. Hope this helps.
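For reference, the kind of .htaccess usually placed in an upload directory looks something like this (a sketch only; php_flag requires mod_php, and AllowOverride must permit these directives):

Options -Indexes -ExecCGI
php_flag engine off
RemoveHandler .php .phtml .php5
RemoveType .php .phtml .php5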
Here's what I need to do -- either:
include an external file in my .htaccess file that resides on Server B, or
parse the .htaccess file on Server A using PHP, or
even a more clever solution (which I can't dream up at this time given my limited experience with httpd.conf and apache directives)
Background
I have an .htaccess file on Server A. I set its permissions to -rw-rw-rw- (0666) and build it dynamically, based on events throughout the day on Server B, in order to achieve certain objectives of my app on Server A. I have since discovered that my hosting provider sweeps their server (Server A) each night, finds world-writable files, and changes their permissions to 0664. Kudos to them for securing the server. [Please no comments on my method for wanting to make my .htaccess file world-writable -- I truly understand the implications]
The .htaccess file on Server A simply exists to provide Shibboleth authentication. I state this because the only dynamic aspect of the Apache directives is the Require user stack.
Is it possible to include the "user stack" that resides on Server B in my .htaccess file that resides on Server A?
Or can I parse the .htaccess file on Server A via the PHP engine?
Thanks for helping me solve this problem.
Here's what the .htaccess looks like:
AuthType shibboleth
AuthName "Secure Login"
ShibRequireSession on
Header append Cache-Control "private"
Require user bob jill steve
All I want to do is update the bob jill steve portion of the file every time I add/change/delete users in my application, so that my Shibboleth required users (on Server A) stay in sync with my MySQL/PHP web app (living on Server B).
(Version 2 of this post missed the Require user point on first reading -- sorry).
Both my immediate instinct and my considered one are that dynamic .htaccess files (especially ones designed to be written from a separate web service) are a disaster waiting to happen in security terms, and that your hosting provider is right to do this, so you should regard it as a constraint.
However, there is nothing to stop a process on Server A running under the application UID (or GID, if mode 664) from rewriting the .htaccess file. Why not add a script to A that services an "htaccess update" request? It can accept the updated Require user dataset as a (say, JSON-encapsulated) parameter, plus some form of shared-secret signature, perform any necessary validation, and update the .htaccess file locally. Server B can then build the list and initiate the transfer via a web request.
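A minimal PHP sketch of such an endpoint on Server A; the secret, the .htaccess path, and the header name are placeholders, and the HMAC check is just one reasonable reading of "shared secret signature":

<?php
// update_htaccess.php -- called by Server B with a JSON array of usernames.
const SHARED_SECRET = 'change-me';                     // placeholder
const HTACCESS_PATH = '/path/to/protected/.htaccess';  // placeholder

$body = file_get_contents('php://input');
$sig  = $_SERVER['HTTP_X_SIGNATURE'] ?? '';

// Reject the request unless Server B signed the body with the shared secret.
if (!hash_equals(hash_hmac('sha256', $body, SHARED_SECRET), $sig)) {
    http_response_code(403);
    exit('bad signature');
}

$users = json_decode($body, true);
if (!is_array($users)) {
    http_response_code(400);
    exit('bad payload');
}
// Validate each username so nothing hostile lands in the .htaccess.
foreach ($users as $u) {
    if (!is_string($u) || !preg_match('/^[A-Za-z0-9._-]+$/', $u)) {
        http_response_code(400);
        exit('bad username');
    }
}

$htaccess = "AuthType shibboleth\n"
          . "AuthName \"Secure Login\"\n"
          . "ShibRequireSession on\n"
          . "Header append Cache-Control \"private\"\n"
          . 'Require user ' . implode(' ', $users) . "\n";

file_put_contents(HTACCESS_PATH, $htaccess, LOCK_EX);
echo 'ok';

Server B would then POST the JSON-encoded user list with the matching X-Signature header whenever users change.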
Postscript following the reply by Dr DOT
My first comment is that I am really surprised that your ISP runs your scripts as nobody. I take this to mean that all accounts are handled the same, and therefore that there is no UID/GID access-control separation of files created by separate accounts -- a big no-no in a shared environment. Typically, in suEXEC/suPHP implementations, any interactive scripts run under the UID of the script file -- in your case, I assume, your FTP account, which you anonymise to myftpuser. All I can assume is that your ISP runs shared accounts using mod_php5 with Apache running as nobody, which is very unusual, IMHO.
However, I run a general-information wiki for a doctor which is also set up this way, and what I do is keep all of the application-writable content in directories owned (in my case) by www-data. There is surely nothing stopping you from setting up such a directory with its own .htaccess file in it -- all owned by nobody and therefore updatable by a script.
If you want a simple example of this type of script see my article Running remote commands on a Webfusion shared service.
Here's how I solved the problem a few days ago.
Given that my HSP sweeps the server every night and changes any world-writable file to 664, I took a different approach.
I did this:
During the day I changed the directory containing my non-writable .htaccess file to 0777
then I deleted my .htaccess file
then I re-ran my script -- my fopen() command uses mode "w" (so I thought: if the file doesn't exist right now, why not let my PHP script create it brand new)
because, as I said somewhere above, my PHP runs as "nobody" -- voila! I now had a file owned by nobody in the directory
Later that night my HSP swept the server and changed my directory back from world-writable -- but no big deal ... I got my .htaccess file owned by "nobody", and I can now update the Require user directive automatically.
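For reference, a minimal sketch of the rewrite step described above (the path and the source of the user list are placeholders):

<?php
$path  = '/path/to/protected/.htaccess';  // placeholder
$users = ['bob', 'jill', 'steve'];        // in reality, pulled from the app's MySQL data

// Mode "w" creates the file if it doesn't exist, so once the old
// .htaccess is deleted, the new one is owned by PHP's user ("nobody").
$fh = fopen($path, 'w');
fwrite($fh, "AuthType shibboleth\n");
fwrite($fh, "AuthName \"Secure Login\"\n");
fwrite($fh, "ShibRequireSession on\n");
fwrite($fh, "Header append Cache-Control \"private\"\n");
fwrite($fh, 'Require user ' . implode(' ', $users) . "\n");
fclose($fh);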
Thanks for everyone's help on this.