We're using Mercurial on our production servers for some smaller web projects to easily deploy applications by pushing changes to the server over SSH. The repositories reside in the public_html folders of their respective accounts.
Now if I do a
hg clone http://www.domain.com
I get
real URL is http://www.domain.com/
requesting all changes
adding changesets
adding manifests
transaction abort!
rollback completed
abort: empty or missing revlog for .htaccess
Fortunately, cloning doesn't seem to be possible without authentication, but I'd rather not let anyone know there is an hg repository available in the first place.
Does anybody know a way to completely hide a Mercurial repository from the public, even though it lives in a public place like public_html/htdocs on a web server? I couldn't find any information on how to achieve that.
ETA: Apparently, I do not yet have enough reputation to vote any answers up. But thanks a lot to both of you for your helpful answers. :)
In the repo's .hg/hgrc add this:
[web]
allowpull = false
That will make the clone fail much earlier in the process, before any data is transferred (currently a cloner receives quite a lot of data before the rollback). Note that allowpull has no underscore, unlike most other multi-word Mercurial settings.
That completely prevents them from getting the contents via Mercurial, but they could still use wget, curl, or a web browser to pick through http://www.domain.com/.hg/ manually.
To avoid that you can block any URL containing /.hg/ at the web server level. In Apache that would look like:
<Directory "/your/doc/root/.hg">
Order deny,allow
deny from all
</Directory>
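If you are on Apache 2.4, the Order/Deny directives above are deprecated in favor of Require; a minimal sketch that also catches a .hg directory anywhere under the document root (the regex form is an assumption about your layout) would be:
<DirectoryMatch "/\.hg(/|$)">
Require all denied
</DirectoryMatch>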
You can
make the .hg directory inaccessible to your web server
make .hg invisible by .htaccess magic (assuming you use Apache httpd)
place the repositories outside of public_html and populate public_html with hg archive (sketched below)
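For the hg archive route, a minimal sketch, assuming the repository lives at ~/repos/mysite (a hypothetical path); hg archive exports a clean snapshot with no .hg directory and refuses to overwrite an existing target, so export to a fresh directory and swap:
# export a snapshot of the tip revision; nothing Mercurial-specific ends up in it
hg archive -R ~/repos/mysite -r tip ~/public_html.new
# swap the new snapshot into place
mv ~/public_html ~/public_html.old && mv ~/public_html.new ~/public_html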
Related
I am tasked with monitoring the changes made to the source files of a website. I am not developing the website, just watching it. I am a firm believer in using version control, and am a fan of git, but the developer who is actually maintaining the site is not, and I have decided it is better to let him continue to work however he wants (don't ask). I do not want to have to give him any instructions whatsoever (except possibly telling him that I am adding files or directories that he can ignore).
I consider myself an intermediate-level user of git, so I want to run this by an expert or two.
I am thinking I can install git on the (Linux) server, and then ask for status, and do commits, via SSH. Will this work without jeopardizing the normal operation of the web server?
Yes, using Git on a server should not interfere with the normal operation of the server (as mentioned in the comments, doing this on a production server is dodgy, but I'll leave that to one side).
Note that using Git normally will create a .git directory at the root of whatever you're tracking. If that is your web server's root directory, you might want to consider whether external access to the contents of the .git directory is a risk (depending on your server setup, this may or may not be a concern).
If you want to create the .git directory somewhere else outside your working tree, see the GIT_DIR environment variable.
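A minimal sketch of that setup, assuming the site lives at /var/www/site and the repository metadata should live in your home directory (both paths are placeholders):
# keep the repository outside the web root; no .git appears under /var/www/site
export GIT_DIR=/home/monitor/site.git
export GIT_WORK_TREE=/var/www/site
git init
git add -A && git commit -m "initial snapshot"
# later, over SSH, with the same two variables set:
git status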
I'm very new to Linux, please bear with me.
I have a linode with a LAMP stack running and I managed to configure my main site and a couple of subdomains and it's working great.
However, I want to have a dir called "dev" where I can put projects that I'm still working on. I need to be able to access this folder from my browser's address bar, and I don't want it reachable through a DNS name, but directly via my server's IP. For example:
http://218.42.42.42/dev/someproject
Since the document root is set to /var/www, placing the "dev" folder there isn't really an option - I want it to be in my ~ folder, for easier backups.
So what's the best way to make this work? A redirect, or should I move my doc root to the "dev" folder?
Thanks!
First, this would probably be more appropriate for Serverfault. With that in mind...
If I had to keep my dev environment in my home folder, I'd create a symlink in /var/www that ties to the dev folder.
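A minimal sketch of that, assuming your projects live in ~/dev and your username is youruser (both placeholders); Apache must be allowed to follow symlinks (Options FollowSymLinks) and to traverse your home directory:
# let the web server user traverse your home directory
chmod o+x /home/youruser
# expose ~/dev as /var/www/dev
ln -s /home/youruser/dev /var/www/dev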
As far as securing it goes, I don't know whether this is still a recommended or viable way of handling secure access, but http://www.codinglogs.com/blog/server-management/vps-setup-guide/nginx-password-protect-web-directory might be the way to go, as long as you feel secure using a username/password combination. Another valid answer (also on Stack Overflow) would be password protect /backoffice folder in nginx.
If you want something more secure, the next step would probably be firewall rules.
I wish to protect the folder holding the CMS's core files, along with its subfolders and files, from being accessed via the web, and I tried a .htaccess file with this:
order deny,allow
deny from all
The problem I have is that I can protect that folder, but then some scripts from that folder or its subfolders no longer work properly.
I also tried with this:
order deny,allow
deny from all
allow from 127.0.0.1
allow from 76.xx.xx.xx
In this case 76.xx.xx.xx is the static IP of the site.
Is there any way to prevent access to the files in that folder while still keeping everything working?
Another question.
I wish to further secure my site against hackers. Is there any way to prevent malicious files and code from being injected into my scripts/files, and/or to block my site from executing files from other sites or hosts, allowing it to work with local files only?
I would prefer a .htaccess solution, but if needed I have access to WHM for editing other files (in that case I will need a step-by-step guide). I am running the site on a Linux VPS with CentOS 5.
The usual way to do this is to put the publicly accessible files in an Apache-accessible directory, and everything else into a directory out of Apache's reach. For example:
/usr/
  local/
    mycms/
      public/
      lib/
/var/
  www/
    mycms -> softlink to /usr/local/mycms/public
Or better yet, make mycms an alias in Apache config, pointing at the public directory. This way, the files that should be accessible are, those that shouldn't be aren't, and you can still reference all your other files simply by ../lib/ etc.
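A minimal sketch of that alias, assuming the layout above (Require all granted is the Apache 2.4 form; on 2.2 it would be Order allow,deny plus Allow from all):
Alias /mycms /usr/local/mycms/public
<Directory /usr/local/mycms/public>
Require all granted
</Directory>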
I know this does not really answer your question literally, and if the CMS directory structure is not under your control, this may not be the best way to do it.
Another way is through rewrites - simply rewrite all requests to your CMS directory except for your CMS's entry script into requests for the entry script.
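A minimal sketch of such a rewrite in the CMS directory's .htaccess, assuming the entry script is index.php (a placeholder for whatever your CMS actually uses):
RewriteEngine On
# anything that is not the entry script itself gets rewritten to it
RewriteCond %{REQUEST_URI} !/index\.php$
RewriteRule ^.*$ index.php [L]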
I tried searching, but I couldn't really figure out the best search terms to find my answer.
I have a Ubuntu 10.04 server with Apache. I want to set up a site that will be versioned, so my file structure will look like:
/var/www/MyApp1.0
/var/www/MyApp1.1
/var/www/dev -> /var/www/MyApp1.1
/var/www/test -> /var/www/MyApp1.0
Where "dev" and "test" are symbolic links to the other folders. So my URL for those two environments will be "http://my-url.com/dev" or "http://my-url.com/test". For my prod environment, I want the URL in the browser to be just "http://my-url.com", without redirecting to something like "http://my-url.com/prod".
How can I set it up so that the base URL points to a specific version without a redirect changing the URL?
By the way, we use MS SourceSafe for version control, so we have older versions backed up as well, but I need multiple environments for dev, test, and prod.
Thanks,
Travis
I think you can just check out the right version into each of your dev, test, and live folders; it is easier to get the right version into the dev folder than to keep changing the symbolic link. Of course you could create a .htaccess file that redirects /dev to a specific version folder, but that just leaves you maintaining a large number of folders for different versions. What you should be doing is putting the website in a version control system, developing features and committing them, updating the test folder, and, if everything's alright, updating your production folder as well.
To do this right you may need to drop SourceSafe.
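As a sketch of how the bare URL can serve a specific version without a redirect (this assumes you control the vhost config and is not the only way to do it): point the DocumentRoot at a prod symlink and repoint the link when you release.
# in the vhost config (hypothetical path):
#   DocumentRoot /var/www/prod
ln -sfn /var/www/MyApp1.0 /var/www/prod   # release 1.0
ln -sfn /var/www/MyApp1.1 /var/www/prod   # later, promote 1.1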
I have Mercurial set up by following these instructions.
I'm trying to understand where, or in what file, to set up the users. Everything I've read seems kind of cryptic: it gives all these snippets of code saying "use this", but it seems to leave out how it's all connected and which file the snippets go in. Can someone please de-mystify all this for the ID10T#TheKeyboard?
Keep in mind that the basic model of Mercurial cannot actually prevent anybody from checking something in. The only thing it can do is prevent those users from uploading something to your copy of the repository.
IIS can set up authentication so that Mercurial knows which user is doing the uploading, and so only certain users are even allowed to try to upload. If all you care about is limiting who has commit access to your repository, you can stop right here. But if you want something finer-grained, I think you are currently out of luck.
But if it ever ends up working with web server authentication, you'll have to use the ACL extension if you want finer-grained access control than simply who's allowed to send changesets to your repository.
The way the ACL extension works when changes are being sent over a network is as a pre-transaction hook on changegroups (a set of Mercurial revisions). It can look through these changegroups to make sure all the changes satisfy a given set of criteria. There are a wide variety of criteria that can be specified.
The ACL extension can be configured either in the global hgrc file, in which case it applies to all repositories, or the .hg/hgrc file of the repository you want to control access to. In my opinion the global option isn't terribly useful.
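A minimal sketch of such a configuration in the repository's .hg/hgrc (the user names and path patterns are placeholders):
[extensions]
acl =

[hooks]
pretxnchangegroup.acl = python:hgext.acl.hook

[acl]
# only check changes arriving over the network, not local commits
sources = serve

[acl.allow]
# alice may touch anything; bob only files under docs/
** = alice
docs/** = bob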
Check out the "Securing Mercurial" section here:
http://win1337ist.wordpress.com/tag/mercurial-iis7/
Also see this related question that has a lot of good info:
How to setup Mercurial and hgwebdir on IIS?