Can we add multiple git-web URLs for a single Git server, so that specific people can access their projects with their own URLs?
When I tried searching for an answer to the above question, the results mention that git daemon is used. Is that so? How is it useful?
It is easier, especially with a Git repository hosting server managed by Gitolite, to:
access the same server
And:
different repositories (one per user)
OR different branches (one per user) within the same repository.
As to that last option, you can read "Gitolite: Personal branches":
"personal" branches are great for environments where developers need to share work but can't directly pull from each other (usually due to either a networking or authentication related reason, both common in corporate setups).
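As a sketch, a gitolite.conf implementing both layouts might look like the following (repo, user, and group names are illustrative; the USER keyword in a refex is Gitolite's personal-branch mechanism, expanding to the authenticated pusher's name):

```
# One repository per user:
repo alice-project
    RW+ = alice

repo bob-project
    RW+ = bob

# Or personal branches within one shared repository:
repo shared
    RW+ personal/USER/  = @all     # each user may write under personal/<their-name>/
    RW  master          = @leads
    R                   = @all
```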
I'm wondering what is the best practice in strengthening and enforcing good security on a circleci pipeline. I'd like to ensure that no one should be able to deploy to prod without having their PR approved by another user in the organization.
CircleCI offers a feature called contexts. Contexts can be used to ensure that only people within a security group are allowed to run certain jobs and therefore access certain environment variables. That works mostly fine, except we would like anyone to be able to deploy prod changes, provided their changes have been approved by someone else in a PR.
We've set things up so that merges to master can only be done through an approved PR, but now we're faced with two options:
Only people with access to the context can merge the change (not what we want, it slows us down too much)
We remove contexts (insecure, anyone with access to the repo could change the CI job to print the credentials and steal them). We could give every user with push access also access to the contexts, but then it becomes equally insecure.
What is the best way to tackle this? Are there other best practices for securing the pipelines?
I'm trying to setup a git server and I want to allow only a specific user to push commits to master branch.
I have tried to use Linux group permissions to meet the requirement above, but it does not seem to be the correct approach.
I don't even know what keywords to search with to find an answer to this.
Any help would be appreciated.
Git does not allow you to have private branches, but you can achieve this functionality by implementing your own server-side pre-receive hook. A GitHub Enterprise-specific pre-receive hook example is here, for reference.
However, if you are using a Git hosting service (like GitHub), it might have an option for this. GitHub, in particular, has an option called branch restrictions, but it requires a paid subscription unless your project is public.
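On a self-hosted server, such a hook could be sketched as below. This is a minimal sketch, not a complete solution: the ALLOWED_USER value is illustrative, and $GL_USER is the variable Gitolite exports for the authenticated pusher - a plain SSH or HTTP setup will need a different source for that identity.

```shell
#!/bin/sh
# Hypothetical pre-receive hook: only ALLOWED_USER may update master.
ALLOWED_USER="alice"

ref_allowed() {   # $1 = refname, $2 = pushing user
  # Allowed unless the ref is master and the pusher is someone else.
  [ "$1" != "refs/heads/master" ] || [ "$2" = "$ALLOWED_USER" ]
}

# Git runs this hook with $GIT_DIR set and feeds one
# "<old-sha> <new-sha> <refname>" line per updated ref on stdin.
if [ -n "$GIT_DIR" ]; then
  while read -r oldrev newrev refname; do
    if ! ref_allowed "$refname" "$GL_USER"; then
      echo "push to $refname denied for user '$GL_USER'" >&2
      exit 1
    fi
  done
fi
```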
You have two options:
By far the easiest solution is to use hosting software that already provides this functionality. You might want to look at GitLab, which has free options for both SaaS (hosted at gitlab.com) and self-managed instances (running your own GitLab instance). Or GitHub. Or Bitbucket. Or, I'm sure, others I'm not thinking of.
If you really don't want to use any of those, you can implement access control on a simple Git server, but it's not so simple. The short (or rather, glib) answer is "hooks" - but a hook is just a script that runs when something happens. In this case you'd use the pre-receive hook, which runs when someone tries to push and decides whether to accept the push. Now, how does your hook know who is pushing? The commit metadata does not indicate who's pushing. What you need is authentication around the actual connection, and visibility of that authentication in your script, so that the script can implement your authorization rules. That very quickly breaks down into "it depends on your environment".
Since it's not really possible to exhaustively cover every scenario for doing this manually, hopefully either you'll find a pre-packaged solution you like, or you'll find the above to be enough to get you pointed in the right direction to do it manually.
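For plain git-over-SSH through a single shared account, one common pattern is to identify the pusher per SSH key with a forced command in authorized_keys; the hook then reads that identity from the environment. A sketch (the GIT_PUSHER variable name is an illustrative convention, not a Git built-in):

```
command="env GIT_PUSHER=alice git-shell -c \"$SSH_ORIGINAL_COMMAND\"",no-port-forwarding ssh-ed25519 AAAA... alice@laptop
command="env GIT_PUSHER=bob git-shell -c \"$SSH_ORIGINAL_COMMAND\"",no-port-forwarding ssh-ed25519 AAAA... bob@desktop
```

Each key line pins the connecting user's identity server-side, so the pre-receive hook can trust $GIT_PUSHER regardless of what the client claims.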
I installed TortoiseSVN on my system and I want to give a checkout to my friend, so he needs to access my repository at something like 192.168.10.24/reponame/. How can he check it out?
The repository is not accessible.
To share your repository with another person you need to set up a Subversion server. TortoiseSVN is not a server, it's only a client. Your server will need to be accessible to your friend via network, so unless he's on the same network as you are, you'll need to open a port on your firewall, forward the traffic through, and pay attention to all the security concerns that come with operating a server on the Internet.
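As a sketch of the server route: after creating the repository (e.g. `svnadmin create /srv/svn/reponame`) and starting `svnserve -d -r /srv/svn`, your friend could run `svn checkout svn://192.168.10.24/reponame`. Access is then controlled by a config like this (paths illustrative):

```
# /srv/svn/reponame/conf/svnserve.conf
[general]
anon-access = none    # no anonymous access
auth-access = write   # authenticated users may read and write
password-db = passwd  # user/password pairs live in conf/passwd
```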
Or you could try Git: create a repository on GitHub, tell him where to find it, and have him send you pull requests when he wants to integrate his work with yours. Or try Bitbucket (with Mercurial or Git) - pretty much the same principle, the difference being that Bitbucket offers free private repositories while GitHub charges for private repos. You won't have to worry about networking, server operation, backups, security or anything else related.
I would like to share a project\solution with two teams, ideally on two TFS.
The option to have both teams using the same TFS doesn't work, because both teams don't have access to one of the TFS and hosting the solution on this TFS is a requirement.
It looks as follows:
Project\solution -> Team1 -> TFS1 (requirement)
Same Project\solution -> Team1 + Team2 -> TFS2 (???)
What are my options? Is there a tool out there that can do this? Should I use two different version control packages?
You can use the TFS Integration Platform to sync the Team Projects between the TFS installs... But the best option is to access one TFS through a TFS Proxy.
Another way is to use a Git repository: you can sync the repository with a remote, but the work items are still accessible only through TFS.
There are really three ways to solve your problem. The reality is that only #1 is effective if you can't use the cloud. Note that using #3 is fraught with pain and suffering. As with all dark-side hacks/workarounds, nothing meets the need like solving the underlying problem rather than sweeping it under the carpet.
All access - the only really viable solution is to give all required users access to the TFS server. I have worked with healthcare, banking, defence, and even insurance. In all cases, in all companies, you can have a single server that all required users can access. In some cases it is hard and fraught with bureaucracy, but ultimately it can be done.
Visual Studio Online - while there is fear of the cloud, this is likely your only option if you have externals that really can't directly access your network. This would be that common server. If you are in Europe, MS has just signed an agreement that ostensibly puts EU-located servers for an American company outside the reach of the Patriot Act (untested). You can also easily use the TFS Integration Tools to create a one-way sync between VSO and your local server.
Bi-directional synchronization - this can be achieved in many ways, but there is always a penalty in merging if you have changes on both ends. You can use the TFS Integration Tools, which are free, or a commercially available tool like OpsHub. If you are using Git as your repository within TFS, then you can use the command line to push source between two servers - even if they can't communicate - by using a USB stick.
Use #1 or #2, and only ever use #3 as a temporary, short-term measure.
I use the tools all the time to move things one way only from one system to another, and even this is a complicated experience. To move things bi-directionally you will need a full-time resource to resolve conflicts.
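The Git route in #3 can be sketched with `git bundle`, which packs history into a single file that can travel on a USB stick; paths and names below are illustrative.

```shell
# On the machine with the source history:
cd /path/to/clone
git bundle create /media/usb/repo.bundle --all   # all refs into one portable file

# On the disconnected machine, a bundle behaves like a read-only remote:
git clone /media/usb/repo.bundle repo            # first transfer
cd repo
git pull /media/usb/repo.bundle master           # subsequent transfers
```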
If the servers can communicate with each other you may be able to use a system akin to replication. There is one master TFS instance, and external sites use a proxy to allow the second team to work without direct or always-available access to the main server.
Alternatively you may be able to use branches - you could keep a branch for the external team's code and then merge to or from that branch to your mainline. In this scheme you would be able to sync the code by copying between the branch and external site, so you could even transfer updates on memory sticks if there is no direct net connection. Each sync would be pretty time consuming though, as someone on your main server team would have to merge code back and forth through the branch.
Another thing to consider is whether there is any way you can divide up the codebase and the tasks to minimise the overlap between the two teams, for example by having one team provide a library that the other uses. This is just to minimise the merging needed.
I have just installed Mercurial 1.9.3 on my Centos 5.5 x64 server. I'm publishing my repositories using hgweb.wsgi and with mod_wsgi.
This server is only for use by our internal codebase and development team so I've also protected my server using .htaccess and basic HTTP authentication. This is all good and I can clone repositories to local and push changes back to the central repo(s).
There is one thing I'm not understanding correctly which is how to control and manage users.
For example, I have two users in my central repository server .htpassword file: bob and kevin.
On each of bob and kevin's local machines they have their own Mercurial .hgrc files with their username settings configured.
However, these .hgrc users appear to have no relation at all to the users specified in the remote server's .htpassword file.
This means that I can end up with pushes to the central repository coming from "Mickey Mouse" and "Donald Duck" which isn't useful.
How do I enforce an end-to-end mapping of the local .hgrc username to the .htpassword user, i.e. ensure that the user specified in .hgrc matches the .htpassword user?
This isn't exactly an answer, but it's worth saying: Everyone worries about this at first, but in practice it's just not a problem.
At the time a user is committing with a DVCS, be it Mercurial, git, or another, they're not necessarily connected to any authentication/authorization system you control, so their commits are necessarily committed (locally) with whatever authorship info they want to assert. You can later reject those changesets upon push to a repo/server you control, but that will be a big hassle for you and for them. It's not just a matter of re-entering their name/password: they have to alter the history of that changeset and all subsequent changesets to change that authorship information.
The list of completely unsatisfying solutions to this is:
reject pushes where changeset authorship doesn't match authenticated users using a hook (in practice this sucks because developers pull from one another and push one another's changesets all the time)
make developers sign their changesets with the gpg extension or hgsigs (a huge hassle and they'll forget)
keep a pushlog on the server that records the authenticated user that pushed each changeset, separate from its authorship (Mozilla does this, and it's less of a hassle than the others but still not likely to ever be consulted).
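For what it's worth, the first option would be wired up server-side via the repository's hgrc; the script path is hypothetical, and the idea is that it compares $REMOTE_USER (set by Apache's basic auth) against the author of each incoming changeset, exiting non-zero to reject the push:

```
# repo/.hg/hgrc on the server
[hooks]
pretxnchangegroup.checkauthor = /usr/local/bin/check-author.sh
```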
If you positively can't risk bogus changesets entering a specific repo then use a human filter where people push to shared repo A and only reviewer/buildmanager/you can push from repo A to repo B, where repo B is where official builds come from. Mercurial itself uses this system.
In the end my advice is to not worry about it. Developers worth hiring are proud to put their name on their commits, and if you don't have developers worth hiring you're doomed to failure already.
How do I enforce end-to-end mapping of the local .hgrc username to the .htpassword user?
There is no way to do it. But the ACL extension + ssh (hg-ssh or mercurial-server) instead gives more predictable results.
PS
The [ui] username setting is usable and meaningful only as the author recorded in changesets - it's just a note, nothing more.
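A sketch of the ACL route, using the usernames from the question (branch layout illustrative):

```
# server-side .hg/hgrc
[extensions]
acl =

[acl]
sources = serve          # apply to changesets arriving over ssh/http

[acl.allow.branches]
default = bob, kevin     # only bob and kevin may push to the default branch
```

Because the acl extension checks the authenticated pushing user (not the changeset author), it gives the predictable enforcement that .hgrc usernames cannot.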