How to access a repository from another computer using TortoiseSVN?

I installed TortoiseSVN on my system and I want to give my friend a checkout, so he needs to access my repository at something like 192.168.10.24/reponame/. How can he check it out?
The repository is not accessible to him.

To share your repository with another person you need to set up a Subversion server; TortoiseSVN is not a server, it's only a client. Your server will need to be reachable by your friend over the network, so unless he's on the same network as you, you'll need to open a port on your firewall, forward the traffic through, and pay attention to all the security concerns that come with operating a server on the Internet.
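If you just want something minimal on your own machine, svnserve (which ships with Subversion) is the simplest sketch; the repository root path below is a placeholder, and svnserve's default port is 3690:

    # On your machine: serve the directory that contains your repositories
    svnserve -d -r /srv/svn          # -d = run as daemon, -r = repository root
    # and open/forward TCP port 3690 on your firewall or router.

    # On your friend's machine (in TortoiseSVN: "SVN Checkout..." with this URL):
    svn checkout svn://192.168.10.24/reponame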
Or you could try Git: create a repository on GitHub, tell him where to find it, and have him send you pull requests when he wants to integrate his work with yours. Or try Bitbucket (with Mercurial or Git); it's much the same principle, the difference being that Bitbucket offers free private repositories while GitHub charges for private repos. You won't have to worry about networking, server operation, backups, security or anything else related.
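If you go that route, the day-to-day commands are roughly these (the account and repository names are made up):

    # You: publish your existing project to a new GitHub repository
    git init
    git add .
    git commit -m "Initial import"
    git remote add origin https://github.com/yourname/reponame.git
    git push -u origin master

    # Your friend: take his own copy and later send pull requests
    git clone https://github.com/yourname/reponame.git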

Related

Secure Nexus against supply chain attacks

We switched from a publicly accessible reprepro Debian package repository (which was powered by an Apache web server) to Sonatype Nexus Repository OSS, which is a great piece of software. But we ran into one problem: when someone uploads a Debian package, it is signed on the Nexus server, which we expose to our customers/the internet. In addition, the GPG key and passphrase are known to Nexus for package signing.
Or in other words: I am afraid of a situation similar to the SolarWinds supply chain attack. Scenario: someone attacks the publicly accessible Nexus server/Nexus itself, takes over Nexus, changes existing packages and re-signs them with the GPG key/GPG passphrase. Then malicious code is served to our customers.
I thought about exposing the file blob store directory as a read-only target to a publicly exposed web server and keeping Nexus company-internal. Sadly, the internal file blob store layout is different, so that's not possible.
So my questions:
Is there a good way to expose the blob storage in a Deb/RPM/Docker/etc.-compatible format which can be served by a more protected, publicly accessible Apache server and consumed by tools like dpkg/yum/dnf/Docker etc.?
I also thought about a second, read-only Nexus server which is rsync'ed every 10 minutes or so (rough sketch after this list). An attacker could then only take over this mirror, and the package signature check (at least for DEB/RPM) would prevent installation of tampered packages.
Use an Apache reverse proxy with certificate-based authentication (I guess the most secure, but most complex, solution).
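For the read-only mirror idea I have something like this in mind, run from cron every 10 minutes (the host name and data directory are placeholders):

    # On the internal Nexus host: push the Nexus data directory to the
    # exposed, read-only mirror. The GPG signing key stays internal.
    rsync -a --delete /opt/sonatype-work/nexus3/ nexus-mirror.example.com:/opt/sonatype-work/nexus3/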
But maybe there is already such a feature/another way and I just missed it in the documentation?
In the end we came up with several steps to minimize the risk:
Use a proxy that filters via GeoIP (repository access is only possible from the countries where our customers reside)
Block all URIs except the following (replace REPONAME with the name of your repo); a rough Apache sketch follows the list:
/service/rest/repository/browse/REPONAME/*
/repository/REPONAME/*
/static/css/nexus-content.css*
/favicon.ico*
/favicon-*.png
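Assuming the proxy in front of Nexus is the Apache server already mentioned, the URI allow-list translates roughly to Apache 2.4 configuration like this (GeoIP filtering and the actual ProxyPass rules are omitted and depend on the modules you use):

    # Deny everything by default ...
    <Location "/">
        Require all denied
    </Location>

    # ... then re-allow only the repository-facing URIs listed above.
    <LocationMatch "^/(service/rest/repository/browse/REPONAME/|repository/REPONAME/|static/css/nexus-content\.css|favicon\.ico|favicon-.*\.png)">
        Require all granted
    </LocationMatch>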

How to allow only a specific user to push commits to master branch

I'm trying to set up a Git server and I want to allow only a specific user to push commits to the master branch.
I have tried using Linux group permissions to meet the requirement above, but that does not seem to be the right approach.
I don't even know what keywords to search for to find the answer to this.
Any help would be appreciated.
Git itself does not give you protected branches, but you can achieve this functionality by implementing your own server-side pre-receive hook; GitHub Enterprise's pre-receive hook examples can serve as a reference.
However, if you are using a Git hosting service (like GitHub), it might have an option for this. GitHub, in particular, has an option called branch restrictions, but it requires a paid subscription unless your project is public.
You have two options:
By far the easiest solution is to use hosting software that already provides this functionality. You might want to look at GitLab, which has free options for both SaaS (hosted at gitlab.com) and self-managed instances (running your own GitLab instance). Or GitHub. Or Bitbucket. Or, I'm sure, others I'm not thinking of.
If you really don't want to use any of those, you can implement access control on a simple Git server, but it's not so simple. The short (or rather, glib) answer is "hooks" - but a hook is just a script that runs when something happens; in this case you'd use the pre-receive hook, which runs when someone tries to push and decides whether to accept the push. Now, how does your hook know who is pushing? The commit metadata does not indicate who's pushing. What you need is authentication around the actual connection, and visibility of that authentication in your script, so that the script can implement your authorization rules. That very quickly breaks down into "it depends on your environment".
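As a rough illustration, here is what such a hook could look like in the common case where developers push over SSH and each connects as their own system account, so the pushing user is simply $USER (the allowed user name is made up; adapt the authentication lookup to your environment):

    #!/bin/sh
    # hooks/pre-receive on the server-side repository (make it executable).
    # Git feeds one line per updated ref on stdin: "<old-sha> <new-sha> <ref-name>".
    ALLOWED_USER="alice"   # hypothetical: the only user allowed to push master

    while read oldrev newrev refname; do
        if [ "$refname" = "refs/heads/master" ] && [ "$USER" != "$ALLOWED_USER" ]; then
            echo "Pushes to master are restricted to $ALLOWED_USER (you are $USER)." >&2
            exit 1
        fi
    done

    exit 0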
Since it's not really possible to exhaustively cover every scenario for doing this manually, hopefully either you'll find a pre-packaged solution you like, or you'll find the above to be enough to get you pointed in the right direction to do it manually.

Accessing a git server with a specific URL

Can we add multiple gitweb URLs for a single Git server, so that specific people can access their projects with their own URLs?
When I tried to search for an answer to the above, the results mention that git daemon is used. Is that so? How is it useful?
It is easier, especially with a Git repository hosting server managed by Gitolite, to:
access the same server
and use either:
different repositories (one per user)
OR different branches (one per user) within the same repository.
As to that last option, you can read "Gitolite: Personal branches":
"personal" branches are great for environments where developers need to share work but can't directly pull from each other (usually due to either a networking or authentication related reason, both common in corporate setups).

Tips for securing our "public" code repository server

I'm working at an IT company where we have used Perforce for years as our code repository system, in our internal company network. Because we are starting to work with an offsite company we are looking into ways of making our Perforce server accessible via the internet.
The most obvious way for me to do this is to set up a VPN server on our Linux gateway server and allow access through that. Obviously this works, but it seems very unsafe: if a VPN key of a certain user falls into the wrong hands, they can access our code repository AND our complete internal network.
My first thought was to create a Perforce proxy server (they supply software for this) and host it behind another gateway, with a VPN server. This shields the real Perforce server and our network better. The obvious problem here is that the proxy needs access to our Perforce server, meaning the two networks need to be connected anyway.
Our company is rather small, so taking into account we don't have a huge resource pool to spend on this, how would you approach this?
thanks a lot in advance,
Fred.
Instead of modifying the behavior and configuration of your current internal server, perhaps you should set up a second Perforce server, and use that server only for the interactions between your team and your offsite partner.
Include explicit steps in your workflow to periodically "publish" code from your internal server to your external server, and similar steps to periodically "consume" the offsite company's work by copying their changes back to your internal server and re-submitting them there.
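The "publish" step could be as simple as a scheduled script along these lines (workspace names, server addresses and paths are all made up; p4 reconcile needs a reasonably recent server):

    # One machine has two workspaces: "internal-ws" mapped to the internal
    # server and "external-ws" mapped to the external, shared server.

    # 1. Sync the latest code from the internal server
    P4PORT=internal-p4:1666 p4 -c internal-ws sync

    # 2. Copy the subset you want to share into the external workspace
    rsync -a --delete ~/internal-ws/shared-project/ ~/external-ws/shared-project/

    # 3. Detect adds/edits/deletes in the external workspace and submit them
    P4PORT=external-p4:1666 p4 -c external-ws reconcile //depot/shared-project/...
    P4PORT=external-p4:1666 p4 -c external-ws submit -d "Publish internal changes"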
Additionally, there are companies which offer a hosted Perforce service, so you don't even have to operate this external Perforce server yourself; you can let the hosting company manage the operational aspects of this code-sharing server.
One option is to put the official Perforce server on an outside, locked-down server. That is to say, everyone accesses Perforce on the outside server, which has Perforce and only Perforce on it.

Mercurial user management and security

I have just installed Mercurial 1.9.3 on my Centos 5.5 x64 server. I'm publishing my repositories using hgweb.wsgi and with mod_wsgi.
This server is only for use by our internal codebase and development team so I've also protected my server using .htaccess and basic HTTP authentication. This is all good and I can clone repositories to local and push changes back to the central repo(s).
There is one thing I'm not understanding correctly, which is how to control and manage users.
For example, I have two users in my central repository server .htpassword file: bob and kevin.
On each of bob and kevin's local machines they have their own Mercurial .hgrc files with their username settings configured.
However, these .hgrc users appear to have no relation at all to the users specified in the remote server's .htpassword file.
This means that I can end up with pushes to the central repository coming from "Mickey Mouse" and "Donald Duck" which isn't useful.
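For illustration, the two places a "user" lives look like this and are never cross-checked (names and hashes are, of course, made up):

    # Server side: .htpassword used by Apache basic auth (created with the htpasswd tool)
    bob:$apr1$...hash...
    kevin:$apr1$...hash...

    # Developer side: ~/.hgrc on each workstation
    [ui]
    username = Mickey Mouse <mickey@example.com>   # Mercurial never compares this to the HTTP login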
How do I enforce end-to-end mapping of the local .hgrc username to the .htpassword user, i.e. ensure that the user specified in .hgrc matches the .htpassword user?
This isn't exactly an answer, but it's worth saying: Everyone worries about this at first, but in practice it's just not a problem.
At the time a user is committing with a DVCS, be it Mercurial, Git, or another, they're not necessarily connected to any authentication/authorization system you control, so their commits are necessarily committed (locally) with whatever authorship info they want to assert. You can later reject those changesets upon push to a repo/server you control, but that will be a big hassle for you and for them. It's not just a matter of re-entering their name/password: they have to alter the history of that changeset and all subsequent changesets to change that authorship information.
The list of completely unsatisfying solutions to this is:
reject pushes where changeset authorship doesn't match the authenticated user, using a hook (in practice this sucks because developers pull from one another and push one another's changesets all the time) - a rough sketch follows this list
make developers sign their changesets with the gpg extension or hgsigs (a huge hassle and they'll forget)
keep a pushlog on the server that records the authenticated user who pushed each changeset, separate from its authorship (Mozilla does this, and it's less of a hassle than the others, but still not likely to ever be consulted).
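To make the first option concrete, a hook on the hgweb server could look roughly like this, assuming the authenticated HTTP user reaches the hook's environment as REMOTE_USER (true for CGI; under mod_wsgi you may need to copy it into os.environ yourself) and the script path is made up:

    #!/bin/sh
    # /usr/local/bin/check-author.sh  (hypothetical path)
    # Wire it up in the served repository's .hg/hgrc:
    #     [hooks]
    #     pretxnchangegroup.checkauthor = /usr/local/bin/check-author.sh
    # Reject the push if any incoming changeset's author (user part of the
    # e-mail address) differs from the authenticated user doing the push.
    bad=$(hg log -r "$HG_NODE:" --template '{author|user}\n' | sort -u | grep -vx "$REMOTE_USER")
    if [ -n "$bad" ]; then
        echo "push rejected: changesets authored by '$bad' but pushed by '$REMOTE_USER'" >&2
        exit 1
    fi
    exit 0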
If you positively can't risk bogus changesets entering a specific repo then use a human filter where people push to shared repo A and only reviewer/buildmanager/you can push from repo A to repo B, where repo B is where official builds come from. Mercurial itself uses this system.
In the end my advice is to not worry about it. Developers worth hiring are proud to put their name on their commits, and if you don't have developers worth hiring you're doomed to failure already.
How do I enforce end-to-end mapping of the local .hgrc username to the .htpassword user?
There is no way to do it. But the ACL extension plus SSH (hg-ssh or mercurial-server) gives more predictable results instead.
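If you go the ACL route, the server-side hgrc wiring is roughly this (see "hg help acl" for the exact pattern and group syntax; the path rule below is just an example):

    [hooks]
    pretxnchangegroup.acl = python:hgext.acl.hook

    [acl]
    # only check changesets arriving from remote clients
    sources = serve

    [acl.allow]
    # glob pattern = user allowed to touch matching files
    ** = bob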
PS: [ui] username = only supplies the author note recorded in each changeset's metadata, nothing more.
