Is there a Subversion appliance / toolset for the enterprise (Linux)?

I am looking for an enterprise Subversion setup that fits the following requirements:
I need at least 2 instances of the repository server for high availability reasons
Management of multiple repositories
The 2 repository servers need to be synchronized.
Easy administration and configuration
User & authorization management with LDAP integration (web-interface) - optional
Backup & restore features that guarantee recovery with no more than one day of lost data
Fast and easy setup.
Monitoring of the repository (traffic, data volume, hotspots, ...) - optional
Good security
Either open source or a low price tag, if possible
A pricing range, if a commercial tool is recommended
A VMware appliance would be great.
I am interested in an appliance or a set of Subversion tools that supports these requirements. The operating system should be Ubuntu.
The configuration and setup of the toolset should be doable in hours, or at most a few days.
Our development team is not huge (about 30 people), but grows continually.
I have been unable to find anything, with the exception of Subversion MultiSite, which seems too big (and too expensive? They give no price information) for our enterprise.
Can anyone recommend a solution? Could you also describe your experiences with the recommended tool?
The easier and faster the installation and configuration, the better. If it comes without a price tag, that is even better.
Thank you for any help.

I haven't seen a shrink-wrapped setup for this so far. If you want to build it from scratch, here are some pointers:
You can use Subversion's built-in svnsync command to mirror the repo; see the svnsync sketch after this list.
For multiple repos, just create one big repository and add paths below the root.
For me, the command line is "easy admin & config", so I can't help you there.
To get user management, let Subversion listen on localhost (127.0.0.1) and put an Apache web server in front; there are loads of user-management tools for web servers. See the Apache sketch after this list.
For backup & restore, see your standard server backup tools.
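For the mirroring, a minimal svnsync setup might look like this (the repository paths and master URL are made up):

    # On the mirror server: create an empty repository to receive the sync
    svnadmin create /var/svn/mirror

    # svnsync needs a pre-revprop-change hook on the mirror that allows
    # revision property changes
    printf '#!/bin/sh\nexit 0\n' > /var/svn/mirror/hooks/pre-revprop-change
    chmod +x /var/svn/mirror/hooks/pre-revprop-change

    # Point the mirror at the master and pull all revisions
    svnsync init file:///var/svn/mirror http://master.example.com/svn/repo
    svnsync sync file:///var/svn/mirror

Re-running 'svnsync sync' from cron (or from a post-commit hook on the master) keeps the mirror current.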
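For the user-management pointer, note that the common approach is to serve the repository directly through Apache's mod_dav_svn module and let Apache handle authentication, rather than proxying to svnserve. A minimal sketch with LDAP authentication, assuming mod_dav_svn and mod_authnz_ldap are loaded (the host name and LDAP base DN are placeholders):

    <Location /svn>
        DAV svn
        SVNParentPath /var/svn/repos
        AuthType Basic
        AuthName "Subversion Repositories"
        AuthBasicProvider ldap
        AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
        Require valid-user
    </Location>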

VisualSVN Server answers most of your requirements.
From the web promo page (my emphasis):
Zero Friction Setup and Maintenance
One package with the latest versions of all required components
Next-Next-Finish installation
Smooth upgrade to new version
Enterprise-ready Server for Windows Platform
Stable and secure Apache-based Windows service
Support for SSL connections
SSL certificate management
Active Directory authentication and authorization with groups support
Logging to the Windows Event Log
Access and operational logging (Enterprise edition only)
Based on open protocols and standards
Configured by Subversion committer to work correctly out-of-the-box

I can vouch for VisualSVN. I use the free version for our team of 4 developers, and it does everything it says on the tin, reliably. Installation also took all of 5 minutes. That said, it does require a Windows box.

Running a Subversion server in a VMware instance with one of VMware's "High Availability" tools will give you most of what you need. There are pre-built VMware appliances that have a Subversion server built in: http://www.vmware.com/appliances/directory/308
VMware's HA features will give you redundancy for the SVN server instance. (You're going to need multiple physical servers for true redundancy; if one server fails, VMware will restart the instance on another server.)
I don't know of any VMware appliances with special backup features, but this is pretty trivial to script: just run an 'svnadmin hotcopy' once a day, so you have a copy of the repository ready to go in case of corruption; a sketch follows below. (On top of this, you really should be using a SAN RAID array with tape backups.)
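A minimal nightly script along those lines might look like this (the paths are made up; 'svnadmin hotcopy' wants the destination to be empty or nonexistent):

    #!/bin/sh
    # Nightly Subversion backup: hotcopy plus integrity check
    REPO=/var/svn/repo
    BACKUP=/backups/svn/repo-$(date +%F)

    # hotcopy produces a byte-for-byte usable copy of the repository
    svnadmin hotcopy "$REPO" "$BACKUP"

    # verify the copy so a corrupt backup doesn't go unnoticed
    svnadmin verify "$BACKUP"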
Our setup:
Rack of Blade Servers
VMware Infrastructure
Virtualized Windows 2003 Server
If Windows crashes or one of the blades goes down, VMware restarts the Windows instance.
CollabNet Subversion Server, running Apache with SSPI authentication
SVN repo lives on a SAN
Nightly svnadmin hotcopy and verify of the repo (to another directory on the SAN), so we have a "hot" backup of the repo ready to go in case of a corruption problem.
Nightly tape backups of everything
Tapes taken offsite regularly
The cost of the server hardware and VMware is going to be your biggest issue (assuming you don't already have it). If you're not willing to make that kind of cash outlay, it may be worth looking at a hosted SVN provider.

We use svn for enterprise work. It is perfectly adequate. There are plenty of enterprise testimonials, including one from Fog Creek (Joel on Software, Stack Overflow).
I don't believe you need anything beyond the regular version.
I suppose you are aware that it is typical to use Subversion with Trac, the issue-tracking system.

Related

Remote access, configuration and monitoring needs for Linux machines in retail venues (shops) distributed across the country

We plan on deploying machines with a Linux distro in retail venues (shops) across the country. The venues have their own connectivity with the wider world and their own network, which we have no control over. We also need the ability to configure Chrome on these machines. The machines are simple desktops, which we will set up and distribute to venues, and they have the following ongoing needs…
Remote monitoring – we have New Relic for all our EC2 servers, and a strategy to use New Relic would work well as the team is already familiar with it. Is that feasible?
Remote configuration and upgrade – again, Puppet and MCollective are the tools of choice, as they would probably do the job and the team is well aware of the toolset.
Chrome configuration – will something like the Google Admin Console work to configure the browser?

Backup server for a NAS with web interface

I'm evaluating the features of a full-fledged backup server for my NAS (Synology). I need:
FTP access (backup remote sites)
SSH/SCP access (backup remote server)
web interface (in order to monitor each backup job)
automatic mail alerting if jobs fail
lightweight software (no mysql, sqlite ok)
optional: S3/Glacier support (as target)
optional: automatic long-term storage after a given time (i.e. local disk for 3 months, Glacier after that)
It seems like the biggest players are Amanda, Bacula and duplicity (and the like).
Any suggestions?
Thanks a lot.
Before jumping into full server backups, please clarify these questions:
Backup software comes in agent-based and agentless varieties; which one do you want to use?
Do you want to go with open-source or proprietary software?
Determine whether your source and destination are on the same LAN or across the Internet, and try to get a picture of the bandwidth between them and of the volume of data being backed up.
Also, if it matters to you, work out your GUI requirements and which other OS platforms the backup software must support.
Importantly, find out how mail notification is configured.
I am presently setting one up for my project and so far have installed Bacula v7.0.5 with Webmin as the GUI. I am trying the same configuration in the Amazon cloud, using S3 as storage by mounting it with s3fs on the EC2 instance; a mount sketch follows below.
My Bacula is the free community version. I haven't explored mail notification yet.
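For reference, the s3fs mount typically looks like this (the bucket name, mount point, and credentials are placeholders):

    # credentials file in the format ACCESS_KEY_ID:SECRET_ACCESS_KEY
    echo 'AKIAEXAMPLE:secretexample' > ~/.passwd-s3fs
    chmod 600 ~/.passwd-s3fs

    # mount the bucket; Bacula's storage daemon can then write to /mnt/s3
    mkdir -p /mnt/s3
    s3fs my-backup-bucket /mnt/s3 -o passwd_file=~/.passwd-s3fs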

How safe is a fresh CentOS 6 standard server installation?

Is installing CentOS with the standard installation relatively safe for a web server (leaving aside CMS safety, and only for WordPress)? The contents are:
- Virtualmin & Webmin
- APC caching
- Apache, MySQL and PHP
Everything is installed with default settings.
I installed the CentOS server at home and access it 100% from the local network.
If it is not safe, then what is the minimum requirement for safety?
'Safe' is too relative a term really. CentOS 6, Virtualmin and Webmin all have security bugs filed against them, some of which can even be exploited automatically by scripts and packages like Metasploit.
That said, no system will ever be perfectly secure unless you bury it underground with no net connection, so here are some good initial steps to take to improve security a little:
Turn off services and daemons that you don't need. For instance, it could be that you won't be using FTP, and will use SFTP for file transfer. If so, turn off the ones you aren't using.
Enforce a policy of unique and secure passwords of a decent length
Install system updates, especially security updates.
Modify iptables settings to disallow access to unused ports, and look into further iptables settings that can help; see the sketch after this list.
Consider key-based logins, 2- or 3-factor authentication, etc., and weigh the pros and cons (the Google Authenticator PAM module is very easy to install, for example).
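As a sketch, a minimal iptables policy for CentOS 6 that admits only the services you actually run might look like this (the ports are examples; adjust to your services):

    # allow established connections and loopback traffic
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -i lo -j ACCEPT

    # allow only the services you run (SSH, HTTP, HTTPS here)
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT

    # drop everything else
    iptables -P INPUT DROP

    # persist the rules across reboots (CentOS 6)
    service iptables save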
That's a good start. A key thing is to keep an eye on the server and monitor for unusual bandwidth or logins.
No box is a fortress, but you can at the very least discourage opportunists.

RPC command to initiate a software install

I was recently working with a product from Symantec called Norton Endpoint Protection. It consists of a server console application and a deployment application, and I would like to incorporate their deployment method into a future version of one of my products.
The deployment application allows you to select computer workstations running Win2K, WinXP, or Win7. The selection of workstations is provided from either AD (Active Directory) or an NT Domain (WINS/DNS NetBIOS lookup). From the list, one can click and choose the workstations onto which to deploy the endpoint software, which is Symantec's virus & spyware protection suite.
Then, after selecting which workstations should receive the package, the software copies the setup.exe program to each workstation (presumably over the administrative share \\pcname\c$) and then commands the workstation to execute setup.exe, resulting in the workstation installing the software.
I really like how their product works, but I'm not sure what they are doing to accomplish all the steps. I've not done any deep investigation into this, such as sniffing the network, and wanted to check here to see if anyone is familiar with what I'm talking about and knows how it's accomplished, or has ideas how it could be accomplished.
My thinking is that they are using the admin share to copy the software to the selected workstations and then issuing an RPC call to command the workstation to do the install.
What's interesting is that the workstations do this without any of the logged in users knowing what's going on until the very end where a reboot is necessary. At which point, the user gets a pop-up asking to reboot now or later, etc... My hunch is that the setup.exe program is popping this message.
To the point: I'm looking to find out the mechanism by which one Windows-based machine can tell another to perform some action or run some program.
My programming language is C/C++
Any thoughts/suggestions appreciated.
I was also looking into this, since I too want to remote-deploy software. I chose to packet-sniff PsTools, since it has proven itself quite reliable in such remote admin tasks.
I must admit I was definitely over-thinking this challenge. You have probably done your packet sniff by now and discovered the same things I have. I hope by leaving this post behind we can assist other developers.
This is how PsTools accomplishes execution of arbitrary code:
It copies a system service executable to \\server\admin$ (you either have to already have local admin on the remote machine, or supply credentials). Once the file is copied, it uses the Service Control Manager API to make the copied file a system service and start it.
Obviously, this system service can now do whatever it wants, including binding to an RPC named pipe. In our case, the system service would install an MSI. To get confirmation of successful installation you could either remotely poll a registry key or call an RPC function. Either way, you should remove the system service when you are done and delete the file. (psexec does not do this; I guess they don't want it to be used surreptitiously, and in that case leaving the service behind would at least give an admin a fighting chance of realizing someone had compromised their box.) This method does not require any preconfiguration of the remote machine, simply that you have admin creds and that file sharing and RPC are open in the firewall.
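What PsExec does through the Service Control Manager API can be approximated from the command line with built-in Windows tools. A rough sketch (the machine and service names are made up, and the copied EXE must actually implement the Windows service protocol):

    REM copy the service executable over the administrative share
    copy installer-svc.exe \\TARGETPC\ADMIN$\installer-svc.exe

    REM register and start it via the remote Service Control Manager
    sc \\TARGETPC create InstallerSvc binPath= "C:\Windows\installer-svc.exe"
    sc \\TARGETPC start InstallerSvc

    REM clean up afterwards, as described above
    sc \\TARGETPC stop InstallerSvc
    sc \\TARGETPC delete InstallerSvc
    del \\TARGETPC\ADMIN$\installer-svc.exe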
I've seen demos in C# using WMI, but I don't like those solutions. File sharing and RPC are the most likely to be open in firewalls; if they aren't, file sharing and remote MMC management of the remote server wouldn't work either. WMI can be blocked and still leave these functional.
I've worked with a lot of software that does remote installations, and a lot of them are not as reliable as pstools. My guess is that this is because those developers are using other methods that are not as likely to be open at the firewall level.
The simple solution is often the most elusive. As always, my hat is off to the SysInternals folks. They are true hackers in the positive, old school meaning of the word!
This sort of functionality is also available with products like LANDesk and Altiris. You need a daemonized listener on the client side that listens for instructions/connections from the server. Once a connection is made, any number of things can happen: you can transfer files, kick off installation scripts, etc., usually transparently to any users on that box.
I've used the Twisted Framework (http://twistedmatrix.com) to do this with a small handful of Linux machines. It's Python and Linux, not Windows, but the premise is the same: a listening client accepts instructions from a server and executes them. Very simple.
This functionality can also be accomplished with VB/PowerShell scripts in a Windows-based domain.

Linux development environment for a small team

Approach (A)
In my experience, a small team has a dedicated server with all development tools (e.g. compiler, debugger, editor, etc.) installed on it. Testing is done on a dedicated per-developer machine.
Approach (B)
At my new workplace there's a team using a different approach. Each developer has a dedicated PC which is used as both a development and a testing server. For testing, an in-house platform is installed on the PC to run the application. The platform executes several modules in kernel space and several processes in user space.
Problem
Now two additional small teams (about 6 developers in all) are joining to work on exactly the same OS and development environment. These teams don't use the platform mentioned above and can run their applications on plain Linux, so they have no need for a dedicated testing machine. We'd like to adopt approach (A) for all 3 teams, but the server must be stable, and installing the in-house platform described above on it is highly undesirable.
What would you advise?
What is the practice for development environments at your workplace: one server per team, or a dedicated PC/server per developer?
Thanks
Dima
We've started developing on VMs that run on the individual developers' computers, with a common subversion repository.
Benefits:
Developers work on multiple projects simultaneously; one VM per project.
It's easy to create a snapshot (or simply to copy the VM) at any time, particularly before those "what happens if I try something clever" moments. A few clicks will restore the VM to its previous (working) state; a snapshot sketch follows this list. For you, this means you needn't worry about kernel-space bugs "blowing up" a machine.
Similarly, it's trivial to duplicate one developer's environment so, for example, a temporary consultant can help troubleshoot. Best-practices warning: It's tempting to simply copy the VM each time you need a new development machine. Be sure you can reproduce the environment from your repository!
It doesn't really matter where the VMs run, so you can host them either locally or on a common server; the developers can still either collaborate or work independently.
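For example, if the VMs run under VMware Workstation, the vmrun tool can script the snapshot step (the .vmx path and snapshot name are placeholders):

    # take a snapshot before trying something clever
    vmrun -T ws snapshot /vms/project-a/project-a.vmx before-experiment

    # roll back if the experiment blows up
    vmrun -T ws revertToSnapshot /vms/project-a/project-a.vmx before-experiment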
Good luck — and enjoy the luxury of 6 additional developers!
