Approach (A)
In my experience, a small team shares a dedicated server with all development tools (compiler, debugger, editor, etc.) installed on it. Testing is done on a dedicated per-developer machine.
Approach (B)
At my new workplace the team uses a different approach. Each developer has a dedicated PC that serves as both a development and a testing machine. For testing, an in-house platform is installed on the PC and the application runs on top of it. The platform executes several modules in kernel space and several processes in user space.
Problem
Two additional small teams (~6 developers in total) are now joining to work on exactly the same OS and development environment. These teams don't use the platform mentioned above and can run their applications on plain Linux, so they don't need dedicated testing machines. We'd like to adopt approach (A) for all three teams, but the server must remain stable, and installing the in-house platform described above on it is highly undesirable.
What would you advise?
What is the practice for development environments at your workplace - one server per team(s), or a dedicated PC/server per developer?
Thanks
Dima
We've started developing on VMs that run on the individual developers' computers, with a common Subversion repository.
Benefits:
Developers work on multiple projects simultaneously; one VM per project.
It's easy to create a snapshot (or simply to copy the VM) at any time, particularly before those "what happens if I try something clever" moments. A few clicks (or the commands sketched after this list) will restore the VM to its previous (working) state. For you, this means you needn't worry about kernel-space bugs "blowing up" a machine.
Similarly, it's trivial to duplicate one developer's environment so, for example, a temporary consultant can help troubleshoot. Best-practices warning: It's tempting to simply copy the VM each time you need a new development machine. Be sure you can reproduce the environment from your repository!
It doesn't really matter where the VMs run, so you can host them either locally or on a common server; the developers can still either collaborate or work independently.
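If your VMs run under VirtualBox, for example, the snapshot/rollback cycle from the second point can even be scripted; a minimal sketch, assuming a VM named dev-vm:

    # take a snapshot before trying something clever
    VBoxManage snapshot "dev-vm" take "before-experiment"
    # ...experiment, break things in kernel space...
    # power the VM off and roll back to the known-good state
    VBoxManage controlvm "dev-vm" poweroff
    VBoxManage snapshot "dev-vm" restore "before-experiment"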
Good luck — and enjoy the luxury of 6 additional developers!
Related
There are many websites providing cloud coding, such as Cloud9 and repl.it. They must use server virtualization technologies. For example, Cloud9's workspaces are powered by Docker Ubuntu containers. Every workspace is a fully self-contained VM (see details).
I would like to know whether there are other technologies for creating sandboxed environments. For example, RunKit seems to have a lightweight solution:
It runs a completely standard copy of Node.js on a virtual server created just for you. Every one of npm's 300,000+ packages are pre-installed, so try it out
Does anyone know how RunKit achieves this?
You can see more in "Tonic is now RunKit - A Part of Stripe!" (see discussion):
we attacked the problem of time traveling debugging not at the application level, but directly on the OS by using the bleeding edge virtualization tools of CRIU on top of Docker.
The details are in "Time Traveling in Node.js Notebooks"
we were able to take a different approach thanks to an ambitious open source project called CRIU (which stands for checkpoint and restore in user space).
The name says it all. CRIU aims to give you the same checkpointing capability for a process tree that virtual machines give you for an entire computer.
This is no small task: CRIU incorporates a lot of lessons learned from earlier attempts at similar functionality, and years of discussion and work with the Linux kernel team. The most common use case of CRIU is to allow migrating containers from one computer to another.
The next step was to get CRIU working well with Docker.
Part of that setup is being open-sourced, as mentioned in this HackerNews thread.
It uses Linux containers, currently powered by Docker.
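To get a feel for what this looks like in practice, here is a minimal sketch using Docker's experimental checkpoint support, which is backed by CRIU (the container name is hypothetical; the Docker daemon must be running with experimental features enabled and criu installed):

    # a long-running process to checkpoint
    docker run -d --name counter busybox sh -c 'i=0; while true; do echo $i; i=$((i+1)); sleep 1; done'
    # freeze the whole process tree to disk (the container stops by default)
    docker checkpoint create counter cp1
    # resume it exactly where it left off
    docker start --checkpoint cp1 counter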
I read on the internet that "containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server".
I also read that Linux containers cannot run on Windows.
The stated benefits of containers include: "Containers run as an isolated process in userspace on the host operating system."
I don't understand: if containers are not platform independent, what are we actually achieving with them?
1) All applications on a Linux box run as isolated processes in their own user space anyway.
2) If containers only contain app code + runtimes + tools + libraries, those can be shipped together anyway. What do containers gain us here?
Posting the comment as an answer:
If containers only contain app code + runtimes + tools + libraries, those can be shipped together anyway. What do containers gain us here?
Suppose there is an enterprise with thousands of employees, all of whom work in Visual Studio C++. The administrator can create a container with Visual Studio installed (only the C++ components) and configured, and deploy that container to all employees. The employees can start working instantly, without bothering with installation and configuration of the application. And if an employee somehow corrupts the application, they only need to download the container again and they are good to go (a sketch of this workflow follows the list below).
Sandboxing
Security
Maintenance
Mobility
Backup
Many more to go.
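A Linux analogue of that Visual Studio scenario, sketched with Docker (the image name and registry are hypothetical):

    # Administrator: bake the configured toolchain into an image once and publish it
    docker build -t registry.example.com/cpp-dev:1.0 .
    docker push registry.example.com/cpp-dev:1.0
    # Each employee: pull the image and start working immediately;
    # a corrupted environment is fixed by simply re-running this
    docker run -it registry.example.com/cpp-dev:1.0 bash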
Are containers platform independent?
IMHO, I don't think so, as they rely on the host's system calls. Though I am open to other views if anybody knows this topic better.
Even considering only one platform, containers have their advantages; just perhaps not the ones you need right now. :-) Containers help in the administration/maintenance of complex IT systems. With containers you can easily isolate applications, their configuration, and their users, to achieve:
Better security (if someone breaks in, damage is usually limited to one container)
Better safety (if something breaks, or e.g. you make an error, only applications in a given container will be victim to this)
Easier management (containers can be started/stopped separately and can be transferred to other hosts (granted: a host with the same OS; in the case of Linux containers the host must also be Linux))
Easier testing (you can create and dispose of containers at will, anytime)
Lighter backup (you can backup just the container; not the whole host)
Some form of increased availability (with proper pre-configuration and automatic switch-over of a container to another host, you can be up and running quicker in case of a primary host failure)
...just to name the first advantages coming to mind.
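To make a few of these concrete, a minimal Docker sketch (container and image names are hypothetical):

    # easier testing: a throwaway container that disappears on exit
    docker run --rm -it ubuntu:20.04 bash
    # easier management: stop/start one application without touching the host
    docker stop myapp && docker start myapp
    # lighter backup: archive just this container's filesystem, not the whole host
    docker export myapp > myapp-backup.tar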
I installed Redmine 1.4 on Windows Server 2003 with MySQL. After some time the instance came to be used by more people, so I needed another one for testing (i.e. a development environment). Since I also wanted to be able to test plugins without the risk of destroying the production Redmine instance, I copied the original Redmine folder (say redmine_prod) to another one (say redmine_devel) and created a new, empty Redmine database for redmine_devel. I defined only the production environment in the first one and only development in the second. Both instances run on Webrick started as a Windows service, on different ports. Yet there is a big difference in performance between these two instances: the old production one runs very fast, whereas development runs slowly (several seconds to bring up pages, and this doesn't improve over time).
I also tested running redmine_devel on the thin server, which didn't improve performance at all.
What can be the reason? They both run under literally the same conditions.
Any hints appreciated.
OK, it's the default log level for environments other than production in Redmine: production uses :info as the default, whereas other environments use :debug. This can be changed by editing config\environments\[your_env].rb and adding:
config.logger.level = Logger::INFO
to the chosen environment. There are also other options available, naturally.
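For reference, the edit in context (Redmine 1.4 runs on Rails 2.x; the existing contents of the file are left as they were):

    # config\environments\development.rb
    # ...existing settings unchanged...
    config.logger.level = Logger::INFO   # default for non-production environments is Logger::DEBUG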
For the first time, I am writing a web service that will call upon external programs to process requests in batch. The front-end will accept file uploads and then place them in a queue. The workers on the backend will take that file, run it through ffmpeg and the rest of my pipeline, and send an email when the process is complete.
I have my backend process working on my computer (Ubuntu 10.04). The question is: should I try to re-create that pipeline using binaries that I've compiled from scratch? Or is it okay to use apt when configuring in The Real World?
Not all hosting services use Ubuntu, and not all give me root access. (I haven't chosen a host yet.) However, they will let me upload binaries to execute, and many give me shell access with gcc.
Usually this would be a no-brainer and I'd compile it all from scratch. But doing so - not to mention trying to figure out how to create a platform-independent .tar.gz binary - will be quite a task which ultimately doesn't really help me ship my product.
Do you have any thoughts on the best way to set up my stack so that I'm not tied to a specific hosting provider? Should I try creating my own .deb, which contains Ubuntu's version of ffmpeg (and other tools) with the configurations I need?
Short of a setup where I manage my own servers/VMs (which may very well be what I have to do), how might I accomplish this?
The question is: should I try to re-create that pipeline using binaries that I've compiled from scratch? Or is it okay to use apt when configuring in The Real World?
It's the reverse: it is not okay to deploy unpackaged software in The Real World, IMHO.
and not all give me root access
How would you deploy a .deb without root access? Chroot jails?
But doing so - not to mention trying to figure out how to create a platform-independent .tar.gz binary - will be quite a task which ultimately doesn't really help me ship my product.
+1 You answered your own question. Don't meddle unless you have to.
Do you have any thoughts on the best way to set up my stack so that I'm not tied to a specific hosting provider?
Only depend on well-packaged standard libs (such as ffmpeg); otherwise, include them in your own deployment. This problem hasn't been too hard to solve for tens of thousands of Linux applications over the decades, so it will probably be feasible for you too.
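For instance, a launcher script can prefer the host's packaged ffmpeg and fall back to a binary you ship yourself; a minimal sketch (the vendored path is hypothetical):

    #!/bin/sh
    # prefer the distro-packaged ffmpeg; fall back to a bundled static build
    if command -v ffmpeg >/dev/null 2>&1; then
        FFMPEG=ffmpeg
    else
        FFMPEG="$(dirname "$0")/vendor/ffmpeg"   # hypothetical bundled binary
    fi
    "$FFMPEG" -i input.mp4 -vn -acodec copy output.aac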
Out of the box:
Look at RightScale and other cloud providers/agents that have specialized images/tool chains, especially for video encoding.
A 'regular' VPS provider (with Xen or Virtuozzo) will not normally be happy with this kind of workload, but EC2, Rackspace and their lot will be absolutely fine with it.
In general, I wouldn't expect a cloud infrastructure provider that doesn't grant root access to allow computationally intensive workloads. $0.02
I am looking for an enterprise subversion setup, that will fit the following requirements:
I need at least 2 instances of the repository server for high availability reasons
Management of multiple repositories
The 2 repository servers need to be synchronized.
Easy administration and configuration
User & authorization management with LDAP integration (web-interface) - optional
Backup & restore features, that guarantee the recovery with not more than 1 day of lost data
Fast and easy setup.
Monitoring of the repository (traffic, data volume, hotspots, ...) - optional
good security
either open source or low price tag, if possible
some pricing range, if a commercial tool is recommended.
a VMWare appliance would be great.
I am interested in an appliance or a set of subversion tools, that support these requirements. The operating system should be Ubuntu.
The configuration and setup of the toolset should be doable in hours or at the most a few days...
Our development team is not huge (about 30 people), but grows continually.
I have been unable to find anything, with the exception of Subversion MultiSite, which seems too big (and expensive? - they give no price information) for our enterprise.
Can anyone recommend a solution? Could you also describe your experiences with the recommended tool?
The easier and faster the installation and configuration are, the better... If it comes without a price tag, that is even better.
Thank you for any help.
I haven't seen a shrink-wrapped setup for this so far. If you want to build it from scratch, here are some pointers:
You can use Subversion's built-in svnsync command for mirroring the repo (see the sketch after this list).
For multiple repos, just create a huge one and then add paths below the root.
For me, the command line is "easy admin & config", so I can't help you there.
To get user management, have Subversion listen only on localhost (127.0.0.1) and put an Apache web server in front. There are loads of user-management tools for web servers.
For backup & restore, see your standard server backup tools.
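For the mirroring point, a minimal svnsync sketch (hostnames and paths are hypothetical; svnsync itself ships with standard Subversion):

    # on the mirror host: create an empty repo and allow svnsync to set revision properties
    svnadmin create /srv/svn/mirror
    printf '#!/bin/sh\nexit 0\n' > /srv/svn/mirror/hooks/pre-revprop-change
    chmod +x /srv/svn/mirror/hooks/pre-revprop-change
    # point the mirror at the primary, then sync (re-run sync from cron to stay current)
    svnsync init file:///srv/svn/mirror http://primary.example.com/svn/repo
    svnsync sync file:///srv/svn/mirror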
VisualSVN Server answers most of your requirements.
From the web promo page (my emphasis):
Zero Friction Setup and Maintenance
One package with the latest versions of all required components
Next-Next-Finish installation
Smooth upgrade to new version
Enterprise-ready Server for Windows Platform
Stable and secure Apache-based Windows service
Support for SSL connections
SSL certificate management
Active Directory authentication and authorization with groups support
Logging to the Windows Event Log
Access and operational logging (Enterprise edition only)
Based on open protocols and standards
Configured by Subversion committer to work correctly out-of-the-box
I can vouch for VisualSVN. I use the free version for our team of 4 developers, and it does everything it says on the tin, reliably. Installation also took all of 5 minutes. That said, it does require a Windows box.
Running a Subversion server in a VMWare instance with one of VMWare's "High Availability" tools will give you most of what you need. There are pre-built VMWare appliances that have a Subversion server built in: http://www.vmware.com/appliances/directory/308
VMWare's HA features will give you redundancy of the SVN server instance. (You're going to need multiple physical servers for true redundancy; if one server fails, VMWare will restart the instance on another server.)
I don't know of any VMWare appliances with special backup features, but this is pretty trivial to script: just run an 'svnadmin hotcopy' once a day, so you have a copy of the repository ready to go in case of corruption. (On top of this, you really should be using a SAN RAID array with tape backups.)
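Such a nightly job might look like this (a POSIX shell sketch with hypothetical paths; on Windows the same two commands work from a scheduled batch file):

    #!/bin/sh
    # nightly: hot-copy the live repo, then verify that the copy is consistent
    REPO=/srv/svn/repo
    BACKUP=/srv/svn/backup/repo-$(date +%F)
    svnadmin hotcopy "$REPO" "$BACKUP"
    svnadmin verify "$BACKUP"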
Our setup:
Rack of Blade Servers
VMWare Infrastructure
Virtualized Windows 2003 Server
If Windows crashes or one of the blades goes down, VMWare re-starts the Windows instance.
CollabNet Subversion Server, running Apache with SSPI authentication
SVN repo lives on a SAN
Nightly svnadmin hotcopy and verify of the repo (to another directory on the SAN), so we have a "hot" backup of the repo ready to go in case of a corruption problem.
Nightly tape backups of everything
Tapes taken offsite regularly
The cost of the server hardware and VMWare is going to be your biggest issue (assuming you don't already have this.) If you're not willing to make this kind of cash outlay, it may be worth looking at a hosted SVN provider.
We use svn for enterprise work. It is perfectly adequate. There are plenty of enterprise testimonials, including one from Fog Creek (Joel on Software, Stack Overflow).
I don't believe you need anything beyond the regular version.
I suppose you are aware that it is typical to use Subversion together with Trac, the issue tracking system.