Setting up a Perforce depot for multiple projects

Summary: I want help figuring out how to set up the depot and my development environment so that I can support multiple, related projects.
Details:
Until now I've had a depot containing only one project: ProjectA, robot version A.
I am starting to work on a new version (ProjectB) which has some hardware differences: I/O port mappings and timers have changed. I would like to continue to develop code for both projects.
This means that ProjectB will share some files with ProjectA and some files will be different.
Since the differences are hardware related, what I'm thinking of doing is creating a common area for device-independent code plus project-specific areas for device-dependent code.
The differences are big enough that I don't want to use #ifdefs within files. Some differences are simple (different I/O port mappings) and some are completely new modules.
To make maintenance easier, I would like to be able to compare the device-dependent code between projects and propagate selected changes.
Finally, to minimize my burden during comparisons, I would like to mark differences that I know are okay so that they don't show up in future comparisons.
Help!

Your instincts are good -- you're trying to Not Duplicate Code. This is the core of good design & engineering.
As for the file layout, it's always annoying to have your directories too deep, but that's MUCH better than too shallow. Maybe:
<root>
    main/
        projects/
            robot1/...
            robot2/...
        shared1/
        shared2/
(Big repositories are much deeper than that, even.)
As for how you make shared code -- you could have a different setup.h or constants.h per project that drives what the various shared libraries do. Alternatively, build your shared libraries so they are parameterized at runtime:
SetupDrivers(0x80020); // address of PIO registers
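A minimal sketch of both approaches (the hw_config.h header name, the constants, and the printed output are all illustrative, not from the original codebase): each project ships its own header, and the shared code either includes it or accepts the values at runtime.

#include <cstdint>
#include <iostream>

// Per-project header idea: projects/robot1/hw_config.h and
// projects/robot2/hw_config.h each define these with their own values.
#ifndef PIO_BASE                 // stand-in value so the sketch compiles alone
#define PIO_BASE 0x80020u
#endif

// Compile-time variant: shared code includes the project's hw_config.h.
void SetupDriversFromConfig() {
    std::cout << "PIO registers at 0x" << std::hex << PIO_BASE << "\n";
}

// Runtime variant: each project's startup code passes its own address in.
void SetupDrivers(std::uint32_t pioBase) {
    std::cout << "PIO registers at 0x" << std::hex << pioBase << "\n";
}

int main() {
    SetupDriversFromConfig();  // value came from the per-project header
    SetupDrivers(0x80020);     // robot1's address; robot2 passes a different one
}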
And lastly -- if the projects really are different, decide if sharing the code really is the right thing. Usually yes, but everything is a choice. If you hope to manually "diff" your files to look for differences, it's really up to you to keep the structures close enough to diff. The "different config.h file for each project" idea mentioned above would help.
If you roll your own diff tool (in python or whatever) you could use special comments to flag "expected different lines".
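A sketch of that idea, written here in C++ rather than Python (the // DIFF-OK marker is just an invented convention): a naive positional line-by-line comparison that stays quiet about any mismatching line carrying the marker. It only works if the two files stay structurally close, as noted above.

#include <algorithm>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

static std::vector<std::string> readLines(const char* path) {
    std::ifstream in(path);
    std::vector<std::string> lines;
    for (std::string line; std::getline(in, line); )
        lines.push_back(line);
    return lines;
}

// A difference is "expected" if either side carries the marker comment.
static bool markedOkay(const std::string& line) {
    return line.find("// DIFF-OK") != std::string::npos;
}

int main(int argc, char** argv) {
    if (argc != 3) {
        std::cerr << "usage: diffok fileA fileB\n";
        return 2;
    }
    const std::vector<std::string> a = readLines(argv[1]);
    const std::vector<std::string> b = readLines(argv[2]);
    int unexpected = 0;
    for (std::size_t i = 0; i < std::max(a.size(), b.size()); ++i) {
        const std::string la = i < a.size() ? a[i] : "<missing line>";
        const std::string lb = i < b.size() ? b[i] : "<missing line>";
        if (la == lb || markedOkay(la) || markedOkay(lb))
            continue;                       // identical or flagged as expected
        std::cout << "line " << i + 1 << "\n  A: " << la << "\n  B: " << lb << "\n";
        ++unexpected;
    }
    return unexpected == 0 ? 0 : 1;         // non-zero exit if surprises remain
}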

Related

What are the advantages of using the file system to organize our code?

It is 2017, and as far as I know, the way programmers organize their code has not changed. We distribute our code into files and organize them with a tree structure (nested directories and files). When the codebase is huge and the relations between classes/components are complex, this organization approach strikes me as inefficient. With more files, either one directory has more files in it or the depth of the directories increases. And since we handle the directories directly, navigation costs me time and effort without tools like search.
Figure: a complex UML diagram, from https://github.com/CMPUT301W15T09/Team9Project/wiki/UML
We can use CAD to design/draw complex things; mind maps can be created in a similar manner. For these we do not need to deal with file systems. Can't we have something similar and hide the file system in a black box? Why have the fundamental organization methods not evolved for such a long time?
So I wonder: what advantages keep us from adopting a new way? What are the inherent advantages of using the file system to organize our code?
Different on-disk representations of source code have been tried (e.g. how Flash stores ActionScript inside binary .fla files) and they're generally unpopular. No one likes proprietary file formats. It also means you can't use text-based source control systems like Git, which means you can't do a text merge to resolve change conflicts.
We store source code in files in a tree structure (e.g. one OOP class or procedural module per file), with nested namespaces represented by nested directories because it's intuitive (and again, for better cohesion with source-control systems).
Some languages enforce this. Java, for example, requires the source file to be named the same as the class it contains and to sit in a directory path matching its containing package. For other languages like C# and C++ it just makes sense, because otherwise it's confusing to someone who is new to your codebase when they see class TurboEncabulator inside a file named PrefabulatedAmulite.cs.
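As a small illustration of the same convention in C++ (the path and names are invented for the example), the directory path mirrors the namespace and the file is named after the class it holds:

// File: src/turbo/Encabulator.h  (illustrative path)
namespace turbo {

class Encabulator {
public:
    void prefabulate();   // defined in src/turbo/Encabulator.cpp
};

}  // namespace turbo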

Haskell coding via Emacs/Vim in Linux: organising files into projects

I have two questions regarding coding Haskell in Emacs/Vim on Linux:
If one wanted to organise a Haskell code base into multiple projects (where the files of a given project are stored in a particular folder), can Emacs and Vim handle this? I ask because I have used IDEs before where all the projects are loaded into the session at once, but I am not sure how this would work in Emacs/Vim.
Another nice thing about IDEs is that I can go to the definition of a function from a given project, even if that definition is in a separate project (i.e. stored in a separate folder). Can Emacs/Vim handle this?
There was some discussion here: Haskell IDE for Windows?
Vim/Emacs don't care how you organize your files. They're primarily editors, so you can use them to edit files no matter how you lay out your directory structure. Other than that, it's good to follow some standard conventions, or to adapt the structure to the tools you are going to use. You can have a look at Structure of a Haskell project. It is also convenient to use Cabal to build your project and manage its dependencies.
Vim/Emacs can use Ctags index files for navigating your project. See Tags for the available options for creating these index files. The indexing tools don't expect any specific project structure, so if you need to navigate across multiple projects, you can simply index a directory containing multiple projects into one index file.
There's an Emacs mode called Projectile that allows some "project-like" functionality, which might be what you're looking for. I haven't really used it myself (I tend to stick to the old *nix way of just editing files), so I can't give you details, but it can't hurt to check it out.
I'm not sure I understood the gist of the question correctly. The following is my suggestion for managing multiple projects in Vim, regardless of the language employed.
You could take advantage of Vim sessions. With a few custom functions/keymappings in your .vimrc you'll be able to keep a separate session file for each project, either in the project directory or in a directory you dedicate to session files.
This is the general how-to: http://vim.runpaint.org/editing/managing-sessions/
And there you'll find a number of scripts that specifically address the issue of handling multiple project-specific sessions: How to auto save vim session on quit and auto reload on start including split window state?

Is there any good build tool that stays out of my way?

As the title says, I want a build tool that pretty much stays out of my way.
I would rather specify rules than steps in the build process. I want to say that I want a binary with a given name placed in the root directory of my project, that .o files should go in an obj/tmp dir, and that the source is in the Source directory.
I do NOT want to tell it about this'n'that file, as I keep adding new files rather quickly; it should just scan the source directory (and its subdirectories) looking for Ragel (.rl) and C++ (.cxx) code and do what's necessary to make it all into an executable.
I have looked into many tools, like auto{make,conf,header} (I did not really like placing the files it wanted in a subdir of the project root; Eclipse did not like that either) and CMake (it seems I have to add all source files myself, and it is pretty much a variation on autotools in my eyes). I have also read about Ant and Maven (I am also allergic to XML; it's a good format for serializing data for applications, not so much for humans -- I would prefer YAML) and others on Wikipedia. And I have seen tools that seem good but require being set up as a web server, which is kind of overkill.
Also, I really need to be able to work offline, without an internet connection!
Right now it seems like the best option is to write a little script that finds all .cxx files, writes a Unity.cxx, and builds that one with g++, which would probably be quite fast but is too much of an ugly hack, I guess.
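Roughly what I have in mind (just a sketch, written here in C++17 rather than a shell script; the Source directory and Unity.cxx names follow my layout above):

// unity_gen.cxx -- the "ugly hack": pull every .cxx under Source/ into one
// Unity.cxx, then build that with g++.
#include <filesystem>
#include <fstream>

int main() {
    namespace fs = std::filesystem;
    std::ofstream unity("Unity.cxx");
    for (const auto& entry : fs::recursive_directory_iterator("Source")) {
        if (entry.is_regular_file() && entry.path().extension() == ".cxx")
            unity << "#include \"" << entry.path().generic_string() << "\"\n";
    }
    // then, from the project root:  g++ -O2 -o myprog Unity.cxx
}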
Bonus Points:
Fast builds
Ability to type build test-1 or something and have it build and directly run test-1
Multi-core builds (i.e. faster builds)
Really does not interrupt my train of thought
CMake is great. It's free, cross-platform, and reasonably well documented. It supports "out of source" builds, meaning none of the build files are placed in the source directory, which makes source control a bit easier. It can be set up to find new files (globbing). Fast? It generates makefiles; after that it's up to your compiler. Multi-core? Again, that's more a function of the underlying build tool (e.g. make -j). I've used CMake on Windows, Linux, and Mac... it just works.
Another that I haven't tried but have read about and plan to test is premake... http://industriousone.com/sample-script
cake from CoffeeScript is quite good, and I'm writing a similar tool using Lua myself.
CMake and premake aren't build/make tools; they are build/make-descriptor generators, which may fit a large number of projects that aren't changing too much, but not a project where rapid prototyping is key.
Right now, I'm doing a project where the browser updates when you hit the save button in your text editor; you do not need to go to the browser and hit F5 (which would cause a small delay while the browser loads everything again, and you would most likely lose the state of the page -- say you have a menu open and wish to tweak its look; you would be forced to navigate there again in your RIA).

How to build Linux system from kernel to UI layer

I have been looking into the MeeGo, Maemo, and Android architectures.
They all have a Linux kernel, build some libraries on top of it, then build middle-layer libraries [e.g. telephony, media, etc.].
Suppose I want to build my own system: a Linux kernel, some binaries like glibc and D-Bus, a UI toolkit like GTK+ and its binaries.
I want to compile every project from source to customize my own Linux system for desktop, netbook, and handheld devices [starting with a netbook first :)].
How can I build my own customized system from kernel to UI?
I apologize in advance for a very long-winded answer to what you thought would be a very simple question. Unfortunately, piecing together an entire operating system from many different bits in a coherent and unified manner is not exactly a trivial task. I'm currently working on my own Xen-based distribution; I'll share my experience thus far (beyond Linux From Scratch):
1 - Decide on a scope and stick to it
If you have any hope of actually completing this project, you need to write an explanation of what your new OS will be and do once it's completed, in a single paragraph. Print that out and tape it to your wall, directly in front of you. Read it, chant it, practice saying it backwards, and do whatever else may help you to keep it directly in front of any urge to succumb to feature creep.
2 - Decide on a package manager
This may be the single most important decision that you will make. You need to decide how you will maintain your operating system with regard to updates and new releases, even if you are the only subscriber. Anyone, including you, who uses the new OS will surely find a need to install something that was not included in the base distribution. Even if you are pushing out an OS to power a kiosk, it's critical for all deployments to keep themselves up to date in a sane and consistent manner.
I ended up going with apt-rpm because it offered the flexibility of the popular .rpm package format while leveraging apt's known sanity when it comes to dependencies. You may prefer using yum, apt with .deb packages, slackware style .tgz packages or your own format.
Decide on this quickly, because it's going to dictate how you structure your build. Keep track of the dependencies of each component so that it's easy to roll packages later.
3 - Re-read your scope then configure your kernel
Avoid the kitchen sink syndrome when making a kernel. Look at what you want to accomplish and then decide what the kernel has to support. You will probably want full gadget support, compatibility with file systems from other popular operating systems, security hooks appropriate for people who do a lot of browsing, etc. You don't need to support crazy RAID configurations, advanced netfilter targets and minixfs, but wifi better work. You don't need 10GBE or infiniband support. Go through the kernel configuration carefully. If you can't justify including a module by its potential use, don't check it.
Avoid pulling in out of tree patches unless you absolutely need them. From time to time, people come up with new scheduling algorithms, experimental file systems, etc. It is very, very difficult to maintain a kernel that consumes from anything else but mainline.
There are exceptions, of course: when going out of tree is the only way to meet one of the goals stated in your scope. Just remain conscious of how much additional work you'll be making for yourself in the future.
4 - Re-read your scope then select your base userland
At the very minimum, you'll need a shell, the core utilities, and an editor that works without a window manager. Paying attention to dependencies will tell you that you also need a C library and whatever else is needed to make the base commands work. As Eli answered, Linux From Scratch is a good resource to check. I also strongly suggest looking at the LSB (Linux Standard Base); this is a specification that lists common packages and components that are 'expected' to be included with any distribution. Don't follow the LSB as a standard; compare its suggestions against your scope. If the purpose of your OS does not necessitate the inclusion of something and nothing you install will depend on it, don't include it.
5 - Re-read your scope and decide on a window system
Again, beware the everything-including-the-kitchen-sink syndrome: try to resist the urge to just slap a stock install of KDE or GNOME on top of your base OS and call it done. Another common pitfall is to install a full-blown version of either and work backwards by removing things that aren't needed. For the sake of sane dependencies, it's really better to work on this from the bottom up rather than the top down.
Decide quickly on the UI toolkit that your distribution is going to favor and get it (with its supporting libraries) in place. Define UI consistency quickly and stick to it. Nothing is more annoying than having 10 windows open that behave completely differently as far as controls go. When I see this, I diagnose the OS with multiple personality disorder and want to medicate its developer. There was just an uproar regarding Ubuntu moving window controls around, and they were doing it consistently... the inconsistency was the behavior changing between versions. People get very upset if they can't immediately find a button or have to increase their mouse mileage.
6 - Re-read your scope and pick your applications
Avoid kitchen sink syndrome here as well. Choose your applications not only based on your scope and their popularity, but also on how easy they will be for you to maintain. It's very likely that you will be applying your own patches to them (even simple ones, like a messenger updating a blinking light on the toolbar).
It's important to keep every architecture that you want to support in mind as you select what you want to include. For instance, if Valgrind is your best friend, be aware that you won't be able to use it to debug issues on certain ARM platforms.
Pretend you are a company and that you will be an employee there. Does your company pass the Joel test? Consider a continuous integration system like Hudson as well. It will save you lots of hair pulling as you progress.
As you begin unifying all of these components, you'll naturally be establishing your own SDK. Document it as you go, and avoid breaking it on a whim (refer to your scope, always). It's perfectly acceptable to just let Linux be Linux, which turns your SDK into formal guidelines more than anything else.
In my case, I'm rather fortunate to be working on something that is designed strictly as a server OS. I don't have to deal with desktop caveats and I don't envy anyone who does.
7 - Additional suggestions
These are in random order, but noting them might save you some time:
Maintain patch sets for every line of upstream code that you modify, in a numbered sequence. An example might be 00-make-bash-clairvoyant.patch; this allows you to maintain patches instead of entire forked repositories of upstream code. You'll thank yourself for this later.
If a component has a test suite, make sure you add tests for anything that you introduce. It's easy to just say "great, it works!" and leave it at that; keep in mind that you'll likely be adding even more later, which may break what you added previously.
Use whatever version control system is in use by the authors when pulling in upstream code. This makes merging of new code much, much simpler and shaves hours off of re-basing your patches.
Even if you think upstream authors won't be interested in your changes, at least alert them to the fact that they exist. Coordination is essential, even if you simply learn that a feature you just put in is already in planning and will be implemented differently in the future.
You may be convinced that you will be the only person to ever use your OS. Design it as though millions will use it, you never know. This kind of thinking helps avoid kludges.
Don't pull upstream alpha code, no matter what the temptation may be. Red Hat tried that, it did not work out well. Stick to stable releases unless you are pulling in bug fixes. Major bug fixes usually result in upstream releases, so make sure you watch and coordinate.
Remember that it's supposed to be fun.
Finally, realize that rolling an entire from-scratch distribution is exponentially more complex than forking an existing distribution and simply adding whatever you feel it lacks. You need to reward yourself often by booting your OS and actually using it productively. If you get too frustrated, consistently confused, or find yourself putting off work on it, consider making a lightweight fork of Debian or Ubuntu. You can then go back and duplicate it entirely from scratch. It's no different than prototyping an application in a simpler/rapid language first before writing it for real in something more difficult. If you want to go this route (first), gNewSense offers utilities to fork your own OS directly from Ubuntu. Note that, by default, their utilities will strip any non-free bits (including binary kernel blobs) from the resulting distro.
I strongly suggest going the completely-from-scratch route (first) because the experience that you will gain is far greater than making yet another fork. However, it's also important that you actually complete your project. Best is subjective; do what works for you.
Good luck on your project, see you on distrowatch.
Check out Linux From Scratch:
Linux From Scratch (LFS) is a project that provides you with step-by-step instructions for building your own customized Linux system entirely from source.
Use Gentoo Linux. It is a compile-from-source distribution and very customizable. I like it a lot.

Reorganizing a project for expansion/reuse

The scope of the project I'm working on is being expanded. The application is fairly simple but currently targets a very specific niche. For the immediate future I've been asked to fork the project to target a new market and continue developing the two projects in tandem.
Both projects will be functionally similar so there is a very strong incentive to generalize a lot of the guts of the original project. Also I'm certain I'll be targeting more markets in the near future (the markets are geographic).
The problem is that previous maintainers of the project made a lot of assumptions that tie it to its original market. It's going to take quite a bit of refactoring to separate the generic code from the market-specific code.
To make things more complex several suggestions have been tossed around on how to organize the projects for the growing number of markets:
Each market is a separate project, commonalities between projects are moved to a shared library, projects are deployed independently.
Expand the existing project to target multiple markets, limiting functionality based on purchased license.
Create a parent application and redesign projects as plugins, purchased separately
All three suggestions have merit, and ideally I would like to structure the code to be flexible enough that any of these is possible with minor adjustments. Suggestion 3 appears to be the most daunting as it would require building a plugin architecture. The first two suggestions are a bit more plausible.
Are there any good resources available on the pros and cons of these different architectures?
What are the pros and cons on sharing code between projects verses copying and forking?
Forking is usually going to get you a quicker result initially, but almost always going to come around and bite you in maintenance -- bug fixes and feature enhancements from one fork get lost in the other forks, and eventually you find yourself throwing out whole forks and having to re-add their features to the "best" fork. Avoid it if you can.
Moving on: all three of your options can work, but they have trade-offs in terms of build complexity, cost of maintenance, deployment, communication overhead and the amount of refactoring you need to do.
1. Each market is a separate project
A good solution if you're going to be developing simultaneously for multiple markets.
Pros:
It allows developers for market A to break the A build without interfering with ongoing work on B
It makes it much less likely that a change made for market A will cause a bug for market B
Cons:
You have to take the time to separate out the shared code
You have to take the time to set up parallel builds
Modifications to the shared code now have more overhead since they affect both teams.
2. Expand the existing project to target multiple markets
Can be made to work okay for quite a while. If you're going to be working on releases for one market at a time, with a small team, it might be your best bet.
Pros:
The license work is probably valuable anyway, even if you move toward (1) or (3).
The single code base allows refactoring across all markets.
Cons:
Even if you're just working on something for market A, you have to build and ship the code for markets B, C and D as well -- okay if you have a small code base, but increasingly annoying as you get into thousands of classes
Changes to one market risk breaking the code for other markets
Changes to one market require other markets to be re-tested
3. Create a parent application and redesign projects as plugins
Feels technically sweet, and may allow you to share more code.
Pros:
All the pros of (1), potentially, plus:
clearer separation of shared and market-specific code
may allow you to move toward a public API, which would allow offloading some of your work onto your customers and/or selling lucrative service projects
Cons:
All the cons of (1), plus requires even more refactoring.
I would guess that (2) is sort of where you find yourself now, apart from the licensing. I think it's okay to stay there for a little while, but put some effort into moving toward (1) -- moving the shared code into a separate project even if it's all built together, for instance, trying to make sure the dependencies from market code to shared code are all one-way.
Whether you end up at (1) or (3) kind of depends. Mostly it comes down to who's "in charge" -- the shared code, or the market-specific code? The line between a plugin, and a controller class that configures some shared component, can be pretty blurry. My advice would be, let the code tell you what it needs.
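To make that line a bit more concrete, here is a minimal sketch (all names are invented, not from the poster's codebase): the shared code defines the extension point and a shared component, while the market-specific code both implements the extension point and wires everything together.

// Shared library: defines the extension point and a shared component.
#include <iostream>

struct TaxPolicy {                               // the "plugin" interface
    virtual ~TaxPolicy() = default;
    virtual double taxFor(double net) const = 0;
};

class InvoiceEngine {                            // shared, market-agnostic code
public:
    explicit InvoiceEngine(const TaxPolicy& policy) : policy_(policy) {}
    double total(double net) const { return net + policy_.taxFor(net); }
private:
    const TaxPolicy& policy_;
};

// Market-specific code: implements the interface and acts as the "controller".
struct MarketATax : TaxPolicy {
    double taxFor(double net) const override { return net * 0.19; }
};

int main() {
    MarketATax tax;                              // market A's wiring; market B
    InvoiceEngine engine(tax);                   // would swap in its own policy
    std::cout << engine.total(100.0) << "\n";
}

Whether main() and the wiring live in one shared application that loads market code as plugins, or in each market's own executable that configures shared components, is exactly the "who's in charge" question above.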
1) NO! You don't want to manage different branches of the same code base... Because as common as the code may be, you will want to make sweeping changes, and one project will "at the moment" not be as important as the others, and then you will get one branch growing faster than the others.... insert snowball.
2) This is more or less the industry standard. Big config file, limit things based on license/configuration. It can make the app a bit cumbersome, but as long as the code complains about mutually exclusive stuff and all the developers are in constant communication about new features and how they ripple throughout the entire application, you should do fine. This also is the easiest to hack, if that is a concern.
3) This also 'can' work. If you are using C#, plugins are relatively simple; you only have to worry about dependency hell. If the plugins have any chance of becoming circularly interdependent (that is, a requires b requires c requires a), then this will quickly explode and you will revert (quite easily) back to #2.
The best resources you have are probably the past experiences of your coworkers on different projects, and the experience of people yammering about it on here or Slashdot or wherever. Certainly the cheapest.
Pros of sharing code:
One change changes everything.
Unified data model.
There is only one truth. (Much easier for everyone to be on the same page)
Cons of sharing code:
One change changes everything.. Be careful.
If one bug is in it, it affects everything.
Pros of copying/forking:
Usually quicker to implement a specific feature for a specific customer.
Faster to hack when you realize that assumption A is only applicable for markets B and C, not D.
Cons of copying/forking:
One or more of the copied projects will eventually fail, due to a lack of cohesion in your code.
As said above: sweeping changes take a lot longer.
Good luck.
You said "copying and forking", which leads me to think that perhaps you haven't considered managing this "fork" as a branch in a revision control system like SVN. By doing it this way, when you refactor the branch to accommodate a different industry, you can merge those changes back into the main trunk with the aid of the revision control system.
If you are following a long-term strategy of moving to a single app where all the variations are controlled by a config file (or an SQLite config database), then this approach will help you. You don't have to merge anything until you are confident that you have generalised it for both industries, so you can still build two unique systems for as long as you need to. But you aren't backing yourself into a corner, because it is all in one source code tree: the trunk for the legacy industry, and one branch for each new industry.
If your company really wants to attack multiple industries, then I don't think that the config database solution will meet all your needs. You will still need special code modules of some sort. A plug-in architecture is a good thing to put in because it will help, particularly if you embed a scripting engine like Python into your app. However, I don't think that plugins will be able to meet all your code variation requirements once you get into the "thousands of classes" scale.
You need to take a pragmatic approach that allows you to build a separate app today for the new industry, but makes it relatively easy to merge the improvements into the existing app as you go along. You may never reach the nirvana of a single trunk with thousands of classes and several industries, but you will at least have tamed the complexity, and will only have to deal with really important variations where there is real divergence in the industry need.
If I were in your shoes, I would also be looking at any and all features in the app which might be considered "reporting" and trying to factor them out, maybe even into an off-the-shelf reporting tool.

Resources