I have two versions of a framework, both stored under a "thirdparty" directory in my depot. One is a beta that I'm evaluating, and the other is stable. When I first made my workspace, I set it up to use the stable one, but now I'd like to switch it to use the beta one for testing. I've got a few questions:
Let's say the frameworks are named Framework-2.0-beta and Framework-1.0-stable. Ideally I'd like either one to map to a plain "framework" directory on my local machine, so that I don't have to change all my include paths and such in my project files. Then, in theory, if I wanted to swap back and forth between frameworks, I'd just change which one I'm pulling from the depot and do an update again. How do I do this? I tried simply mapping them as described above, but I get errors with this method.
Is this the best way to go about something like this? Or am I supposed to use a separate workspace for each version of the framework?
Thanks for your help.
The most straightforward way, using plain Perforce means, is to submit both versions of the framework to the depot and map one of them in the client view of your project.
For example, submit the frameworks to depot paths like these:
//thirdparty/framework-2.0-beta/...
//thirdparty/framework-1.0-stable/...
In your project's client view you map one of the two to a fixed target path, e.g.:
//thirdparty/framework-2.0-beta/... //yourclient/framework/...
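To switch versions later, edit the client view so it points at the other depot path and sync again. A rough sketch of the commands (assuming your client really is named yourclient):

    p4 client yourclient
        (change the view line to: //thirdparty/framework-1.0-stable/... //yourclient/framework/...)
    p4 sync framework/...

Perforce notices the changed mapping and replaces the contents of the local framework directory with the other version. The errors you mention most likely come from mapping both depot paths to the same client path at once; keep exactly one of the two lines in the view, since a later mapping to the same location silently overrides an earlier one.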
So far so good.
But in larger environments (with several people developing the same project) you will definitely run into problems with this approach, because:
- the compile/test/performance results of your workspace are not necessarily the same as those of other people working on the same project (depending on their client views)
- having several modules (third-party or not) and handling them this way will be hard to manage and lead to problems with cross-dependencies (e.g. module A version 2 requires module B version > 3, but that doesn't work with certain other modules, etc.)
There are tools to solve these dependency issues. Look for Apache Ivy or Maven.
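For illustration, here is a minimal ivy.xml sketch (the organisation and module names are made up) that pins the framework version in one place; switching versions then means editing a single rev attribute instead of remapping workspaces:

    <ivy-module version="2.0">
        <info organisation="com.example" module="myproject"/>
        <dependencies>
            <dependency org="thirdparty" name="framework" rev="2.0-beta"/>
        </dependencies>
    </ivy-module>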
We have a large code base in Perforce. I would like to do the following nightly, automatically.
- Copy some view of "latest" into two (or more) workspaces, streams, or even just into some other folders not under Perforce control.
- Check everything out (if p4 is used) and "compile" it (where "compile" may include changing most files, thus the need to have them be writable).
- Rinse and repeat the following "night" with a fresh "latest".
I know how to do this by simply copying things out, but I would like the nightly modified code to be "seen" from other machines, by other people, hence the idea of stuffing things back into Perforce.
I know how to do this with P4 workspaces.
Just wondering if p4 streams (task streams?) are a better approach, or if there are any other recommendations.
Using workspaces is indeed what you want to do. The benefit of using streams would mostly be in simplifying the task of generating and managing these workspaces.
Do you want to keep the changes made by the build machine isolated from the mainline that everyone else uses? Should those changes also be isolated from other builds? Or do you want to make sure that everyone gets the changes ASAP and that they make their way into all other variants of the code? These are good questions to ask as you're setting this up and the answers should influence what you do.
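As a rough sketch of the nightly job itself (the client name, depot path, and description are hypothetical), the p4 side could look like:

    p4 -c nightly-build sync //depot/main/...
    p4 -c nightly-build edit //depot/main/...
    (run the compile step that rewrites the files)
    p4 -c nightly-build revert -a
    p4 -c nightly-build submit -d "automated nightly build output"

The revert -a step reverts any opened files the build left unchanged, so the submitted changelist contains only real modifications. Submitting into a dedicated branch or stream, rather than the mainline, keeps the generated code visible to other machines and people without disturbing everyone else's work.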
Let's say I have 2 features, both use abc.dll, and both reference it from their respective current directories.
So the output will look like this:
Feature1
    abc.dll
Feature2
    abc.dll
I've created 2 components for this. In reality I have many features and many DLLs that are shared, and my installer size is nearly 1 GB.
What I am looking for is a smarter way to do this, using IS 2015 professional.
What I've looked at so far:
Merge modules: Not sure if this would work; it also means I need to maintain the merge modules manually should files be upgraded.
DuplicateFile, via direct editor, but this wouldn't work because there is no way to have this bound to a feature, only a component.
A hidden feature which would install the shared files to the target system, then a post-install script which would copy these files into their respective feature folders and delete the hidden feature's folder.
Is there a best practice method to implement what I need?
The most suitable approach in this case is, indeed, merge modules. I am not sure why you are concerned about maintaining them - you should have an automated build process that creates all merge modules and then builds your main installer with the newly created modules.
However, in my opinion, merge modules are a bit cumbersome to use if you have a lot of custom actions.
An alternative to merge modules - assuming you are using a Windows Installer project - is using small MSI packages which you "chain" to your main installer (you can chain multiple packages with different conditions and supply different properties). Here too, you should have a build process which builds all those small MSI packages and then builds the main installer.
If you don't want this kind of 'sub-projects', then the option of a hidden feature with a post action is acceptable; I've seen it, and done it, a few times. Note that if you target Windows 7 or later, instead of physically copying the files and deleting them, you can use symbolic links (using the mklink command), which helps reduce the installation's footprint on the target system (and makes patching easier - you replace the original file, and all its links are updated automatically).
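For the symbolic-link variant, the post-install script could look roughly like this (the install paths and the SharedFiles folder name are hypothetical):

    mklink "C:\Program Files\MyApp\Feature1\abc.dll" "C:\Program Files\MyApp\SharedFiles\abc.dll"
    mklink "C:\Program Files\MyApp\Feature2\abc.dll" "C:\Program Files\MyApp\SharedFiles\abc.dll"

Each feature's abc.dll is then just a link pointing at the single shared copy, so replacing SharedFiles\abc.dll during patching updates every feature at once. Note that creating symbolic links requires elevated privileges, which an installer normally has anyway.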
Is there any chance to tell Node.js to also look in the global modules folder by default, without changing sources?
I am trying to avoid having my project folders (up to a hundred packages) get cluttered with thousands of sub-folders (which also brings most IDEs to their knees). I am aware of the npm link trick, but it doesn't work on all platforms or it causes other problems. Also, npm/npm3 is sometimes so slow that I have to wait an entire day before my project is ready to actually work on (and I have a top-speed computer and broadband).
Known solutions:
- Changing the NODE_PATH environment variable is out for other reasons; shell .rc changes are a little bad too.
- Changing core files is easy but also requires patches in many other places (when using node.js as a dependency, for instance).
- Patching Node.js's require function, as in other versions like require-js, which supports require({cache:{}}) or require({config:{}}).
In the end I went with https://github.com/h2non/requireg. It doesn't need any kiddie hacks like npm link or special environment variables, and it works fantastically. It comes with a globalize function which makes subsequent require calls also look in the global folders.
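A minimal usage sketch (express and lodash stand in here for whatever packages you have installed globally):

    var requireg = require('requireg');

    // resolve a single package from the global modules folder
    var express = requireg('express');

    // or make every subsequent require() call fall back to the global folders
    requireg.globalize();
    var lodash = require('lodash'); // found globally if not installed locally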
I need to implement a memory cache with Node. It looks like there are currently two packages available for doing this:
node-memcached (https://github.com/3rd-Eden/node-memcached)
node-memcache (https://github.com/vanillahsu/node-memcache)
Looking at both GitHub pages, it looks like both projects are under active development with similar features.
Can anyone recommend one over the other? Does anyone know which one is more stable?
At the moment of writing this, the project 3rd-Eden/node-memcached doesn't seem to be stable, according to its GitHub issue list (e.g. see issue #46). Moreover, I found its code quite hard to read (and thus hard to update), so I wouldn't suggest using it in your projects.
The second project, elbart/node-memcache, seems to work fine, and I feel good about the way its source code is written. So if I were to choose between only these two options, I would prefer elbart/node-memcache.
But as of now, both projects suffer from a problem with storing BLOBs. There's an open issue for the 3rd-Eden/node-memcached project, and elbart/node-memcache simply doesn't support the option (it would be fair to add that there's a fork of the project that is said to add the option of storing BLOBs, but I haven't tried it).
So if you need to store BLOBs (e.g. images) in memcached, I suggest using the overclocked/mc module. I'm using it now in my project and have no problems with it. It has nice documentation, it's highly customizable, yet still easy to use. And at the moment it seems to be the only module that works fine with storing and retrieving BLOBs.
Since this is an old question/answer (2 years ago), and I got here by googling and then researching, I feel that I should tell readers that I definitely think 3rd-eden's memcached package is the one to go with. It seems to work fine, and based on the usage by others and recent updates, it is the clear winner. Almost 20K downloads for the month, 1300 just today, last update was made 21 hours ago. No other memcache package even comes close. https://npmjs.org/package/memcached
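For reference, a minimal usage sketch of the memcached package, assuming a memcached server running locally on the default port:

    var Memcached = require('memcached');
    var memcached = new Memcached('localhost:11211');

    // store a value for 60 seconds, then read it back
    memcached.set('foo', 'bar', 60, function (err) {
      if (err) throw err;
      memcached.get('foo', function (err, data) {
        if (err) throw err;
        console.log(data); // 'bar'
      });
    });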
The best way I know of to see which modules are the most robust is to look at how many projects depend on them. You can find this on npmjs.org's search page. For example:
memcache has 3 dependent projects
memcached has 31 dependent projects
... and in the latter, I see connect-memcached, which would seem to lend some credibility there. Thus, I'd go with the latter barring any other input or recommendations.
Summary: Want help figuring out how to set up the depot and my development environment so that I can support multiple, related projects.
Details:
Until now I've had a depot which contained only one project - ProjectA - robot version A.
I am starting to work on a new version (ProjectB) which has some differences in HW - I/O port mappings and timers have changed. I would like to continue to develop code for both projects.
This means that ProjectB will share some files with ProjectA, and some files will be different.
Since the differences are HW-related, what I'm thinking of doing is creating a common area and then project-specific areas, where the common area is for device-independent code and the project-specific areas are for device-dependent code.
The differences are big enough that I don't want to do #ifdef within files. Some differences are simple - different I/O port mappings - and some are completely new modules.
To make maintenance easier, I would like to be able to compare differences between the device-dependent code and propagate selected changes.
Finally, to minimize my burden during comparisons, I would like to mark differences that I know are okay so that in future comparisons they don't show up.
Help!
Your instincts are good -- you're trying to Not Duplicate Code. This is the core of good design & engineering.
As for the file layout, it's always annoying to have your directories too deep, but that's MUCH better than too shallow. Maybe:
<root>
    main/
        projects/
            robot1/...
            robot2/...
        shared1/
        shared2/
(Big repositories are much deeper than that, even.)
As for how you make shared code -- you could have a different setup.h or constants.h per project that drives what the various shared libraries do. Alternatively, build your shared libraries so they are parameterized at runtime:
SetupDrivers(0x80020); // address of PIO registers
And lastly -- if the projects really are different, decide if sharing the code really is the right thing. Usually yes, but everything is a choice. If you hope to manually "diff" your files to look for differences, it's really up to you to keep the structures close enough to diff. The "different config.h file for each project" idea mentioned above would help.
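For instance, each project could carry its own device-dependent header under the same name, selected through the compiler's include path (the names and addresses below are made up for illustration):

    /* robot1/hw_config.h */
    #define PIO_BASE_ADDR  0x80020   /* robot1's PIO register block */
    #define NUM_TIMERS     4

    /* robot2/hw_config.h */
    #define PIO_BASE_ADDR  0x90040   /* robot2 maps the PIO block elsewhere */
    #define NUM_TIMERS     6

The shared code simply does #include "hw_config.h", and each project's build points the compiler at its own directory (e.g. -Irobot1).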
If you roll your own diff tool (in Python or whatever), you could use special comments to flag "expected different lines".
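A sketch of such a tool in Python, assuming you tag expected differences with a marker comment such as // HW-OK at the end of the line:

    import difflib
    import sys

    MARKER = "// HW-OK"  # lines carrying this comment are expected to differ

    def relevant_lines(path):
        # drop lines explicitly marked as known hardware differences
        with open(path) as f:
            return [line for line in f if MARKER not in line]

    def main(file_a, file_b):
        # compare only the unmarked lines of the two files
        diff = difflib.unified_diff(relevant_lines(file_a), relevant_lines(file_b),
                                    fromfile=file_a, tofile=file_b)
        sys.stdout.writelines(diff)

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])

Marked lines are removed from both sides before the comparison, so only unexpected differences show up in the report.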