Perforce Streams - Isolating imported libraries - perforce

I need some guidance on a use case I've run into when using Perforce Streams. Say I have the following structure:
//ProductA/Dev
    share ...
//ProductA/Main
    share ...
    import Component1/... //Component1/Release-1_0/...
//ProductA/Release-1_0
    share ...
//Component1/Dev
    share ...
//Component1/Main
    share ...
//Component1/Release-1_0
    share ...
ProductA_Main imports code from Component1_Release-1_0. Whenever Component1_Release-1_0 gets updated, it will automatically be available to ProductA (but read-only).
The problem I'm running into is that since ProductA_Release-1_0 inherits from Main, and thus also imports Component1_Release-1_0, any changes made to the component will immediately affect the ProductA release. This sort of side effect seems very risky.
Is there any way to isolate the code such that, in the release stream, ALL code changes are tracked (even code that was imported) and there are zero side effects from other stream depots, while the main and dev streams keep importing the code? That way the release has zero side effects, while main and dev conveniently pick up any changes made in the component depot.
I know one option would be to create some sort of product specific release stream in the Component1 depot, but that seems a bit of a kludge since Component1 shouldn't need any references to ProductA.

If you are just looking to be able to rebuild previous versions, you can use labels to sync the stream back to the exact state it was in at the time, by giving a changelist number (or label) to p4 sync.
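For example (the path and changelist number here are purely illustrative):

p4 sync //ProductA/Release-1_0/...@123456
p4 sync //ProductA/Release-1_0/...@my-release-label

Either form pins the workspace to the exact revisions the release had at that point.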
If you are looking for explicit change tracking, you may want to branch the component into your release line. This will make the release copy of the library completely immune to changes in the other stream, unless you choose to branch and reconcile the data from there. If you think you might make independent changes to the libraries in order to patch bugs, this is something to consider. Of course, Perforce won't copy the files in your database on the server, just pointers to them in the metadata; and since you're already importing them into the stream, you're already putting copies of the files on your build machines, so there shouldn't be any "waste" except on the metadata front.
In the end, this looks like a policy question. Rebuilding can be done by syncing back to a previous version, but if you want to float the library fixes into the main code line, leave it as is; if you want to lock down the libraries and make the changes explicit, I'd just branch the libraries in.
Integrating into your release branch
In answer to the question in the comments: if you choose to integrate directly into your release branch, you'll need to remove the import line from the stream specification and replace it with an isolate line, which would then place the code only in the release branch. Then you would use the standard p4 integrate command (or P4V) to integrate from //Component1/Release-1_0/... to //ProductA/Main/Component1/....
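A rough sketch of what that could look like, following the paths used above (treat the exact spec and commands as an illustration rather than a recipe):

Stream: //ProductA/Main
Paths:
    share ...
    isolate Component1/...

p4 integrate //Component1/Release-1_0/... //ProductA/Main/Component1/...
p4 submit -d "Branch Component1 Release-1_0 into ProductA"

From then on the component files are part of ProductA's own history, and new //Component1 changes only arrive when you explicitly integrate them again.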
Firming up a separate component stream
One final thought is that you could create another stream on the //Component1 line to act as a placeholder for the release.

Related

How to defer "file" function execution in puppet 3.7

This seems like a trivial question, but in fact it isn't. I'm using Puppet 3.7 to deploy and configure artifacts from my project onto a variety of environments. A Puppet 5.5 upgrade is on the roadmap, but without an ETA so far.
One of the things I'm trying to automate is the incremental changes to the underlying DB. It's not SQL, so standard tools are out of the question. These changes will come in the form of shell scripts contained in a special module, which will also be deployed as an artifact. For each release we want to have a file whose content lists the shell scripts to execute in scope of that release. For instance, if in version 1.2 we implemented JIRA-123, JIRA-124 and JIRA-125, I'd like to execute the scripts JIRA-123.sh, JIRA-124.sh and JIRA-125.sh, but none of the others that are still in the module from previous releases.
So my release "control" file would be called something like jiras.1.2.csv and have one line looking like this:
JIRA-123,JIRA-124,JIRA-125
The task for Puppet here seems trivial: read the content of this file, split it on the "," character, and go on to build exec tasks for each of the JIRAs. The problem is that the Puppet function that should help me do it
file("/somewhere/in/the/filesystem/jiras.1.2.csv")
gets executed at the time the Puppet catalog is built, not when the catalog is applied. However, since this file is part of the release payload, it isn't there yet: it will be downloaded from Nexus in a tar.gz package of the release and extracted later. I have an anchor I can hold on to, which I use to synchronize the exec tasks, but can I attach the reading of the file content to that anchor?
Maybe I'm approaching the problem incorrectly? I was thinking the module with the pre-implementation and post-implementation tasks that constitute the incremental db upgrades could be structured so that for each release there's a subdirectory matching the version name, but then I need to list the contents of that subdirectory to build my exec tasks, and that too - at least to my limited puppet knowledge - can't be deferred until a particular anchor.
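For illustration, one way to keep the file read at apply time rather than catalog-build time is to push the parsing into the exec command itself. A minimal sketch, assuming a hypothetical anchor name and reusing the example path from above:

exec { 'run_release_scripts':
  # Parse the control file and run each listed script only when the catalog
  # is applied, i.e. after the release package has been extracted.
  command  => 'for j in $(tr "," " " < /somewhere/in/the/filesystem/jiras.1.2.csv); do /somewhere/in/the/filesystem/$j.sh || exit 1; done',
  provider => shell,
  require  => Anchor['release_payload_extracted'],
}

This collapses the per-JIRA execs into a single resource, so it loses individual reporting, but it avoids evaluating file() at catalog-compile time altogether.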
--- EDITED after one of the comments ---
The problem is that the upgrade to puppet 5.x is beyond my control - it's another team handling this stuff in a huge organisation, so I have no influence over that and I'm stuck on 3.7 for the foreseeable future.
As for what I'm trying to do: for a bunch of different software packages that we develop and release, I want to create three new ones: pre-implementation, post-implementation and checks. The first will hold any tasks that are performed prior to releasing new code in our actual packages; this is typically things like backing up the DB. Post-implementation will deal with issues that need to be addressed after we've deployed the new code; an example operation would be to go and modify older data because, for instance, we've changed the type of a column in a table. Checks are just validations performed to make sure the release is 100% correctly implemented, for instance running a select query and asserting on the type of data in the column whose type we've just changed. Today all of these are daunting manual operations performed by whoever is unlucky enough to be doing a release. Above all else, though, being manual they are by definition error-prone.
The approach taken is that for every JIRA ticket being part of the release the responsible developer will have to decide what steps (if any) are needed to release their work, and script that. Puppet is supposed to orchestrate the execution of all of this.

Perforce:Need some introduction on Perforce streams

I am new to Perforce streams. I have gone through some docs on the net but have not clearly understood the main use of Perforce streams.
Can anyone please give me a brief intro to Perforce streams? What is their main purpose? When are they useful?
It would be easier to answer this question if you gave some hint of your context -- do you already understand the general concepts of branching and merging as they're used in other tools, or better yet, are you already familiar with how branching works in Perforce without streams? If so I could give you some specifics on what the benefits of streams are relative to manually managing branch and client views.
For this answer though I'm going to assume you're new to branching in general and will simply direct you to the 2006 Google Tech Talk "The Flow of Change", which was given by Laura Wingerd, one of the chief architects of Perforce streams. This is from about 5 years before streams were actually implemented in Perforce (the 2011.1 release), but the core ideas around what it means to manage the flow of change between variants of software are all there. Hopefully with the additional context of the stream docs you've already read it'll be more clear why this is useful in the real world.
https://www.youtube.com/watch?v=AJ-CpGsCpM0
If you're already familiar with branching in Perforce, you're aware that a branch can be any arbitrary collection of files, managed by two types of view:
One or more client views, which define the set of files you need to map to your workspace in order to sync the branch
One or more branch views, which define how to merge changes between this branch and other branches. (Even if you don't define permanent branch specs, if you run p4 integrate src... dst... that's an ad hoc branch view.)
The main purpose of streams from a technical standpoint is to eliminate the work of maintaining these views. With "classic" Perforce branching, you might declare the file path //depot/main/... is your mainline and //depot/rel1/... is your rel1 release branch, and then define views like this:
Branch: rel1
View:
    //depot/main/... //depot/rel1/...

Client: main-ws
View:
    //depot/main/... //main-ws/...

Client: rel1-ws
View:
    //depot/rel1/... //rel1-ws/...
If you wanted to have one workspace and switch between the two branches you'd do something like:
p4 client -t rel1-ws
p4 sync
(do work in rel1)
p4 submit
p4 client -t main-ws
p4 sync
p4 integ -r -b rel1
This is a very simple example, of course -- if you decide you want to unmap some files from the branch, you have to make that change in both client specs and possibly the branch view; if you create more branches, that's more client specs and more branch specs; and so on.
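For instance (the excluded path is purely illustrative), unmapping a docs directory from the rel1 branch means hand-editing every affected spec with its own exclusion line:

Client: rel1-ws
View:
    //depot/rel1/... //rel1-ws/...
    -//depot/rel1/docs/... //rel1-ws/docs/...

Branch: rel1
View:
    //depot/main/... //depot/rel1/...
    -//depot/main/docs/... //depot/rel1/docs/...

Multiply that by every workspace and branch spec touching the codeline and the bookkeeping adds up quickly.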
With streams the same simple two-branch setup is represented by two streams:
Stream: //depot/main
Parent: none
Type: mainline
Paths:
    share ...

Stream: //depot/rel1
Parent: //depot/main
Type: release
Paths:
    share ...
To do work in both streams you'd do:
p4 switch rel1
(do work in rel1)
p4 submit
p4 switch main
p4 merge --from rel1
All tasks around managing branch and client views are handled automatically -- the switch command generates a client view appropriate to the named stream and syncs it (it also shelves your work in progress, or optionally relocates it to the new stream similar to a git checkout command), and the merge command generates a branch view that maps between the current stream and the named source stream.
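For example, after p4 switch rel1 in the setup above, the automatically generated workspace view amounts to:

//depot/rel1/... //your-client-name/...

(where your-client-name is whatever your workspace is called), without you ever editing a client spec by hand.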
More complex views are also handled; for example, if you want to ignore all .o files in all workspaces associated with either of these streams, just add this to the //depot/main stream:
Ignored:
    .o
This is automatically inherited by all child streams and is reflected in all automatically generated client and branch views (it's like adding a -//depot/branch/....o //client/... line to all your client views at once).
There's a lot more that you can do with stream definitions but hopefully that gives you the general idea -- the point is to take all the work that people do to manage views associated with codelines and centralize/automate it for ease of use, as well as provide nice syntactic sugar like p4 switch and p4 merge --from.

Shake: Signal whether anything had to be rebuilt at all

I use shake to build a bunch of static webpages, which I then have to upload to a remote host, using sftp. Currently, the cronjob runs
git pull # get possibly updated sources
./my-shake-system
lftp ... # upload
I’d like to avoid running the final command if shake did not actually rebuild anything. Is there a way to tell shake “Run command foo, after everything else, and only if you changed something!”?
Or alternatively, have shake report whether it did something in the process exit code?
I guess I can add a rule that depends on all possibly generated files, but that seems redundant and error-prone.
Currently there is no direct/simple way to determine if anything built. It's also not as useful a concept as it is in simpler build systems, since certain rules (especially those that define storedValue to return Nothing) will always "rerun", but then very quickly decide they don't need to run the rules that depend on them. To Shake, that is the same as rerunning. I can think of a few approaches; which one is best probably depends on your situation:
Tag the interesting rules
You could tag each interesting rule (one that produces something that needs uploading) with a function that writes to a specific file. If that specific file exists, then you need to upload. This might work slightly better: if you do multiple Shake runs, and something changes in the first but nothing in the second, the file will still be present. If it makes sense, use an IORef instead of a file.
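As a concrete illustration (the targets, source layout and marker-file name are all invented for the example), the tagging idea might look roughly like this:

import Development.Shake
import Development.Shake.FilePath
import Control.Monad (when)
import qualified System.Directory as Dir

main :: IO ()
main = do
  shake shakeOptions $ do
    want ["site/index.html"]

    -- An "interesting" rule: whenever it actually runs, it tags the run
    -- by touching a marker file.
    "site/*.html" %> \out -> do
      let src = "pages" </> takeFileName out
      copyFile' src out
      liftIO $ writeFile ".needs-upload" ""

  -- The marker is only cleared once the upload has actually happened,
  -- so it survives intermediate runs that rebuild nothing.
  needsUpload <- Dir.doesFileExist ".needs-upload"
  when needsUpload $ do
    putStrLn "something rebuilt; run the sftp upload here"
    Dir.removeFile ".needs-upload"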
Use profiling
Shake has quite advanced profiling. If you pass shakeProfile=["output.json"] it will produce a JSON file detailing what was built and when. Runs are indexed by an Int, with 0 for the most recent run, and any runs that built nothing are excluded. If you have one rule that always fires (e.g. one that writes to a dummy file with alwaysRerun), then if anything else fired in the same run, something was rebuilt.
Watch the .shake.database file size
Shake has a database, stored under shakeFiles. On each uninteresting run it will grow by a fairly small amount (~100 bytes), which is a fixed size for a given setup. If it grows by more than that, the run did something interesting.
Of these approaches, tagging the interesting rules is probably the simplest and most direct (although does run the risk of you forgetting to tag something).

Generating a file in one Gradle task to be operated on by another Gradle task

I am working with Gradle for the first time on a project, having come from Ant on my last project. So far I like what I have been seeing, though I'm trying to figure out how to do some things that have kind of been throwing me for a loop. I am wondering what is the right pattern to use in the following situation.
I have a series of files on which I need to perform a number of operations. Each task operates on the newly generated output files of the task that came before it. I will try to contrive an example to demonstrate my question, since my project is somewhat more complicated and internal.
Stage One
First, let's say I have a task that must write out 100 separate text files with a random number in each. The name of the file doesn't matter, and let's say they all will live under parentFolder.
Example:
parentFolder
|
|-file1
|-file2
...
|-file100
I would think my initial take would be to do this in a loop inside the doLast (shortcutted with <<) closure of a custom task -- something like this:
task generateNumberFiles << {
    File parentFolder = mkdir(buildDir.toString() + "/parentFolder")
    // write file1 through file100, each containing a random number
    for (int x = 1; x <= 100; x++)
    {
        File currentFile = file(parentFolder.toString() + "/file" + String.valueOf(x))
        currentFile.write(String.valueOf(Math.random()))
    }
}
Stage Two
Next, let's say I need to read each file generated in the generateNumberFiles task and zip each file into a separate archive in a second folder, zipFolder. I'll put it under the parentFolder for simplicity, but the location isn't important.
Desired output:
parentFolder
|
|-file1
|-file2
...
|-file100
|-zipFolder
   |
   |-file1.zip
   |-file2.zip
   ...
   |-file100.zip
This seems problematic because, in theory, it's like I need to create a Zip task for each file (to generate a separate archive per file). So I suppose this is the first part of the question: how do I create a separate task to act on a bunch of files generated during a prior task and have that task run as part of the build process?
Adding the task at execution time is definitely possible, but getting the tasks to run seems more problematic. I have read that using .execute() is inadvisable and technically it is an internal method.
I also had the thought to add dependsOn to a subsequent task with a .matching { Task task -> task.name.startsWith("blah") } block. This seems like it won't work, though, because task dependency is resolved during the Gradle configuration phase [1]. So how can I create tasks to operate on these files, since they didn't exist at configuration time?
Stage Three
Finally, let's complicate it a little more and say I need to perform some other custom action on the ZIP archives generated in Stage Two, something not built in to Gradle. I can't think of a realistic example, so let's just say I have to read the first byte of each ZIP and upload it to some server -- something that involves operating on each ZIP independently.
Stage Three is somewhat just a continuation of my question in Stage Two. I feel like the Gradle-y way to do this kind of thing would be to create tasks that perform a unit of work on each file and use some dependency to cause those tasks to execute. However, if the tasks don't exist when the dependency graph is built, how do I accomplish this sort of thing? On the other hand, am I totally off and is there some other way to do this sort of thing?
[1]: "Gradle builds the complete dependency graph before any task is executed." http://www.gradle.org/docs/current/userguide/build_lifecycle.html
You cannot create tasks during the execution phase. As you have probably figured out, since Gradle constructs the task execution graph during the configuration phase, you cannot add tasks later.
If you are simply trying to consume the output of one task as the input of another, then that becomes a simple dependsOn relationship, just like in Ant. I believe where you may be going down the wrong path is in thinking you need to dynamically create a Gradle Zip task for every archive you intend to create. In this case, since the number of archives you will be creating is dynamic and based on the output of another task (i.e. determined during execution), you could simply create a single task which creates all those zip files. The easiest way of accomplishing this would be to use Ant's zip task via Gradle's Ant support.
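A sketch of that single-task approach, reusing the folder names from the question (the task name is just an example):

task zipNumberFiles(dependsOn: generateNumberFiles) << {
    def parentFolder = file("$buildDir/parentFolder")
    def zipFolder = mkdir("$buildDir/parentFolder/zipFolder")
    // Discover the generated files at execution time and zip each one
    // individually via the Ant zip task.
    parentFolder.listFiles().findAll { it.isFile() }.each { f ->
        ant.zip(destfile: new File(zipFolder, "${f.name}.zip")) {
            fileset(dir: parentFolder) {
                include(name: f.name)
            }
        }
    }
}

The file list is computed when the task executes, so it naturally reflects whatever generateNumberFiles produced, and no tasks need to be created after the configuration phase.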
We do something similar. While Mark Vieira's answer is correct, there may be a way to adjust things on both ends a bit. Specifically:
1. You could discover, as we do, all the zip files you need to create during the configuration phase. This will allow you to create any number of zip tasks, name them appropriately and relate them correctly. It will also allow you to build them individually as needed and take advantage of incremental build support with up-to-date checks (see the sketch after this list).
2. If you have something you need to do before you can discover what you need for (1), and if that is relatively simple, you could code that not as a task but as a configuration step.
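A sketch of option 1, assuming the inputs can already be enumerated at configuration time (the directory, task names and aggregate task are assumptions for the example, and the archive properties use the names from Gradle versions of that era):

// Create one Zip task per input file during the configuration phase,
// so each archive gets its own up-to-date checking.
fileTree("src/numbers").files.each { f ->
    task("zip${f.name.capitalize()}", type: Zip) {
        from f
        archiveName = "${f.name}.zip"
        destinationDir = file("$buildDir/zips")
    }
}

// Aggregate task so everything can still be built with one invocation.
task zipAllNumbers {
    dependsOn tasks.withType(Zip)
}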
Note that "Gradle-y" way is flexible but don't do this just because you may feel this is "Gradle-y". Do what is right. You need individual tasks if you want to be able to invoke and relate them individually, perhaps optimize build performance by skipping up-to-date ones. If this is not what you care about, don't worry about making each file into its own task.

Deleted Streams still present in Depot view in P4V

We're trying to delete streams for the depot that we no longer need.
In the Graph View I right-clicked and selected delete stream.
The stream was removed from the Graph View; however, it is still present in the Depot view.
We want to totally remove it, but we can't seem to do it.
'p4 streams' doesn't know about it.
There are still some files in the stream. I wonder if we need to delete those files first.
Thanks,
- Rich
I wish the people who always answer these posts and say "you don't want to delete your depot!" would understand that people who ask this question are often brand new users who have just made their first depot, made a terrible mess of it, have no idea what they've done and would now like to wipe it and start completely fresh again.
Unless the stream is a task stream, deleting the stream only deletes the stream specification, not the files that have been submitted to the stream.
The files submitted to the stream are part of your permanent history; you want to keep them!
I often find myself referring back to the changelists and diffs of changes made many years ago, sometimes even decades ago, so I do everything possible never to erase old history.
If you really wish to destroy the permanent history of changes made to this stream, you can use the 'p4 obliterate' command, but be aware that this command cannot be undone.
If you are contemplating using obliterate to destroy the files submitted to this stream, you should consider contacting Perforce Technical Support first, as the obliterate command is complicated and has a number of options, and you want to make sure you use the right options. And take a checkpoint first, just for extra protection.
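For reference, the usual pattern is to preview first, since only the -y flag actually removes anything (the path below is just a placeholder):

p4 obliterate //SomeDepot/Unwanted-Stream/...
p4 obliterate -y //SomeDepot/Unwanted-Stream/...

Without -y the command only reports what it would delete; with -y it permanently removes the file content and history.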
If you are using streams for temporary work, and frequently find yourself wishing to obliterate that work, consider using task streams.

Resources