Short and sweet:
I have one project with an external that lets me commit changes to files in that external alongside changes to the main trunk in one operation.
I have another project with an external that does not let me commit changes alongside the main trunk.
The most obvious difference is that the second external is checked out to a compound directory, but other than that I cannot find a difference that would, to my mind, prevent SVN from committing everything together.
What's going on here? Obviously I want to be able to commit changes to externals along with the changes to the trunk in one operation. How can I make this happen in the second case?
The answer turned out to be the compound directory. For some reason, externals checked out to a subfolder immediately under the root project, like "SharedLib", can have changes committed, no matter how much deeper the changes actually are. Externals checked out to a folder structure like "Externals/SharedLib" cannot. That also means that externals checked out from various sources into a single subdirectory (to avoid having to get an entire external when you only need one library) won't allow commits.
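For illustration, the difference boils down to where the svn:externals definition points (the repository URL here is hypothetical):

# external mapped directly under the project root -- commits work
svn propset svn:externals 'https://svn.example.com/sharedlib/trunk SharedLib' .
# external mapped to a nested folder -- commits to it fail
svn propset svn:externals 'https://svn.example.com/sharedlib/trunk Externals/SharedLib' .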
I'll make do. Now that I know it's an issue I'll adjust how externals are set up when I want to actually work with them and not just have them around.
I work on an application in Python that is developed by a small team.
I need to remember the state of the code at regular points in time for myself, and occasionally I need to commit the code to the group. However, they do not want to see the tens of intermediate commits I make.
So I am looking for a way to have 2 repositories with independent histories. Or some other smart suggestion!
I need to remember the state of the code at regular points in time for myself
Use tags
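For example, a minimal sketch (the tag name is just illustrative):

git tag -a wip-2019-06-01 -m "snapshot before refactor"   # stays local unless you push it

Tags are not pushed by default, so the group never sees these private checkpoints.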
I need to commit the code to the group. However, they do not want to see the tens of intermediate commits I make.
Clone repo
Commit into this local clone
When you reach a milestone, squash your history into one commit with git merge --squash (sketched below)
Push results
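Put together, the flow might look like this (the repository URL and branch names are hypothetical):

git clone git@example.com:group/app.git
cd app
git checkout -b private-work              # branch for your intermediate commits
# ...make as many small commits as you like...
git checkout master
git merge --squash private-work           # stages all changes as one pending commit
git commit -m "Milestone: feature complete"
git push origin master                    # the group sees a single commit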
I work on one project with others in GitLab. Is it possible to fork the same project twice?
You can create multiple forks. However, the destination for the fork must be unique. So, if you want multiple forks, you will need to use a different name or namespace for the target.
If you just want to update your fork, you can either pull the upstream changes in (see also: mirroring) or move/delete your fork and fork the upstream project again.
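For the first option, the usual pattern is to add the original project as a second remote (the URL here is hypothetical):

git remote add upstream git@gitlab.com:original-group/project.git
git fetch upstream
git merge upstream/master                 # bring your fork's branch up to date
git push origin master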
You can fork a repository as many times as you want, and then name the repo whatever you want, but you probably only want to do it once.
I am new to Perforce streams. I have gone through some docs on the net but have not clearly understood the main use of Perforce streams.
Can anyone please give me a brief intro to Perforce streams? What is their main purpose? When are they useful?
It would be easier to answer this question if you gave some hint of your context -- do you already understand the general concepts of branching and merging as they're used in other tools, or better yet, are you already familiar with how branching works in Perforce without streams? If so I could give you some specifics on what the benefits of streams are relative to manually managing branch and client views.
For this answer though I'm going to assume you're new to branching in general and will simply direct you to the 2006 Google Tech Talk "The Flow of Change", which was given by Laura Wingerd, one of the chief architects of Perforce streams. This is from about 5 years before streams were actually implemented in Perforce (the 2011.1 release), but the core ideas around what it means to manage the flow of change between variants of software are all there. Hopefully with the additional context of the stream docs you've already read it'll be more clear why this is useful in the real world.
https://www.youtube.com/watch?v=AJ-CpGsCpM0
If you're already familiar with branching in Perforce, you're aware that a branch can be any arbitrary collection of files, managed by two types of view:
One or more client views, which define the set of files you need to map to your workspace in order to sync the branch
One or more branch views, which define how to merge changes between this branch and other branches. (Even if you don't define permanent branch specs, if you run p4 integrate src... dst... that's an ad hoc branch view.)
The main purpose of streams from a technical standpoint is to eliminate the work of maintaining these views. With "classic" Perforce branching, you might declare the file path //depot/main/... is your mainline and //depot/rel1/... is your rel1 release branch, and then define views like this:
Branch: rel1
View:
//depot/main/... //depot/rel1/...
Client: main-ws
View:
//depot/main/... //main-ws/...
Client: rel1-ws
View:
//depot/rel1/... //rel1-ws/...
If you wanted to have one workspace and switch between the two branches you'd do something like:
p4 client -t rel1-ws
p4 sync
(do work in rel1)
p4 submit
p4 client -t main-ws
p4 sync
p4 integ -r -b rel1
This is a very simple example, of course. If you decide you want to unmap some files from the branch, you have to make that change in both client specs and possibly the branch view; if you create more branches, that's more client specs and more branch specs; and so on.
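For instance, excluding a (hypothetical) docs folder from both workspaces means editing every spec by hand:

Client: main-ws
View:
//depot/main/... //main-ws/...
-//depot/main/docs/... //main-ws/docs/...

Client: rel1-ws
View:
//depot/rel1/... //rel1-ws/...
-//depot/rel1/docs/... //rel1-ws/docs/...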
With streams the same simple two-branch setup is represented by two streams:
Stream: //depot/main
Parent: none
Type: mainline
Paths:
share ...
Stream: //depot/rel1
Parent: //depot/main
Type: release
Paths:
share ...
To do work in both streams you'd do:
p4 switch rel1
(do work in rel1)
p4 submit
p4 switch main
p4 merge --from rel1
All tasks around managing branch and client views are handled automatically -- the switch command generates a client view appropriate to the named stream and syncs it (it also shelves your work in progress, or optionally relocates it to the new stream similar to a git checkout command), and the merge command generates a branch view that maps between the current stream and the named source stream.
More complex views are also handled; for example, if you want to ignore all .o files in all workspaces associated with either of these streams, just add this to the //depot/main stream:
Ignored:
.o
This is automatically inherited by all child streams and is reflected in all automatically generated client and branch views (it's like adding a -//depot/branch/....o //client/... line to all your client views at once).
There's a lot more that you can do with stream definitions but hopefully that gives you the general idea -- the point is to take all the work that people do to manage views associated with codelines and centralize/automate it for ease of use, as well as provide nice syntactic sugar like p4 switch and p4 merge --from.
Why does VS Code sometimes create an index.lock when switching branches? Specifically, the branch I previously had open had some changes in package-lock.json that I just wanted reset, so I did a git reset --hard. FYI, I am using Node 8.
Git creates index.lock whenever it is updating the index. (In fact, the index.lock lock file itself is the new index being built, to be swapped into place once it's finished. But this is an implementation detail.) Git removes the file automatically (in fact, by swapping it into place) once it has finished the update. At that point, other Git commands are free to lock and then update the index, though of course, one at a time.
If a Git command crashes, it may leave the lock file in place (which, since it's also the new index, may be incomplete and hence not actually useful). In this particular case, there's no ongoing Git command to complete and hence unlock and allow the next Git command to run.
If the file is there at one point, but not there the next time you try something, that means some Git command was still running (and updating) and you were just too impatient. :-) If you mix different Git commands (and/or interfaces such as GUIs) you may have to manually coordinate to avoid these run-time collisions. Any one interface should coordinate with itself internally.
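If you've confirmed that no Git process is still running, a lock file left behind by a crash can be removed by hand:

rm .git/index.lock    # safe only when no other Git command is active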
I need some guidance on a use case I've run into when using Perforce Streams. Say I have the following structure:
//ProductA/Dev
share ...
//ProductA/Main
share ...
import Component1/... //Component1/Release-1_0/...
//ProductA/Release-1_0
share ...
//Component1/Dev
share ...
//Component1/Main
share ...
//Component1/Release-1_0
share ...
//ProductA/Main imports code from //Component1/Release-1_0. Whenever //Component1/Release-1_0 gets updated, the changes automatically become available to ProductA (but read-only).
Now, the problem I'm running into is that since //ProductA/Release-1_0 inherits from Main, and thus also imports //Component1/Release-1_0, any changes made to the component will immediately affect the ProductA release. This sort of side effect seems very risky.
Is there any way to isolate the code so that in the release stream ALL code changes are tracked (even code that was imported) and there are zero side effects from other stream depots, while the main and dev streams continue to import the code? This way the release would have zero side effects, while main and dev conveniently pick up any changes made in the component depot.
I know one option would be to create some sort of product specific release stream in the Component1 depot, but that seems a bit of a kludge since Component1 shouldn't need any references to ProductA.
If you are just looking to be able to rebuild the previous versions, you can use labels to sync the stream back to the exact situation it was in at the time by giving a change list number (or label) to p4 sync.
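For example (the change number and label name here are made up):

p4 sync //ProductA/Release-1_0/...@12345           # state as of changelist 12345
p4 sync //ProductA/Release-1_0/...@rel1_0-build-7  # or sync to a label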
If you are looking for explicit change tracking, you may want to branch the component into your release line. This makes the release copy of the library completely immune to changes in the other stream, unless you choose to branch and reconcile the data from there. If you think you might make independent changes to the libraries in order to patch bugs, this is something to consider. Of course, Perforce won't copy the files in your database on the server, just pointers to them in the metadata; and since you're already importing them into the stream, you're already putting copies of the files on your build machines, so there shouldn't be any "waste" except on the metadata front.
In the end, this looks like a policy question. Rebuilding can be done by syncing back to a previous version, but if you want to float the library fixes into the main code line, leave it as is; if you want to lock down the libraries and make the changes explicit, I'd just branch the libraries in.
Integrating into your release branch
In answer to the question in the comments, if you choose to integrate directly into your release branch, you'll need to remove the import line from the stream specification and replace it with the isolate line which would then place the code only in the release branch. Then you would use the standard p4 integrate command (or p4v) to integrate from the //Component1/Release-1_0/... to //ProductA/Main/Component1/....
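As a sketch, the edited Paths section and the follow-up commands would look roughly like this (exact flags may vary with your setup):

Paths:
share ...
isolate Component1/...

p4 integrate //Component1/Release-1_0/... //ProductA/Main/Component1/...
p4 resolve -am        # accept automatic merges
p4 submit -d "Branch Component1 Release-1_0 into ProductA"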
Firming up a separate component stream
One final thought is that you could create another stream on the //Component1 line to act as a placeholder for the release.