Can I fork the same project twice in GitLab? - gitlab

I work on a project with others in GitLab. Is it possible to fork the same project twice?

You can create multiple forks. However, the destination for the fork must be unique. So, if you want multiple forks, you will need to use a different name or namespace for the target.
If you just want to update your fork, you can either pull the upstream changes in (see also: mirroring) or move/delete your fork and fork the upstream project again.

You can fork a repository as many times as you want, and then name the repo whatever you want, but you probably only want to do it once.
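If the goal is just to refresh an existing fork with the upstream changes mentioned above, re-forking is unnecessary. Here is a minimal sketch; it simulates the original project with a local repository (in real use, "upstream"/"origin" would point at the GitLab project's URL, and the identity settings are placeholders):

```shell
set -e
cd "$(mktemp -d)"

# Simulate the original GitLab project locally (hypothetical names):
git init -q upstream
git -C upstream -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "initial"

# A fork is effectively a clone with its own history going forward:
git clone -q upstream fork

# New work lands in the original project...
git -C upstream -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "upstream change"

# ...and the existing fork pulls it in instead of being re-forked:
cd fork
git pull -q
git log --oneline
```

The fork's history now includes the upstream commit without any need for a second fork or a renamed target namespace.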

Related

How can I use 2 repositories or have 2 series of commits?

I work on an application in Python that is developed by a small team.
I need to remember the state of the code at regular points in time for myself, and occasionally I need to commit the code to the group. However, they do not want to see the tens of intermediate commits I make.
So I am looking for a way to have two repositories with independent histories, or some other smart suggestion!
I need to remember the state of the code at regular points in time for myself
Use tags
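A quick sketch of the tag approach (the tag name is illustrative): a tag records the exact state of the code without creating a branch or an extra commit, and nobody else sees it unless you push it.

```shell
set -e
cd "$(mktemp -d)"
git init -q
git -c user.email=me@example.com -c user.name=me \
    commit -q --allow-empty -m "work in progress"

# Mark the current state with a private checkpoint tag:
git tag snapshot-1

# Later, the exact state is recoverable by name:
git rev-parse snapshot-1
```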
I need to commit the code to the group. However, they do not want to see the tens of intermediate commits I make.
Clone the repo
Commit into this local clone
When you reach a milestone, combine your history into one commit with git merge --squash
Push the result
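The steps above can be sketched end to end like this (branch and commit names are illustrative); the noisy commits stay on the private branch, and the shared branch receives one combined commit:

```shell
set -e
cd "$(mktemp -d)"
git init -q
G="git -c user.email=me@example.com -c user.name=me"
$G commit -q --allow-empty -m "shared history"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on config

# Noisy intermediate commits go on a private branch...
git checkout -q -b private
echo step1 > work.txt && git add work.txt && $G commit -q -m "wip 1"
echo step2 > work.txt && git add work.txt && $G commit -q -m "wip 2"

# ...and reach the shared branch as a single commit:
git checkout -q "$main"
git merge --squash -q private
$G commit -q -m "milestone: feature complete"
git log --oneline
```

The shared branch ends up with just two commits ("shared history" and the milestone), while the "wip" commits remain only on the private branch.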

Why is an index.lock sometimes created when switching branches in VS Code?

Why does VS Code sometimes create an index.lock when switching branches? Specifically, the branch I previously had open had some changes in package-lock.json, and I just wanted them reset, so I ran git reset --hard. FYI, I am using Node 8. Here is a screenshot:
Git creates index.lock whenever it is updating the index. (In fact, the index.lock lock file itself is the new index being built, to be swapped into place once it's finished. But this is an implementation detail.) Git removes the file automatically (in fact, by swapping it into place) once it has finished the update. At that point, other Git commands are free to lock and then update the index, though of course, one at a time.
If a Git command crashes, it may leave the lock file in place (which, since it's also the new index, may be incomplete and hence not actually useful). In this particular case, there's no ongoing Git command to complete and hence unlock and allow the next Git command to run.
If the file is there at one point, but not there the next time you try something, that means some Git command was still running (and updating) and you were just too impatient. :-) If you mix different Git commands (and/or interfaces such as GUIs) you may have to manually coordinate to avoid these run-time collisions. Any one interface should coordinate with itself internally.
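A small sketch of the stale-lock situation described above. It simulates a crashed command by creating the lock by hand; in real life you should first confirm no git process is still running (e.g. with pgrep -x git) before removing the lock:

```shell
set -e
cd "$(mktemp -d)"
git init -q
echo hello > file.txt

# Simulate a crashed git command that left its lock behind:
touch .git/index.lock

# Any command that updates the index now fails...
git add file.txt 2>/dev/null && echo "unexpected success" || echo "index is locked"

# ...so, after confirming no git process is actually running,
# remove the stale lock by hand:
rm .git/index.lock
git add file.txt   # works again
```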

Perforce Streams - Isolating imported libraries

I need some guidance on a use case I've run into when using Perforce Streams. Say I have the following structure:
//ProductA/Dev
    share ...
//ProductA/Main
    share ...
    import Component1/... //Component1/Release-1_0/...
//ProductA/Release-1_0
    share ...
//Component1/Dev
    share ...
//Component1/Main
    share ...
//Component1/Release-1_0
    share ...
ProductA_Main imports code from Component1_Release-1_0. Whenever Component1_Release-1_0 gets updated, it will automatically be available to ProductA (but read-only).
Now, the problem I'm running into is that since ProductA_Release-1_0 inherits from Main, and thus also imports Component1_Release-1_0, any code changes made to the component immediately affect the ProductA release. This sort of side effect seems very risky.
Is there any way to isolate the code in the release stream such that ALL code changes are tracked (even code that was imported) and there are zero side effects from other stream depots, while the main and dev streams keep importing the code? This way, the release will have zero side effects, while main and dev conveniently pick up any changes made in the component depot.
I know one option would be to create some sort of product specific release stream in the Component1 depot, but that seems a bit of a kludge since Component1 shouldn't need any references to ProductA.
If you are just looking to be able to rebuild the previous versions, you can use labels to sync the stream back to the exact situation it was in at the time by giving a change list number (or label) to p4 sync.
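As a command-line sketch of the rebuild approach (the changelist number and label name are hypothetical):

```
# Sync the release stream's files back to changelist 12345:
p4 sync //ProductA/Release-1_0/...@12345

# Or back to a previously created label:
p4 sync //ProductA/Release-1_0/...@rel-1_0-build-7
```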
If you are looking for explicit change tracking, you may want to branch the component into your release line. This will make the release copy of the library completely immune to changes in the other stream, unless you choose to branch and reconcile the data from there. If you think you might make independent changes to the libraries in order to patch bugs, this might be something to consider. Of course, Perforce won't copy the files in your database on the server, just pointers to them in the metadata, and since you're already importing them into the stream, you're already putting copies of the files on your build machines, so there shouldn't be any "waste" except on the metadata front.
In the end, this looks like a policy question. Rebuilding can be done by syncing back to a previous version, but if you want to float the library fixes into the main code line, leave it as is; if you want to lock down the libraries and make the changes explicit, I'd just branch the libraries in.
Integrating into your release branch
In answer to the question in the comments, if you choose to integrate directly into your release branch, you'll need to remove the import line from the stream specification and replace it with the isolate line which would then place the code only in the release branch. Then you would use the standard p4 integrate command (or p4v) to integrate from the //Component1/Release-1_0/... to //ProductA/Main/Component1/....
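A sketch of what the modified release stream spec might look like after that change, assuming the component is mapped under Component1/ in the stream (the field values here are illustrative, not taken from the question):

```
Stream:  //ProductA/Release-1_0
Parent:  //ProductA/Main
Type:    release
Paths:
    share ...
    isolate Component1/...
```

With isolate in place, the release stream no longer follows //Component1/Release-1_0 automatically; updates arrive only through explicit integrations.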
Firming up a separate component stream
One final thought is that you could create another stream on the //Component1 line to act as a placeholder for the release.

SVN - Committing externals on commit of main trunk

Short and sweet:
I have one project with an external, which allows me to commit changes to files in that external alongside changes to the main trunk in one operation:
I have another project with an external, which does not allow me to commit changes alongside the main trunk:
The most obvious difference is that the second external is checked out to a compound directory, but other than that I cannot find a difference that would, to my mind, be preventing SVN from committing everything together.
What's going on here? Obviously I want to be able to commit changes to externals along with the changes to the trunk in one operation. How can I make this happen in the second case?
The answer turned out to be the compound directory. For some reason, externals checked out to a subfolder immediately under the root project, like "SharedLib", can have changes committed, no matter how much deeper the changes actually are. Externals checked out to a folder structure like "Externals/SharedLib" cannot. That also means that externals checked out from various sources into a single subdirectory (to avoid having to get an entire external when you only need one library) won't allow commits.
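As a sketch, the difference comes down to how the svn:externals property on the project root is written (the repository URL is hypothetical). The first layout allowed atomic commits with the trunk in this case; the second did not:

```
SharedLib            https://svn.example.com/libs/SharedLib/trunk
Externals/SharedLib  https://svn.example.com/libs/SharedLib/trunk
```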
I'll make do. Now that I know it's an issue I'll adjust how externals are set up when I want to actually work with them and not just have them around.

Does every path in an activity diagram need a finish node? Does every "fork" branch need to go to a merge?

Does every path in an activity diagram need to have a finish node? A related question: does every fork branch need to be merged?
I did an activity diagram (below), but it seems wrong. Some branches (from a fork) have no finish node, nor do they end in a merge.
My idea was that the clerk will send the shipment packing slip to purchasing, accounting, and the customer. Two of these just seem to create/initialize objects (e.g. enter info). They are executed in parallel, so I felt I should have a fork?
Does every path in an activity diagram need to have a finish node?
Yes. But there are two kinds of finish node: ActivityFinal and FlowFinal. You need to terminate each of the packaging and shipment flows with a FlowFinal node. See section 12.4 in the spec for details. The symbol is here, the page it's on is a good reference.
Does every fork branch need to be merged?
No. But it needs to terminate - hence the existence of the FlowFinal node.
hth.
