Deleted Streams still present in Depot view in P4V - perforce

We're trying to delete streams for the depot that we no longer need.
In the Graph View I right-clicked and selected delete stream.
The stream was removed from the Graph View; however, it is still present in the Depot view.
We want to totally remove it, but we can't seem to do it.
'p4 streams' doesn't know about it.
There are still some files in the stream. I wonder if we need to delete those files first.
Thanks,
- Rich

I wish the people who always answer these posts and say "you don't want to delete your depot!" would understand that people who ask this question are often brand new users who have just made their first depot, made a terrible mess of it, have no idea what they've done and would now like to wipe it and start completely fresh again.

Unless the stream is a task stream, deleting the stream only deletes the stream specification, not the files that have been submitted to the stream.
The files submitted to the stream are part of your permanent history; you want to keep them!
I often find myself referring back to the changelists and diffs of changes made many years ago, sometimes even decades ago, so I do everything possible never to erase old history.
If you really wish to destroy the permanent history of changes made to this stream, you can use the 'p4 obliterate' command, but be aware that this command cannot be undone.
If you are contemplating using obliterate to destroy the files submitted to this stream, you should consider contacting Perforce Technical Support first, as the obliterate command is complicated and has a number of options, and you want to make sure you use the right options. And take a checkpoint first, just for extra protection.
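If you do go that route, note that 'p4 obliterate' with no options only reports what it would delete; you have to add the -y flag before anything is actually removed, so you can always do a dry run first (the stream path below is just an example):
p4 obliterate //depot/old-stream/...      # preview only; nothing is deleted
p4 obliterate -y //depot/old-stream/...   # actually destroys the file history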
If you are using streams for temporary work, and frequently find yourself wishing to obliterate that work, consider using task streams.

Related

"Just in time" read only filesystem using mkfifo and inotifywait

I am writing some gross middleware - basically, I have some old code that needs to open 100,000 files for reading only, expecting them all to be in one folder. It never writes. It is multiprocess, so it can try to open ~30 files at the same time. The old way, I would have to actually copy the files into that folder (or use links, NFS, etc.). Worth noting: I have no ability to change this old code - it's just a binary.
I have some new, fancy code that can retrieve a file almost instantly. I want to tie these things together, so when the old code tries to open the file, it is actually, in real time, running the new code.
So I thought of mkfifo and inotifywait. Instead of a folder of 100,000 files, I can make a folder of 100,000 named pipes. So far so good. The legacy code goes to open the files, not knowing that they are in fact named pipes. The problem is, I don't know what order the legacy code is going to open the files in (nice, right?). So I would like to TRIGGER the named pipe WRITE (from my fancy new code) when the legacy code goes in for the read. I can't spawn 100,000 writes and have them all block. So I thought hey - inotifywait makes sense. Every time the legacy code goes to open the pipe, it triggers a read event, which can then be used to spawn the pipe writer in the background. The problem is... inotifywait doesn't trigger the read event until AFTER the writer has been spawned!
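Roughly, the setup I have in mind looks like this (the folder, the file list, and fetch-file are stand-ins for my real paths and my fancy new code):
mkdir -p /srv/shim
while read -r name; do          # one named pipe per file the binary expects
    mkfifo "/srv/shim/$name"
done < filelist.txt
inotifywait -m -e open --format '%f' /srv/shim |
while read -r name; do          # legacy open detected -> spawn the writer
    fetch-file "$name" > "/srv/shim/$name" &
done
It's that last loop where the timing breaks down.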
Any ideas of how to solve this? Basically - I want to intercept a file open, block for a couple hundred ms while I retrieve the contents of the file, then return those contents. Ideally I don't have to create a custom FUSE filesystem to do this... it's just a read-only file open. The problem is this needs to run fast and in parallel... and I don't know which files are going to be opened in what order. Gotta be a quick and dirty way!
Thanks in advance for everyone's time.

Perforce: Need some introduction on Perforce streams

I am new to Perforce streams. I have gone through some docs on the net but have not clearly understood what the main use of Perforce streams is.
Can anyone please give me a brief intro to Perforce streams? What is the main purpose? When is it useful?
It would be easier to answer this question if you gave some hint of your context -- do you already understand the general concepts of branching and merging as they're used in other tools, or better yet, are you already familiar with how branching works in Perforce without streams? If so I could give you some specifics on what the benefits of streams are relative to manually managing branch and client views.
For this answer though I'm going to assume you're new to branching in general and will simply direct you to the 2006 Google Tech Talk "The Flow of Change", which was given by Laura Wingerd, one of the chief architects of Perforce streams. This is from about 5 years before streams were actually implemented in Perforce (the 2011.1 release), but the core ideas around what it means to manage the flow of change between variants of software are all there. Hopefully with the additional context of the stream docs you've already read it'll be more clear why this is useful in the real world.
https://www.youtube.com/watch?v=AJ-CpGsCpM0
If you're already familiar with branching in Perforce, you're aware that a branch can be any arbitrary collection of files, managed by two types of view:
One or more client views, which define the set of files you need to map to your workspace in order to sync the branch
One or more branch views, which define how to merge changes between this branch and other branches. (Even if you don't define permanent branch specs, if you run p4 integrate src... dst... that's an ad hoc branch view.)
The main purpose of streams from a technical standpoint is to eliminate the work of maintaining these views. With "classic" Perforce branching, you might declare the file path //depot/main/... is your mainline and //depot/rel1/... is your rel1 release branch, and then define views like this:
Branch: rel1
View:
	//depot/main/... //depot/rel1/...

Client: main-ws
View:
	//depot/main/... //main-ws/...

Client: rel1-ws
View:
	//depot/rel1/... //rel1-ws/...
If you wanted to have one workspace and switch between the two branches you'd do something like:
p4 client -t rel1-ws
p4 sync
(do work in rel1)
p4 submit
p4 client -t main-ws
p4 sync
p4 integ -r -b rel1
This is a very simple example of course -- if you decide you want to unmap some files from the branch, then you have to make that change in both client specs and possibly the branch view, if you create more branches that's more client specs and more branch specs, etc.
With streams the same simple two-branch setup is represented by two streams:
Stream: //depot/main
Parent: none
Type: mainline
Paths:
	share ...

Stream: //depot/rel1
Parent: //depot/main
Type: release
Paths:
	share ...
To do work in both streams you'd do:
p4 switch rel1
(do work in rel1)
p4 submit
p4 switch main
p4 merge --from rel1
All tasks around managing branch and client views are handled automatically -- the switch command generates a client view appropriate to the named stream and syncs it (it also shelves your work in progress, or optionally relocates it to the new stream similar to a git checkout command), and the merge command generates a branch view that maps between the current stream and the named source stream.
More complex views are also handled; for example, if you want to ignore all .o files in all workspaces associated with either of these streams, just add this to the //depot/main stream:
Ignored:
	.o
This is automatically inherited by all child streams and is reflected in all automatically generated client and branch views (it's like adding a -//depot/branch/....o //client/... line to all your client views at once).
There's a lot more that you can do with stream definitions but hopefully that gives you the general idea -- the point is to take all the work that people do to manage views associated with codelines and centralize/automate it for ease of use, as well as provide nice syntactic sugar like p4 switch and p4 merge --from.

How do I monitor changes of files and only look at them when the changes are finished?

I'm currently monitoring files in node.js using fs.watch. The problem I have is, for example, let's say I copy a 1 gig file into a folder I'm watching. The moment the file starts copying I get a notification about the file. If I start reading it immediately I end up with bad data, since the file has not finished copying. For example, a zip file has its table of contents at the end, but I'd end up reading it before its table of contents has been written.
Off the top of my head I could set up some task to call fs.stat on the file every N seconds and only try to read it when the stats stop changing. That would work, but it seems less than ideal: I'd like my app to be as responsive as possible, and calling stat on a bunch of files every second seems heavy, while calling stat only every 5 or 10 seconds seems unresponsive.
Is there some more robust way to get notified when a file has finished being modified?
So I did a project last year which required doing "file watching". There is a better library out there than fs.watch. Check out npm chokidar.
https://www.npmjs.com/package/chokidar
Under the hood it uses fs.watch, but wraps improvements around it.
There is a property called awaitWriteFinish. Really it's doing some polling on the file to determine whether or not the file is finished writing. I used it and it really works great.
Setting this property will allow you to work against that file, always ensuring that the file has been completely written. And you don't need to go off and implement your own method of determining if the file is complete. Should save a bunch of time.
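Something like this (the watched path and the handler are just placeholders):
const chokidar = require('chokidar');

// wait until the file size has been stable for 2 seconds before
// emitting the event, re-checking the size every 100 ms
const watcher = chokidar.watch('/folder/you/are/watching', {
  awaitWriteFinish: {
    stabilityThreshold: 2000,
    pollInterval: 100
  }
});

// 'add' now only fires once the copy has finished writing
watcher.on('add', filePath => {
  console.log('finished writing:', filePath);
});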
Aside from that, I don't believe you can really get away from polling with regard to determining if a file is finished writing. Chokidar is still polling, it's just that you don't need to write the logic to do it. And you can configure the polling interval if CPU utilization is deemed to be too high.
Edit: Would also like to add, to just give it a shot and see how it works. I get you want it as responsive as possible... But having something working is better than having something not working at all. It might be that even with a polling solution it's not even an issue for you. If it's deemed a performance problem, then go address it at that time and seek a "better" solution.

How to efficiently work with a task stream?

Disclaimer: I have left the question open, but my slow branching issue was due to the server being overloaded, so this is not the usual Perforce behavior. Branching 10K files now takes about 30 seconds.
I am a new Perforce 2014 user. I have created a stream depot, and put the legacy application (about 10,000 cpp files) in a development branch. That was relatively quick for an initial import, about 1 hour to upload everything.
Now I want to create a 'lightweight' task stream to work on a new feature.
I use the default menu > new stream > type task ... I select "Branch file from parent on stream creation"
To my surprise, it takes ages (about 1 hour) to create a new task stream because it is branching every file individually. Coming from other SCM tools (git, svn, ...), I would expect the process to be almost instant.
Now my questions are:
Is this the expected behavior?
Alternatively, is there a way to create a task stream more quickly, and only branch the files I intend to modify?
Creating a new task stream of 10k files SHOULD be a very quick operation (on the order of a second or two?) since no actual file content is being transferred, assuming you're starting with a synced workspace. If you're creating a brand new workspace for the new stream, then part of creating the new workspace is going to be syncing the files; I'd expect that to take about as long as the submit did since it's the same amount of data being transferred.
Make sure when creating a new stream that you're not creating a new workspace. In the visual client there's an option to "create a workspace"; make sure to uncheck that box or it'll make a new workspace and then sync it, which is the part that'll take an hour.
From the command line, starting from a workspace of //stream/parent, here's what you'd do to make a new task stream:
p4 stream -t task -P //stream/parent //stream/mynewtask01
p4 populate -r -S //stream/mynewtask01
p4 client -s -S //stream/mynewtask01
p4 sync
The "stream" and "client" commands don't actually operate on any files, so they'll be really quick no matter what. The "populate" will branch all 10k files, but it does it on the back end without actually moving any content around, so it'll also be really quick (if you got up into the millions or billions it might take an appreciable amount of time depending on the server hardware, but 10k is nothing). The "sync" will be very quick if you were already synced to //stream/parent, because all the files are already there; again, it's just moving pointers around on the server side rather than transferring the file content.
I have found a workaround which speeds things up a bit, and is actually closer to what I intend to do:
Task streams are created by default with the following "stream view":
	share ...
I have replaced it with:
	import ...
	share /directory/I/actually/want/to/modify/...
So I skip branching most of the files, and it is working fine.

Perforce Streams - Isolating imported libraries

I need some guidance on a use case I've run into when using Perforce Streams. Say I have the following structure:
//ProductA/Dev
	share ...
//ProductA/Main
	share ...
	import Component1/... //Component1/Release-1_0/...
//ProductA/Release-1_0
	share ...
//Component1/Dev
	share ...
//Component1/Main
	share ...
//Component1/Release-1_0
	share ...
ProductA_Main imports code from Component1_Release-1_0. Whenever Component1_Release-1_0 gets updated, it will automatically be available to ProductA (but read-only).
Now, the problem I'm running into is that since ProductA_Release-1_0 inherits from Main and thus also imports Component1_Release-1_0, any code or changes made to the component will immediately affect the ProductA release. This sort of side effect seems very risky.
Is there any way to isolate the code such that in the release stream ALL code changes are tracked (even code that was imported) and there are zero side effects from other stream depots, while for the main and dev streams the code is still imported? This way, the release will have zero side effects, while main and dev conveniently import any changes made in the depot.
I know one option would be to create some sort of product specific release stream in the Component1 depot, but that seems a bit of a kludge since Component1 shouldn't need any references to ProductA.
If you are just looking to be able to rebuild previous versions, you can use labels to sync the stream back to the exact state it was in at the time, by giving a changelist number (or label) to p4 sync.
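For example (the changelist number and label name below are placeholders):
p4 sync //ProductA/Release-1_0/...@12345
p4 sync //ProductA/Release-1_0/...@rel-1_0-build-7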
If you are looking for explicit change tracking, you may want to branch the component into your release line. This will make the release copy of the library completely immune to changes in the other stream, unless you choose to branch and reconcile the data from there. If you think you might make independent changes to the libraries in order to patch bugs, this might be something to consider. Of course, Perforce won't duplicate the file content on the server, just the pointers to it in the metadata, and since you're already importing the files into the stream, you're already putting copies of them on your build machines, so there shouldn't be any "waste" except on the metadata front.
In the end, this looks like a policy question. Rebuilding can be done by syncing back to a previous version, but if you want to float the library fixes into the main code line, leave it as is; if you want to lock down the libraries and make the changes explicit, I'd just branch the libraries in.
Integrating into your release branch
In answer to the question in the comments, if you choose to integrate directly into your release branch, you'll need to remove the import line from the stream specification and replace it with an isolate line, which would then place the code only in the release branch. Then you would use the standard p4 integrate command (or P4V) to integrate from //Component1/Release-1_0/... to //ProductA/Main/Component1/....
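A rough sketch of that change, reusing the paths from the question (an illustration of the shape, not an exact recipe): the Paths in the stream spec would become something like
Paths:
	share ...
	isolate Component1/...
and then the library is branched in explicitly:
p4 integrate //Component1/Release-1_0/... //ProductA/Main/Component1/...
p4 resolve -am      # only needed if files already exist at the target
p4 submit -d "Branch Component1 Release-1_0 into ProductA"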
Firming up a separate component stream
One final thought is that you could create another stream on the //Component1 line to act as a placeholder for the release.
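One way that might look (the stream name here is hypothetical) is to branch the component's release into a product-specific stream and point the import at that instead:
p4 stream -t release -P //Component1/Release-1_0 //Component1/ProductA-1_0
p4 populate -r -S //Component1/ProductA-1_0
and in the ProductA stream spec:
	import Component1/... //Component1/ProductA-1_0/...
That way the product controls when the placeholder stream picks up new Component1 changes.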
