How to share and import data between two mainline streams in Perforce

I have a mainline stream //depot/stream_mainline_1/...
I need to create another mainline stream //depot/stream_mainline_2/...
where I need to import (read-only) some of the folders and share some other folders in the newly created mainline stream.
Let’s say //depot/stream_mainline_1/... contains two folders:
test_folder1 -> Need to import this folder
test_folder2 -> Need to share this folder
I know we can import the data like below while creating the new mainline stream (//depot/stream_mainline_2/...):
view:
import test_folder1/... //depot/stream_mainline_1/test_folder1/...
Can someone please help me with the steps to share test_folder2?
Thanks in advance.

By definition there should only be one mainline stream in a stream hierarchy; each mainline is essentially the root of the tree, and everything else in the tree ultimately either branches or imports content from the mainline. If you have multiple mainlines, you have multiple stream hierarchies that are not related to one another, but that's not what you're trying to do here.
Whether you make your new stream a development or release stream depends on whether it's more or less "firm" than the mainline, i.e. whether its checkin policy is more or less strict.
Either way, you want to make the new stream a child of the mainline, and then you can simply list the path you want to import and the path you want to share, like this:
Parent: //depot/stream_mainline_1
Paths:
import test_folder1/...
share test_folder2/...
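For illustration, a complete spec for the new child stream might look roughly like this (the stream name //depot/stream_dev_1, the development type, and the owner are assumptions; adjust them to your own naming and firmness policy, and edit the form with "p4 stream //depot/stream_dev_1" or in P4V):

# hypothetical child stream of //depot/stream_mainline_1
Stream: //depot/stream_dev_1
Owner:  your_user
Name:   stream_dev_1
Parent: //depot/stream_mainline_1
Type:   development
Description:
    Imports test_folder1 read-only and shares test_folder2 with the mainline.
Paths:
    import test_folder1/...
    share test_folder2/...

Files under the import path are read-only in the new stream, while changes under the share path can be merged and copied back and forth with //depot/stream_mainline_1.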

Related

Driver's source code structure requirements for Linux Kernel upstream

I am planning to rewrite my sensor's driver in order to try to get my module into the Linux kernel. I was wondering whether there are requirements regarding the organization of the source code. Is it mandatory to keep all the code in one single source file, or is it possible to split it up into several?
I would prefer a modular approach for my implementation, with one file containing the API and all the structures required for kernel registration, and another file with the low-level operations to exchange data with the sensor (i.e. mysensor.c & mysensor_core.c).
What are the requirements from this point of view?
Is there a limitation in terms of lines of codes for each file?
Note:
I tried to have a look at the official GitHub repo and it seems to me that the code is always limited to one single source file.
https://github.com/torvalds/linux/tree/master/drivers/misc
Here is an extract from "linux/drivers/iio/gyro/Makefile" as an example:
# Currently this is rolled into one module, split it if
# we ever create a separate SPI interface for MPU-3050
obj-$(CONFIG_MPU3050) += mpu3050.o
mpu3050-objs := mpu3050-core.o mpu3050-i2c.o
The "mpu3050.o" file used to build the "mpu3050.ko" module is built by linking two object files "mpu3050-core.o" and "mpu3050-i2c.o", each of which is built by compiling a correspondingly named source file.
Note that if the module is built from several source files as above, the base name of the final module "mpu3050" must be different to the base name of each of the source files "mpu3050-core" and "mpu3050-i2c". So in your case, if you want the final module to be called "mysensor.ko" then you will need to rename the "mysensor.c" file.
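Following that pattern, a Kbuild fragment for a two-file "mysensor" module might look like this (CONFIG_MYSENSOR and the renamed source files are hypothetical; only the naming scheme matters):

# Makefile/Kbuild sketch: mysensor.ko is linked from two objects,
# so no source file may itself be named mysensor.c
obj-$(CONFIG_MYSENSOR) += mysensor.o
mysensor-objs := mysensor-main.o mysensor-core.o

Here mysensor-main.c could hold the registration/API code and mysensor-core.c the low-level sensor access, and the build still produces a single mysensor.ko.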

Stream definition: Ignore all files but one filetype

We have a server with a depot that does not allow committing files that are in a client mapping, so I need a stream configuration.
Now I struggle with a task which I would assume should be simple:
We have a very large stream with lots of different file types, and I would like to check out the entire stream but get only a certain file type.
Can this be done with Perforce without blacklisting every file type in question?
Edit: Sorry that I (for some reason) omitted so much information in my question.
I am already setting up a virtual stream where the UI gives me three nice fields:
Paths – where I can enter import, share, and isolate paths
Remapping – ignored in my case
Ignored – here I can enter wildcards to ignore directories or files
I was hoping that by creating a virtual stream I actually could define the file types I want, e.g. I could write an import statement like
import RootDir/....txt //Depot/mainline/RootDir/....txt (note the four dots: three for the Perforce wildcard and the fourth as part of the ".txt" extension),
however the stream definition does not support this and only allows me to write
import RootDir/... //Depot/mainline/RootDir/...
Since I was not able to find a way to whitelist the files I wanted, I only knew a way to blacklist all the things I did not want, but I would like to avoid that because my Ignored list would be dozens of entries long.
Now I will look into that sync hint because I could use the full stream spec without filter and only sync the files I need on disk, which might be very good.
There are a few different things going on in your question, but this seems the most like a statement of what you're trying to do, so I'm going to zero in on it:
I would like to check out the entire stream but get only a certain file type.
If by "check out" you mean you only want to sync that file type to your local workspace:
p4 sync ....TXT
If by "check out" you mean you want to open only that file type for edit:
p4 edit ....TXT
ANY operation in Perforce that operates on files accepts an arbitrary file path, because Perforce tracks all of its state per-file. This is true whether you're using classic clients or streams.
There needs to be some mechanism for telling the Helix (Perforce) server that you only want to retrieve certain files from the stream.
Virtual Streams may be a good fit here, as they allow you to filter the view of an existing stream.
This means you can sync only the files you want and when you submit you will be submitting directly back to the stream your virtual stream is based on.
More information is available here:
https://www.perforce.com/perforce/doc.current/manuals/p4v/p4v_virtual_streams.html
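As a rough sketch (the stream and path names are hypothetical), a virtual stream spec that narrows the view to one directory could look like the following; note that the Paths field still cannot whitelist by file extension, so extension-level filtering has to happen either via the Ignored field (the blacklist you wanted to avoid) or at sync time with something like "p4 sync ....txt":

# hypothetical virtual child of //Depot/mainline
Stream: //Depot/mainline_rootdir
Parent: //Depot/mainline
Type:   virtual
Paths:
    share RootDir/...

Because the stream is virtual, submits from a workspace based on //Depot/mainline_rootdir go straight into //Depot/mainline.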

How to delete a feature branch (or any branch) in Perforce?

I'm a Perforce newbie and I'm just starting to familiarize myself with Perforce's branching functionality. One thing I do not understand is how to delete a feature branch after I'm done working with it and the changes have been merged back into the mainline branch like you would do with a feature branch in Git.
Can you delete branches in Perforce, or do they remain there permanently?
If it's a task stream (which is what I'd recommend for a short-lived "feature branch" type stream), you probably want to "unload" it:
p4 unload -s //depot/task_stream
This is basically like deleting the stream with "p4 stream -d", except that you can get it back later if you want to. As with "p4 stream -d", it also doesn't get rid of all of the files in the stream; the ones that you modified stay in the depot (so that you can follow the merge records back to the original submits if you want to), but all the unmodified files are unloaded (whereas with "stream -d" they're gone and there isn't any convenient record of what exact version they matched in the parent -- you can reconstruct it after the fact but it's harder). Using "p4 reload" brings the task stream back to life.
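For reference, the corresponding command is (same hypothetical stream name as above):

p4 reload -s //depot/task_stream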
If it's a normal stream and/or you want to get rid of it forever including the original changes in its depot path, you need to be an administrator (submitted changes in Perforce are generally considered Very Important and immutable unless you're an admin) and use the "obliterate" command, followed by deleting the stream spec:
p4 obliterate -y //depot/your_stream/...
p4 stream -d //depot/your_stream
Given your description I'd definitely recommend using task streams for features and "unloading" them when you're done.
If you're not using streams at all, the standard practice with branches is to either just leave them when you're done with them, or to reuse them (i.e. have an ongoing development branch that you merge into the mainline as you complete each feature). You can obliterate a branch (as described above in the stream example) but since this requires admin permissions it's not typical.

Integrate streams from different depots. Some files not integrate

I have following structure:
DepotA has streams main (a mainline stream) and dev (a development stream) as main's child.
DepotB has streams main (a mainline stream) and dev (a development stream) as main's child.
I populated //DepotA/dev. //DepotB/dev is empty. Now I'm trying to merge //DepotA/dev/... to //DepotB/dev/.... I can't do this with P4V - I simply get a weird error. But I can do it from the command line. However, after the merge the streams are different - I checked the sizes of the folders on my machine - //DepotA/dev is 43 GB but //DepotB/dev is 18 GB. Where are the other files? Why weren't they merged?
I suspect some of the size difference is due to the fact that files have not yet been changed in '//DepotB/dev', so lazy copies have been created.
These are simply pointers to the original files. Physical files are created once the source or targets are changed.
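If you want to confirm that the two streams' head revisions actually match despite the difference in on-disk size, a depot-side comparison such as the following (using the paths from your question) should report no differing files:

p4 diff2 -q //DepotA/dev/... //DepotB/dev/...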
It can be tricky to investigate integration issues without taking a detailed look at the history, so I recommend you contact Perforce Support, if you have further questions:
http://www.perforce.com/support-services

GDBM file import and export

I am migrating a system from the old server (Slackware) to the new one (Redhat). The system includes some .gdbm files. I found out that on my new server, when running
my $WEB_SERVICES = 'file.gdbm';
tie( %webservices, 'GDBM_File', $WEB_SERVICES, O_RDONLY, 0 );
the %webservices turns out to be empty. But this was working fine on my old server.
So my question is, are .gdbm files able to be simply transferred (using scp command) from one server to another (different operating system and different version of gdbm)?
Also I read the documentation at http://www.gnu.org.ua/software/gdbm/manual/gdbm.html#SEC12, which says .gdbm files need to be converted into a flat format before being sent over the network, but I'm still not sure how to do that.
Please help, thanks in advance!
On the old system, GDBM-tie to the hash, dump the hash. Move the dump to the new system. Read the dump into a hash, tie to GDBM to write it.
For dumping, use a platform independent serialisation format (Sereal is best), or if the dump needs to be human readable, Data::Dumper or similar for writing and Data::Undump for reading.
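A minimal sketch of that round trip, assuming core Perl's Storable as the platform-independent serializer (Sereal or Data::Dumper/Data::Undump would slot in the same way; the file names are made up):

On the old server:

use strict;
use warnings;
use GDBM_File;
use Storable qw(nstore);

# 'file.gdbm' and 'webservices.dump' are hypothetical names
tie my %webservices, 'GDBM_File', 'file.gdbm', &GDBM_READER, 0
    or die "Cannot open file.gdbm: $!";
my %plain = %webservices;              # copy out of the tied hash
nstore \%plain, 'webservices.dump';    # network-order, portable dump
untie %webservices;

On the new server:

use strict;
use warnings;
use GDBM_File;
use Storable qw(retrieve);

my $data = retrieve('webservices.dump');   # hashref written on the old server
tie my %webservices, 'GDBM_File', 'file.gdbm', &GDBM_WRCREAT, 0640
    or die "Cannot create file.gdbm: $!";
%webservices = %{ $data };                 # populate the fresh GDBM file
untie %webservices;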
