Is it possible to use multiple p4aliases files in Perforce?

Is it possible to use multiple p4aliases files, like one personal p4aliases file and one project-related one? I do not see a way to source or concatenate multiple files.

Practical answer: not really, no -- you should just copy and paste the project alias file into your personal alias file. A fun trick here is to keep the aliases file in the depot, so you can use merging to pull project-level changes into your personal file without having your own changes go back.
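For example, a sketch of that depot-merge workflow, with both depot locations assumed (project aliases under //depot/tools, your personal copy under //depot/users/me, and P4ALIASES pointing at the synced personal file):
p4 integrate //depot/tools/p4aliases //depot/users/me/p4aliases
p4 resolve -am
p4 submit -d "pull project-level alias changes into my personal p4aliases"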
Impractical answer: run something in the background that will concatenate the files together for you and surface them as a single p4aliases file, e.g. https://superuser.com/questions/762590/can-i-create-a-symlink-esque-file-to-merge-two-files-together
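Or, less elaborately, a background job (cron, a file watcher, a login script) can simply rebuild the combined file whenever either input changes; the paths here are assumptions, with ~/.p4aliases as the default P4ALIASES target on Linux:
cat ~/.p4aliases.personal /path/to/project/p4aliases > ~/.p4aliases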

Related

Is there a simple way to make perforce 'automatically' ensure that two files, under different paths, always contain the same contents?

We currently have a header file that sits in two different depots (identical copies), and whenever we update it, we have to manually make sure that the other copy is updated as well.
Is there a simple way to get Perforce to enforce this? Or would I need to set up something with triggers? (If so, I'm a bit worried about doing it 'properly', since I don't have any experience with them.)
I assume you'd need admin access to the perforce server to do this?
To do this with a trigger, you'd want to put a change-commit trigger on the file:
Triggers:
copy-always change-commit //depot/my/file.h "my-copy-script"
and then my-copy-script would run commands like:
p4 copy //depot/my/file.h //depot/my/other/file.h
p4 submit -d "copy my file to my other file"
But! Keeping two identical copies is an antipattern and you shouldn't do it. Keep one file and use client mappings, branch mappings, streams, or symlinks to make it look like it's in two places. The exact solution you use depends on why you think you need two copies of this file in the first place. :)

What is the best way to make config changes in conf files in Ansible?

Initially I used a makefile to deploy my application on Linux.
I had various sed commands to replace variables like the PHP upload file size, post size, log file location etc.
Now I am shifting to Ansible. I know I can copy the files, but how can I make changes to the conf files? For example, if I just want to change the upload_filesize = 50M parameter, I don't want to copy the whole conf file and then replace it with my own version.
Sometimes it's only a one-line change. Is there a better way to edit config files in Ansible?
If it's an INI file, then your specific requirements can probably be met by the ini_file module (http://docs.ansible.com/ini_file_module.html).
If your change is limited to just a few lines (or even one), then you can look at the lineinfile module (http://docs.ansible.com/lineinfile_module.html).
Should you need to change a large number of settings, however, it may be more beneficial to use the template (http://docs.ansible.com/template_module.html) or assemble (http://docs.ansible.com/assemble_module.html) module. The main disadvantage of this approach is that you may need to review your template or fragments whenever the upstream default configuration changes.
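For instance, hedged ad-hoc invocations of the first two modules might look like the following (the webservers host group, the /etc/php.ini path, and privilege escalation via -b are assumptions; PHP's actual directive is upload_max_filesize, and the same arguments work as playbook tasks):
ansible webservers -b -m lineinfile -a "dest=/etc/php.ini regexp='^upload_max_filesize' line='upload_max_filesize = 50M'"
ansible webservers -b -m ini_file -a "dest=/etc/php.ini section=PHP option=upload_max_filesize value=50M"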
Depending on the operating system, you can also drop a small override file such as /etc/php/apache2/conf.d/10-someinifilewithsetting.ini instead of editing the main php.ini.

How to create a copy of a directory on Linux with links

I have a series of directories on Linux and each directory contains lots of files and data. The data in those directories are automatically generated, but multiple users will need to perform more analysis on that data and generate more files, change the structure, etc.
Since these data directories are very large, I don't want several people to make a copy of the original data so I'd like to make a copy of the directory and link to the original from the new one. However, I'd like any changes to be kept only in the new directory, and leave the original read only. I'd prefer not to link only specific files that I define because the data in these directories is so varied.
So I'm wondering if there is a way to create a copy of a directory by linking to the original but keeping any changed files in the new directory only.
It turns out this is what I wanted:
cp -al <origdir> <newdir>
It will copy an entire directory and create hard links to the original files. If the original file is deleted, the copied file still exists, and vice-versa. This will work perfectly, but I found newdir must not already exist. As long as the original files are read-only, you'll be able to create an identical, safe copy of the original directory.
However, since you are looking for a way for people to write changes back, UnionFS is probably what you want: it provides a means to combine read-only and read-write locations into one.
Unionfs allows any mix of read-only and read-write branches, as well as insertion and deletion of branches anywhere in the fan-out.
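If your kernel doesn't ship UnionFS, the in-tree overlayfs gives you the same read-only-lower / writable-upper split; a minimal sketch, with every directory name assumed:
mkdir -p /data/upper /data/work /data/merged
mount -t overlay overlay -o lowerdir=/data/original,upperdir=/data/upper,workdir=/data/work /data/merged
# /data/merged now presents the original tree read-write; all changes land in
# /data/upper and /data/original is never touched.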
Originally I was going to recommend this (I use it a lot):
Assuming the permissions aren't an issue (e.g. only reading is required), I would suggest bind-mounting them into place.
mount -B <original> <new-location>
# or
mount --bind <original> <new-location>
<new-location> must exist as a folder.

Can I perforce integrate files from 2 directories into a single directory?

I've imported a project into a directory and the original directory structure is:
//depot/a/b/foo.txt
//depot/a/b/bar.txt
//depot/a/b/c/baz.txt
//depot/a/b/c/boz.txt
Note that none of the files in //depot/a/b have name conflicts with files in //depot/a/b/c.
I'd now like to reorganize things such that all of those text files reside under //depot/newdir/. I attempted this by creating a branch spec with view lines similar to the following:
//depot/a/b/... //depot/newdir/...
//depot/a/b/c/... //depot/newdir/...
This didn't work. When I integrate, the files in //depot/a/b/c clobber those in //depot/a/b. If I reverse the lines the files in //depot/a/b clobber those in //depot/a/b/c.
I've tried various wildcards (such as the following), hoping to be more specific, but I can't get it to work.
//depot/a/b/%%1.txt //depot/newdir/%%1.txt
//depot/a/b/c/%%1.txt //depot/newdir/%%1.txt
Is it possible to migrate all of the files as described above to the //depot/newdir area in a single branch spec?
You should be able to use the move command here. Note that p4 move requires the source files to be opened for edit first:
p4 edit //depot/a/b/* //depot/a/b/c/*
p4 move //depot/a/b/* //depot/newdir/*
p4 move //depot/a/b/c/* //depot/newdir/*
The two move commands will move all the files in b and c (but not any files in their subdirectories) to newdir.

What's the proper way to refactor a single file into multiple files and maintain version history in Perforce?

I have a file which has gotten too large, and should be refactored into two smaller files. What's the best way to do this in Perforce such that the relationship to the original file is maintained?
I'm adding two new files, and deleting the original in this case, but I would expect there to be some general solution to this problem.
I think the simplest case would be to add one new file which contains a subset of the content of the original, and delete that content from the original file, but leave it in place (it's trivial to delete it later anyway).
It would be nicest if the operation could be done in a single changelist to avoid any checkins which would break the build.
This can't be done in a single checkin, but it can certainly be done without "breaking the build". Let's say you want to split bigFile.cs into smallFile1.cs and smallFile2.cs. First, integrate the big file into the two little files and submit them:
p4 integrate bigFile.cs smallFile1.cs
p4 integrate bigFile.cs smallFile2.cs
p4 submit
You now have two extra files in your project directory, but they're doing no harm. Now check out smallFile1.cs and smallFile2.cs, along with your project file(s). Add references to the smaller files, remove the reference to the big one, edit the small files accordingly, and so on. Finally, mark bigFile.cs for delete and submit your changes.
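A sketch of that second changelist (the project file name is just a placeholder):
p4 edit smallFile1.cs smallFile2.cs MyProject.csproj
# trim each small file to its share of the content and fix the references
p4 delete bigFile.cs
p4 submit -d "replace bigFile.cs with smallFile1.cs and smallFile2.cs"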
You have now split your big file into two smaller files and the smaller files' history will show you their big file origins.
You can use the integrate command.
When you've made changes to a file that need to be propagated to another file, start the process with p4 integrate.
The simplest form of the command would be
p4 integrate fromFile toFile
I would therefore perform the following tasks:
run p4 integrate with toFile being the new file and fromFile being the large, original file
p4 submit
p4 edit both fromFile and toFile, and cut each down to its smaller portion of the original content.
p4 submit
With this method, your file history information is kept intact for all future revisions of the files.
This actually can be done with a single checkin. The steps are as follows:
integrate from the source file to both destination files as both raven and Scott Saad suggest
before submitting the new files, do a p4 edit on both files
make changes
p4 submit
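Put together, a sketch of that single changelist, reusing the file names from the earlier answer and deleting the original in the same change, as the question describes:
p4 integrate bigFile.cs smallFile1.cs
p4 integrate bigFile.cs smallFile2.cs
p4 edit smallFile1.cs smallFile2.cs
# cut each small file down to its portion of the content, update references
p4 delete bigFile.cs
p4 submit -d "split bigFile.cs into smallFile1.cs and smallFile2.cs"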
The complete file history shows up in the revision graph and time-lapse views. The only disadvantage that I can see to skipping the intermediate submit is that the action type changes from 'integrate' to 'add'. Because of that, other people might not realize there is more to the file history.
I think I slightly prefer the two-checkin process.
