Is it possible to exclude a depot subdirectory from "get latest revision" on a higher level directory in Perforce/P4V? - perforce

Using the P4V Perforce client, we regularly need to get the latest revision of a directory with hundreds of subdirectories of files, but recently a few files have been added which are extremely large compared to all the others. When selecting "get latest revision" on the higher-level directory, these large files can easily consume all available disk space before the sync errors out.
Is it possible to exclude those files/subdirectories when "get latest revision" is run by people who don't need them, without interfering with anyone who does need them?

If you don’t need those files, the best option is to exclude them from your client view. If you’re using streams, do this by creating a virtual stream.
I might also suggest moving the big files into a sibling directory to make it easier to avoid them. While Perforce’s client view management features make it possible to customize your view at a very fine-grained level, complex client views are often a “smell” that the depot isn’t organized well. In general, files that always need to be synced together should be close together in the namespace, and files that often need to be excluded should be in their own isolated section of the namespace (i.e. a top-level directory that can easily be avoided or mapped out as a single unit).
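For illustration, here is a minimal sketch of both approaches; all depot, client, and stream names below are made up:

```
# Classic client view: an exclusionary "-" line removes a subtree from syncs.
Client: my-client-no-bigfiles
Root:   /home/me/work
View:
    //depot/project/... //my-client-no-bigfiles/project/...
    -//depot/project/bigfiles/... //my-client-no-bigfiles/project/bigfiles/...

# Virtual stream: an "exclude" path hides the subtree without copying files.
Stream: //streams/main-no-bigfiles
Parent: //streams/main
Type:   virtual
Paths:
    share ...
    exclude bigfiles/...
```

Anyone who does need the big files simply keeps a client view (or stream) without the exclusion, so the two groups don't interfere with each other.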

Related

Version queries in URL to bypass browser cache

I'm writing a web application which will likely be updated frequently, including changes to css and js files which are typically cached aggressively by the browser.
In order for changes to become instantly visible to users, without affecting cache performance, I've come up with the following system:
Every resource file has a version. This version is appended with a ? sign, e.g. main.css becomes main.css?v=147. If I change a file, I increment the version in all references. (In practice I would probably just have a script to increment the version for all resources, every time I deploy an update.)
My questions are:
Is this a reasonable approach for production code? Am I missing something? (Or is there a better way?)
Does the question mark introduce additional overhead? I could incorporate the version number into the filename, if that is more efficient.
The approach sounds reasonable. Here are some points to consider:
If you have many different resource files with different version numbers, it can be quite some overhead for developers to manage them all correctly and increment them in the right situations.
You might need to implement a policy for your team,
or write a CI task to check that the devs did it right.
You could use one version number for all files. For example, if you already have a version number for the whole app, you could use that.
It makes "managing" the versions a no-op for developers.
It changes the links on every deploy, though.
Which solution performs better depends on how many resource files you have to manage, the frequency of deploys vs. the frequency of deploys that actually change a resource file, and the number of requests those files receive. It's a question of trade-offs.
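One way to sidestep manual version management entirely is to derive the version token from file contents, so it changes exactly when the file does. A minimal sketch in Python; the file names and HTML are made up for illustration:

```python
import hashlib
import re

def content_version(data):
    """Short content hash to use as a cache-busting version token."""
    return hashlib.sha1(data).hexdigest()[:8]

def rewrite_references(html, assets):
    """Append ?v=<hash> to every known asset reference in the HTML.

    `assets` maps asset paths (e.g. "main.css") to their current bytes,
    so the token changes automatically whenever the file content changes.
    """
    for path, data in assets.items():
        token = "%s?v=%s" % (path, content_version(data))
        html = re.sub(re.escape(path), token, html)
    return html

page = '<link rel="stylesheet" href="main.css">'
out = rewrite_references(page, {"main.css": b"body { color: red }"})
```

Run as part of the deploy step, this gives unchanged files a stable URL (so caches stay warm) while edited files get a new URL immediately.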

Perforce: How to remap different Stream paths to same Workspace folder?

We have been setting up Perforce in the studio, and we decided to work with Streams for the sake of simplicity. One of the problems that I have been running into is not being able to remap more than one folder from the Stream into the same target folder in the Workspace.
I know about the Overlay Operator (+), but this isn't allowed when setting up the Stream View Path. I tried to do it with Workspace Remap, but it doesn't seem to be working.
Basically what I am trying to do is
Dev/FolderA/... Dev/...
+Dev/FolderB/... Dev/...
FolderA and FolderB have different files that don't share the same name, and my only interest in having them in the same folder is for Build purposes in the local drive.
Any ideas?
PS: I know this is similar to "perforce client spec - making different depot paths map to the same client workspace path", the difference being that that approach only works for traditional local depot views (not streams).
Thanks!
The "+" lines are called overlay mappings, and they can't be used in stream workspace view specs.
The streams framework has several constraints that raw Perforce does not, and the simplicity you're looking for relies on those constraints. Another constraint you may have already noticed is that stream views can't have leading or embedded wildcards like '...this/example/...' or 'this/.../one'.
So if you're trying to make streams do exactly what you're used to doing with native Perforce, you could end up putting a lot of work into it for not a whole lot of gain. As with any framework, the best way to get the most out of streams is to start fresh and spin up a new workflow based on its strengths.
That being said, there is a sneaky trick you can try. You can create static, non-stream client views that map stream depot paths. Any mapping syntax you like can be used in non-stream client views.
Non-stream views can be used to sync stream files and to work on them, but not to submit them. To submit files you'll have to switch your workspace to a stream client, submit the files, and switch the workspace back. As long as you don't re-sync between switching views your files won't be rearranged on local disk.
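As a sketch of that trick (all client and depot names hypothetical), a static client whose view pulls from a stream depot can use the overlay syntax that stream views disallow:

```
Client: build-client
Root:   /home/me/build
View:
    //Streams/Dev/FolderA/... //build-client/Dev/...
    +//Streams/Dev/FolderB/... //build-client/Dev/...
```

Sync through this client for builds, then switch to the stream client when you need to submit.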
How well this works for you will depend on, among other things, which client tools you're using. Some tools may not allow non-stream clients to work on stream files. Other tools may allow it, but they might show inconsistent states because they don't expect you to be doing that. And of course subverting the framework like this might just make things more complicated for you in the long run.

Perforce: is it possible to force a branch/integrate workflow on a repo?

Say I've got a //Repo/... depot path. Currently devs generally tend to do all their work directly in there, which normally isn't a problem for small pieces of work. Every so often, this approach fails for various reasons, mainly because they're unable to submit the incomplete change to Live.
So, I was wondering, is there a way to enforce on the server that:
1) no files can be directly checked out from //Repo/...
2) users then branch to a private area (//Projects/...)
3) dev, test, submit, dev, test, submit, ...
4) on dev complete, they can re-integrate back into //Repo/...
I guess the last part is the problem, as files need to be checked out! Has anyone implemented something similar? Any suggestions are much appreciated.
There is no way (that I know of) to enforce this type of workflow in P4. You could try to approximate it by setting submit triggers, restricting permissions, or locking files; however, I believe it would only result in more work (micro-management) and frustrate you and your team.
The best way to establish and enforce any SCM workflow is to set it as company/studio policy. Your team should be responsible for following the agreed procedure and for determining (individually or through discussion) whether an issue can be fixed in the main line.
One note about the proposed workflow: creating a new branch for every issue will eventually cause problems, and at some point you will need to perform maintenance on the server to conserve disk space and keep depot browsing fast.
For more information about (over)branching in Perforce, read this Perforce blog entry from 2009: Perforce Anti-Patterns Part 2: Overuse of branching.
In many studios using Perforce, most developers have their own "working" branch which they continually re-use whenever changes are not safe or practical to make in the main line.
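If you did want to experiment with the trigger route despite the caveats above, the core check is small. This is only a sketch: the protected path, the message, and the trigger wiring (the `p4 triggers` table entry, and fetching the changelist's file list with `p4 files @=<change>`) are assumptions, not a drop-in solution:

```python
# Core of a hypothetical change-submit trigger check: refuse any submit
# that touches files under the protected depot path.

PROTECTED_PREFIX = "//Repo/"  # hypothetical path where direct submits are refused

def check_submit(depot_files):
    """Return (ok, message); reject if any file is under the protected path."""
    blocked = [f for f in depot_files if f.startswith(PROTECTED_PREFIX)]
    if blocked:
        return False, ("Direct submits under %s are not allowed; "
                       "branch to //Projects/... and integrate back."
                       % PROTECTED_PREFIX)
    return True, ""
```

Note this would also block the re-integration step (point 4), which is exactly the problem the question identifies; you would need to exempt a group of integrators via protections, at which point policy is probably simpler.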
If I understand your question properly, you should try the shelving and working-offline features of Perforce. Process is the main thing needed to achieve success in this scenario, so you may need to set up the right process around them.
For more info about shelving and working offline with Perforce, you can try the following link:
http://www.perforce.com/perforce/doc.current/manuals/cmdref/shelve.html
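To illustrate the shelving idea (the changelist number is invented), shelving parks work-in-progress on the server without submitting it:

```
p4 shelve -c 1234        # store the open files of pending change 1234 on the server
p4 revert -c 1234 //...  # optionally clean them out of the local workspace
p4 unshelve -s 1234      # later (or in another workspace), restore the shelved files
```

This lets incomplete work leave a developer's machine without ever entering the main line.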

What kind of security issues will I have if I provide my web app write access?

I would like to give my web application write access to a particular folder on my web server. My web app can create files on this folder and can write data to those files. However, the web app does not provide any interface to the users nor does it publicize the fact that it can create files or write to files. Am I susceptible to any security vulnerabilities? If so, what are they?
You are susceptible to having your server tricked into writing malicious files into that location.
The issues that can arise from that depend on what happens with that folder.
Is it web-accessible?
Then malicious files can be hosted, such as stealing cookies or serving up malware.
Is it a folder where applications are executed automatically?
This would be madness. Do not do this.
Is just some place where you store files for later processing?
Consider what could happen if malicious files are put there. Malicious PDFs, say, are fed into your PDF processing system, a bug in the PDF parser is exploited to execute malicious code, and then it's all over.
Basically, the issue you expose yourself to, potentially, is as I said - malicious files in that location. You can think through carefully what happens in that folder, and how exposed it is, and decide for yourself how risky it is.
With those risks identified, you can then decide how to proceed. Since you presumably don't allow direct uploads to that area, you can consider the risk significantly lower: you are assessing a situation in which someone has found a bug in your web server that lets them save a file somewhere without you providing access. I'd hazard that there aren't hundreds of these types of issues, though there may be some. Hence it is worth minimising the risk posed by a file in that folder, by making sure the folder and the files in it are used in a restricted way and, if possible, checked to see that they are "good" files.
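Restricting how that folder is used can be done in code as well as in server config. A minimal sketch (the `.txt`-only rule is an arbitrary example of a tight restriction): confine writes to one directory, reject path traversal, and only allow a known file shape:

```python
import os

def safe_write(base_dir, name, data):
    """Write data under base_dir, refusing traversal and unexpected names.

    Illustrative mitigation only: even if some code path is tricked into
    calling this with attacker-influenced input, it cannot plant files
    outside base_dir or with an unexpected extension.
    """
    # Reject anything with directory components, and pin the extension.
    if os.path.basename(name) != name or not name.endswith(".txt"):
        raise ValueError("bad file name: %r" % name)
    target = os.path.realpath(os.path.join(base_dir, name))
    # Belt-and-braces: the resolved path must still be inside base_dir.
    if not target.startswith(os.path.realpath(base_dir) + os.sep):
        raise ValueError("path escapes base directory")
    with open(target, "wb") as fh:
        fh.write(data)
    return target
```

Combined with filesystem permissions (the web app's user can write only that one directory, and nothing there is executable or web-served), this keeps the blast radius of a bug small.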

Best way to determine whether an attached document will harm a user when opened

We're writing a feature that will allow our users to "attach" things like Word documents, Excel spreadsheets, pictures, pdfs to documents in our application - just like email.
We don't however, want to allow them to attach .exe, .bat, .reg files, or anything else that might harm them if they opened it - so we're proposing to have a whitelist of allowed file types.
Does anyone know of a better way to determine whether a document is safe? (i.e. does not have the ability to harm a user's computer).
Or instead a resource that would give us a list of commonly used safe documents to add to our whitelist as defaults?
You could use a whitelist plus the result of AssocIsDangerous (http://msdn.microsoft.com/en-us/library/bb773465(VS.85).aspx) to determine whether the file should be allowed: the whitelist for files to attach without warning, AssocIsDangerous to block altogether, and a default warning dialog for the remainder.
Be careful about the whitelist, because complex documents can contain macros, and their associated applications can have security vulnerabilities in their parsers.
What about Word macro viruses? There is no single "safe" document type. What if someone renames a .exe file to .doc: is that allowable? Don't depend on the file type or name alone, and never just trust client input. Validate it on the server side if at all possible, most likely using an anti-virus program or some other known utility.
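One cheap server-side layer on top of the whitelist, in the spirit of the renamed-.exe point above, is to check the file's leading bytes against the expected signature for its extension. The allowed set and signatures below are illustrative, not a complete or authoritative list, and this does not replace anti-virus scanning:

```python
# Extension whitelist backed by a magic-byte check, so a renamed .exe
# (which starts with b"MZ") is rejected even if it is called report.pdf.
ALLOWED_EXTENSIONS = {".pdf", ".png", ".jpg", ".docx", ".xlsx"}  # illustrative

MAGIC = {
    ".pdf":  b"%PDF",
    ".png":  b"\x89PNG",
    ".jpg":  b"\xff\xd8\xff",
    ".docx": b"PK\x03\x04",  # OOXML documents are zip containers
    ".xlsx": b"PK\x03\x04",
}

def is_allowed(filename, head):
    """Accept only whitelisted extensions whose leading bytes match."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        return False
    return head.startswith(MAGIC[ext])
```

Note a matching signature only proves the container format, not safety (a well-formed .docx can still carry macros), which is why the answers here still recommend scanning.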
Use a reverse proxy setup such as
www <-> HAVP <-> webserver
HAVP (http://www.server-side.de/) is a way to scan HTTP traffic through ClamAV or any other commercial antivirus software. It will prevent users from downloading infected files. If you need HTTPS or anything else, you can put another reverse proxy, or a web server in reverse-proxy mode, in front of HAVP to handle the SSL.
Nevertheless, it does not scan uploads, so it will not prevent infected files from being stored on the server; it only prevents them from being downloaded and thus propagated. So use it alongside regular file scanning (e.g. clamscan).
I honestly think you are best served by either Clam AV on Linux or Trend Micro on Windows.