Customizing the output of Perforce RCS keyword expansion

I'd like to filter files using RCS keyword expansion so that instances of $Change$ are translated to 1745 rather than the default behaviour of $Change: 1745 $. I realize that this would prevent future expansions, but that's acceptable for my purposes.
Other methods of inserting the changelist number into a file are also welcome. This is the only method I've seen with Perforce that works during submission -- it's just that I'd like to clean up the output so the bare number can be inserted into version strings. Could this also be accomplished with triggers?

What you are asking for cannot be done with triggers. The only triggers that fire during submit are change-submit, change-content and change-commit. You can only retrieve the file content for the latter two, but with the change-content trigger the changelist number is not yet fixed, and with the change-commit trigger the file content is already committed and can't be changed. What's worse, though, is that you wouldn't have a way to submit changed file content back to the server from within your trigger.
The RCS keyword expansion works because it is done by the server itself and because Perforce does a refresh-after-submit, i.e. the client refreshes all files of a submitted change from the Perforce server, thereby getting the content with expanded RCS keywords.
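Since the server only ever writes the full $Change: 1745 $ form, one workaround is to let the keyword expand normally and strip the wrapper in a post-sync or build step. A minimal sketch, assuming a +k filetype; the file names are placeholders:

    # Enable RCS keyword expansion on the file.
    p4 edit -t text+k src/version.h
    p4 submit -d "enable keyword expansion"
    # After the refresh-after-submit the file contains: $Change: 1745 $
    # A build step can then reduce that to the bare number:
    sed -E 's/\$Change: ([0-9]+) \$/\1/' src/version.h > gen/version.h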

Related

Garbage characters in approved Chrome Extension

Trying to solve a mystery here... Submitted an extension update and it was approved last night and I rolled it out shortly after. I discovered that a JS error was thrown in the newly rolled out release...
Invalid or unexpected token
This error was not thrown in the build submitted for review. After inspecting the compiled js in the rolled out version, I discovered a bunch of garbage characters that were not present in the uploaded extension...
I'm wondering if the extension got corrupted in the review process? Has this happened to anyone else? I've submitted a new build for review with no changes and am hoping that this one will not have these garbage characters.
Some don't want to admit it, but it happens more often than you'd believe.
There are certain characters that are not always digested by the publication / review process.
The same character passes one time and gets mangled another.
It happened to me with the character §.
In short, I suggest you avoid these characters or use their escape sequences instead (e.g. \u00A7 for §).
If this corruption has had persistent consequences, such as incorrectly written entries in some storage system (localStorage, chrome.storage, IndexedDB, etc.), then after replacing the character with its escape sequence you will also have to patch the stored data to restore the expected values of those variables / objects.
In my specific case I had to insert some code inside the onInstalled event handler that checks whether the user is updating from that "faulty" version.
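A minimal sketch of that repair pattern; the version string 1.4.2 (standing in for the faulty release) and the storage key sectionMark are hypothetical:

    // Runs on install and update; repairs data written by the corrupted release.
    chrome.runtime.onInstalled.addListener((details) => {
      // details.previousVersion is only set when details.reason === 'update'.
      if (details.reason === 'update' && details.previousVersion === '1.4.2') {
        chrome.storage.local.get('sectionMark', (items) => {
          if (items.sectionMark !== '\u00A7') {
            // Restore the value the garbled code should have written.
            chrome.storage.local.set({ sectionMark: '\u00A7' });
          }
        });
      }
    });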

Webhook dm:version:deleted unstable

I have a webhook set up, but it seems to have some issues: dm:version:deleted is not always triggered.
As far as I can see it is active, but most often it just does nothing when I delete a file on BIM360.
I also have other webhooks active, like dm:version:added, dm:version:moved, etc., that all seem to work as they should.
My question might therefore be: are there any different setups in dm:version:deleted compared to the other webhooks?
Are there any known issues with the firing of the dm:version:deleted?
Would there be another way to detect the deletion of files on BIM360, other than checking all files in a project?
Thank you in advance.
BIM360 uses immutable file operations: a "file delete" operation does not really delete a file but creates a new version of that file/lineage with a specific type,
versions:autodesk.core:Deleted
so you should check for a file-modified event and look at the type of the new version. When a file is deleted on your end, look for
dm.version.added
events, and not
dm:version:deleted
events.
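A minimal sketch of that check, assuming an Express callback endpoint; the payload field names (payload.ext, payload.lineageUrn) are assumptions, so inspect a real callback body to confirm them:

    const express = require('express');
    const app = express();
    app.use(express.json());

    // Endpoint registered for dm.version.added callbacks.
    app.post('/forge/callback', (req, res) => {
      const payload = req.body.payload || {};
      // A BIM360 "delete" arrives as a new version of type versions:autodesk.core:Deleted.
      if (payload.ext === 'versions:autodesk.core:Deleted') {
        console.log('File deleted (tombstone version added):', payload.lineageUrn);
      }
      res.sendStatus(200); // acknowledge promptly so the callback is not retried
    });

    app.listen(3000);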

Why is usage of the downloadURL & updateURL keys called unusual and how do they work?

I was reading GM's wiki to determine the difference between @downloadURL & @updateURL (which I couldn't work out). But what confused me even more is that both are unadvised:
It is unusual to specify this value. Most scripts should omit it.
I'm surprised by that, as it's the only way for scripts to auto-update, and I don't see why these keys shouldn't be used.
The wiki itself is pretty lacking and I couldn't find good coverage elsewhere, so I have to ask here. I'd also appreciate more detailed info on these keys.
Use of those keys is discouraged mainly by Greasemonkey's lead developer. Most others, including the Tampermonkey team, feel no need for such a warning.
Also note that those directives are not always required for auto-updates to work.
Some reasons why he would say that it was unusual and that "most" scripts should omit it:
In almost all cases it is not needed; see how updates work and how those directives work, below.
Those directives are just more items that the script writer must check and maintain. Why create work if it is not needed?
The update implementation and those directives have been buggy and, perhaps, not well implemented in Greasemonkey.
Tampermonkey, and other engines, implement updates, and those directives, in a slightly different manner. This means that code that works on Tampermonkey may fail on Greasemonkey.
Note that that wiki entry was made by Greasemonkey's lead developer (Arantius) himself; so it wasn't just wiki noise.
How updates work:
Script updates are conducted in 4 phases:
The enabled phase and/or "forced" updates.
The check phase.
The download phase.
The parse and install phase.
For this question, we are only concerned with the check and download phases. We stipulate that updates are enabled and that the updated script was valid and installed correctly.
When updating scripts, Greasemonkey (and Tampermonkey) download files twice:
The first download, controlled by the script's @updateURL value, is just to check the file's @version (if any) and date -- to see if an update is available.
The second download, controlled by the script's @downloadURL value, is the actual download of the new script to install.
This download will only occur if the server file has a higher @version number than the local file and/or if the server file has a later date than the local file. (Beware that there are critical differences here between script engines.)
See "Why you might use @downloadURL and @updateURL", below, for reasons why 2 file downloads are used.
How @downloadURL and @updateURL work:
@downloadURL merely overrides the default internal "download URL" location.
@updateURL merely overrides the default internal "update URL" (or check) location.
In most cases, there is no need to do this; see below.
When you install a userscript, Greasemonkey automatically records the install location. No meta directive is needed.
By default, this is where Greasemonkey will both check for updates and download any updates.
But, if @downloadURL is specified, then Greasemonkey will both check and download from the specified location rather than the stored location.
But, if @updateURL is specified, then Greasemonkey will check (not download) from the "update" location given.
So: @updateURL overrides both @downloadURL and the default location for checking operations only.
While: @downloadURL overrides the default location for both checking and downloading (unless @updateURL is present).
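For illustration, a Metadata Block using both directives might look like this (the name and URLs are placeholders). The engine checks example.meta.js for a newer @version and, only when it finds one, downloads example.user.js:

    // ==UserScript==
    // @name         Example Script
    // @version      1.2.3
    // @updateURL    https://example.com/scripts/example.meta.js
    // @downloadURL  https://example.com/scripts/example.user.js
    // @match        https://example.com/*
    // ==/UserScript==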
Why you might use #downloadURL and #updateURL:
First, there are 2 downloads and potentially 2 different locations mainly for speed and bandwidth reasons.
Consider a scenario where a very large userscript has thousands of users:
Those users' browsers would constantly hammer the host server checking to see if an update was available. Most of the time, one wouldn't be and the large file would be downloaded over and over again unnecessarily.
This got to be a problem for sites like the now defunct userscripts.org.
Thus a system developed whereby a separate file was created just to hold the version (and date) information, so the server would now have veryLarge.user.js and veryLarge.meta.js.
veryLarge.meta.js would be updated (by the developer) every time the userscript was, and would just contain the Metadata Block from veryLarge.user.js.
So the thousands of browsers would just repeatedly download the much smaller veryLarge.meta.js -- saving everybody time and saving the server bandwidth.
Nowadays, both Greasemonkey and Tampermonkey will automatically look for a *.meta.js file, so there is normally no need to specify one separately.
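To make that concrete, veryLarge.meta.js would contain nothing but the Metadata Block copied from veryLarge.user.js (the name and version here are placeholders):

    // ==UserScript==
    // @name     Very Large Script
    // @version  2.0.1
    // ==/UserScript==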
So, why explicitly specify @downloadURL and/or @updateURL? Some possible reasons:
Your script can be installed multiple ways or from multiple sources (cut and paste, locally copied file, secondary server, etc.) and you only want to maintain one "master" version.
You want to track how many initial and/or upgrade downloads your script has.
@downloadURL is also a handy "self documenting" way of recording/conveying where the user got the script from.
You want the *.meta.js file on a different server than the userscript for some reason.
Possibly http versus https issues (need to dig into this some day).
You are a bad guy and you want the script to update a malicious version at some future date from a server that you control -- that is not where the script was installed from.
Some differences between Greasemonkey and Tampermonkey:
(Warning: I haven't verified all of this in a while. Subject to change anyway as Tampermonkey is constantly improving (and Greasemonkey changes a lot too).)
Tampermonkey requires a @version directive on both the current and newer file. This is how Tampermonkey determines if an update is available.
Greasemonkey will also use this method, so always include @version in scripts you might want to auto-update.
However, Greasemonkey also requires that the update file be newer. And if no version is present, Greasemonkey will just compare the dates. Note that this has caused problems in Greasemonkey in the past, and it also foolishly assumes that many different machines are accurately synched with the correct date and time.
Greasemonkey will only update from https:// schemes by default, but can optionally be set to allow http:// and ftp:// schemes.
Neither engine allows updates from file:// schemes.

Does Deactivate/Reactivate of a SharePoint Feature Increment the Version?

We have a complex scenario which requires a timer job to run after content deployment to a SP 2010 site collection. The timer job automatically deactivates/reactivates a branding feature which is responsible for setting the master page for the site collection, among other things.
We have had several feature upgrades along the way, and neglected to call .Upgrade() on the feature in that specific site collection. So all of the updated CSS, master page, page layouts etc. are out of date on that SC.
The strange part is that when I checked the version number of that feature in this SC, it shows as the latest version. The custom upgrade action clearly didn't run and update the files, because nobody called .Upgrade().
One of my colleagues suggested that the deactivate/reactivate process done by the timer job would update the version number, meaning that I can no longer call Upgrade()!
Is that true? Does a deactivate/reactivate cycle for a feature automatically update the feature version number?
Is there an easy way to fix this mess? Some way to decrement the version number programmatically, then call Upgrade()??
On 1: No. Feature deactivation / reactivation does not trigger an upgrade. See this article by Chris O'Brien: http://www.sharepointnutsandbolts.com/2010/06/feature-upgrade-part-1-fundamentals.html
Feature upgrade does NOT happen automatically (including when the Feature is deactivated/reactivated)! The only way to upgrade a Feature is to call SPFeature.Upgrade(), typically in conjunction with one of the QueryFeatures() methods. My tool which I'll go on to talk about is a custom application page which helps you with this part -- note there is no STSADM command, PowerShell cmdlet or user interface to do this out-of-the-box.
Is your timer job cycling the feature activation with force? Then, yes, it is triggering the feature upgrade/feature update, as the SPFeature.Activate documentation notes.
Why the feature version is incremented, I'm not sure. When you have a feature, install a new feature version, and then deactivate / reactivate it, the feature version stays the same unless you run an Upgrade; see also this related question stating the same: https://sharepoint.stackexchange.com/questions/41476/feature-upgrading-question
I'm guessing your timer job is using force? Otherwise I'm not quite sure what is happening.
On 2: I don't know if it is possible to decrease the version number, but the safest way would be to create a new version with a grand "clean up" feature receiver which sets everything correct, i.e. checks which steps of the feature upgrade have already happened (e.g. new list created, new content type added) and which haven't, and then executes only the steps which have not run yet. For the latter part you can fortunately reuse the existing code, so you would only need to write the "clean up" / checking code.
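For reference, the upgrade itself has to be kicked off in code. A minimal sketch of the QueryFeatures() + Upgrade() pattern the article describes (the feature GUID and web application URL are placeholders):

    using System;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.Administration;

    class FeatureUpgrader
    {
        static void Main()
        {
            Guid featureId = new Guid("00000000-0000-0000-0000-000000000000"); // your feature ID
            SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://yourserver"));

            // QueryFeatures(id, true) returns only the instances that still need upgrading.
            foreach (SPFeature feature in webApp.QueryFeatures(featureId, true))
            {
                // Runs the declarative upgrade actions and FeatureUpgrading events.
                feature.Upgrade(false);
            }
        }
    }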
After some testing I found that simply deactivating and reactivating the feature will increment the version number and completely screw up your upgrade! I even watched the update come through in the content database. As soon as you deactivate/reactivate the updated feature, the new version number pops into the content DB. Of course the upgrade doesn't actually run, it just increments the version number.
This means that if you then call .Upgrade() it won't work because SharePoint thinks it's already been upgraded!!
To fix this, I updated the row in the content database to set the feature version back to 0.0.0.0 for that particular web, and then I could run .Upgrade() just fine... but that's not exactly a supported solution. If anyone else has a better idea, drop a reply.

Check in files on Perforce

Say, I have a project called example.vcproj to which I have added files:
1. first_file.c
2. second_file.c
first_file.c was added 10 days ago and still has not been code reviewed. Therefore I am still waiting for it, and at this moment I cannot check in the files.
second_file.c was added recently. It has gone through code review and is ready to be checked in. However, since my first file is still in review, I am not able to go ahead and submit the second one, mainly because of the dependency on example.vcproj.
Please let me know the simplest way to resolve this conflict, other than temporarily removing first_file.c, reverting example.vcproj, and checking in the most recent changes. Thanks.
There are a couple of things you can try.
First, you can shelve all of your files prior to submit. That at least means you are in no danger of losing any work, as the files will be stored on the Perforce server. After you receive code review you can check them in.
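A minimal command-line sketch of the shelving approach (the changelist number is a placeholder):

    p4 shelve -c 12345      # store all files open in change 12345 on the server
    p4 unshelve -s 12345    # later: restore the shelved revisions to your workspace
    p4 shelve -d -c 12345   # once review passes, discard the shelf...
    p4 submit -c 12345      # ...and submit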
Second, you can create a private branch or stream for your work-in-progress. Then whenever you hit a stable milestone on your private branch, you can get code review approval and promote it to the shared branch.
