I am using Mozilla's MakeAPI for Popcorn Maker. It saves its data to Elasticsearch. I added and saved a new field in Elasticsearch, but on retrieval I was not able to get that field's value, so I updated the node module Makeapi_client directly.
My question is: is this approach good? It might be the case that the user updates the npm package; what will happen then to the code I wrote in it?
"Patching" a library in this way is not recommended. Next time you update MakeAPI your changes will be overridden. You have a few options:
Submit a Pull Request - Clone the original repository, make your changes, and then create a pull request. A pull request is a request you make to the original library authors to merge your changes into their canonical library. In essence, you'll be fixing the library for everyone!
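For reference, the usual flow looks roughly like this (a sketch; the repository URL and branch name are hypothetical):

git clone https://github.com/your-fork/makeapi-client.git   # clone your fork, not the upstream repo
cd makeapi-client
git checkout -b expose-new-field
# ... make and commit your changes ...
git push origin expose-new-field
# then open a pull request against the upstream repository on GitHub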
Make the patch in a different file - Nothing is stopping you from doing something like this:
// Load the client and extend its prototype with the behavior you need.
var Client = require('makeapi').Makeapi_client;

Client.prototype.someNewMethod = function() {};
Or something similar. In essence, you create the patch in a separate file that you check into version control, so it won't be overwritten when the library is updated.
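A fuller patch module might look like this (a sketch; the export name Makeapi_client and the field name myNewField are assumptions, adjust them to your setup):

// patches/makeapi-client.js
var Client = require('makeapi').Makeapi_client;

// Expose the new Elasticsearch field on retrieved makes without
// touching the installed module's source.
Client.prototype.getMyNewField = function(make) {
  return make && make.myNewField;
};

module.exports = Client;

Requiring patches/makeapi-client.js everywhere instead of makeapi keeps the patch in one tracked place.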
Please note that updates to the library may still break your changes.
The scenario is as follows: a user has opened a merge request that adds a new feature to my code. Their code, however, contains a few bugs. I know how to fix the appropriate parts, but I'd prefer to keep my repository free of code with known issues. Therefore I'd like to modify their code before merging it.
I know I could also manually copy over the changes but I would still like to give the user that opened the merge request credit for their contribution.
You could take the user's changes (pull them over to your development environment) using Git, add your fix-up commits on top, and then merge the lot -- this way both the user's commits and yours remain visible in the history.
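For instance (a sketch; the remote name, branch names, and commit message are assumptions):

# Fetch the contributor's branch and check it out locally.
git fetch contributor feature-x
git checkout -b feature-x contributor/feature-x
# ... fix the bugs, then commit your corrections on top ...
git commit -am "Fix issues found in review"
# Merge the combined history into your main branch.
git checkout main
git merge --no-ff feature-x

The contributor keeps authorship of their commits, and your fixes appear as separate commits on top.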
You could also add your user to a CREDITS file to give them credit as a contributor.
You could also ask the user to fix the bugs (which helps the user too, to learn how to improve their code) and then merge the change.
I need to write a plugin that creates a table in the database and reads some settings from the installation form. I can easily create the form, but I'm having difficulty running a script after installation to read the options and create the table. Is it possible to run such a simple script at all, or do you need to create everything (models, vehicles, and so on)?
I would appreciate it if anyone could give me directions on how to do this. The MODX documentation is not clear about it, and the https://github.com/splittingred/Doodles/tree/production sample repo contains multiple elements I'm not familiar with and believe I don't need at all.
Typically you'd use a resolver to run code after the install.
While the setup options are discussed in the question comments, the package attributes there are actually used to generate the setup-options form, not to process its results.
The docs are a tad dated (mostly the screenshots), but Creating a 3rd party build script explains the different parts of a build script, and what they're for, with a fair bit of examples.
The piece you're looking for is this:
// Attach a PHP resolver to the vehicle; it runs when the package is installed.
$vehicle->resolve('php', array(
    'source' => $sources['resolvers'] . 'setupoptions.resolver.php',
));
You'll need to have a $vehicle (perhaps from a category or another object you're adding to the build) and the file at the provided location. Inside the resolver file you can use $object->xpdo as an instance of the modX class to do your thing.
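A minimal resolver might look like this (a sketch; the table name, column layout, and option key are assumptions, adjust them to your package):

<?php
/* setupoptions.resolver.php -- runs during package install/upgrade. */
if ($object->xpdo) {
    $modx =& $object->xpdo;
    switch ($options[xPDOTransport::PACKAGE_ACTION]) {
        case xPDOTransport::ACTION_INSTALL:
        case xPDOTransport::ACTION_UPGRADE:
            /* Values entered in the setup-options form arrive in $options. */
            $myOption = isset($options['my_option']) ? $options['my_option'] : '';
            /* Create the custom table with plain SQL; no model required. */
            $modx->exec('CREATE TABLE IF NOT EXISTS `my_plugin_data` (
                `id` INT PRIMARY KEY AUTO_INCREMENT,
                `setting` VARCHAR(255) NOT NULL
            )');
            break;
    }
}
return true;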
I have created/recorded a script in VuGen; however, the URL of the site has changed recently. Is there any way to make the script work just by replacing the URL with a parameter?
I have tried replacing the URL with parameters. The new URL is
http://xsx.xxx.xsx.xxx/test99
The parameters I have tried are below:
NewUrl: http://xsx.xxx.xsx.xxx/
NewHost: test99
I have replaced all instances in the script, and when I run it I get the following error:
Error -27651: Attempted read from an unconnected socket (empty response, no HTTP headers received). URL="http://xsx.xxx.xsx.xxx/scripts/uiServer.dll"
What is the solution for this? Should I record again with the new URL?
Thanks.
Hope I've understood what you're asking for, so here goes. If it's only the URL that has changed, and not the content of the site (which you might require later on in your script), then this is fairly simple to do.
Since you have created the new parameters, ensure that they both read their data from the same .dat file, i.e. newurl.dat, which contains the following:
newurl,newhost
http://xsx.xxx.xsx.xxx/,test99
Assign each parameter to the correct column, and set newhost to "Same line as" newurl; this way it's easier to maintain, I believe.
Now that the parameters have been created and properly assigned, you'll need to change the URL in your script from:
http://xsx.xxx.xsx.xxx/oldtest to {newurl}{newhost}
This needs to be done for every instance where the old URL occurs.
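For example, a recorded step would change like this (a sketch; the step name and any other recorded arguments are placeholders):

web_url("home",
    // was: "URL=http://xsx.xxx.xsx.xxx/oldtest"
    "URL={newurl}{newhost}",
    LAST);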
Hope this helps with the problem you're having.
Are you certain that the build level has not also changed at the same time as the host? If it has, then your new instance may be out of sync with the request model of scripts built against an earlier build. Developers have a habit of including items behind the scenes that don't affect the site visually but do change the structure of the requests. The error you are receiving is common when you attempt to continue a conversation on a dead connection, resulting from a missed dynamic session component which may have been added in the latest build.
When in doubt, quickly record the new site and take a look at the differences in the requests, even to the point of using WinDiff (included with LoadRunner) for this purpose.
We have a complex scenario which requires a timer job to run after content deployment to a SP 2010 site collection. The timer job automatically deactivates/reactivates a branding feature which is responsible for setting the master page for the site collection, among other things.
We have had several feature upgrades along the way, and neglected to call .Upgrade() on the feature in that specific site collection. So all of the updated CSS, master page, page layouts etc. are out of date on that SC.
The strange part is that when I checked the version number of that feature in this SC, it shows as the latest version. The custom upgrade action clearly didn't run and update the files, because nobody called .Upgrade().
One of my colleagues suggested that the deactivate/reactivate process done by the timer job would update the version number, meaning that I can no longer call Upgrade()!
Is that true? Does a deactivate/reactivate cycle for a feature automatically update the feature version number?
Is there an easy way to fix this mess? Some way to decrement the version number programmatically and then call Upgrade()?
On 1: No. Feature deactivating/activating does not trigger an upgrade. See this article by Chris O'Brien: http://www.sharepointnutsandbolts.com/2010/06/feature-upgrade-part-1-fundamentals.html
Feature upgrade does NOT happen automatically (including when the Feature is deactivated/reactivated)! The only way to upgrade a Feature is to call SPFeature.Upgrade(), typically in conjunction with one of the QueryFeatures() methods. My tool which I'll go on to talk about is a custom application page which helps you with this part – note there is no STSADM command, PowerShell cmdlet or user interface to do this out-of-the-box.
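In code, that upgrade call might look like this (a sketch; the feature GUID is a placeholder and error handling is omitted):

using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

// Find every activated instance of the feature that still needs
// upgrading, then run the declared upgrade actions on each.
Guid featureId = new Guid("00000000-0000-0000-0000-000000000000");
SPFeatureQueryResultCollection features =
    SPWebService.ContentService.QueryFeatures(featureId, true);
foreach (SPFeature feature in features)
{
    feature.Upgrade(false); // false = don't force; honors declared version ranges
}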
Is your timer job cycling the feature activation with force? Then, yes, it is triggering the feature upgrade/update, as the documentation for SPFeature.Activate points out.
Why the feature version is incremented, I'm not sure. When you have a feature, install a new feature version, and activate/deactivate it, the feature version stays the same unless you run an upgrade; see also this related question stating the same: https://sharepoint.stackexchange.com/questions/41476/feature-upgrading-question
I'm guessing your timer job is using force? Otherwise I'm not quite sure what is happening.
On 2: I don't know if it is possible to decrease the version number, but the safest way would be to create a new version with a grand "clean-up" feature receiver which sets everything right, i.e. checks which steps of the feature upgrade have already happened (e.g. new list created, new content type added) and which haven't, and then executes only the steps that have not run yet. For the latter part you can fortunately reuse the existing code, so you would only need to write the "clean-up"/checking code.
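Such a clean-up receiver might look like this (a sketch; the upgrade action name and the list check are assumptions, substitute your own idempotence checks):

using System.Collections.Generic;
using Microsoft.SharePoint;

public class CleanUpReceiver : SPFeatureReceiver
{
    // Runs once per <CustomUpgradeAction> declared for the new version.
    public override void FeatureUpgrading(SPFeatureReceiverProperties properties,
        string upgradeActionName, IDictionary<string, string> parameters)
    {
        SPSite site = properties.Feature.Parent as SPSite;
        if (site == null) return;
        SPWeb web = site.RootWeb;

        // Only execute steps that have not already happened, e.g. create
        // the list only if a previous (partial) upgrade didn't create it.
        if (upgradeActionName == "AddNewList" &&
            web.Lists.TryGetList("My New List") == null)
        {
            web.Lists.Add("My New List", "", SPListTemplateType.GenericList);
        }
    }
}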
After some testing I found that simply deactivating and reactivating the feature will increment the version number and completely screw up your upgrade! I even watched the update come through in the content database: as soon as you deactivate/reactivate the updated feature, the new version number pops into the content DB. Of course the upgrade doesn't actually run; it just increments the version number.
This means that if you then call .Upgrade() it won't work because SharePoint thinks it's already been upgraded!!
To fix this I updated the row in the content database to set the feature version back to 0.0.0.0 for that particular web, and then I could run .Upgrade() just fine... but that's not exactly a supported solution. If anyone else has a better idea, drop a reply.
I have a reason to prefer registering my plugin on post-operation, but I need it to change a field to another value. Do I really have to register it on pre-operation, or can I shove my update into the post-op step even though the operation has already been carried out?
I'd prefer to avoid firing an extra update. The code logic might get a bit crowded and confusing that way, since there'll be a lot of stuff to do upon a "real" update.
Changes made to the target entity in post-op will not end up in the database unless you run an update manually.
You could consider breaking your plugin up into two: one to change the field in pre-op, and one to do whatever it is you're doing in post-op.
Plugins can share data: http://msdn.microsoft.com/en-us/library/gg328579.aspx
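For example, the pre-op step can stash a value that the post-op step reads back (a sketch; the field name and shared-variable key are assumptions):

using System;
using Microsoft.Xrm.Sdk;

public class PreOpPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];

        // Change the field before the write happens -- no extra update needed.
        target["new_status"] = "Processed";

        // Hand the value to the post-operation step.
        context.SharedVariables["new_status"] = "Processed";
    }
}

public class PostOpPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Read what the pre-op step stored and act on it.
        if (context.SharedVariables.Contains("new_status"))
        {
            var status = (string)context.SharedVariables["new_status"];
            // ... the heavy post-operation logic goes here ...
        }
    }
}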