How to move issues from Google Code to Phabricator - data-migration

Google Code is shutting down so I want to move my 2500 issues to Phabricator (hosted at Phoreplay).
While there are migration procedures for GitHub and others, I did not manage to find similar tools for Phabricator.
How to move issues from Google Code to Phabricator?
Only issues, not wiki/code/downloads/etc.
Note: I use Phabricator instead of Github because it fits my requirements better.

Preliminary note if you wish to keep task IDs
The migration is much easier if you can temporarily edit the Maniphest application code, so if you aren't in control of your installation, it is difficult to offer a clean solution that keeps consistent IDs. So, first, you should be in control of your installation.
Such migration code has been written by the Blender project: here is their repository at the moment of their import.
The steps
Export the Google Code issues in CSV or JSON format
Run a Phabricator script to import them, or call the Conduit API
Export
Google provides some tools to perform the migration. These tools include an issues.py script to parse issues on Google Code.
With it, you can dump your issues in a workable format, for example JSON, so that each issue can store an array of comments.
Import through the API (best for a small number of tasks, without comments)
You could use the API and call maniphest.createtask through Conduit. But this is not really convenient, as it is not the easiest way to add comments, close the issue, etc.
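For example, a rough sketch of a one-off call from PHP with curl (the host and the API token are placeholders; you create a token under Settings > Conduit API Tokens):
<?php
// Minimal sketch: create a single task through the Conduit HTTP API.
// Host, token and field values are placeholders; adapt them to your export.
$params = [
    'api.token'   => 'api-xxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    'title'       => 'Imported: crash when opening large files',
    'description' => 'Migrated from Google Code issue #123.',
    'priority'    => 50, // "Normal"
];

$ch = curl_init('https://phabricator.example.com/api/maniphest.createtask');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = json_decode(curl_exec($ch), true);
curl_close($ch);

echo $response['result']['id']; // ID of the new task (not your original Google Code ID)
Closing the task or attaching each comment would then require further calls, which is exactly why this route gets tedious.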
Import through a script
This is probably the most interesting way to import the tasks, and the solution offering the most flexibility.
Here is the skeleton of such a script, which I drafted from the Blender code and some of my internal code:
#!/usr/bin/env php
<?php

$root = dirname(dirname(__FILE__));
require_once $root . '/scripts/__init_script__.php';

/**
 * Represents a task importable into Phabricator
 */
class LegacyTask {
    private $id;
    private $title;
    //... other task properties, depending on the Google Code fields

    public function importIntoPhabricator () {
        $projects = ......... // we need an array with one or more PHIDs, according to what you created
        $reporter = self::getReporterUser();

        $task = ManiphestTask::initializeNewTask($reporter);
        $task->setTitle($this->title);
        $task->attachProjectPHIDs($projects);
        $task->setDescription($this->description);
        $task->setPriority($this->priority);
        $task->setOverrideID($this->id); // This is the method you want to borrow from the Blender migration code
        $task->save();

        // Initial transaction
        $changes = [
            ManiphestTransaction::TYPE_STATUS => ManiphestTaskStatus::STATUS_OPEN,
            PhabricatorTransactions::TYPE_VIEW_POLICY => 'public'
        ];
        self::applyTransactionsForChanges($task, $changes);

        // Closes the task
        if ($this->closed) {
            $status = ... // e.g. ManiphestTaskStatus::STATUS_CLOSED_RESOLVED
            self::closeTask($task, $status);
        }

        // Project transaction
        self::associateTaskToProject($task, $projects);

        // Adds comments
        //...
    }

    static public function getTransactions ($changes) {
        $transactions = [];
        $template = new ManiphestTransaction();
        foreach ($changes as $type => $value) {
            $transaction = clone $template;
            $transaction->setTransactionType($type);
            if ($type == PhabricatorTransactions::TYPE_EDGE) {
                $transaction->setMetadataValue('edge:type', PhabricatorProjectObjectHasProjectEdgeType::EDGECONST);
            }
            $transaction->setNewValue($value);
            $transactions[] = $transaction;
        }
        return $transactions;
    }

    static public function applyTransactionsForChanges ($task, $changes) {
        $transactions = self::getTransactions($changes);
        self::applyTransactions($task, $transactions);
    }

    static public function applyTransactions ($task, $transactions) {
        $editor = id(new ManiphestTransactionEditor())
            ->setActor(self::getReporterUser())
            ->setContentSource(self::getContentSource())
            ->setContinueOnNoEffect(true)
            ->applyTransactions($task, $transactions);
    }

    static function associateTaskToProject($task, $projects) {
        $project_type = PhabricatorProjectObjectHasProjectEdgeType::EDGECONST;
        $transactions = [
            id(new ManiphestTransaction())
                ->setTransactionType(PhabricatorTransactions::TYPE_EDGE)
                ->setMetadataValue('edge:type', $project_type)
                ->setNewValue([
                    '=' => array_fuse($projects)
                ])
        ];
        self::applyTransactions($task, $transactions);
    }

    /**
     * Closes the task
     */
    static public function closeTask ($task, $status) {
        $changes = [
            ManiphestTransaction::TYPE_STATUS => $status
        ];
        self::applyTransactionsForChanges($task, $changes);
    }
}
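To feed the class, a small driver over the JSON dump from the export step is enough. A minimal sketch, assuming you add a matching constructor to LegacyTask and that the dump contains one object per issue (the field names are placeholders):
$issues = json_decode(file_get_contents('google-code-issues.json'), true);
foreach ($issues as $issue) {
    // Hypothetical constructor: adapt to whatever fields your export actually contains.
    $task = new LegacyTask($issue['id'], $issue['title'], $issue['description']);
    $task->importIntoPhabricator();
}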
Close statuses are documented here.
For user attribution, it works best to ask your core developers and top reporters (if any) to create an account and try to map their users; for everyone else, attribute issues and comments to a bot account created for the migration.
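For that mapping, a plain lookup table with a bot fallback is enough. A sketch with placeholder usernames and PHIDs:
// Hypothetical map from Google Code users to Phabricator user PHIDs.
$userMap = [
    'alice@example.com' => 'PHID-USER-aaaaaaaaaaaaaaaaaaaa',
    'bob@example.com'   => 'PHID-USER-bbbbbbbbbbbbbbbbbbbb',
];
$botPhid = 'PHID-USER-migrationbot0000000'; // dedicated bot account for everyone else

function resolveAuthorPhid($googleUser, array $userMap, $botPhid)
{
    return isset($userMap[$googleUser]) ? $userMap[$googleUser] : $botPhid;
}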

Related

Create own action to clone/duplicate TYPO3 8.7 extbase object with nested child elements

I am building my Extbase TYPO3 extension in TYPO3 8.7. It is a backend module. In the controller, I wrote my own action to clone an object.
In this example, I want to clone/duplicate the object 'Campaign' and save it with a modified title, e.g. with 'COPY' appended to the title.
But the new object should also get its own new child elements, which must be exact copies.
When the action is called, I only get a copy of the object itself, but no children. Is there an example or best practice for how to handle this task? I did not find one; I found some questions and answers on the same topic, but for older versions. I hope there is a more straightforward solution by now. Thank you for every hint that points me to the right idea, and maybe an up-to-date example. Here is what I have in my controller. How do I implement recursive copying of all child elements (some children have children, too)?
/**
 * action clone
 * @param \ABC\Copytest\Domain\Model\Campaign $campaign
 * @return void
 * @var \ABC\Copytest\Domain\Model\Campaign $newCampaign
 */
public function cloneAction(\ABC\Copytest\Domain\Model\Campaign $campaign) {
    $newCampaign = $this->objectManager->get("ABC\Copytest\Domain\Model\Campaign");
    $properties = $campaign->_getProperties();
    unset($properties['uid']);
    foreach ($properties as $key => $value) {
        $newCampaign->_setProperty($key, $value);
    }
    $newCampaign->_setProperty('title', $properties['title'] . ' COPY');
    $this->campaignRepository->add($newCampaign);
    $this->addFlashMessage('Clone was created', '', \TYPO3\CMS\Core\Messaging\AbstractMessage::OK);
    $this->redirect('list');
}
I am aware that this question has been answered a long time ago. But I want to provide my solution to create a deep copy for further reference. Tested on TYPO3 9.5.8.
private function deepcopy($object)
{
    $clone = $this->objectManager->get(get_class($object));
    $properties = \TYPO3\CMS\Extbase\Reflection\ObjectAccess::getGettableProperties($object);
    foreach ($properties as $propertyName => $propertyValue) {
        if ($propertyValue instanceof \TYPO3\CMS\Extbase\Persistence\ObjectStorage) {
            $v = $this->objectManager->get(\TYPO3\CMS\Extbase\Persistence\ObjectStorage::class);
            foreach ($propertyValue as $subObject) {
                $subClone = $this->deepcopy($subObject);
                $v->attach($subClone);
            }
        } else {
            $v = $propertyValue;
        }
        if ($v !== null) {
            \TYPO3\CMS\Extbase\Reflection\ObjectAccess::setProperty($clone, $propertyName, $v);
        }
    }
    return $clone;
}
There is one approach which tackles this usecase from a different POV, namely that request argument values without an identity are automatically put into fresh objects which can then be persisted. This basically clones the original objects. This is what you need to do:
Add a view which has fields for all properties of your object, hidden fields are fine too. This can for example be an edit view with a separate submit button to call your clone action.
Add an initializeCloneAction() and get the raw request arguments via $this->request->getArguments().
Now do unset($arguments[<argumentName>]['__identity']); and do the same for every relation of your object if you want copies instead of shared references.
Store the raw request arguments again via $this->request->setArguments($arguments).
Finally allow the creation of new objects in the property mapping configuration of your argument and possibly all relation properties.
This is how a full initializeCloneAction() could look like:
public function initializeCloneAction()
{
    $arguments = $this->request->getArguments();
    unset(
        $arguments['campaign']['__identity'],
        $arguments['campaign']['singleRelation']['__identity']
    );
    foreach (array_keys($arguments['campaign']['multiRelation']) as $i) {
        unset($arguments['campaign']['multiRelation'][$i]['__identity']);
    }
    $this->request->setArguments($arguments);
    // Allow object creation now that we have new objects
    $this->arguments->getArgument('campaign')->getPropertyMappingConfiguration()
        ->setTypeConverterOption(PersistentObjectConverter::class, PersistentObjectConverter::CONFIGURATION_CREATION_ALLOWED, true)
        ->allowCreationForSubProperty('singleRelation')
        ->getConfigurationFor('multiRelation')
        ->allowCreationForSubProperty('*');
}
Now if you submit your form using the clone action, your clone action will get a fully populated but new object which you can store in your repository as usual. Your cloneAction() will then be very simple:
public function cloneAction(Campaign $campaign)
{
    $this->campaignRepository->add($campaign);
    $this->addFlashMessage('Campaign was copied successfully!');
    $this->redirect('list');
}
If you have "LazyLoadingProxy" instance in your object you need add one more conditions.
if ($propertyValue instanceof \TYPO3\CMS\Extbase\Persistence\Generic\LazyLoadingProxy) {
$objectStorage = $propertyValue->_loadRealInstance();
}
This is my solution for "deepcopy" function:
private function deepcopy($object)
{
    $clone = $this->objectManager->get(get_class($object));
    $properties = \TYPO3\CMS\Extbase\Reflection\ObjectAccess::getGettableProperties($object);
    foreach ($properties as $propertyName => $propertyValue) {
        if ($propertyValue instanceof \TYPO3\CMS\Extbase\Persistence\ObjectStorage) {
            $objectStorage = $this->objectManager->get(\TYPO3\CMS\Extbase\Persistence\ObjectStorage::class);
            foreach ($propertyValue as $subObject) {
                $subClone = $this->deepcopy($subObject);
                $objectStorage->attach($subClone);
            }
        } elseif ($propertyValue instanceof \TYPO3\CMS\Extbase\Persistence\Generic\LazyLoadingProxy) {
            $objectStorage = $propertyValue->_loadRealInstance();
        } else {
            $objectStorage = $propertyValue;
        }
        if ($objectStorage !== null) {
            \TYPO3\CMS\Extbase\Reflection\ObjectAccess::setProperty($clone, $propertyName, $objectStorage);
        }
    }
    return $clone;
}
I think a good solution is to emulate the backend function.
See the code example (German text):
http://blog.marcdesign.ch/2015/05/27/typo3-extbase-objekte-kopieren/
The general idea is to extend TYPO3\CMS\Core\DataHandling\DataHandler and use the parent method copyRecord. In your subclass you assign your predefined backend user to $this->BE_USER. You can get the object of that predefined backend user by using the class TYPO3\CMS\Backend\FrontendBackendUserAuthentication and the known name of the user. Your user should have admin rights, and you should set $BE_USER->uc_default['copyLevels'] = '9999'; and $BE_USER->uc = $BE_USER->uc_default.
I have not checked whether the declaration $GLOBALS['PAGES_TYPES'][254]['allowedTables'] = '*'; is really needed.
The method copyRecord itself mainly needs the table name, the uid value, the pid value and a language object as parameters. The language object is available as $GLOBALS['LANG']; it can also be created by instantiating \TYPO3\CMS\Lang\LanguageService into $GLOBALS['LANG'] and \TYPO3\CMS\Core\Charset\CharsetConverter into $GLOBALS['LANG']->csConvObj.
Sorry about my poor English.
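For reference, the copy can also be triggered without subclassing, through the standard DataHandler command map; whether child records are copied along depends on your TCA configuration. A minimal sketch, with placeholder table name, uid and pid:
use TYPO3\CMS\Core\DataHandling\DataHandler;
use TYPO3\CMS\Core\Utility\GeneralUtility;

// Assumes a backend context where $GLOBALS['BE_USER'] is an admin user;
// the table name, $campaignUid and $targetPid are placeholders for your model.
$cmd = [
    'tx_copytest_domain_model_campaign' => [
        $campaignUid => ['copy' => $targetPid],
    ],
];

$dataHandler = GeneralUtility::makeInstance(DataHandler::class);
$dataHandler->copyTree = 99; // allow deep copies, mirroring the uc['copyLevels'] setting mentioned above
$dataHandler->start([], $cmd);
$dataHandler->process_cmdmap();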

Revit Api Load Command - Auto Reload

I'm working with the Revit API, and one of its problems is that it locks the .dll once the command has been run. You have to exit Revit before the command can be rebuilt, which is very time consuming.
After some research, I came across this post on GitHub, which streams the command .dll into memory, thus hiding it from Revit and letting you rebuild the VS project as much as you like.
The AutoReload class implements the Revit IExternalCommand interface, which is the link into the Revit program.
But the AutoReload class hides the actual source DLL from Revit, so Revit can't lock the DLL and lets one rebuild the source file.
The only problem is I can't figure out how to implement it and have Revit execute the command. I guess my general C# knowledge is still too limited.
I created an entry in the RevitAddin.addin manifest that points to the AutoReload command, but nothing happens.
I've tried to follow all the comments in the posted code, but nothing seems to work, and I had no luck finding a contact for the developer.
Found at: https://gist.github.com/6084730.git
using System;

namespace Mine
{
    // helper class
    public class PluginData
    {
        public DateTime _creation_time;
        public Autodesk.Revit.UI.IExternalCommand _instance;

        public PluginData(Autodesk.Revit.UI.IExternalCommand instance)
        {
            _instance = instance;
        }
    }

    //
    // Base class for auto-reloading external commands that reside in other dll's
    // (that Revit never knows about, and therefore cannot lock)
    //
    public class AutoReload : Autodesk.Revit.UI.IExternalCommand
    {
        // keep a static dictionary of loaded modules (so the data persists between calls to Execute)
        static System.Collections.Generic.Dictionary<string, PluginData> _dictionary;

        String _path; // to the dll
        String _class_full_name;

        public AutoReload(String path, String class_full_name)
        {
            if (_dictionary == null)
            {
                _dictionary = new System.Collections.Generic.Dictionary<string, PluginData>();
            }
            if (!_dictionary.ContainsKey(class_full_name))
            {
                PluginData data = new PluginData(null);
                _dictionary.Add(class_full_name, data);
            }
            _path = path;
            _class_full_name = class_full_name;
        }

        public Autodesk.Revit.UI.Result Execute(
            Autodesk.Revit.UI.ExternalCommandData commandData,
            ref string message,
            Autodesk.Revit.DB.ElementSet elements)
        {
            PluginData data = _dictionary[_class_full_name];
            DateTime creation_time = new System.IO.FileInfo(_path).LastWriteTime;
            if (creation_time.CompareTo(data._creation_time) > 0)
            {
                // dll file has been modified, or this is the first time we execute this command.
                data._creation_time = creation_time;
                byte[] assembly_bytes = System.IO.File.ReadAllBytes(_path);
                System.Reflection.Assembly assembly = System.Reflection.Assembly.Load(assembly_bytes);
                foreach (Type type in assembly.GetTypes())
                {
                    if (type.IsClass && type.FullName == _class_full_name)
                    {
                        data._instance = Activator.CreateInstance(type) as Autodesk.Revit.UI.IExternalCommand;
                        break;
                    }
                }
            }
            // now actually call the command
            return data._instance.Execute(commandData, ref message, elements);
        }
    }

    //
    // Derive a class from AutoReload for every auto-reloadable command. Hardcode the path
    // to the dll and the full name of the IExternalCommand class in the constructor of the base class.
    //
    [Autodesk.Revit.Attributes.Transaction(Autodesk.Revit.Attributes.TransactionMode.Manual)]
    [Autodesk.Revit.Attributes.Regeneration(Autodesk.Revit.Attributes.RegenerationOption.Manual)]
    public class AutoReloadExample : AutoReload
    {
        public AutoReloadExample()
            : base("C:\\revit2014plugins\\ExampleCommand.dll", "Mine.ExampleCommand")
        {
        }
    }
}
There is an easier approach: Add-in Manager.
Go to the Revit Developer Center and download the Revit SDK, unzip/install it, then check the \Revit 2016 SDK\Add-In Manager folder. With this tool you can load/reload DLLs without having to modify your code.
There is also some additional information in this blog post.
This is how you can use the above code:
Create a new VS class project; name it anything (e.g. AutoLoad)
Copy & paste the above code in between the namespace region
Reference RevitAPI.dll & RevitAPIUI.dll
Scroll down to the AutoReloadExample class and replace the path to point to your DLL
Replace "Mine.ExampleCommand" with your plugin's namespace.mainclass
Build the solution
Create an .addin manifest that points to this new loader (e.g. AutoLoad.dll); its "FullClassName" should be AutoLoad.AutoReloadExample (see the sample manifest below)
This method uses reflection to create an instance of your plugin and prevents Revit from locking your DLL file! You can add more of your commands just by adding new classes like AutoReloadExample and pointing to them with separate .addin files.
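For reference, such a loader manifest could look roughly like this (standard Revit add-in manifest elements; the GUID, path and vendor id are placeholders you must replace with your own):
<?xml version="1.0" encoding="utf-8"?>
<RevitAddIns>
  <AddIn Type="Command">
    <!-- path to the loader assembly built above, not to your plugin dll -->
    <Assembly>C:\revit2014plugins\AutoLoad.dll</Assembly>
    <FullClassName>AutoLoad.AutoReloadExample</FullClassName>
    <AddInId>11111111-2222-3333-4444-555555555555</AddInId>
    <Text>AutoReload Example</Text>
    <VendorId>MYVENDOR</VendorId>
  </AddIn>
</RevitAddIns>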
Cheers

Pimcore where does code go

All the examples show random Pimcore code; however, I have found no explanation of where the code goes, or a complete example. I do not use Pimcore for the CMS; I am only interested in the object management. The code I am trying to write is to export objects, e.g. into CSV or XML.
Thanks ~
You can create a plugin as suggested by Johan, but a quicker way is to just put the files into the /website/lib/Website folder. This folder is already added to the autoloader, so you don't need to do anything else.
For example, create an ObjectExporter.php in the /website/lib/Website folder with this content:
<?php
namespace Website;

class ObjectExporter
{
    public function exportObjects()
    {
        // Your code
    }
}
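For instance, if the goal is a CSV export, the exportObjects() body could look roughly like this; the YourObject_List class and the getters are placeholders (mirroring the CSV example further down), so adapt them to your own object class:
public function exportObjects()
{
    $list = new \YourObject_List(); // placeholder list class, as in the CSV answer below
    $list->load();

    $fh = fopen('export.csv', 'w');
    fputcsv($fh, array('key', 'field1', 'field2'));
    foreach ($list as $object) {
        fputcsv($fh, array($object->getKey(), $object->getField1(), $object->getField2()));
    }
    fclose($fh);
}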
Then you can either instantiate this class in your controller action or in a CLI script. Controller actions are within /website/controllers folder and they need to be called through http: http://localhost?controller=default&action=default
Example: /website/controllers/DefaultController.php
<?php

class DefaultController extends Website_Controller_Action {

    public function defaultAction () {
        $this->disableViewAutoRender();

        $objectExporter = new Website\ObjectExporter();
        $objectExporter->exportObjects();
    }
}
(You could also put your whole code directly into the action, but that would be a bit of an ugly solution; it of course depends.)
But the better and quicker way to approach such tasks is with CLI scripts.
I like to use the /website/var/cli folder (you need to create it manually, but the /website/var folder is excluded in .htaccess by default, which makes it practical for such use cases).
Example: /website/var/cli/export-objects.php
<?php
$workingDirectory = getcwd();
chdir(__DIR__);
include_once("../../../pimcore/cli/startup.php");
chdir($workingDirectory);
$objectExporter = new Website\ObjectExporter();
$objectExporter->exportObjects();
Then just run it by issuing this command in your command line:
php website/var/cli/export-objects.php
In case you wish to add special UI elements to the Pimcore backend, the way to go is to build an extension as suggested by Johan.
Igor
Here is a Pimcore example to export a list of objects into a CSV file:
private function csvAction() {
    $this->disableLayout();
    $this->disableViewAutoRender();

    $obj_list = new YourObject_List();
    $obj_list->load();

    /* @var $obj Object_YourObject */
    $out = array();
    foreach ($obj_list as $obj) {
        $entry = array();
        $entry["key"] = $obj->getKey();
        $entry["Field 1"] = $obj->getField1();
        $entry["Field 2"] = $obj->getField2();
        $entry["Field 3"] = $obj->getField3();
        $out[] = $entry;
    }
    $this->_helper->Csv($out, "produkt");
}
You could either create a new plugin using the admin function:
Extras -> Extensions -> Create new Plugin
Add the name Test
Activate the plugin in the list at Extras -> Extensions
You can then add the action above to plugins/Test/controllers/IndexController.php
It's also possible to add controller code in website/controllers; there is already a default controller there.
/Johan

Gradle plugin best practices for tasks that depend on extension objects

I would like feedback on the best practices for defining plugin tasks that depend on external state (i.e. defined in the build.gradle that referenced the plugin). I'm using extension objects and closures to defer accessing those settings until they're needed and available. I'm also interested in sharing state between tasks, e.g. configuring the outputs of one task to be the inputs of another.
The code uses "project.afterEvaluate" to define the tasks when the required settings have been configured through the extension object. This seems more complex than should be needed. If I move the code out of the "afterEvaluate", it gets compileFlag == null which isn't the external setting. If the code is changed again to use the << or doLast syntax, then it will get the external flag... but then it fails to work with type:Exec and other similarly helpful types.
I feel that I'm fighting Gradle in some ways, which means I don't understand better how to work well with it. The following is a simplified pseudo-code of what I'm using. This works but I'm looking to see if this can be simplified, or indeed what the best practices are. Also, the exception shouldn't be thrown unless the tasks are being executed.
apply plugin: MyPlugin

class MyPluginExtension {
    String compileFlag = null
}

class MyPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.extensions.create("myPluginConfig", MyPluginExtension)
        project.afterEvaluate {
            // Closure delays getting and checking flag until strictly needed
            def compileFlag = {
                if (project.myPluginConfig.compileFlag == null) {
                    throw new InvalidUserDataException(
                        "Must set compileFlag: myPluginConfig { compileFlag = '-flag' }")
                }
                return project.myPluginConfig.compileFlag
            }

            // Inputs for translateTask
            def javaInputs = {
                project.files(project.fileTree(
                    dir: project.projectDir, includes: ['**/*.java']))
            }

            // This is the output of the first task and input to the second
            def translatedOutputs = {
                project.files(javaInputs().collect { file ->
                    return file.path.replace('src/', 'build/dir/')
                })
            }

            // Translates all java files into 'translatedOutputs'
            project.tasks.create(name: 'translateTask', type: Exec) {
                inputs.files javaInputs()
                outputs.files translatedOutputs()
                executable '/bin/echo'
                inputs.files.each { file ->
                    args file.path
                }
            }

            // Compiles 'translatedOutputs' to binary
            project.tasks.create(name: 'compileTask', type: Exec, dependsOn: 'translateTask') {
                inputs.files translatedOutputs()
                outputs.file project.file(project.buildDir.path + '/compiledBinary')
                executable '/bin/echo'
                args compileFlag()
                translatedOutputs().each { file ->
                    args file.path
                }
            }
        }
    }
}
I'd look at this problem another way. It seems like what you want to put in your extension is really owned by each of your tasks. If you had something that was a "global" plugin configuration option, would it be treated as an input necessarily?
Another way of doing this would have been to use your own SourceSets and wire those into your custom tasks. That's not quite easy enough yet, IMO. We're still pulling together the JVM and native representations of sources.
I'd recommend extracting your Exec tasks as custom tasks with a @TaskAction that does the heavy lifting (even if it just calls project.exec {}). You can then annotate your inputs with @Input, @InputFiles, etc. and your outputs with @OutputFiles, @OutputDirectory, etc. Those annotations will help auto-wire your dependencies and inputs/outputs (I think that's where some of the fighting is coming from).
Another thing that you're missing is that if the compileFlag affects the final output, you'd want to detect changes to it and force a rebuild (but not a re-translate).
I simplified the body of the plugin class by using the Groovy .with method.
I'm not completely happy with this (I think the translatedFiles could be done differently), but I hope it shows you some of the best practices. I made this a working example (as long as you have a src/something.java) by implementing the translate as a copy/rename and the compile as something that just creates an 'executable' file (its contents are just the list of the inputs). I've also left your extension class in place to demonstrate the "global" plug-in config. Also take a look at what happens when compileFlag is not set (I wish the error was a little better).
The translateTask isn't going to be incremental (although, I think you could probably figure out a way to do that). So you'd probably need to delete the output directory each time. I wouldn't mix other output into that directory if you want to keep that simple.
HTH
apply plugin: 'base'
apply plugin: MyPlugin

class MyTranslateTask extends DefaultTask {
    @InputFiles FileCollection srcFiles
    @OutputDirectory File translatedDir

    @TaskAction
    public void translate() {
        // println "toolhome is ${project.myPluginConfig.toolHome}"
        // translate java files by renaming them
        project.copy {
            includeEmptyDirs = false
            from(srcFiles)
            into(translatedDir)
            rename '(.+).java', '$1.m'
        }
    }
}

class MyCompileTask extends DefaultTask {
    @Input String compileFlag
    @InputFiles FileCollection translatedFiles
    @OutputDirectory File outputDir

    @TaskAction
    public void compile() {
        // write inputs to the executable file
        project.file("$outputDir/executable") << "${project.myPluginConfig.toolHome} $compileFlag ${translatedFiles.collect { it.path }}"
    }
}

class MyPluginExtension {
    File toolHome = new File("/some/sane/default")
}

class MyPlugin implements Plugin<Project> {
    void apply(Project project) {
        project.with {
            extensions.create("myPluginConfig", MyPluginExtension)
            tasks.create(name: 'translateTask', type: MyTranslateTask) {
                description = "Translates all java files into translatedDir"
                srcFiles = fileTree(dir: projectDir, includes: [ '**/*.java' ])
                translatedDir = file("${buildDir}/dir")
            }
            tasks.create(name: 'compileTask', type: MyCompileTask) {
                description = "Compiles translated files into outputDir"
                translatedFiles = fileTree(tasks.translateTask.outputs.files.singleFile) {
                    includes [ '**/*.m' ]
                    builtBy tasks.translateTask
                }
                outputDir = file("${buildDir}/compiledBinary")
            }
        }
    }
}

myPluginConfig {
    toolHome = file("/some/custom/path")
}

compileTask {
    compileFlag = '-flag'
}

subsonic unit testing bug?

I'm currently using Subsonic 3.03 Active Record repository.
I have set up a Test connection string to utilise the dummy internal storage.
[TestInitialize]
public void TestInitialize()
{
    List<ServiceJob> jobs = new List<ServiceJob>()
    {
        new ServiceJob() { ServiceJobID = 1 },
        new ServiceJob() { ServiceJobID = 2 }
    };
    ServiceJob.Setup(jobs);
}

[TestMethod]
public void TestMethod()
{
    ServiceJob job = ServiceJob.SingleOrDefault(s => s.ServiceJobID == 2);
    Assert.AreEqual(2, job.ServiceJobID);
}
I'm expecting this unit test to pass, but it pulls out the first service job and fails.
I've also experienced problems using other sugar methods such as .Find().
It works fine when using the IQueryable interface, such as ServiceJob.All.Where(s => s.ServiceJobID == 2), but I don't fancy stripping out the sugar for testing purposes!
Great product by the way, really impressed so far.
As you say, this looks like it's definitely a bug. You should submit it as an issue on GitHub:
http://github.com/subsonic/SubSonic-3.0/issues
