How can I call the Eclipse refactoring history programmatically? - eclipse-jdt

If you go to Refactor -> History... in Eclipse, you will see a dialog with the full history of refactorings done in your workspace.
I would like to know if there is a way to create a plugin that simply counts how many, let's say, rename refactorings are in the history. How would you do that?

The IRefactoringHistoryService interface has methods to access the refactoring history.
Get the interface with:
IRefactoringHistoryService service = RefactoringCore.getHistoryService();
You can then get the history for a project using:
IProject project = ...; // the project you are interested in
RefactoringHistory history = service.getProjectHistory(project, progressMonitor);
There are other methods which let you get the workspace history and specify start and end time stamps.
The history object can return an array of objects representing the refactoring:
RefactoringDescriptorProxy [] proxies = history.getDescriptors();
You can get the actual refactoring descriptor from the proxy:
RefactoringDescriptor desc = proxy.requestDescriptor(progressMonitor);
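Putting it together, a sketch of the counting logic might look like this (hedged: the "org.eclipse.jdt.ui.rename" ID prefix matches the built-in JDT rename refactorings, but verify it against the descriptor IDs you actually see in your history):

import org.eclipse.core.resources.IProject;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.ltk.core.refactoring.RefactoringCore;
import org.eclipse.ltk.core.refactoring.RefactoringDescriptor;
import org.eclipse.ltk.core.refactoring.RefactoringDescriptorProxy;
import org.eclipse.ltk.core.refactoring.history.IRefactoringHistoryService;
import org.eclipse.ltk.core.refactoring.history.RefactoringHistory;

int countRenameRefactorings(IProject project, IProgressMonitor monitor) {
    IRefactoringHistoryService service = RefactoringCore.getHistoryService();
    RefactoringHistory history = service.getProjectHistory(project, monitor);
    int count = 0;
    for (RefactoringDescriptorProxy proxy : history.getDescriptors()) {
        // Resolving the full descriptor can be slow; the descriptor ID is
        // what distinguishes rename refactorings from the rest.
        RefactoringDescriptor descriptor = proxy.requestDescriptor(monitor);
        if (descriptor != null && descriptor.getID().startsWith("org.eclipse.jdt.ui.rename")) {
            count++;
        }
    }
    return count;
}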

Related

VdmComplex Changes Not Working with PATCH

Using the SAP B1 .edmx with 3.39.0 and trying to update DeliveryNotes with new DocumentPackages. However, the list of DocumentPackage that eventually gets passed by the execution of the update operation is empty.
Code:
var packagesUpdateDocument = new Document();
packagesUpdateDocument.setDocEntry(1);
var documentPackages = new ArrayList<DocumentPackage>();
var documentPackage = new DocumentPackage();
documentPackage.setNumber(10);
documentPackages.add(documentPackage);
packagesUpdateDocument.setDocumentPackages(documentPackages);
var updateDeliveryPackagesRequest = service.withServicePath("etc")
.updateDeliveryNotes(packagesUpdateDocument);
var updateDeliveryPackagesResponse = updateDeliveryPackagesRequest.tryExecute(serviceLayerDestination);
Looking at the logs of the service layer I can see this is the request which was eventually sent by the client:
PATCH /b1s/v2/DeliveryNotes(1)
{"DocEntry":1,"DocumentPackages":[{}],"#odata.type":"SAPB1.Document"}
From my understanding, PATCH requests will automatically disregard anything the generated client deems as 'unchanged.'
Printing the changed fields:
System.out.println(packagesUpdateDocument.getChangedFields());
Yields:
{
DocEntry=175017,
DocumentPackages=
[DocumentPackage
(
super=VdmObject(customFields={},
changedOriginalFields={}),
odataType=SAPB1.DocumentPackage,
number=10,
)
]
.....
}
I believe the DocumentPackage is not recording which fields have changed, although I am not certain.
Is there a step I am missing, or is this a feature gap?
As of SAP Cloud SDK 3.42.0 we support updating complex properties with PATCH out-of-the-box. See the release notes for more details.
Yes, this is currently a feature gap: PATCH will only consider properties of the root entity and navigation properties, while disregarding changes in complex properties.
Until that is supported, updating with PUT instead, via the .replacingEntity() option, should work.
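On older SDK versions the workaround might look like this (a sketch reusing the names from the question; note that a PUT replaces the whole entity, so any field you do not set is overwritten on the server):

// Same request as in the question, but executed as a PUT via replacingEntity(),
// so the complex DocumentPackages property is sent in full.
var updateRequest = service.withServicePath("etc")
        .updateDeliveryNotes(packagesUpdateDocument)
        .replacingEntity();
var updateResponse = updateRequest.tryExecute(serviceLayerDestination);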

Rails cucumber uninitialized constant User (NameError)

I'm starting with BDD (Cucumber + Capybara + Selenium chromedriver) and TDD (RSpec) with factory_bot, and I'm getting an error in my Cucumber step definitions.
uninitialized constant User (NameError)
With TDD everything is OK; factory_bot is working fine. The problem is with Cucumber.
factories.rb
FactoryBot.define do
  factory :user_role do
    name { "Admin" }
    query_name { "admin" }
  end

  factory :user do
    id { 1 }
    first_name { "Mary" }
    last_name { "Jane" }
    email { "mary_jane@gmail.com" }
    password { "123" }
    user_role_id { 1 }
    created_at { '1/04/2020' }
  end
end
support/env.rb
require 'capybara'
require 'capybara/cucumber'
require 'selenium-webdriver'
require 'factory_bot_rails'
Capybara.register_driver :selenium do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.configure do |config|
  config.default_driver = :selenium
end
Capybara.javascript_driver = :chrome
World(FactoryBot::Syntax::Methods)
And the problem is happening here
support/hooks.rb
Before '@admin_login' do
  @user = create(:user)
end
step_definitions/admin_login.rb
Given("a registered user with the email {string} with password {string} exists") do |email, password|
#user
end
I don't know why, but I can't access the user using Cucumber and factory_bot.
Could anybody help me, please?
I think I need to configure something in Cucumber.
What do you think?
First of all, Luke is correct about this being a setup issue. The error is telling you that the User model cannot be found, which probably means Rails is not yet loaded. I can't remember the exact details of how cucumber-rails works, but one of the things it does is make sure that each scenario becomes an extension of a Rails integration test. This ensures that all of the Rails auto-loading has taken place and that these things are available.
Secondly I'd suggest you start simpler and use a step to create your registered user rather than using a tag. Using tags for setup is a Cucumber anti-pattern.
Finally, and more controversially, I'd suggest that you don't use FactoryBot when cuking. FactoryBot uses a separate configuration to create model objects directly in the datastore. This bypasses any application logic around the creation of these objects, which means the objects created by FactoryBot will end up being different from the objects created by your application. In real life, object creation involves things like auditing, sending emails, conditional logic, and so on. To use FactoryBot you either have to duplicate that additional creation logic and behavior, or ignore it (both choices are undesirable).
You can create objects for cuking much more effectively (and more quickly) by using the following pattern.
Each create method in a Rails controller delegates its work to a service object, e.g.:
class UserController < ApplicationController
  def create
    @user = CreateUserService.new(params).call
  end
end
Then have your cukes use a helper module to create things for you. This module will provide tools for your steps to create users, using the above service:
module UserStepHelper
  def create_user(params = {})
    CreateUserService.new(default_params.merge(params)).call
  end

  def default_params
    {
      ...
    }
  end
end

World(UserStepHelper)

Given 'there is a registered user' do
  @registered_user = create_user
end
and then use that step in the background of your feature e.g.
Background:
  Given there is a registered user
  And I am an admin

Scenario: Admin can see registered users
  When I login and view users
  Then I should see a user
Notice the absence of tagging here; it's neither desirable nor necessary.
You can see an extension of this approach in a sample application I did for a CukeUp talk in 2013: https://github.com/diabolo/cuke_up/commits/master. If you follow it commit by commit, starting from the first commit at the bottom, the first few commits make quite a good guide to setting up a Rails project with Cucumber. If you follow it through to the end (22 commits), you'll get a basic but powerful framework for creating and using model objects when cuking. I realize the project is ancient and you will obviously have to use modern versions of everything, but the principles still apply; I use this approach in all my work and have been doing so for at least 10 years.
If you're using Rails, it's probably advisable to use cucumber-rails rather than plain cucumber. This is most likely an issue where your User model has not been auto-loaded.
Cucumber auto-loads all Ruby files underneath features/, with env.rb first, so it's almost certainly an issue with load order or load location.
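For reference, a minimal cucumber-rails setup might look like this (a sketch; running rails generate cucumber:install produces an env.rb along these lines, and requiring cucumber/rails first is what boots Rails so constants like User resolve):

# Gemfile
group :test do
  gem 'cucumber-rails', require: false
  gem 'factory_bot_rails'
end

# features/support/env.rb
require 'cucumber/rails'      # boots the Rails app before anything else loads
require 'capybara/cucumber'
require 'factory_bot_rails'

World(FactoryBot::Syntax::Methods)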

Automating/Tracking Knex Migrations and Lucid Models

The Situation
I recently started working on a new project using Node.js. I have a background in Python/Django and C#/.NET (not a huge fan of the latter). Node is awesome, but I must say I miss the ease of building models and automating migrations in Django. I am currently using the AdonisJS framework, which leverages Knex. Knex is a powerful library, but the migrations all need to be built manually. Additionally, the AdonisJS ORM that manages the Models is independent of Knex (the migration manager). You also do not define field attributes on the Models, which can have benefits for dynamically doing things in the front and back end. All things considered, there is a lot of room for human error, miscommunication, and a boatload more typing required. I know the hot thing these days is to keep it loose and fast, but for this specific project I am looking for a bit more structure than loosely defined models.
Current State
What I have landed on is building a new class called tableModel, and a field class to define the fields within the table model. I have already completed this, and I am successfully writing the migration files leveraging mustache. I plan on also automatically writing the Models, which I shouldn't have a problem with (fingers crossed).
The Problem
Here is where it gets a little tough and where I need help...I need to track what has been added or removed via migration so I can effectively write ups and downs as the tableModels change over time.
So let's say I add a "tableModel" which creates a migration to create table Foo with fields {id (bigint), user_id(int), name(string255)}
Later I want to add a field called description so I would simply add it to my "tableModel" and then run a build command which would build out the migration.
How do I check what has already been created though so I only do an up() for description?
Then I want to remove the name field, so I mark it out in my "tableModel" and run a build migration command. How do I check what has been migrated that now needs to be added into the down()?
Edit: I would add the field removal to the up() and the corresponding rollback to the down().
Bonus Round
Let's say I want to change user_id from an int to a bigint, because who makes a foreign key just an int? How do I check not just what needs to be added to the up and down, but also whether I need to change a property on a field?
Edit: I would just write the change in the up() and a corresponding rollback in the down().
The Big Question
Basically, how do I detect dirty "tableModel" classes?
Possible Solution?
I am thinking that maybe I should capture some type of registry or snapshot, run the comparison against it when building the migrations and/or models, and then recapture the snapshot. If this is the route, should I store it in a JSON file, write it to the DB itself, or is there another/better option?
If I create the tableModel instances as constants, could I actually write back to the JS file and capture the snapshot as an attribute? If this is an option, is Node's file system module the way to go, and what's the best way to do this? Node keeps surprising me, so I wouldn't be baffled if any of these are an option.
Help!
If anyone has gone down this path before or knows of any tools I could leverage, I would greatly appreciate it, and thank you in advance. Also, if I am headed in a completely wrong direction, please let me know; I welcome and appreciate all types of feedback.
Example
Something to note: when I define the "tableModel" for a given migration or model, it is an instance of the class; I am not creating an extended class, since this is not my ORM.
class tableModel {
  constructor(tableName, modelName = tableName, fields = []) {
    this.tableName = tableName
    this.modelName = modelName
    this.fields = fields
  }
  // Bunch of other stuff
}

const fooTableModel = new tableModel('fooTable', 'fooModel', [
  new tableField.stringField('title'),
  new tableField.bigIntField('related_user_id'),
  new tableField.textField('description', 'Testing Default', false, true)
])
which equates to:
tableModel {
  tableName: 'fooTable',
  modelName: 'fooModel',
  fields: [
    stringField {
      name: 'title',
      type: 'string',
      _unique: false,
      allow_null: null,
      fieldAttributes: {},
      default_value: null },
    bigIntField {
      name: 'related_user_id',
      type: 'bigInteger',
      _unique: false,
      allow_null: null,
      fieldAttributes: {},
      default_value: 0 },
    textField {
      name: 'description',
      type: 'text',
      _unique: false,
      allow_null: true,
      fieldAttributes: {},
      default_value: 'Testing Default' } ] }
You have the up and down notation mixed up. Those are for migrating to the latest state (which runs the up function) and doing rollbacks (which runs the down function). Up and down do not relate to dropping or adding table columns.
A migration's up is for any change, and the down is to reverse those changes. So if you wanted to drop a column from some table, you write the command in the up, then write the opposite in the down (you'd add it back in...), such that you can "rollback" and the change is effectively reversed. You have to be careful with such things, though, as you can put yourself in a situation where you actually lose data.
Want to add a column? Write it in the up, and drop the column in the down.
One of the major points behind the migrations mechanism is to track the state of changes of your database, as time goes forward. So generally, if you created a table in some migration, then a day or so later you realize you need to drop/add columns, you normally don't go back and edit the existing migration, especially if the migration has already been run. You'd just write a new migration to drop/add your column.
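For example, adding the description column from the question would be its own new migration file (a sketch using knex's schema builder; the file name is hypothetical):

// migrations/20200401120000_add_description_to_fooTable.js (hypothetical name)
exports.up = function (knex) {
  return knex.schema.alterTable('fooTable', function (table) {
    table.text('description');
  });
};

exports.down = function (knex) {
  // The reverse of the up; note that rolling back drops the column's data.
  return knex.schema.alterTable('fooTable', function (table) {
    table.dropColumn('description');
  });
};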
Since you're using knex, there are a couple "knex" tables that get created. By default the one you're looking for is knex_migrations, unless someone specifically modified the settings to change the name of it. This table holds all the migrations that have run against your DB, per batch. From the CLI, assuming you have knex.js installed globally, you can run knex migrate:latest, and that will push all the migrations that exist in your directory to the target database, if they have not yet been run. It does this by way of examining that knex_migrations table. If you roll a change and don't like it, and assuming you've properly done the down function, you can invoke knex migrate:rollback to reverse the change. If there are 3 migration files that have NOT yet been run, invoking knex migrate:latest will run all 3 of those migration files under a new batch #, which is 1 higher than the most recent batch number. Conversely, if you invoke a knex migrate:rollback, it will find the highest batch number (there could be more than 1 migration in a batch...), and invoke the down function on all those files, effectively rollback those changes.
All that said, knex is a "query builder" tool. It has a ton of helper functions to help build the SQL for you. Personally, I find this to be a major distraction: why spend hours upon hours figuring out all the helper functions when I can just crank out raw SQL and run that? Thus, that's what we've done in our system: we use knex.raw('...') and write our own DDL and DML. It works great and does exactly what we need; we don't need to figure out the magic of the query building.
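The same migration written with raw SQL instead of the builder might look like this (a sketch; the exact DDL depends on your database's dialect):

exports.up = function (knex) {
  return knex.raw('ALTER TABLE fooTable ADD COLUMN description TEXT');
};

exports.down = function (knex) {
  return knex.raw('ALTER TABLE fooTable DROP COLUMN description');
};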
The short answer is that knex will automatically know what has and has not been run for you (again, via that knex_migrations table it creates for you...).
Things can get weird, though, when git and different branches get involved. I recommend that if you're writing migrations on some branch and you need to go do other work, you always remember to first roll back any migrations you've done in that branch BEFORE switching branches. Otherwise you will be in weird DB states that don't coincide with the application code.
I would personally just deal with updating models independently of writing migrations. For example, if I'm adding a description column to some table, then I probably want to manually update the ORM to reflect the change of the new db schema. Generally, I've found trying to use a tool that automagically does that for you (rather, if I change the orm, stuff happens to write all the underlying sql...) usually winds me up in a heap of trouble and I just spend more time trying to un-fudge stuff. But, that's just my 2 cents :)
Here is where it gets a little tough and where I need help...I need to track what has been added or removed via migration so I can effectively write ups and downs as the tableModels change over time.
You could store changes in a DB or a text file, and those can act as snapshots. So when you want to roll back to a particular migration, you would find the changes (up/down) made for that migration and adjust accordingly.
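A minimal sketch of that snapshot/diff idea, assuming the tableModel shape from the question (saveSnapshot/loadSnapshot and the file path are hypothetical glue code):

const fs = require('fs');

// Persist the last-built model so the next build can diff against it.
function saveSnapshot(path, model) {
  fs.writeFileSync(path, JSON.stringify(model, null, 2));
}

function loadSnapshot(path) {
  return fs.existsSync(path) ? JSON.parse(fs.readFileSync(path, 'utf8')) : null;
}

// Compare field names between snapshot and current model; the 'added' list
// feeds the up() and the 'removed' list feeds the down().
function diffFields(previous, current) {
  const prevFields = previous ? previous.fields : [];
  const prevNames = new Set(prevFields.map(f => f.name));
  const currNames = new Set(current.fields.map(f => f.name));
  return {
    added: current.fields.filter(f => !prevNames.has(f.name)),
    removed: prevFields.filter(f => !currNames.has(f.name))
  };
}

Changed attributes (the bonus round) could be detected the same way, by deep-comparing fields whose names appear in both the snapshot and the current model.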
Later I want to add a field called description so I would simply add it to my "tableModel" and then run a build command which would build out the migration. How do I check what has already been created though so I only do an up() for description?
Here you would call the database itself directly and check which fields have already been created. If a field is already there and the attributes are the same, you can either ignore it or stop the transaction altogether.
Bonus Round Let's say I want to change user_id from an int to a bigint, because who makes a foreign key just an int? How do I check not just what needs to be added to the up and down, but also checks if I need to change a property on a field.
Again, call the DB itself on the table in question. In MySQL, for example, the call would be:
describe [table_name];
After reading to the end, I think you answered this yourself, but I think capturing these changes would work best in a NoSQL database (since you're using Node), or in Postgres with its json field type.

Incremental loading in Azure Mobile Services

Given the following code:
listView.ItemsSource =
App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server-side script to, e.g., use mssql to query another table instead? What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;
    // do whatever you want with those values, then call request.respond(...)
}
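For instance, if the script queries another table through mssql, the paging has to be applied by hand. A sketch (OtherTable and the ORDER BY column are placeholders, and the OFFSET/FETCH syntax assumes a SQL Server version that supports it):

function read(query, user, request) {
    var components = query.getComponents();
    var top = components.take || 50;   // page size requested by the client
    var skip = components.skip || 0;   // rows already loaded by the client
    mssql.query(
        'SELECT * FROM OtherTable ORDER BY id OFFSET ? ROWS FETCH NEXT ? ROWS ONLY',
        [skip, top],
        {
            success: function (results) {
                request.respond(statusCodes.OK, results);
            }
        });
}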
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method - as you can see in the extension class, it's only added to the typed version of the table.

What to do when get "The model used to open the store is incompatible with the one used to create the store"?

I had a Core Data entity description and had created data with it. Then I changed the entity description, added a new one, and deleted the old one using the editor for the .xcdatamodeld file.
Now any of my Core Data code causes the error "The model used to open the store is incompatible with the one used to create the store". The details are below. What should I do? I would be fine with removing everything in the data model and starting a new one.
Thanks for any suggestion!
reason=The model used to open the store is incompatible with the one used to create the store}, {
metadata = {
NSPersistenceFrameworkVersion = 320;
NSStoreModelVersionHashes = {
Promotion = <472663da d6da8cb6 ed22de03 eca7d7f4 9f692d88 a0f273b7 8db38989 0d34ba35>;
};
NSStoreModelVersionHashesVersion = 3;
NSStoreModelVersionIdentifiers = (
);
NSStoreType = SQLite;
NSStoreUUID = "9D6F4C7E-53E2-476A-9829-5024691CED03";
"_NSAutoVacuumLevel" = 2;
};
Or if you're in dev mode, you can also just delete the app and run it again.
Deleting the app is sometimes not an option! Suppose your app has already been published: you can't just add a new entity to the database and go ahead; you need to perform a migration!
For those who don't want to dig into the documentation and are searching for a quick fix:
Open your .xcdatamodeld file.
Click on Editor.
Select Add Model Version...
Add a new version of your model (a new group of data models is added).
Select the main file, open the File Inspector (right-hand panel), and under Versioned Core Data Model select the new version as the current data model.
That's not all: you should also perform a so-called "lightweight migration".
Go to your AppDelegate and find where the persistentStoreCoordinator is being created.
Find the line if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error])
Replace the nil options with @{NSMigratePersistentStoresAutomaticallyOption: @YES, NSInferMappingModelAutomaticallyOption: @YES} (this is actually provided in the commented-out code in that method).
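Put together, the relevant part of the persistentStoreCoordinator getter might look like this (a sketch; storeURL and error are the locals from Apple's boilerplate):

NSDictionary *options = @{ NSMigratePersistentStoresAutomaticallyOption : @YES,
                           NSInferMappingModelAutomaticallyOption : @YES };
if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                configuration:nil
                                                          URL:storeURL
                                                      options:options
                                                        error:&error]) {
    // With these options Core Data attempts the lightweight migration for you.
    NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
    abort();
}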
Here you go, have fun!
P.S. This only applies to lightweight migration. For your migration to qualify as a lightweight migration, your changes must be confined to this narrow band:
Add or remove a property (attribute or relationship).
Make a nonoptional property optional.
Make an optional attribute nonoptional, as long as you provide a default value.
Add or remove an entity.
Rename a property.
Rename an entity.
Answer borrowed from Stas
If this is a non-production app, just delete your local database (appname.sqlite) and restart the app.
I find I'm always doing this, so here is some additional detail:
Under Xcode 4 (4.3.2) you should find your datastore here:
/Users/~/Library/Application Support/iPhone Simulator/simulatorVersion/Applications/yourAppIdentifier/Documents
Or you can use Spotlight, if you first enable searching for System Files; I've found it fastest to save such a search to the menu bar.
Delete your app on the simulator and restart:
On the simulator, go to Hardware -> Home.
Click and hold the mouse button on your application icon.
Click the "X" on the app icon to delete it.
Go back to Xcode and rerun your application (Command+R).
P.S.: If the error appears again, review your code, because the problem is likely in the syntax or a discrepancy between what you want to list and the data model you have.
Reset your simulator and run again. Alternatively, running on a different simulator device works: if you have been running on an iPhone 6s simulator and you switch to a 6s Plus, it will work without resetting.
If running on a phone, make sure to delete the app and rerun it
I faced the same issue using Xcode 7 beta 1, and the following resolved it:
From the menu bar, click on Window > Projects, select the project on the left-hand side, and click the Delete button on the right side.
If it still doesn't work, reset the simulator and run the app.
