Validate Interceptor not working in Multi Threaded Impex - sap-commerce-cloud

SAP Commerce 1811
Impex:
INSERT_UPDATE Calendar ; code[unique=true] ; name[lang=en] ; year ; active[default=false]
; 10001 ; 2021 Public Holiday ; 2021 ; true
; 10002 ; 2021 Holiday ; 2021 ; true
I have created a validate interceptor which makes sure that only one Calendar can be active at a time, i.e. we can't have more than one active Calendar for the same year.
final CalendarModel cal = calendarService.getActiveCalendar(calendar.getYear());
if (cal != null && !cal.equals(calendar))
{
    throw new InterceptorException(
            String.format("Only one Calendar can be active at a time for year %s", calendar.getYear()));
}
In this Impex I am inserting two Calendars with active=true, and I expect a validation exception for one of the entries.
Since in this use case one entry depends on the other, it won't work with multi-threading (because the processing order is not defined).
If Max. threads is set to more than 1 and I run the impex, the validation does not work. The validation only works when I import the impex with a single thread.
Is there any way to solve this issue?

You cannot solve this problem with an Interceptor. Interceptors are simply not thread-safe. They are not built to validate whether an object already exists or not; they are meant to validate the object that will be saved, independently of other objects.
This is a problem that needs to be fixed in the database, not in your code. You need a constraint on the table of your CalendarModel that enforces your requirement of one active item per year. Note that a plain CHECK constraint can only inspect the current row on most databases, so a cross-row rule like this is usually enforced with a partial (filtered) unique index or a trigger.
How to write that constraint depends on your database. To add it, you can either create the constraint directly in the database, though a system update might remove it again (I did not validate this, but it happens for indexes). Alternatively, you can run an ALTER TABLE or CREATE INDEX statement during initialization to create the constraint.
With this constraint, a database error will be thrown for the illegal item, and it will be visible in the result when running the impex.
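For illustration, here is a rough sketch of such a partial (filtered) unique index in SQL Server syntax; the table and column names (calendars, p_year, p_active) are assumptions and must be matched to the actual deployment table of your CalendarModel:
-- The unique index only covers rows where active = 1, so inserting or
-- updating a second active Calendar for the same year fails with a
-- constraint violation instead of silently succeeding.
CREATE UNIQUE INDEX one_active_calendar_per_year
    ON calendars (p_year)
    WHERE p_active = 1;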

Related

Validate @VersionColumn value before saving an entity with TypeORM

I'm currently working on saving data in a Postgres DB using TypeORM with the NestJS integration. I'm saving data which keeps track of a version property using TypeORM's @VersionColumn feature, which increments a number each time save() is called on a repository.
For my feature it is important to check this version number before updating the records.
Important
I know I could technically achieve this by retrieving the record before updating it and checking the versions, but this leaves a small window for errors. If a second user updates the same record in the millisecond between the get and the save, or if it takes longer for some reason, it would bump the version and make the data in the first call invalid. TypeORM doesn't check the version value, so even if a call has a lower value than what is in the database, it still saves the data even though it should be seen as out of date.
1: User A checks latest version => TypeORM gives back the latest version: 1
2: User B updates record => TypeORM ups the version: 2
3: User A saves their data with version 1 <-- This needs to validate the versions first.
4: TypeORM overwrites User B's record with User A's data
What I'm looking for is a way to make TypeORM decline step 3 as the latest version in the database is 2 and User A tries to save with version 1.
I've tried using the query builder and update statements to make this work, but the built-in @VersionColumn only ups the version on save() calls from a repository or entity manager.
Besides this, I also got a tip to look into database triggers, but as far as I could find, this feature is not yet supported by TypeORM.
Here is an example of the setup:
async update(entity: Foo): Promise<boolean> {
  const value = await this._configurationRepository.save(entity);
  if (value === entity) {
    return true;
  }
  return false;
}
In my opinion, something like this is much better served through triggers directly in the database, as it addresses race conditions and also ensures that modifications made outside the ORM update the version number. Here is a SQL Fiddle demonstrating triggers in action. You'll just need to incorporate it into your schema migrations.
Here is the relevant DDL from the SQL Fiddle example:
CREATE TABLE entity_1
(
    id serial PRIMARY KEY,
    some_value text,
    version int NOT NULL DEFAULT 1
);

CREATE OR REPLACE FUNCTION increment_version() RETURNS TRIGGER AS
$BODY$
BEGIN
    NEW.version = NEW.version + 1;
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

CREATE TRIGGER increment_entity_1_version
    BEFORE UPDATE
    ON entity_1
    FOR EACH ROW
EXECUTE PROCEDURE increment_version();
The same trigger function can be used for any table that has a version column in case this is a pattern you want to use across multiple tables.
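For instance, reusing it for a second table (a hypothetical entity_2 with the same version column) is just one more CREATE TRIGGER statement:
CREATE TRIGGER increment_entity_2_version
    BEFORE UPDATE
    ON entity_2
    FOR EACH ROW
EXECUTE PROCEDURE increment_version();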
I think you are looking for concurrency control. If that is the case, there is a solution about halfway down this issue: TypeORM concurrency control issue
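To illustrate the concurrency-control idea: a common pattern is to fold the version check into the UPDATE itself, so the database rejects stale writes atomically. Below is a rough sketch using TypeORM's query builder; the Foo entity and its someValue column are assumptions based on the question's example:
import { DataSource } from "typeorm";
import { Foo } from "./foo.entity"; // hypothetical entity with id, someValue and a numeric version column

async function updateWithVersionCheck(dataSource: DataSource, entity: Foo): Promise<boolean> {
  const result = await dataSource
    .createQueryBuilder()
    .update(Foo)
    .set({
      someValue: entity.someValue,   // hypothetical data column
      version: () => "version + 1",  // bump the version in SQL
    })
    .where("id = :id AND version = :version", {
      id: entity.id,
      version: entity.version,       // only match if nobody else bumped it
    })
    .execute();
  // affected === 0 means another writer updated the row first,
  // so step 3 from the question fails, as desired
  return result.affected === 1;
}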

Multiple member mdx query returns error (permission to access the specified member)

I want to mention that I'm new to SSAS and MDX.
In the past several days I've been wrestling with an Excel-generated query that errors out.
The problem is that a query generated by Excel, when trying to read data from an online cube data source, errors out and prevents further reads from that cube. The query is executed against an Azure cube; I managed to profile it and captured the following query:
with set __XLUniqueNames as {[Stores].[Chain].[Chain].&[SuperBrugsen], [Stores].[Chain].[Chain].&[Salling], [Stores].[Chain].[Chain].&[SuperBrugsen] }
set __XLDrilledUp as
Generate(__XLUniqueNames,
{ IIF([Stores].[Chain].currentmember.LEVEL_NUMBER <= 2147483647,
[Stores].[Chain].currentmember,
Ancestor([Stores].[Chain].currentmember,
[Stores].[Chain].currentmember.LEVEL_NUMBER - 2147483647)) } )
member [Measures].__XLPath as
Generate(
Ascendants([Stores].[Chain].currentmember),
[Stores].[Chain].currentmember.unique_name,
"__XLPSEP")
select { [Measures].__XLPath } on 0,
__XLDrilledUp on 1
from [SomeCube]
cell properties value
Each time the query contains more than one member (an existing member from that dimension), it errors out with this message:
"Either you do not have permission to access the specified member or the specified member does not exist.".
What I have tried:
First, I tried to identify a pattern in the member combinations that error out, with no luck. It seems that for some members I get the error and for others I don't. For a single member, duplicate members, or a combination of members that don't exist in the cube, it doesn't error.
Second, I did try the query on a different cube (on-premise SSAS) and I didn't get the error.
Third, by modifying the connection string I tried to make Excel ignore the missing members, using the "MDXMissingMemberMode" flag set to Ignore, in the hope it would work. It didn't.
Fourth, I tried to dissect the query to see which clause was causing the error. With my limited knowledge of MDX, I suspect that "currentmember" with its "LEVEL_NUMBER" property is at fault. My guess is that it fails to get the current member for the next member in the set.
Fifth, the last and longest: by accident I discovered that in SSMS you can execute a query in an MDX session (right-click on the cube -> New query) or open the cube in browse mode (right-click on the cube -> Browse), which results in a UI similar to the MDX query window.
Now here comes the surprise: in this browse "mode" my query executes successfully every time. Intrigued, I started to profile the request to see what was different. The difference was some additional XML structure, like a list of properties. Seeing this, I figured I could manipulate my connection string from Excel to send some of these properties to make it work, but in the end it didn't work.
Additional properties that worked:
<PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis">
  <Catalog>SomeCatalog</Catalog>
  <ShowHiddenCubes>true</ShowHiddenCubes>
  <SspropInitAppName>Microsoft SQL Server Management Studio</SspropInitAppName>
  <Timeout>3600</Timeout>
  <LocaleIdentifier>1033</LocaleIdentifier>
  <ClientProcessID>24400</ClientProcessID>
  <DataSourceInfo/>
  <Format>Tabular</Format>
  <Content>Schema</Content>
  <DbpropMsmdFlattened2>true</DbpropMsmdFlattened2>
  <ReturnCellProperties>true</ReturnCellProperties>
  <DbpropMsmdActivityID>2309dfa2-3607-41b2-9446-8ece2f5ababa</DbpropMsmdActivityID>
  <DbpropMsmdCurrentActivityID>2309dfa2-3607-41b2-9446-8ece2f5ababa</DbpropMsmdCurrentActivityID>
  <DbpropMsmdRequestID>d3dbd079-5ca7-496c-ab55-afea71889238</DbpropMsmdRequestID>
</PropertyList>
Additional properties that didn't work:
<PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis">
  <Catalog>SomeCatalog</Catalog>
  <SspropInitAppName>Microsoft SQL Server Management Studio - Query</SspropInitAppName>
  <LocaleIdentifier>1033</LocaleIdentifier>
  <ClientProcessID>24400</ClientProcessID>
  <DataSourceInfo/>
  <Format>Native</Format>
  <AxisFormat>TupleFormat</AxisFormat>
  <Content>SchemaData</Content>
  <Timeout>0</Timeout>
  <DbpropMsmdActivityID>e5e75ad6-8fca-4f25-abba-047f86198602</DbpropMsmdActivityID>
  <DbpropMsmdCurrentActivityID>e5e75ad6-8fca-4f25-abba-047f86198602</DbpropMsmdCurrentActivityID>
  <DbpropMsmdRequestID>8901787f-15a7-48a0-86eb-18ff0b92bdc4</DbpropMsmdRequestID>
</PropertyList>
Excel additional properties:
<PropertyList xmlns="urn:schemas-microsoft-com:xml-analysis" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <Catalog>SomeCatalog</Catalog>
  <Timeout>0</Timeout>
  <Format>Native</Format>
  <DbpropMsmdFlattened2>false</DbpropMsmdFlattened2>
  <SafetyOptions>2</SafetyOptions>
  <Dialect>MDX</Dialect>
  <MdxMissingMemberMode>Error</MdxMissingMemberMode>
  <DbpropMsmdOptimizeResponse>9</DbpropMsmdOptimizeResponse>
  <DbpropMsmdActivityID>9D69640F-553A-4970-BD4E-7234F1CD928C</DbpropMsmdActivityID>
  <DbpropMsmdRequestID>B5E10FF0-EF2F-409E-83BF-CD2DBA20C2BE</DbpropMsmdRequestID>
  <LocaleIdentifier>1030</LocaleIdentifier>
  <DbpropMsmdMDXCompatibility>1</DbpropMsmdMDXCompatibility>
</PropertyList>
Result of a working single-member MDX query:
SuperBrugsen [Stores].[Chain].[Chain].&[SuperBrugsen]__XLPSEP[Stores].[Chain].[All]
This is all the info that I could gather for my problem. My next step is to reach out to Microsoft for help, but I don't want to do that just yet because of the costs.
Can someone please help me out? Any ideas or suggestions are most welcome, because I have run out of ideas.
It seems that the problem solved itself. Most likely there was an update that fixed this issue. See the Azure updates page: https://azure.microsoft.com/en-us/updates/?product=analysis-services&status=nowavailable

Automating/Tracking Knex Migrations and Lucid Models

The Situation
I recently started working on a new project using Node.js. I have a background of using Python/Django and C#/.NET (not a huge fan of the latter). Node is awesome, but I must say I miss the ease of building models and automating migrations in Django. I am currently using the AdonisJS framework, which leverages Knex. Knex is a powerful library, but the migrations all need to be built manually. Additionally, the AdonisJS ORM that manages the Models is independent of Knex (the migration manager). You also do not define field attributes on the Models, which can have benefits for doing things dynamically in the front and back end. All things considered, there is a lot of room for human error, miscommunication, and a boatload more typing required. I know the hot thing these days is to keep it loose and fast, but for this specific project I am looking for a bit more structure than loosely defined models.
Current State
What I have landed on is building a new class called tableModel and a field class to define the fields within the table model. I have already completed this and I am successfully writing the migration files leveraging mustache. I plan on also automatically writing the Models, which I shouldn't have a problem with (fingers crossed).
The Problem
Here is where it gets a little tough and where I need help...I need to track what has been added or removed via migration so I can effectively write ups and downs as the tableModels change over time.
So let's say I add a "tableModel" which creates a migration to create table Foo with fields {id (bigint), user_id(int), name(string255)}
Later I want to add a field called description so I would simply add it to my "tableModel" and then run a build command which would build out the migration.
How do I check what has already been created though so I only do an up() for description?
Then I want to remove the name field, so I mark it out in my "tableModel" and run a build migration command. How do I check what has been migrated that now needs to be added to the down()?
Edit: I would add a remove-field command to the up and the corresponding rollback to the down.
Bonus Round
Let's say I want to change user_id from an int to a bigint, because who makes a foreign key just an int? How do I check not just what needs to be added to the up and down, but also whether I need to change a property on a field?
Edit: I would just write the change in the up and a corresponding rollback in the down.
The Big Question
Basically, how do I detect dirty "tableModel" classes?
Possible Solution?
I am thinking that maybe I should capture some type of registry or snapshot and then run the comparison when building the migrations and/or models, then recapture the snapshot. If this is the route, should I store it in a JSON file, write it to the DB itself, or is there another/better option?
If I create the tableModel instances as constants, could I actually write back to the JS file and capture the snapshot as an attribute? If this is an option, is Node's file system the way to go, and what's the best way to do this? Node keeps surprising me, so I wouldn't be baffled if any of these are an option.
Help!
If anyone has gone down this path before or knows of any tools I could leverage, I would greatly appreciate it and thank you in advance. Also, if I am headed in a completely wrong direction, then please let me know; I both handle and appreciate all types of feedback.
Example
Something to note: when I define the "tableModel" for a given migration or model, it is an instance of the class; I am not creating an extended class, since this is not my ORM.
class tableModel {
  constructor(tableName, modelName = tableName, fields = []) {
    this.tableName = tableName
    this.modelName = modelName
    this.fields = fields
  }
  // Bunch of other stuff
}

const fooTableModel = new tableModel('fooTable', 'fooModel', [
  new tableField.stringField('title'),
  new tableField.bigIntField('related_user_id'),
  new tableField.textField('description', 'Testing Default', false, true)
])
which equates to:
tableModel {
  tableName: 'fooTable',
  modelName: 'fooModel',
  fields:
   [ stringField {
       name: 'title',
       type: 'string',
       _unique: false,
       allow_null: null,
       fieldAttributes: {},
       default_value: null },
     bigIntField {
       name: 'related_user_id',
       type: 'bigInteger',
       _unique: false,
       allow_null: null,
       fieldAttributes: {},
       default_value: 0 },
     textField {
       name: 'description',
       type: 'text',
       _unique: false,
       allow_null: true,
       fieldAttributes: {},
       default_value: 'Testing Default' } ]
}
You have the up and down notation mixed up. Those are for migrating to the latest state (runs the up function) and doing rollbacks (runs the down function). Up and down do not specifically relate to dropping or adding table columns.
A migration's up is for any change, and the down is to reverse those changes. So if you wanted to drop a column from some table, you write the command in the up, then write the opposite in the down (you'd add it back in...), such that you can "rollback" and the change is effectively reversed. You have to be careful with such things though, as you can put yourself in a situation where you actually lose data.
Want to add a column? Write it in the up, and drop the column in the down.
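For example, a minimal knex migration file for the description column discussed in the question might look like this (a sketch; the filename timestamp is illustrative, the table name is taken from the question):
// migrations/20190101120000_add_description_to_footable.js
exports.up = function (knex) {
  return knex.schema.alterTable('fooTable', (table) => {
    table.text('description');
  });
};

exports.down = function (knex) {
  // Reverse of up: rolling back drops the column again (any data in it is lost)
  return knex.schema.alterTable('fooTable', (table) => {
    table.dropColumn('description');
  });
};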
One of the major points behind the migrations mechanism is to track the state of changes of your database, as time goes forward. So generally, if you created a table in some migration, then a day or so later you realize you need to drop/add columns, you normally don't go back and edit the existing migration, especially if the migration has already been run. You'd just write a new migration to drop/add your column.
Since you're using knex, there are a couple "knex" tables that get created. By default, the one you're looking for is knex_migrations, unless someone specifically modified the settings to change its name. This table holds all the migrations that have run against your DB, per batch. From the CLI, assuming you have knex.js installed globally, you can run knex migrate:latest, and that will apply all the migrations that exist in your directory to the target database, if they have not yet been run. It does this by examining that knex_migrations table. If you roll out a change and don't like it, and assuming you've properly written the down function, you can invoke knex migrate:rollback to reverse the change. If there are 3 migration files that have NOT yet been run, invoking knex migrate:latest will run all 3 of those migration files under a new batch #, which is 1 higher than the most recent batch number. Conversely, if you invoke knex migrate:rollback, it will find the highest batch number (there can be more than 1 migration in a batch...) and invoke the down function on all those files, effectively rolling back those changes.
All that said, knex is a "query builder" tool. It's got a ton of helper functions to help build the SQL for you. Personally, I find this to be a major distraction. Why spend hours on end figuring out all the helper functions when I can just crank out raw SQL and run that? Thus, that's what we've done in our system: we use knex.raw('') and write our own DDL and DML. It works great and does exactly what we need it to. We don't need to figure out the magic of the query building.
The short answer is that knex will automatically know what has and has not been run for you (again, via that knex_migrations table it creates for you...).
Things can get weird, though, when git and different branches get involved. I recommend that if you're writing migrations on some branch and you need to go do other work, always remember to first roll back any migrations you've done in that branch BEFORE switching branches. Otherwise you will be in weird DB states that don't coincide with the application code.
I would personally just deal with updating models independently of writing migrations. For example, if I'm adding a description column to some table, then I probably want to manually update the ORM to reflect the change of the new db schema. Generally, I've found trying to use a tool that automagically does that for you (rather, if I change the orm, stuff happens to write all the underlying sql...) usually winds me up in a heap of trouble and I just spend more time trying to un-fudge stuff. But, that's just my 2 cents :)
Here is where it gets a little tough and where I need help...I need to track what has been added or removed via migration so I can effectively write ups and downs as the tableModels change over time.
You could store changes in a DB/txt file and those can act as snapshots. So when you want to rollback to a particular migration, you would find the changes (up/down) made for that mutation and adjust accordingly.
Later I want to add a field called description so I would simply add it to my "tableModel" and then run a build command which would build out the migration. How do I check what has already been created though so I only do an up() for description?
Here you can call the database directly and check which fields have already been created. If a field is already there and the attributes are the same, you can either ignore it or stop the transaction altogether.
Bonus Round Let's say I want to change user_id from an int to a bigint, because who makes a foreign key just an int? How do I check not just what needs to be added to the up and down, but also checks if I need to change a property on a field.
Again, call the DB itself on the table in question. In MySQL, for example, the SQL call would be:
describe [table_name];
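Since Postgres comes up below, note that describe is MySQL syntax; a rough Postgres equivalent is to query information_schema (the table name here is illustrative):
-- Lists each column with its type, nullability and default
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_name = 'foo_table';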
After reading the end, I think you answered this yourself, but I think capturing these changes would work best in a NoSQL database (since you're using Node) or in Postgres with its json field.

Maximo automation script to change status of work order

I have created a non-persistent attribute in my WoActivity table named VDS_COMPLETE. It is a bool that gets changed by a checkbox in one of my applications.
I am trying to write an automation script in Python to change the status of every checked task of a work order when I save the work order.
I don't know why it isn't working, but I'm pretty sure I'm close to the answer...
Do you have an idea why it isn't working? I know that I have code in comments; I have done a few experiments...
from psdi.mbo import MboConstants
from psdi.server import MXServer

mxServer = MXServer.getMXServer()
userInfo = mxServer.getUserInfo(user)

mboSet = mxServer.getMboSet("WORKORDER")
#where1 = "wonum = :wonum"
#mboSet.setWhere(where1)
#mboSet.reset()

workorderSet = mboSet.getMbo(0).getMboSet("WOACTIVITY", "STATUS NOT IN ('FERME' , 'ANNULE' , 'COMPLETE' , 'ATTDOC')")
#where2 = "STATUS NOT IN ('FERME' , 'ANNULE' , 'COMPLETE' , 'ATTDOC')"
#workorderSet.setWhere(where2)

if workorderSet.count() > 0:
    for x in range(0, workorderSet.count()):
        if workorderSet.getString("VDS_COMPLETE") == 1:
            workorder = workorderSet.getMbo(x)
            workorder.changeStatus("COMPLETE", MXServer.getMXServer().getDate(), u"Script d'automatisation", MboConstants.NOACCESSCHECK)

workorderSet.save()
workorderSet.close()
It looks like your two biggest mistakes here are:
1. trying to get your boolean field (VDS_COMPLETE) off the set (meaning off of the collection of records, like the whole table) instead of off of the MBO (meaning an actual record, one entry in the table), and
2. getting your data set fresh from the database (via that MXServer call), which means using the previously saved data instead of getting your data set from the screen, where the pending changes have actually been made (and remember that non-persistent fields do not get saved to the database).
There are some other problems with this script too, like your use of count() in your for loop (or even calling it more than once at all), which is an expensive operation, and the way you are currently (though this may be a result of your debugging) not filtering the work order set before grabbing the first work order (meaning you get a random work order from the table) and then using a dynamic relationship off of that record (instead of using a normal relationship, or skipping the relationship altogether and using just a "where" clause), even though that relationship likely already exists.
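To illustrate those points, here is a rough sketch of the same logic written as an object launch point script on WORKORDER at the save point, where the implicit mbo variable is the work order being saved (so pending, unsaved changes, including non-persistent fields, are visible). The "$vdstasks" relationship name is made up for the example:
from psdi.mbo import MboConstants
from psdi.server import MXServer

# Dynamic relationship ("$" prefix) from the on-screen work order to its
# open tasks; an existing WOACTIVITY relationship could be used instead.
taskSet = mbo.getMboSet("$vdstasks", "WOACTIVITY",
                        "STATUS NOT IN ('FERME', 'ANNULE', 'COMPLETE', 'ATTDOC')")

task = taskSet.moveFirst()
while task is not None:
    # Read the flag off the individual task Mbo (one record), not the set
    if task.getBoolean("VDS_COMPLETE"):
        task.changeStatus("COMPLETE", MXServer.getMXServer().getDate(),
                          u"Script d'automatisation", MboConstants.NOACCESSCHECK)
    task = taskSet.moveNext()

# No explicit save()/close() here: running at the work order's save point,
# the related set should be committed as part of the same transaction.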
Here is a Stack Overflow answer describing relationships and "where" clauses in Maximo in more detail: Describe relationship in maximo 7.5
This question also has some more information about getting data from the screen versus new from the database: Adding a new row to another table using java in Maximo

Mocking Postgresql now() function for testing

I have the following stack
Node/Express backend
Postgresql 10 database
Mocha for testing
Sinon for mocking
I have written a bunch of end-to-end tests to test all my webservices. The problem is that some of them are time dependent (as in "give me the modified records of the last X seconds").
sinon is pretty good at mocking all the time/date related stuff in Node; however, I have a modified field in my Postgresql tables that is populated with a trigger:
CREATE FUNCTION update_modified_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.modified = now();
    RETURN NEW;
END;
$$ LANGUAGE 'plpgsql';
The problem of course is that sinon can't override that now() function.
Any idea on how I could solve this? The problem is not setting a specific date at the start of the test, but advancing time faster than real time (in one of my tests I want to change some stuff in the database, advance the 'current time' by one day, change some more stuff in the database, and then do webservice calls to see the result).
I can figure out a few solutions myself, but they all involve changing the application code and making it less elegant. I don't think your application code should be impacted by the fact that you want to test it.
Here's an idea: create your own mock_now() backed by a mock_dates table:
create table mock_dates (
    id serial PRIMARY KEY,
    mock_date timestamptz not null
);

create or replace function mock_now()
returns timestamptz
as $$
declare
    RET timestamptz;
begin
    -- Delete the first added date and assign it to RET
    delete from mock_dates where id in (
        select id from mock_dates order by id asc limit 1
    )
    returning mock_dates.mock_date into RET;

    -- If no deletion happened, just return the current timestamp
    if RET is null then
        return now();
    end if;

    -- Otherwise return the mocked date
    return RET;
end;
$$
language plpgsql;
And insert some mocked dates:
insert into mock_dates (mock_date) values ('2001-03-11 02:34:00'::timestamptz);
insert into mock_dates (mock_date) values ('2002-05-22 01:49:00'::timestamptz);
and use mock_now() instead of now(). It will return each timestamp inserted into the mock_dates table once, first in, first out.
When the table is empty it will behave like the default now().
Just ensure the mock_dates table is empty in production 😁
Or you could even define a different function for production which does not even try to read the mock_dates table.
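Wiring this into the question's trigger is then a one-word change; a sketch:
CREATE OR REPLACE FUNCTION update_modified_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.modified = mock_now();  -- was: now()
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;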
I found this very neat gist which provides a fake NOW() function that lives in a separate schema. You load it into your test database and then modify the search path of each testing session to search override before pg_catalog. Two functions, freeze_time and unfreeze_time, are provided to enable and disable frozen time.
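The gist itself is not reproduced here, but the general shape of the trick, sketched from the description above (the schema, table, and timestamp value are assumptions), is:
-- A schema whose now() shadows pg_catalog.now() when it comes
-- earlier in the search path.
CREATE SCHEMA override;

CREATE TABLE override.freeze_point (frozen_at timestamptz);

CREATE FUNCTION override.now() RETURNS timestamptz AS $$
    SELECT COALESCE(
        (SELECT frozen_at FROM override.freeze_point LIMIT 1),
        pg_catalog.now()
    );
$$ LANGUAGE sql STABLE;

-- In the test session, unqualified now() calls (including the one in the
-- question's trigger) resolve to override.now():
SET search_path TO override, pg_catalog, "$user", public;

-- Freeze and unfreeze by managing the row:
INSERT INTO override.freeze_point VALUES ('2020-01-01 00:00:00+00');
DELETE FROM override.freeze_point;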
To be honest, DB internal stuff is always hard to test from application code. In my experience, the best thing to do is to just verify the record's state.
So, rather than testing specifically that the now function is called internally, write a test that creates a new record, then verify that the record has its created and modified fields set. At that point they should equal each other, and probably be within the last second or two, so that's all stuff you can write assertions around.
Then you write another test that changes some value in the record, and write assertions that the modified stamp is different from the created stamp, is more recent, and is probably also within the last second or two.