How to get the latest 10 changes from a CMIS repository

How to get the latest 10 changes that happened on a CMIS repository?
In particular, will the call to getContentChanges below return the latest 10 changes?
Or might it return the first 10 changes of the repo, or something else?
Integer maxItems = 10
getContentChanges(repositoryId, maxItems) // I don't specify a change token

Related

Validate @VersionColumn value before saving an entity with TypeORM

I'm currently working on saving data in a Postgres DB using TypeORM with the NestJS integration. I'm saving data which keeps track of a version property using TypeORM's @VersionColumn feature, which increments a number each time save() is called on a repository.
For my feature it is important to check this version number before updating the records.
Important
I know I could technically achieve this by retrieving the record before updating it and checking the versions, but this leaves a small window for errors. If a second user updates the same record in the split second between the get and the save (or if it takes longer for some odd reason), it would bump the version and make the data in the first call invalid. TypeORM doesn't check the version value, so even if a call has a lower value than what is in the database, it still saves the data even though it should be considered out of date.
1: User A checks latest version => TypeORM gives back the latest version: 1
2: User B updates record => TypeORM ups the version: 2
3: User A saves their data with version 1 <-- This needs to validate the versions first.
4: TypeORM overwrites User B's record with User A's data
What I'm looking for is a way to make TypeORM decline step 3 as the latest version in the database is 2 and User A tries to save with version 1.
I've tried using the QueryBuilder and update statements to make this work, but the built-in @VersionColumn only increments the version on a save() call from a repository or entity manager.
Besides this, I also got a tip to look into database triggers, but as far as I could find, this feature is not yet supported by TypeORM.
Here is an example of the setup:
async update(entity: Foo): Promise<boolean> {
  const value = await this._configurationRepository.save(entity);
  if (value === entity) {
    return true;
  }
  return false;
}
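For comparison, a conditional UPDATE through the query builder is one way to implement that version check without triggers: the statement only matches the row when the stored version still equals the version the caller read. This is only a minimal sketch under that assumption; the Foo entity shape and the updateIfCurrent helper are made up for illustration, and result.affected is populated on Postgres.
import { Column, Entity, PrimaryGeneratedColumn, Repository, VersionColumn } from "typeorm";

@Entity()
export class Foo {
  @PrimaryGeneratedColumn()
  id!: number;

  @Column()
  someValue!: string;

  @VersionColumn()
  version!: number;
}

// Hypothetical helper: refuses the write when the row's version in the database
// no longer matches the version the caller read earlier.
export async function updateIfCurrent(repository: Repository<Foo>, entity: Foo): Promise<boolean> {
  const result = await repository
    .createQueryBuilder()
    .update(Foo)
    .set({ someValue: entity.someValue, version: entity.version + 1 }) // bump the version ourselves, since save() is not involved
    .where("id = :id AND version = :version", { id: entity.id, version: entity.version })
    .execute();

  // 0 affected rows means someone else saved in between, so the caller's data is stale.
  return result.affected === 1;
}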
In my opinion, something like this is much better served through triggers directly in the database, as that addresses the race-condition concerns and also ensures that modifications made outside the ORM update the version number. Here is a SQL Fiddle demonstrating triggers in action. You'll just need to incorporate it into your schema migrations.
Here is the relevant DDL from the SQL Fiddle example:
CREATE TABLE entity_1
(
    id serial PRIMARY KEY,
    some_value text,
    version int NOT NULL DEFAULT 1
);

CREATE OR REPLACE FUNCTION increment_version() RETURNS TRIGGER AS
$BODY$
BEGIN
    NEW.version = NEW.version + 1;
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;

CREATE TRIGGER increment_entity_1_version
    BEFORE UPDATE
    ON entity_1
    FOR EACH ROW
EXECUTE PROCEDURE increment_version();
The same trigger function can be used for any table that has a version column in case this is a pattern you want to use across multiple tables.
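If the schema is managed with TypeORM migrations, as suggested above, the function and trigger can be created there. A minimal sketch reusing the SQL from the answer; the migration class name is arbitrary and the table name entity_1 is taken from the example above:
import { MigrationInterface, QueryRunner } from "typeorm";

export class AddVersionTrigger1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    // Reusable trigger function: bumps "version" on every UPDATE.
    await queryRunner.query(`
      CREATE OR REPLACE FUNCTION increment_version() RETURNS TRIGGER AS
      $BODY$
      BEGIN
        NEW.version = NEW.version + 1;
        RETURN NEW;
      END;
      $BODY$
      LANGUAGE plpgsql VOLATILE;
    `);

    // Attach the same function to every table that has a version column.
    await queryRunner.query(`
      CREATE TRIGGER increment_entity_1_version
      BEFORE UPDATE ON entity_1
      FOR EACH ROW
      EXECUTE PROCEDURE increment_version();
    `);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`DROP TRIGGER IF EXISTS increment_entity_1_version ON entity_1;`);
    await queryRunner.query(`DROP FUNCTION IF EXISTS increment_version();`);
  }
}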
I think you are looking for concurrency control. If that is the case, there is a solution about halfway down this thread: TypeORM concurrency control issue

Assignment files do not get deleted via course reset after cron execution in Moodle

When I try to reset the assignments of a course, all data appears to be deleted on the front end. I tested this with a single file I uploaded myself to a test assignment. But when checking disk usage with
du moodledata/filedir
the usage stays the same. I ensured the cron task was executed, which printed
...
Cron script completed correctly
Cron completed at 17:40:03. Memory used 32.8MB.
Execution took 0.810698 seconds
The files are also not in moodledata/trashdir, which is probably the reason the cron task does not clean them up.
Removing the file with
moosh file-hash-delete <hash>
seemed to work. I identified the hash by comparing disk usage before and after the upload and checking which hash in the folder accounted for the size of the file I uploaded.
The hash was not in the mdl_files table in MySQL, but its draft was. I found this out via
moosh file-check
and I also checked it with phpMyAdmin, which listed the file (draft) alongside other files.
Logs for resetting the course show the following:
Core System, course reset finished, The reset of the course with id '4' has ended.
Core System, deadline updated, The user with id '2' updated the event 'test ist zur Bewertung fällig.' with id '4'.
Core System, deadline updated, The user with id '2' updated the event 'test ist fällig.' with id '3'.
Core System, course reset begin, The user with id '2' started the reset of the course with id '4'.
(note that I translated some of the messages, because my setup is in German).
Unfortunately I have to run this Moodle instance on a hoster with extremely little disk space (hence the backup/deletion requirement).
Some background info:
Moodle - version 3.8.2+ stable, dbtype set to mariadb
MariaDB - version 10.3.19
Machine: CentOS Linux 7
UPDATE: It seems that after some days (I checked today, about 4 days later) the files have been deleted. I don't know why this happened only after so many days, even though I manually triggered the cron job (which apparently doesn't delete the files). It would be nice to know where that delay is configured and which script finally deletes the files.
On the course reset page, if you scroll down, there is a drop-down for Assignments.
Did you check the box for Delete all submissions?
In the code, $data->reset_assign_submissions will delete the files:
public function reset_userdata($data) {
    global $CFG, $DB;

    $componentstr = get_string('modulenameplural', 'assign');
    $status = array();

    $fs = get_file_storage();
    if (!empty($data->reset_assign_submissions)) {
        // ...

Orchard CMS: get the latest version of a content item even if it has been removed

I'm using Orchard 1.7
In Orchard, when a content item is removed, it isn't actually deleted from the database; the CMS just sets the Published and Latest values of all versions of the content item to 0, so it can still be retrieved.
My problem is: I have a user that was removed (this user was modified many times, in particular the Title).
Case 1: I use cms.Get(userId, VersionOptions.AllVersions).As<TitlePart>()
Case 2: I use myItem.As<CommonPart>().Owner.As<TitlePart>()
In both cases it always returns the title of the first version of this user, but I want the latest version (the one with the largest version number).
So, what should I modify in Orchard to resolve this?
I had this issue too. Here is my solution to query the most recent version of a deleted user using the content manager:
Orchard.Users.Models.UserPart lUserPart = mContentManager
    .Query<Orchard.Users.Models.UserPart,
           Orchard.Users.Models.UserPartRecord>(VersionOptions.AllVersions)
    .Where(u => u.NormalizedUserName == lowerName) // taken from Orchard.Users.Services.MembershipService.GetUser()
    .List()
    .LastOrDefault(); // LastOrDefault() to get the version with the highest version number

How to customize a merge request message on Gitlab?

Before starting a new issue, I always create a new branch for it (directly from Gitlab). When I finish the job on that issue (and tests are Ok), I create a merge request (from Gitlab).
After the merge is done, I get an "auto-generated" message linked to that merge (this message is very generic and identical for all merges I have done).
The same thing also happens when I merge develop into master.
Is there a way to customize the merge request message to have a message like this:
Merge {shortIssueName}: {issueDescription} into {develop|master}
Note:
I'm using GitLab Community Edition 8.15.3.
Globally and automatically? I don't think so. As far as I can see, it's hard-coded:
message = [
  "Merge branch '#{source_branch}' into '#{target_branch}'",
  title
]

if !include_description && closes_issues_references.present?
  message << "Closes #{closes_issues_references.to_sentence}"
end

message << "#{description}" if include_description && description.present?
message << "See merge request #{to_reference}"

message.join("\n\n")
You can override the message for any merge request manually.
It's also possible if you create the merge request through the API. It takes some work, but you can build a mechanism that fetches all the data via the API and sets it as the description (you must make sure everything you need, such as the issueDescription, is available through the API).
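For example, a minimal sketch of that approach in TypeScript, using fetch against the merge request API. The /api/v3 path matches GitLab 8.x; GITLAB_URL, PRIVATE_TOKEN, and the issue fields are placeholders, and createMergeRequest is just an illustrative name:
const GITLAB_URL = "https://gitlab.example.com"; // placeholder
const PRIVATE_TOKEN = "<personal access token>"; // placeholder

// Creates the merge request itself, so the title (which ends up in the generated
// merge message, see the hard-coded Ruby above) can be built from the issue.
async function createMergeRequest(
  projectId: number,
  sourceBranch: string,
  targetBranch: string,
  shortIssueName: string,
  issueDescription: string
): Promise<void> {
  const response = await fetch(`${GITLAB_URL}/api/v3/projects/${projectId}/merge_requests`, {
    method: "POST",
    headers: {
      "PRIVATE-TOKEN": PRIVATE_TOKEN,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      source_branch: sourceBranch,
      target_branch: targetBranch,
      title: `Merge ${shortIssueName}: ${issueDescription} into ${targetBranch}`,
      description: issueDescription,
    }),
  });

  if (!response.ok) {
    throw new Error(`Creating the merge request failed: ${response.status}`);
  }
}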
Even though @piotr-dawidiuk makes a good point, I believe it is outdated.
According to the GitLab docs, you can create your own .md files to change all the templates. Check it here. As the docs state,
Similarly to issue templates, create a new Markdown (.md) file inside the .gitlab/merge_request_templates/ directory in your repository. Commit and push to your default branch.
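For instance, a hypothetical .gitlab/merge_request_templates/feature.md on the default branch becomes selectable when opening a merge request and pre-fills its description (GitLab does not substitute placeholders, so the bracketed parts still have to be filled in by hand):
Merge {shortIssueName}: {issueDescription} into {develop|master}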

How to revert changes in couchdb?

I made a mass edit to a bunch of docs in my couchdb, but I made a mistake and overwrote a field improperly. I can see that the previous revision is there. How do I revert back to it?
My current best guess, based on this:
http://guide.couchdb.org/draft/conflicts.html
...is to find the doc id and the revision id and then send a delete for that document specifying the revision I want to be gone.
curl -X DELETE $HOST/databasename/doc-id?rev=2-de0ea16f8621cbac506d23a0fbbde08a
I think that will leave the previous revision. Any better ideas out there?
I had to write some CoffeeScript (using underscore.js and jquery.couch) to do this. It's not a true revert, as we are fetching the old revision and creating a new revision from it. Still looking for better suggestions:
_.each docsToRevert, (docToRevert) ->
  $.couch.db("databaseName").openDoc docToRevert.id,
    revs_info: true
  ,
    success: (doc) ->
      $.couch.db("databaseName").openDoc docToRevert.id,
        rev: doc._revs_info[1].rev # index 1 gives us the previous revision
      ,
        success: (previousDoc) ->
          newDoc = previousDoc
          newDoc._rev = doc._rev
          $.couch.db("databaseName").saveDoc newDoc # write the old content back as a new revision
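The same read-the-old-revision-and-save-it-again approach also works with plain HTTP calls against CouchDB's document API. A minimal sketch in TypeScript (Node 18+ for the built-in fetch); DB_URL and revertToPreviousRevision are placeholders, and it assumes the old revision has not been compacted away yet:
const DB_URL = "http://localhost:5984/databasename"; // placeholder

async function revertToPreviousRevision(docId: string): Promise<void> {
  // 1. Get the current document together with its revision history.
  const current = await (await fetch(`${DB_URL}/${docId}?revs_info=true`)).json();
  const previousRev = current._revs_info[1]?.rev; // index 0 is the current revision
  if (!previousRev) {
    throw new Error("No previous revision available");
  }

  // 2. Fetch the body of the previous revision.
  const previous = await (await fetch(`${DB_URL}/${docId}?rev=${previousRev}`)).json();

  // 3. Write the old content back on top of the current revision,
  //    which creates a brand-new revision containing the old data.
  const response = await fetch(`${DB_URL}/${docId}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...previous, _rev: current._rev }),
  });
  if (!response.ok) {
    throw new Error(`Revert failed with HTTP ${response.status}`);
  }
}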
