When designing workflows, you can specify how they are triggered.
In my particular case I am interested in detecting changes to the Status Reason and, for specific states, doing something. I can use the "After" field change on the Status Reason or a Wait condition, and everything looks to be OK.
The question I have is in relation to an Excel export/import used for bulk operations. In this case the user can change (using Excel) the Status Reason field to a value matching the condition in the workflow.
Assuming the workflow is Activated at the time of Excel import, does the workflow get triggered for every row imported?
It might be very inefficient from a timing perspective, but for small data sets it might be beneficial, effectively acting as the bulk update I am in fact looking for.
Thank you!
To answer your question:
Yes, the workflow does get triggered every time you import data using Excel, provided the imported change matches the criteria for your workflow.
Workflows run on the server side, which means they will trigger every time the value changes in the database and matches the criteria. You could run your workflow in asynchronous mode, and the CRM async job will take care of allocating resources as and when it has capacity. This way you will not see a performance impact when you import data via Excel.
I have an original source, Source A, that I am fetching data from. I am reformatting and filtering the data from Source A and storing it in Source B.
I have created a REST API using Node/Express that uses Source B.
Source A gets updated once every day. I want to update Source B at the same rate. What is the best way to do it?
Ideas so far:
For every API call my server receives, before returning the data, check whether the data was last updated within a day. If not, update the data and then send it. This would mean that one API call per day (the first one after the data goes stale) would be extremely slow.
Perform Cron Scheduling
I would like to know if there are more ways to do this, and I would like a comparison of the different approaches. I would also like to hear if any of you do something like this in production and which method has worked.
Note: in my case Source A is a CSV file in a GitHub repo, and Source B is a MongoDB collection.
The best case you can achieve here is automatic updates to the MongoDB collection whenever that GitHub CSV file is updated. If you can hook your job execution into whatever event is triggering the CSV upload, you would be golden. If you have to go through GitHub, look into GitHub webhooks and see if you can subscribe your workload to one of those events.
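As a rough sketch of what that could look like on the existing Node/Express server (the /webhooks/github route name and the refreshSourceB() helper are assumptions for illustration, not a prescribed layout):

const express = require('express');
const app = express();
app.use(express.json());

// Placeholder: re-download the CSV and rebuild Source B
// (see the cron sketch below for one way to implement this).
async function refreshSourceB() { /* ... */ }

// GitHub will POST to this endpoint for the events you subscribe to (e.g. push).
// In production you would also verify the webhook signature header.
app.post('/webhooks/github', (req, res) => {
  res.sendStatus(202); // acknowledge quickly so GitHub does not time out
  refreshSourceB().catch(err => console.error('Failed to refresh Source B', err));
});

app.listen(3000);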
There is a nice option 3 that you can do with Mongo, by the way: MongoDB-backed job queues are great for scheduling jobs at precise intervals.
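For the cron-style idea, a minimal sketch using node-cron and the official MongoDB driver might look like the following (the raw CSV URL, database/collection names, schedule, and the naive CSV parsing are all assumptions for illustration):

const cron = require('node-cron');
const { MongoClient } = require('mongodb');

const CSV_URL = 'https://raw.githubusercontent.com/<user>/<repo>/main/data.csv'; // placeholder
const client = new MongoClient('mongodb://localhost:27017');

async function refreshSourceB() {
  await client.connect();
  const text = await (await fetch(CSV_URL)).text(); // Node 18+ global fetch

  // Naive CSV parsing for illustration; apply your real reformatting/filtering here.
  const [header, ...rows] = text.trim().split('\n');
  const fields = header.split(',').map(f => f.trim());
  const docs = rows.map(row => {
    const values = row.split(',');
    return Object.fromEntries(fields.map((f, i) => [f, values[i]]));
  });

  const col = client.db('mydb').collection('sourceB');
  await col.deleteMany({});                 // replace the collection on each run
  if (docs.length) await col.insertMany(docs);
  console.log(`Source B refreshed with ${docs.length} documents`);
}

// Run once a day at 02:00; align this with when Source A is published.
cron.schedule('0 2 * * *', () => refreshSourceB().catch(console.error));

Compared with the refresh-on-request idea, this keeps request latency flat for every caller; the webhook approach above avoids polling entirely but depends on being able to configure the repository.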
I want to delete more than 1 million user records in Kentico 10.
I tried to delete them with UserInfoProvider.DeleteUser() (see the following documentation), but a simple calculation suggests it would take nearly one year.
https://docs.kentico.com/api10/configuration/users#Users-Deletingauser
Since that is only a rough calculation, the real time would probably be somewhat shorter, but it would still take far too long.
Is there any other way to delete users in a short time?
Of course make sure you have a backup of your database before you do any of this.
Depending on the features you're using, you could get away with a SQL statement. Because a user is referenced by multiple other tables, the SQL statement can get pretty complex, and you need to make sure you remove those other references before removing the actual user record.
I'd highly recommend an API approach and deleting users through the API so it removes all the references for you automatically. In your API calls, make sure you wrap the delete action in the following so it stops event logging and other labor-intensive activities that aren't needed.
using (var context = new CMSActionContext())
{
    // Disable event logging and other automatic actions for this scope
    context.DisableAll();

    // delete your user here, e.g. UserInfoProvider.DeleteUser(user);
}
In your code, I'd only select the top 100 or so at a time and delete them in batches. Assuming you don't need this done all in one run, you could let the scheduled task run your custom code for a week and see where you're at.
If all else fails, figure out how to delete the user and the 70+ foreign key references and you'll be golden.
Why don't you delete them with a SQL query? I believe it will be much faster.
Bulk delete functionality exists starting from version 10.
UserInfoProvider has a BulkDelete method. Actually, any info provider inherited from AbstractInfoProvider has a BulkDelete method.
I need to allow a couple of users to modify a table in my database, preferably as part of an integrated package that then submits the changes into our live database.
Please allow me to explain further:
We have an automated import task from one database system into another, with data transformation on the way through.
As part of this task, various checks are run before the final import and any rows with incomplete or incorrect data are sent to a rejections table and deleted from the import table.
I now need to allow a couple of senior users the ability to view and correct the missing/incorrect entries from the rejection table, before re-staging them and submitting to the live database.
(Obviously, it will be re-checked before submission and re-rejected if it is still wrong).
Can anyone tell me what I need to do in SSIS to display the contents of a specific table (e.g. MyDatabase.dbo.Reject_Table) to the user running this package from their local PC (the package will, of course, be located on the server)?
Then they need the ability to modify the contents of the table (either one row at a time or en masse; I'm not bothered which).
When that is done, they hit a "Continue" or "Next" type button, which then continues to run the remainder of the package, which I am more than comfortable writing.
It is only the interactive stage(s) that I am struggling with and I would really appreciate some advice.
Thanks
Craig
That is non-native functionality in SSIS.
You can write pretty much anything you want in a script task and that includes GUI components. (I once had a package play music). In your data flow, you would have a Script Component that edits each row passing through the component.
Why this is a bad idea
Suitability - This isn't really what SSIS is for. The biggest challenge you'll run into is that the data flow is tightly bound to the shape of the data. The reject table for Customer is probably different from the reject table for Phone.
Cost - How are you going to allow those senior users to run SSIS packages? If the answer involves installing SSIS on their machines, you are looking at a production license for SQL Server. That's 8k to 23k-ish per socket for SQL Server 2005-2008R2 and something insane per core for SQL Server 2012+.
What is a better approach
As always, I would decompose the problem into smaller tasks until I can solve it. I'd make two problem statements:
As a data steward, I need the ability to correct (edit) incomplete data so that data can be imported into our application.
As an X, I need the ability to import (workflow) corrected rejected data so that we can properly bill our customers (or whatever the reason is).
Editing data. I'd make a basic web page or thick client app to provide edit capability. A DataGridView would be one way of doing it. Heck, you could forgo custom development and just slap an Access front end onto the tables and let them edit the data through that.
Import corrected data. This is where I'd use SSIS, but possibly not exclusively. I'd probably look at adding a column to all the reject tables that indicates whether a row is ready for reprocessing. For each reject table, I'd have a package that looks for any rows flagged as ready. I'd probably use a delete-first pattern to remove the flagged data and either insert it into the production tables or route it back into the reject table for further fixing. The mechanism for launching the packages could be whatever makes sense. Since I'm lazy, I'd:
- have a SQL Agent job that runs the packages,
- create a stored proc which can start that job,
- grant security on that stored proc to the data stewards, and
- provide the stewards a big red button that says Import.
How that's physically implemented would depend on how you solved the edit question.
In my CouchDB database I'd like all documents to have an 'updated_at' timestamp added when they're changed (and have this enforced).
I can't modify the document with validation functions
update functions won't run unless they're called specifically (so it'd be possible to update the document and not call the specific update function)
How should I go about implementing this?
There is no way to do this right now without going through _update handlers. Tracking document modification time is a nice idea, but it runs into problems with replication.
Replication works on top of the public API, and this means that:
If you enforce such a trigger, you'll break replication, since it will be impossible to sync data as-is without modifying the documents. Because each replicated document gets modified, it receives a new revision, which can easily lead to an endless loop if you replicate data from database A to B and B to A in continuous mode.
In the other case, where replication keeps working, there will always be a way to work around your trigger.
I can suggest one workaround: you can create a view which emits the current date as a key (or as part of it):
function (doc) {
  emit(new Date(), null);
}
This will assign current dates to all documents as soon as the view generation gets triggered (which happens after the first request to it) and will reassign new dates on each update of a specific document.
Although the above should solve your issue, I would advise against using it for the reasons already explained by Kxepal: if you're on a replicated network, each node will assign its own dates. So taking this into account, the best I can recommend is to solve the issue on the client side and just post the documents with a date already embedded.
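For example, a minimal sketch of doing that from a Node client against CouchDB's plain HTTP API (the server URL and database name are placeholders, and authentication is omitted):

// Save a document with an updated_at timestamp set by the client.
async function saveWithTimestamp(doc) {
  const stamped = { ...doc, updated_at: new Date().toISOString() };

  // PUT /<db>/<docid> creates or updates the document; the current _rev
  // must be included in the body when updating an existing document.
  const res = await fetch(`http://localhost:5984/mydb/${doc._id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(stamped),
  });

  if (!res.ok) throw new Error(`CouchDB save failed: ${res.status}`);
  return res.json(); // { ok: true, id: ..., rev: ... }
}

A validate_doc_update function cannot add the field for you, but it could at least reject documents that arrive without an updated_at, which gives you some enforcement on top of the client-side convention.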
I am building a custom control to do a lookup and provide a summary of the status of several items in the database. There are 20 different statuses, and in order to determine the number for each status, I am doing a NotesDatabase.search to count each status.
This was fine when there were only 2 statuses to check; however, the business now wants all of them displayed. :)
I'm concerned about the time it will take to do the search, and want to do this in the most efficient manner possible.
Things I have taken into account:
The documents are updated regularly, so I can't really have an agent doing the calculations and the custom control running a lookup against those static values. This would mean the data is old.
The results are dependent on the logged-in user, with counts based on their login ID, so I can't really have separate views per person.
Does anyone have a clean suggested solution?
I am about to start testing the 20 searches and will update this with those results, but am expecting it to be very slow.
A
The other option: instead of @DbLookup you go into the view and just run through it end to end using a navigator. That's pretty fast and should be faster than 20 separate searches.
Of course, you could update the tallies in the QuerySave event and write them into a user-specific in-memory profile.
So in your QuerySave you would check which users are loaded in an ApplicationBean and update those. If a user logs in for the first time, a database search populates their entry in the application bean. When a session expires (via a session listener), the entry in the ApplicationBean is cleared out.
Instead of 20 searches you actually might be better off with ONE Ajax call. Create a view that is categorized by your status and is collapsed. Then make an Ajax call to ...statusview?ReadViewEntries&OutputFormat=JSON&count=100. This will give you up to 100 status summary entries, each with a childcount property.
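A rough sketch of that call from client-side JavaScript (the database path and view name are placeholders, and the exact attribute names in the JSON should be checked against what your server actually returns):

// Fetch the collapsed, categorized status view and read the per-category counts.
async function loadStatusSummary() {
  const url = '/myapp.nsf/statusview?ReadViewEntries&OutputFormat=JSON&count=100';
  const res = await fetch(url, { credentials: 'include' });
  const data = await res.json();

  // Each collapsed category row should carry the number of documents in that
  // status; the category column and child count attribute names below are
  // assumptions, so inspect the returned JSON and adjust as needed.
  return (data.viewentry || []).map(entry => ({
    status: entry.entrydata && entry.entrydata[0].text
      ? entry.entrydata[0].text[0]
      : undefined,
    count: Number(entry['@childcount'] || 0),
  }));
}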
Would that work for you?
Would it be possible to add 20 status documents, and to update one or more of them whenever some condition is met? Each time a document is updated, an agent runs to check those conditions and update the corresponding status document.
If there are many updates per day, it's not really efficient.