I know it might be a bit of a confusing title, but I couldn't come up with anything better.
The problem ...
I have an ADF pipeline with 3 activities: first a Copy into a DB, then two stored procedures. All are triggered daily and use WindowEnd to read the right directory or to pass a date to the SPs.
There is no way I can get an import date into the XML files that we are receiving.
So I'm trying to add it in the first SP.
The problem is that once the first activity of the pipeline is done, two others are started: the second activity in the same slice (the SP that adds the dates), but when history is being loaded, the same pipeline also starts a copy for another slice.
So I'm getting mixed-up data.
As you can see in the 'Last Attempt Start'.
Does anybody have an idea how to avoid this?
(screenshot: ADF Monitoring)
In case somebody hits a similar problem..
I've solved the problem by working with daily named tables.
Each slice puts its data into a staging table with a _YYYYMMDD suffix, which can be set with "tableName": "$$Text.Format('[stg].[filesin_1_{0:yyyyMMdd}]', SliceEnd)".
So there is never a problem with parallelism anymore.
The only disadvantage is that the SPs that run after this first step have to use dynamic SQL, as the table name they select from is variable.
But that wasn't a big coding problem.
Works like a charm!
Related
I created a DLT pipeline targeting a terabyte-scale directory with the file notifications option turned on. I set "cloudFiles.includeExistingFiles": false to ignore existing files and ingest data starting from the first run.
What I expect to happen is that on the first run (t0) no data is ingested, while on the second run (t1) the incoming data is ingested between t0 and t1. I also expect the first run to complete instantly, and since I am using file notifications, I expect the second run to complete pretty fast as well.
I started the first run, and it has now been running for 7 hours :) No data is ingested, as I expected, but I have no idea what the pipeline is doing right now. I guess it is doing something with the existing files even though I explicitly stated that I want to ignore them.
Any ideas why the behavior I expected isn't happening?
I am currently developing an Azure ML pipeline that, as one of its outputs, maintains a SQL table holding all of the unique items that are fed into it. There is no way to know in advance whether the data fed into the pipeline contains new unique items or repeats of previous items, so before updating the table it maintains, it pulls the data already in that table and drops any of the new items that already appear.
However, because of this there are cases where this self-reference results in zero new items being found, and as such there is nothing to export to the SQL table. When this happens Azure ML throws an error, as it considers zero lines of data to export an error. In my case, however, this is expected behaviour and absolutely fine.
Is there any way for me to suppress this error, so that when it has zero lines of data to export it just skips the export module and moves on?
It sounds as if you are struggling to orchestrate a data pipeline because the orchestration is happening in two places. My advice would be to either move more of the orchestration into Azure ML, or make the separation between the two greater. One way to do this would be to have a regular export to blob of the table you want to use for training. Then you can use a Logic App to trigger a pipeline whenever a non-empty blob lands in that location.
This issue has been resolved by an update to Azure Machine Learning; you can now run pipelines with a "Continue on Failure Step" flag set, which means that steps following the failed data export will continue to run.
This does mean you will need to design your pipeline to handle upstream failures in its downstream modules; this must be done very carefully.
I want to delete more than 1 million user records in Kentico 10.
I tried to delete them with UserInfoProvider.DeleteUser() (see the following documentation), but a simple calculation suggests it would take nearly a year.
https://docs.kentico.com/api10/configuration/users#Users-Deletingauser
Since that is only a rough calculation, the real time would probably be a bit shorter, but it would still take far too long.
Is there any other way to delete users in a short time?
Of course make sure you have a backup of your database before you do any of this.
Depending on the features you're using, you could get away with a SQL statement. Due to the complexity of a user's references to multiple other tables, the SQL statement can get pretty complex, and you need to make sure you remove the other references before removing the actual user record.
I'd highly recommend an API approach and deleting users through the API so it removes all the references for you automatically. In your API calls, make sure you wrap the delete action in the following so it stops event logging and other labor-intensive activities that aren't needed.
using (var context = new CMSActionContext())
{
context.DisableAll();
// delete your user
}
In your code, I'd only select the top 100 or so at a time and delete them in batches. Assuming you don't need this done all in one run, you could let the scheduled task run your custom code for a week and see where you're at.
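A minimal sketch of that batched approach, for example run from a custom scheduled task (the WhereLessThan filter on UserCreated is only a placeholder assumption; substitute whatever condition identifies the users you actually want to remove):

using System;
using System.Linq;
using CMS.Base;
using CMS.Membership;

public static class UserCleanup
{
    public static void DeleteBatch()
    {
        using (var context = new CMSActionContext())
        {
            // Turn off event logging and other expensive side effects for this scope
            context.DisableAll();

            // Pull a small batch so each scheduled-task run stays short
            var batch = UserInfoProvider.GetUsers()
                                        .WhereLessThan("UserCreated", new DateTime(2015, 1, 1))
                                        .TopN(100)
                                        .ToList();

            foreach (var user in batch)
            {
                // Deletes the user together with its references (roles, sites, settings, ...)
                UserInfoProvider.DeleteUser(user);
            }
        }
    }
}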
If all else fails, figure out how to delete the user and the 70+ foreign key references and you'll be golden.
Why don't you delete them with a SQL query? I believe it would be much faster.
Bulk delete functionality exists starting from version 10.
UserInfoProvider has a BulkDelete method. Actually, any info provider object inherited from AbstractInfoProvider has a BulkDelete method.
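A rough sketch of that (the exact BulkDelete signature can differ between versions, and the where condition here is just a placeholder assumption, so verify both against your installation):

using CMS.DataEngine;
using CMS.Membership;

// Deletes all users matching the condition in one bulk operation,
// without loading and processing each UserInfo object individually.
UserInfoProvider.ProviderObject.BulkDelete(
    new WhereCondition().WhereEquals("UserEnabled", false));

Because a bulk delete skips the per-object processing, it is worth checking how it treats the user's dependencies in your version before running it against production data.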
I am trying to learn SQLyog Job Agent (SJA).
I am on a Linux machine and use SJA from within a bash script via the command: ./sja myschema.xml
I need to sync a database of almost 100 tables with its local clone.
Since a single table stores some config data that I do not wish to sync, it seems I need to write a myschema.xml that lists all the remaining 99 tables.
The question is: is there a way to specify syncing all the tables except a single one?
I hope my question is clear. I appreciate your help.
If you are using the latest version of SQLyog: you are given the table selection below, and the option to generate an XML job file at the end of the database synchronisation wizard reflecting the operation you've opted to perform. This will in effect list the other 99 tables in the XML file itself for you, but it will give you what you are looking for, and I don't think you would be doing anything in particular with an individual table, since you are specifying all tables in a single element.
I need to allow a couple of users to modify a table in my database, preferably as part of an integrated package that then submits the changes into our live database.
Please allow me to explain further:
We have an automated import task from one database system into another, with data transformation on the way through.
As part of this task, various checks are run before the final import and any rows with incomplete or incorrect data are sent to a rejections table and deleted from the import table.
I now need to allow a couple of senior users the ability to view and correct the missing/incorrect entries in the rejection table, before re-staging it and submitting it to the live database.
(Obviously, it will be re-checked before submission and re-rejected if it is still wrong).
Can anyone tell me what I need to do in SSIS to display the contents of a specific table (e.g. MyDatabase.dbo.Reject_Table) to the user running this package from their local PC (the package will, of course, be located on the server).
Then they need the ability to modify the contents of the table (either one row at a time or en masse; I'm not bothered which).
When that is done, they hit a "Continue" or "Next" type button, which then continues to run the remainder of the package, which I am more than comfortable writing.
It is only the interactive stage(s) that I am struggling with and I would really appreciate some advice.
Thanks
Craig
That is non-native functionality in SSIS.
You can write pretty much anything you want in a script task and that includes GUI components. (I once had a package play music). In your data flow, you would have a Script Component that edits each row passing through the component.
Why this is a bad idea
Suitability - This isn't really what SSIS is for. The biggest challenge you'll run into is that the data flow is tightly bound to the shape of the data. The reject table for Customer is probably different from the reject table for Phone.
Cost - How are you going to allow those senior users to run SSIS packages? If the answer involves installing SSIS on their machines, you are looking at a production license for SQL Server. That's roughly 8k to 23k per socket for SQL Server 2005-2008R2 and something insane per core for SQL Server 2012+.
What is a better approach
As always, I would decompose the problem into smaller tasks until I can solve it. I'd make 2 problem statements
As a data steward, I need the ability to correct (edit) incomplete data so that data can be imported into our application.
As an X, I need the ability to import (workflow) corrected rejected data so that we can properly bill our customers (or whatever the reason is).
Editing data. I'd make a basic web page or thick client app to provide edit capability. A DataGridView would be one way of doing it. Heck, you could forgo custom development and just slap an Access front end on the tables and let them edit the data through that.
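As a rough illustration of that thick-client idea (the connection string and table name are assumptions, and the reject table needs a primary key so the command builder can generate the UPDATE/DELETE statements):

using System;
using System.Data;
using System.Data.SqlClient;
using System.Windows.Forms;

public class RejectEditorForm : Form
{
    private readonly SqlDataAdapter adapter;
    private readonly SqlCommandBuilder builder;
    private readonly DataTable table = new DataTable();

    public RejectEditorForm()
    {
        var grid = new DataGridView { Dock = DockStyle.Fill };
        var save = new Button { Text = "Save changes", Dock = DockStyle.Bottom };
        Controls.Add(grid);
        Controls.Add(save);
        grid.BringToFront();   // keep the grid filling the space above the button

        adapter = new SqlDataAdapter(
            "SELECT * FROM dbo.Reject_Table",
            "Server=MyServer;Database=MyDatabase;Integrated Security=true");
        builder = new SqlCommandBuilder(adapter);   // derives INSERT/UPDATE/DELETE from the SELECT

        adapter.Fill(table);
        grid.DataSource = table;                    // stewards edit rows directly in the grid

        save.Click += (s, e) => adapter.Update(table);   // push the edits back to SQL Server
    }

    [STAThread]
    public static void Main()
    {
        Application.Run(new RejectEditorForm());
    }
}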
Import corrected data. This is where I'd use SSIS but possibly not exclusively. I'd probably look at adding a column to all the reject tables that indicates whether it's ready for reprocessing. For each reject table, I'd have a package that looks for any rows flagged as ready. I'd probably use a Delete first pattern to remove the flagged data and either insert it into the production tables or route it back into the reject table for further fixing. The mechanism for launching the packages could be whatever makes sense. Since I'm lazy,
I'd have a SQL Agent job that runs the packages and
Create a stored proc which can start that job
Grant security on that stored proc to the data stewards
Provide the stewards a big red button that says Import.
How that's physically implemented would depend on how you solved the edit question.
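For the big red button itself, a minimal sketch, assuming a wrapper proc (hypothetically named dbo.StartRejectImport) that calls msdb.dbo.sp_start_job for the Agent job and that the stewards have been granted EXECUTE on it; the connection string is also an assumption:

using System.Data;
using System.Data.SqlClient;

public static class ImportButton
{
    // Wire this to the Click handler of the Import button in the edit screen.
    public static void StartImport()
    {
        using (var conn = new SqlConnection("Server=MyServer;Database=MyDatabase;Integrated Security=true"))
        using (var cmd = new SqlCommand("dbo.StartRejectImport", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            cmd.ExecuteNonQuery();   // kicks off the SQL Agent job that runs the import packages
        }
    }
}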