It's quite simple to use the Row Denormaliser step to achieve pivots when we have only a few records, whose fields can be entered manually in the denormaliser step, but what about when there are hundreds of thousands of records? I tried using the ETL Metadata Injection step, but I was unable to achieve my desired output.
Here is the link to my previous post, where my source data is defined.
And here is what I have tried:
row denormaliser options
pivot_inject_etl_metadata.ktr
Try something like this in pivot_inject_etl_metadata.ktr:
- Group by: sku_id
- Add constants
- ETL Metadata Injection
I have an ADF pipeline which iterates over a set of files, performing various operations, and I have an Azure Cosmos DB (SQL API) instance where I would like to insert the name of each file and a timestamp, mainly to keep track of which files have already been processed and which have not; in the future I might also want to add some other bits of data related to each file.
What I have is my Cosmos DB:
Currently I am trying to utilize the Copy Data activity for the insert part.
One problem I have is that this particular activity expects a source, while at this point I have only the filename. In theory it was an option to use the Blob Storage from which I read the file at the beginning, but since that Blob Storage is set to store binary files, I got the following error when I tried to use it as the source:
Because of that I created a dummy Cosmos DB linked service, but I have several issues with this approach:
- Generally, the idea of a dummy source is not very appealing to me.
- I haven't found a lot of information on the topic, but it seems that if I want to use something in the Sink, I need to SELECT it from the source (see the sketch after this list).
- Even though I have selected a value for the id, the item is not saved with the value selected in the source query; as you can see from the first screenshot, I got a GUID, and only the name is as I want it.
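To illustrate the second point, this is a minimal sketch of the kind of dummy-source query I mean (the @{...} dynamic-content expressions are resolved by ADF before the query runs, and the names are illustrative, not my exact setup):

SELECT '@{item().name}' AS id, '@{utcnow()}' AS processedAt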
So my questions are two. I have only just started learning ADF, but this approach doesn't look like the proper way to insert an item into Cosmos DB from an activity, so a better/more common approach would be appreciated. If there is no better proposal, how can I at least apply my own value for the id column? If I create the item in the Cosmos DB GUI and save it from there, as you can see, I am able to use the filename as the id, which for now seems like a good idea to me; but I wasn't able to add a custom value (string or int) when going through the activity, so how can I achieve this?
This is how my Sink looks:
While trying to build an ADF pipeline that generates datasets within Data Factory, I ran into an interesting issue. Or maybe I misunderstand some components completely, in which case I'd happily be educated.
I basically read some metadata from a SQL Database table, which determines the source system, schema, and tables I should pull new data from. The metadata is stored in a bunch of variables, which then feed a Web Request that attempts to generate a new Data Source as per the MS documentation. Yes, I'm trying to use Azure Data Factory to generate Azure Data Factory components.
The URL to create the DataSet and the JSON body for the request are both generated using @concat and a number of the variables. The resulting DataSet is a very straightforward file that does not contain references to the columns, just the table schema and table name. I generated these manually before, and that all seems to work brilliantly: I basically have a dataset connected to the source system, referencing the table from the metadata.
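For context, here is roughly the shape of the body the pipeline builds (the dataset type and names are illustrative, not my actual configuration; the request itself is a PUT against the datasets endpoint of the Data Factory REST API):

{
  "properties": {
    "type": "AzureSqlTable",
    "linkedServiceName": {
      "referenceName": "SourceLinkedService",
      "type": "LinkedServiceReference"
    },
    "typeProperties": {
      "schema": "dbo",
      "table": "MyTable"
    }
  }
}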
The code runs, but the resulting dataset is published directly, as opposed to being added to my working branch. While this should not be a big issue once I manage to test everything properly, ideally the object would be created in my working branch (using Azure DevOps, thus as a local file).
My next thought was to set up a linked service to my local PC and simply write the same contents as above there. My challenge seems to be that I am essentially creating a file out of nothing. I am trying to use a Copy Data component, and added an empty placeholder file to act as a source.
I configured the sink with Dynamic Content for the Copy Behavior and attempted to add the JSON contents there. This gets the file created, but it is unfortunately empty. I also attempted to add a new column to the source with that same content as its data. However, since the file to be used as the sink doesn't exist, a mapping error occurs. Apart from this, I would not want a column header to be written; just the dynamically created contents.
I'm not sure how to continue with this. I feel I'm very close to achieving my goal, but cannot seem to take this final hurdle.
Any hints or suggestions would be very welcome.
I have written an article in Google Docs.
I have included small tables, big tables, and huge tables in different places in the file.
Now I need to modify some properties of all the tables at once.
But that doesn't seem to be possible.
Are there any methods for modifying the properties of all the tables at once in Google Docs?
PS: some more details to illustrate my issue:
1. Here is a doc file with one table.
2. Right-click on the table and choose Table properties.
3. Now here is a doc file with more tables.
How can I deal with all the tables together? (All the modifications are the same.)
Method 1
When creating the tables, you can simply set all the properties on the first one; for the next ones, you can copy and paste the first one, since the formatting will be kept.
Method 2
If you want to modify multiple tables at the same time, you can make use of Apps Script.
Apps Script is a powerful development platform that can be used to build web apps and automate tasks. What makes it special is the fact that it is easy to use and makes it simple to create applications that integrate with G Suite.
Therefore, your task can be achieved by using the following script.
Snippet
function setTableProperties() {
  var doc = DocumentApp.openById("DOCUMENT_ID");
  var tables = doc.getBody().getTables();
  tables.forEach((table) => {
    // Any instruction run with the variable `table` is executed for all tables.
    // For example, to give every table the same border width (in points):
    table.setBorderWidth(1);
  });
}
Explanation
The above script gathers all the tables from the document in question and then, using a loop, accesses each table in turn.
In order to set the properties of the tables as desired, you just have to use the appropriate method(s).
The getAttributes method can be used as well, in order to see exactly which properties a table possesses.
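For instance, a minimal sketch of inspecting the attributes of one table, assuming the same doc variable as in the snippet above:

// Log the attribute map of the first table; its keys show exactly
// which properties the table possesses.
var firstTable = doc.getBody().getTables()[0];
Logger.log(firstTable.getAttributes());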
Reference
Apps Script Document Service
Apps Script Enum Attribute
Apps Script Table Class
Apps Script DocumentApp Class
My question is pretty simple. I am working on Kentico 9 with its SQL Server database, which contains several tables that were added directly from SQL Server Management Studio by an external contractor. Those tables are used to store custom content that will be displayed on a site, but in the code there is no way to query them; I mean, they don't have Info and Provider classes.
https://docs.kentico.com/display/K82/Retrieving+database+data+using+ObjectQuery+API
According to this, all tables in the Kentico database can be accessed by invoking methods on these classes, but I don't have those classes this time.
Something like the following will not work if I use my table name:
var user = UserInfoProvider.GetUserInfo("administrator");
var items = CustomTableItemProvider.GetItems("MyTable")
    .TopN(10)
    .WhereEquals("ItemCreatedBy", user.UserID)
    .OrderBy("ItemCreatedWhen");
My question is: can I query any table by its name?
One last thing: I cannot declare those tables as "custom tables", because of what seems to be a bug in the CMS.
Or you can pull the data using your own SQL query:
var ds = ConnectionHelper.ExecuteQuery("select ....", null, QueryTypeEnum.SQLQuery);
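As a slightly fuller sketch of that approach (the table name, column names, and the userId variable are placeholders; values should go in as parameters rather than being concatenated into the SQL):

using System.Data;
using CMS.DataEngine;
using CMS.Helpers;

// Placeholder names; pass values as parameters instead of concatenating them.
var parameters = new QueryDataParameters();
parameters.Add("@CreatedBy", userId);

var ds = ConnectionHelper.ExecuteQuery(
    "SELECT TOP 10 * FROM MyTable WHERE ItemCreatedBy = @CreatedBy ORDER BY ItemCreatedWhen",
    parameters,
    QueryTypeEnum.SQLQuery);

// DataSourceIsEmpty guards against an empty result set.
if (!DataHelper.DataSourceIsEmpty(ds))
{
    foreach (DataRow row in ds.Tables[0].Rows)
    {
        // Process each row as needed.
    }
}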
Nevertheless, I would recommend creating a custom class inside a custom module (much more robust than custom tables) instead, and using the generated Info and InfoProvider classes to get and manipulate the data.
I think an object has to be registered within the system (created through the Kentico UI or API) in order to be pulled from the DB with an object query.
So I'd choose one of the following options:
- Use Entity Framework or something similar to work with that data (a rough sketch follows this list).
- Create appropriate custom tables or even a custom module and push the data there. Not sure why you can't create a custom table... what is the error you're getting?
- If you need to present the data on the UI only (without processing on the back end), just use custom queries.
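As a rough illustration of the first option (the entity, table, and property names here are hypothetical, and a code-first EF6 context is assumed):

using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
using System.Data.Entity;

// Hypothetical entity mapped onto one of the contractor-created tables.
[Table("MyTable")]
public class MyTableRecord
{
    [Key]
    public int ItemID { get; set; }
    public int ItemCreatedBy { get; set; }
    public DateTime ItemCreatedWhen { get; set; }
}

public class ExternalTablesContext : DbContext
{
    // "CMSConnectionString" is the connection string name a Kentico site
    // typically defines in web.config; adjust if yours differs.
    public ExternalTablesContext() : base("CMSConnectionString") { }

    public DbSet<MyTableRecord> Records { get; set; }
}

Data can then be queried with LINQ (e.g. context.Records.Where(...)) without touching the Kentico API.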
Hope this helps.
If you are accessing the data in code, then you could do it the good old-fashioned way. If you want to pull data from the database to display on the website, you could also do so by creating a custom query and using a transformation to display the fields, then using a repeater on the page to display the transformed data. Alternatively, you can use a SQL datasource with a basic repeater, but you still have to create a transformation to display the data. Both methods allow you to access the data in the tables from within the CMS UI; no need to touch any code-behind.
If your objective is to read data from these database tables to transform on a webpage, e.g. using the CMS Repeater webpart, you can simply create custom queries in Kentico itself and load the data using them. You can find the details here on how to create custom queries and load data with them.
On the other hand, you can also write your own custom classes and define custom methods where you pull the data using your own SQL query, like this:
var ds = ConnectionHelper.ExecuteQuery("select ....", null, QueryTypeEnum.SQLQuery);
Lastly, I don't think there should be any issue in creating custom tables to replace those direct DB tables. The only thing we have to ensure is that the code name of each custom table is unique; don't try to use exactly the same name, because it will cause an exception due to a table with that name already existing in the DB. Please share the exception you are getting while creating the custom table so that I can help you out further.
Hi,
I have an SP which returns more than 100 fields and 1000+ rows. I need to save it all in a temp table and run my customised query to get the appropriate data.
I have searched a lot, but I am unable to find the right solution for my project. I would appreciate it if anyone could share their ideas.
create table #SP_Result
(
    -- fields need to be created dynamically, according to the SP's result set
)

exec Ministry..civil_record
    '2010-08-07', 'Autogen', 20, NULL, NULL, NULL, NULL, NULL, NULL, NULL
I need to dump the result from the SP into #SP_Result.
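What I am trying to get working is a pattern like this sketch (the column definitions are placeholders, since the SP returns more than 100 fields):

-- The temp table's columns must match the procedure's result set exactly.
CREATE TABLE #SP_Result
(
    record_date DATETIME,      -- placeholder column
    record_type VARCHAR(50)    -- placeholder column
    -- ... one column per field returned by the SP ...
);

-- INSERT ... EXEC captures the procedure's result set into the temp table.
INSERT INTO #SP_Result
EXEC Ministry..civil_record
    '2010-08-07', 'Autogen', 20, NULL, NULL, NULL, NULL, NULL, NULL, NULL;

-- The customised query can then run against #SP_Result.
SELECT * FROM #SP_Result;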
Why don't you run the query itself in your "customised query", rather than trying to capture the result set of the stored proc? That's the normal method.
All those NULLs look like a bastard of a denormalised "table", where many rows will not apply to the task. It is much, much faster to deal with the database in a normalised, set-oriented manner.