Add constant header to flat-file schema in BizTalk - xsd

I have an XML schema with some data. I need to convert messages of this schema to a flat file AND add a constant header, which is given separately as a string.
I have 2 possible solutions:
Option 1: Since the header values are fixed and occur only once, I can create a separate record for the header.
In this case I will have two record levels: 1. HeaderTitles and 2. Records. I then use the HeaderTitles record as a filter.
Option 2: We can create 2 schemas:
(1) Header - This will have one string element containing "Name Age Country". (These are the column headers.)
(2) Body - This will hold the actual data: three elements (name, age and country) in a repeating record.
In the pipeline assembler, there is a property where we can decide whether to include the header info in the final message; we can just disable this.
Can I do this in some other way?

I would recommend Option 1, where you have the header in the flat file schema and either specify default values in the schema or set them in the map. In my opinion that is the best, easiest and correct approach.
The only time I would use Option 2 is if you had the flat file incoming and needed to disassemble it and actually debatch the record lines into separate messages, which you would do by defining the Body record as occurs 1.
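For illustration, with Option 1 the intermediate XML and the assembled flat file might look like this (HeaderTitles, Records and the field names come from the question; the root element name and data values are made up, and the HeaderTitles element would carry "Name Age Country" as its default value):

<Root>
  <HeaderTitles>Name Age Country</HeaderTitles>
  <Records><Name>John</Name><Age>34</Age><Country>Sweden</Country></Records>
  <Records><Name>Jane</Name><Age>28</Age><Country>Norway</Country></Records>
</Root>

The flat file assembler would then render:

Name Age Country
John 34 Sweden
Jane 28 Norway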

Most efficient way to avoid injecting duplicate rows into Postgres db

This is more of a conceptual question. I'm building a relational db using Python and the psycopg2 library, and have a table with over 44 million rows (and growing). I want to inject sanitized rows from a csv file into the table without injecting duplicate rows; each row has an auto-incrementing unique id from its origin db table.
The current way I'm injecting the data is with the COPY table(columns...) FROM '/path/to/file' command, which is working like a charm. This occurs after we've sanitized all rows in the csv file so that their datatypes match the appropriate columns' datatypes in the table.
There are a few ideas that I have in mind, and one I've tried, but want to see what the most efficient option is before implementation.
The one I tried ended up being a tremendous burden on the server's CPU and memory, so we decided not to proceed with it: I created a script that, for each row, queries the table (over 44 million rows) for the unique id before inserting.
My other idealistic solutions:
1. Allow injection of duplicates, then create a script to clean up any duplicate rows in the table.
2. Create a temporary table with the data from the csv. Compare the temp table with the existing table, removing any duplicate values from the temp table, then inject the temp table into the existing table.
Option 2 might be simplified: instead of comparing the two tables, we just use the INSERT INTO command along with the ON CONFLICT option (see the sketch after this list).
3. This one might be more of a stretch of the imagination, and probably pretty unique to our situation. Since we know that the unique id field is auto-incrementing, we can set a global variable to the largest unique id value in the table; then, before sanitizing the data, we check whether a row's unique id is less than the global variable, and if so, throw out that row. (No longer an option)
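To make option 2's simplified form concrete, here is a rough psycopg2 sketch of the temp-table-plus-ON CONFLICT approach. The table and column names (records, source_id) are placeholders, and it assumes the unique id column has a UNIQUE constraint or index:

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
with conn, conn.cursor() as cur:
    # Stage the sanitized CSV in a temp table shaped like the target table.
    cur.execute("CREATE TEMP TABLE staging (LIKE records) ON COMMIT DROP")
    with open("/path/to/file.csv") as f:
        cur.copy_expert("COPY staging FROM STDIN WITH (FORMAT csv)", f)
    # Postgres skips any row whose unique id already exists in the table.
    cur.execute(
        "INSERT INTO records SELECT * FROM staging "
        "ON CONFLICT (source_id) DO NOTHING"
    )

This keeps the fast COPY path while pushing duplicate detection down to the index, instead of issuing one lookup query per row.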

Azure Data Flow - Can we have Dynamic columns or change in projections for Unpivot functionality

The Excel file consists of 62 columns; 7 columns are fixed and the rest are weeks of the year (week1 to week52).
I have used a data flow task to unpivot the 53 columns into rows, with 2 extra columns, year and value.
The problem is that the 52 week column names keep changing on every weekly data load. How do I handle this change in column names in the data flow? For a single run it gives the exact output.
What you'll want to do here is to implement late-binding of your schema, or what ADF refers to as "schema drift". Instead of setting a hardened "early binding" schema in your Source projection, leave the dataset schema and projection empty.
Next, add a Derived Column after your source and call it "Projection". This is where you'll build your projection using rules to account for your evolving schema.
Build out your canonical model with the column names for your entire year using byName('columnname'). That will tell ADF to look for the existence of the column in single quotes from your source data while also providing a schema that you can use to build out your pivot table.
If you need to cast the values, wrap byName() inside of a casting function, i.e. toString(), toDate(), etc.
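For example, one rule in that "Projection" Derived Column might be (the column name 'week1' and the integer cast are illustrative; you would repeat the pattern for each column in your canonical model):

week1 = toInteger(byName('week1'))

Because byName() looks the column up in the incoming data at runtime rather than requiring it in an early-bound projection, the data flow still validates even when the source columns drift between loads.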

Dynamic validation list in Excel

I have an issue regarding data validation in Excel, namely how to dynamically set the validation source.
I have three tables, where the first contains a product ID and a product name. The second table contains a product ID together with a serial number. A third table has three columns: one for product ID, one for serial number, and one for a description used for e.g. error reporting.
What I want to do relates to the third table, where I select the product ID in a drop-down box which is linked to the first table. This works perfectly fine. The second column, though, must only allow serial numbers related to the selected product ID according to the relationship in the second table. Hence, the data validation list must be dynamically generated from the input in the first column.
The reason for having it in Excel is corporate; personally I'd use an SQL database for this very issue. E.g. if I were to use SQL syntax to generate the validation list, the corresponding SQL statement would be:
SELECT serialNumber FROM secondTable WHERE productId = 12345;
I've tried using INDEX-MATCH, but unfortunately MATCH only returns a scalar value rather than an array. I had not come across array functions prior to today, but I assume they might be needed to accomplish this, and have tried a bit without success.
If I somehow could acquire an array of the row numbers where there is a match, the INDEX function would accomplish what I need, I presume.
My question is therefore: is there a method to acquire an array of matched values, or can my problem be solved in a more elegant way? It would be of value if it could be done without VBA, also for corporate security reasons.
Thanks in advance!
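One common VBA-free pattern, assuming the second table sits on a sheet named Serials with product IDs in column A and serial numbers in column B, sorted by product ID: define a named range (say, SerialList) with a formula like the one below, and use =SerialList as the validation list source for the serial number column. MATCH finds the first row of the selected product and COUNTIF counts how many serials it has, so OFFSET returns exactly that block as an array:

=OFFSET(Serials!$B$1, MATCH($A2, Serials!$A:$A, 0)-1, 0, COUNTIF(Serials!$A:$A, $A2), 1)

Here $A2 is the product ID cell of the current row in the third table; the sheet name, column layout and sort requirement are assumptions of this sketch.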

How to arrange data in Cassandra to get data in last in first out format

As we cannot sort data at query time in Cassandra, I want to store the data in such a format that when I retrieve it, I get it in 'last in, first out' order, i.e. if users enter comments, when I retrieve the data I should get the very latest comment first and then the older comments. I think it's something to do with the comparator.
I have set following when configuring Cassandra:
assume posts comparator as utf8;
assume posts validator as utf8;
assume posts keys as utf8;
Please help - how should I create the column to arrange data in time format so that latest data is stored first?
Columns in a row are always sorted, and you can iterate over the columns in a row in reverse order. Given these two facts, we could model the situation you're describing by storing comments in a column family called "comments", where the row key is the post ID and the columns represent the comments on the corresponding post. The column names are timestamps (either ISO-formatted dates, UNIX timestamps or time UUIDs) and the values are the comment text bodies.
If you now get the columns for a row and specify that you want them in reverse order, you get exactly what you want. How to specify reverse order depends on your driver, but it's usually just an option on the command that retrieves a row or a column slice.
Another way, which is more hackish, would be to take the UNIX timestamp of a post, subtract it from a large integer like 2^31, and use that as the column key. That way the columns would sort in reverse order by default. It's not pretty, and the method above is more elegant.
If you worry about using timestamps because there could be collisions where two comments are posted at exactly the same time, use Cassandra's time UUID type.
You need to organize your data such that the comparator is a timestamp. You store your data in natural order and specify reverse order in your slice query.
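The question uses the old Thrift-era cassandra-cli syntax, but for readers on a newer Cassandra the same model can be expressed in CQL by making the timestamp a clustering column stored in descending order (table and column names here are illustrative):

CREATE TABLE comments (
    post_id uuid,
    created timeuuid,
    body text,
    PRIMARY KEY (post_id, created)
) WITH CLUSTERING ORDER BY (created DESC);

SELECT body FROM comments WHERE post_id = ?;

With the DESC clustering order, the SELECT returns the newest comment first without any extra sorting options in the query.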

How can I delete records from a table that have certain criteria

Rookie question I know.
I have a table with about 10 fields; one of the fields is a category field. I need this field to exist because of the multiple types of categories. However, one category in this field is wrong and is duplicating results.
So can I delete all records in the table that have "Type320" in the CatDescription field, and how? I want to keep everything else as it is in this table; I just need to get rid of the records that have that value in that one field.
Thanks very much!
EDIT: Thanks for the answer, I did not know how to do this so this is very helpful
However, this is more complicated than I thought. The raw data that I am supplied carries these duplicate records (only duplicates in certain circumstances, but they are easy to isolate). This raw data is given to me on a monthly basis in several spreadsheet forms.
It all relates to these ID numbers, and has like 10 fields (xls columns). As I said before, one of these is the Category Description field (sorry, this is not a lookup). In certain places a record automatically duplicates itself on output because, in the database this comes from, it has to have this sub category for one particular "type".
So... every time there is a duplication, every single bit of information in all fields is exactly the same, with the exception of this CatDescription (one is Type320, and the duplicated record's type is "Type321"). However, there are some instances where Type321 is valid on its own (in which case there is no matching data row with a Type320 CatDescription). By matching I mean all data in all fields of a particular record.
A very clear absolute of this: if all fields (data within) of a record with Type320 CatDescription match all fields (data within) of a record with Type321 CatDescription, then I can delete the record containing the Type321 CatDescription. This is true because this is the only situation where this duplication occurs; normally not all of this would match.
This allows all unique records with Type320 and Type321 data (that does not match exactly) to stay, just as it should. This makes sense to me (and hopefully to you too :/), but can it be done, and how?
Thanks, because this is way over my head. I would rather know how to do it in Access, but an xls solution is equally appreciated. Heck, I would do it in ppt if it would get the job done! :)
I would try one of these two queries:
DELETE FROM table WHERE CatDescription LIKE '%Type320%';
DELETE FROM table WHERE CatDescription LIKE '*Type320*';
That's because the Access database engine could be using * (ANSI-89 Query Mode, e.g. DAO) instead of % (ANSI-92 Query Mode, e.g. OLE DB/ADO) for the wildcards.
Alternatively, this regardless of ANSI Query Mode:
DELETE FROM table WHERE CatDescription ALIKE '%Type320%';
Note the Access database engine's ALIKE keyword is not officially supported.
Does the CatDescription field look up to another table? Is it a query of those tables that creates what you call duplicate results?
If so, be careful about blaming the table that has CatDescription. Check the look-up table to see if Type320 is found there in duplicate.
If you don't have the problem isolated correctly, then you're likely to delete good records while not fixing the problem.
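For the edited scenario (delete a Type321 record only when every other field exactly matches a Type320 record), an untested Access SQL sketch; MyTable, Field1 and Field2 are placeholders, and the comparison must be repeated for every field except CatDescription:

DELETE FROM MyTable AS t
WHERE t.CatDescription = 'Type321'
AND EXISTS (
    SELECT 1 FROM MyTable AS s
    WHERE s.CatDescription = 'Type320'
    AND s.Field1 = t.Field1
    AND s.Field2 = t.Field2
);

Watch out for Null fields: Null = Null is not true in SQL, so if any column can be Null, wrap both sides in Nz() or add explicit Is Null checks. Back up the table, or test the same WHERE clause in a SELECT first, before running the delete.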
