How to map a column of output data to another output table in Talend - data-migration

I have a situation where I need to insert data from an old DB into a new DB. For example:
In my old DB I have USER_NAME and ADDRESS that need to be migrated into two new DB tables. The first table has two columns: USER_ID, which is auto-generated, and USER_NAME. The second table has the columns OID, USER_ID and ADDRESS. I am not able to map USER_ID to my second table.
OLD DB          NEW DB TABLE A              NEW DB TABLE B
OID       --------------------------------> OID
USER_NAME ----> USER_NAME
ADDRESS   --------------------------------> ADDRESS
                USER_ID (auto-generated) --> USER_ID
Could someone please help me with this? I have not been able to find anything on it.

It can be done using tMap; design the job like below.
oldDbInput --main--> tMap --out1--> newDbOutput_a --> tHashOutput
                       |
                       +--out2--> tMap --> newDbOutput_b
                                    ^
                                    |
                               tHashInput (lookup)
Map the required columns from source to destination in each tMap.
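For reference, here is a rough SQL sketch of what the two flows accomplish (the table names OLD_TABLE, TABLE_A and TABLE_B are placeholders, and an auto-generated USER_ID in TABLE_A is assumed); the second insert joins back on USER_NAME to pick up the generated key, which is what the lookup feeding the second tMap does:
-- Step 1: load user names into TABLE_A; the database generates USER_ID
INSERT INTO TABLE_A (USER_NAME)
SELECT USER_NAME FROM OLD_TABLE;
-- Step 2: load TABLE_B, joining back on USER_NAME to fetch the generated USER_ID
-- (assumes USER_NAME is unique in OLD_TABLE)
INSERT INTO TABLE_B (OID, USER_ID, ADDRESS)
SELECT o.OID, a.USER_ID, o.ADDRESS
FROM OLD_TABLE o
JOIN TABLE_A a ON a.USER_NAME = o.USER_NAME;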

Related

In Cassandra, does creating a table with multiple columns take more space compared to multiple tables?

I have 6 tables in my database, each consisting of approximately 12-15 columns, and they are related to main_table by its id. I have to migrate my database to Cassandra, so my question is: should I create one main_table with multiple columns, or separate tables as in my MySQL database?
Will creating multiple columns take more space, or will multiple tables take more space?
Your line of questioning is flawed. It is a common mistake for DBAs who only have a background in traditional relational databases to view data as normalised tables.
When you switch to NoSQL, you are doing it because you are trying to solve a problem that traditional RDBMS can't. A paradigm shift is required since you can't just migrate relational tables the way they are, otherwise you're back to where you started.
The principal philosophy of data modelling in Cassandra is that you need to design a CQL table for each application query. It is a one-to-one mapping between app queries and CQL tables. The crucial point is you need to start with the app queries, not the tables.
Let us say that you have an application that stores information about users, including usernames, email addresses, first/last names, phone numbers, etc. If you have an app query like "get the email address for username X", it means that you need a table of email addresses and the schema would look something like:
CREATE TABLE emails_by_username (
    username text,
    email text,
    firstname text,
    lastname text,
    ...
    PRIMARY KEY (username)
)
You would then query this table with:
SELECT email FROM emails_by_username WHERE username = ?
Another example is where you have an app query like "get the first and last names for a user where email address is Y". You need a table of users partitioned by email instead:
CREATE TABLE users_by_email (
    email text,
    firstname text,
    lastname text,
    ...
    PRIMARY KEY (email)
)
You would query the table with:
SELECT firstname, lastname FROM users_by_email WHERE email = ?
Hopefully with these examples you can see that the disk space consumption is completely irrelevant. What is important is that you design your tables so they are optimised for the application queries. Cheers!
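As a side note (not part of the original answer), the write path of this model is worth sketching: because the user data is duplicated across both tables, the application writes to both of them when a user is created. A minimal CQL sketch with made-up sample values:
BEGIN BATCH
    INSERT INTO emails_by_username (username, email, firstname, lastname)
    VALUES ('alice', 'alice@example.com', 'Alice', 'Smith');
    INSERT INTO users_by_email (email, firstname, lastname)
    VALUES ('alice@example.com', 'Alice', 'Smith');
APPLY BATCH;
The logged batch keeps the two denormalised copies consistent; in Cassandra this duplication is expected, which is another reason raw disk space is not the driving concern.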

How to validate multiple tables before inserting into a sink table in Azure Data Factory

I have several sets of tables, let's say employee, dept, account, etc., with schemas as follows:
Employee ->empId, Name
dept->deptid,empId, deptName
account->empId,accID, name
So before inserting a new record into the dept table, I have to validate whether the empId is present in the employee and account tables.
Is there an option in Azure Data Factory to validate these conditions before inserting into the dept table, with dept as my sink table?
I used a Copy data pipeline to achieve this. I created a table of values to be inserted into the dept table, called dept2, with the same schema. Using the Copy data activity, the requirement is to filter and insert only those records whose new 'empId' exists in both the employee and account tables. So, select dept2 as the source table and set 'Use query' to 'Query'.
Write the desired query in the Query text area provided. The query given below returns only the records from dept2 (the values to be inserted) whose empId exists in both the 'employee' and 'account' tables.
select d2.deptid, d2.empId, d2.deptName from [dbo].[dept2] as d2
where d2.empId in (select empId from [dbo].[employee])
and d2.empId in (select empId from [dbo].[account])
When you preview the data, you can see that the query is executed and only the records where 'empId' exists in both employee and account are visible. Now, in the sink tab, create a linked service pointing to the dept table.
We can now execute the pipeline, and it successfully accomplishes the task of validating the records to be inserted, so that each new 'empId' already exists in both of the other tables. The following are images of the sample data and tables.
employee table:
account table:
dept2 table (insert values):
Output:
So, you can filter or validate the data using this Query tab and copy only the required data to your destination table.
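For comparison, the same validation can also be written with EXISTS subqueries (a sketch assuming the same table names); either form can be pasted into the copy activity's query box:
select d2.deptid, d2.empId, d2.deptName from [dbo].[dept2] as d2
where exists (select 1 from [dbo].[employee] e where e.empId = d2.empId)
and exists (select 1 from [dbo].[account] a where a.empId = d2.empId)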

How To Dynamically Query For Table Entries In Database Using Room

I'm new to the Room library, and I'd like to know how I can dynamically select a table from a database in my query.
I have a database that has 10 different tables. Depending on users' preferences, they can request information from any table in the database.
I am using the below code snippet to query
#Query("SELECT * FROM lecture_progress_table WHERE subject = :eSubject AND isWatched = :isWatched")
LiveData<List<LectureProgressObject>> getSubjectLectureProgress(String eSubject, int isWatched);
But instead of a pre-defined table name like "lecture_progress_table", I would like to be able to pass the table name to the query dynamically. Any assistance would be much appreciated.
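A commonly used approach for this (offered here only as a hedged sketch, since table names cannot be bound as query parameters in Room) is @RawQuery, where the SQL string is built at runtime. A minimal Java sketch, with the DAO name LectureProgressDao assumed:
import java.util.List;
import androidx.lifecycle.LiveData;
import androidx.room.Dao;
import androidx.room.RawQuery;
import androidx.sqlite.db.SimpleSQLiteQuery;
import androidx.sqlite.db.SupportSQLiteQuery;

@Dao
public interface LectureProgressDao {
    // observedEntities makes the LiveData re-emit whenever the observed table changes
    @RawQuery(observedEntities = LectureProgressObject.class)
    LiveData<List<LectureProgressObject>> getSubjectLectureProgress(SupportSQLiteQuery query);
}

// Caller: build the SQL with the table name chosen at runtime.
// Only use table names from a fixed whitelist, never raw user input.
SimpleSQLiteQuery query = new SimpleSQLiteQuery(
        "SELECT * FROM " + tableName + " WHERE subject = ? AND isWatched = ?",
        new Object[]{eSubject, isWatched});
LiveData<List<LectureProgressObject>> progress = dao.getSubjectLectureProgress(query);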

How to update "updatedAt" on main table after child table with foreign key updated or created on sequelize

I have a main table "flow" and other tables that are connected to the flow table with a foreign key.
How can I update the "flow" table's "updatedAt" column when one of those other tables creates new data or updates existing data?
Example:
I have an "uplodadFiles" table that is connected to the flow table with a flowID foreign key; now, when a new file is uploaded to "uplodadFiles", I need to update "updatedAt" on the "flow" table.
What is the best way to do this? A hook? Thanks.
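Yes, a hook on the child model is the usual way to do this. Below is only a rough sketch (not tested against your schema; the model names UploadFile and Flow and the primary key id are assumptions based on the question):
// Bump the parent's updatedAt whenever a child row is created
UploadFile.addHook('afterCreate', async (file, options) => {
  await Flow.update(
    { updatedAt: new Date() },
    { where: { id: file.flowID }, transaction: options.transaction }
  );
});
// Register the same function for 'afterUpdate' if updating a file should also touch the flow.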

Azure Logic Apps - inserting an entity into a table

I have created an HTTP POST request using Logic Apps and am inserting the result into a table. I need the row key to be 1, 2, 3 and so on: when I insert the first entity it should be 1, for the next entity it should be 2, and so on. I have tried a GUID, but that is not a solution. If someone knows, please share the answer.
If you really need to avoid GUIDs, you have to implement the auto-increment some other way; you could create a queue or a table entity holding the next id to use.
In the logic app, before the insert action, read the queue or entity, increment and save it, then insert using the value you read.
Below is my test logic app: it uses an itemid table to store the auto-increment id, inserts that id as the RowKey into the destination table test, and then replaces the entity in the itemid table with the new id value.
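As a rough illustration of the expressions such a flow ends up using (the action name Get_entity and the property NextId are assumptions, not taken from the original post):
RowKey for the insert action: @{body('Get_entity')?['NextId']}
New value written back to the itemid entity: @{string(add(int(body('Get_entity')?['NextId']), 1))}
Bear in mind this read-increment-write pattern is not safe under concurrent runs unless you also configure concurrency control on the logic app.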
