Querying multiple tables with a where clause in LINQ to SQL - c#-4.0

Forgive my ignorance with Linq to SQL but...
How do you query multiple tables in one fell swoop?
Example:
I want to query, say, 4 tables for a title that includes the word "penguin". Funnily enough, each table has a field called TITLE.
The tables are BOOK, JOURNAL, MAGAZINE and REPORT.
I want to query each table (column: TITLE) for the word "penguin". Each table is referenced (via foreign key) to a parent table that is simply called Reference, and is linked on a column called REF_ID. So ideally the result should come back with a list of REF_ID's where the query criteria was matched.
If you can help you will be richly rewarded....... (with a green tick ;)
The code I have works for just one table - but not for two:
var refs = db.REFERENCEs
    .Include(r => r.BOOK)
    .Where(r => r.BOOK.TITLE.Contains(titleString))
    .Include(r => r.JOURNAL.AUTHORs)
    .Include(r => r.JOURNAL)
    .Where(r => r.JOURNAL.TITLE.Contains(titleString));

I had a similar scenario a while back and ended up creating a view that unioned my tables and then mapped that view to a LINQ-to-SQL entity.
Something like this:
create view dbo.References as
select ref_id, title, 'Book' as source from dbo.Book
union all
select ref_id, title, 'Journal' from dbo.Journal
union all
select ref_id, title, 'Magazine' from dbo.Magazine
union all
select ref_id, title, 'Report' from dbo.Report
The mapping would look like this (using attributes):
[Table(Name="References")]
public class Reference {
    [Column(Name="Ref_Id", IsPrimaryKey=true)]
    public int Id { get; set; }

    [Column]
    public string Title { get; set; }

    [Column]
    public string Source { get; set; }
}
Then a query might look like this:
var query = db.GetTable<Reference>().Where(r => r.Title.Contains(titleString));
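From there, getting just the list of matching REF_IDs is a simple projection; a minimal sketch, assuming the REF_ID column maps to the Id property shown above:

var refIds = db.GetTable<Reference>()
    .Where(r => r.Title.Contains(titleString))
    .Select(r => r.Id)   // REF_ID values only
    .Distinct()
    .ToList();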

Related

Pass column name as argument - Postgres and Node JS

I have a query (an UPDATE statement) wrapped in a function, and I will need to perform the same statement on multiple columns during the course of my script.
async function update_percentage_value(value, id) {
  (async () => {
    const client = await pool.connect();
    try {
      const res = await client.query(
        'UPDATE fixtures SET column_1_percentage = ($1) WHERE id = ($2) RETURNING *',
        [value, id]
      );
    } finally {
      client.release();
    }
  })().catch(e => console.log(e.stack));
}
I then call this function
update_percentage_value(50, 2);
I have many columns to update at various points in my script, and each one needs to be done at that point. I would like to be able to call just one function, passing it the column name, value and id.
My table looks like below
CREATE TABLE fixtures (
ID SERIAL PRIMARY KEY,
home_team VARCHAR,
away_team VARCHAR,
column_1_percentage INTEGER,
column_2_percentage INTEGER,
column_3_percentage INTEGER,
column_4_percentage INTEGER
);
Is it at all possible to do this?
I'm going to post the solution that was advised by Sehrope Sarkuni via the node-postgres GitHub repo. This helped me a lot and works for what I require:
No. Column names are identifiers and they can't be specified as parameters. They have to be included in the text of the SQL command.
It is possible but you have to build the SQL text with the column names. If you're going to dynamically build SQL you should make sure to escape the components using something like pg-format or use an ORM that handles this type of thing.
So something like:
const format = require('pg-format');

async function updateFixtures(id, column, value) {
  const sql = format('UPDATE fixtures SET %I = $1 WHERE id = $2', column);
  await pool.query(sql, [value, id]);
}
Also, if you're doing multiple updates to the same row back-to-back, you're likely better off with a single UPDATE statement that modifies all the columns rather than separate statements, as the separate statements would be slower and generate more WAL on the server.
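For example, a combined update of the percentage columns from the table above could look something like this (a sketch; value1 and value2 are placeholders):

await pool.query(
  'UPDATE fixtures SET column_1_percentage = $1, column_2_percentage = $2 WHERE id = $3',
  [value1, value2, id]
);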
To get the column names of the table, you can query information_schema.columns, which stores the details of the column structure of your table. This helps in framing a dynamic query for updating a specific column based on a specific result.
You can get the column names of the table with the help of following query:
select column_name from information_schema.columns where table_name='fixtures' and table_schema='public';
The above query would give you the list of columns in the table.
Now, to update each one for a specific purpose, you can store the result set of column names in a variable and pass each name to the function to perform the required action.
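Putting the two together, a rough sketch (assuming the updateFixtures helper above, an existing pool, and placeholder id/value variables, all inside an async function):

const { rows } = await pool.query(
  "select column_name from information_schema.columns where table_name = 'fixtures' and table_schema = 'public'"
);
for (const { column_name } of rows) {
  if (column_name.endsWith('_percentage')) {        // only touch the percentage columns
    await updateFixtures(id, column_name, value);   // id and value are placeholders
  }
}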

Extend Generic Inquiry to Show the Number of records

Is it possible to extend the Generic Inquiry screen so that it shows the number of records retrieved? Or perhaps is it possible to use PXGenericInqGrph to get the number of records of a Generic Inquiry?
However, for performance reasons it is important that I retrieve only one record with the total from the database, rather than getting all records and doing a Count at the application layer.
At least up until Acumatica 7.207.0029 there is no method to extend the Generic Inquiry results screen.
If you only need the record count, what you can do is edit your GI or create a copy to get the total and use the special <Count> field to get the record count.
Of course this requires you to set a GroupBy field, and that field needs to be the same for all records if you want a total record count.
If your query has a field you know to be equal for all records, you can use that field in the GroupBy tab. If not, there is a way to do this by adding a join to a number table.
Number Table Workaround
This technique uses a table with numbers to create specific queries. In this case we can join it to your query to add a known common value to all rows.
Here is the XML for a Customization Project that creates this table and makes it available as the IS.Objects.Core.ISNumbers DAC.
<Customization level="200" description="Number utility table" product-version="17.207">
  <Graph ClassName="ISNumbers" Source="#CDATA" IsNew="True" FileType="NewDac">
    <CDATA name="Source"><![CDATA[
using System;
using PX.Data;

namespace IS.Objects.Core
{
    [Serializable]
    public class ISNumbers : IBqlTable
    {
        #region Number
        [PXDBInt(IsKey = true)]
        [PXUIField(DisplayName = "Number", IsReadOnly = true)]
        public int? Number { get; set; }
        public class number : IBqlField {}
        #endregion
    }
}
]]></CDATA>
  </Graph>
  <Sql TableName="ISNumbers" CustomScript="#CDATA">
    <CDATA name="CustomScript"><![CDATA[
IF OBJECT_ID('ISNumbers', 'U') IS NOT NULL DROP TABLE ISNumbers;
SELECT TOP 10000 IDENTITY(int, 1, 1) AS Number
INTO ISNumbers
FROM sys.objects s1
CROSS JOIN sys.objects s2;
ALTER TABLE ISNumbers ADD CONSTRAINT PK_ISNumbers PRIMARY KEY CLUSTERED (Number);
]]></CDATA>
  </Sql>
</Customization>
Just add the table to the GI and create an INNER JOIN relation where the value of the Number field equals 1.
You can then use this field in the GroupBy condition. Next, add the Number field to the results and set its value to <Count>. Leave all your other result fields in place to keep the logic, but hide them if you don't need them (they will automatically be grouped by max value).
All queries performed by GIs are executed in the database, so you don't need to worry about the count running on the application side.

Insert data in two Table using Mybatis

I am very new to MyBatis and stuck in a situation, so I have some questions.
The complete scenario: I need to read an Excel file and insert the Excel data into two different database tables that have a primary/foreign key relationship.
I am able to read the Excel data and insert it into the primary table, but I can't work out how to insert data into the second table. The problem is that I have two different POJO classes holding separate data for each table, and two different mappers.
I am achieving the association by defining the POJO of the child table inside the POJO of the parent class.
Is there any way to insert data into two different tables?
Is it possible to run 2 insert queries in a single tag?
Any help would be appreciated.
There are a lot of ways to do that.
Here is a demonstration of one of the most straightforward ways: using separate inserts. The exact solution may vary slightly, depending mainly on whether the primary keys are taken from the Excel file or are generated during insertion into the database. Here I assume that keys are generated during insertion (as this is the slightly more complicated case).
Let's assume you have these POJOs:
class Parent {
    private Integer id;
    private Child child;
    // other fields, getters, setters etc
}

class Child {
    private Integer id;
    private Parent parent;
    // other fields, getters, setters etc
}
Then you define two methods in the mapper:
public interface MyMapper {

    @Insert("insert into parent (id, field1, ...) " +
            "values (#{id}, #{field1}, ...)")
    @Options(useGeneratedKeys = true, keyProperty = "id")
    void createParent(Parent parent);

    @Insert("insert into child (id, parent_id, field1, ...) " +
            "values (#{id}, #{parent.id}, #{field1}, ...)")
    @Options(useGeneratedKeys = true, keyProperty = "id")
    void createChild(Child child);
}
and use them
MyMapper myMapper = createMapper();
Parent parent = getParent();
myMapper.createParent(parent);
myMapper.createChild(parent.getChild());
Instead of a single child there can be a collection. In that case createChild is executed in a loop for every child.
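In that situation the loop could be as simple as this (a sketch; it assumes the Parent POJO exposes a hypothetical getChildren() collection instead of the single child field above):

// Insert the parent first so its generated id is available to the children
myMapper.createParent(parent);
for (Child child : parent.getChildren()) {
    myMapper.createChild(child);   // each child row references the parent via #{parent.id}
}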
In some databases (PostgreSQL, SQL Server) you can insert into two tables in one statement. The query, however, will be more complex.
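For illustration, in PostgreSQL this can be done with a data-modifying CTE; a rough sketch with placeholder column names and values:

WITH new_parent AS (
    INSERT INTO parent (field1)
    VALUES ('parent value')
    RETURNING id
)
INSERT INTO child (parent_id, field1)
SELECT id, 'child value'
FROM new_parent;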
Another possibility is to use multiple insert statements in one mapper method. I used code similar to this in PostgreSQL with the mapping in XML:
<insert id="createParentWithChild">
insert into parent(id, field1, ...)
values (#{id}, #{field1}, ...);
insert into child(id, parent_id, field1, ...)
values (#{child.id}, #{id}, #{child.field1},...)
</insert>
and the method definition in the mapper interface:
void createParentWithChild(Parent parent);
I know this is a little old, but the solution which worked best for me was implementing 2 insert stanzas in my mapping xml.
<insert id="createParent">
insert into parent(id, field1, ...)
values (#{id}, #{field1}, ...);
</insert>
<insert id="createChild">
insert into child(id, parent_id, field1, ...)
values (#{child.id}, #{id}, #{child.field1},...);
</insert>
And then chaining them (if the parent call fails, do not continue on to call the child).
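If you are calling the two statements from plain MyBatis code rather than from Camel, the chaining could look roughly like this (a sketch; it assumes a hypothetical ParentMapper interface exposing the two XML statements above and an already configured SqlSessionFactory):

try (SqlSession session = sqlSessionFactory.openSession()) {
    ParentMapper mapper = session.getMapper(ParentMapper.class);
    mapper.createParent(parent);             // if this throws, the child insert never runs
    mapper.createChild(parent.getChild());
    session.commit();                        // commit both inserts together
}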
As a side note, in my case I am using camel-mybatis, so my Camel config had:
<from uri="stream:in"/>
<to uri="mybatis:createParent?statementType=Insert"/>
<to uri="mybatis:createChild?statementType=Insert"/>

Cannot link MS Access query with subquery

I have created a query with a subquery in Access, and cannot link it in Excel 2003: when I use the menu Data -> Import External Data -> Import Data... and select the mdb file, the query is not present in the list. If I use the menu Data -> Import External Data -> New Database Query..., I can see my query in the list, but at the end of the import wizard I get this error:
Too few parameters. Expected 2.
My guess is that the query syntax is causing the problem; in fact, the query contains a subquery. So I'll try to describe the query goal and the resulting syntax.
Table Positions
ID (Autonumber, Primary Key)
position (double)
currency_id (long) (references Currency.ID)
portfolio (long)
Table Currency
ID (Autonumber, Primary Key)
code (text)
Query Goal
Join the 2 tables
Filter by portfolio = 1
Filter by currency.code in ("A", "B")
Group by currency and calculate the sum of the positions for each currency group, and call the result sumOfPositions
Calculate abs(sumOfPositions) on each currency group
Calculate the sum of the previous results as a single result
Query
The query without the final sum can be created using the Design View. The resulting SQL is:
SELECT Currency.code, Sum(Positions.position) AS SumOfposition
FROM [Currency] INNER JOIN Positions ON Currency.ID = Positions.currency_id
WHERE (((Positions.portfolio)=1))
GROUP BY Currency.code
HAVING (((Currency.code) In ("A","B")));
In order to calculate the final SUM I did the following (in the SQL View):
SELECT Sum(Abs([temp].[SumOfposition])) AS sumAbs
FROM [SELECT Currency.code, Sum(Positions.position) AS SumOfposition
FROM [Currency] INNER JOIN Positions ON Currency.ID = Positions.currency_id
WHERE (((Positions.portfolio)=1))
GROUP BY Currency.code
HAVING (((Currency.code) In ("A","B")))]. AS temp;
So, the question is: is there a better way for structuring the query in order to make the export work?
I can't see too much wrong with it, but I would take out some of the junk Access puts in and scale the query down to this; hopefully it should run OK:
SELECT Sum(Abs(A.SumOfPosition)) As SumAbs
FROM (SELECT C.code, Sum(P.position) AS SumOfposition
FROM Currency As C INNER JOIN Positions As P ON C.ID = P.currency_id
WHERE P.portfolio=1
GROUP BY C.code
HAVING C.code In ("A","B")) As A
It might be worth trying to declare your parameters in the MS Access query definition and define their datatypes. This is especially important when you are trying to use the query outside of MS Access itself, since it can't auto-detect the parameter types. This approach is sometimes hit or miss, but worth a shot.
PARAMETERS [[Positions].[portfolio]] Long, [[Currency].[code]] Text ( 255 );
SELECT Sum(Abs([temp].[SumOfposition])) AS sumAbs
FROM [SELECT Currency.code, Sum(Positions.position) AS SumOfposition
FROM [Currency] INNER JOIN Positions ON Currency.ID = Positions.currency_id
WHERE (((Positions.portfolio)=1))
GROUP BY Currency.code
HAVING (((Currency.code) In ("A","B")))]. AS temp;
I have solved my problem thanks to the fact that the outer query is doing a trivial sum. When choosing New Database Query... in Excel, at the end of the process, after pressing Finish, an Import Data form pops up, asking:
Where do you want to put the data?
There you can click on Create a PivotTable report.... If you define the PivotTable properly, Excel will display only the outer sum.

Subsonic 3 Simple Query inner join sql syntax

I want to perform a simple join on two tables (BusinessUnit and UserBusinessUnit), so I can get a list of all BusinessUnits allocated to a given user.
The first attempt works, but there's no override of Select which allows me to restrict the columns returned (I get all columns from both tables):
var db = new KensDB();
SqlQuery query = db.Select
.From<BusinessUnit>()
.InnerJoin<UserBusinessUnit>( BusinessUnitTable.IdColumn, UserBusinessUnitTable.BusinessUnitIdColumn )
.Where( BusinessUnitTable.RecordStatusColumn ).IsEqualTo( 1 )
.And( UserBusinessUnitTable.UserIdColumn ).IsEqualTo( userId );
The second attempt allows the column-name restriction, but the generated SQL contains pluralised table names (?)
SqlQuery query = new Select( new string[] { BusinessUnitTable.IdColumn, BusinessUnitTable.NameColumn } )
.From<BusinessUnit>()
.InnerJoin<UserBusinessUnit>( BusinessUnitTable.IdColumn, UserBusinessUnitTable.BusinessUnitIdColumn )
.Where( BusinessUnitTable.RecordStatusColumn ).IsEqualTo( 1 )
.And( UserBusinessUnitTable.UserIdColumn ).IsEqualTo( userId );
Produces...
SELECT [BusinessUnits].[Id], [BusinessUnits].[Name]
FROM [BusinessUnits]
INNER JOIN [UserBusinessUnits]
ON [BusinessUnits].[Id] = [UserBusinessUnits].[BusinessUnitId]
WHERE [BusinessUnits].[RecordStatus] = #0
AND [UserBusinessUnits].[UserId] = #1
So, two questions:
- How do I restrict the columns returned in method 1?
- Why does method 2 pluralise the table names in the generated SQL (and can I get around this)?
I'm using 3.0.0.3...
So far my experience with 3.0.0.3 suggests that this is not possible yet with the query tool, although it is with version 2.
I think the preferred method (so far) with version 3 is to use a linq query with something like:
var busUnits = from b in BusinessUnit.All()
join u in UserBusinessUnit.All() on b.Id equals u.BusinessUnitId
select b;
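If you also want to restrict the columns in the LINQ version, projecting into an anonymous type should do it; a sketch, assuming the generated BusinessUnit/UserBusinessUnit properties mirror the column names used above:

var busUnits = from b in BusinessUnit.All()
               join u in UserBusinessUnit.All() on b.Id equals u.BusinessUnitId
               where b.RecordStatus == 1 && u.UserId == userId
               select new { b.Id, b.Name };   // only the Id and Name columns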
I ran into the pluralized table names myself, but it was because I'd only re-run one template after making schema changes.
Once I re-ran all the templates, the plural table names went away.
Try re-running all 4 templates and see if that solves it for you.
