Sybase offset for pagination

Is there any simple way to implement pagination in Sybase?
In Postgres there are LIMIT and OFFSET; in MySQL there is LIMIT X, Y. What about Sybase? There is a TOP clause to limit results, but full pagination also needs an offset.
It is not a problem when there are only a few pages, since I can simply trim the results on the client side, but with millions of rows I would like to fetch only the data I need.

-- First row = 1000
-- Last row = 1009
-- Total rows = 1009 - 1000 + 1 = 10
-- Restriction: exec sp_dboption 'DATABASE_NAME','select into/bulkcopy','true'
select TOP 1009 *, rownum=identity(10)
into #people
from people
where upper(surname) like 'B%'
select * from #people where rownum >= 1000
drop table #people
-- It would be better with SQL ANSI 2008 (but we have to wait):
-- SELECT * FROM people
-- where upper(surname) like 'B%'
-- OFFSET 1000 ROWS FETCH NEXT 10 ROWS ONLY
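For readers who want to experiment with this temp-table technique outside Sybase, here is a minimal Python sqlite3 sketch of the same idea (synthetic `people` data; table and column names mirror the answer above). An INTEGER PRIMARY KEY numbers the filtered rows as they are copied in, playing the role of identity():

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (surname TEXT)")
con.executemany("INSERT INTO people (surname) VALUES (?)",
                [(f"B{i:04d}",) for i in range(2000)])

# Emulate the identity() trick: copy the filtered rows into a temp table
# whose INTEGER PRIMARY KEY numbers them 1..N, then select the window.
con.execute("""CREATE TEMP TABLE people_page (
                   rownum INTEGER PRIMARY KEY,
                   surname TEXT)""")
con.execute("""INSERT INTO people_page (surname)
               SELECT surname FROM people
               WHERE upper(surname) LIKE 'B%' ORDER BY surname""")
page = con.execute("""SELECT rownum, surname FROM people_page
                      WHERE rownum BETWEEN 1000 AND 1009""").fetchall()
print(len(page))  # 10 rows, rownum 1000..1009
```

The ORDER BY in the INSERT ... SELECT is what makes the numbering deterministic; without it the window is unpredictable, just as the answers below warn.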

I'm very late to the party, but I happened to stumble on this problem and found a better answer using TOP and START AT in the Sybase docs. You need to use ORDER BY or you will get unpredictable results.
http://dcx.sybase.com/1101/en/dbusage_en11/first-order-formatting.html
SELECT TOP 2 START AT 5 *
FROM Employees
ORDER BY Surname DESC;

Quoting from http://www.isug.com/Sybase_FAQ/ASE/section6.2.html#6.2.12:
Sybase does not have a direct equivalent to Oracle's rownum but its functionality can be emulated in a lot of cases.
You can set a maximum rowcount, which will limit the number of rows returned by any particular query:
set rowcount 150
That limit will apply until it is reset:
set rowcount 0
You could select into a temporary table, then pull data from that:
set rowcount 150
select pseudo_key = identity(3),
col1,
col2
into #tempA
from masterTable
where clause...
order by 2,3
select col1,col2 from #tempA where pseudo_key between 100 and 150
You could optimize storage on the temp table by storing only ID columns, which you then join back to the original table for your select.
The FAQ also suggests other solutions, including cursors or Sybperl.
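The FAQ's suggestion to keep only key columns in the temp table can be sketched the same way. The following Python sqlite3 snippet (synthetic `masterTable` with hypothetical column names) numbers just the keys, then joins back to the base table for the page:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE masterTable (id INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT)")
con.executemany("INSERT INTO masterTable VALUES (?, ?, ?)",
                [(i, f"a{i}", f"b{i}") for i in range(1, 301)])

# Store only keys (plus a row number) in the temp table, then join
# back to the base table for the page of full rows.
con.execute("""CREATE TEMP TABLE page_keys (
                   pseudo_key INTEGER PRIMARY KEY,
                   id INTEGER)""")
con.execute("""INSERT INTO page_keys (id)
               SELECT id FROM masterTable ORDER BY col1, col2 LIMIT 150""")
rows = con.execute("""SELECT m.col1, m.col2
                      FROM page_keys k JOIN masterTable m ON m.id = k.id
                      WHERE k.pseudo_key BETWEEN 100 AND 150
                      ORDER BY k.pseudo_key""").fetchall()
print(len(rows))  # 51 rows: pseudo_key 100..150 inclusive
```

Keeping the temp table narrow is exactly the storage optimization the FAQ describes; the LIMIT stands in for SET ROWCOUNT.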

Sybase SQL Anywhere example, rows per page:10, offset:1000.
SELECT top 10 start at 1001 * FROM employee order by employeeid
Note: You need to specify the order by column.

As of version 16.x, Sybase ASE supports LIMIT and OFFSET.
Reference: https://help.sap.com/docs/SAP_ASE/e0d4539d39c34f52ae9ef822c2060077/26d84b4ddae94fed89d4e7c88bc8d1e6.html?version=16.0.2.5&locale=en-US#using-limit-and-offset-together

Unfortunately Sybase does not provide the ability to set a start offset. The best you can achieve is to use SET ROWCOUNT to limit the number of records returned. If you have 1,000 records and want pages of 50 entries, then something like this will bring back the first page...
set rowcount 50
select * from orders
For the second page...
set rowcount 100
select * from orders
...and then you can choose not to display the first 50 from within your Java code. Obviously, as you page forward you end up returning larger and larger result sets. Your question about what to do with 1,000,000 records doesn't seem practical for a paginated user interface; no user searches on Google and then pages forward 1,000 times looking for something.
What if I have a Natural Key?
If you do have a relatively large data set and you can use a natural key on your data, this will help limit the records returned. For example if you have a list of contacts and have an interface that allows your users to select A to Z to page the people in the directory based on Surname then you can do something like...
set rowcount 50
select * from people
where upper(surname) like 'B%'
When there are more than 50 people with a surname starting with 'B' you use the same approach as above to page forward...
set rowcount 100
select * from people
where upper(surname) like 'B%'
... and remove the first 50 in Java code.
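A minimal sketch of this rowcount-plus-client-side-trimming approach, using Python's sqlite3 with synthetic data (LIMIT stands in for SET ROWCOUNT, and the slice plays the part of the Java-side discard):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE people (surname TEXT)")
con.executemany("INSERT INTO people (surname) VALUES (?)",
                [(f"B{i:03d}",) for i in range(120)])

def fetch_page(page, size=50):
    # Emulate "set rowcount N": cap the result at page * size rows,
    # then discard the earlier pages on the client.
    cur = con.execute(
        """SELECT surname FROM people
           WHERE upper(surname) LIKE 'B%'
           ORDER BY surname LIMIT ?""", (page * size,))
    return cur.fetchall()[(page - 1) * size:]

print(len(fetch_page(1)), len(fetch_page(2)), len(fetch_page(3)))  # 50 50 20
```

As the answer notes, the cost grows with the page number, since every earlier page is re-fetched and thrown away.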
Following on from this example, maybe you can limit searches by date, or by some other piece of data meaningful to your users' requirements.
Hope this helps!

You could try using the set ROWCOUNT twice like this:
declare @skipRows int, @getRows int
SELECT @skipRows = 50
SELECT @getRows = 10
set ROWCOUNT @skipRows
SELECT caslsource_id into #caslsource_paging FROM caslsources
set rowcount @getRows
Select * from caslsources where caslsource_id not in (select caslsource_id from #caslsource_paging)
DROP TABLE #caslsource_paging
This creates a temporary table of the rows to skip. You will need to add your WHERE and ORDER BY clauses to both SELECTs so that the right rows are skipped.
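The same skip-table idea can be sketched in Python with sqlite3 (synthetic `caslsources` data; a parameter list stands in for the temp table of skipped keys):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE caslsources (caslsource_id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO caslsources VALUES (?, ?)",
                [(i, f"src{i}") for i in range(1, 101)])

skip_rows, get_rows = 50, 10
# First grab the keys of the rows to skip...
skipped = [r[0] for r in con.execute(
    """SELECT caslsource_id FROM caslsources
       ORDER BY caslsource_id LIMIT ?""", (skip_rows,))]
# ...then fetch the next page as rows whose key is not in that set.
# (Assumes skipped is non-empty; NOT IN () is not valid SQL.)
placeholders = ",".join("?" * len(skipped))
page = con.execute(
    f"""SELECT caslsource_id FROM caslsources
        WHERE caslsource_id NOT IN ({placeholders})
        ORDER BY caslsource_id LIMIT ?""", (*skipped, get_rows)).fetchall()
print(page[0][0], page[-1][0])  # 51 60
```

Note that both queries order by the same key, mirroring the advice to put the same WHERE and ORDER BY on both SELECTs.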

I don't know if this is ASE or a different product, but the following pattern works across several databases I've worked with, as long as you can produce a temp table with a row number somehow and can identify a unique key for each row:
Input parameters:
declare @p_first int /* max number of rows to see; may be null (= all results); otherwise must be a positive number */
declare @p_skipFirst int /* number of rows to skip before the results; must be a nonnegative number */
declare @p_after PKTYPE /* key for the row before you start skipping; may be null */
given a table:
RowNumber | RowIndex | DataCol1
1 | 1234 | Joe
2 | 1235 | Sue
3 | 2000 | John
4 | 2005 | Frank
5 | 3000 | Tom
6 | 4000 | Alice
the parameter set:
set @p_first = 5
set @p_skipFirst = 2
set @p_after = 1235
would represent rows 5 and 6.
An additional set of parameters can represent paging from the end of the table in reverse:
declare @p_last int /* max number of rows to see; may be null (= all results); otherwise must be a positive number */
declare @p_skipLast int /* number of rows to skip after the results; must be a nonnegative number */
declare @p_before PKTYPE /* key for the row after you start skipping; may be null */
Assuming your unsorted table is in #resultsBeforeSort with an index column named RowIndex you can sort this with the following script:
select RowNumber = identity(10), *
into #results
from #resultsBeforeSort
/*
you might also wish to have a where clause on this query
this sort is dynamically generated based on a sort expression and
ultimately ended with RowIndex to ensure a deterministic order
*/
order by Column1, Column2 desc, RowIndex
declare @p_total int, @p_min int, @p_max int
select @p_total = count(*) from #results
select @p_min = case when @p_after is null then 1 + @p_skipFirst else @p_total + 1 end
select @p_min = RowNumber + @p_skipFirst + 1 from #results where [RowIndex] = @p_after
select @p_max = case when @p_before is null then @p_total - @p_skipLast else 0 end
select @p_max = RowNumber - @p_skipLast - 1 from #results where [RowIndex] = @p_before
declare @p_min2 int, @p_max2 int
set @p_min2 = @p_min
set @p_max2 = @p_max
select @p_max2 = case when @p_first is null then @p_max else @p_min + @p_first - 1 end
select @p_min2 = case when @p_last is null then @p_min else @p_max - @p_last + 1 end
select @p_min = case when @p_min2 > @p_min then @p_min2 else @p_min end
select @p_max = case when @p_max2 < @p_max then @p_max2 else @p_max end
That script sets up the parameters @p_min, @p_max, and @p_total, as well as the temp table #results.
You can then use this to select the actual data. Return two result sets, the first being metadata (select this first because the second set might not have any rows, and your reader implementation might not be able to deal with that without backtracking):
select [Count] = @p_total,
HasPreviousPage = (case when @p_min > 1 then 1 else 0 end),
HasNextPage = (case when @p_max < @p_total then 1 else 0 end)
followed by the paged window of results that you actually want:
select [RowIndex], Col1, Col2, Col3
from #results where RowNumber between @p_min and @p_max
Doing it this generically lets you expose whatever paging strategy you wish. You can do a streaming solution (Facebook, Google, Stack Overflow, Reddit, ...) via @p_after and @p_first (or @p_before and @p_last). You can do an offset + take with @p_first and @p_skipFirst. You can also do a page + size with the same parameters: @p_first = size and @p_skipFirst = (page - 1) * size. Further, you can do more esoteric paging strategies (last X pages, between absolute records, offset + anchor, etc.) with other combinations of parameters.
That said, Sybase (SAP) ASE does now directly support the offset + take strategy via rows limit @p_first offset @p_skipFirst. If you only wished to support that strategy, you could simplify the above to:
declare @p_total int
select @p_total = count(*) from #resultsBeforeSort
select [Count] = @p_total,
[HasPreviousPage] = (case when @p_skipFirst > 0 then 1 else 0 end),
[HasNextPage] = (case when @p_total > @p_skipFirst + @p_first then 1 else 0 end)
select [RowIndex], Col1, Col2, Col3
from #resultsBeforeSort
order by Column1, Column2 desc, RowIndex
rows limit @p_first offset @p_skipFirst
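As a rough illustration of the after-key semantics above (the p_after / p_skipFirst / p_first parameters), here is a Python sqlite3 sketch over the sample table from the answer; ROW_NUMBER() stands in for the identity() numbering:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE results (RowIndex INTEGER PRIMARY KEY, DataCol1 TEXT)")
con.executemany("INSERT INTO results VALUES (?, ?)",
                [(1234, "Joe"), (1235, "Sue"), (2000, "John"),
                 (2005, "Frank"), (3000, "Tom"), (4000, "Alice")])

def page_after(after_key, skip_first, first):
    # Number the sorted rows, locate the anchor key, then apply
    # skip/take relative to it. A null/missing anchor starts from row 0.
    cur = con.execute("""
        WITH numbered AS (
            SELECT ROW_NUMBER() OVER (ORDER BY RowIndex) AS rn, RowIndex, DataCol1
            FROM results)
        SELECT RowIndex, DataCol1 FROM numbered
        WHERE rn > COALESCE((SELECT rn FROM numbered WHERE RowIndex = ?), 0) + ?
        ORDER BY rn LIMIT ?""", (after_key, skip_first, first))
    return cur.fetchall()

print(page_after(1235, 2, 5))  # rows 5 and 6: (3000, 'Tom'), (4000, 'Alice')
```

With after = 1235 (row 2), skipFirst = 2, and first = 5, this yields rows 5 and 6, matching the worked example in the answer.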

Related

SQL Server: use all the words of a string as separate LIKE parameters (and all words should match)

I have a string containing a certain number of words (it may vary from 1 to many) and I need to find the records of a table which contains ALL those words in any order.
For instances, suppose that my input string is 'yellow blue red' and I have a table with the following records:
1 yellow brown white
2 red blue yellow
3 black blue red
The query should return the record 2.
I know that the basic approach should be something similar to this:
select * from mytable where colors like '%yellow%' and colors like '%blue%' and colors like '%red%'
However, I am not able to figure out how to turn the words of the string into separate LIKE parameters.
I have this code that splits the words of the string into a table, but now I am stuck:
DECLARE @mystring varchar(max) = 'yellow blue red';
DECLARE @terms TABLE (term varchar(max));
INSERT INTO @terms
SELECT Split.a.value('.', 'NVARCHAR(MAX)') AS term
FROM (SELECT CAST('<X>' + REPLACE(@mystring, ' ', '</X><X>') + '</X>' AS XML) AS String) AS A
CROSS APPLY String.nodes('/X') AS Split(a)
SELECT * FROM @terms
Any idea?
First, put that XML junk in a function:
CREATE FUNCTION dbo.SplitThem
(
@List NVARCHAR(MAX),
@Delimiter NVARCHAR(255)
)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN ( SELECT Item = y.i.value(N'(./text())[1]', N'nvarchar(4000)')
FROM ( SELECT x = CONVERT(XML, '<i>'
+ REPLACE(@List, @Delimiter, '</i><i>')
+ '</i>').query('.')
) AS a CROSS APPLY x.nodes('i') AS y(i));
Now you can extract the words in the table, join them to the words in the input string, and discard any that don't have the same count:
DECLARE @mystring varchar(max) = 'red yellow blue';
;WITH src AS
(
SELECT t.id, t.colors, fc = f.c, tc = COUNT(t.id)
FROM dbo.mytable AS t
CROSS APPLY dbo.SplitThem(t.colors, ' ') AS s
INNER JOIN (SELECT Item, c = COUNT(*) OVER()
FROM dbo.SplitThem(@mystring, ' ')) AS f
ON s.Item = f.Item
GROUP BY t.id, t.colors, f.c
)
SELECT * FROM src
WHERE fc = tc;
Output:

id | colors          | fc | tc
---+-----------------+----+---
 2 | red blue yellow |  3 |  3
This disregards any possibility of duplicates on either side and ignores the larger overarching issue that this is the least optimal way possible to store sets of things. You have a relational database, use it! Surely you don't think the tags on this question are stored somewhere as the literal string
string sql-server-2012 sql-like
Of course not, these question:tag relationships are stored in a, well, relational table. Splitting strings is for the birds and those with all kinds of CPU and time to spare.
If you are storing a delimited list in a single column then you really need to normalize it out into a separate table.
But assuming you actually want to just do multiple free-form LIKE comparisons, you can do them against a list of values:
select *
from mytable t
where not exists (select 1
from (values
('%yellow%'),
('%blue%'),
('%red%')
) v(search)
where t.colors not like v.search
);
Ideally you should pass these values through as a table-valued parameter; then you just put that into your query:
select *
from mytable t
where not exists (select 1
from @tmp v
where t.colors not like v.search
);
If you want to simulate an OR semantic rather than AND, change not exists to exists and not like to like.
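If you build the query from application code anyway, the AND-ed LIKE list is easy to generate with one bound parameter per word. Here is a minimal Python sqlite3 sketch (synthetic `mytable` matching the question's sample data; `match_all` is a hypothetical helper):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INTEGER, colors TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?)",
                [(1, "yellow brown white"),
                 (2, "red blue yellow"),
                 (3, "black blue red")])

def match_all(words):
    # One AND-ed LIKE predicate per word, each bound as a parameter
    # so the word list can vary at runtime.
    where = " AND ".join("colors LIKE ?" for _ in words)
    params = [f"%{w}%" for w in words]
    return con.execute(f"SELECT id FROM mytable WHERE {where}", params).fetchall()

print(match_all("yellow blue red".split()))  # [(2,)]
```

Only the placeholders are interpolated into the SQL text; the words themselves travel as parameters, so the dynamic WHERE clause stays injection-safe.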

How to skip the first row of data in SQL Query?

I have this code:
select DOLFUT from [DATABASE $]
How do I get it to fetch data starting from the 2nd row (skip only the first row of data and collect all the rest)?
You can use LIMIT to skip any number of rows you want. Something like:
SELECT * FROM table
LIMIT 1 OFFSET 10
SELECT * FROM tbl LIMIT 5,10; # Retrieve rows 6-15
To retrieve all rows from a certain offset up to the end of the result set, you can use some large number for the second parameter. This statement retrieves all rows from the 96th row to the last:
SELECT * FROM tbl LIMIT 95,18446744073709551615;
With one argument, the value specifies the number of rows to return from the beginning of the result set:
SELECT * FROM tbl LIMIT 5; # Retrieve first 5 rows
MySql docs
In Access, which you seem to use, you can use:
Select DOLFUT
From [DATABASE $]
Where DOLFUT Not In
(Select Top 1 T.DOLFUT
From [DATABASE $] As T
Order By 1)
Data in tables has no inherent order. To get data from the 2nd line, you have to set up some sort sequence and then bypass the first record of the set, as Gustav has shown.

How do I select everything where two columns contain equal values in CQL?

I'm trying to select everything where two columns contain equal values. Here is my CQL query:
select count(someColumn) from somekeySpace.sometable where columnX = columnY
This doesn't work. How can I do this?
You can't query like that; Cassandra doesn't support it.
You can do this in a different way.
First, you have to create a separate counter table.
CREATE TABLE match_counter(
partition int PRIMARY KEY,
count counter
);
At insertion time into your main table, if columnX = columnY then increment the value here. Since you only have a single count, you can use a static partition value:
UPDATE match_counter SET count = count + 1 WHERE partition = 1;
Now you can get the count of matching rows:
SELECT * FROM match_counter WHERE partition = 1;

Select records by first row and page size

I'm trying to implement lazy loading and pagination in my front end. I've been supplied the following two variables by the front end:
firstRow - the index of the first record to return in results of the select query.
pageSize - the total size of the records which the select query must return, starting at firstRow.
How can I use them in a select query in MyBatis in order to return the desired subset of records?
There is not really any magic to pagination in MyBatis: just write the query, then subset it using a row number. Pagination syntax varies by database, but here is an Oracle example.
select *
from (
select r.*, rownum rnum
from (# base query goes here #) r
)
where rnum >= (#{firstRow})
and rnum < #{firstRow} + #{pageSize}
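The same shape works on any engine with window functions. Here is a sketch in Python's sqlite3 (synthetic table `t`, assumed firstRow/pageSize values) using ROW_NUMBER() in place of Oracle's rownum:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(i, f"v{i}") for i in range(1, 101)])

first_row, page_size = 11, 10
# Same shape as the Oracle query: number the ordered base rows,
# then keep the half-open window [first_row, first_row + page_size).
rows = con.execute("""
    SELECT id, val FROM (
        SELECT ROW_NUMBER() OVER (ORDER BY id) AS rnum, id, val
        FROM t)
    WHERE rnum >= ? AND rnum < ? + ?""",
    (first_row, first_row, page_size)).fetchall()
print(rows[0][0], rows[-1][0], len(rows))  # 11 20 10
```

In MyBatis the two `?` bindings would be the `#{firstRow}` and `#{pageSize}` parameters from the mapper.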

What kinds of strategies are there for paging result sets for web UIs?

I am looking at implementing a paging strategy for a domain model which has numbers in the hundreds of thousands. I am most interested in how websites that are performance conscience achieve this.
Here is what I use in a SQL Server 2008 table that has 2 billion + rows of data (I changed the table and column names)
it takes between 6 and 10 milliseconds to return a page of 50 rows; 5,000 rows per page takes about 60 milliseconds
;with cte as (
select ROW_NUMBER() over (order by Column1, Column2) as RowNumber,
<<Other columns here>>
from Table1 p
join Table2 i on i.ID = p.ID
and i.ID2 = p.ID2
join dbo.Table3 c on i.ID2 = p.ID2
where Column2 between @StartDate and @EndDate
and p.ID = @ID
)
select *, (select MAX(RowNumber) from cte) as MaxRow
from cte
where RowNumber between @StartRow and (@StartRow + @RowsPerPage) - 1
order by Col3
I use page-level compression on the DB, plus narrow tables and indexes.
That is on the database side; on the website we use an ExtJS grid, and it just makes Ajax calls to the service, which calls the DB.
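For experimentation, the ROW_NUMBER-plus-MaxRow pattern can be reproduced with Python's sqlite3 (synthetic single-table data standing in for the three-way join; start row and page size are assumed values):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)", [(i, f"v{i}") for i in range(1, 201)])

start_row, rows_per_page = 51, 50
# Number the rows in a CTE, then return one page plus the total row
# count (MaxRow) on every row, just like the SQL Server query above.
rows = con.execute("""
    WITH cte AS (
        SELECT ROW_NUMBER() OVER (ORDER BY id) AS RowNumber, id, val
        FROM t)
    SELECT *, (SELECT MAX(RowNumber) FROM cte) AS MaxRow
    FROM cte
    WHERE RowNumber BETWEEN ? AND ? + ? - 1""",
    (start_row, start_row, rows_per_page)).fetchall()
print(len(rows), rows[0][-1])  # 50 rows, MaxRow = 200
```

Carrying MaxRow on every row saves a second round trip for the total, which is what lets the grid render its page count from a single query.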
You can check out jqgrid.
Here's a demo with 100,000 rows.
