I want to limit the number of results using JobSearchRestriction. I want to limit not by a condition, but by a "hard coded" number, something like "LIMIT 10".
Is it possible to do that in hybris using JobSearchRestriction?
Try this:
SELECT * FROM {Product} LIMIT 10
or
SELECT TOP 10 * FROM {Product}
For Oracle:
SELECT * FROM {Product} WHERE rownum <= 10
Or through the API:
final FlexibleSearchQuery query = new FlexibleSearchQuery("SELECT * FROM {Product}");
query.setCount(10); // return at most 10 rows
final SearchResult<ProductModel> result = flexibleSearchService.search(query);
I am using ScyllaDB open-source version 4.4.
I am trying to figure out how to write a query that I would have done with a window function or a UNION set operator if this were a traditional relational database with full SQL.
A simplified table schema:
CREATE TABLE mykeyspace.mytable (
    name text,
    timestamp_utc_nanoseconds bigint,
    value bigint,
    PRIMARY KEY ((name), timestamp_utc_nanoseconds)
) WITH CLUSTERING ORDER BY (timestamp_utc_nanoseconds DESC);
My query needs to calculate and return 6 values, each of which is the average of "value" over one of the previous 6 minutes.
In pseudo-code:
SELECT AVG(value) AS one_min_avg_6_min_ago FROM mykeyspace.mytable WHERE name = 'some_name' AND timestamp_utc_nanoseconds >= [*6 minutes ago*] AND timestamp_utc_nanoseconds < [*5 minutes ago*];
SELECT AVG(value) AS one_min_avg_5_min_ago FROM mykeyspace.mytable WHERE name = 'some_name' AND timestamp_utc_nanoseconds >= [*5 minutes ago*] AND timestamp_utc_nanoseconds < [*4 minutes ago*];
SELECT AVG(value) AS one_min_avg_4_min_ago FROM mykeyspace.mytable WHERE name = 'some_name' AND timestamp_utc_nanoseconds >= [*4 minutes ago*] AND timestamp_utc_nanoseconds < [*3 minutes ago*];
SELECT AVG(value) AS one_min_avg_3_min_ago FROM mykeyspace.mytable WHERE name = 'some_name' AND timestamp_utc_nanoseconds >= [*3 minutes ago*] AND timestamp_utc_nanoseconds < [*2 minutes ago*];
SELECT AVG(value) AS one_min_avg_2_min_ago FROM mykeyspace.mytable WHERE name = 'some_name' AND timestamp_utc_nanoseconds >= [*2 minutes ago*] AND timestamp_utc_nanoseconds < [*1 minute ago*];
SELECT AVG(value) AS one_min_avg_1_min_ago FROM mykeyspace.mytable WHERE name = 'some_name' AND timestamp_utc_nanoseconds >= [*1 minute ago*];
My client-side is C# .NET 5. I can easily do pretty much anything on the client side. But the latency in this case is going to be a big problem.
My question is:
How can I combine these 6 queries into one result set on the server side (not on the client app side)?
Ideally you would use a UDA (user-defined aggregate); however, support for these is not yet complete in ScyllaDB. In the meantime, you can execute these six queries in parallel, which may even be preferable: fire all six requests at once and collect the results as they arrive, so you pay roughly one round trip of latency instead of six.
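As a minimal sketch of that fan-out pattern, here it is with the DataStax Python driver; your client is C#, where Session.ExecuteAsync gives the same shape. The keyspace, table, and column names are taken from the question, and the contact point is a placeholder:
import time
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # placeholder contact point
session = cluster.connect("mykeyspace")

# One prepared statement covers all six per-minute windows.
stmt = session.prepare(
    "SELECT AVG(value) FROM mytable "
    "WHERE name = ? AND timestamp_utc_nanoseconds >= ? "
    "AND timestamp_utc_nanoseconds < ?"
)

now_ns = time.time_ns()
minute_ns = 60 * 1_000_000_000

# Fire all six requests first, then block on the results.
futures = [
    session.execute_async(stmt, ("some_name",
                                 now_ns - (i + 1) * minute_ns,
                                 now_ns - i * minute_ns))
    for i in range(6)
]
# averages[0] is the most recent minute, averages[5] the oldest.
averages = [f.result().one()[0] for f in futures]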
Hello, I am trying to execute the following Oracle query. I confirmed I can successfully connect to the database using cx_Oracle, but my query is not executing. This is a large table and I am trying to limit the number of rows to 10.
query1 = """
select *
from
(select *
from some_table
)
where rownum < 10;
"""
df_ora1 = pd.read_sql(query1, con=connection1)
I am getting the following error but can't figure out what the invalid character is!
DatabaseError: ORA-00911: invalid character
Please help!
Remove the semicolon from the end of the SQL statement. The semicolon is a statement separator used by client tools such as SQL*Plus; it is not part of the statement itself, so Oracle rejects it with ORA-00911 when it arrives through cx_Oracle.
query1 = """ select * from (select * from some_table ) where rownum < 10 """
How to do this in Laravel 5.1?
SELECT *
FROM
(
SELECT subscriber_id, COUNT(*) as count
FROM mailing_group_subscriber
WHERE mailing_group_id IN ('99', '15498855416270870')
GROUP BY subscriber_id
) table_count
WHERE count = 2;
Thanks!
Your query does not use HAVING; it counts everything in a subquery and then keeps only the rows where the count is 2. You can mirror that in Laravel: get all the counts, then pick the ones equal to 2 from the resulting collection:
DB::table('mailing_group_subscriber')
    ->select('subscriber_id', DB::raw('COUNT(*) as count')) // one select() call; a second would overwrite the first
    ->whereIn('mailing_group_id', ['99', '15498855416270870'])
    ->groupBy('subscriber_id')
    ->pluck('count', 'subscriber_id')   // collection of counts keyed by subscriber_id
    ->filter(function ($count) {        // keep only subscribers in exactly 2 groups
        return $count == 2;
    });
I'm having some problems with the reliability of Azure SQL.
Sometimes a complicated query with subqueries like the following:
SELECT DISTINCT [DeviceName], name, data.[Addr], [Signal]
FROM (
    SELECT [DeviceName], [Signal],
           MAX([Signal]) OVER (PARTITION BY [Addr]) AS 'MaxSignal',
           [Timestamp], [Addr], [PartitionId], [EventEnqueuedUtcTime]
    FROM [dbo].[mytable]
    WHERE CAST([Timestamp] AS DATETIME) > DATEADD(HOUR, +2, (DATEADD(MINUTE, -10, GETDATE())))
) data
LEFT JOIN mytable ON [dbo].[myreftable].[Addr] = data.[Addr]
WHERE [Signal] = [MaxSignal];
finishes almost instantly, as I would expect. At other times, simply doing a SELECT COUNT(*) FROM mytable takes upwards of 30 minutes and shows a DTU usage graph like this:
Anyone know any solutions to this? Is it me doing something completely wrong, or is Azure simply not there yet?
What you pay is what you get. You will need to look at the top resource consumers in your system. A DTU is nothing but a blended cap on the CPU, IO, and memory available to your database.
To troubleshoot DTU problems, I would follow the steps below.
1.) The query below reports, for each resource, how often the database stayed under 80% usage (sys.dm_db_resource_stats keeps roughly the last hour of data, sampled every 15 seconds):
SELECT
(COUNT(end_time) - SUM(CASE WHEN avg_cpu_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'CPU Fit Percent'
,(COUNT(end_time) - SUM(CASE WHEN avg_log_write_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Log Write Fit Percent'
,(COUNT(end_time) - SUM(CASE WHEN avg_data_io_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Physical Data Read Fit Percent'
FROM sys.dm_db_resource_stats
Running the query above tells you how consistently each resource stayed under the 80% threshold; a low "fit percent" means you are regularly pushing against that resource's limit.
2.) The query below gives an idea of resource usage over time (sys.resource_stats keeps about 14 days of history):
SELECT start_time, end_time,
       (SELECT MAX(v) FROM (VALUES (avg_cpu_percent), (avg_physical_data_read_percent), (avg_log_write_percent)) AS value(v)) AS [avg_DTU_percent]
FROM sys.resource_stats
WHERE database_name = '<your db name>'
ORDER BY end_time DESC;
Now that I have enough data to see which metric is the most resource intensive, I can follow the normal troubleshooting approach.
Say, for example, my CPU usage is consistently above 90%: I would gather the queries that consume the most CPU and try to fine tune them, as in the sketch below.
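A minimal sketch of that last step, assuming you query from Python via pyodbc (the connection string is a placeholder); it lists the top CPU consumers from sys.dm_exec_query_stats:
import pyodbc

# Placeholder connection string; fill in your server, database, and credentials.
conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your db name>;Uid=<user>;Pwd=<password>;"
)

top_cpu_sql = """
SELECT TOP 10
       qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
       qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC
"""

# Each row: average CPU per execution, how often it ran, and the statement text.
for row in conn.cursor().execute(top_cpu_sql):
    print(row.avg_cpu_microseconds, row.execution_count, row.query_text)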
I need help writing a U-SQL query that fetches the top n percent of rows. I have one dataset; I need to take its total row count and return the top 3% of rows based on col1. The code I have written is:
@count = SELECT Convert.ToInt32(COUNT(*)) AS cnt FROM @telData;
@count1 = SELECT cnt/100 AS cnt1 FROM @count;
DECLARE @cnt int = SELECT Convert.ToInt32(cnt1*3) FROM @count1;
@EngineFailureData =
    SELECT vin, accelerator_pedal_position, enginefailure = 1
    FROM @telData
    ORDER BY accelerator_pedal_position DESC
    FETCH @cnt ROWS;
@telData is my base dataset. Thanks for the help.
Some comments first:
FETCH currently only takes literals as arguments (https://msdn.microsoft.com/en-us/library/azure/mt621321.aspx)
@var = SELECT ... will assign the name @var to the rowset expression that starts with the SELECT. U-SQL (currently) does not provide stateful scalar variable assignment from query results. Instead, you would use a CROSS JOIN or another JOIN to bring the scalar value in.
Now to the solution:
To get the percentage, take a look at the ROW_NUMBER() and PERCENT_RANK() functions. For example, the following shows you how to use either to answer your question. Given the simpler code for PERCENT_RANK() (no need for the MAX() and CROSS JOIN), I would suggest that solution.
DECLARE @percentage double = 0.25; // 25%

@data = SELECT *
        FROM (VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14),(15),(16),(17),(18),(19),(20)
             ) AS T(pos);

@data =
    SELECT PERCENT_RANK() OVER(ORDER BY pos) AS p_rank,
           ROW_NUMBER() OVER(ORDER BY pos) AS r_no,
           pos
    FROM @data;

@cut_off =
    SELECT ((double) MAX(r_no)) * (1.0 - @percentage) AS max_r
    FROM @data;

@r1 =
    SELECT *
    FROM @data CROSS JOIN @cut_off
    WHERE ((double) r_no) > max_r;

@r2 =
    SELECT *
    FROM @data
    WHERE p_rank >= 1.0 - @percentage;

OUTPUT @r1
TO "/output/top_perc1.csv"
ORDER BY p_rank DESC
USING Outputters.Csv();

OUTPUT @r2
TO "/output/top_perc2.csv"
ORDER BY p_rank DESC
USING Outputters.Csv();