My Query:
Select * from (Select id,name, salary from Emp ORDER BY %s %s) AS AL
BETWEEN OFFSET :OFFSET AND LIMIT: LIMIT
Here %s, %s represent the sort column and direction, in this case created, ASC.
It is not working in Spanner. How can I implement this query in Spanner from the Java side?
It seems your query has a syntax error: it is missing a WHERE <column name> before the BETWEEN.
With the fixed query, you could do something like this in the Java client:
try (ResultSet rs = databaseClient
    .singleUse()
    .executeQuery(Statement
        .newBuilder(String.format(
            "SELECT *"
                + " FROM ("
                + "   SELECT id, name, salary FROM Emp ORDER BY %s %s"
                + " ) AS e"
                + " WHERE e.id BETWEEN @offset AND @limit",
            "id", "ASC"))
        .bind("offset")
        .to(1L)
        .bind("limit")
        .to(10L)
        .build())) {
  while (rs.next()) {
    System.out.println(rs.getLong("id") + ", " + rs.getString("name") + ", " + rs.getBigDecimal("salary"));
  }
}
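If the real intent of the original query was pagination rather than an id range, note that Spanner's GoogleSQL also accepts query parameters directly in LIMIT and OFFSET, so no subquery is needed. A minimal, untested sketch along the same lines as the example above:
try (ResultSet rs = databaseClient
    .singleUse()
    .executeQuery(Statement
        .newBuilder(String.format(
            "SELECT id, name, salary FROM Emp ORDER BY %s %s LIMIT @limit OFFSET @offset",
            "created", "ASC"))
        .bind("limit")
        .to(10L)
        .bind("offset")
        .to(0L)
        .build())) {
  while (rs.next()) {
    // process each row exactly as in the example above
  }
}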
Related
I want to run a process with Bash. I uploaded the jar file, and I have checked my .conf and shell files. In general, this process is meant to write to a database table, not to an HDFS path.
I don't know if I need to add an additional HDFS path in the code before generating the jar. Below is part of the exception:
22/09/07 21:50:42 INFO yarn.Client: Deleted staging directory hdfs://nn/user/srv_remozo_equip/.sparkStaging/application_1661633254168_93772
Exception in thread "main" java.io.FileNotFoundException: File does not exist: hdfs://nn:8020/applications/recup_remozo_equipos/Logistica_Carga_Input_SimpleData/logistica_carga_input_simpledata_2.11-0.1.jar
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1749)
at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1742)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1757)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:386)
I don't know where the exception is coming from, or whether it is related to the .conf and shell files.
Below I share part of the code (some paths and names are in Spanish) that contains the logic and writes to the database.
// ================= START OF PROCESS LOGIC =================
// Create a DataFrame from the daily input file at the indicated path
val df_csv = spark.read.format("csv").option("header","true").option("sep",";").option("mode","dropmalformed").load("/applications/recup_remozo_equipos/equipos_por_recuperar/output/agendamientos_sin_pet_2")
val df_final = df_csv.select($"RutSinDV".as("RUT_SIN_DV"),
$"dv".as("DV"),
$"Agendado".as("AGENDADO"),
to_date(col("Dia_Agendado"), "yyyyMMdd").as("DIA_AGENDADO"),
$"Horario_Agendado".as("HORARIO_AGENDADO"),
$"Nombre_Agendamiento".as("NOMBRE_AGENDAMIENTO"),
$"Telefono_Agendamiento".as("TELEFONO_AGENDAMIENTO"),
$"Email".substr(0,49).as("EMAIL"),
$"Region_Agendamiento".substr(0,29).as("REGION_AGENDAMIENTO"),
$"Comuna_Agendamiento".as("COMUNA_AGENDAMIENTO"),
$"Direccion_Agendamiento".as("DIRECCION_AGENDAMIENTO"),
$"Numero_Agendamiento".substr(0,5)as("NUMERO_AGENDAMIENTO"),
$"Depto_Agendamiento".substr(0,9).as("DEPTO_AGENDAMIENTO"),
to_timestamp(col("fecha_registro")).as("FECHA_REGISTRO"),
to_timestamp(col("Fecha_Proceso")).as("FECHA_PROCESO")
)
// ================== END OF PROCESS LOGIC ==================
// Cleanup in EXADATA
println("[INFO] Starting the reprocess cleanup in EXADATA")
val query_particiones = "(SELECT * FROM (WITH DATA AS (select table_name,partition_name,to_date(trim('''' " +
"from regexp_substr(extractvalue(dbms_xmlgen.getxmltype('select high_value from all_tab_partitions " +
"where table_name='''|| table_name|| ''' and table_owner = '''|| table_owner|| ''' and partition_name = '''" +
"|| partition_name|| ''''),'//text()'),'''.*?''')),'syyyy-mm-dd hh24:mi:ss') high_value_in_date_format " +
"FROM all_tab_partitions WHERE table_name = '" + table_name + "' AND table_owner = '" + table_owner + "')" +
"SELECT partition_name FROM DATA WHERE high_value_in_date_format > DATE '" + startDateYear + "-" + startDateMonth + "-" + startDateDay + "' " +
"AND high_value_in_date_format <= DATE '" + endDateYear + "-" + endDateMonth + "-" + endDateDay + "') A)"
Class.forName(driver_jdbc)
val db = DriverManager.getConnection(url_jdbc, user_jdbc, pass_jdbc)
val st = db.createStatement()
try {
val consultaParticiones = spark.read.format("jdbc")
.option("url", url_jdbc)
.option("driver", driver_jdbc)
.option("dbTable", query_particiones)
.option("user", user_jdbc)
.option("password", pass_jdbc)
.load()
.collect()
for (partition <- consultaParticiones) {
st.executeUpdate("call " + table_owner + ".DO_THE_TRUNCATE_PARTITION('" + table + "','" + partition.getString(0) + "')")
}
} catch {
case e: Exception =>
println("[ERROR TRUNCATE] " + e)
}
st.close()
db.close()
println("[INFO] Se inicia la inserción en EXADATA")
df_final.filter($"DIA_AGENDADO" >= "2022-08-01")
.repartition(repartition).write.mode("append")
.jdbc(url_jdbc, table, utils.jdbcProperties(driver_jdbc, user_jdbc, pass_jdbc))
println("[INFO] Inserción en EXADATA completada con éxito")
println("[INFO] Proceso Logistica Carga Input SimpleData")
I am trying to create (using a Spring native query) the findAllById method for a reactive Spring Data Cosmos DB repository, since it is not implemented for ReactiveCosmosRepository.
@Query(value = " SELECT *\n" +
        " FROM container_name km\n" +
        " WHERE km.id IN (@ids) \n" +
        " ORDER BY km.createdDate DESC ")
Flux<ContainerData> findAllById(@Param("ids") String[] ids);
or even
@Query(value = " SELECT *\n" +
        " FROM container_name km\n" +
        " WHERE km.id IN (@ids) \n" +
        " ORDER BY km.createdDate DESC ")
Flux<ContainerData> findAllById(@Param("ids") Iterable<String> ids);
but it does not retrieve any results, and it does not throw any exception either.
So the question is: how can the IN operator be used with a Spring Data native query in Cosmos DB, with a collection or array, out of the box and without a workaround?
You should use array_contains
@Query(value = " SELECT *\n" +
        " FROM container_name km\n" +
        " WHERE array_contains(@ids, km.id, true) \n" +
        " ORDER BY km.createdDate DESC ")
Flux<ContainerData> findAllById(@Param("ids") Iterable<String> ids);
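For context, array_contains(@ids, km.id, true) checks whether the bound @ids array contains km.id; the optional third boolean argument enables partial matching of objects within the array, so for plain string ids it can also be omitted. A hypothetical caller (the repository instance and id values are assumed for illustration) could then look like:
// "repository" is assumed to be an injected instance of the reactive repository above
List<String> ids = Arrays.asList("id-1", "id-2");
repository.findAllById(ids)
        .doOnNext(item -> System.out.println(item))
        .subscribe();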
I am using spark-sql-2.4.1v with Java 8.
I have a scenario/snippet like the one below:
Dataset<Row> df = ... // loaded data from a csv file
// this has columns like "code1","code2","code3","code4","code5","code6", and "class"
df.createOrReplaceTempView("temp_tab");

List<String> codesList = Arrays.asList("code1", "code5"); // codes of interest to be calculated
codesList.stream().forEach(code -> {
    String query = "select"
            + " avg(" + code + ") as mean,"
            + " percentile(" + code + ", 0.25) as p25"
            + " from temp_tab"
            + " group by class";
    Dataset<Row> resultDs = sparkSession.sql(query);
});
How can this be written using functions.expr() and agg()?
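For what it's worth, here is one untested way this could be expressed with functions.expr() and the agg() method on the grouped Dataset. It reuses df and the column names from the snippet above, and computes all the requested statistics in a single pass instead of one query per code:
import static org.apache.spark.sql.functions.avg;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.expr;

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

List<String> codesList = Arrays.asList("code1", "code5");

// Build one mean and one 25th-percentile aggregate per code of interest
List<Column> aggs = codesList.stream()
    .flatMap(code -> Stream.of(
        avg(col(code)).alias(code + "_mean"),
        expr("percentile(" + code + ", 0.25)").alias(code + "_p25")))
    .collect(Collectors.toList());

// agg(Column, Column...) takes the first aggregate plus the rest as varargs
Dataset<Row> resultDs = df.groupBy(col("class"))
    .agg(aggs.get(0), aggs.subList(1, aggs.size()).toArray(new Column[0]));
Compared with the loop, Spark scans the data once and returns one row per class containing every requested statistic.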
I'm trying to implement my first SQLite database in an Android app that obtains location coordinates to keep track of where the user has been.
I'm trying to add information from my entry into two tables:
a Locations table that contains the place's name, id, latitude, and longitude information, and
a CheckIn table that contains the place's address, the corresponding location_id to know which location it belongs to, latitude, longitude, and time of check-in.
Whenever I try to do this, my entry is never added to the Locations table, only to the CheckIn table, despite using the insert() function on the Locations table as well; the id for the Locations row is not updating either.
I've gone through my app in a debugger and I can't figure out what's causing the problem, as there's no error and the program proceeds just fine to add the necessary info to the CheckIn table.
I've tried searching Stack Overflow but I can't quite find anything that fixes my problem. If anyone could help, it'd be greatly appreciated.
My add function:
fun addLoc_CheckIn(Entry: Locations) {
    val selectQuery = "SELECT * FROM $LOCATIONS ORDER BY ID"
    val db = this.readableDatabase
    val cursor = db.rawQuery(selectQuery, null)
    var con = 0
    if (cursor.moveToFirst()) {
        while (cursor.moveToNext()) {
            val pSLong = cursor.getDouble(cursor.getColumnIndex(SLONG))
            val pCLong = cursor.getDouble(cursor.getColumnIndex(CLONG))
            val pSLat = cursor.getDouble(cursor.getColumnIndex(SLAT))
            val pCLat = cursor.getDouble(cursor.getColumnIndex(CLAT))
            val Theta = (pCLong * Entry.cLong) + (pSLong * Entry.sLong)
            var dist = (pSLat * Entry.sLat) + (pCLat * Entry.cLat * Theta)
            // dist = (Math.acos(dist) * 180.00 / Math.PI) * (60 * 1.1516 * 1.609344) / 1000
            dist = Math.acos(dist) * 6380000
            if (dist <= 30) {
                con = 1
                val db1 = this.writableDatabase
                val values = ContentValues()
                values.put(LOC_ID, cursor.getInt(cursor.getColumnIndex(ID)))
                values.put(ADDRESS, Entry.Checks[0].Address)
                values.put(LATI, Entry.Lat)
                values.put(LONGI, Entry.Long)
                values.put(TIME, Entry.Checks[0].Date_Time)
                db1.insert(CHECKINS, null, values)
                break
            }
        }
    }
    if (con == 0) {
        val db1 = this.writableDatabase
        val values = ContentValues()
        values.put(LOC_NAME, Entry.Name)
        values.put(LAT, Entry.Lat)
        values.put(LONG, Entry.Long)
        values.put(CLAT, Entry.cLat)
        values.put(SLAT, Entry.sLat)
        values.put(CLONG, Entry.cLong)
        values.put(SLONG, Entry.sLong)
        Entry.Id = db1.insert(LOCATIONS, null, values)
        val cvalues = ContentValues()
        cvalues.put(LOC_ID, Entry.Id)
        cvalues.put(ADDRESS, Entry.Checks[0].Address)
        cvalues.put(LATI, Entry.Lat)
        cvalues.put(LONGI, Entry.Long)
        cvalues.put(TIME, Entry.Checks[0].Date_Time)
        db1.insert(CHECKINS, null, cvalues)
    }
}
My onCreate function with the corresponding companion object:
companion object {
    private val DATABASE_NAME = "LocationsDB"
    private val DATABASE_VERSION = 1

    // 1st Table - Unique Check Ins
    private val LOCATIONS = "LOCATIONS"
    private val ID = "ID"
    private val LOC_NAME = "LOC NAME"
    private val LAT = "LAT"
    private val LONG = "LONG"
    private val CLAT = "CLAT"
    private val SLAT = "SLAT"
    private val CLONG = "CLONG"
    private val SLONG = "SLONG"

    // 2nd Table - Repeated Check Ins
    private val CHECKINS = "CHECKINS"
    private val CHECKIN_ID = "CHECKIN_ID"
    private val LOC_ID = "LOC_ID"
    private val ADDRESS = "ADDRESS"
    private val TIME = "TIME"
    private val LATI = "LAT"
    private val LONGI = "LONG"
}

override fun onCreate(p0: SQLiteDatabase?) {
    val LOCATION_QUERY = "CREATE TABLE " + LOCATIONS + "(" + ID +
            " INTEGER PRIMARY KEY AUTOINCREMENT, " + LOC_NAME +
            " TEXT, " + LAT + " INTEGER, " + LONG + " INTEGER, " +
            CLAT + " INTEGER, " + SLAT + " INTEGER, " + CLONG + " INTEGER, " + SLONG + " INTEGER " + ")"
    val CHECKIN_QUERY = "CREATE TABLE " + CHECKINS + "(" +
            LOC_ID + " INTEGER, " + CHECKIN_ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " +
            LATI + " INTEGER, " + LONGI + " INTEGER, " + ADDRESS + " TEXT, " + TIME + " TEXT " + ")"
    p0!!.execSQL(LOCATION_QUERY)
    p0.execSQL(CHECKIN_QUERY)
}
Now, in the constructors for the Locations and CheckIns classes, I have the ids set to -1, which is what the location's id remains even after using the insert() function. This doesn't cause me any issues with adding my check-ins or with incrementing the ids in my CheckIns table, and I doubt it's causing the issue, but I figured it'd be best to include the information just in case.
I believe that you have an issue with the name of the column due to using
private val LOC_NAME = "LOC NAME"
A column name cannot have a space unless it is enclosed in special characters as per SQL As Understood By SQLite - SQLite Keywords.
This isn't an issue when the table is created (the column name will be LOC, with NAME treated as part of the column type). However, when you attempt to insert, you will get a syntax error; the row will not be inserted, but as you are using the SQLiteDatabase insert method, the error is trapped and processing continues.
However, in the log you would see something similar to :-
2019-10-29 15:47:35.119 12189-12189/aso.so58600930insert E/SQLiteLog: (1) near "NAME": syntax error
2019-10-29 15:47:35.121 12189-12189/aso.so58600930insert E/SQLiteDatabase: Error inserting LOC NAME=MyLoc LAT=100 CLAT=120 LONG=110 SLAT=140 CLONG=130 SLONG=150
android.database.sqlite.SQLiteException: near "NAME": syntax error (code 1 SQLITE_ERROR): , while compiling: INSERT INTO LOCATIONS(LOC NAME,LAT,CLAT,LONG,SLAT,CLONG,SLONG) VALUES (?,?,?,?,?,?,?)
You could circumvent the above by using :-
val db1 = this.writableDatabase
val values = ContentValues()
values.put("LOC", Entry.Name)
values.put(LAT, Entry.Lat)
values.put(LONG, Entry.Long)
values.put(CLAT, Entry.cLat)
values.put(SLAT, Entry.sLat)
values.put(CLONG, Entry.cLong)
values.put(SLONG, Entry.sLong)
Entry.Id = db1.insert(LOCATIONS, null, values)
However, it is suggested that you do NOT use the above but that you instead correct the name, e.g. using :-
private val LOC_NAME = "LOC_NAME"
then clear the App's data or uninstall the App and then rerun the App.
This fix assumes that you are developing the App and can afford to lose any existing data. You could retain the data, but this is a little more complicated: you basically have to create a new table with the appropriate column name, copy the data from the original table, rename or drop the original table, and then rename the new table to the original name, as sketched below.
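For illustration, here is a rough sketch of that data-preserving route, assuming (as described above) that the broken table's column was actually created as LOC and that db is a writable SQLiteDatabase; the calls read the same in Java and Kotlin:
db.execSQL("ALTER TABLE LOCATIONS RENAME TO LOCATIONS_old");
db.execSQL("CREATE TABLE LOCATIONS(ID INTEGER PRIMARY KEY AUTOINCREMENT, " +
        "LOC_NAME TEXT, LAT INTEGER, LONG INTEGER, " +
        "CLAT INTEGER, SLAT INTEGER, CLONG INTEGER, SLONG INTEGER)");
// The old column was effectively named LOC, so copy it into the corrected LOC_NAME column
db.execSQL("INSERT INTO LOCATIONS(ID, LOC_NAME, LAT, LONG, CLAT, SLAT, CLONG, SLONG) " +
        "SELECT ID, LOC, LAT, LONG, CLAT, SLAT, CLONG, SLONG FROM LOCATIONS_old");
db.execSQL("DROP TABLE LOCATIONS_old");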
We just updated to Sequelize version 1.7.0-rc1. All of a sudden, the following Sequelize code stopped working:
exports.invalidate = function (partnerId, importStartTime, fn) {
console.log('\r\n Import start time: ' + JSON.stringify(importStartTime));
var sql = '';
sql += 'UPDATE "Product" p SET "status" = 0 FROM "Brand" b WHERE b."id" = p."BrandId" AND b."PartnerId" = ' + partnerId + ' AND p."status" <> 0 AND \
p."validatedAt" < \'' + importStartTime + '\'; ';
sql += 'UPDATE "Variant" v SET "status" = 0 FROM "Product" p INNER JOIN "Brand" b ON b."id" = p."BrandId" WHERE \
p."id" = v."ProductId" AND b."PartnerId" = ' + partnerId + ' AND v."status" <> 0 AND v."validatedAt" < \'' + importStartTime + '\'; ';
sql += 'UPDATE "ProductCategory" pc SET "status" = 0 FROM "Product" p INNER JOIN "Brand" b ON b."id" = p."BrandId" WHERE \
p."id" = pc."ProductId" AND b."PartnerId" = ' + partnerId + ' AND pc."status" <> 0 AND pc."validatedAt" < \'' + importStartTime + '\'; ';
sql += 'UPDATE "VariantImg" vi SET "status" = 0 FROM "Variant" v INNER JOIN "Product" p ON p."id" = v."ProductId" INNER JOIN "Brand" b ON b."id" = p."BrandId" WHERE \
v."id" = vi."VariantId" AND b."PartnerId" = ' + partnerId + ' AND vi."status" <> 0 AND vi."validatedAt" < \'' + importStartTime + '\'; ';
console.log('\r\n ' + sql);
sequelize.query(sql).success(function () {
console.log('\r\n Invalidation completed :)');
fn(null);
}).error(function (err) {
logExceptOnTest('\r\n Invalidation failed :(');
logExceptOnTest('\r\n Sequelize failed to set status to 0 for items that have been invalidated :(');
logExceptOnTest('\r\n err: ' + JSON.stringify(err));
fn(err);
});
}
We are using PostgreSQL v9.3 for our datastore, and it appears that the UPDATE SQL is not being executed. If I execute the same SQL directly in PostgreSQL, everything works as expected. The funny thing is, if I enable Sequelize logging, I can see the UPDATE statements being logged.
Update: 1/23/2014
I decided to move the UPDATE SQL to a PostgreSQL function. Strangely, the result is the same. If I call the function within PostgreSQL, everything works. If I call the function from Sequelize, via the sequelize.query method, the same problem occurs.
Here is the code for the function:
CREATE OR REPLACE FUNCTION invalidate(partnerId INTEGER, importStartTime TIMESTAMP)
RETURNS TABLE (
validatedAt TIMESTAMP,
importStartTime2 TIMESTAMP
)
AS
$$
BEGIN
UPDATE "Product" p SET "status" = 0 FROM "Brand" b WHERE b."id" = p."BrandId" AND b."PartnerId" = $1 AND p."status" <> 0 AND
p."validatedAt"::TIMESTAMP WITHOUT TIME ZONE < $2;
UPDATE "Variant" v SET "status" = 0 FROM "Product" p INNER JOIN "Brand" b ON b."id" = p."BrandId" WHERE
p."id" = v."ProductId" AND b."PartnerId" = $1 AND v."status" <> 0 AND
v."validatedAt"::TIMESTAMP WITHOUT TIME ZONE < $2;
UPDATE "ProductCategory" pc SET "status" = 0 FROM "Product" p INNER JOIN "Brand" b ON b."id" = p."BrandId" WHERE
p."id" = pc."ProductId" AND b."PartnerId" = $1 AND pc."status" <> 0 AND
pc."validatedAt"::TIMESTAMP WITHOUT TIME ZONE < $2;
UPDATE "VariantImg" vi SET "status" = 0 FROM "Variant" v INNER JOIN "Product" p ON p."id" = v."ProductId" INNER JOIN "Brand" b ON b."id" = p."BrandId" WHERE
v."id" = vi."VariantId" AND b."PartnerId" = $1 AND vi."status" <> 0 AND
vi."validatedAt"::TIMESTAMP WITHOUT TIME ZONE < $2;
RETURN QUERY select pc."validatedAt"::TIMESTAMP WITHOUT TIME ZONE as "validatedAt", $2 as "importStartTime"
from "ProductCategory" pc
inner join "Product" p on pc."ProductId" = p."id"
inner join "Brand" b on b."id" = p."BrandId"
where pc."id" = 34;
END
$$
LANGUAGE plpgsql;
If I call the function like so within PostgreSQL, all is fine:
SELECT invalidate(31, '2014-01-22 22:27:53');
If I call the function like so from Node.js using Sequelize, it doesn't work:
var sql = 'SELECT invalidate(' + partnerId + ', \'' + importStartTime + '\');';
console.log('\r\n ' + sql);
sequelize.query(sql).success(function (data) {
//console.log('\r\n Invalidation completed :)');
console.log('\r\n data: ' + JSON.stringify(data));
fn(null);
}).error(function (err) {
logExceptOnTest('\r\n Invalidation failed :(');
logExceptOnTest('\r\n Sequelize failed to set status to 0 for items that have been invalidated :(');
logExceptOnTest('\r\n err: ' + JSON.stringify(err));
fn(err);
});
I have narrowed the problem down to this part of the SQL:
p."validatedAt"::TIMESTAMP WITHOUT TIME ZONE < $2
I believe the problem is with how dates are stored and passed by both PostgreSQL and Sequelize. Something changed with Sequelize 1.7.0-rc1 that is converting the date to a format that doesn't match what PostgreSQL is reading from the validatedAt timestamp column.
I played around for a while, attempting to massage the dates into the right UTC format. I tried both with and without time zone, without any success. I even added a select at the end of the function to return the date stored in the validatedAt column and the date I pass into the function (importStartTime). Both dates are clearly the same when looking at them visually in the code (via console.log) and in the table (when selected directly in PostgreSQL), but when the function call returns the results of its select statement, the dates are not in the same format. So I believe the UPDATE statement's date-comparison part is also seeing the dates in different formats, making it appear that the query is not working, even though it actually is.
I updated the title of this post to reflect the new findings.