My Query:
Select * from (Select id,name, salary from Emp ORDER BY %s %s) AS AL
BETWEEN OFFSET :OFFSET AND LIMIT: LIMIT
The two %s placeholders represent created and ASC (the sort column and direction).
It is not working in Spanner.
How can I implement this query in Spanner from the Java side?
It seems your query has a syntax error: a WHERE <column name> is missing before the BETWEEN.
With the fixed query, you could do something like this in the Java client:
try (ResultSet rs = databaseClient
    .singleUse()
    .executeQuery(Statement
        .newBuilder(String.format(
            "SELECT *"
                + " FROM ("
                + "   SELECT id, name, salary FROM Emp ORDER BY %s %s"
                + " ) AS e"
                + " WHERE e.id BETWEEN @offset AND @limit",
            "id", "ASC"))
        .bind("offset")
        .to(1L)
        .bind("limit")
        .to(10L)
        .build())) {
  while (rs.next()) {
    System.out.println(rs.getLong("id") + ", " + rs.getString("name") + ", " + rs.getBigDecimal("salary"));
  }
}
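If the goal is plain pagination rather than filtering on an id range, a hedged variant of the same call (assuming the same databaseClient; Cloud Spanner accepts query parameters in LIMIT and OFFSET) would be:

try (ResultSet rs = databaseClient
    .singleUse()
    .executeQuery(Statement
        .newBuilder(String.format(
            "SELECT id, name, salary FROM Emp ORDER BY %s %s LIMIT @limit OFFSET @offset",
            "id", "ASC"))
        .bind("limit")
        .to(10L)
        .bind("offset")
        .to(1L)
        .build())) {
  while (rs.next()) {
    System.out.println(rs.getLong("id") + ", " + rs.getString("name") + ", " + rs.getBigDecimal("salary"));
  }
}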
I am trying to create (using a Spring native query) the findAllById method for a reactive Spring Data Cosmos DB repository, since it is not implemented for ReactiveCosmosRepository.
@Query(value = " SELECT *\n" +
        " FROM container_name km\n" +
        " WHERE km.id IN (@ids) \n" +
        " ORDER BY km.createdDate DESC ")
Flux<ContainerData> findAllById(@Param("ids") String[] ids);
or even
@Query(value = " SELECT *\n" +
        " FROM container_name km\n" +
        " WHERE km.id IN (@ids) \n" +
        " ORDER BY km.createdDate DESC ")
Flux<ContainerData> findAllById(@Param("ids") Iterable<String> ids);
but it is not retrieving any results, and it is not throwing any exception either.
So the question is: how can I use the IN operator with a Spring Data native query in Cosmos DB against a collection or array, out of the box, without a workaround?
You should use array_contains instead of IN:
@Query(value = " SELECT *\n" +
        " FROM container_name km\n" +
        " WHERE array_contains(@ids, km.id, true) \n" +
        " ORDER BY km.createdDate DESC ")
Flux<ContainerData> findAllById(@Param("ids") Iterable<String> ids);
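A hedged usage example (repository is an assumed injected instance of the repository interface above; the ids are illustrative):

repository.findAllById(Arrays.asList("id-1", "id-2"))
        .subscribe(data -> System.out.println(data));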
I am using a DataFrame to read data from each Postgres table and the to_sql() method to insert that data into Oracle. The problem I am facing is that it gets stuck after copying a few records to Oracle: the Jupyter Notebook shows busy but does nothing.
import pandas as pd

def duplicateData(conn, conn2, session):
    query1 = "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'"
    all_tables = session.execute(query1)
    count = 0
    for index, tables in enumerate(all_tables):
        count += 1
        # getting rid of comma and parenthesis
        for i, table in enumerate(tables):
            print("\n" + table + " - NO: " + str(count) + "\n")
            query2 = "SELECT column_name FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '" + table + "'"
            columns = session.execute(query2)
            cols = []
            for col in columns:
                cols.append(col[0])
            query3 = "SELECT * FROM " + table
            df = pd.read_sql(query3, conn)
            # Oracle identifiers are limited to 30 characters
            alias = table[:30] if len(table) > 30 else table
            df.to_sql(alias, conn2, index=False, schema="PMS")
            print("\nDONE\n")
I'm trying to implement my first SQLite database in an Android app that records location coordinates to keep track of where the user has been.
I'm trying to add information from my entry into two tables:
a Locations table that contains the place's name, id, latitude, and longitude, and
a CheckIn table that contains the place's address, the corresponding location_id tying it to a location, latitude, longitude, and the time of check-in.
Whenever I try this, my entry is never added to the Locations table, only the CheckIn table, despite using the insert() function for the Locations table as well; the id is not updating for the Locations table either.
I've gone through my app in a debugger and I can't figure out what's causing the problem, as there's no error and the program proceeds just fine to add the necessary info to the CheckIn table.
I've searched Stack Overflow but can't quite find anything that fixes my problem. If anyone could help, it'd be greatly appreciated.
My add function:
fun addLoc_CheckIn(Entry: Locations)
{
    val selectQuery = "SELECT * FROM $LOCATIONS ORDER BY ID"
    val db = this.readableDatabase
    val cursor = db.rawQuery(selectQuery, null)
    var con = 0
    if (cursor.moveToFirst())
    {
        // do/while so the first row is not skipped
        do
        {
            val pSLong = cursor.getDouble(cursor.getColumnIndex(SLONG))
            val pCLong = cursor.getDouble(cursor.getColumnIndex(CLONG))
            val pSLat = cursor.getDouble(cursor.getColumnIndex(SLAT))
            val pCLat = cursor.getDouble(cursor.getColumnIndex(CLAT))
            // great-circle distance from the stored sin/cos components
            val Theta = (pCLong * Entry.cLong) + (pSLong * Entry.sLong)
            var dist = (pSLat * Entry.sLat) + (pCLat * Entry.cLat * Theta)
            // dist = (Math.acos(dist) * 180.00 / Math.PI) * (60 * 1.1516 * 1.609344) / 1000
            dist = Math.acos(dist) * 6380000
            if (dist <= 30)
            {
                // within 30 m of an existing location: add a check-in for it
                con = 1
                val db1 = this.writableDatabase
                val values = ContentValues()
                values.put(LOC_ID, cursor.getInt(cursor.getColumnIndex(ID)))
                values.put(ADDRESS, Entry.Checks[0].Address)
                values.put(LATI, Entry.Lat)
                values.put(LONGI, Entry.Long)
                values.put(TIME, Entry.Checks[0].Date_Time)
                db1.insert(CHECKINS, null, values)
                break
            }
        } while (cursor.moveToNext())
    }
    cursor.close()
    if (con == 0)
    {
        // no nearby location found: insert a new location plus its first check-in
        val db1 = this.writableDatabase
        val values = ContentValues()
        values.put(LOC_NAME, Entry.Name)
        values.put(LAT, Entry.Lat)
        values.put(LONG, Entry.Long)
        values.put(CLAT, Entry.cLat)
        values.put(SLAT, Entry.sLat)
        values.put(CLONG, Entry.cLong)
        values.put(SLONG, Entry.sLong)
        Entry.Id = db1.insert(LOCATIONS, null, values)
        val cvalues = ContentValues()
        cvalues.put(LOC_ID, Entry.Id)
        cvalues.put(ADDRESS, Entry.Checks[0].Address)
        cvalues.put(LATI, Entry.Lat)
        cvalues.put(LONGI, Entry.Long)
        cvalues.put(TIME, Entry.Checks[0].Date_Time)
        db1.insert(CHECKINS, null, cvalues)
    }
}
My onCreate function with the corresponding companion object:
companion object {
    private val DATABASE_NAME = "LocationsDB"
    private val DATABASE_VERSION = 1

    // 1st Table - Unique Check Ins
    private val LOCATIONS = "LOCATIONS"
    private val ID = "ID"
    private val LOC_NAME = "LOC NAME"
    private val LAT = "LAT"
    private val LONG = "LONG"
    private val CLAT = "CLAT"
    private val SLAT = "SLAT"
    private val CLONG = "CLONG"
    private val SLONG = "SLONG"

    // 2nd Table - Repeated Check Ins
    private val CHECKINS = "CHECKINS"
    private val CHECKIN_ID = "CHECKIN_ID"
    private val LOC_ID = "LOC_ID"
    private val ADDRESS = "ADDRESS"
    private val TIME = "TIME"
    private val LATI = "LAT"
    private val LONGI = "LONG"
}

override fun onCreate(p0: SQLiteDatabase?) {
    val LOCATION_QUERY = "CREATE TABLE " + LOCATIONS + "(" + ID +
            " INTEGER PRIMARY KEY AUTOINCREMENT, " + LOC_NAME +
            " TEXT, " + LAT + " INTEGER, " + LONG + " INTEGER, " +
            CLAT + " INTEGER, " + SLAT + " INTEGER, " + CLONG + " INTEGER, " + SLONG + " INTEGER " + ")"
    val CHECKIN_QUERY = "CREATE TABLE " + CHECKINS + "(" +
            LOC_ID + " INTEGER, " + CHECKIN_ID + " INTEGER PRIMARY KEY AUTOINCREMENT, " +
            LATI + " INTEGER, " + LONGI + " INTEGER, " + ADDRESS + " TEXT, " + TIME + " TEXT " + ")"
    p0!!.execSQL(LOCATION_QUERY)
    p0.execSQL(CHECKIN_QUERY)
}
Now, in the constructors for the Locations and CheckIns classes, I have the ids set to -1, which is what the id for the location remains even after using the insert() function. This doesn't cause any issues with adding my CheckIns or with incrementing the ids in the CheckIns table, and I doubt it's causing the problem, but I figured it'd be best to include this information just in case.
I believe that you have an issue with the column name due to using
private val LOC_NAME = "LOC NAME"
A column name cannot contain a space unless it is enclosed in special characters, as per SQL As Understood By SQLite - SQLite Keywords.
This isn't an issue when the table is created (the column name will be LOC). However, when you attempt to insert, you get a syntax error; the row is not inserted, but because you are using the SQLiteDatabase insert method, the error is trapped and processing continues.
However, in the log you would see something similar to :-
2019-10-29 15:47:35.119 12189-12189/aso.so58600930insert E/SQLiteLog: (1) near "NAME": syntax error
2019-10-29 15:47:35.121 12189-12189/aso.so58600930insert E/SQLiteDatabase: Error inserting LOC NAME=MyLoc LAT=100 CLAT=120 LONG=110 SLAT=140 CLONG=130 SLONG=150
android.database.sqlite.SQLiteException: near "NAME": syntax error (code 1 SQLITE_ERROR): , while compiling: INSERT INTO LOCATIONS(LOC NAME,LAT,CLAT,LONG,SLAT,CLONG,SLONG) VALUES (?,?,?,?,?,?,?)
You could circumvent the above by using :-
val db1 = this.writableDatabase
val values = ContentValues()
values.put("LOC", Entry.Name)
values.put(LAT, Entry.Lat)
values.put(LONG, Entry.Long)
values.put(CLAT, Entry.cLat)
values.put(SLAT, Entry.sLat)
values.put(CLONG, Entry.cLong)
values.put(SLONG, Entry.sLong)
Entry.Id = db1.insert(LOCATIONS, null, values)
However, rather than using the above, it is suggested that you correct the name, e.g. using
private val LOC_NAME = "LOC_NAME"
and then clear the App's data or uninstall the App, and rerun it.
This fix assumes that you are developing the App and can afford to lose any existing data. You could retain the data, but that is a little more complicated: you basically have to create a new table with the appropriate column name, copy the data from the original table, rename or drop the original table, and then rename the new table to the original name, as sketched below.
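A hedged sketch of that data-preserving route (assuming db is a writable SQLiteDatabase, the LOC_NAME constant has been corrected, and LOCATION_QUERY is the CREATE statement from onCreate; recall the misnamed column actually ended up being called LOC):

db.execSQL("ALTER TABLE LOCATIONS RENAME TO LOCATIONS_OLD")
// recreate LOCATIONS, now with the corrected LOC_NAME column
db.execSQL(LOCATION_QUERY)
// copy the rows across; the old column was created as LOC (see above)
db.execSQL(
    "INSERT INTO LOCATIONS (ID, LOC_NAME, LAT, LONG, CLAT, SLAT, CLONG, SLONG) " +
    "SELECT ID, LOC, LAT, LONG, CLAT, SLAT, CLONG, SLONG FROM LOCATIONS_OLD")
db.execSQL("DROP TABLE LOCATIONS_OLD")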
I'm a new Spark user and I want to save my streaming data into multiple HBase tables. Saving my data into a single table was no problem, but I haven't been able to get multiple tables to work.
I've tried creating multiple HTable instances, but then I noticed that this class is only used to communicate with a single HBase table.
Is there any way to do this?
This is where I try to create multiple HTables (of course it doesn't work, but it's the idea):
//HBASE Tables
val tableFull = "table1"
val tableCategoricalFiltered = "table2"
// Add local HBase conf
val conf1 = HBaseConfiguration.create()
val conf2 = HBaseConfiguration.create()
conf1.set(TableInputFormat.INPUT_TABLE, tableFull)
conf2.set(TableInputFormat.INPUT_TABLE, tableCategoricalFiltered)
//Opening Tables
val tableInputFeatures = new HTable(conf1, tableFull)
val tableCategoricalFilteredFeatures = new HTable(conf2, tableCategoricalFiltered)
And here is where I try to use them (it works with one HTable, though):
events.foreachRDD { event =>
  var j = 0
  event.foreach { feature =>
    if (j <= 49) {
      println("Feature " + j + " : " + featuresDic(j))
      println(feature)
      val p_full = new Put(new String("stream " + row_full).getBytes())
      p_full.add(featuresDic(j).getBytes(), "1".getBytes(), new String(feature).getBytes())
      tableInputFeatures.put(p_full)
      // && rather than ||: with || the condition is always true and the else branch never runs
      if (j != 26 && j != 27 && j != 28 && j != 29) {
        val p_cat = new Put(new String("stream " + row_categorical).getBytes())
        p_cat.add(featuresDic(j).getBytes(), "1".getBytes(), new String(feature).getBytes())
        tableCategoricalFilteredFeatures.put(p_cat)
      } else {
        j = 0
        row_full = row_full + 1
        println("Feature " + j + " : " + featuresDic(j))
        println(feature)
        val p_full = new Put(new String("stream " + row_full).getBytes())
        p_full.add(featuresDic(j).getBytes(), "1".getBytes(), new String(feature).getBytes())
        tableInputFeatures.put(p_full)
        val p_cat = new Put(new String("stream " + row_categorical).getBytes())
        p_cat.add(featuresDic(j).getBytes(), "1".getBytes(), new String(feature).getBytes())
        tableCategoricalFilteredFeatures.put(p_cat)
      }
      j = j + 1
    }
  }
}
One approach I have confirmed to work well is the hbase-rdd library:
https://github.com/unicredit/hbase-rdd
It's easy to use; see https://github.com/unicredit/hbase-rdd#writing-to-hbase for usage.
You can also try MultiTableOutputFormat. I have confirmed it works well with traditional MapReduce, but I haven't used it from Spark yet.
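For the MultiTableOutputFormat route, here is a hedged sketch (untested from Spark, as noted; the column family "cf" and the fixed "table1" destination are illustrative). The key of each output pair names the destination table, which is what lets a single job write to several tables:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.MultiTableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job

val job = Job.getInstance(HBaseConfiguration.create())
job.setOutputFormatClass(classOf[MultiTableOutputFormat])

events.foreachRDD { event =>
  val puts = event.map { feature =>
    val put = new Put(Bytes.toBytes("stream " + feature))
    put.add(Bytes.toBytes("cf"), Bytes.toBytes("1"), Bytes.toBytes(feature))
    // the key selects the target table for this record
    (new ImmutableBytesWritable(Bytes.toBytes("table1")), put)
  }
  puts.saveAsNewAPIHadoopDataset(job.getConfiguration)
}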