I'm trying to convert HQL to Spark.
I have the following query (works in Hue with the Hive editor):
select reflect('java.util.UUID', 'randomUUID') as id,
tt.employee,
cast(from_unixtime(unix_timestamp(date_format(current_date(), 'dd/MM/yyyy HH:mm:ss'), 'dd/MM/yyyy HH:mm:ss')) as timestamp) as insert_date,
collect_set(tt.employee_detail) as employee_details,
collect_set( tt.emp_indication ) as employees_indications,
named_struct ('employee_info', collect_set(tt.emp_info),
'employee_mod_info', collect_set(tt.emp_mod_info),
'employee_comments', collect_set(tt.emp_comment) )
as emp_mod_details
from (
select views_ctr.employee,
if ( views_ctr.employee_details.so is not null, views_ctr.employee_details, null ) employee_detail,
if ( views_ctr.employee_info.so is not null, views_ctr.employee_info, null ) emp_info,
if ( views_ctr.employee_comments.so is not null, views_ctr.employee_comments, null ) emp_comment,
if ( views_ctr.employee_mod_info.so is not null, views_ctr.employee_mod_info, null ) emp_mod_info,
if ( views_ctr.emp_indications.so is not null, views_ctr.emp_indications, null ) emp_indication
from
( select * from views_sta where emp_partition=0 and employee is not null ) views_ctr
) tt
group by employee
distribute by employee
First, I tried writing it with spark.sql as follows:
sparkSession.sql("select reflect('java.util.UUID', 'randomUUID') as id, tt.employee, cast( from_unixtime(unix_timestamp (date_format(current_date(),'dd/MM/yyyy HH:mm:ss'), 'dd/MM/yyyy HH:mm:ss')) as timestamp) as insert_date, collect_set(tt.employee_detail) as employee_details, collect_set( tt.emp_indication ) as employees_indications, named_struct ('employee_info', collect_set(tt.emp_info), 'employee_mod_info', collect_set(tt.emp_mod_info), 'employee_comments', collect_set(tt.emp_comment) ) as emp_mod_details, from ( select views_ctr.employee, if ( views_ctr.employee_details.so is not null, views_ctr.employee_details, null ) employee_detail, if ( views_ctr.employee_info.so is not null, views_ctr.employee_info, null ) emp_info, if ( views_ctr.employee_comments.so is not null, views_ctr.employee_comments, null ) emp_comment, if ( views_ctr.employee_mod_info.so is not null, views_ctr.employee_mod_info, null ) emp_mod_info, if ( views_ctr.emp_indications.so is not null, views_ctr.emp_indications, null ) employees_indication, from ( select * from views_sta where emp_partition=0 and employee is not null ) views_ctr ) tt group by employee distribute by employee")
But I got the following exception:
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task not serializable: java.io.NotSerializableException: org.apache.spark.unsafe.types.UTF8String$IntWrapper
- object not serializable (class: org.apache.spark.unsafe.types.UTF8String$IntWrapper, value: org.apache.spark.unsafe.types.UTF8String$IntWrapper@30cfd641)
If I run the query without the collect_set calls, it works. Could it be failing because of the struct column types in my table?
How can I write my HQL query in Spark / fix my exception?
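One simplification I'm considering (a hedged sketch on my part, assuming Spark 2.3+ where uuid() is a built-in; note the insert_date round-trip starts from current_date(), so it reduces to midnight of the current date):

select
    uuid() as id,  -- built-in replacement for reflect('java.util.UUID', 'randomUUID')
    cast(current_date() as timestamp) as insert_date  -- equivalent to the format/parse round-trip above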
In pgAdmin / the CLI, the following query:
UPDATE wq SET l_id = NULL, v_id = NULL WHERE w_id = 'cf93bc71-88c1-4bba-9e5c-fdc58d0ed14e';
works fine. However, when calling the same with the pg package in node:
const w_id_val = 'cf93bc71-88c1-4bba-9e5c-fdc58d0ed14e';
/*
  here `client` is the result of calling
    const pool = pg.Pool({...});
  then
    let client = await pool.connect();
*/
const result = await client.query(
`UPDATE wq
SET l_id = null,
v_id = null
WHERE w_id = $1`,
[w_id_val]
);
I get the following error:
{
"message":"column \"w_id\" does not exist",
"stack":"error: column \"w_id\" does not exist\n at Connection.parseE (/Users/lukasjenks/Documents/Work/socrative-nodejs/node_modules/pg/lib/connection.js:569:11)\n at Connection.parseMessage (/Users/lukasjenks/Documents/Work/socrative-nodejs/node_modules/pg/lib/connection.js:396:17)\n at Socket.<anonymous> (/Users/lukasjenks/Documents/Work/socrative-nodejs/node_modules/pg/lib/connection.js:132:22)\n at Socket.emit (events.js:314:20)\n at Socket.EventEmitter.emit (domain.js:483:12)\n at addChunk (_stream_readable.js:297:12)\n at readableAddChunk (_stream_readable.js:272:9)\n at Socket.Readable.push (_stream_readable.js:213:10)\n at TCP.onStreamRead (internal/stream_base_commons.js:188:23)\n at TCP.callbackTrampoline (internal/async_hooks.js:126:14)",
"name":"error",
"length":112,
"severity":"ERROR",
"code":"42703",
"position":"68",
"file":"parse_relation.c",
"line":"3514",
"routine":"errMissingColumn"
}
I can confirm the column exists with this query:
SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
WHERE table_name = 'wq';
table_schema | table_name | column_name | data_type
-------------+------------+-------------+-----------
public       | wq         | id          | uuid
public       | wq         | w_id        | uuid
public       | wq         | l_id        | uuid
public       | wq         | v_id        | uuid
I can also confirm that pg should recognize the column (w_id): when I use pg to query the table with a SELECT statement, I get this back in the fields property of the result object:
fields: [
Field {
name: 'id',
tableID: 26611,
columnID: 1,
dataTypeID: 2950,
dataTypeSize: 16,
dataTypeModifier: -1,
format: 'text'
},
Field {
name: 'w_id',
tableID: 26611,
columnID: 3,
dataTypeID: 2950,
dataTypeSize: 16,
dataTypeModifier: -1,
format: 'text'
},
...
I've also confirmed this isn't a case issue; i.e. the column name is all lowercase and using double quotes around the column name has no effect.
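One thing I still plan to rule out (a hedged diagnostic sketch using standard Postgres introspection, not a fix) is the pool pointing at a different database or search_path than the pgAdmin session, since error 42703 comes from whichever server actually parses the statement:

const diag = await client.query(
  'SELECT current_database() AS db, current_schemas(true) AS schemas'
);
console.log(diag.rows[0]); // compare against the same query run in pgAdmin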
I have a table with polygons stored as GeoJSON.
CREATE TABLE `location_boundaries` (
`id` INT UNSIGNED NOT NULL,
`name` VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_unicode_ci',
`geo_json` JSON NULL DEFAULT NULL,
`geom` GEOMETRY NULL DEFAULT NULL,
PRIMARY KEY (`id`)
)
COLLATE='utf8mb4_unicode_ci'
ENGINE=InnoDB
I have the following GeoJSON MultiPolygon for Asia in the table:
{"type": "MultiPolygon", "coordinates": [[[[-168.25, 77.7], [-180, 77.7], [-180, 58.1], [-168.25, 58.1], [-168.25, 77.7]]], [[[39.6908, 84.52666], [180, 84.38487], [180, 26.27883], [142.084541, 22.062707], [130.147, 3.608598], [141.1373, -1.666358], [141.0438, -9.784795], [130.2645, -10.0399], [118.2545, -13.01165], [102.7975, -8.388008], [89.50451, -11.1417], [61.62511, -9.103512], [51.62645, 12.54865], [44.20775, 11.6786], [39.78016, 16.56855], [31.60401, 31.58641], [33.27769, 34.00057], [34.7674, 34.85347], [35.72423, 36.32686], [36.5597, 37.66439], [44.1053, 37.98438], [43.01638, 41.27191], [41.28304, 41.41274], [36.26378, 44.40772], [36.61315, 45.58723], [37.48493, 46.80924], [38.27497, 47.61317], [39.56164, 48.43141], [39.77264, 50.58891], [39.6908, 84.52666]]]]}
When I run the following:
UPDATE location_boundaries SET geom = ST_GeomFromGeoJSON(geo_json) where id = 6255147
I'm getting the following error:
Longitude -180.000000 is out of range in function st_geomfromgeojson. It must be within (-180.000000, 180.000000].
What's going on? All of this worked fine in MySQL 5.7, yet in MySQL 8 it breaks.
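The only workaround I can think of (a hedged sketch: MySQL 8 appears to validate GeoJSON against SRID 4326, whose longitude check is (-180.000000, 180.000000], so the exact -180 vertices fail) is to nudge those vertices just inside the range before conversion; the REPLACE pattern below assumes the exact '[-180,' spelling in my stored JSON:

UPDATE location_boundaries
SET geom = ST_GeomFromGeoJSON(REPLACE(CAST(geo_json AS CHAR), '[-180,', '[-179.999999,'))
WHERE id = 6255147;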
I am using embedded Cassandra to run unit tests. I notice that if any CQL statement fails, I don't see any descriptive reason for the failure. For example, I am running the following two statements, which fail because I am trying to add a table without switching to a keyspace:
val statement1 =
"""
|CREATE KEYSPACE test
| WITH REPLICATION = {
| 'class' : 'SimpleStrategy',
| 'replication_factor' : 1
| };
""".stripMargin
val statement3 =
"""
|CREATE TABLE users (
| bucket int,
| email text,
| firstname text,
| lastname text,
| authprovider text,
| password text,
| confirmed boolean,
| id UUID,
| hasher text,
| salt text,
| PRIMARY KEY ((bucket, email), authprovider,firstname, lastname) )
""".stripMargin
val cqlStatements: CqlStatements = new CqlStatements(statement1, statement3)
val testCassandra = repoTestEnv.testCassandra
try {
testCassandra.start()
testCassandra.executeScripts(cqlStatements)
} finally testCassandra.stop()
But I don't see the actual error. I see the following, which doesn't say exactly what the problem is:
[info] c.g.n.e.c.l.WindowsCassandraNode - Apache Cassandra Node '7276' is started
[info] c.g.n.e.c.l.LocalCassandraDatabase - Apache Cassandra '3.11.1' is started (20811 ms)
[warn] c.d.d.c.Connection - /127.0.0.1:9042 did not send an authentication challenge; This is suspicious because the driver expects authentication (configured auth provider = com.datastax.driver.core.PlainTextAuthProvider)
[warn] c.d.d.c.Connection - /127.0.0.1:9042 did not send an authentication challenge; This is suspicious because the driver expects authentication (configured auth provider = com.datastax.driver.core.PlainTextAuthProvider)
[debug] c.g.n.e.c.t.u.CqlUtils - Executing Script: CqlStatements [
CREATE KEYSPACE test
WITH REPLICATION = {
'class' : 'SimpleStrategy',
'replication_factor' : 1
};
,
CREATE TABLE users (
bucket int,
email text,
firstname text,
lastname text,
authprovider text,
password text,
confirmed boolean,
id UUID,
hasher text,
salt text,
PRIMARY KEY ((bucket, email), authprovider,firstname, lastname) )
]
[debug] c.g.n.e.c.t.u.CqlUtils - Executing Statement:
CREATE KEYSPACE test
WITH REPLICATION = {
'class' : 'SimpleStrategy',
'replication_factor' : 1
};
[info] c.g.n.e.c.Cassandra - INFO [Native-Transport-Requests-1] 2019-05-29 07:50:00,788 MigrationManager.java:310 - Create new Keyspace: KeyspaceMetadata{name=test, params=KeyspaceParams{durable_writes=true, replication=ReplicationParams{class=org.apache.cassandra.locator.SimpleStrategy, replication_factor=1}}, tables=[], views=[], functions=[], types=[]}
[debug] c.g.n.e.c.t.u.CqlUtils - Executing Statement:
CREATE TABLE users (
bucket int,
email text,
firstname text,
lastname text,
authprovider text,
password text,
confirmed boolean,
id UUID,
hasher text,
salt text,
PRIMARY KEY ((bucket, email), authprovider,firstname, lastname) )
[debug] c.g.n.e.c.t.TestCassandra - Stop TestCassandra 3.11.1
[info] c.g.n.e.c.l.LocalCassandraDatabase - Stop Apache Cassandra '3.11.1'
[debug] c.g.n.e.c.l.RunProcess - Execute 'powershell -ExecutionPolicy Unrestricted C:\Users\manu\AppData\Local\Temp\embedded-cassandra\3.11.1\0d155e04-97d5-4927-87ac-d46824a77c32\bin\stop-server.ps1 -p C:\Users\manu\AppData\Local\Temp\embedded-cassandra\3.11.1\0d155e04-97d5-4927-87ac-d46824a77c32\1da63488-2624-4141-a49e-174203b7edc4' within a directory 'C:\Users\manu\AppData\Local\Temp\embedded-cassandra\3.11.1\0d155e04-97d5-4927-87ac-d46824a77c32'
[info] c.g.n.e.c.Cassandra - INFO [StorageServiceShutdownHook] 2019-05-29 07:50:03,926 HintsService.java:220 - Paused hints dispatch
[info] c.g.n.e.c.Cassandra - INFO [StorageServiceShutdownHook] 2019-05-29 07:50:03,933 Server.java:176 - Stop listening for CQL clients
[info] c.g.n.e.c.Cassandra - INFO [StorageServiceShutdownHook] 2019-05-29 07:50:03,934 Gossiper.java:1532 - Announcing shutdown
[info] c.g.n.e.c.Cassandra - INFO [StorageServiceShutdownHook] 2019-05-29 07:50:03,938 StorageService.java:2268 - Node localhost/127.0.0.1 state jump to shutdown
[info] c.g.n.e.c.Cassandra - INFO [StorageServiceShutdownHook] 2019-05-29 07:50:05,941 MessagingService.java:984 - Waiting for messaging service to quiesce
[info] c.g.n.e.c.Cassandra - INFO [ACCEPT-localhost/127.0.0.1] 2019-05-29 07:50:05,948 MessagingService.java:1338 - MessagingService has terminated the accept() thread
[info] c.g.n.e.c.Cassandra - INFO [StorageServiceShutdownHook] 2019-05-29 07:50:06,076 HintsService.java:220 - Paused hints dispatch
[info] c.g.n.e.c.l.WindowsCassandraNode - Successfully sent ctrl+c to process with id: 7276.
[info] c.g.n.e.c.l.WindowsCassandraNode - Apache Cassandra Node '7276' is stopped
[info] c.g.n.e.c.l.LocalCassandraDatabase - Apache Cassandra '3.11.1' is stopped (3490 ms)
[info] c.g.n.e.c.l.LocalCassandraDatabase - The working directory 'C:\Users\manu\AppData\Local\Temp\embedded-cassandra\3.11.1\0d155e04-97d5-4927-87ac-d46824a77c32' was deleted.
[debug] c.g.n.e.c.t.TestCassandra - TestCassandra 3.11.1 is stopped
Unable to start TestCassandra 3.11.1
com.github.nosan.embedded.cassandra.CassandraException: Unable to start TestCassandra 3.11.1
at com.github.nosan.embedded.cassandra.test.TestCassandra.start(TestCassandra.java:128)
at UnitSpecs.RepositorySpecs.UsersRepositorySpecs.$anonfun$new$3(UsersRepositorySpecs.scala:146)
at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
Ideally I should get an error similar to what I would get if I were using cqlsh.
Is there a way to get more descriptive errors?
I have tried to reproduce your issue, but no luck.
import com.github.nosan.embedded.cassandra.cql.CqlScript;
import com.github.nosan.embedded.cassandra.test.TestCassandra;
class Scratch {
public static void main(String[] args) {
TestCassandra testCassandra = new TestCassandra(CqlScript.statements(createKeyspace(),
createUserTable()));
testCassandra.start();
try {
System.out.println(testCassandra.getSettings());
}
finally {
testCassandra.stop();
}
}
private static String createUserTable() {
return "CREATE TABLE users ( bucket int, "
+ "email text, "
+ "firstname text, "
+ "lastname text, "
+ "authprovider text, "
+ "password text, "
+ "confirmed boolean, "
+ "id UUID, hasher text, "
+ "salt text, "
+ "PRIMARY KEY ((bucket, email), authprovider,firstname, lastname) )";
}
private static String createKeyspace() {
return "CREATE KEYSPACE test WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor' : 1}";
}
}
Output:
Exception in thread "main" com.github.nosan.embedded.cassandra.CassandraException: Unable to start TestCassandra 3.11.4
at com.github.nosan.embedded.cassandra.test.TestCassandra.start(TestCassandra.java:156)
at com.github.nosan.embedded.cassandra.Scratch.main(Scratch.java:27)
Caused by: com.datastax.oss.driver.api.core.servererrors.InvalidQueryException: No keyspace has been specified. USE a keyspace, or explicitly specify keyspace.tablename
at com.datastax.oss.driver.api.core.servererrors.InvalidQueryException.copy(InvalidQueryException.java:48)
at com.datastax.oss.driver.internal.core.util.concurrent.CompletableFutures.getUninterruptibly(CompletableFutures.java:113)
at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:53)
at com.datastax.oss.driver.internal.core.cql.CqlRequestSyncProcessor.process(CqlRequestSyncProcessor.java:30)
at com.datastax.oss.driver.internal.core.session.DefaultSession.execute(DefaultSession.java:207)
at com.datastax.oss.driver.api.core.CqlSession.execute(CqlSession.java:47)
at com.datastax.oss.driver.api.core.CqlSession.execute(CqlSession.java:56)
at com.github.nosan.embedded.cassandra.test.util.CqlUtils.execute(CqlUtils.java:68)
at com.github.nosan.embedded.cassandra.test.util.CqlUtils.execute(CqlUtils.java:47)
at com.github.nosan.embedded.cassandra.test.util.CqlSessionUtils.execute(CqlSessionUtils.java:43)
at com.github.nosan.embedded.cassandra.test.CqlSessionConnection.execute(CqlSessionConnection.java:60)
at com.github.nosan.embedded.cassandra.test.DefaultConnection.execute(DefaultConnection.java:53)
at com.github.nosan.embedded.cassandra.test.TestCassandra.executeScripts(TestCassandra.java:256)
at com.github.nosan.embedded.cassandra.test.TestCassandra.doStart(TestCassandra.java:285)
at com.github.nosan.embedded.cassandra.test.TestCassandra.start(TestCassandra.java:147)
I haven't been able to figure out why the Caused by isn't printed, but I have found this workaround:
try {
testCassandra.start()
println(s"cassandra state is ${testCassandra.getState}")
testCassandra.executeScripts(cqlStatements)
//println(s"result of execution is ${result}")
//val settings = testCassandra.getSettings
//println(s"settings are ${settings}")
} catch {
case e:Exception => {
println(s"exception ${e} caused by ${e.getCause}")
//println(s"caused by ${e.getCause()}")
fail( new Throwable(e.getCause))
}
} finally {
testCassandra.stop()
}
The above prints:
org.scalatest.exceptions.TestFailedException was thrown.
ScalaTestFailureLocation: UnitSpecs.RepositorySpecs.UsersRepositorySpecs at (UsersRepositorySpecs.scala:157)
...
Caused by: java.lang.Throwable: com.datastax.driver.core.exceptions.InvalidQueryException: No keyspace has been specified. USE a keyspace, or explicitly specify keyspace.tablename
I found the reason. It seems I wasn't using TestCassandra correctly. I didn't realize that if I create TestCassandra and also specify the CQL statements at instantiation time, the start method runs the queries as well. In my code, I was creating TestCassandra as follows:
new TestCassandra(factory, cqlStatements)
and was calling both start and executeScripts:
testCassandra.start()
testCassandra.executeScripts(cqlStatements)
I commented out that executeScripts line, and now I see both the Exception and the Caused by.
I think it would be better if the API docs clearly mentioned that start has the side effect of executing the statements as well.
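To spell out the two usage patterns (a hedged sketch based only on the API calls shown in this thread; factory is whatever CassandraFactory the test environment already provides):

// Option 1: pass the statements at construction; start() executes them too
val cassandraA = new TestCassandra(factory, cqlStatements)
cassandraA.start()
// ... run the tests ...
cassandraA.stop()

// Option 2: construct without statements and execute them explicitly after start()
val cassandraB = new TestCassandra(factory)
cassandraB.start()
cassandraB.executeScripts(cqlStatements)
cassandraB.stop()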
I'm trying to write a Spark Structured Streaming (2.3) dataset to ScyllaDB (Cassandra).
My code to write the dataset:
def saveStreamSinkProvider(ds: Dataset[InvoiceItemKafka]) = {
ds
.writeStream
.format("cassandra.ScyllaSinkProvider")
.outputMode(OutputMode.Append)
.queryName("KafkaToCassandraStreamSinkProvider")
.options(
Map(
"keyspace" -> namespace,
"table" -> StreamProviderTableSink,
"checkpointLocation" -> "/tmp/checkpoints"
)
)
.start()
}
My ScyllaDB streaming sink classes:
class ScyllaSinkProvider extends StreamSinkProvider {
override def createSink(sqlContext: SQLContext,
parameters: Map[String, String],
partitionColumns: Seq[String],
outputMode: OutputMode): ScyllaSink =
new ScyllaSink(parameters)
}
class ScyllaSink(parameters: Map[String, String]) extends Sink {
override def addBatch(batchId: Long, data: DataFrame): Unit =
data.write
.cassandraFormat(
parameters("table"),
parameters("keyspace")
//parameters("cluster")
)
.mode(SaveMode.Append)
.save()
}
However, when I run this code, I receive an exception:
...
[error] +- StreamingExecutionRelation KafkaSource[Subscribe[transactions_load]], [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13]
[error] at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
[error] at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
[error] Caused by: org.apache.spark.sql.AnalysisException: 'write' can not be called on streaming Dataset/DataFrame;
[error] at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
[error] at org.apache.spark.sql.Dataset.write(Dataset.scala:3103)
[error] at cassandra.ScyllaSink.addBatch(CassandraDriver.scala:113)
[error] at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3$$anonfun$apply$16.apply(MicroBatchExecution.scala:477)
...
I have seen a similar question, but that is for CosmosDB - Spark CosmosDB Sink: org.apache.spark.sql.AnalysisException: 'write' can not be called on streaming Dataset/DataFrame
You could convert it to an RDD first and then write:
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row, SaveMode}
import org.apache.spark.sql.catalyst.CatalystTypeConverters
import org.apache.spark.sql.execution.streaming.Sink

class ScyllaSink(parameters: Map[String, String]) extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit = synchronized {
    val schema = data.schema
    // this ensures that the same query plan will be used (was `df`, which doesn't exist here)
    val rdd: RDD[Row] = data.queryExecution.toRdd.mapPartitions { rows =>
      val converter = CatalystTypeConverters.createToScalaConverter(schema)
      rows.map(converter(_).asInstanceOf[Row])
    }
    // write the RDD to Cassandra
  }
}
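To finish that final comment, one option (a hedged sketch, assuming the Spark Cassandra Connector is on the classpath; cassandraFormat is the same helper the question already uses) is to rebuild a non-streaming DataFrame from that RDD and save it:

// import org.apache.spark.sql.cassandra._ provides cassandraFormat
val spark = data.sparkSession
spark.createDataFrame(rdd, schema)
  .write
  .cassandraFormat(parameters("table"), parameters("keyspace"))
  .mode(SaveMode.Append)
  .save()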
I am trying to integrate the Content API for Shopping in OpenCart into my PHP app and getting this error:
Fatal error: Uncaught exception 'GSC_ParseError' with message 'Not
Found' in
/home/public_html/admin/controller/module/GShoppingContent.php:2805
Stack trace: #0
/home/public_html/admin/controller/module/GShoppingContent.php(980):
_GSC_AtomParser::parse('Not Found') #1 /home/public_html/admin/controller/module/contentapi.php(76):
GSC_Client->insertProduct(Object(GSC_Product)) #2 [internal function]:
ControllerModuleContentApi->index(Array) #3
/home//public_html/vqmod/vqcache/vq2-system_modification_system_engine_action.php(71):
call_user_func(Array, Array) #4
/home//public_html/vqmod/vqcache/vq2-system_engine_front.php(89):
Action->execute(Object(Registry)) #5
/home//public_html/vqmod/vqcache/vq2-system_engine_front.php(63):
Front->execute(Object(Action)) #6
/home//public_html/admin/index.php(175):
Front->dispatch(Object(Action), Object(Action)) #7 {main} thrown in
/home//public_html/admin/controller/module/GShoppingContent.php on
line 2805
Here's my code:
require_once('GShoppingContent.php');

class ControllerModuleContentApi extends Controller {
    private $error = array();

    public function index() {
        $this->load->language('module/contentapi');
        $this->db->query("
            CREATE TABLE IF NOT EXISTS `contentapi` (
                `contentapiid` int(11) NOT NULL AUTO_INCREMENT,
                `contentapimerchantid` varchar(256) NOT NULL,
                `contentapiemail` varchar(256) NOT NULL,
                `contentapipassword` varchar(256) NOT NULL,
                PRIMARY KEY (`contentapiid`)
            ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
        ");
        $contentAPI11 = $this->db->query("select * from contentapi");
        $data['contentapidata'] = $contentAPI11->rows;
        if (isset($this->request->post['merchantid']) && isset($this->request->post['GoogleMerchantEmail']) && isset($this->request->post['GoogleMerchantPassword'])) {
            $merchantid = $this->request->post['merchantid'];
            $GoogleMerchantEmail = $this->request->post['GoogleMerchantEmail'];
            $GoogleMerchantPassword = $this->request->post['GoogleMerchantPassword'];
            $primaryid = $this->request->post['primaryid'];
            // escape the posted values before interpolating them into the query
            $this->db->query("UPDATE contentapi SET contentapimerchantid = '" . $this->db->escape($merchantid) . "', contentapiemail = '" . $this->db->escape($GoogleMerchantEmail) . "', contentapipassword = '" . $this->db->escape($GoogleMerchantPassword) . "' WHERE contentapiid = '" . (int)$primaryid . "'");
        }
        $retrieveapidetails = $this->db->query("select * from contentapi");
        if ($retrieveapidetails->num_rows > 0) {
            $retrieveapidetails1 = $retrieveapidetails->rows;
            foreach ($retrieveapidetails1 as $retrieveapidetails2) {
                $contentapimerchantid = $retrieveapidetails2["contentapimerchantid"];
                $contentapiemail = $retrieveapidetails2["contentapiemail"];
                $contentapipassword = $retrieveapidetails2["contentapipassword"];
            }
            // oc_product.product_id added to the select list: it is read below but was missing
            $retrieveproducts = $this->db->query("select oc_product.product_id, oc_product.model, oc_product.sku, oc_product.price, oc_product.image, oc_product_description.name, oc_product_description.description from oc_product, oc_product_description where oc_product.product_id = oc_product_description.product_id");
            $retrieveproducts1 = $retrieveproducts->rows;
            $client = new GSC_Client($contentapimerchantid);
            $client->login($contentapiemail, $contentapipassword);
            foreach ($retrieveproducts1 as $retrieveproducts2) {
                $productid = $retrieveproducts2["product_id"];
                $productname = $retrieveproducts2["name"];
                $productmodel = $retrieveproducts2["model"];
                $productprice = $retrieveproducts2["price"];
                $productimage = $retrieveproducts2["image"];
                $productsku = $retrieveproducts2["sku"];
                $productdescription = $retrieveproducts2["description"];
                $product = new GSC_Product();
                $product->setTitle($productname);
                $product->setDescription($productdescription);
                $link = 'https://testingsite.com/index.php?route=product/product&product_id=' . $productid;
                $product->setProductLink($link);
                $product->setSKU($productsku);
                $product->setImageLink('https://testingsite.com/' . $productimage);
                $product->setBrand($productimage); // note: this passes the image path as the brand, probably meant an actual brand field
                $product->setPrice($productprice, 'usd');
                $entry = $client->insertProduct($product);
            }
        }
    }
}