I'm trying to open a new connection and do updates as follows:
for (String str : queries) {
    query = getQuery(str);
    TTransport tx_CUCR = getNewTransaction();
    try {
        System.out.println("opening...");
        tx_CUCR.open();
        CqlQuery<String, String, ByteBuffer> cqlQuery = new CqlQuery<String, String, ByteBuffer>(
                keyspace, StringSerializer.get(), StringSerializer.get(), ByteBufferSerializer.get());
        cqlQuery.setQuery(query);
        System.out.println("executing...");
        QueryResult<CqlRows<String, String, ByteBuffer>> result = cqlQuery.execute();
        if (null != result) {
            rows = result.get();
        }
        System.out.println("flushing...");
        tx_CUCR.flush();
    } catch (HectorException e) {
        e.printStackTrace();
        translateException(e);
    } catch (TTransportException e) {
        e.printStackTrace();
    } finally {
        tx_CUCR.close();
    }
}
where queries contains 0.5 million keys. While running through the for loop, after a few successful updates, it hits the following error while opening a new connection:
opening...
org.apache.thrift.transport.TTransportException: java.net.NoRouteToHostException: Cannot assign requested address
at org.apache.thrift.transport.TSocket.open(TSocket.java:183)
at org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
at com.germinait.influence.cassandra.utils.CassandraUtils.executeQuery(CassandraUtils.java:546)
at com.germinait.influence.cassandra.utils.CassandraUtils.updateByKey(CassandraUtils.java:484)
at com.germinait.influence.cassandra.utils.IACassandraUtils.update(IACassandraUtils.java:298)
at com.germinait.influence.cassandra.utils.IACassandraUtils.updateNodeScores(IACassandraUtils.java:274)
at com.germinait.influence.cassandra.data.IACassBONodeScores.commit(IACassBONodeScores.java:94)
at com.germinait.influence.facebook.db.updates.UpdateWithinGraph.initScores(UpdateWithinGraph.java:1048)
at com.germinait.influence.facebook.db.updates.UpdateWithinGraph.initializePersonInfluenceScores(UpdateWithinGraph.java:981)
at com.germinait.influence.facebook.db.updates.UpdateWithinGraph.applyInfluenceRankAlgo(UpdateWithinGraph.java:726)
at com.germinait.influence.utils.IAUpdateUtils.runAlgorithm(IAUpdateUtils.java:296)
at com.germinait.influence.NetworkIAMain.IA_runAlgorithm(NetworkIAMain.java:422)
at com.germinait.influence.NetworkIAMain.main(NetworkIAMain.java:587)
Caused by: java.net.NoRouteToHostException: Cannot assign requested address
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384)
at java.net.Socket.connect(Socket.java:546)
at org.apache.thrift.transport.TSocket.open(TSocket.java:178)
... 12 more
If I pause at the exception while debugging, wait for some time, and then continue, it runs through a few more updates successfully before hitting this exception again at some other point.
How can I overcome this?
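For what it's worth, java.net.NoRouteToHostException: Cannot assign requested address on connect usually means the client has run out of local ephemeral ports, which would also explain why pausing in the debugger helps: sockets stuck in TIME_WAIT get a chance to drain. Below is a minimal sketch, under the assumption that getNewTransaction(), getQuery(), keyspace, query and rows are the same members used above, of opening the transport once and reusing it for every key instead of opening and closing a connection per query:

TTransport tx_CUCR = getNewTransaction();
try {
    System.out.println("opening...");
    tx_CUCR.open();                       // one connection for the whole run
    for (String str : queries) {
        query = getQuery(str);
        CqlQuery<String, String, ByteBuffer> cqlQuery = new CqlQuery<String, String, ByteBuffer>(
                keyspace, StringSerializer.get(), StringSerializer.get(), ByteBufferSerializer.get());
        cqlQuery.setQuery(query);
        QueryResult<CqlRows<String, String, ByteBuffer>> result = cqlQuery.execute();
        if (null != result) {
            rows = result.get();
        }
    }
    tx_CUCR.flush();
} catch (HectorException e) {
    translateException(e);
} catch (TTransportException e) {
    e.printStackTrace();
} finally {
    tx_CUCR.close();                      // close once, after all updates
}

If keyspace is already backed by Hector's own connection pool, it may be simpler still to drop the manual TTransport handling entirely and let the pool manage connections; that again is an assumption about how keyspace is obtained.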
ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
try (IgniteClient igniteClient = Ignition.startClient(cfg)) {
    System.out.println(">>> Thin client put-get example started.");
    final String CACHE_NAME = "put-get-example";
    ClientCache<Integer, Object> cache = igniteClient.getOrCreateCache(CACHE_NAME);
    Person p = new Person();
    // put
    HashMap<Integer, Person> hm = new HashMap<Integer, Person>();
    hm.put(1, p);
    cache.put(1, hm);
    // get
    HashMap<Integer, Person> map = (HashMap<Integer, Person>) cache.get(1);
    Person p2 = map.get(1);
    System.out.format(">>> Loaded [%s] from the cache.\n", p2);
}
catch (ClientException e) {
    System.err.println(e.getMessage());
    e.printStackTrace();
}
catch (Exception e) {
    System.err.format("Unexpected failure: %s\n", e);
    e.printStackTrace();
}
I use the Apache Ignite thin client.
I create a HashMap and put a Person (org.apache.ignite.examples.model.Person) object into it.
When I take it out of the HashMap, I get the following exception:
java.lang.ClassCastException: org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to org.apache.ignite.examples.model.Person

The exception is thrown at this line:
Person p2 = map.get(1);
However, there is no exception if I modify the code as follows:
BinaryObject bo = (BinaryObject) map.get(1);
Person p2 = bo.deserialize();
I don't think that should be necessary. Is there another solution?
Change the client cache definition:
ClientCache<Integer, Person> cache = igniteClient.getOrCreateCache(CACHE_NAME);
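With that declaration the value stored is a Person rather than a HashMap, so the thin client hands back a deserialized Person and no BinaryObject cast is needed. A minimal sketch, reusing cfg and CACHE_NAME from the question and dropping the wrapping HashMap (an assumption about the intent):

try (IgniteClient igniteClient = Ignition.startClient(cfg)) {
    // Value type is Person, so get() returns a Person instead of a BinaryObject.
    ClientCache<Integer, Person> cache = igniteClient.getOrCreateCache(CACHE_NAME);

    Person p = new Person();
    cache.put(1, p);            // store the object directly, not inside a HashMap

    Person p2 = cache.get(1);   // no cast and no deserialize() needed
    System.out.format(">>> Loaded [%s] from the cache.%n", p2);
}

If the HashMap really is needed as the cache value, then the BinaryObject/deserialize() route shown in the question is the usual way to get the nested objects back out.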
I am getting the below error while trying to insert multiple entities into Azure Table storage:
com.microsoft.azure.storage.table.TableServiceException: Bad Request
at com.microsoft.azure.storage.table.TableBatchOperation$1.postProcessResponse(TableBatchOperation.java:525)
at com.microsoft.azure.storage.table.TableBatchOperation$1.postProcessResponse(TableBatchOperation.java:433)
at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:146)
Below is the Java code for batch insert:
public BatchInsertResponse batchInsert(BatchInsertRequest request) {
    BatchInsertResponse response = new BatchInsertResponse();
    String erpName = request.getErpName();
    HashMap<String, List<TableEntity>> tableNameToEntityMap = request.getTableNameToEntityMap();
    HashMap<String, List<TableEntity>> errorMap = new HashMap<String, List<TableEntity>>();
    HashMap<String, List<TableEntity>> successMap = new HashMap<String, List<TableEntity>>();
    CloudTable cloudTable = null;
    for (Map.Entry<String, List<TableEntity>> entry : tableNameToEntityMap.entrySet()) {
        try {
            cloudTable = azureStorage.getTable(entry.getKey());
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Define a batch operation.
        TableBatchOperation batchOperation = new TableBatchOperation();
        List<TableEntity> value = entry.getValue();
        for (int i = 0; i < value.size(); i++) {
            TableEntity entity = value.get(i);
            batchOperation.insertOrReplace(entity);
            if (i != 0 && i % batchSize == 0) {
                try {
                    cloudTable.execute(batchOperation);
                    batchOperation.clear();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
        try {
            cloudTable.execute(batchOperation);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    return response;
}
The above code works fine if I set batchSize to 10, but if I set it to 100 or 1000 it throws the Bad Request error.
Please help me to resolve this error. I am using Spring Boot and Azure Storage Java SDK version 4.3.0.
As Aravind mentioned, a 400 error usually means there's something wrong with your data. According to the linked documentation, an entity group transaction will fail if one or more of the following conditions are not met:
All entities subject to operations as part of the transaction must have the same PartitionKey value.
An entity can appear only once in the transaction, and only one operation may be performed against it.
The transaction can include at most 100 entities, and its total payload may be no more than 4 MB in size.
All entities are subject to the limitations described in Understanding the Table Service Data Model.
Please check your entities against these four rules and ensure that you're not violating any of them. In your case rule 3 is the likely culprit: the check if (i != 0 && i % batchSize == 0) only flushes after the entity at index i has already been added, so with batchSize = 100 the first flush sends 101 entities, and with batchSize = 1000 it is far over the 100-entity limit either way. A sketch of chunking the batch is shown below.
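A minimal sketch of that chunking, assuming entities stands for one of the lists from tableNameToEntityMap, that all of its entries share the same PartitionKey (rule 1), and that cloudTable is obtained as in the question:

// Flush in chunks of at most 100 entities (the Table service batch limit).
// Assumes every entity in `entities` has the same PartitionKey.
TableBatchOperation batchOperation = new TableBatchOperation();
for (TableEntity entity : entities) {
    batchOperation.insertOrReplace(entity);
    if (batchOperation.size() == 100) {
        cloudTable.execute(batchOperation);   // throws StorageException on failure
        batchOperation.clear();
    }
}
if (!batchOperation.isEmpty()) {
    cloudTable.execute(batchOperation);       // flush the remainder
}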
I have set up a single Cassandra node on a VM, and I have to create a table with 70,000 columns. For this I have written Java code that reads a JSON file and creates the table.
Here is my Java code snippet; when I run it, it throws an exception after creating some columns. The exception stack follows the code.
public void createTable(String keyspaceName, String tableName) throws FileNotFoundException {
    JSONParser jsonParser = new JSONParser();
    FileReader fileReader;
    String filePath = "";
    String columnHeader = "";
    //String completeColumnHeader = "";
    try {
        System.out.println("Inside Create Table");
        session.executeAsync("DROP TABLE IF EXISTS " + keyspaceName + "." + tableName + ";");
        String createQuery = "CREATE TABLE " + keyspaceName + "." + tableName + "(\"P:LanguageID\" text, "
                + "\"P:PdmarticleID\" text, PRIMARY KEY(\"P:PdmarticleID\",\"P:LanguageID\"));";
        session.execute(createQuery);
        System.out.println("Table created");
        filePath = "CassandraTableColumnHeader/FixColumnHeader.json";
        fileReader = new FileReader(filePath);
        JSONObject jsonObject = (JSONObject) jsonParser.parse(fileReader);
        JSONArray jsonArray = (JSONArray) jsonObject.get("columnHeaderName");
        int columnHeaderSize = jsonArray.size();
        int columnHeaderBatchSize = 1000;
        int fromIndex = 0;
        int toIndex = columnHeaderBatchSize;
        while (columnHeaderSize > 0) {
            columnHeaderSize -= columnHeaderBatchSize;
            for (int i = fromIndex; i < toIndex; i++) {
                columnHeader = (String) jsonArray.get(i);
                if (columnHeader.equals("P:PdmarticleID") || columnHeader.equals("P:LanguageID")) {
                    continue;
                }
                session.execute("ALTER TABLE " + keyspaceName + "." + tableName + " ADD " + "\"" + columnHeader + "\"" + " text;");
            }
            fromIndex = toIndex;
            if (columnHeaderSize < columnHeaderBatchSize) {
                toIndex += columnHeaderSize;
            } else {
                toIndex = toIndex + columnHeaderBatchSize;
            }
        }
    } catch (FileNotFoundException fnfe) {
        throw fnfe;
    } catch (ParseException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Exception in thread "main" com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.DriverException: Host replied with server error: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.FileNotFoundException: C:\apache-cassandra-new\data\data\system\schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697\system-schema_columnfamilies-tmplink-ka-4839-Data.db (The process cannot access the file because it is being used by another process)))
at com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:265)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:179)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:36)
at com.exportstagging.SparkTest.DataLoaderInCassandra.createTable(DataLoaderInCassandra.java:89)
at com.exportstagging.SparkTest.DataLoaderInCassandra.main(DataLoaderInCassandra.java:216)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.DriverException: Host replied with server error: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.FileNotFoundException: C:\apache-cassandra-new\data\data\system\schema_columnfamilies-45f5b36024bc3f83a3631034ea4fa697\system-schema_columnfamilies-tmplink-ka-4839-Data.db (The process cannot access the file because it is being used by another process)))
at com.datastax.driver.core.RequestHandler.reportNoMoreHosts(RequestHandler.java:216)
at com.datastax.driver.core.RequestHandler.access$900(RequestHandler.java:45)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.sendRequest(RequestHandler.java:276)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.run(RequestHandler.java:374)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
I am stuck here. Please help me. Thanks in advance.
If I were you I might reevaluate creating a table with 70k column headers. Your partition key P:PdmarticleID and full primary key (P:PdmarticleID, P:LanguageID) are the only two pieces of information you will be able to use to get results anyway. So having these other pieces of information explicitly stored in columns is not buying you anything.
A collection (e.g. a map) can hold up to 64K items, with certain other limitations (see http://wiki.apache.org/cassandra/CassandraLimitations). Is there a way you can split the columns so that you can create multiple tables, with some pieces of information stored in one table and some in another?
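To make the map suggestion concrete, here is a rough sketch against the same session; the keyspace/table names, attribute key, and values are placeholders, and whether a single map fits depends on the size limits linked above:

// One map column holds the attribute name/value pairs instead of
// tens of thousands of individual columns.
session.execute(
        "CREATE TABLE IF NOT EXISTS mykeyspace.mytable ("
        + "\"P:PdmarticleID\" text, "
        + "\"P:LanguageID\" text, "
        + "attributes map<text, text>, "
        + "PRIMARY KEY (\"P:PdmarticleID\", \"P:LanguageID\"));");

// Each former column becomes a map entry, so no ALTER TABLE per header is needed.
session.execute(
        "UPDATE mykeyspace.mytable SET attributes['SomeColumnHeader'] = 'some value' "
        + "WHERE \"P:PdmarticleID\" = 'article-1' AND \"P:LanguageID\" = 'en';");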
cassandra-thrift-1.1.2.jar
Problem code:
ColumnOrSuperColumn cosc = null;
org.apache.cassandra.thrift.Column c = new org.apache.cassandra.thrift.Column();
c.setName("full_name".getBytes("UTF-8"));
c.setValue("Test name".getBytes("UTF-8"));
c.setTimestamp(System.currentTimeMillis());
// insert data
// long timestamp = System.currentTimeMillis();
try {
    client.set_keyspace("CClient");
    bb = ByteBuffer.allocate(10);
    client.insert(bb.putInt(1),
            new ColumnParent("users"),
            c,
            ConsistencyLevel.QUORUM);
    bb.putInt(2);
    cosc = client.get(bb, cp, ConsistencyLevel.QUORUM);
}
catch (TimedOutException toe) {
    System.out.println(toe.getMessage());
}
catch (org.apache.cassandra.thrift.UnavailableException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
catch (Exception e) {
    e.printStackTrace();
}
finally {
    System.out.println(new String(cosc.getColumn().getName()) + "-" + new String(cosc.getColumn().getValue()));
}
The code shown above inserts junk or null into the database, and I don't understand why.
See how it looks on the CLI:
RowKey:
=> (column=full_name, value=Test name, timestamp=1345743985973)
Any help in this is greatly appreciated.
Thanks.
You're creating a row whose row key is raw bytes.
In cassandra-cli you'll probably see the row key if you list the rows as bytes.
E.g., in cassandra-cli type:
assume users keys as bytes;
list users;
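One more observation about the snippet itself, separate from the answer above: ByteBuffer.allocate(10).putInt(1) leaves the buffer's position at 4, and Thrift sends the buffer's remaining bytes (position to limit), so the key that actually reaches Cassandra is six zero bytes rather than the int you wrote, hence the blank-looking RowKey. If a readable key was the intent, a minimal sketch is to wrap a UTF-8 string and reuse the same buffer for the read; this would sit inside the same try/catch as the question's code, with a ColumnPath standing in for the undeclared cp:

// A readable UTF-8 row key; ByteBuffer.wrap() leaves position 0 and limit at
// the end of the string bytes, so exactly these bytes are sent as the key.
ByteBuffer key = ByteBuffer.wrap("1".getBytes("UTF-8"));

client.set_keyspace("CClient");
client.insert(key, new ColumnParent("users"), c, ConsistencyLevel.QUORUM);

// Read back the same row and column that were just written.
ColumnPath path = new ColumnPath("users");
path.setColumn("full_name".getBytes("UTF-8"));
cosc = client.get(key, path, ConsistencyLevel.QUORUM);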
I have some confusion that I want to clear up. I am inserting values into a database using ADO.NET. Let's say I want to insert 10 items; if I encounter an error while inserting the 5th item, it should roll back whatever I have already inserted into the database.
I just read about transactions and the Rollback method and tried to implement them in my program, but it still inserts 4 items and gives me the error message for the 5th item. It doesn't roll back the inserts.
Will transactions and rollback solve my issue, or do I need to use some other alternative?
Here is my code:
for (int i = 0; i < itemLength - 1; i++)
{
    //--- Start local transaction ---
    myTrans = Class1.conn.BeginTransaction();
    //--- Assign transaction object and connection to command object for a pending local transaction ---
    _insertQry = Class1.conn.CreateCommand();
    _insertQry.Connection = Class1.conn;
    _insertQry.Transaction = myTrans;
    _insertQry.CommandText = "INSERT INTO Product_PropertyValue(ItemNo, PropertyNo, ValueNo) VALUES (@ItemNo, @PropertyNo, @ValueNo)";
    //_insertQry = new SqlCommand("INSERT INTO Product_PropertyValue(ItemNo, PropertyNo, ValueNo) VALUES (@ItemNo, @PropertyNo, @ValueNo)", Class1.conn);
    _insertQry.Parameters.AddWithValue("@ItemNo", _itemNo[i]);
    _insertQry.Parameters.AddWithValue("@PropertyNo", _propNo);
    _insertQry.Parameters.AddWithValue("@ValueNo", _propValue);
    _insertQry.ExecuteNonQuery();
    myTrans.Commit();
}
Can anyone help me?
It sounds like you are trying to achieve an atomic commit. It either inserts completely or doesn't insert at all.
Try something like the following:
SqlTransaction objTrans = null;
using (SqlConnection objConn = new SqlConnection(strConnString))
{
    objConn.Open();
    objTrans = objConn.BeginTransaction();
    SqlCommand objCmd1 = new SqlCommand("insert into tbExample values(1)", objConn);
    SqlCommand objCmd2 = new SqlCommand("insert into tbExample values(2)", objConn);
    // Enlist both commands in the transaction (required once BeginTransaction
    // has been called on the connection).
    objCmd1.Transaction = objTrans;
    objCmd2.Transaction = objTrans;
    try
    {
        objCmd1.ExecuteNonQuery();
        objCmd2.ExecuteNonQuery();
        objTrans.Commit();
    }
    catch (Exception)
    {
        objTrans.Rollback();
    }
    finally
    {
        objConn.Close();
    }
}
Also take a look at
http://www.codeproject.com/Articles/10223/Using-Transactions-in-ADO-NET
I made 2 modifications to your code:
1) Moved the BeginTransaction() outside the for loop, so that all your 10 INSERT statements are in a single transaction; that is what you want if you want them to be atomic.
2) Added a try/catch block, so that you can roll back in case of errors.
//--- Start local transaction ---
myTrans = Class1.conn.BeginTransaction();
bool success = true;
try
{
    for (int i = 0; i < itemLength - 1; i++)
    {
        //--- Assign transaction object and connection to command object for a pending local transaction ---
        _insertQry = Class1.conn.CreateCommand();
        _insertQry.Connection = Class1.conn;
        _insertQry.Transaction = myTrans;
        _insertQry.CommandText = "INSERT INTO Product_PropertyValue(ItemNo, PropertyNo, ValueNo) VALUES (@ItemNo, @PropertyNo, @ValueNo)";
        //_insertQry = new SqlCommand("INSERT INTO Product_PropertyValue(ItemNo, PropertyNo, ValueNo) VALUES (@ItemNo, @PropertyNo, @ValueNo)", Class1.conn);
        _insertQry.Parameters.AddWithValue("@ItemNo", _itemNo[i]);
        _insertQry.Parameters.AddWithValue("@PropertyNo", _propNo);
        _insertQry.Parameters.AddWithValue("@ValueNo", _propValue);
        _insertQry.ExecuteNonQuery();
    }
}
catch (Exception ex)
{
    success = false;
    myTrans.Rollback();
}
if (success)
{
    myTrans.Commit();
}
Let me know if this doesn't work.
You are on the right path: ADO.NET supports transactions, so you will be able to roll back on errors.
Posting your code here would get you more specific guidance; however, since your question is very generic, I encourage you to follow the template provided by MSDN:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    // Start a local transaction.
    SqlTransaction sqlTran = connection.BeginTransaction();

    // Enlist a command in the current transaction.
    SqlCommand command = connection.CreateCommand();
    command.Transaction = sqlTran;

    try
    {
        // Execute two separate commands.
        command.CommandText =
            "INSERT INTO Production.ScrapReason(Name) VALUES('Wrong size')";
        command.ExecuteNonQuery();
        command.CommandText =
            "INSERT INTO Production.ScrapReason(Name) VALUES('Wrong color')";
        command.ExecuteNonQuery();

        // Commit the transaction.
        sqlTran.Commit();
        Console.WriteLine("Both records were written to database.");
    }
    catch (Exception ex)
    {
        // Handle the exception if the transaction fails to commit.
        Console.WriteLine(ex.Message);

        try
        {
            // Attempt to roll back the transaction.
            sqlTran.Rollback();
        }
        catch (Exception exRollback)
        {
            // Throws an InvalidOperationException if the connection
            // is closed or the transaction has already been rolled
            // back on the server.
            Console.WriteLine(exRollback.Message);
        }
    }
}