I am trying to insert thousands of records from my list into the database, and it takes a very long time to save each one.
Is there any way to improve the save performance?
for (int j = 0; j < listPeople.size(); j++) {
    Person people = listPeople.get(j);
    people.save();
}
Log
11-27 04:15:06.991 10268-10268/com.testall I/Sugar﹕ Person saved : 1
11-27 04:15:07.991 10268-10268/com.testall I/Sugar﹕ Person saved : .......
11-27 04:16:08.991 10268-10268/com.testall I/Sugar﹕ Person saved : 1000
There's a method named saveInTx which takes a collection of objects and saves them in a single transaction, which is much faster than saving each object individually. It's available in 1.3.
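For example, the loop above could be replaced by one bulk call. A minimal sketch, assuming Person extends SugarRecord as in the usual Sugar ORM setup (the wrapper class and method name here are just for illustration):
import com.orm.SugarRecord;
import java.util.List;

public class PersonBulkInsert {
    // Saves the whole list inside one SQLite transaction instead of
    // opening a new transaction for every individual save() call.
    public static void saveAll(List<Person> listPeople) {
        SugarRecord.saveInTx(listPeople);
    }
}
That way the database commits once for the entire list rather than once per Person.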
I am just wondering if someone could help me return a number randomly from given values for a NetSuite Saved Search.
For example, I want to return one of these 3 values at random: 196429, 190569, 150567.
Thank you so much.
The first thing that needs to be done here is to return the saved search values and store them in an array. Please try this and let me know how it goes!
1. Define an array like below:
var getArray = [];
2. Push the saved search values to the array by looping through the saved search results like below (I am pushing the internal id here; you can push any value you want):
for (var i = 0; i < searchResult.length; i++) {
    var id = searchResult[i].getValue({ name: 'internalid' });
    getArray.push(id);
}
3. The last step would be to generate a random item from the array:
var randomGen = getArray[Math.floor(Math.random() * getArray.length)];
log.debug('randomGen', randomGen);
I'm trying to read a dynamic table which is updated 1-3 times per second. I'm using Selenium with Python 3.x, but if you have a solution for another language I can work it out as well.
My question is: what is the best practice for reading frequently updated tables?
What I've tried:
driver.wait.until along with expected_conditions
re-read the table with a call to find_elements if a stale exception is thrown
Neither of them is working, due to the high refresh rate. I can successfully retrieve the table for a moment, but when I try to access its rows a moment later, I get a stale element exception. It's worth saying that when I try the same code on the same table while updates are less frequent, everything works fine.
I'm not posting any code for the moment, as I'd be interested in knowing what more experienced people do in this case.
My naive thinking: being no expert (but keen to learn) in web scraping or in any web-related languages, I'd say that if this were a problem with dynamic data, I'd take a pointer or reference to the actual table (and then loop dynamically over the rows). Is that possible in this framework?
We usually get a stale element exception when the WebElement in the DOM has changed compared to its state at the time the element reference was created.
Let's say the intent is to print the second data cell in a table every second; our code would look like this (sorry for giving the code in Java):
// This will work if the page is static
WebElement element = driver.findElement(By.xpath("//td[2]"));
for (int i = 0; i < 10; i++) {
    System.out.println(element.getText());
    Thread.sleep(1000);
}
To make this work for dynamically loading / refreshing tables, we need to locate the WebElement again before each iteration, something like this:
// This will work for dynamic content
WebElement element = null;
for (int i = 0; i < 10; i++) {
    element = driver.findElement(By.xpath("//td[2]"));
    System.out.println(element.getText());
    Thread.sleep(1000);
}
If you need to get the i-th cell value in a table, you can parameterize the value inside the XPath, such as:
// In this case we need the fifth cell value
int j = 5;
WebElement element = null;
for (int i = 0; i < 10; i++) {
    element = driver.findElement(By.xpath("//td[" + j + "]"));
    System.out.println(element.getText());
    Thread.sleep(1000);
}
If you need all five cell values:
WebElement element = null;
for (int i = 1; i <= 5; i++) {
    element = driver.findElement(By.xpath("//td[" + i + "]"));
    System.out.println(element.getText());
    Thread.sleep(1000);
}
Just construct a loop accordingly.
Hope this helps you. Thanks.
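One more note for a table that refreshes 1-3 times per second: the element can still go stale in the gap between findElement() and getText(). A small retry helper like the sketch below (the locator and retry count are just illustrative assumptions, not part of the original question) simply looks the cell up again when that happens:
import org.openqa.selenium.By;
import org.openqa.selenium.StaleElementReferenceException;
import org.openqa.selenium.WebDriver;

public class TableReader {
    // Re-locates the cell and retries the read whenever the table is
    // re-rendered between findElement() and getText().
    static String readCell(WebDriver driver, By locator, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            try {
                return driver.findElement(locator).getText();
            } catch (StaleElementReferenceException e) {
                // The row was replaced mid-read; loop and look it up again.
            }
        }
        throw new StaleElementReferenceException("Cell kept going stale: " + locator);
    }
}
For example, readCell(driver, By.xpath("//td[2]"), 5) keeps retrying up to five times before giving up.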
I am currently working with the MEAN stack. I have around 99,000 records in MongoDB. Each record contains an image array holding image URLs; the maximum size of this array is 10, so every record can have an imageURL array of length up to 10.
Now I want to fetch every record, compare the images of every record with each other using resemble.js, and then save the resulting average value back into the same record.
I used the async module and tried to implement this, but it takes too much time even with 5 records. I also used async's forEachLimit, but it doesn't help.
So basically, how can I manipulate this kind of large amount of data with Node and Mongo?
Is there any way to do it in batches? Any other solution? My current approach, in pseudocode:
loop1 ==> for each record in response {
    loop2 ==> convert all images of this record to base64 (resemble can't use images from URLs)
              and save them in a new array: TempArray1
    loop3 ==> for i over TempArray1.length (TempArray1[i]) {
        loop4 ==> for j over TempArray1.length (TempArray1[j]) {
            count += resemble(TempArray1[i], TempArray1[j]);
        }
        avg[i] = count / (TempArray1.length - 1);
    }
}
I have to display a list of books that contains more than 50,000 books.
I want to display a paged list where, for each page, I invoke a method that gives me 20 books.
List<Books> Ebooks = Books.GetLibrary(index);
But using PagedList doesn't match what I want, because it creates a subset of the whole collection of objects it is given and accesses each subset by index. And, referring to the definition of its method, I would have to load the whole list from the beginning.
I also followed this article
var EBooks = from b in db.Books select b;
int pageSize = 20;
int pageNumber = (page ?? 1);
return View(EBooks.ToPagedList(pageNumber, pageSize));
But doing so, I have to invoke (var EBooks = from b in db.Books select b;) for each index.
**EDIT**
I'm searching for guidance on how to achieve this:
List<Books> Ebooks = Books.GetLibrary(index);
and of course I have the total number of books, so I know the number of pages.
So I'm looking for pointers on how to achieve it: for each index, I invoke GetLibrary(index).
Any suggestions?
Have you tried something like:
var pagedBooks = Books.GetLibrary().Skip(pageNumber * pageSize).Take(pageSize);
This assumes a 0-based pageNumber.
If that doesn't work, can you add a new method to the Books class that gets a paged set directly from the data source?
Something like "Books.GetPage(pageNumber, pageSize);" that way you don't get the entire collection every time.
Other than that, you may have to find a way to cache the initial result of Books.GetLibrary() somewhere.
I have a CouchDB database which contains about 200,000 tweets; the keys are tweet IDs. I have a query which needs to retrieve all documents to look for some information. I'm using LightCouch to work with CouchDB in a Java web app. If I run a query like this:
List<JsonObject> tweets = dbClient.view("_all_docs").query(JsonObject.class);
and then loop through tweets, calling, for each JsonObject in tweets,
JsonObject tweetJson = dbClient.find(JsonObject.class, tweet.get("id").toString().replaceAll("\"", ""));
to retrieve each tweet one by one, it takes an extremely long time for 200,000 documents. If I load all documents in one single query using includeDocs(true)
List<JsonObject> allTweets = dbClient.view("_all_docs").includeDocs(true).query(JsonObject.class);
it causes an OutOfMemoryError, since the number of documents is too large. So how can I deal with this problem? I'm thinking about using limit(5000) to retrieve 5,000 documents at a time and loop through the whole database, but I don't know how to write the loop to continue retrieving the next 5,000 after the first 5,000 docs. One possible solution is using startKey and endKey, but I'm confused about how to use them when the key is a tweet ID.
Use queryPage, but make sure to use a String as the key.
See: https://github.com/lightcouch/LightCouch/issues/26#event-122327174
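A rough sketch of how that pagination loop could look (untested; it assumes LightCouch's Page API exposes getResultList(), isHasNext() and getNextParam(), and the page size of 5000 is just the figure from the question):
import com.google.gson.JsonObject;
import org.lightcouch.CouchDbClient;
import org.lightcouch.Page;

public class TweetPager {
    // Walks the whole database 5,000 docs at a time, passing the paging
    // token returned by one page into the request for the next page.
    static void processAllTweets(CouchDbClient dbClient) {
        String param = null; // null asks for the first page
        while (true) {
            Page<JsonObject> page = dbClient.view("_all_docs")
                    .includeDocs(true)
                    .queryPage(5000, param, JsonObject.class);
            for (JsonObject tweet : page.getResultList()) {
                // process one tweet here
            }
            if (!page.isHasNext()) {
                break;
            }
            param = page.getNextParam();
        }
    }
}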
0.1.6 still seems to show this behaviour.
A workaround that I found for this goes something like this:
String since = "0"; // "0" starts from the beginning; pass a saved sequence here if you want an offset
int size = 1;
while (size > 0) {
    ChangesResult resultSet = dbClient.changes()
            .since(since)
            .includeDocs(true)
            .limit(40000)
            .getChanges();
    List<ChangesResult.Row> rowList = resultSet.getResults();
    for (ChangesResult.Row feed : rowList) {
        // instantiate your object via gson from feed
        // ...
    }
    since = resultSet.getLastSeq(); // continue from the last sequence we saw
    size = rowList.size();
}