I have a website that needs to periodically import 600-800 records from a CSV file.
As part of the import, the existing records are deleted and then replaced by the newly imported data.
At present I am removing the existing items like this:
var itemsToRemove = _contentManager.Query(VersionOptions.Published, "Store").List();
foreach (var item in itemsToRemove)
{
    _contentManager.Remove(item);
}
Then importing my new records like so:
var item = _contentManager.New("Store");
item.As<TitlePart>().Title = title;
item.As<StorePart>().Address1 = address1;
_contentManager.Create(item);
It works, but the process is taking so long that it is timing out.
Can anyone suggest a better or more efficient way of doing this? Or tell me how I could extend the timeout duration?
Thanks in advance.
I've been trying to create a Suitelet that allows a saved search to be run on a collection of item records in NetSuite, using SuiteScript 1.0.
Pagination is quite easy everywhere else, but I can't get my head around how to do it in NetSuite.
For instance, we have 3,000 items and I'm trying to limit the results to 100 per page.
I'm struggling to understand how to apply a start row and a max row parameter as a filter so I can run the search and return just that slice of records.
I've seen plenty of scripts that allow you to exceed the limit of 1,000 records, but I'm trying to throttle the amount shown on screen, and I'm at a loss as to how to do this.
Any tips greatly appreciated.
function searchItems(request, response)
{
    var start = request.getParameter('start');
    var max = request.getParameter('max');
    if (!start)
    {
        start = 1;
    }
    if (!max)
    {
        max = 100;
    }
    var filters = [];
    filters.push(new nlobjSearchFilter('category', null, 'is', currentDeptID));
    var productList = nlapiSearchRecord('item', 'customsearch_product_search', filters);
    if (productList)
    {
        response.write('stuff here for the items');
    }
}
You can approach this a couple of different ways. Either way, you will definitely need to sort your search results by something meaningful and consistent, like the internal ID. Make sure your results are sorted either in your saved search definition or by adding a search column in your script.
You can continue building your search exactly as you are, and then use the native slice method on the productList array, using your start and max parameters to compute the arguments passed to slice.
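For example, a rough sketch assuming start and max are the request parameters from the Suitelet above, with start being 1-based:
var pageStart = parseInt(start, 10) - 1;                 // slice uses zero-based indexes
var pageEnd = pageStart + parseInt(max, 10);             // the end index is exclusive
var pageResults = productList.slice(pageStart, pageEnd); // at most max results for this page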
Another approach is to use the async API for searches. It will look similar to this:
var search = nlapiLoadSearch("item", "customsearch_product_search");
search.addFilter(new nlobjSearchFilter('category',null,'is',currentDeptID));
var productList = search.runSearch().getResults(start, end);
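If you go this route, keep in mind that (as I recall) getResults expects zero-based indexes with an exclusive end, and returns at most 1000 rows per call, so wiring it up to the Suitelet's start and max parameters might look something like this:
var startIndex = parseInt(start, 10) - 1; // convert the 1-based request parameter to a zero-based index
var endIndex = startIndex + parseInt(max, 10);
var productList = search.runSearch().getResults(startIndex, endIndex);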
For more references on this approach, check out the NetSuite Help page titled "Search APIs" and the reference page for nlobjSearch.
How can we do a full Hybris customer export?
I wrote an ImpEx script to export the data, but there are 2 million records in the database, so the ImpEx is not working. Please suggest a way.
ImpEx should work; it may take some time, but it shouldn't fail (and if it is failing, you should post the error if you want help).
For better performance, do it in code using a FlexibleSearch:
import de.hybris.platform.core.model.user.CustomerModel;
import de.hybris.platform.servicelayer.search.FlexibleSearchQuery;
import de.hybris.platform.servicelayer.search.SearchResult;
import java.util.Arrays;
import java.util.List;

String flexiString = "SELECT {pk} FROM {Customer}";
FlexibleSearchQuery flexibleSearchQuery = new FlexibleSearchQuery(flexiString);
flexibleSearchQuery.setResultClassList(Arrays.asList(CustomerModel.class));
final SearchResult<CustomerModel> searchResult = flexibleSearchService.search(flexibleSearchQuery);
List<CustomerModel> results = searchResult.getResult();
if (!results.isEmpty()) {
    // Iterate over the CustomerModel list and append what you want to a file.
}
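With two million customers you will probably also want to page the query rather than pull everything into memory at once. A minimal sketch, assuming FlexibleSearchQuery's setStart/setCount paging and an arbitrary batch size:
int pageSize = 1000; // arbitrary batch size for the export
int start = 0;
List<CustomerModel> batch;
do {
    FlexibleSearchQuery query = new FlexibleSearchQuery("SELECT {pk} FROM {Customer}");
    query.setStart(start);    // offset of the first row in this batch
    query.setCount(pageSize); // maximum number of rows to fetch
    SearchResult<CustomerModel> result = flexibleSearchService.search(query);
    batch = result.getResult();
    // append this batch of customers to the export file
    start += pageSize;
} while (batch.size() == pageSize);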
There is also an old method on a manager that could be used, but I don't recommend it because the managers are likely to be deprecated, as they rely on Jalo classes (some of those classes are already deprecated, some aren't).
import de.hybris.platform.jalo.user.*;
import de.hybris.platform.jalo.type.*;
import de.hybris.platform.core.model.user.*;

Collection<Customer> users = UserManager.getInstance().findUsers(TypeManager.getInstance().getComposedType(Customer.class), null, null, null);
for (Customer cust : users) {
    // Iterate over each Customer and append what you want to a file.
}
Maybe you can use the virtualjdbc extension: https://help.hybris.com/6.3.0/hcd/8c7ec0628669101481ec9d2d8dbb3a7c.html
Also, there is no limit with ImpEx; the ImpEx file will be small after compression.
I have a CouchDB database which contains about 200,000 tweets; the keys are tweet IDs. I have a query which needs to retrieve all documents to look for some information. I'm using LightCouch to work with CouchDB in a Java web app. If I query with the dbClient like this:
List<JsonObject>tweets = dbClient.view("_all_docs").query(JsonObject.class);
and then loop through tweets, for each JsonObject in tweets, use
JsonObject tweetJson = dbClient.find(JsonObject.class, tweet.get("id").toString().replaceAll("\"", ""));
to retrieve each tweet one by one, it takes an extremely long time for 200,000 documents. If I load all the documents in one single query using includeDocs(true):
List<JsonObject>allTweets = dbClient.view("_all_docs").includeDocs(true).query(JsonObject.class);
it causes an OutOfMemoryError since the number of documents is too large. So how can I deal with this problem? I'm thinking about using limit(5000) to retrieve 5,000 documents at a time and loop through the whole database, but I don't know how to write the loop so it continues with the next 5,000 after the first 5,000 docs. One possible solution is using startKey and endKey, but I'm confused about how to use them when the key is the tweet ID.
Use queryPage, but make sure to use a String as the key.
See: https://github.com/lightcouch/LightCouch/issues/26#event-122327174
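For illustration, a rough sketch of paging through _all_docs with queryPage, assuming the queryPage(rowsPerPage, param, classOfT) overload and Page accessors from LightCouch 0.1.x (param is the paging token, null for the first page):
Page<JsonObject> page = dbClient.view("_all_docs")
        .includeDocs(true)
        .queryPage(5000, null, JsonObject.class);
List<JsonObject> docs = page.getResultList();
String nextParam = page.getNextParam(); // pass this back into queryPage to fetch the next 5000 docs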
0.1.6 still seems to show this behaviour.
A workaround that I found for this goes something like this:
Changes changes = dbClient.changes()
        .since(null) // or .since(since) if you want an offset
        .includeDocs(true);

int size = 1;
getCursor("0");

while (size > 0) {
    ChangesResult resultSet = changes.limit(40000).getChanges();
    List<ChangesResult.Row> rowList = resultSet.getResults();
    for (ChangesResult.Row feed : rowList) {
        // instantiate your object via gson here
        // ...
    }
    getCursor(resultSet.getLastSeq());
    size = rowList.size();
}
I seem to be having a problem where values assigned to the fields of a content item with a custom content part are not persisting.
I have to create the content item (OrchardServices.ContentManager.Create) first before calling the following code, which modifies a field value:
var fields = contentItem.As<MyPart>().Fields;
var imageField = fields.FirstOrDefault(o => o.Name.Equals("Image"));
if (imageField != null)
{
    ((MediaLibraryPickerField)imageField).Ids = new int[] { imageId };
}
The above code works perfectly against an item that already exists, but the imageId value is lost if this is done before creating the item.
Please note, this is not exclusive to MediaLibraryPickerFields.
I noticed that other people have reported this as well:
https://orchard.codeplex.com/workitem/18412
Is it simply the case that an item must be created prior to amending its field values?
This would be a shame, as I'm assigning these fields as part of a large import process, and it would hurt performance to create the item and then modify it only to update it again.
As the comments on this issue explain, you do need to call Create. I'm not sure I understand why you think that is an issue, however.
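For reference, a minimal sketch of that order of operations, assuming a hypothetical "MyType" content type that carries MyPart and the field name from the question: create the item first, then assign the field, then publish.
var item = _contentManager.New("MyType"); // hypothetical content type carrying MyPart
_contentManager.Create(item, VersionOptions.Draft);

var imageField = item.As<MyPart>().Fields
    .OfType<MediaLibraryPickerField>()
    .FirstOrDefault(f => f.Name == "Image");
if (imageField != null)
{
    imageField.Ids = new int[] { imageId };
}

_contentManager.Publish(item);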
I need to parse an XLIFF file using C#, but I'm having some trouble. These files are fairly complex, containing a huge amount of nodes.
Basically, all I need to do is read the source node from each trans-unit node, do some processing on it, and insert the processed text into the corresponding target node (which will always be present, but empty).
An example of one of the nodes I need to parse would be (the whole file may contain hundreds of these):
<trans-unit id="0000000002" datatype="text" restype="string">
  <source>Windows Update is not installed</source>
  <target/>
  <iws:segment-metadata tm_score="0.00" ws_word_count="6" max_segment_length="0">
    <iws:status target_content="placeholders_only"/>
  </iws:segment-metadata>
  <iws:boundary-seg sequence="bs20721"/>
  <iws:markup-seg sequence="0000000001"/>
</trans-unit>
The trans-unit nodes can be buried deep in the files, and the header section contains a lot of data. I'd like to use LINQ to XML to read the data, but I'm not having any luck getting it to work. Here's my current code (just trying to read and output the source nodes from the file):
XDocument doc = XDocument.Load(path);
Console.WriteLine("Before loop");
foreach (var transUnitNode in doc.Descendants("trans-unit"))
{
    Console.WriteLine("In loop");
    XElement sourceNode = transUnitNode.Element("source");
    XElement targetNode = transUnitNode.Element("target");
    Console.WriteLine("Source: " + sourceNode.Value);
}
I never see 'In loop' and I don't know why. Can someone tell me what I'm doing wrong here, or suggest a better way to achieve what I'm trying to do?
Thanks.
Try
XNamespace df = doc.Root.Name.Namespace;
foreach (XElement transUnitNode in doc.Descendants(df + "trans-unit"))
{
    XElement sourceNode = transUnitNode.Element(df + "source");
    // and so on; use the df namespace object to qualify any element names
}
See also http://msdn.microsoft.com/en-us/library/bb387093.aspx.
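Putting it together, a sketch of the full read-process-write pass, assuming the XLIFF namespace is the root element's default namespace and using a hypothetical Process method for whatever transformation you need:
XDocument doc = XDocument.Load(path);
XNamespace df = doc.Root.Name.Namespace;

foreach (XElement transUnit in doc.Descendants(df + "trans-unit"))
{
    XElement sourceNode = transUnit.Element(df + "source");
    XElement targetNode = transUnit.Element(df + "target");
    if (sourceNode == null || targetNode == null)
        continue;

    // Process is a hypothetical placeholder for the real processing step
    targetNode.Value = Process(sourceNode.Value);
}

doc.Save(path);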