How can we take a full Hybris customer export?
I wrote an ImpEx script to export the data, but there are 2 million records in the database, so the ImpEx export is not working. Please suggest a way.
ImpEx should work; it may take some time, but it shouldn't fail (and if it is failing, you should post the error if you want to be helped).
For better performance you have to do it in code, using a FlexibleSearch query:
String flexiString = "SELECT {pk} FROM {Customer}";
FlexibleSearchQuery flexibleSearchQuery = new FlexibleSearchQuery(flexiString);
flexibleSearchQuery.setResultClassList(Arrays.asList(CustomerModel.class));
final SearchResult<CustomerModel> searchResult = flexibleSearchService.search(flexibleSearchQuery);
List<CustomerModel> results = searchResult.getResult();
if (!results.isEmpty()) {
    // Iterate over the CustomerModel list and append what you want to a file.
}
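With 2 million records, loading the whole result list into memory at once may be the real problem. A paginated variant is safer; this is a minimal sketch reusing the flexibleSearchService from above together with FlexibleSearchQuery's setStart/setCount, with the file-writing part left as a placeholder:
final int pageSize = 1000;
int start = 0;
List<CustomerModel> page;
do {
    final FlexibleSearchQuery query = new FlexibleSearchQuery("SELECT {pk} FROM {Customer}");
    query.setStart(start);    // offset of this page in the full result set
    query.setCount(pageSize); // number of records per page
    final SearchResult<CustomerModel> result = flexibleSearchService.search(query);
    page = result.getResult();
    for (final CustomerModel customer : page) {
        // Append what you want (e.g. uid, name) to the export file here.
    }
    start += pageSize;
} while (page.size() == pageSize);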
There is also an old method on a manager that could be used, but I don't recommend it because the managers are likely to be deprecated, as they use Jalo classes (some of those classes are deprecated, some aren't).
import de.hybris.platform.jalo.user.*;
import de.hybris.platform.jalo.type.*;
import de.hybris.platform.core.model.user.*;

Collection<Customer> users = UserManager.getInstance().findUsers(TypeManager.getInstance().getComposedType(Customer.class), null, null, null);
for (Customer cust : users) {
    // Iterate over each Customer and append what you want to a file.
}
Maybe you can use the virtualjdbc extension: https://help.hybris.com/6.3.0/hcd/8c7ec0628669101481ec9d2d8dbb3a7c.html
Also, there is no record limit with ImpEx, and the exported file will be small after compression.
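For reference, an ImpEx export script for customers might look something like this; this is only a sketch (the exact export-script syntax can differ between Hybris versions, and the column list is just an example):
# Sketch: export all customers via an ImpEx export script
INSERT_UPDATE Customer;uid[unique=true];name
"#% impex.exportItemsFlexibleSearch( ""select {pk} from {Customer}"" );"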
I've spent hours trying to find out why the Excel export with the cyber-duck/laravel-excel package works fine when the data source is a query, but simply stops formatting the Excel file correctly when using a custom serialiser.
There are no errors in the code, and it's a super simple Excel file. This happens even with the code posted in the documentation:
Usage:
$serialiser = new CustomSerialiser();
$excel = Exporter::make('Excel');
$excel->load($collection);
$excel->setSerialiser($serialiser);
return $excel->stream('filename.xlsx');
CustomSerialiser:
namespace App\Serialisers;
use Illuminate\Database\Eloquent\Model;
use Cyberduck\LaravelExcel\Contract\SerialiserInterface;
class ExampleSerialiser implements SerialiserInterface
{
public function getData($data)
{
$row = [];
$row[] = $data->field1;
$row[] = $data->relationship->field2;
return $row;
}
public function getHeaderRow()
{
return [
'Field 1',
'Field 2 (from a relationship)'
];
}
}
Any thoughts?
What software do you use to open the file? Excel? OpenOffice?
If you open the test folder > Unit > ExporterTest.php, you should see a working example in test_can_use_a_custom_serialiser.
You can change row 155 to $exporter = $this->app->make('cyber-duck/exporter')->make('Excel');, row 160 to $reader = ReaderFactory::create(Type::XLSX); (otherwise it would use a CSV), and comment out line 174 to keep the file so you can open it after the test has run.
The ExampleSerialiser you posted needs to be modified to match your Eloquent model and relationships. Also, the example uses the Eloquent version, and you mentioned the query builder. If you want to use the query builder version, you need to use loadQuery (I'll try to update the documentation next week to cover this use case). Feel free to drop me an email with your code so I can have a look and try to help out (it's a bit hard to understand the issue without seeing the actual implementation). You should find me on GitHub; I'm one of the Cyber-Duck guys working on our projects.
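For the query builder version, something along these lines should work; this is a sketch (the table name is hypothetical, and loadQuery is the method mentioned above):
$query = DB::table('stores'); // hypothetical table name
$serialiser = new CustomSerialiser();
$excel = Exporter::make('Excel');
$excel->loadQuery($query);
$excel->setSerialiser($serialiser);
return $excel->stream('filename.xlsx');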
I have a website that needs to periodically import 600-800 records from a CSV file.
As part of the import, the existing records are removed / deleted, then replaced by the newly imported data.
At present I am removing the existing items like this:
var itemsToRemove = _contentManager.Query(VersionOptions.Published, "Store").List();
foreach (var item in itemsToRemove)
{
_contentManager.Remove(item);
}
Then importing my new records like so:
var item = _contentManager.New("Store");
item.As<TitlePart>().Title = title;
item.As<StorePart>().Address1 = address1;
_contentManager.Create(item);
It works, but the process is taking so long that it is timing out.
Can anyone suggest a better or more efficient way of doing this? Or tell me how I could extend the timeout duration?
Thanks in advance.
I have a custom content type called Store, which has a Brands taxonomy field. A Store can have multiple Brands associated with it.
I have been tasked with building an import/export routine that allows the user to upload a CSV file containing new Stores and their associated Brands.
I can create the Store's other fields OK, but I can't work out how to set the taxonomy field.
Can anyone tell me how I access the taxonomy field for my custom content type?
Thanks in advance.
OK, so (as Bertrand suggested) using the Import/Export feature might be a better way to go, but as a relative noob on Orchard I don't have the time to spend looking at it, and I couldn't find a good tutorial.
Below is an alternative approach, using the TaxonomyService to programmatically assign Terms to a ContentItem.
First of all, inject the ContentManager and TaxonomyService into the constructor...
private ITaxonomyService _taxonomyService;
private IContentManager _contentManager;
public MyAdminController(IContentManager contentManager, ITaxonomyService taxonomyService)
{
_contentManager = contentManager;
_taxonomyService = taxonomyService;
}
Create your ContentItem & set the title
var item = _contentManager.New("MyContentType");
item.As<TitlePart>().Title = "My New Item";
_contentManager.Create(item);
Now we have a ContentItem to work with. Time to get your taxonomy & find your term(s)...
var taxonomy = _taxonomyService.GetTaxonomyByName("Taxonomy Name");
var termPart = _taxonomyService.GetTermByName(taxonomy.Id, "Term Name");
Add the terms to a List of type TermPart...
List<TermPart> terms = new List<TermPart>();
terms.Add(termPart);
Finally, call UpdateTerms, passing in the ContentItem, terms to assign and the name of the field on the ContentItem you want to update...
_taxonomyService.UpdateTerms(item, terms.AsEnumerable<TermPart>(), "My Field");
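Putting it all together (the content type, taxonomy, term, and field names are just the placeholders used above):
var item = _contentManager.New("MyContentType");
item.As<TitlePart>().Title = "My New Item";
_contentManager.Create(item);

var taxonomy = _taxonomyService.GetTaxonomyByName("Taxonomy Name");
var termPart = _taxonomyService.GetTermByName(taxonomy.Id, "Term Name");

var terms = new List<TermPart> { termPart };
_taxonomyService.UpdateTerms(item, terms.AsEnumerable<TermPart>(), "My Field");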
Hope this helps someone. Probably me next time round! : )
Background: the project is a data import utility for importing data from TSV files into an EF5 database through DbContext.
Problem: I need to do a lookup for foreign keys while doing the import. I have a way to do that, but the retrieval of the ID is not functioning.
So I have a TSV file; an example would be:
Code Name MyFKTableId
codevalue namevalue select * from MyFKTable where Code = 'SE'
So when I process the file and find a '...Id' column, I know I need to do a lookup to find the FK. The '...' is always the entity type, so this is super simple. The problem I have is that I don't have access to the properties of the results of foundEntity.
string childEntity = column.Substring(0, column.Length - 2);
DbEntityEntry recordType = myContext.Entry(childEntity.GetEntityOfReflectedType());
DbSqlQuery foundEntity = myContext.Set(recordType.Entity.GetType()).SqlQuery(dr[column].ToString());
Any suggestion would be appreciated. I need to keep this generic, so we can't use known-type casting. The Id property is accessible from IBaseEntity, so I can cast to that, but all other entity types must not be fixed.
Note: the SQL in the MyFKTableId value is not a requirement. If there is a better option that allows me to get away from SqlQuery(), I would be open to suggestions.
SOLVED:
OK, what I did was create a class called IdClass that only has a Guid property for the Id, modified my SQL to only return the Id, and then called SqlQuery(sql) on the Database rather than on the Set([Type]), like so:
IdClass x = ImportFactory.AuthoringContext.Database.SqlQuery<IdClass>(sql).FirstOrDefault();
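For reference, the projection class and the call together; a minimal sketch (IdClass is the single-property class described above, and the SQL must select only the Id column):
// Minimal projection type: EF materialises the single Id column into this class
public class IdClass
{
    public Guid Id { get; set; }
}

// e.g. sql = "select Id from MyFKTable where Code = 'SE'"
IdClass x = ImportFactory.AuthoringContext.Database.SqlQuery<IdClass>(sql).FirstOrDefault();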
I'm writing a workflow validator in Groovy to link two issues based on a custom field value input at case creation. It is required that the mapping from custom field value to Jira issue be unique; in other words, I need to ensure that only one issue has a particular custom field value. If more than one issue has the input custom field value, the validation should fail.
How or what do I return to cause a workflow validator to fail?
Example code:
// Set up jqlQueryParser object
jqlQueryParser = ComponentManager.getComponentInstanceOfType(JqlQueryParser.class) as JqlQueryParser
// Form the JQL query
query = jqlQueryParser.parseQuery('<my_jql_query>')
// Set up SearchService object used to query Jira
searchService = componentManager.getSearchService()
// Run the query to get all issues with Article number that match input
results = searchService.search(componentManager.getJiraAuthenticationContext().getUser(), query, PagerFilter.getUnlimitedFilter())
// Log at FATAL level because we should never have more than one case associated with a given KB article
if (results.getIssues().size() > 1) {
for (r in results.getIssues()) {
log.fatal('Custom field has more than one Jira issue associated with it. ' + r.getKey() + ' is one of the offending issues')
}
return "?????"
}
// Create link from new Improvement to parent issue
for (r in results.getIssues()) {
IssueLinkManager.createIssueLink(issue.getId(), r.getId(), 10201, 1, getJiraAuthenticationContext().getUser())
}
Try something like:
import com.opensymphony.workflow.InvalidInputException
invalidInputException = new InvalidInputException("Validation failure")
This is based on the Groovy Script Runner. If it doesn't work for you, I would recommend using some sort of framework to make scripting easier; I like using the Groovy Script Runner, the Jira Scripting Suite, or the Behaviours Plugin. All of them make script writing much easier and more intuitive.
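Putting this together with the question's example, the failing branch might look like this; a sketch (whether you assign the exception to invalidInputException or throw it depends on the validator plugin you're using):
import com.opensymphony.workflow.InvalidInputException

if (results.getIssues().size() > 1) {
    for (r in results.getIssues()) {
        log.fatal('Custom field has more than one Jira issue associated with it. ' + r.getKey() + ' is one of the offending issues')
    }
    // Fail the validation; some setups expect: throw new InvalidInputException(...)
    invalidInputException = new InvalidInputException('More than one issue has this custom field value')
}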