Anyone configured Apache Solr FieldReaderDataSource?

I have a database column containing XML, and I want to index the content of that column using Apache Solr. I have the following data-config.xml (configuration). The database name is "solrdb" and the column name is "xmlfield". There seems to be some problem with it; the error is shown at the bottom.
<dataConfig>
  <!-- Data source to connect to the database -->
  <dataSource
      name="XmlDocDS"
      type="JdbcDataSource"
      driver="com.mysql.jdbc.Driver"
      url="jdbc:mysql://127.0.0.1/solrdb"
      user="root"
      password="root" />
  <!-- Data source for getting the XML column data -->
  <dataSource
      name="solrFieldReaderDS"
      type="FieldReaderDataSource"/>
  <document>
    <entity
        name="xmltable"
        rootEntity="false"
        datasource="XmlDocDS"
        query="select xmlfield from xmltable">
      <field column="xmldata" blob="true" />
      <entity
          name="page"
          dataSource="solrFieldReaderDS"
          dataField="xmltable.xmldata"
          processor="XPathEntityProcessor"
          forEach="/page">
        <field column="id" xpath="/mediawiki/page/id"/>
        <field column="Title" xpath="/mediawiki/page/title"/>
      </entity>
    </entity>
  </document>
</dataConfig>
The error is following:
SEVERE: Exception while processing: xmltable document : null:org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to execute query: select xmlfield from xmltable Processing Document # 1

The error is thrown in this part of JDBC importer code:
try {
    Connection c = getConnection();
    stmt = c.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
    stmt.setFetchSize(batchSize);
    stmt.setMaxRows(maxRows);
    LOG.debug("Executing SQL: " + query);
    long start = System.currentTimeMillis();
    if (stmt.execute(query)) {
        resultSet = stmt.getResultSet();
    }
    LOG.trace("Time taken for sql :"
            + (System.currentTimeMillis() - start));
    colNames = readFieldNames(resultSet.getMetaData());
} catch (Exception e) {
    wrapAndThrow(SEVERE, e, "Unable to execute query: " + query);
}
So the error could be in the connection or in the query (something wrong with the DB?). Also grep the logs for "Executing SQL" and "Time taken for sql :".

There was an error in the connection: for some reason it was not able to connect to my local machine. I changed the database host and it connected. The problem now is that I have the configuration in place and FieldReaderDataSource seems to work fine, but when the import completes it reports documents indexed/updated = 0.
Here is my XML configuration:
<dataSource
    name="jdbcDataSource"
    driver="com.mysql.jdbc.Driver"
    url="xxxx"
    user="yyyy"
    password="zzzz" readOnly="true"/>
<dataSource
    name="solrFieldReaderDS"
    type="FieldReaderDataSource"/>
<document>
  <entity
      name="tabledata"
      dataSource="jdbcDataSource"
      query="select codeID,codeText from ArticlePoolState where codeID=3">
    <entity
        name="xmldata"
        dataSource="solrFieldReaderDS"
        forEach="/med"
        dataField="tabledata.codeText"
        processor="XPathEntityProcessor">
      <field column="title" xpath="/title"/>
    </entity>
  </entity>
</document>
The query is fine.


Close wizard after clicking a button in Odoo 12

Can you please help me with closing a wizard?
I have created a wizard from XML. When I add dates and click on the XLSX button, the XLSX file is generated and the wizard closes itself; that works fine.
But when I click on PDF, the PDF is generated successfully but the wizard remains open.
How can I close it?
Here is my XML code:
<record id="payment_invoice_wizard_form" model="ir.ui.view">
<field name="name">Invoice Payment Report</field>
<field name="model">invoice.payment_report</field>
<field name="arch" type="xml">
<form string="Invoice Payment Report">
<group>
<field name="start_date"/>
<field name="end_date"/>
<field name="status"/>
</group>
<!-- other fields -->
<footer>
<button name="print_pdf" string="Print" type="object" class="btn-primary"/>
<button name="print_xls" string="Print in XLS" type="object" class="btn-primary"/>
<button string="Cancel" class="btn-default" special="cancel" />
</footer>
</form>
</field>
</record>
On the Python side I am getting all the necessary data and returning from this function:
@api.multi
def print_pdf(self):
    # my code
    return self.env.ref('customer_products.pdf_products').report_action(self)
When Odoo launches the download action of a report, it checks whether the close_on_report_download attribute of the action is set to true; if so, it returns an action of type ir.actions.act_window_close, which closes the wizard.
@api.multi
def print_pdf(self):
    action = self.env.ref('customer_products.pdf_products').report_action(self)
    action.update({'close_on_report_download': True})
    return action
Edit:
You can implement the same logic yourself: override the QWeb ActionManager and check whether the option is passed through the action definition; if it is, close the window.
var ActionManager = require('web.ActionManager');
var session = require('web.session');

ActionManager.include({
    ir_actions_report: function (action, options) {
        var self = this;
        return $.when(this._super.apply(this, arguments), session.is_bound).then(function () {
            if (action && action.report_type === 'qweb-pdf' && action.close_on_report_download) {
                return self.do_action({ type: 'ir.actions.act_window_close' });
            }
        });
    },
});

ejabberd Search - Module failed to handle the query

I'm using this code to search for a user in ejabberd:
BareJid bareJid = JidCreate.bareFrom(_user_name + "@domain");
UserSearchManager sm = MainService.getUserSearchManager();
DomainBareJid sDomain = sm.getSearchServices().get(0);
Form form = sm.getSearchForm(sDomain).createAnswerForm();
form.setAnswer("user", _user_name);
ReportedData reportedData = sm.getSearchResults(form, sDomain);
but I got this error:
<iq from='vjud.mnyr' id='jeRII-100' to='admin@mnyr/rsrc' type='error' xml:lang='en'>
  <query xmlns='jabber:iq:search'>
    <x type='submit' xmlns='jabber:x:data'>
      <field type='text-single' var='user'>
        <value>1*</value>
      </field>
    </x>
  </query>
  <error code='500' type='wait'>
    <internal-server-error xmlns='urn:ietf:params:xml:ns:xmpp-stanzas' />
    <text xmlns='urn:ietf:params:xml:ns:xmpp-stanzas' xml:lang='en'>Module failed to handle the query</text>
  </error>
</iq>
and in the log:
Reason = {error,{{case_clause,undefined},[{io_lib_pretty,cind_rec,6,[{file,"io_lib_pretty.erl"},{line,813}]},{io_lib_pretty,cind_record,8,[{file,"io_lib_pretty.erl"},{line,765}]},{io_lib_pretty,cind_element,7,[{file,"io_lib_pretty.erl"},{line,849}]},{io_lib_pretty,cind_list,7,[{file,"io_lib_pretty.erl"},{line,819}]},{io_lib_pretty,cind_field,7,[{file,"io_lib_pretty.erl"},{line,795}]},{io_lib_pretty,cind_fields_tail,8,[{file,"io_lib_pretty.erl"},{line,779}]},{io_lib_pretty,cind_element,7,[{file,"io_lib_pretty.erl"},{line,849}]},{io_lib_pretty,cind_list,7,[{file,"io_lib_pretty.erl"},{line,819}]}]}}
Please help me. I'm using ejabberd v18.0.6 on macOS, with XmppFramework and Smack v4.
I can reproduce the problem easily. It is a small bug that I've now fixed in this commit:
https://github.com/processone/ejabberd/commit/1be21126342d503205798605725ba5ceef9de42b
Thanks for reporting it!

Odoo: how to filter values from related project_task_type

In Odoo, I inherited the "project" model and made some small changes.
Project model in my module:
class project(models.Model):
    _inherit = "project.project"
    _columns = {
        'is_project': fields.boolean("Is project", default=True),
    }


class project_task_type(models.Model):
    _inherit = "project.task.type"
    _columns = {
        'task_type_is_project': fields.boolean("Is project", default=True),
    }
Relation between project_project and project_task_type in the original project module:
project_project:

'type_ids': fields.many2many(
    'project.task.type', 'project_task_type_rel', 'project_id',
    'type_id', 'Tasks Stages',
    states={'close': [('readonly', True)], 'cancelled': [('readonly', True)]}),

project_task_type:

'project_ids': fields.many2many(
    'project.project', 'project_task_type_rel',
    'type_id', 'project_id', 'Projects'),
In the original form view:
<record id="edit_project" model="ir.ui.view">
  <field name="name">project.project.form</field>
  <field name="model">project.project</field>
  <field eval="2" name="priority"/>
  <field name="arch" type="xml">
    [...]
    <page string="Project Stages" attrs="{'invisible': [('use_tasks', '=', False)]}" name="project_stages">
      <field name="type_ids"/>
    </page>
    [...]
So my question is: how do I filter the type_ids records to get only the values from project_task_type where task_type_is_project = False?
I added a domain attribute to the field named "type_ids":
<field name="type_ids domain="[('type_id.task_type_is_poject','=',False)]"/>
<field name="type_ids domain="[('task_type_is_poject','=',False)]"/>
but without success.
I will be very grateful for any help.
I added a domain attribute to the field named "type_ids":
<field name="type_ids domain="[('type_id.task_type_is_poject','=',False)]"/>
<field name="type_ids domain="[('task_type_is_poject','=',False)]"/>
but without success.
The second way is the correct one: the domain operates on the model of the field, project.task.type, which means you should filter directly on its fields, in this case task_type_is_project. There is no type_id field.
The only problem I see is a typo: you forgot an 'r' in project.
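With both corrections applied, the field definition would read roughly like this (a sketch, assuming the rest of your inherited view stays unchanged):
<field name="type_ids" domain="[('task_type_is_project', '=', False)]"/>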

The given key is not present in the dictionary

Hi, I am trying to fetch the accounts from CRM 2011. I am fetching the data into an EntityCollection, but when I try to read or access data from the EntityCollection it displays the first record and then throws an error for the next one. Kindly have a look at the code below and advise me.
string fetch2 = @"
    <fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
      <entity name='account'>
        <attribute name='name' />
        <attribute name='address1_city' />
        <attribute name='primarycontactid' />
        <attribute name='telephone1' />
        <attribute name='accountid' />
        <order attribute='name' descending='false' />
        <filter type='and'>
          <condition attribute='accounttype' operator='eq' value='01' />
        </filter>
      </entity>
    </fetch>";
try
{
    EntityCollection fxResult = _service.RetrieveMultiple(new FetchExpression(fetch2));
    foreach (var e in fxResult.Entities)
    {
        Console.WriteLine("Id:{0},Name:{1},City:{2}",
            e.Attributes["accountid"].ToString(),
            e.Attributes["name"].ToString(),
            e.Attributes["address1_city"].ToString());
        // Console.WriteLine("Id:{0},Name:{1},City:{2}", e.ToEntity["accountid"]);
    }
}
catch (Exception e)
{
    Console.WriteLine("Error:==" + e.Message);
}
Before accessing an attribute you need to check whether it is present in the collection:
e.Attributes.Contains("address1_city")
If the collection contains the attribute, then you can access it safely:
string city = (string)e.Attributes["address1_city"];
The reason an attribute doesn't come back in the collection is that it is null or you are not retrieving it. In this case one of your attributes is probably null, maybe address1_city.
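A small sketch of that guarded pattern inside your loop (the variable name is only illustrative):
string city = null;
if (e.Attributes.Contains("address1_city"))
{
    // Read the value only when the attribute was actually returned.
    city = (string)e.Attributes["address1_city"];
}
Console.WriteLine("City:{0}", city ?? "<none>");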
When retrieving attribute values of late-bound Entity objects, the recommended approach is to use the method GetAttributeValue<T>. When the attribute is not present in the entity's attribute collection, it returns default(T).
The primary key ('id') of the record is always present when it is returned by the OrganizationService.
So your code should look like this:
EntityCollection fxResult = _service.RetrieveMultiple(new FetchExpression(fetch2));
foreach (var e in fxResult.Entities)
{
    Console.WriteLine(
        "Id:{0},Name:{1},City:{2}",
        e.Id,
        e.GetAttributeValue<string>("name"),
        e.GetAttributeValue<string>("address1_city"));
}
You can safely use the item indexer when you need to assign a value to an attribute, regardless of whether it is already present.
E.g. the following line of code is valid:
e["name"] = "Demo Accountname";

File inbound-channel-adapter in Spring Integration: aggregating multiple files into one master file for job processing

I have written code to combine multiple files into one single master file.
The issue is with the int:transformer, where I am getting one file at a time, although I have aggregated a List of File in the composite filter of the file inbound-channel-adapter. The list size in the composite filter is correct, but in the transformer bean the List of File always has size one; it never receives the full list of files accepted by the filter.
Here is my config:
<!-- Auto wiring -->
<context:component-scan base-package="com.nt.na21.nam.integration.*" />

<!-- Intercept and log every message -->
<int:logging-channel-adapter id="logger" level="DEBUG" />
<int:wire-tap channel="logger" />

<!-- Aggregating the processed output for OSS processing -->
<int:channel id="networkData" />
<int:channel id="requests" />

<int-file:inbound-channel-adapter id="pollProcessedNetworkData"
    directory="file:${processing.files.directory}" filter="compositeProcessedFileFilter"
    channel="networkData">
    <int:poller default="true" cron="*/20 * * * * *" />
</int-file:inbound-channel-adapter>

<bean id="compositeProcessedFileFilter"
    class="com.nt.na21.nam.integration.file.filter.CompositeFileListFilterForBaseLine" />

<int:transformer id="aggregateNetworkData"
    input-channel="networkData" output-channel="requests">
    <bean id="networkData" class="com.nt.na21.nam.integration.helper.CSVFileAggregator">
    </bean>
</int:transformer>
CompositeFileListFilterForBaseLine:
public class CompositeFileListFilterForBaseLine implements FileListFilter<File> {

    private final static Logger LOG = Logger
            .getLogger(CompositeFileListFilterForBaseLine.class);

    @Override
    public List<File> filterFiles(File[] files) {
        List<File> filteredFile = new ArrayList<File>();
        int index;
        String fetchedFileName = null;
        String fileCreatedDate = null;
        String todayDate = DateHelper.toddMM(new Date());
        LOG.debug("Date - dd-MM: " + todayDate);

        for (File f : files) {
            fetchedFileName = StringUtils.removeEnd(f.getName(), ".csv");
            index = fetchedFileName.indexOf("_");
            // Add plus one to index to skip underscore
            fileCreatedDate = fetchedFileName.substring(index + 1);
            // Format the created file date
            fileCreatedDate = DateHelper.formatFileNameDateForAggregation(fileCreatedDate);
            LOG.debug("file created date: " + fileCreatedDate + " today Date: "
                    + todayDate);
            if (fileCreatedDate.equalsIgnoreCase(todayDate)) {
                filteredFile.add(f);
                LOG.debug("File added to List of File: " + f.getAbsolutePath());
            }
        }
        LOG.debug("SIZE: " + filteredFile.size());
        LOG.debug("filterFiles method end.");
        return filteredFile;
    }
}
The class file for CSVFileAggregator:
public class CSVFileAggregator {

    private final static Logger LOG = Logger.getLogger(CSVFileAggregator.class);

    private int snePostion;

    protected String masterFileSourcePath = null;

    public File handleAggregateFiles(List<File> files) throws IOException {
        LOG.debug("materFileSourcePath: " + masterFileSourcePath);
        LinkedHashSet<String> allAttributes = null;
        Map<String, LinkedHashSet<String>> allAttrBase = null;
        Map<String, LinkedHashSet<String>> allAttrDelta = null;
        LOG.info("Aggregator releasing [" + files.size() + "] files");
        // ... (rest of the aggregation logic omitted)
    }
}
Log Output:
INFO : com.nt.na21.nam.integration.aggregator.NetFileAggregatorClient - NetFileAggregator context initialized. Polling input folder...
INFO : com.nt.na21.nam.integration.aggregator.NetFileAggregatorClient - Input directory is: D:\Projects\csv\processing
DEBUG: com.nt.na21.nam.integration.file.filter.CompositeFileListFilterForBaseLine - Date - dd-MM: 0103
DEBUG: com.nt.na21.nam.integration.file.filter.CompositeFileListFilterForBaseLine - file created date: 0103 today Date: 0103
DEBUG: com.nt.na21.nam.integration.file.filter.CompositeFileListFilterForBaseLine - File added to List of File: D:\Projects\NA21\NAMworkspace\na21_nam_integration\csv\processing\file1_base_0103.csv
DEBUG: com.nt.na21.nam.integration.file.filter.CompositeFileListFilterForBaseLine - file created date: 0103 today Date: 0103
DEBUG: com.nt.na21.nam.integration.file.filter.CompositeFileListFilterForBaseLine - File added to List of File: D:\Projects\NA21\NAMworkspace\na21_nam_integration\csv\processing\file2_base_0103.csv
DEBUG: com.nt.na21.nam.integration.file.filter.CompositeFileListFilterForBaseLine - **SIZE: 2**
DEBUG: com.nt.na21.nam.integration.file.filter.CompositeFileListFilterForBaseLine - filterFiles method end.
DEBUG: org.springframework.integration.file.FileReadingMessageSource - Added to queue: [csv\processing\file1_base_0103.csv, csv\processing\file2_base_0103.csv]
INFO : org.springframework.integration.file.FileReadingMessageSource - Created message: [GenericMessage [payload=csv\processing\file2_base_0103.csv, headers={timestamp=1425158920029, id=cb3c8505-0ee5-7476-5b06-01d14380e24a}]]
DEBUG: org.springframework.integration.endpoint.SourcePollingChannelAdapter - Poll resulted in Message: GenericMessage [payload=csv\processing\file2_base_0103.csv, headers={timestamp=1425158920029, id=cb3c8505-0ee5-7476-5b06-01d14380e24a}]
DEBUG: org.springframework.integration.channel.DirectChannel - preSend on channel 'networkData', message: GenericMessage [payload=csv\processing\file2_base_0103.csv, headers={timestamp=1425158920029, id=cb3c8505-0ee5-7476-5b06-01d14380e24a}]
DEBUG: org.springframework.integration.handler.LoggingHandler - org.springframework.integration.handler.LoggingHandler#0 received message: GenericMessage [payload=csv\processing\file2_base_0103.csv, headers={timestamp=1425158920029, id=cb3c8505-0ee5-7476-5b06-01d14380e24a}]
DEBUG: org.springframework.integration.handler.LoggingHandler - csv\processing\file2_base_0103.csv
DEBUG: org.springframework.integration.channel.DirectChannel - postSend (sent=true) on channel 'logger', message: GenericMessage [payload=csv\processing\file2_base_0103.csv, headers={timestamp=1425158920029, id=cb3c8505-0ee5-7476-5b06-01d14380e24a}]
DEBUG: org.springframework.integration.transformer.MessageTransformingHandler - org.springframework.integration.transformer.MessageTransformingHandler#606f8b2b received message: GenericMessage [payload=csv\processing\file2_base_0103.csv, headers={timestamp=1425158920029, id=cb3c8505-0ee5-7476-5b06-01d14380e24a}]
DEBUG: com.nt.na21.nam.integration.helper.CSVFileAggregator - materFileSourcePath: null
INFO : com.nt.na21.nam.integration.helper.CSVFileAggregator - **Aggregator releasing [1] files**
Can someone help me identify the issue with the filter, and why the full file list is not being collected for the transformation?
Thanks in advance.
The issue is with int:aggregator, as I am not sure how to invoke it. I had used it earlier in my design but it didn't get executed at all. Thanks for the quick response.
For this problem I have written a FileScaner utility which scans all the files inside the folder, and the aggregation is working perfectly.
Please find below the config with the aggregator, which didn't work; hence I split the design into two pollers: the first produces all the CSV file(s) and the second collects and aggregates them.
<!-- Auto wiring -->
<context:component-scan base-package="com.bt.na21.nam.integration.*" />

<!-- Intercept and log every message -->
<int:logging-channel-adapter id="logger" level="DEBUG" />
<int:wire-tap channel="logger" />

<int:channel id="fileInputChannel" datatype="java.io.File" />
<int:channel id="error" />
<int:channel id="requestsCSVInput" />

<int-file:inbound-channel-adapter id="pollNetworkFile"
    directory="file:${input.files.directory}" channel="fileInputChannel"
    filter="compositeFileFilter" prevent-duplicates="true">
    <int:poller default="true" cron="*/20 * * * * *"
        error-channel="error" />
</int-file:inbound-channel-adapter>

<bean id="compositeFileFilter"
    class="com.nt.na21.nam.integration.file.filter.CompositeFileListFilterForTodayFiles" />

<int:transformer id="transformInputZipCSVFileIntoCSV"
    input-channel="fileInputChannel" output-channel="requestsCSVInput">
    <bean id="transformZipFile"
        class="com.nt.na21.nam.integration.file.net.NetRecordFileTransformation" />
</int:transformer>

<int:router ref="docTypeRouter" input-channel="requestsCSVInput"
    method="resolveObjectTypeChannel">
</int:router>

<int:channel id="Vlan" />
<int:channel id="VlanShaper" />
<int:channel id="TdmPwe" />

<bean id="docTypeRouter"
    class="com.nt.na21.nam.integration.file.net.DocumentTypeMessageRouter" />

<int:service-activator ref="vLanMessageHandler" output-channel="newContentItemNotification"
    input-channel="Vlan" method="handleFile" />
<bean id="vLanMessageHandler" class="com.nt.na21.nam.integration.file.handler.VLanRecordsHandler" />

<int:service-activator ref="VlanShaperMessageHandler" output-channel="newContentItemNotification"
    input-channel="VlanShaper" method="handleFile" />
<bean id="VlanShaperMessageHandler" class="com.nt.na21.nam.integration.file.handler.VlanShaperRecordsHandler" />

<int:service-activator ref="PweMessageHandler" output-channel="newContentItemNotification"
    input-channel="TdmPwe" method="handleFile" />
<bean id="PweMessageHandler" class="com.nt.na21.nam.integration.file.handler.PseudoWireRecordsHandler" />

<int:channel id="newContentItemNotification" />

<!-- Adding for aggregating the records in one place for OSS output -->
<int:aggregator input-channel="newContentItemNotification" method="aggregate"
    ref="netRecordsResultAggregator" output-channel="net-records-aggregated-reply"
    message-store="netRecordsResultMessageStore"
    send-partial-result-on-expiry="true">
</int:aggregator>

<int:channel id="net-records-aggregated-reply" />
<bean id="netRecordsResultAggregator" class="com.nt.na21.nam.integration.aggregator.NetRecordsResultAggregator" />

<!-- Define a store for our network records results and set up a reaper that will
     periodically expire those results. -->
<bean id="netRecordsResultMessageStore" class="org.springframework.integration.store.SimpleMessageStore" />

<int-file:outbound-channel-adapter id="filesOut"
    directory="file:${output.files.directory}"
    delete-source-files="true">
</int-file:outbound-channel-adapter>
The code works fine up to the point where messages are routed to the channels below:
<int:channel id="Vlan" />
<int:channel id="VlanShaper" />
<int:channel id="TdmPwe" />
I am trying to return a LinkedHashSet containing CSV data from the processing on each of the above channels, and I need to merge all the LinkedHashSet vAllAttributes collections to get the master output CSV file.
List<String> masterList = new ArrayList<String>(vAllAttributes);
Collections.sort(masterList);
Well, it looks like you misunderstood the <int-file:inbound-channel-adapter> behaviour a bit. By its nature it produces one file per message to the channel. This does not depend on the logic of the FileListFilter. The flow is like this:
The FileReadingMessageSource uses a DirectoryScanner to retrieve files from the provided directory into an internal toBeReceived queue.
Since we scan the directory for files, the design of the DirectoryScanner looks like List<File> listFiles(File directory). I guess this is what led you astray.
After that, the filter is applied to the original file list and returns only the appropriate files.
They are stored in the toBeReceived queue.
And only after that does the FileReadingMessageSource poll an item from the queue to build a message for the output channel.
To achieve your aggregation requirements you really should use an <aggregator> between the <int-file:inbound-channel-adapter> and your <int:transformer>.
You can mark the <poller> of the <int-file:inbound-channel-adapter> with max-messages-per-poll="-1" to really poll all your files during a single scheduled task. But either way there will be as many messages as your filter returns files.
After that you must apply some tricks for the <aggregator>:
correlationKey - to allow your file messages to be combined into a single MessageGroup, so that a single message is released for the downstream <transformer>. Since we don't have any context from the <int-file:inbound-channel-adapter>, but we know that all messages are provided by a single polling task within a single scheduled Thread (you don't use a task-executor on the <poller>), we can simply use a correlationKey of:
correlation-strategy-expression="T(Thread).currentThread().id"
But that is not enough, because we should somehow produce a single message in the end anyway. Unfortunately we don't know the number of files (though you could track that via a ThreadLocal from your custom FileListFilter) to allow a ReleaseStrategy to return true for the aggregate phase. Hence we never have normal group completion. But we can force-release uncompleted groups from the aggregator by using a MessageGroupStoreReaper or group-timeout on the <aggregator>.
In addition to the previous point you should supply these options on the <aggregator>:
send-partial-result-on-expiry="true"
expire-groups-upon-completion="true"
And that's all. There is no reason to provide any custom aggregation function (ref/method or expression), because the default one just builds a single message with the List of payloads from all messages in the group. And that is appropriate for your CSVFileAggregator. Although you could avoid that <transformer> and use the CSVFileAggregator as the aggregation function itself.
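A rough sketch of such an <aggregator> wired between your adapter and transformer, based on the options above (the output channel name and the group-timeout value are only illustrative):
<int-file:inbound-channel-adapter id="pollProcessedNetworkData"
    directory="file:${processing.files.directory}"
    filter="compositeProcessedFileFilter"
    channel="networkData">
    <!-- max-messages-per-poll="-1" lets one scheduled poll emit all accepted files -->
    <int:poller cron="*/20 * * * * *" max-messages-per-poll="-1" />
</int-file:inbound-channel-adapter>

<int:aggregator input-channel="networkData" output-channel="aggregatedNetworkData"
    correlation-strategy-expression="T(Thread).currentThread().id"
    send-partial-result-on-expiry="true"
    expire-groups-upon-completion="true"
    group-timeout="10000" />

<int:channel id="aggregatedNetworkData" />

<!-- The transformer now receives one message whose payload is a List of the polled files -->
<int:transformer input-channel="aggregatedNetworkData" output-channel="requests">
    <bean class="com.nt.na21.nam.integration.helper.CSVFileAggregator" />
</int:transformer>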
Hope I am clear.
