Aggregator is failing after upgrading to Spring-Integration 4 - spring-integration

We are using Spring Integration in our project and are trying to migrate from spring-integration-core:jar:3.0.1.RELEASE to spring-integration-core:jar:4.3.2.RELEASE (Java 8, Spring 4). We are running into an issue with the aggregator: strangely, the aggregator's method is no longer called during execution. The configuration is shown below:
<!-- Store the original payload in header for future purpose -->
<int:header-enricher default-overwrite="true" should-skip-nulls="true">
    <int:header name="${headerNames.originalPayload}" expression="payload" />
</int:header-enricher>
<!-- split the issues -->
<int-xml:xpath-splitter>
    <int-xml:xpath-expression expression="//transaction"/>
</int-xml:xpath-splitter>
<int:service-activator ref="httpOutboundGatewayHandler" method="buildHttpOutboundGatewayRequest" />
<int:header-filter header-names="accept-encoding"/>
<int-http:outbound-gateway url-expression="headers.restResourceUrl"
                           http-method-expression="headers.httpMethod"
                           extract-request-payload="true"
                           expected-response-type="java.lang.String">
</int-http:outbound-gateway>
<int:service-activator ref="msgHandler" method="buildMessageFromExtSysResponse" />
<int-xml:xslt-transformer xsl-resource="${stylesheet.PQGetWorklist-Response-MoveSources}" />
</int:chain>
<int:aggregator input-channel="PQGetWorklist-Aggregate-sources"
                output-channel="PQGetWorklist-MoveSourcesUnderIssues"
                ref="xmlAggregator" method="aggregateSources">
</int:aggregator>
In the above configuration, the <int-http:outbound-gateway> is executed, but XmlAggregator.aggregateSources is never called, for reasons unknown. I can see that the message is sent to the channel PQGetWorklist-Aggregate-sources, but from there the aggregator's aggregateSources method is not invoked. As a result, we get "No reply received within timeout". Note that the same configuration works fine with spring-integration-core:jar:3.0.1.RELEASE; the problem appears only after upgrading to spring-integration-core:jar:4.3.2.RELEASE.
Here is my XmlAggregator.java
public class XmlAggregator {

    private static final Logger logger = Logger.getLogger(XmlAggregator.class);

    public Message<?> aggregateSources(List<Message<?>> messages) throws DocumentException {
        Document mainDom = XmlParserUtil.convertString2Document("<Results> </Results>");
        Document splitMessageDom = XmlParserUtil.convertString2Document(messages.get(0).getPayload().toString());
        Document issuesDom = XmlParserUtil.convertString2Document("<Issues> </Issues>");
        Document sourcesDom = XmlParserUtil.convertString2Document("<RetrievedSources> </RetrievedSources>");
        if (messages.get(0).getHeaders().get("jobDesignerJobName").equals("PQIssueInquiry")) {
            // extract the callerType node and add it to the root node
            Element callerType = XmlParserUtil.getXmlElements(
                    XmlParserUtil.convertString2Document(messages.get(0).getPayload().toString()), "//callerType").get(0);
            mainDom.getRootElement().content().add(callerType);
        }
        // extract the sort node and add it to the root node
        Element sort = XmlParserUtil.getXmlElements(
                XmlParserUtil.convertString2Document(messages.get(0).getPayload().toString()), "//sort").get(0);
        mainDom.getRootElement().content().add(sort);
        // get all the issues and add them to issuesDom
        List<Element> transactionElements = XmlParserUtil.getXmlElements(splitMessageDom, "//transaction");
        for (Element issue : transactionElements) {
            issuesDom.getRootElement().content().add(issue);
        }
        // add all the issues to the root node
        XmlParserUtil.appendChild(mainDom, issuesDom, null);
        for (Message<?> source : messages) {
            Document sourcesTempDom = XmlParserUtil.convertString2Document(source.getPayload().toString());
            // get all the sources and add them to sourcesDom
            List<Element> sourceElements = XmlParserUtil.getXmlElements(sourcesTempDom, "//sources");
            for (Element sources : sourceElements) {
                sourcesDom.getRootElement().content().add(sources);
            }
        }
        // add all the sources to the root node
        XmlParserUtil.appendChild(mainDom, sourcesDom, null);
        Message<?> message = MessageBuilder.withPayload(mainDom.asXML()).build();
        logger.debug("aggregateSources results after aggregation: " + mainDom.asXML());
        return message;
    }
}
Any thoughts?

Starting with version 4.2, the XPathMessageSplitter is based on iterator functionality by default:
<xsd:attribute name="iterator" default="true">
<xsd:annotation>
<xsd:documentation>
The iterator mode: 'true' (default) to return an 'java.util.Iterator'
for splitting 'payload', 'false to return a 'java.util.List'.
Note: the 'list' contains transformed nodes whereas with the
'iterator' each node is transformed while iterating.
</xsd:documentation>
</xsd:annotation>
<xsd:simpleType>
<xsd:union memberTypes="xsd:boolean xsd:string" />
</xsd:simpleType>
</xsd:attribute>
This avoids holding the whole split result in memory: nodes are pulled from the source and transformed on demand. A side effect is that the total number of fragments is not known up front, so the splitter cannot populate the sequenceSize header that the aggregator's default release strategy relies on; the group is therefore never released, and the gateway times out. The pre-4.2 list behavior can be restored via the iterator="false" option.
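Applied to the configuration from the question, restoring the list behavior is a one-attribute change on the splitter (a sketch; the rest of the chain stays as it was):

```xml
<!-- iterator="false" restores the pre-4.2 list-based splitting,
     so sequenceSize is populated and the aggregator can release -->
<int-xml:xpath-splitter iterator="false">
    <int-xml:xpath-expression expression="//transaction"/>
</int-xml:xpath-splitter>
```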

Related

How to make the aggregator wait for responses from all methods on a publish-subscribe channel?

I have used the Spring configuration below to combine results from a publish-subscribe channel using an aggregator. But the aggregator populates the response only from the first service activator on the publish-subscribe channel; it does not wait for the responses from the other service activators. How should I modify my configuration to make the aggregator wait for the responses from all 4 service activators?
<int:bridge id="ValidationsBridge" input-channel="RequestChannel" output-channel="bridgeOutputChannel"/>
<int:publish-subscribe-channel id="bridgeOutputChannel" apply-sequence="true" />
<int:service-activator input-channel="bridgeOutputChannel" output-channel="aggregatorInput"
                       ref="WebServiceImpl" method="populateResponse1" />
<int:service-activator input-channel="bridgeOutputChannel" output-channel="aggregatorInput"
                       ref="WebServiceImpl" method="populateResponse2" />
<int:service-activator input-channel="bridgeOutputChannel" output-channel="aggregatorInput"
                       ref="WebServiceImpl" method="populateResponse3" />
<int:service-activator input-channel="bridgeOutputChannel" output-channel="aggregatorInput"
                       ref="WebServiceImpl" method="populateResponse4" />
<task:executor id="executor" pool-size="4" keep-alive="20"/>
<int:aggregator input-channel="aggregatorInput" output-channel="aggregatorOutput"
                ref="vehicleAggregator" method="populateResponse" />
<int:service-activator id="processorServiceActivator" input-channel="aggregatorOutput"
                       ref="Processor" method="mapResponse" output-channel="ResponseChannel"/>
<int:channel id="bridgeOutputChannel" />
<int:channel id="aggregatorInput" />
<int:channel id="aggregatorOutput" />
</beans>
Below is a snippet from my aggregator
public Message<?> populateResponse(Collection<Message<?>> messages) {
    MessageBuilder<?> msgBuilder = MessageBuilder.withPayload(messages.iterator().next().getPayload());
    for (Message<?> message : messages) {
        if (null != message.getHeaders().get(Constants.RESPONSE1)) {
            msgBuilder.setHeader(Constants.RESPONSE1, message.getHeaders().get(Constants.RESPONSE1));
        }
        if (null != message.getHeaders().get(Constants.RESPONSE2)) {
            msgBuilder.setHeader(Constants.RESPONSE2, message.getHeaders().get(Constants.RESPONSE2));
        }
    }
    return msgBuilder.build();
}
You should use apply-sequence="true" on that publish-subscribe-channel (which you already have) and not use any correlation options on the aggregator: just rely on the default correlationId header populated by apply-sequence.
Basing the correlation strategy on the message id makes the aggregator build a new group for every message, simply because the message id is always unique.
By the same token, you don't need a release strategy either: the default one releases the group based on the sequenceSize and sequenceNumber headers that apply-sequence populates.
And I'm not sure that you need group-timeout, either.
In other words, all your use case needs is the out-of-the-box sequence-details feature:
http://docs.spring.io/spring-integration/docs/4.3.9.RELEASE/reference/html/messaging-channels-section.html#channel-configuration-pubsubchannel
http://docs.spring.io/spring-integration/docs/4.3.9.RELEASE/reference/html/messaging-routing-chapter.html#aggregator
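Under that advice, the relevant part of the configuration stays minimal (a sketch; note that the question's XML also declares a plain `<int:channel id="bridgeOutputChannel"/>` further down, which conflicts with the publish-subscribe-channel of the same id and should be removed):

```xml
<int:publish-subscribe-channel id="bridgeOutputChannel" apply-sequence="true" />

<!-- no correlation-strategy, release-strategy, or group-timeout:
     the sequence headers added by apply-sequence drive all three -->
<int:aggregator input-channel="aggregatorInput" output-channel="aggregatorOutput"
                ref="vehicleAggregator" method="populateResponse" />
```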

Same file gets picked up again and again in spring-ftp but with different names

I have a spring input channel defined like this
<file:inbound-channel-adapter prevent-duplicates="false" id="inpChannel" directory="file:/Users/abhisheksingh/req" auto-startup="true">
<int:poller id="poller" fixed-delay="1000" />
</file:inbound-channel-adapter>
<int:service-activator input-channel="inpChannel" ref="inpHandler" />
The file name is, for example, TEST.SQQ; SQQ is the format the client uses when placing files on the FTP server. However, I see that the same file is picked up by the Spring FTP adapter again and again under different names: the first time it is TEST.SQQ, the next time TEST.SQQ-20170204.PQQ, then TEST.SQQ-20170204.PQQ.20170304.PQQ, and so on. I have a filter on my end which checks the names of files already processed, but since the polled file name is different each time, all of these files are picked up for processing.
This is my ftp adapter -
<int-ftp:inbound-channel-adapter id="sqqFtpInbound"
channel="ftpChannel"
session-factory="sqqFtpClientFactory"
auto-create-local-directory="true"
delete-remote-files="false"
local-filter="acceptAllFileListFilter"
local-directory="file:/Users/abhisheksingh/ddrive/everge_ws/sqqReq" auto-startup="true" >
<int:poller id="poller" fixed-delay="1000" />
</int-ftp:inbound-channel-adapter>
Here is my ftp server image -
Here is my local directory image -
I don't understand why the same file gets picked up again and again. I would appreciate some help!
This is my file list filter code.
public class TestFileListFilter<F> extends AbstractFileListFilter<F> {

    private static final Logger log = LoggerFactory.getLogger(TestFileListFilter.class);

    @Override
    protected boolean accept(F file) {
        File f = (File) file;
        if (f.getAbsolutePath().contains(".PQQ")) {
            String newDir = "/Users/abhisheksingh/ddrive/sample/pqqReq/";
            String archiveLocation = "/Users/abhisheksingh/ddrive/sample/pqqArchive/";
            String fullName = archiveLocation + f.getName();
            log.info("Check if the file has already been processed " + fullName);
            File fl = new File(fullName);
            final File dir = new File(archiveLocation);
            for (final File child : dir.listFiles()) {
                String archiveName = FilenameUtils.getBaseName(child.getName());
                String inputName = FilenameUtils.getBaseName(fl.getName());
                log.info("Archive file name is " + archiveName);
                log.info("Input file name is " + inputName);
                if (inputName.contains(archiveName)) {
                    log.info("The file is already processed " + inputName);
                }
            }
            if (fl.exists()) {
                log.error("PQQ file has already been processed.");
                removeFile(f);
                return false;
            } else {
                log.info("PQQ File received " + f.getAbsolutePath());
            }
            moveFile(f, newDir);
            return true;
        }
        return false;
    }
}
I think your custom local-filter relies on an assumption that doesn't hold: it expects the files coming from the remote store to already be unique. You have to ensure that yourself, because duplicate prevention is not switched on by default.
For this purpose, consider adding the filter option to the <int-ftp:inbound-channel-adapter> as a reference to an AcceptOnceFileListFilter or FtpPersistentAcceptOnceFileListFilter.
We have a JIRA issue on the matter.
Please confirm that this is exactly your issue, and we may revise the priority of that ticket and fix it soon.
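A minimal sketch of that suggestion, reusing the session factory and channel names from the question (the persistent variant takes a metadata store and a key prefix, so processed remote file names are remembered across restarts):

```xml
<int-ftp:inbound-channel-adapter id="sqqFtpInbound"
        channel="ftpChannel"
        session-factory="sqqFtpClientFactory"
        filter="remoteAcceptOnceFilter"
        delete-remote-files="false"
        auto-create-local-directory="true"
        local-directory="file:/Users/abhisheksingh/ddrive/everge_ws/sqqReq">
    <int:poller id="poller" fixed-delay="1000" />
</int-ftp:inbound-channel-adapter>

<!-- accepts each remote file name only once, persistently -->
<bean id="remoteAcceptOnceFilter"
      class="org.springframework.integration.ftp.filters.FtpPersistentAcceptOnceFileListFilter">
    <constructor-arg ref="metadataStore"/>
    <constructor-arg value="sqqFiles-"/>
</bean>

<bean id="metadataStore"
      class="org.springframework.integration.metadata.PropertiesPersistingMetadataStore"/>
```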

Unable to get Aggregator to work

I am trying to understand the Aggregator basics. Below is the use case I am trying to implement:
1) Read message (order details) from queue.
<?xml version="1.0" encoding="UTF-8"?>
<order xmlns="http://www.example.org/orders">
<orderItem>
<isbn>12333454443</isbn>
<quantity>4</quantity>
</orderItem>
<orderItem>
<isbn>545656777</isbn>
<quantity>50</quantity>
</orderItem>
..
..
</order>
One order message will contain multiple orderItem. And we can expect hundreds of order messages in the queue.
2) End Result ::
a) Each orderitem should be written to a file.
b) 4 such files should be written to a unique folder.
To give an example, let's say we get two order messages, each containing three orderItems.
So we need to create 2 folders:
In "folder 1", there should be 4 files (1 orderItem in each file).
In "folder 2", there should be 2 files (1 orderItem in each file). Here, for simplicity, we assume no more order messages arrive and we can write after 5 minutes.
Implementation:
I am able to read the message from the queue (websphere MQ) and unmarshall the message successfully.
Used splitter to split the message based on orderitem count.
Used Aggregator to group the message in size of 4.
I am unable to get the aggregator to work as per my understanding.
When I push one order with 4 orderItems, the message is aggregated correctly.
When I push one order with 5 orderItems, the first 4 are aggregated but the last one is sent to the discard channel. This is expected: the MessageGroup has been released, so the last message is discarded.
When I push two orders each containing 2 orderItems, the last 2 orderItems are sent to the discard channel.
The correlation strategy is hardcoded (OrderAggregator.java), so the above case should have worked.
Need pointers on how to implement this use case where I can group them in 4 and write to unique folders.
Please note that the orderitem are all independent book orders and have no relation amongst them.
Below is the configuration.
spring-bean.xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans">
    <int:channel id="mqInbound"/>
    <int:channel id="item"/>
    <int:channel id="itemList"/>
    <int:channel id="aggregatorDiscardChannel"/>
    <int-jms:message-driven-channel-adapter id="jmsIn"
                                            channel="mqInbound"
                                            destination="requestQueue"
                                            message-converter="orderMessageConverter"/>
    <int:splitter input-channel="mqInbound" output-channel="item" expression="payload.orderItem"/>
    <int:chain id="aggregateList" input-channel="item" output-channel="itemList">
        <int:header-enricher>
            <int:header name="sequenceSize" expression="4" overwrite="true"/>
        </int:header-enricher>
        <int:aggregator correlation-strategy="orderAggregator"
                        correlation-strategy-method="groupOrders"
                        discard-channel="aggregatorDiscardChannel" />
    </int:chain>
    <int:service-activator input-channel="itemList" ref="displayAggregatedList" method="display"/>
    <int:service-activator input-channel="aggregatorDiscardChannel" ref="displayAggregatedList" method="displayDiscarded"/>
    <bean id="orderAggregator" class="com.samples.Aggregator.OrderAggregator"/>
    <bean id="displayAggregatedList" class="com.samples.Aggregator.DisplayAggregatedList"/>
    ...
</beans>
OrderAggregator.java
public class OrderAggregator {

    @Aggregator
    public List<OrderItemType> sendList(List<OrderItemType> orderItemTypeList) {
        return orderItemTypeList;
    }

    @CorrelationStrategy
    public String groupOrders(OrderItemType orderItemType) {
        return "items";
    }
}
DisplayAggregatedList.java
public class DisplayAggregatedList {

    public void display(List<OrderItemType> orderItemTypeList) {
        System.out.println("######## Display Aggregated ##############");
        for (OrderItemType oit : orderItemTypeList) {
            System.out.println("### Isbn :" + oit.getIsbn() + ":: Quantity :" + oit.getQuantity());
        }
    }

    public void displayDiscarded(Message<?> message) {
        System.out.println("######## Display Discarded ##############" + message);
    }
}
What you need is called expire-groups-upon-completion:
When set to true (default false), completed groups are removed from the message store, allowing subsequent messages with the same correlation to form a new group. The default behavior is to send messages with the same correlation as a completed group to the discard-channel.
If you need to release incomplete groups anyway (the 2 orders left over, for example), consider using group-timeout: http://docs.spring.io/spring-integration/reference/html/messaging-routing-chapter.html#agg-and-group-to
Please use expire-groups-upon-completion="true" and consider using MessageCountReleaseStrategy for the release-strategy. – Artem Bilan
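Putting the two suggestions together, the aggregator inside the chain might look like this (a sketch: the bean id countReleaseStrategy is hypothetical, the threshold of 4 and the 5-minute group-timeout come from the use case above, and send-partial-result-on-expiry makes the incomplete last group be emitted rather than discarded):

```xml
<int:aggregator correlation-strategy="orderAggregator"
                correlation-strategy-method="groupOrders"
                release-strategy="countReleaseStrategy"
                expire-groups-upon-completion="true"
                group-timeout="300000"
                send-partial-result-on-expiry="true"
                discard-channel="aggregatorDiscardChannel" />

<!-- releases a group as soon as it holds 4 messages -->
<bean id="countReleaseStrategy"
      class="org.springframework.integration.aggregator.MessageCountReleaseStrategy">
    <constructor-arg value="4"/>
</bean>
```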

spring-integration is losing headers when sent to a subscriber

I am using spring-integration with HornetQ. The problem is that I have put a custom header ("Method") in the message, but when it reaches the subscriber the header is no longer available. Is there some sort of configuration property I need to set to preserve headers?
An application receives the message (I can see the Method header in the console log, so I know it is actually getting the correct message). It basically just routes the message onto the outbound queue so that clients can subscribe to it (if there is a cleaner way to do this, please let me know):
<int:channel id="partsChannel" />
<int-jms:message-driven-channel-adapter
id="jmsPartsInbound"
acknowledge="transacted"
destination-name="parts.in"
channel="partsChannel"
connection-factory="jmsConnectionFactory"
/> <!-- error-channel="partsInboundFailedChannel" -->
<int-jms:outbound-channel-adapter
id="jmsPartsOutbound"
destination-name="parts.out"
channel="partsChannel"
connection-factory="jmsConnectionFactory"
pub-sub-domain="true"
>
<int-jms:request-handler-advice-chain>
<int:retry-advice max-attempts="3">
<int:exponential-back-off initial="2000" multiplier="2" />
</int:retry-advice>
</int-jms:request-handler-advice-chain>
</int-jms:outbound-channel-adapter>
Applications subscribe like so:
<int:channel id="partsInboundChannel" />
<int-jms:message-driven-channel-adapter
id="jmsPartsInbound"
acknowledge="transacted"
destination-name="parts.out"
channel="partsInboundChannel"
pub-sub-domain="true"
connection-factory="jmsConnectionFactory"/>
And this is the part that gets the message in the subscriber.
@ServiceActivator(inputChannel = "partsInboundChannel")
public void processPart(final Message<?> message) {
    // ...message.getHeaders() does not contain the "Method" header
}
Isn't your issue here, in DefaultJmsHeaderMapper.fromHeaders():
if (value != null && SUPPORTED_PROPERTY_TYPES.contains(value.getClass())) {
    try {
        String propertyName = this.fromHeaderName(headerName);
        jmsMessage.setObjectProperty(propertyName, value);
    }
    ...
where SUPPORTED_PROPERTY_TYPES is:
private static List<Class<?>> SUPPORTED_PROPERTY_TYPES = Arrays.asList(new Class<?>[] {
Boolean.class, Byte.class, Double.class, Float.class, Integer.class, Long.class, Short.class, String.class });
So, if your header value really is of type Method, it will be skipped when the message is mapped to JMS.
Consider sending its name (a String) instead.
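A small self-contained illustration of that whitelist check. The SUPPORTED_PROPERTY_TYPES list is copied from the mapper excerpt above; toJmsSafeValue is a hypothetical helper (not a Spring API) sketching the suggested workaround of sending the method's name instead of the Method object:

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

public class HeaderTypeCheck {

    // Mirrors DefaultJmsHeaderMapper's whitelist of JMS-property-safe types.
    static final List<Class<?>> SUPPORTED_PROPERTY_TYPES = Arrays.asList(
            Boolean.class, Byte.class, Double.class, Float.class,
            Integer.class, Long.class, Short.class, String.class);

    // Hypothetical helper: pass whitelisted values through, convert a
    // Method to its name, and drop everything else (as the mapper does).
    static Object toJmsSafeValue(Object headerValue) {
        if (headerValue != null && SUPPORTED_PROPERTY_TYPES.contains(headerValue.getClass())) {
            return headerValue;                       // mapped as a JMS property
        }
        if (headerValue instanceof Method) {
            return ((Method) headerValue).getName();  // send the name instead
        }
        return null;                                  // silently dropped by the mapper
    }

    public static void main(String[] args) throws Exception {
        Method m = String.class.getMethod("length");
        System.out.println(toJmsSafeValue(m));        // prints "length"
        System.out.println(toJmsSafeValue("GET"));    // prints "GET"
    }
}
```

So on the producing side, set the header to method.getName() and the subscriber will see it as an ordinary String property.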

"Cannot complete this action" when trying to provision list instance with site TaxonomyField

Fair warning: The setup for this question is long, so be patient and stay with me.
I have two features in my solution package. The first is a set of site fields and content types; let's call this one Feature A. Among the fields are a field of type "TaxonomyFieldType" and an associated field of type "Note" (the hidden text field that accompanies a taxonomy field).
<Elements ...>
<Field ID="{956a1078-ec35-4c04-83c4-0a3742119496}"
Name="TaxonomyTextField"
Type="Note" DisplayName="Tags_0"
ShowInViewForms="FALSE"
Required="FALSE"
Group="MyGroup"
Hidden="TRUE"/>
<Field ID="{92BC866B-0415-45F0-B431-D4DF69C421CC}"
Name="Tags"
DisplayName="Custom Tags"
Type="TaxonomyFieldType"
ShowField="Term1033"
Required="FALSE"
Group="MyGroup"
>
<Customization>
<ArrayOfProperty>
<Property>
<Name>IsPathRendered</Name>
<Value xmlns:q7="http://www.w3.org/2001/XMLSchema" p4:type="q7:boolean" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">true</Value>
</Property>
<Property>
<Name>TextField</Name>
<Value xmlns:q6="http://www.w3.org/2001/XMLSchema" p4:type="q6:string" xmlns:p4="http://www.w3.org/2001/XMLSchema-instance">{956a1078-ec35-4c04-83c4-0a3742119496}</Value>
</Property>
</ArrayOfProperty>
</Customization>
</Field>
</Elements>
and
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
<!-- Parent ContentType: Item (0x01) -->
<ContentType ID="0x0100b61c774f4c0e4a89bf230cbb44cd4f75"
Name="MyContent"
Group="MyGroup"
Description="Description of My Content Type"
Inherits="FALSE"
Overwrite="TRUE"
Version="0">
<FieldRefs>
<FieldRef ID="{52578fc3-1f01-4f4d-b016-94ccbcf428cf}" DisplayName="Comments" Name="Comments" Required="FALSE"/>
<FieldRef ID="{956a1078-ec35-4c04-83c4-0a3742119496}" Name="TimeTrackerTaxonomyTextField"/>
<FieldRef ID="{92BC866B-0415-45F0-B431-D4DF69C421CC}" DisplayName="Tags" Name="Tags" Required="FALSE"/>
</FieldRefs>
</ContentType>
</Elements>
In the feature receiver for the first feature (Feature A), I programmatically retrieve this TaxonomyField and ensure that it is configured to retrieve terms from a predetermined term set:
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
SPWeb web = GetWebObj(properties.Feature.Parent);
Guid fieldId = new Guid("92BC866B-0415-45F0-B431-D4DF69C421CC");
TaxonomyField field = web.Fields[fieldId] as TaxonomyField;
string groupName = properties.Feature.Properties["TaxonomyGroupName"].Value;
string termSetName = properties.Feature.Properties["TermSetName"].Value;
DiagnosticService logger = DiagnosticService.Local;
TermSet set = null;
TaxonomySession session = new TaxonomySession(web.Site);
TermSetCollection termSets = session.GetTermSets(termSetName, System.Threading.Thread.CurrentThread.CurrentUICulture.LCID);
if (termSets == null || termSets.Count == 0)
{
logger.WriteTrace(1, logger[CategoryId.Deployment], TraceSeverity.Medium,
"Activity Tags term set not found. Ensuring '{0}' group and '{1}' term set.", groupName, termSetName);
// create the term set in the default store
var store = session.DefaultSiteCollectionTermStore;
var group = store.EnsureGroup(groupName);
set = group.EnsureTermSet(termSetName);
store.CommitAll();
logger.WriteTrace(1, logger[CategoryId.Provisioning], TraceSeverity.Verbose, "created taxonomy group '{0}' and term set '{1}'", group, set);
}
else
{
logger.WriteTrace(1, logger[CategoryId.Deployment], TraceSeverity.Verbose, "term set found.");
// need to make sure we grab the one in the right group, or it might be someone else's term set.
foreach (var termSet in termSets)
{
if (String.Equals(termSet.Group.Name,groupName))
{
if (set == null)
{
set = termSet;
}
else
{
logger.WriteTrace(1, logger[CategoryId.Deployment], TraceSeverity.Unexpected,
"Multiple term sets named '{0}' found in more than one taxonomy group.", termSetName);
throw new SPException(String.Format("Multiple term sets named '{0}' found in more than one taxonomy group. "+
"Was there a previous installation that was not removed properly?", termSetName));
}
}
}
if (set == null)
{
// term set found, but in an unrecognized group. leave it alone and do like above
logger.WriteTrace(1, logger[CategoryId.Deployment], TraceSeverity.Verbose,
"Term set '{0}' found, but in unrecognized group. Provisioning new group and term set as configured.", termSetName);
var store = session.DefaultSiteCollectionTermStore;
var group = store.EnsureGroup(groupName);
set = group.EnsureTermSet(termSetName);
store.CommitAll();
logger.WriteTrace(1, logger[CategoryId.Provisioning], TraceSeverity.Verbose, "created taxonomy group '{0}' and term set '{1}'", group, set);
}
}
// set termSets to the newly created term set
field.SspId = set.TermStore.Id;
field.TermSetId = set.Id;
field.TargetTemplate = String.Empty;
field.AnchorId = Guid.Empty;
field.Open = true;
field.AllowMultipleValues = true;
field.Update();
}
The second feature contains list templates and instances, one of which uses the above content type; let's call this feature Feature B.
Here's the list schema for the list that blows up when provisioned (ListInstance element not shown):
<?xml version="1.0" encoding="utf-8"?>
<List xmlns:ows="Microsoft SharePoint" Title="My List" FolderCreation="FALSE" Direction="$Resources:Direction;" Url="Lists/MyList" BaseType="0" xmlns="http://schemas.microsoft.com/sharepoint/">
<MetaData>
<ContentTypes>
<ContentTypeRef ID="0x0100b61c774f4c0e4a89bf230cbb44cd4f75"></ContentTypeRef>
</ContentTypes>
<Fields>
<Field ID="{956a1078-ec35-4c04-83c4-0a3742119496}" Name="TaxonomyTextField" Type="Note"/>
<Field ID="{92bc866b-0415-45f0-b431-d4df69c421cc}" Name="Tags" Type="TaxonomyFieldType"/>
<Field ID="{52578FC3-1F01-4f4d-B016-94CCBCF428CF}" Name="_Comments" Type="Note"/>
</Fields>
<Views>
<View BaseViewID="1" Type="HTML" WebPartZoneID="Main" DisplayName="$Resources:core,objectiv_schema_mwsidcamlidC24;" DefaultView="TRUE" MobileView="TRUE" MobileDefaultView="TRUE" SetupPath="pages\viewpage.aspx" ImageUrl="/_layouts/images/generic.png" Url="AllItems.aspx">
<Toolbar Type="Standard" />
<XslLink Default="TRUE">main.xsl</XslLink>
<RowLimit Paged="TRUE">30</RowLimit>
<ViewFields>
<!-- <FieldRef Name="Tags"></FieldRef> -->
<FieldRef Name="_Comments"></FieldRef>
</ViewFields>
<Query>
<OrderBy>
<FieldRef Name="ID">
</FieldRef>
</OrderBy>
</Query>
<ParameterBindings>
<ParameterBinding Name="NoAnnouncements" Location="Resource(wss,noXinviewofY_LIST)" />
<ParameterBinding Name="NoAnnouncementsHowTo" Location="Resource(wss,noXinviewofY_DEFAULT)" />
</ParameterBindings>
</View>
</Views>
<Forms>
<Form Type="DisplayForm" Url="DispForm.aspx" SetupPath="pages\form.aspx" WebPartZoneID="Main" />
<Form Type="EditForm" Url="EditForm.aspx" SetupPath="pages\form.aspx" WebPartZoneID="Main" />
<Form Type="NewForm" Url="NewForm.aspx" SetupPath="pages\form.aspx" WebPartZoneID="Main" />
</Forms>
</MetaData>
</List>
After the solution deploys, I am able to activate Feature A without issue. The site columns and content types are created. When I attempt to activate Feature B, the feature activation call stack blows up and results in an error page with the following stack trace:
[COMException (0x80004005): Cannot complete this action.
Please try again.]
Microsoft.SharePoint.Library.SPRequestInternalClass.UpdateField(String bstrUrl, String bstrListName, String bstrXML) +0
Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML) +134
[SPException: Cannot complete this action.
Please try again.]
Microsoft.SharePoint.Administration.SPElementDefinitionCollection.ProvisionListInstances(SPFeaturePropertyCollection props, SPSite site, SPWeb web, Boolean fForce) +23649702
Microsoft.SharePoint.Administration.SPElementDefinitionCollection.ProvisionElements(SPFeaturePropertyCollection props, SPWebApplication webapp, SPSite site, SPWeb web, Boolean fForce) +197
Microsoft.SharePoint.SPFeature.Activate(SPSite siteParent, SPWeb webParent, SPFeaturePropertyCollection props, Boolean fForce) +25437263
Microsoft.SharePoint.SPFeatureCollection.AddInternal(SPFeatureDefinition featdef, Version version, SPFeaturePropertyCollection properties, Boolean force, Boolean fMarkOnly) +27496735
Microsoft.SharePoint.SPFeatureCollection.AddInternalWithName(Guid featureId, String featureName, Version version, SPFeaturePropertyCollection properties, Boolean force, Boolean fMarkOnly, SPFeatureDefinitionScope featdefScope) +150
Microsoft.SharePoint.SPFeatureCollection.Add(Guid featureId, Boolean force, SPFeatureDefinitionScope featdefScope) +83
Microsoft.SharePoint.WebControls.FeatureActivator.ActivateFeature(Guid featid, SPFeatureDefinitionScope featdefScope) +699
Microsoft.SharePoint.WebControls.FeatureActivatorItem.BtnActivateFeature_Click(Object objSender, EventArgs evtargs) +140
System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115
System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140
System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2981
I'm fairly certain that there's something wrong with the way I'm configuring the TaxonomyField in Feature A; its association with a list instance at provisioning time is the cause of the error (I've determined this by commenting out pieces and deploying over and over). There seems to be very little documentation, or even blogger experience, with TaxonomyFields and provisioning them in list instances, so I'm a bit at a loss. Does someone have any idea what's going wrong?
I followed How to provision SharePoint 2010 Managed Metadata columns by Wictor Wilén and was able to get something similar working (be sure to also make the modification from this comment).
I ended up opening a support incident with Microsoft to get this figured out. Eventually, their service rep tracked it down to setting the property DisallowContentTypes="FALSE" on the list template and EnableContentTypes="TRUE" on the list schema. That solved my provisioning issue.
However, I still have an issue with being able to actually create items on the newly provisioned lists, having to do with the hidden text field that must accompany the taxonomy field (grrr). I have provisioned a note field in the site, and I have referenced it in my list template, and I have set the TextField property to the ID of this note field in both the site column definition and in the field definition.
Wictor makes mention of that (if I recall from reading his post), but there's more here: http://www.sharepointconfig.com/2011/03/the-complete-guide-to-provisioning-sharepoint-2010-managed-metadata-fields/
I am currently stuck on the exception that's thrown at item creation time, saying:
Failed to get value of the "Tags"
column from the "Managed Metadata"
field type control. See details in
log. Exception message: Invalid field
name.
{956a1078-ec35-4c04-83c4-0a3742119496}
http://server/sites/mysite
/sites/mysite/Lists/Entries
I have been getting the same error when activating a sandboxed feature that contains ListInstance elements for a sandboxed custom list template with a custom ContentTypeRef. The list is created, but the feature errors out on creation. Further, the list contains an auto-generated content type rather than the one specified in the list definition. If you keep attempting to activate the feature until all lists are created, the feature will finally activate.
Further, I have noticed that I cannot update custom field properties from sandboxed solutions in MOSS 2010. I get the same type of error, indicating that it cannot complete the action, when SPListItem.UpdateField is called on a custom field defined via XML in a sandboxed solution.
I am now concluding that field updates are not supported in sandboxed solutions for MOSS 2010.
Instead of programmatically updating fields from sandboxed custom content types, you should define the field completely in the XML field definition and in the list template's field XML element.
The way to get your list to use your custom content type and activate without causing the "Cannot Complete This Action" error is as follows:
1) Use a Default ContentTypeRef in your list definition, instead of the custom one you created.
https://msdn.microsoft.com/en-us/library/office/ms452896(v=office.14).aspx
For example:
0x01 - Default Item Content Type
0x0101 - Default Document Content Type
In the List Schema.xml file, if your Content Type is based off the Default Item Content Type, you'd change it to:
<ContentTypes>
<ContentTypeRef ID="0x01"></ContentTypeRef>
</ContentTypes>
2) Add a Feature Activated Event Receiver to the feature that runs code to configure the list's content type.
I wrote a function that basically sets the Content Type for list and gets rid of any other content types associated with the list. Your feature activation event receiver can run this function and set the content types for your lists to the one it should be. The function below assumes your content types are uniquely named. You can add a check for the content type group name, as well, if needed.
public static string ConfigureCustomListForCustomContentType(SPWeb web, string strListName, string strCustomContentTypeName)
{
StringBuilder sbOutput = new StringBuilder();
try
{
SPList customlist = web.Lists[strListName];
SPContentType CustomContentType = null;
//Validate Content Types
//1) Find the Content Type in the Content Type list
foreach (SPContentType spct in web.Site.RootWeb.ContentTypes)
{
if (spct.Name == strCustomContentTypeName)
{
CustomContentType = spct;
break;
}
}
if (CustomContentType == null)
{
sbOutput.Append("<div class='error'>Unable to find custom content type named " + strCustomContentTypeName +".</div>");
return sbOutput.ToString();
}
sbOutput.Append("Found content Type "+CustomContentType.Name+"...<br />");
Boolean bFoundContentType = false;
customlist.ContentTypesEnabled = true;
List<SPContentTypeId> RemoveContentTypeList = new List<SPContentTypeId>();
//Remove all other content types
foreach (SPContentType spct in customlist.ContentTypes)
{
if (spct.Name == strCustomContentTypeName)
{
bFoundContentType = true;
}
else
{
RemoveContentTypeList.Add(spct.Id);
}
}
if (!bFoundContentType)
{
sbOutput.Append("Adding [" + strCustomContentTypeName + "] to List " + customlist.Title + "<br />");
customlist.ContentTypes.Add(CustomContentType);
}
else
{
sbOutput.Append("[" + strCustomContentTypeName + "] already in List " + customlist.Title + ".<br />");
}
for (int i = 0; i < RemoveContentTypeList.Count; i++)
{
sbOutput.Append("Removing extra content type: " + customlist.ContentTypes[RemoveContentTypeList[i]].Name + "<br />");
customlist.ContentTypes[RemoveContentTypeList[i]].Delete();
}
}
catch (Exception ex)
{
sbOutput.Append("<div class='error'>Error occurred configuring "+strListName+": " + ex.ToString() + "<br /></div>");
}
return sbOutput.ToString();
}
That should allow you to get your list instantiated and set the content type to your custom content type.
If your list is based on the Event content type (0x0102), the above function may not work without causing an error in a sandboxed solution.
For the Event Type, I used the default Event Content Type and ran code to customize the list (adding columns) as needed.
