How do I access the result of FetchXML as XML rather than entities? - dynamics-crm-2011

In the past my FetchXML query returned its result as XML, but since I changed servers the call string ret = service.Fetch(fetchXml); no longer works, so I had to resort to another solution, but that one gives me more work to build an XML file.
Fetch String example:
string fetchXml = @"<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
  <entity name='account'>
    <attribute name='name'/>
    <attribute name='telephone1'/>
  </entity>
</fetch>";

EntityCollection ec = organizationProxy.RetrieveMultiple(new FetchExpression(fetchXml));

XElement rootXml = new XElement("accounts");
foreach (Entity account in ec.Entities)
{
    if (account.Attributes.Contains("name"))
    {
        rootXml.Add(new XElement("account",
            new XElement("name", account["name"]),
            new XElement("telephone1", account.Attributes.Contains("telephone1") ? account["telephone1"] : "")));
    }
}
res.XmlContent = rootXml.ToString();
So what I'm doing here is building the XML string by hand. I know that CRM can deliver the result as XML; I have googled it (http://social.msdn.microsoft.com/Forums/en-US/af4f0251-7306-4d76-863d-9508d88c1b68/dynamic-crm-2011-fetchxml-results-into-xmltextreader-to-build-an-xml-output), but that approach is even more work than my code. Is there no other solution?

In the past I have used Serialization to convert objects to XML and back again.
To convert to XML
public static string SerializeAnObject(object _object)
{
    var serializer = new System.Xml.Serialization.XmlSerializer(_object.GetType());
    using (var stream = new System.IO.MemoryStream())
    {
        serializer.Serialize(stream, _object);
        stream.Position = 0;
        var doc = new System.Xml.XmlDocument();
        doc.Load(stream);
        return doc.InnerXml;
    }
}
To convert it back into an Entity Collection (or other object)
public static object DeSerializeAnObject(string xmlOfAnObject, Type _objectType)
{
    var serializer = new System.Xml.Serialization.XmlSerializer(_objectType);
    using (var read = new System.IO.StringReader(xmlOfAnObject))
    using (var reader = System.Xml.XmlReader.Create(read))
    {
        return serializer.Deserialize(reader);
    }
}
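A hedged usage sketch for the question's scenario (this assumes the type serializes cleanly with XmlSerializer; the CRM SDK types are data-contract types, so if XmlSerializer rejects EntityCollection you would need the DataContractSerializer instead):

```csharp
// Hypothetical round trip; ec is the EntityCollection from the question above.
string xml = SerializeAnObject(ec);
EntityCollection restored =
    (EntityCollection)DeSerializeAnObject(xml, typeof(EntityCollection));
```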

Related

How to create Structure & Template programmatically in Liferay 6

I need to create the Structure and Template programmatically through Java code. I used the following code snippets.
Structure:
public void createStructure(String userName, long userId) {
    log_.info("Inside create structure");
    long structureId = 115203;
    DDMStructure ddmStructure = DDMStructureLocalServiceUtil.createDDMStructure(structureId);
    ddmStructure.setName("MigrationStructure");
    ddmStructure.setDescription("This Structure created programmatically");
    ddmStructure.setUserId(userId);
    ddmStructure.setUserName(userName);
    File fXmlFile = new File("D:/FilesDataMigration/structure.xml");
    try {
        Document document = SAXReaderUtil.read(fXmlFile);
        ddmStructure.setDocument(document);
        DDMStructureLocalServiceUtil.addDDMStructure(ddmStructure);
    } catch (DocumentException e) {
        e.printStackTrace();
    } catch (SystemException e) {
        e.printStackTrace();
    }
    log_.info("Inside create structure done");
}
Template:
public void createTemplate(String userName, long userId) {
    log_.info("Inside create template");
    long templateId = 12504;
    DDMTemplate ddmTemplate = DDMTemplateLocalServiceUtil.createDDMTemplate(templateId);
    ddmTemplate.setName("MigrationTemplate");
    ddmTemplate.setDescription("This Template created programmatically");
    ddmTemplate.setUserId(userId);
    ddmTemplate.setUserName(userName);
    // try-with-resources so the reader is always closed
    try (BufferedReader br = new BufferedReader(new FileReader("D:/FilesDataMigration/template.txt"))) {
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = br.readLine()) != null) {
            sb.append(line);
            sb.append(System.lineSeparator());
        }
        ddmTemplate.setScript(sb.toString());
        DDMTemplateLocalServiceUtil.addDDMTemplate(ddmTemplate);
    } catch (IOException e) {
        e.printStackTrace();
    } catch (SystemException e) {
        e.printStackTrace();
    }
    log_.info("Inside create template done");
}
The above snippets execute without any exceptions, but the structure and template do not appear in the content section of the Control Panel. Please suggest what is wrong.
There are a couple of issues with your code:
You are not setting all the required properties, such as groupId, companyId, classNameId, structureKey, the dates, etc.
There is no setName or setDescription method on DDMStructure or DDMTemplate accepting a String argument (Liferay 6.2 GA2). Instead, there are only setNameMap and setDescriptionMap methods, both accepting a Map<Locale, String>.
Use dynamic ids (structureId and templateId) in place of hard-coded ids, as follows:
DDMStructure ddmStructure = DDMStructureUtil.create(CounterLocalServiceUtil.increment());
and
DDMTemplate ddmTemplate = DDMTemplateUtil.create(CounterLocalServiceUtil.increment());
For classNameId, you can look it up from its value, like:
ClassName className = ClassNameLocalServiceUtil.getClassName("com.liferay.portlet.journal.model.JournalArticle");
long classNameId = className.getClassNameId();
Also, it is better to update the populated object rather than add it:
DDMStructureUtil.update(ddmStructure);
and
DDMTemplateUtil.update(ddmTemplate);
Additionally, if you have access to the ThemeDisplay object, you can get groupId, companyId, userId, userFullName from it. Also, set new Date() for createDate and modifiedDate properties.

View cloudinary images/vid through android app

I have looked in many places for a lead on how, or whether, it is possible to view images uploaded to Cloudinary by a specific tag through the Android app I am trying to build.
I was able to implement uploading by the user, adding a tag and a public id to the images, and retrieving that information, but I can't find anything on how to view these images. For example, I want the app to show a user all images with a specific tag (the username) that this user uploaded, with the ability to delete them, and also to view images uploaded by other users without any extra permission.
Is it possible, and how?
I ended up with this code, and I encountered a problem:
@Override
public void onClick(View v) {
    new JsonTask().execute("http://res.cloudinary.com/cloudNAme/video/list/xxxxxxxxxxxxxxxxxxx.json");
    // uploadExtract();
}
});
public class JsonTask extends AsyncTask<String, String, String> {
    @Override
    protected String doInBackground(String... params) {
        HttpURLConnection connection = null;
        BufferedReader reader = null;
        try {
            URL url = new URL(params[0]);
            connection = (HttpURLConnection) url.openConnection();
            connection.connect();
            InputStream stream = connection.getInputStream();
            reader = new BufferedReader(new InputStreamReader(stream));
            StringBuffer buffer = new StringBuffer();
            String line;
            while ((line = reader.readLine()) != null) {
                buffer.append(line);
            }
            return buffer.toString();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (connection != null) {
                connection.disconnect();
            }
            try {
                if (reader != null) {
                    reader.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return null;
    }
In the log I get the following:
03-28 12:36:14.726 20333-21459/net.we4x4.we4x4 W/System.err: java.io.FileNotFoundException: http://res.cloudinary.com/we4x4/video/list/3c42f867-8c3a-423b-89e8-3fb777ab76f8.json
I am not sure whether my understanding of the method is wrong or I am doing something else wrong, since the syntax matches the HTTP request in the Cloudinary Admin API docs and also the page suggested by Nadav:
https://support.cloudinary.com/hc/en-us/articles/203189031-How-to-retrieve-a-list-of-all-resources-sharing-the-same-tag-
Shouldn't this have returned JSON?
The following feature allows you to retrieve a JSON-formatted list of resources that share a common tag:
https://support.cloudinary.com/hc/en-us/articles/203189031-How-to-retrieve-a-list-of-all-resources-sharing-the-same-tag-
Note that image removal will require server-side code (e.g. Java), since deleting via Cloudinary requires a signature that is based on your API_SECRET.

MVC, Lucene, AzureDirectory, Application Architecture

I've just included Lucene search into my first application (ASP.NET MVC 5) but I'm having some issues indexing my database initially and in figuring out what the correct overall application architecture should be.
My current setup is:
Azure scheduler that calls into my web API to request indexing of all categories, and then separately all items. Another scheduler that runs once every 3 months or so to call Optimize (db won't change that often). The API grabs all entities from the DB and adds a new CloudQueueMessage to a CloudQueueClient.
A running WebJob pulls from the queue and calls back into my web API with an ID. The API grabs the item and adds it to or removes it from the Lucene index accordingly, using the AzureDirectory project (https://azuredirectory.codeplex.com/). This all functions as expected, and if I manually post IDs to my API method, everything is great. When it runs through the WebJob, however, my index somehow becomes corrupt, and any call, even one to get the number of indexed items, returns a 404 not found, looking for a file that doesn't exist in my directory (typically something like _1a.cfs).
I'm thinking it's some sort of locking issue, or that my objects aren't being disposed of properly, but I can't see where I'm going wrong, and I'm not sure this is the best way to structure the application and the workflow. The latter half of the code is pasted below; any help would be hugely appreciated!
WebJob:
public class Program
{
    static void Main()
    {
        JobHost host = new JobHost();
        host.RunAndBlock();
    }

    public static void IndexItemOrCategory([QueueTrigger("searchindexrequest")] string id)
    {
        string baseUri = ConfigurationManager.AppSettings["BaseUri"];
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri(baseUri);
            string response = client.GetStringAsync("api/search/indexitem/" + id).Result;
            Console.Out.WriteLine(response);
        }
    }
}
API:
public HttpResponseMessage IndexItem(string id)
{
    bool removed = false;
    try
    {
        // dbId is derived from the incoming id elsewhere in the controller
        var item = db.Items.Find(dbId);
        if (item != null && item.Active && !string.IsNullOrWhiteSpace(item.Name))
        {
            LuceneSearch.AddUpdateLuceneIndex(item);
        }
        else
        {
            LuceneSearch.RemoveLuceneIndexRecord<Item>(dbId);
            removed = true;
        }
    }
    catch (Exception exc)
    {
        return Request.CreateErrorResponse(HttpStatusCode.InternalServerError, exc);
    }
    return Request.CreateResponse(HttpStatusCode.OK, id + (removed ? " removed " : " indexed ") + "successfully.");
}
public class LuceneSearch
{
    private static CloudStorageAccount storageAccount =
        CloudStorageAccount.Parse(ConfigurationManager.ConnectionStrings["AzureConnection"].ToString());
    private static AzureDirectory azureDirectory =
        new AzureDirectory(storageAccount, "lucene-search", new RAMDirectory());

    public static void AddUpdateLuceneIndex(Item item)
    {
        AddToLuceneIndex(item);
    }

    private static void AddToLuceneIndex(Item toIndex)
    {
        using (StandardAnalyzer analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30))
        using (IndexWriter writer = new IndexWriter(azureDirectory, analyzer,
            new IndexWriter.MaxFieldLength(IndexWriter.DEFAULT_MAX_FIELD_LENGTH)))
        {
            string Id = "Item" + toIndex.ItemID.ToString();

            // Remove any existing index entry
            var searchQuery = new TermQuery(new Term("Id", Id));
            writer.DeleteDocuments(searchQuery);

            string name = string.IsNullOrWhiteSpace(toIndex.CommonName) ? toIndex.Name : toIndex.Name + ", " + toIndex.CommonName;
            if (!string.IsNullOrWhiteSpace(name))
            {
                // Create new entry
                Document doc = new Document();
                doc.Add(new Field("Id", Id, Field.Store.YES, Field.Index.NOT_ANALYZED));
                doc.Add(new Field("Name", toIndex.Name, Field.Store.YES, Field.Index.ANALYZED));
                if (!string.IsNullOrWhiteSpace(toIndex.Description1))
                    doc.Add(new Field("Description", toIndex.Description1, Field.Store.YES, Field.Index.ANALYZED));
                doc.Add(new Field("CategoryName", toIndex.Category.Name, Field.Store.YES, Field.Index.ANALYZED));
                doc.Add(new Field("SeoUrl", toIndex.SeoUrl, Field.Store.YES, Field.Index.NOT_ANALYZED));
                if (!string.IsNullOrWhiteSpace(toIndex.Image1))
                    doc.Add(new Field("Image", toIndex.Image1, Field.Store.YES, Field.Index.NOT_ANALYZED));
                doc.Add(new Field("CategorySeoUrl", toIndex.Category.SeoUrl, Field.Store.YES, Field.Index.NOT_ANALYZED));
                writer.AddDocument(doc);
            }
            // No explicit writer.Dispose() here; the using block already disposes it.
        }
    }
    ...
}
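One pattern worth trying (a sketch, not a verified fix for this exact corruption): Lucene.NET expects a single IndexWriter per directory, so opening and disposing a fresh writer for every queue message, possibly from concurrent WebJob invocations, is a plausible source of the missing-.cfs errors. A hedged sketch of a shared, lazily created writer (azureDirectory is the field from the code above; the member names here are illustrative):

```csharp
public class LuceneSearch
{
    private static readonly object writerLock = new object();
    private static IndexWriter sharedWriter; // one writer for the whole process

    private static IndexWriter GetWriter()
    {
        lock (writerLock)
        {
            if (sharedWriter == null)
            {
                var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
                sharedWriter = new IndexWriter(azureDirectory, analyzer,
                    new IndexWriter.MaxFieldLength(IndexWriter.DEFAULT_MAX_FIELD_LENGTH));
            }
            return sharedWriter;
        }
    }
}
```

With this shape, each indexing call uses GetWriter(), calls Commit() after its changes, and the writer is disposed only on application shutdown rather than per message.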

NetDataContractSerializer produces invalid XML

My NetDataContractSerializer seems to be confused: The end of the XML appears twice:
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns:i="http://www.w3.org/2001/XMLSchema-instance" z:Id="1"
[...]
<d2p1:anyType i:nil="true" />
</d2p1:_items>
<d2p1:_size>2</d2p1:_size>
<d2p1:_version>2</d2p1:_version>
</d2p1:items>
</ProjectParts>
<ProjectPath z:Id="31">D:\t10\</ProjectPath>
</Project>ze>
<d2p1:_version>3</d2p1:_version>
</d2p1:items>
<d2p1:_monitor xmlns:d7p1="http://schemas.datacontract.org/2004/07/System.Collections.ObjectModel" z:Id="33">
<d7p1:_busyCount>0</d7p1:_busyCount>
</d2p1:_monitor>
</Elements>
<Project z:Ref="1" i:nil="true" xmlns="http://schemas.datacontract.org/2004/07/Modules.WorkspaceManager.Types" />
</d2p1:anyType>
<d2p1:anyType i:nil="true" />
<d2p1:anyType i:nil="true" />
</d2p1:_items>
<d2p1:_size>2</d2p1:_size>
<d2p1:_version>2</d2p1:_version>
</d2p1:items>
</ProjectParts>
<ProjectPath z:Id="34">D:\t10\</ProjectPath>
</Project>
As you can see, there is some serious stammering going on. It happens occasionally and I can't reproduce the error. Any ideas? Could it be caused by the file being opened in VS while it's being written?
I serialize my object like this:
private void SerializeToFile(object objectToSerialize)
{
Stream stream = null;
try
{
stream = File.Open(_fileName, FileMode.OpenOrCreate, FileAccess.Write);
using (var writer = XmlWriter.Create(stream, new XmlWriterSettings { Indent = true }))
{
NetDataContractSerializer serializer = new NetDataContractSerializer();
serializer.WriteObject(writer, objectToSerialize);
}
}
finally
{
if (stream != null) stream.Close();
}
}
And the class serialized looks like this:
[DataContract(IsReference = true)]
public class Project : IProject
{
[DataMember] public string ProjectPath { get; set; }
[DataMember] public string ProjectName { get; set; }
[DataMember] public Collection<IProjectPart> ProjectParts { get; set; }
public T GetPart<T>() where T : IProjectPart
{
return ProjectParts.OfType<T>().First();
}
public void RegisterPart<T>(T part) where T : IProjectPart
{
if (ProjectParts.Any(p => p.GetType().IsInstanceOfType(part))) throw new InvalidOperationException("Part already registered.");
ProjectParts.Add(part);
part.Project = this;
}
public void Load()
{
foreach (var projectPart in ProjectParts)
{
projectPart.Load();
}
}
public void Unload()
{
foreach (var projectPart in ProjectParts)
{
projectPart.Unload();
}
}
public void Save()
{
foreach (var projectPart in ProjectParts)
{
projectPart.Save();
}
}
public Project()
{
ProjectParts = new Collection<IProjectPart>();
}
}
Thank you!
The issue is simple: you serialize your object over and over, each time with a different number of elements in the IProjectPart collection. The File.Open method does not clear the file of its previous content, so assume the following steps:
i) serialize an object with two IProjectPart instances; let's say it takes 10 lines of the XML file
ii) serialize the object again with one IProjectPart instance in the collection; this time it takes 8 lines of the XML file
iii) lines 9 and 10 still hold the old XML data, since they are not cleared between serialization attempts, hence the duplicated-trash-looking XML data.
Try it for yourself and you will see exactly how those multiple tags are generated.
NOTE: The 8 and 10 lines are approximate values for my implementation.
NOTE 2: I suggest a using statement for the stream inside the serialization method (as for all IDisposable objects), and FileMode.Create so the file is truncated before every write:
private void SerializeToFile(object objectToSerialize)
{
    // FileMode.Create truncates the file, so no stale bytes survive a shorter write
    using (var stream = File.Open(_fileName, FileMode.Create, FileAccess.Write))
    using (var writer = XmlWriter.Create(stream, new XmlWriterSettings { Indent = true }))
    {
        NetDataContractSerializer serializer = new NetDataContractSerializer();
        serializer.WriteObject(writer, objectToSerialize);
    }
}
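The stale-tail effect can be reproduced without the serializer at all. A minimal sketch (the file name is hypothetical):

```csharp
using System;
using System.IO;

class StaleTailDemo
{
    static void Main()
    {
        string path = "demo.xml"; // hypothetical file name
        File.WriteAllText(path, "<a>12345</a>"); // 12 bytes on disk

        // Overwrite with shorter content, but without truncating:
        using (var s = File.Open(path, FileMode.OpenOrCreate, FileAccess.Write))
        using (var w = new StreamWriter(s))
        {
            w.Write("<b/>"); // only the first 4 bytes are replaced
        }

        Console.WriteLine(File.ReadAllText(path)); // prints "<b/>2345</a>"
    }
}
```

The old bytes past the new end of the write survive, which is exactly the kind of trailing fragment shown in the question's XML.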

The given key was not present in the dictionary Plugin CRM 2011 online

Could anyone tell me what I am doing wrong? I've been trying for over a week now.
The error and code follow.
Unexpected exception from plug-in (Execute):
Microsoft.Crm.Sdk.Samples.ProjectTotalAmount:
System.Collections.Generic.KeyNotFoundException: The given key was not
present in the dictionary.
namespace Microsoft.Crm.Sdk.Samples
{
    public class ProjectTotalAmount : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            Microsoft.Xrm.Sdk.IPluginExecutionContext context = (Microsoft.Xrm.Sdk.IPluginExecutionContext)serviceProvider.GetService(typeof(Microsoft.Xrm.Sdk.IPluginExecutionContext));
            if (context.InputParameters.Contains("Target") &&
                context.InputParameters["Target"] is Entity)
            {
                IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
                IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

                //create a service context
                var ServiceContext = new OrganizationServiceContext(service);
                //ITracingService tracingService = localContext.TracingService;

                Entity entity = (Entity)context.InputParameters["Target"];
                if (entity.LogicalName == "new_project")
                {
                    Guid projectGUID = ((EntityReference)entity["new_project"]).Id;
                    Entity a = service.Retrieve("new_project", ((EntityReference)entity["new_project"]).Id, new ColumnSet(true));
                    decimal totalAmount = 0;
                    try
                    {
                        //fetchxml to get the sum total of estimatedvalue
                        string new_amount_sum = string.Format(@"
                            <fetch distinct='false' mapping='logical' aggregate='true'>
                              <entity name='new_projectitem'>
                                <attribute name='new_amount' alias='new_amount' aggregate='sum' />
                                <filter type='and'>
                                  <condition attribute='new_projectid' operator='eq' value='{0}' uiname='' uitype='' />
                                </filter>
                              </entity>
                            </fetch>", a.Id);
                        EntityCollection new_amount_sum_result = service.RetrieveMultiple(new FetchExpression(new_amount_sum));
                        foreach (var c in new_amount_sum_result.Entities)
                        {
                            totalAmount = ((Money)((AliasedValue)c["new_amount_sum"]).Value).Value;
                        }
                        //updating the field on the account
                        Entity acc = new Entity("new_project");
                        acc.Id = a.Id;
                        acc.Attributes.Add("new_amount", new Money(totalAmount));
                        service.Update(acc);
                    }
                    catch (FaultException ex)
                    {
                        throw new InvalidPluginExecutionException("An error occurred in the plug-in.", ex);
                    }
                }
            }
        }
    }
}
The settings for the plugin:
Post-validation
Synchronous execution mode
Server deployment
A few pointers to help you before we start looking at your code:
This error usually means your code is referring to an attribute that does not exist (or does not have a value).
You haven't said on which message your plugin is registered. This may affect the parameters available at runtime.
You've commented out your tracingService variable, but it can help you see at least how far your code has got. Reinstate it and add a few lines such as the following to track your progress prior to the failure. This information will be written to the error log that is offered to you in the client-side exception dialog:
tracingService.Trace("Project Id is {0}", projectGUID);
and
tracingService.Trace("Number of returned records: {0}", new_amount_sum_result.Entities.Count);
The following line seems entirely redundant since you are only using the attribute Id from a and this already exists as entity.Id:
Entity a = service.Retrieve("new_project", ((EntityReference)entity["new_project"]).Id, new ColumnSet(true));
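To make the first pointer concrete, here is a hedged sketch of defensive attribute access (the attribute name "new_project" is taken from the plugin above; whether it is present on the Target at all depends on the message the plugin is registered on):

```csharp
// Entity's indexer throws KeyNotFoundException when the attribute is absent,
// so guard the access before reading it.
if (!entity.Contains("new_project"))
{
    throw new InvalidPluginExecutionException(
        "The target does not carry a 'new_project' attribute.");
}

// GetAttributeValue<T> returns default(T) instead of throwing,
// so a null check replaces the dictionary exception.
EntityReference projectRef = entity.GetAttributeValue<EntityReference>("new_project");
if (projectRef == null)
{
    throw new InvalidPluginExecutionException("'new_project' has no value.");
}
Guid projectGUID = projectRef.Id;
```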
