When upgrading classic Domino applications to XPages, one particular problem arises constantly: what to do with the Print statements in existing agents that write back directly to the browser? I have 200 agents in my NSF file, all of which use Print statements in their code.
I have investigated and found this link: http://www.wissel.net/blog/d6plinks/SHWL-8SF7AH
but it is not possible for me to change the agent code, as the agents are also used from forms.
I have to reuse these agents from XPages on a button click and also have to pass a document to them.
Is there any way or alternative that can solve my problem?
There is the Agent.runWithDocumentContext(doc:NotesDocument) method, which can run an agent from an XPage and pass it an in-memory document. I create the in-memory document like this:
var doc = database.createDocument();
doc.replaceItemValue("StartDate", startDate.getDateOnly());
doc.replaceItemValue("EndDate", endDate.getDateOnly());
doc.replaceItemValue("ReportName", reportName);
var agent:NotesAgent = database.getAgent("(" + reportName + ")");
agent.runWithDocumentContext(doc);
I can pass this in-memory document to the agent. The issue I am currently facing is that my agents print directly to the browser, which I assume is not possible from an XPage.
Is there an alternative way to pass an in-memory document to an agent and still have its output printed directly to the browser through XPages?
If you want to use XPages, then USE XPages, meaning migrate the agents to the XPages way of doing things. There can be a lot of coexistence between XPages and the Notes client, or I suppose even classic Domino web. But if you're set on keeping 200 agents, which are NOT really part of XPages best practices, then it sounds to me like you shouldn't be using XPages at all.
EDIT:
This link:
http://www-10.lotus.com/ldd/ddwiki.nsf/dx/XPages_and_Calling_Agents_Using_an_In-Memory_Document
talks about calling an agent from XPages and being able to use the in-memory document. Maybe that will help, but if it's heavily used, performance will not be great, since the agent loads and unloads on each call. It's still a bad idea.
Create a Java class that calls your agent, read the agent's output from an input stream, and display it in a computed field in XPages.
Here is a sample Java class that will retrieve the output from your agent:
package com.thomas;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;

public class getAgentData {

    public String getData() throws IOException {
        BufferedReader in = null;
        try {
            // Call the agent over HTTP so that its Print output becomes the response body
            URL url = new URL("http://localhost/mydatabase.nsf/myagent?openagent");
            in = new BufferedReader(new InputStreamReader(url.openStream()));
            StringBuilder content = new StringBuilder();
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                content.append(inputLine);
            }
            return content.toString();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        } finally {
            if (in != null) {
                in.close();
            }
        }
        return null;
    }
}
If you register your Java class as a managed bean in faces-config.xml, you can call it from a computed field in your XPage like this:
<xp:text escape="true" id="computedField1" value="#{javascript:getAgentData.getData()}"></xp:text>
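For reference, the corresponding faces-config.xml entry might look roughly like this (the bean name has to match the name used in the computed field above; the request scope is just a suggestion):
<faces-config>
  <managed-bean>
    <managed-bean-name>getAgentData</managed-bean-name>
    <managed-bean-class>com.thomas.getAgentData</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
  </managed-bean>
</faces-config>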
You need to change your agent a little: instead of Print "Something" you would use someobj.print "Something", which should be a manageable task using search and replace. The detailed steps have been documented in this blog entry.
Now you state that you need to dual-use the agent. That's not that hard once you move beyond copy-and-paste programming.
When you check the class, it looks for the document context. When you call the agent directly, the document context is not there. So you check whether you have the document: if not, it is a classic call and the class prints; if it is there, you save the output to the context.
The code is myObject.printResult = true to print, or you set it to true by default (but then you pollute your log :-) ).
If you provide a context document when calling the agent, you would also provide an item (e.g. DontPrint=1) from your XPages code and check for that one. If it is there, only save to the context; otherwise add the print statement.
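To illustrate the idea, here is a minimal LotusScript sketch of such a dual-use output class. It is not the exact code from the blog entry; the class, item, and variable names are only examples:
Class WebOutput
    Private contextDoc As NotesDocument
    Private useContext As Boolean
    Private buffer As String

    Sub New(session As NotesSession)
        Set contextDoc = session.DocumentContext
        ' The XPage marks the in-memory document it passes in with a DontPrint item
        If Not (contextDoc Is Nothing) Then
            useContext = contextDoc.HasItem("DontPrint")
        End If
    End Sub

    Public Sub print(txt As String)
        If useContext Then
            ' Called from XPages: collect the output on the context document
            buffer = buffer & txt
            Call contextDoc.ReplaceItemValue("HTMLResult", buffer)
        Else
            ' Classic call: write straight back to the browser
            Print txt
        End If
    End Sub
End Class
On the XPages side, after agent.runWithDocumentContext(doc) returns, you could then read the collected output with doc.getItemValueString("HTMLResult") and push it into a computed field.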
Let us know how it goes.
I started working with Coded UI a few months ago to automate a desktop application (WPF).
I am checking out the best ways to create a framework for my application.
As I have seen with other automation tools, I feel the heart of an automation framework built with any UI-based tool is the way its object repository is created, i.e. how well the UI objects are defined. A clean and well-defined object repository always proves very helpful when it comes to updating your tests.
I am trying to discover the best way to store my UI objects so that, in case of any UI changes in my application, I have to put in minimal effort to update my automation tests.
Also, if an object changes in the application, updating it in only one place should solve the problem.
This can be any kind of change, such as:
-> a change in just a property (this, I feel, would be very easy to update in the automation test; the easiest way would be to simply update the .uitest file (the XML file), if possible)
-> a change in hierarchy or position
-> an entirely new object added
For the second and third kinds of change, updating the scripts becomes a difficult job, especially if the UI object is referred to in many places, in many test methods or modules.
Also, I have generally seen that in test methods, variable declarations are used to create references to the UIMap objects, and those variables are then used in the test method code.
So, in this case, if the UI of my application changes, I will have to update the variable declaration in each of the test methods. I want to reduce this effort to changing the variable declaration in only one place. Of course, I cannot have all the code inside only one test method. One approach that came to my mind is this:
Can't I simply have one common place for all these variable declarations? We can give a unique and understandable name to each UI object. The declarations would look like:
UITabPage UITabPage = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage;
WpfRow UIRow = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage.UIEquipmentDetailsTable.UIRow;
WpfText UIEquipmentTagText = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage.UIEquipmentDetailsTable.UIRow.UITagCell.UIEquipmentTagText;
WpfCheckBox UIEquipmentCheckBox = this.UIMap.UISimWindow.UISelectEquipmentTabList.UITabPage.UIEquipmentDetailsTable.UIRow.UICheckBoxCell.UICheckBox;
....
....
and use these variables wherever required. Hence, in case of any changes, there will be only one place where you need to update these objects.
But for this, these variables must be made static. What could be the problem with making these object variables static?
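To illustrate, the kind of central place I have in mind would be something like the following (the wrapper class is hypothetical; the properties just wrap the UIMap paths shown above):
// Hypothetical central repository that keeps all UIMap paths in one place
public static class UIObjectRepository
{
    private static readonly UIMap Map = new UIMap();

    public static UITabPage TabPage
    {
        get { return Map.UISimWindow.UISelectEquipmentTabList.UITabPage; }
    }

    public static WpfRow Row
    {
        get { return TabPage.UIEquipmentDetailsTable.UIRow; }
    }

    public static WpfText EquipmentTagText
    {
        get { return Row.UITagCell.UIEquipmentTagText; }
    }

    public static WpfCheckBox EquipmentCheckBox
    {
        get { return Row.UICheckBoxCell.UICheckBox; }
    }
}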
Please provide your suggestions on this topic. Maybe what I am thinking is not possible or practical; I just want to choose the best way to start before I go too far with the automation scripts and realize later that my approach wasn't a good one.
Thanks in Advance,
Shruti
Look into using descriptive programming instead of using the UIMaps.
Make a static class with generic functions to assist. Here are some examples of how to set it up.
For example:
// Example static helper class (the class name is arbitrary)
public static class UIHelpers
{
    public static WinWindow parentwin(string ParentControlName)
    {
        var win = new WinWindow();
        win.SearchProperties.Add("ControlName", ParentControlName);
        return win;
    }

    public static WinWindow childwin(string ChildWinControlName, string ParentControlName)
    {
        var win = new WinWindow(parentwin(ParentControlName));
        win.SearchProperties.Add("ControlName", ChildWinControlName);
        return win;
    }

    public static WinButton button(string ButtonName, string ChildWinControlName, string ParentControlName)
    {
        var win = childwin(ChildWinControlName, ParentControlName);
        var btn = new WinButton(win);
        btn.SearchProperties.Add("Name", ButtonName);
        return btn;
    }

    public static void ClickButton(string ButtonName, string ChildWinControlName, string ParentControlName)
    {
        Mouse.Click(button(ButtonName, ChildWinControlName, ParentControlName));
    }

    public static void ChangeFocus(WinWindow NewFocus)
    {
        NewFocus.SetFocus();
    }

    // Convenience overload that looks the window up by its control names
    public static void ChangeFocus(string ChildWinControlName, string ParentControlName)
    {
        childwin(ChildWinControlName, ParentControlName).SetFocus();
    }
}
UIHelpers.ChangeFocus(UIHelpers.childwin("WelcomeForm", "MainForm"));
UIHelpers.ClickButton("&OK", "WelcomeForm", "MainForm");
So, I am working on a class called DMFWriteExportData and trying to get it to run in batch.
I am at a point where I need to figure out a way to get rid of fieldControl, because it does not let me run the class on the server and throws an error, presumably because it is not supposed to run on the server (not sure).
Error: "The method Dialog Control.control cannot be called from the server; use methods on the Dialog Field class instead."
public Object dialog()
{
    DialogRunbase dialog = new DialogRunbase("#DMF372", this);
    FormStringControl control;

    dialogExecution = dialog.addFieldValue(extendedTypeStr(dMFExecutionId), executionId);
    control = dialogExecution.fieldControl();
    control.mandatory(true);
    control.displayLength(24);
    control.registerOverrideMethod(methodstr(FormStringControl, lookup), methodstr(DMFWriteExecutionParameters, executionIdLookup), this);
    control.registerOverrideMethod(methodstr(FormStringControl, modified), methodstr(DMFWriteExecutionParameters, executionIdModified), this);

    dialogdescription = dialog.addFieldValue(extendedTypeStr(description), DMFExecution::find(executionId).Description);
    dialogdescription.enabled(false);

    return dialog;
}
I am wondering:
1. Is it actually true that this class cannot be set to run on the server when using control.registerOverrideMethod?
2. If yes, what would be the ideal solution to overcome this situation? Is there any way I can create custom lookups? I see there is a method called registerOverrideMethod in the DialogField class.
Any help would be appreciated.
Thanks,
Khosla
The reason why you cannot (and should not) run the code above in batch is that it uses dialog controls that only exist on the client side. You should never run this kind of code on the server. Please check the RunOn property of your class and set it to CalledFrom.
However, I assume you are using RunBaseBatch. If you are on AX 2012, you should use the SysOperation framework instead.
When using RunBaseBatch, all the code lives in the same class. This way, you are mixing client-side code (the main method, the dialog method, etc.) with the code that should run on the server (the run method). For this reason you should set the RunOn property of the class to CalledFrom, not Server.
You can solve this by using SysOperation, which applies the Model-View-Controller (MVC) pattern and neatly separates the two.
For an introduction to SysOperation, check my blog here:
AX2012: SysOperation introduction
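As for the custom lookup itself, the error message already points in a direction: register the overrides on the DialogField (dialogExecution) rather than on its fieldControl(), so the dialog method no longer touches a client-side control. A rough, untested sketch along those lines, based on the code above (the control-level calls such as mandatory() and displayLength() are left out and may need a different home):
public Object dialog()
{
    DialogRunbase dialog = new DialogRunbase("#DMF372", this);

    dialogExecution = dialog.addFieldValue(extendedTypeStr(dMFExecutionId), executionId);

    // Register the overrides on the DialogField itself instead of on fieldControl()
    dialogExecution.registerOverrideMethod(methodstr(FormStringControl, lookup), methodstr(DMFWriteExecutionParameters, executionIdLookup), this);
    dialogExecution.registerOverrideMethod(methodstr(FormStringControl, modified), methodstr(DMFWriteExecutionParameters, executionIdModified), this);

    dialogdescription = dialog.addFieldValue(extendedTypeStr(description), DMFExecution::find(executionId).Description);
    dialogdescription.enabled(false);

    return dialog;
}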
Is there a way we can use ObjectContext with DbContext's ModelBuilder? We don't want to use POCOs, because we have customized property code that does not modify the entire object on update but only updates the modified properties. Also, we have lots of serialisation and auditing code that uses EntityObject.
Since POCO does create a proxy with EntityObject, we want our classes to be derived from EntityObject; we don't want proxies. We also heavily use CreateSourceQuery. The only problem is the EDMX file and its big connection-string syntax in web.config.
Is there any way I can get rid of the EDMX file? It would be useful, as we could dynamically compile new classes based on reverse-engineering the database.
I would also like to use DbContext with EntityObject instead of POCOs.
Internal Logic:
1. Access the modified properties in SaveChanges (available via ObjectStateEntry) and save them to an audit record with old and new values.
2. Most of the time we only need to check an Any condition on a navigation property, for example:
User.EmailAddresses.CreateSourceQuery()
.Any( x=> x.EmailAddress == givenAddress);
3. Access property attributes, such as XmlIgnore etc.; we rely heavily on attributes defined on the properties.
A proxy for a POCO is a dynamically created class which derives from (inherits from) the POCO. It adds functionality previously found in EntityObject, namely lazy loading and change tracking, as long as the POCO meets the requirements. A POCO or its proxy does not contain an EntityObject as the question suggests; rather, a proxy provides the functionality of EntityObject. You cannot (AFAIK) use ModelBuilder with EntityObject derivatives, and you cannot get to an underlying EntityObject from a POCO or a proxy, since there isn't one as such.
I don't know which features of ObjectContext your existing serialisation and auditing code uses, but you can get to the ObjectContext from a DbContext by casting the DbContext to IObjectContextAdapter and accessing the IObjectContextAdapter.ObjectContext property.
EDIT:
1. Access the modified properties in SaveChanges (available via ObjectStateEntry) and save them to an audit record with old and new values
You can achieve this with POCOs by using DbContext.ChangeTracker. First you call DbContext.ChangeTracker.DetectChanges() to detect the changes (if you use proxies this is not needed, but it can't hurt), and then you use DbContext.ChangeTracker.Entries().Where(e => e.State != EntityState.Unchanged && e.State != EntityState.Detached) to get the list of DbEntityEntry objects for the changed entities for auditing. Each DbEntityEntry has OriginalValues and CurrentValues, and the actual entity is in the Entity property.
You also have access to ObjectStateEntry, see below.
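For point 1, a minimal sketch of what that could look like inside a DbContext-derived class (the Console.WriteLine is just a stand-in for your real auditing code):
using System;
using System.Data;
using System.Data.Entity;
using System.Linq;

public class AuditingContext : DbContext   // stands in for your own context class
{
    public override int SaveChanges()
    {
        ChangeTracker.DetectChanges();

        foreach (var entry in ChangeTracker.Entries()
                                           .Where(e => e.State == EntityState.Modified))
        {
            foreach (var propertyName in entry.OriginalValues.PropertyNames)
            {
                var oldValue = entry.OriginalValues[propertyName];
                var newValue = entry.CurrentValues[propertyName];

                if (!Equals(oldValue, newValue))
                {
                    // Replace with your own audit logging
                    Console.WriteLine("{0}.{1}: '{2}' -> '{3}'",
                        entry.Entity.GetType().Name, propertyName, oldValue, newValue);
                }
            }
        }

        return base.SaveChanges();
    }
}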
2. Most of the time we only need to check an Any condition on a navigation property, for example:
User.EmailAddresses.CreateSourceQuery().Any( x=> x.EmailAddress == givenAddress);
You can use CreateSourceQuery() with DbContext by utilizing IObjectContextAdapter as described previously. Once you have the ObjectContext, you can get to the source query for a related end like this:
public static class DbContextUtils
{
    public static ObjectQuery<TMember> CreateSourceQuery<TEntity, TMember>(
        this IObjectContextAdapter adapter,
        TEntity entity,
        Expression<Func<TEntity, ICollection<TMember>>> memberSelector) where TMember : class
    {
        // Find the tracked entry and relationship manager for the entity
        var objectStateManager = adapter.ObjectContext.ObjectStateManager;
        var objectStateEntry = objectStateManager.GetObjectStateEntry(entity);
        var relationshipManager = objectStateManager.GetRelationshipManager(entity);

        // Resolve the navigation property named in the selector expression
        var entityType = (EntityType)objectStateEntry.EntitySet.ElementType;
        var navigationProperty = entityType.NavigationProperties[(memberSelector.Body as MemberExpression).Member.Name];

        // Get the related end and ask it for its source query
        var relatedEnd = relationshipManager.GetRelatedEnd(navigationProperty.RelationshipType.FullName, navigationProperty.ToEndMember.Name);
        return ((EntityCollection<TMember>)relatedEnd).CreateSourceQuery();
    }
}
This method uses no dynamic code and is strongly typed since it uses expressions. You use it like this:
myDbContext.CreateSourceQuery(invoice, i => i.details);
I have a problem and hope someone can help me.
I'm trying to start multiple threads from an XAgent (a non-rendered XPage):
import lotus.domino.Database;
import lotus.domino.NotesThread;
import lotus.domino.Session;
import lotus.domino.View;

public class ImportThread extends NotesThread {

    Session currentSession;
    Database maildb;

    public ImportThread(String maildb, String server) {
        try {
            currentSession = DominoAccess.getCurrentSession();
            this.maildb = currentSession.getDatabase(server, maildb);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void runNotes() {
        View v = maildb.getView("$Calendar");
    }
}
In this version I cannot access the view; I only get null back.
I've tried a version with plain Java threads; it was not really better.
Then I found something on OpenNTF:
http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=Threads%20and%20Jobs
but there I got an AccessControlException.
I have no more ideas; I hope that someone knows how to create an XAgent with multiple threads.
As Egor wrote, you need to change the Java policy file if you run the Java code from an NSF. You don't have to do this if you deploy your Java code as an OSGi plugin. See the documentation of that OpenNTF project.
AFAIK, Notes objects should not be shared between threads. So instead of handing over a Database mailDB you should hand over a String mailDBName and instantiate all Notes objects inside their own thread. You also need to watch the run time: if your XAgent waits for the threads to conclude, you should be fine, but if it is a 'fire-and-forget' approach you need to start it from something more persistent, like a managed bean in the session scope.
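A rough sketch of that idea, assuming the thread only gets plain strings and creates its own session inside runNotes (error handling is kept minimal, and whether NotesFactory.createSession() is allowed still depends on the java.policy / OSGi setup mentioned above):
import lotus.domino.Database;
import lotus.domino.NotesFactory;
import lotus.domino.NotesThread;
import lotus.domino.Session;
import lotus.domino.View;

public class ImportThread extends NotesThread {

    private final String server;
    private final String mailDbName;

    public ImportThread(String mailDbName, String server) {
        // Only pass plain strings into the thread, never Notes objects
        this.mailDbName = mailDbName;
        this.server = server;
    }

    public void runNotes() {
        Session session = null;
        try {
            // Create the session (and every other Notes object) inside this thread
            session = NotesFactory.createSession();
            Database maildb = session.getDatabase(server, mailDbName);
            View v = maildb.getView("$Calendar");
            // ... work with the view here ...
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (session != null) {
                    session.recycle();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}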
Hope that helps
I have a piece of code which changes the XSLT of a SearchResultWebPart on a SharePoint 2010 Search Center results page (spFileItem is the SPFile of the search results page):
SPLimitedWebPartManager wpManager = spFileItem.GetLimitedWebPartManager(PersonalizationScope.Shared);
foreach (WebPart wpItem in wpManager.WebParts)
{
    if (wpItem is CoreResultsWebPart)
    {
        ((CoreResultsWebPart)wpItem).UseLocationVisualization = false;
        ((CoreResultsWebPart)wpItem).Xsl = someXSL;
        wpManager.SaveChanges(wpItem);
    }
}
spFileItem.Update();
spFileItem.CheckIn(Consts.CheckInComment, SPCheckinType.MajorCheckIn);
But this code doesn't work when it is called from FeatureActivated (it gives an InvalidOperationException: incorrect object state). However, it works perfectly in a console application.
After some digging with reflection, I found out that there is a piece of code inside the SearchResultWebPart which checks whether the web part has been initialized; it throws the above-mentioned exception when the Xsl property is set. Does anybody know how to work around this problem? It would be quite convenient for me to do the XSL change in FeatureActivated...
I found a solution to my problem, but it uses a different way of setting the XSL for the SearchResultBaseWebPart:
SPLimitedWebPartManager wpManager = spFileItem.GetLimitedWebPartManager(PersonalizationScope.Shared);
foreach (WebPart wpItem in wpManager.WebParts)
{
if (wpItem is CoreResultsWebPart)
{
((CoreResultsWebPart)wpItem).UseLocationVisualization = false;
((CoreResultsWebPart)wpItem).XslLink = spFileItem.Web.Url + @"/_layouts/XSL/MYXSL.xsl";
wpManager.SaveChanges(wpItem);
}
}
spFileItem.Update();
spFileItem.CheckIn(Consts.CheckInComment, SPCheckinType.MajorCheckIn);
I feel you are mixing up a few things in the question. You would like to set the Xsl property of the CoreResultsWebPart. This class has no direct implementation of the Xsl property, so it inherits the implementation of its parent class (SearchResultBaseWebPart). The Xsl property setter tries to set the XslHash property (but only if we are after the OnInit that sets _BeforeOnInit = false;), and the setter of the XslHash property throws an InvalidOperationException, but this exception should be caught by the try/catch block in the Xsl property setter anyway. I don't see any other potential source of an InvalidOperationException in the code.
You should check the patch level of your SP2010 (is it SP1, one of the cumulative updates, or the original version?) and try to activate the feature from different contexts (from the web site, STSADM, or PowerShell).
But first I suggest adding a try/catch block to your feature receiver code, tracing out the error details (like the stack trace), and monitoring the results with DebugView.
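For example, a minimal sketch of that kind of tracing in the SPFeatureReceiver (System.Diagnostics.Trace.WriteLine is picked up by DebugView through OutputDebugString; the try body is just a placeholder for the web part code from the question):
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    try
    {
        // ... the web part / XSL update code from the question goes here ...
    }
    catch (Exception ex)
    {
        // The default trace listener writes to OutputDebugString, which DebugView captures
        System.Diagnostics.Trace.WriteLine("XSL update failed: " + ex);
        throw;
    }
}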