I use CSOM to import documents to SharePoint.
While running my importer I noticed that it uses more and more memory. Analysis showed that the ClientContext holds a dictionary with all ObjectPathIdentity objects.
This becomes a memory problem when I import a lot of documents into different folders in SharePoint.
Is there a way to clear this dictionary or to disable this caching mechanism?
I solved the same problem. Try this, maybe it helps you:
clientContext.RequestTimeout = -1;
clientContext.DisableReturnValueCache = true;
// ... execute query ...
clientContext.Dispose();
clientContext = null;
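For older CSOM versions where DisableReturnValueCache is not available, a common workaround is to dispose and recreate the ClientContext periodically, e.g. once per batch of folders, so the internal identity dictionary cannot grow without bound. A minimal sketch; siteUrl, folderBatches and UploadBatch are hypothetical names standing in for your own code:

```csharp
// Sketch only: recreate the ClientContext per batch so its internal
// object-path/identity dictionary is released with each Dispose().
// siteUrl, folderBatches and UploadBatch are placeholder names.
foreach (var batch in folderBatches)
{
    using (var ctx = new ClientContext(siteUrl))
    {
        UploadBatch(ctx, batch); // your CSOM upload logic for one batch
        ctx.ExecuteQuery();
    } // Dispose() drops everything the context has cached
}
```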
When upgrading classic Domino applications to XPages, one particular problem arises constantly: what to do with the Print statements in existing agents that write back directly to the browser? I have 200 agents in my NSF file which all use Print statements in their code.
I investigated and found the link http://www.wissel.net/blog/d6plinks/SHWL-8SF7AH,
but it is not possible for me to change the agent code, as the agents are also used from forms.
I have to re-use these agents from XPages on a button click and also have to pass a document.
Is there any way or alternative that can solve my problem? Help is required.
There is the Agent.runWithDocumentContext(doc:NotesDocument) method, which can run an agent from an XPage and pass it an in-memory document. I create the in-memory document like this:
var doc = database.createDocument();
doc.replaceItemValue("StartDate",startDate.getDateOnly())
doc.replaceItemValue("EndDate",endDate.getDateOnly())
doc.replaceItemValue("ReportName",reportName)
var agent:NotesAgent = database.getAgent("("+reportName+")");
agent.runWithDocumentContext(doc);
I can pass this in-memory document to the agent. But the issue I am currently facing is that my agents print directly to the browser, which I assume is not possible from an XPage.
Is there an alternative way to pass an in-memory document to an agent and still print directly to the browser through XPages?
If you want to use XPages, then use XPages. Meaning: migrate the agents to the XPages way of doing things. There can be a lot of coexistence between XPages and the Notes client, or I suppose even classic Domino web. But if you're set on keeping 200 agents, which are not really part of XPages best practices, then it sounds to me like you shouldn't be using XPages at all.
EDIT:
This link:
http://www-10.lotus.com/ldd/ddwiki.nsf/dx/XPages_and_Calling_Agents_Using_an_In-Memory_Document
talks about calling an agent from XPages and being able to use the in-memory document. Maybe that will help, but if it's heavily used, performance will not be great, since an agent loads and unloads for each call. It's still a bad idea to do.
Create a Java class that calls your agent, read the output from your agent into an input stream, and display the stream in a computed field in XPages.
Here is a sample Java class that will retrieve the output from your agent:
package com.thomas;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.MalformedURLException;
import java.net.URL;

public class getAgentData {
    public String getData() throws IOException {
        try {
            // Call the agent over HTTP and collect whatever it prints
            URL url = new URL("http://localhost/mydatabase.nsf/myagent?openagent");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(url.openStream()));
            StringBuilder content = new StringBuilder();
            String inputLine;
            while ((inputLine = in.readLine()) != null) {
                content.append(inputLine);
            }
            in.close();
            return content.toString();
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
        return null;
    }
}
If you add your java class as a bean in faces-config.xml you can call it using a computed field in your xpages like this
<xp:text escape="true" id="computedField1" value="#{javascript:getAgentData().getData()}"></xp:text>
You need to change your agent a little. Instead of Print "Something" you would use someobj.print "Something", which should be a manageable task using search and replace. The detailed steps have been documented in this blog entry.
Now you state that you need to dual-use the agent. That's not that hard, once you move beyond copy-and-paste programming.
When you check the class, it looks for the documentcontext. When you call the agent directly, the document context is not there. So you check whether you have the document: if not, it is a classic call and you add a print statement to the class; if it is there, you save to the context.
The code is myObject.printResult = true to print, or you set it to true by default (but then you pollute your log :-) ).
If you provide a context doc when calling the agent, then you would provide an item (e.g. DontPrint=1) from your XPage and check that one. If it is present, only save to the context; otherwise add the print statement.
Let us know how it goes.
I was lucky enough to have inherited a terribly written SharePoint project.
Apparently, the original developer was a big fan of reusable code (30% of the code is reused across 20 projects without using any libraries; guess how?).
I would often find his code calling some Common.OpenWeb method to retrieve an SPWeb object for operating on SharePoint. Most of this function's incarnations look exactly the same:
public SPWeb OpenWeb()
{
    String strSiteUrl = ConfigurationManager.AppSettings["SiteUrl"].ToString();
    SPSite site = null;
    SPWeb web = null;
    try
    {
        using (site = new SPSite(strSiteUrl))
        {
            using (web = site.OpenWeb())
            {
                return web;
            }
        }
    }
    catch (Exception ex)
    {
        LogEvent("Error occurred in OpenWeb : " + ex.Message, EventLogEntryType.Error);
    }
    return web;
}
And now I'm really worried.
How come this works in production? This method always returns a disposed object, right?
How unstable is it, exactly?
UPDATE:
This method is used in the following fashion:
oWeb = objCommon.OpenWeb();
SPList list = oWeb.Lists["List name"];
SPListItem itemToAdd = list.Items.Add();
itemToAdd["Some field"] = "Some value";
oWeb.AllowUnsafeUpdates = true;
itemToAdd.Update();
oWeb.AllowUnsafeUpdates = false;
I omitted the swallowing try-catch for brevity.
This code inserts a value into the list! This is a write operation, and I'm pretty sure the Request property is used for it. Then how can it work?
First, the short answer: that method indeed returns a disposed object. An object should not be used after being disposed, because it's no longer in a reliable state, and any further operation performed on that object should (theoretically) throw an ObjectDisposedException.
Now, after digging a little, SharePoint objects don't seem to follow that rule. Not only does SPWeb never throw ObjectDisposedException after being disposed, but it actually tests for that case in its Request property and rebuilds a valid SPRequest from its internal state if it has been disposed.
It seems that at least SPWeb was designed to be fully functional even in a disposed state. Why, I don't know. Maybe it's for accommodating client code like the one you're working on. Maybe it's some kind of complicated optimization I can't possibly understand.
That said, I'd suggest you don't rely on that behavior, because it might change in the future (even though, given Microsoft's policy on bug-for-bug backwards compatibility, it might not).
And of course, you will still leak the new SPRequest instance, which can be quite costly. Never, ever, use a disposed object, even if SharePoint lets you get away with it.
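As an illustration only (not the original project's code): a safer shape for such a helper is to let the caller control the lifetime, for example by accepting a delegate, so the SPWeb is disposed only after the work is done. A sketch, with names of my own choosing:

```csharp
// Sketch: the caller's work runs while site and web are still alive;
// both are disposed only after the callback returns.
public static void WithWeb(string siteUrl, Action<SPWeb> action)
{
    using (SPSite site = new SPSite(siteUrl))
    using (SPWeb web = site.OpenWeb())
    {
        action(web); // web is valid for the whole callback
    }
}
```

The insert from the UPDATE would then become WithWeb(url, oWeb => { /* add the list item */ }), with no disposed object ever escaping the helper.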
I am very new to SharePoint programming, as is the rest of my team. We have decided to use SmartPart as our bridge between SharePoint and our development efforts. After some effort we got it up and running.
However, when I use a simple test user control with SharePoint object model code that gets the names of files in a document library, SharePoint gives me a rather helpful "An unknown error has occurred". This code works just fine inside an aspx page. I have written another simple test user control that just executes a Response.Write() line to check whether there is a problem with executing code at all, but that one works just fine in SmartPart too.
The code goes like this:
protected void Button1_Click(object sender, EventArgs e)
{
    Microsoft.SharePoint.SPSite srv1 =
        new SPSite("http://server:port/");
    SPWeb web = srv1.OpenWeb();
    var list = web.GetFolder("http://server:port/documentLibrary");
    for (int i = 0; i < list.Files.Count; i++)
    {
        ListBox1.Items.Add(list.Files[i].Name);
    }
}
Anything we may be missing or doing wrong?
Many thanks in advance...
AFAIK, SmartPart hasn't really been needed since SharePoint 2003. Why don't you just create a regular user control and drop it in the /ControlTemplates folder? Deploy it as part of a Feature with related code, if appropriate.
Also, update your web.config file to display meaningful error messages:
Set the customErrors mode to "Off"
Enable stack traces by adding CallStack="true" to the SafeMode tag
Set the compilation debug attribute to "true"
Just a side note: you should generally wrap your SPSite and SPWeb objects in a using statement, as these hold unmanaged resources, as outlined here:
http://msdn.microsoft.com/en-us/library/aa973248.aspx
protected void Button1_Click(object sender, EventArgs e)
{
    using (Microsoft.SharePoint.SPSite srv1 = new SPSite("http://server:port/"))
    {
        using (SPWeb web = srv1.OpenWeb())
        {
            var list = web.GetFolder("http://server:port/documentLibrary");
            for (int i = 0; i < list.Files.Count; i++)
            {
                ListBox1.Items.Add(list.Files[i].Name);
            }
        }
    }
}
OK, it's solved. Thanks everybody for the information and help.
It was about the trust level; I set the trust level to "WSS_Medium" in the relevant site collection's web.config file:
<trust level="WSS_Medium" originUrl="" />
I found this solution (along with some more relevant information on the subject) in Jan Tielens' blog here.
How can I find out whether a URL is available to create a new site within a site collection, or whether it is already in use by another site, list or library?
Assuming that the relative URL "/newUrl/" is not yet in use, the following code won't actually throw an exception until you try to access any of the SPWeb's properties.
using (SPSite site = new SPSite("http://portal/"))
{
    SPWeb web = site.OpenWeb("/newUrl/"); // no exception
    string title = web.Title; // throws exception
}
Of course it would be possible to check the availability of the URL this way, but it would be more of a hack than good code.
So, does anyone have any ideas how to solve this?
Bye,
Flo
The normal answer is
if(web.Exists)
But... you might want to wrap this SPWeb in a using statement.
using (SPWeb web = site.OpenWeb("/newUrl/"))
{
    if (web.Exists)
    {
        string title = web.Title;
    }
}
if (web.Exists)
http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spweb.exists.aspx
I'm looking for articles, forum or blog posts dealing with SharePoint and thread safety? I'm quite sure there are some special aspects regarding thread safety that have to be considered when working with the SharePoint object model.
Actually, I haven't found much information about this yet.
So I'm looking forward to your answers.
Bye,
Flo
There are much worse pitfalls in the SharePoint OM than plain old thread safety. Pay particular attention to working with objects retrieved from properties. You should always keep a reference to an object while you work on it; example:
var list = web.Lists["MyList"];
list.Items[0]["Field1"] = "foo";
list.Items[0]["Field2"] = "bar";
list.Items[0].Update(); // nothing is updated!
You might expect Field1 and Field2 to be updated by the final Update() call, but nope. Each time you use the indexer, a NEW reference to the SPListItem is returned.
Correct way:
SPListItem item = list.Items[0];
item["Field1"] = "foo";
item["Field2"] = "bar";
item.Update(); // updated!
Just a start. Also google for pitfalls around the IDisposable/Dispose pattern.
-Oisin
There is one issue that I often run into: when writing your own list item event receivers, you need to be aware of the fact that some of the events fire asynchronously, e.g. ItemAdded(), which means your code could be running in multiple threads at the same time.
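To make that concrete, here is a minimal sketch of such a receiver; the class and field names are mine, not from any existing project. Since two item adds can run the receiver concurrently, any shared static state must be synchronized:

```csharp
// Sketch: ItemAdded is an asynchronous ("-ed") event, so two item
// adds can execute this receiver on different threads at once.
// Any shared (static) state must therefore be synchronized.
public class MyItemReceiver : SPItemEventReceiver
{
    private static readonly object _sync = new object();
    private static int _processedCount; // shared across threads

    public override void ItemAdded(SPItemEventProperties properties)
    {
        lock (_sync)
        {
            _processedCount++; // guarded access to shared state
        }
        // Per-item work on properties.ListItem needs no lock,
        // since each event gets its own properties instance.
    }
}
```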
So after doing some more googling, searching the web and testing, it seems as if you don't have to care about thread safety that much when using the MOSS object model, because you're always working with non-static, unique instances.
Furthermore, an exception is thrown when an object (e.g. an SPWeb) was altered and saved via the Update() method before you saved your own changes (also by calling Update()), even though you obtained your object first.
In the following example, the instruction web11.Update() will throw an exception telling you that the SPWeb was altered in the meantime (through web12).
SPSite siteCol1 = new SPSite("http://localhost");
SPWeb web11 = siteCol1.OpenWeb();
SPWeb web12 = siteCol1.OpenWeb();
web12.Title = "web12";
web12.Update();
web11.Title = "web11";
web11.Update();
So thread safety seems to be handled by the object model itself. Of course you have to handle the exceptions that might be thrown due to race conditions.
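For completeness, that conflict surfaces as an exception on Update(); in my experience it is an SPException (a save conflict), so one hedged way to handle it is to catch, reopen the web and retry:

```csharp
// Sketch: handle the save conflict from the example above.
// The exception type observed here is SPException; adjust if needed.
try
{
    web11.Title = "web11";
    web11.Update();
}
catch (SPException)
{
    web11 = siteCol1.OpenWeb(); // reload the current state
    web11.Title = "web11";
    web11.Update();             // retry once with fresh data
}
```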