I'm creating a SharePoint application and want to do things the right way as often as possible.
I'm deploying a series of Lists in a Site Definition, and I want to lock down Data Access to a series of Get() methods to maintain conventions.
Some of my lists have SecurityBits="22" set in the list definition, because I want List Item entries to be modified only in the UI.
I want to avoid abusing SPSecurity.RunWithElevatedPrivileges. I also want to sidestep the limitation of SPSecurity.RunWithElevatedPrivileges where you cannot return values from the delegate.
This seems like a good way to enforce that. If you're calling a list to get list items with normal security, you can call var PostList = CoreLists.Posts(). If you need to call that same list with elevated permissions, you can call var PostList = CoreLists.SystemAccount.Posts().
Is this a good way to do this?
public static class CoreLists
{
    public static SPList Posts()
    {
        return SPContext.Current.Web.GetList(SPContext.Current.Web.ServerRelativeUrl + "/lists/CommunityPost");
    }

    public static class SystemAccount
    {
        public static SPList Posts()
        {
            using (var elevatedSite = new SPSite(SPContext.Current.Site.ID, SPContext.Current.Site.SystemAccount.UserToken))
            using (var web = elevatedSite.OpenWeb())
            {
                return web.GetList(web.ServerRelativeUrl + "/lists/CommunityPost");
            }
        }
    }
}
I think the first class looks fine as long as your security context makes sense (i.e. the user with the lowest security can access the list items through these methods). I would be curious how the using statements in the second one affect code that calls it.
The thing with RunWithElevatedPrivileges is that when you have widely used data access code, you often have to elevate privileges if you don't want to give users access to the list directly in the SharePoint UI, but still want code executing in their context to access the list items for user controls and the like.
A couple things to keep in mind:
1. Always be cognizant of memory leaks in SharePoint list access code. In your classes you handled both situations well as far as disposing of the underlying SPRequest goes.
2. You almost never want to use SPList.Items in your code. If you surface lists this way and you don't control the code calling the list, you may run into performance issues with larger lists, because the .Items property, unlike a specific query, loads every item in the list with every possible field. Prefer SPList.GetItems with an SPQuery, as in the sketch below.
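To make that concrete, here is a minimal sketch of querying only what you need instead of enumerating SPList.Items; the field names and row limit are illustrative, not taken from your list definitions:

SPList posts = CoreLists.Posts();

// Only fetch matching items, and only the fields we care about.
SPQuery query = new SPQuery
{
    Query = "<Where><Eq><FieldRef Name='Title'/><Value Type='Text'>Welcome</Value></Eq></Where>",
    ViewFields = "<FieldRef Name='Title'/><FieldRef Name='Created'/>",
    RowLimit = 100
};

SPListItemCollection items = posts.GetItems(query);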
hope this helps
To avoid repeating certain validations, for example who can see a button at a certain status of a document in the workflow, I'm using sessionScope variables to store the user roles and an applicationScope variable to store the statuses related to each area.
I was evaluating whether it would be better, from a performance and design point of view, to implement a managed bean that returns the user roles and the possible statuses of each participating workflow area. Would that actually be the better structure? What do you think? I don't have much experience in Java. How would I construct this in Java: several methods, one for the roles and another for the set of statuses associated with the area the method is named after? Should these methods return arrays, or is there a better return structure?
Thanks a lot!
My best suggestion is to adopt the page controller methodology. Then it's more like true MVC. This has been talked about on the NotesIn9 screencast many times, but basically you have a Java object that's bound to your XPage. In effect it's a viewScoped bean that holds all your page logic. Then you can have methods like isGroupMember(), hasRole(), etc. and calculate them on pageInit. There's little need to hold onto that in sessionScope in my opinion. So for example I have this in my page controller:
public boolean isGroupMember(String groupName) {
    return JSFUtil.getXSPContext().getUser().getGroups().contains(groupName);
}
So that's available to each page. BUT I don't need to copy that snippet onto every page controller. In Java you can have your page controllers extend a more generic class, so I have a "base.pageController" class and all the specific page controllers extend it. This isGroupMember() code goes into the base and is then available to every XPage. Doing it this way lets you keep generic functions like this in one place and hold more specific functions that are only for the individual page.
You can also have a hasRole() function etc...
Recommend you check out this video : http://www.notesin9.com/2016/08/25/notesin9-196-no-dependency-page-controllers/
Also for a question like this, I recommend you just use the xpages tag. Adding others like javabeans can bring people in who know nothing about XPages and XPages is unique enough of a beast that outsiders can cause some confusion on occasion.
I have two similar problems that I suspect have a common solution.
1) I'd like to create custom Parts that are Attachable, but only to specific content types, only Taxonomies for example. It would be really cool if that were possible out of the box through migrations, e.g. something like .Attachable(cfg => cfg.ToType("Taxonomy")), but I don't think it is.
Currently, to prevent my custom Part from being used on content that it's not intended for, I just write checks in the driver methods:
protected override DriverResult Editor(CustomPart part, dynamic shapeHelper)
{
    if (part.ContentItem.ContentType != "Taxonomy") return null;

    return ContentShape("Parts_Custom_Edit", ...
}
Is this a good way to go about it? Would the Handler be better fit for this kind of logic?
2) Similarly, I'd like to be able to conditionally attach different Parts to different individual Content Items. For example, I would like only first level parent Terms in a Taxonomy to have some fields while child Terms have some others.
The best way I can currently come up with to handle this is to just create one Part that holds all fields and run similar checks to the one above in its Driver methods to return different models depending on its container. Then in the template View I check which fields to render:
@if (Model.ThisField != null) {
    <div>@Html.EditorFor(m => m.ThisField)</div>
}
else {
    <div>@Html.EditorFor(m => m.ThatField)</div>
}
Ideally I'd like to create one attachable Part that's capable of adding several non-attachable secondary Parts to existing Content Items when it is attached to a Type and to new Content Items when they are created or updated. Is there a painless way to do this? I think 'Welding' might be what I need but I haven't been able to find any documentation or tutorials that can explain Welding to me like I'm five.
I think you need to implement a dynamic welding approach. I had to solve a similar issue, it is posted here. Hope this helps.
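As a starting point for the first question, welding a part onto a single content type can be done from a content handler. This is only a sketch, assuming your part is called CustomPart and the target type is Taxonomy, rather than the exact approach from the linked post:

using Orchard.ContentManagement.Handlers;

public class CustomPartHandler : ContentHandler
{
    public CustomPartHandler()
    {
        // Welds CustomPart onto the Taxonomy content type only, so the part
        // never shows up on other types even though it isn't marked Attachable.
        Filters.Add(new ActivatingFilter<CustomPart>("Taxonomy"));
    }
}

Conditionally welding different parts onto individual items (your second question) is the dynamic case the linked post deals with.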
I'm developing an application with a Domain-Driven Design approach. In a special case I have to retrieve the list of value objects of an aggregate and present them. To do that I've created a read-only repository like this:
public interface IBlogTagReadOnlyRepository : IReadOnlyRepository<BlogTag, string>
{
    IEnumerable<BlogTag> GetAllBlogTagsQuery(string tagName);
}
BlogTag is a value object in the Blog aggregate. It works fine for now, but when I think about this way of handling things and about the future of the project, my concerns grow! It's not a good idea to create a separate read-only repository for every value object involved in cases like this, is it?
Does anybody know a better solution?
You should not keep value objects in their own repository since only aggregate roots belong there. Instead you should review your domain model carefully.
If you need to keep track of value objects spanning multiple aggregates, then maybe they belong to another aggregate (e.g. a tag cloud) that could even serve as sort of a factory for the tags.
This doesn't mean you don't need a BlogTag value object in your Blog aggregate. A value object in one aggregate could be an entity in another or even an aggregate root by itself.
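To illustrate the idea, a minimal sketch of such an aggregate might look like this; the TagCloud type and its members are purely hypothetical, and it assumes BlogTag can be constructed from a name:

using System.Collections.Generic;

public class TagCloud
{
    private readonly HashSet<string> _tagNames = new HashSet<string>();

    // Acts as a factory for tags while keeping track of every distinct tag in use.
    public BlogTag ProvideTag(string tagName)
    {
        _tagNames.Add(tagName);
        return new BlogTag(tagName); // blogs still embed BlogTag as a value object
    }

    public IEnumerable<string> AllTagNames()
    {
        return _tagNames;
    }
}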
Maybe you should take a look at this question. It addresses a similar problem.
I think you just need a query service, as this method serves the user interface; it's just for presentation (reporting). Do something like this:
public IEnumerable<BlogTagViewModel> GetDistinctListOfBlogTagsForPublishedPosts()
{
    var tags = new List<BlogTagViewModel>();

    // Go to the database and run the query
    // Transform the results into a collection of BlogTagViewModel

    return tags;
}
This code would be at the application layer level not the domain layer.
And notice the language I use in the method name, it makes it a bit more explicit and tells people using the query exactly what the method does (if this is your intent - I am guessing a little, but hopefully you get what I mean).
Cheers
Scott
I'm trying to figure out how to accomplish the following:
User can have many Websites
What I need to do before adding a new website to a user is to take the website URL and pass it to a method that checks whether the website already exists in the database (i.e. another user has the same website associated) or whether a new record should be created. <= The reason for this is to decide whether to create a new thumbnail or use an existing one.
The problem is that repositories should be per aggregate root, which means I can't do what I've explained above, can I? I could first get ALL users in the database and then loop over them with an if statement that checks whether the user has a website record with the same URL, but that would be an endlessly slow process.
Whatever repository approach you're using, you should be able to specify criteria in some fashion. Therefore, search for a user associated with the website in question - if the search returns no users, the website is not in use.
For example, you might add a method with the following signature (or you'd pass a query object as described in this article):
User GetUser(string hasUrl);
That method should generate SQL more or less like this:
select u.UserId
from User u
join Website w
  on w.UserId = u.UserId
where w.Url = @url
This should be nearly as efficient as querying the Website table directly; there's no need to load all the users and website records into memory. Let your relational database do the heavy lifting and let your repository implementation (or object-relational mapper) handle the translation.
I think there is a fundamental problem with your model. Websites are part of the User aggregate, if I understand correctly. That means a website instance does not have global scope; it is meaningful only in the context of belonging to a user.
But now when a user wants to add a new website, you first want to check whether the website already exists in the database before you create a new one. That means websites in fact do have global scope. Otherwise, any time a user requested a new website, you would simply create a new one for that specific user, meaningful only in that user's scope. Here you have websites that are shared, and therefore meaningful in the scope of many users, and therefore not part of the User aggregate.
Fix your model and you will fix your query difficulties.
One strategy is to implement a service that can verify the constraint.
public interface IWebsiteUniquenessValidator
{
    bool IsWebsiteUnique(string websiteUrl);
}
You will then have to implement it; how you do that will depend on factors I don't know, but I suggest not worrying about going through the domain. Make it simple, it's just a query (* - I'll add to this at the bottom).
public class WebsiteUniquenessValidator : IWebsiteUniquenessValidator
{
    //.....
}
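For instance, a bare-bones implementation could go straight to the database with plain ADO.NET. This is only a sketch; the table name, column name and connection-string handling are assumptions for illustration, not part of the original answer:

using System.Data.SqlClient;

public class SqlWebsiteUniquenessValidator : IWebsiteUniquenessValidator
{
    private readonly string _connectionString;

    public SqlWebsiteUniquenessValidator(string connectionString)
    {
        _connectionString = connectionString;
    }

    public bool IsWebsiteUnique(string websiteUrl)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Website WHERE Url = @url", connection))
        {
            // No domain objects involved - it's just a query.
            command.Parameters.AddWithValue("@url", websiteUrl);
            connection.Open();
            return (int)command.ExecuteScalar() == 0;
        }
    }
}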
Then, "inject" it into the method where it is needed. I say "inject" because we will provide it to the domain object from outside the domain, but .. we will do so with a method parameter rather than a constructor parameter (in order to avoid requiring our entities to be instantiated by our IoC container).
public class User
{
    public void AddWebsite(string websiteUrl, IWebsiteUniquenessValidator uniquenessValidator)
    {
        if (!uniquenessValidator.IsWebsiteUnique(websiteUrl))
        {
            throw new ValidationException(...);
        }

        //....
    }
}
Whatever the consumer of your User and its Repository is - whether that's a Service class or a CommandHandler - it can provide that uniqueness validator dependency. This consumer should already be wired up through IoC, since it will be consuming the UserRepository:
public class UserService
{
    private readonly IUserRepository _repo;
    private readonly IWebsiteUniquenessValidator _validator;

    public UserService(IUserRepository repo, IWebsiteUniquenessValidator validator)
    {
        _repo = repo;
        _validator = validator;
    }

    public Result AddWebsiteToUser(Guid userId, string websiteUrl)
    {
        try
        {
            var user = _repo.Get(userId);
            user.AddWebsite(websiteUrl, _validator);
        }
        catch (AggregateNotFoundException ex)
        {
            //....
        }
        catch (ValidationException ex)
        {
            //....
        }
    }
}
*I mentioned making the validation simple and avoiding the Domain.
We build Domains to encapsulate the often complex behavior that is involved with modifying data.
What experience shows is that the requirements around changing data are very different from those around querying data.
This seems like a pain point you are experiencing because you are trying to force a read to go through a write system.
It is possible to separate the reading of data from the Domain, from the write side, in order to alleviate these pain points.
CQRS is the name given to this technique. I'll just say that a whole bunch of lightbulbs went click once I viewed DDD in the context of CQRS. I highly recommend trying to understand the concepts of CQRS.
I want to know what's the best method of instantiating SPSite and SPWeb objects, as there are a number of ways to do this. Some of the ways I know:
1.
SPSite mySite = SPControl.GetContextSite(Context);
SPWeb myWeb = SPControl.GetContextWeb(Context);
// Why would we use the second method when, with this one, there is no need to hard-code the URL and no need to dispose, as recommended by Microsoft?
2.
SPSite mySite = new SPSite("http://abc");
SPWeb myWeb = mySite.RootWeb;
// Dispose
mySite.Dispose();
myWeb.Dispose();
// or a different way of disposing, by wrapping them in using() blocks
3. Similar to the first:
SPSite mySite = SPContext.Current.Site;
SPWeb myWeb = SPContext.Current.Web;
Let me know if there is any other approach, or which of these should be preferred for instantiating these objects.
Thanks,
You should do something like this:
using (SPSite oSPSite = new SPSite("http://server"))
{
    using (SPWeb oSPWeb = oSPSite.OpenWeb())
    {
        // do stuff
    }
}
You should also take a look at the SharePoint Dispose Checker Tool, as it can inspect your code and point out where you're missing best practices.
EDIT: Yes, you can use the Context (and that's the way I always do it), but it shouldn't be used in some scenarios, like inside SPSecurity.RunWithElevatedPrivileges. So I recommend:
Method 1 for normal operations,
Method 2 for RunWithElevatedPrivileges, and
Method 3 should not be used, as it will probably mess up your request if disposed.
Basically, creating a new SPSite object is "expensive" in terms of the memory it requires. This is why you have to Dispose() them whenever you can - to free up the resources you have taken.
So, whenever such a method is available, you should call methods that use the "singletons" built into SharePoint. For instance, in your 3rd example, you call SPContext.Current.Web. Internally (you can see it if you load the code in Reflector) it stores a reference to the SPWeb object and returns the same object every time you call it. That means different web controls on the same page use one single SPSite object and one SPWeb object. Your second example, however, creates a new SPSite object, and that costs you 2 MB of memory (information taken from Robert Lamb's article).
In my opinion, the 1st and the 3rd method are equivalent. One of the methods calls the other one internally (I don't have access to microsoft.sharepoint.dll at the moment, so I cannot verify).
The 2nd example is different.
There is no one best way; it depends on what you're doing. If you're writing code where you know you have access to a current/implicit context, such as a web part, option #1 is preferable. This "piggybacks" on the current context, is faster, and saves resources. Rubens Farias' post offers some additional details regarding limitations.
Sometimes you don't have a current/implicit context such as in a command line utility. Sometimes you want to access objects outside of the current context such as in another web app. In these cases you are left with option #2 which spins up its own context.
I tend to view option #3 as a redundant and less expressive version of option #1. Someone else may be able to offer a compelling case for its use, but I have not run into it.
Both approaches (current vs. explicit context) work well and should be in your toolbox. The key is knowing why and how to employ one approach vs. another in a given situation.
Methods 1 and 3 are equivalent. In fact, SPContext (method 3) uses method 1 internally.
I prefer to use SPContext.Current because it gives a nice consistency when you also want to use SPContext.Current.List and so on, which isn't available from SPControl.
Method 2 is for when you're not running your code inside the site in question, e.g. if you're creating a console/WPF app or an extension to stsadm.
If you need to run with elevated privileges, use the variant of method 2 that takes a Guid and an SPUserToken as parameters, sketched below.
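A minimal sketch of that variant, assuming the code runs somewhere SPContext.Current is available to supply the site ID and the system account token:

SPUserToken systemToken = SPContext.Current.Site.SystemAccount.UserToken;

using (SPSite elevatedSite = new SPSite(SPContext.Current.Site.ID, systemToken))
using (SPWeb elevatedWeb = elevatedSite.OpenWeb())
{
    // Work against elevatedWeb here; both objects are disposed when the block ends.
}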
To sum it all up, my recommendation is: use method 3 if you can, and method 2 when you need to.