I'm new to SharePoint, but I do have a background in .NET development. How is developing for SharePoint different? What exactly do SharePoint engineers program?
There's a wide range of things a developer can do on SharePoint. A short list of most common (to me) items are:
Web Parts
Application Pages
Event Receivers
Workflow
Timer Jobs
If you're not familiar with raw ASP.NET Web Parts, SharePoint Web Parts are kind of analogous to ASP.NET User Controls with some additional wrapping that lets them store and retrieve settings, be targeted for visibility to specific users, etc. These are generally the most common (that I've seen) type of SharePoint project. You can put multiple Web Parts on a page, and the user can drag them to different zones to customize the way the page looks.
Application Pages are a bit more complicated. They require you to include a number of SharePoint-specific page directives and Content Areas in order for them to be rendered correctly. The result is the ability to control the whole page render in SharePoint. This is in contrast to Web Parts, which only take up a small amount of space shared with other Web Parts on a web part page.
Event Receivers (list or item receivers) are a lightweight mechanism for attaching code either to specific list instances or to whole list types. (A list is an instance of a type; there are pre-defined types and a generic list type, and you can use content type IDs to specify your own unique list types.) Most commonly these are used when a new list item is created/edited/deleted, to send additional notifications, categorize the item, kick off an external process, etc. They're really easy to define and set up and one of the most flexible ways to listen for changes.
SharePoint Workflows are less common than the previous two, from my experience, but are still used quite heavily by larger organizations. Workflows can be synchronous (ItemUpdating) which will execute on the server currently serving the user, or asynchronous (ItemUpdated) which can be handled by any server in the SharePoint Farm when the Timer Service picks up the job. Workflows are generally used for watching forms, creating tasks, organizing new items, etc.
Timer Jobs are content-less pieces of code that are run on a schedule by the SharePoint Timer Service. They run under the OWSTIMER process (versus the w3wp IIS worker process), and there are some limitations and "gotchas" with these. They're analogous to Windows scheduled tasks.
Edit: Added Workflow information.
Edit 2: Added Event Receivers. Sorry! It's been a while since I've had to crack my knuckles over SharePoint. This trip down memory lane is...a trip.
We are just getting started with XPages and I have the following question:
We have a system composed of several Notes databases. We now want to add XPages so that the data can be edited both in the browser and in the Notes client.
Which is better:
One database contains the XPages and the code, and the data is fetched from the other databases.
Every database gets its own XPages, and they are addressed individually via a navigation.
In particular, which is better for performance?
Best practice is to store all XPage design elements for each "application" in a single NSF. This can be the same container as some of the data, or it can be a separate NSF entirely. But what you should definitely avoid is storing XPage elements in separate NSFs just because the data happens to be stored in several NSFs.
Rather, within XPage applications, the data should always be considered philosophically separate from the user interface, even if it is stored in the same NSF. This philosophy makes it easier to design modern, intuitive user interfaces for applications without constraining these design decisions simply as a result of how the back end data is structured.
The ACL of each NSF is still honored, so if you have imposed different access levels for each database, the user will still only be able to access content to which they have access based on the ACL of the NSF that contains each record, regardless of the ACL of the NSF that contains the XPage design elements.
One rather specific performance consideration is that both the application scope and the session scope are unique to the NSF that contains the XPage element a user is currently accessing. As such, if your application consists of 6 databases, for example, and you split the XPage design elements across those databases, you will be unable to cache configuration settings, or other computationally expensive queries, across all of the applications. If, conversely, all of the XPage design elements are in a single NSF, you have a single application scope. Each portion of the user interface, therefore, can access information already cached by any other portion of the interface -- spanning not only different pages within the app, but spanning users as well: if data that is retrieved for one user should be the same data returned to all users, caching it for one caches it for all.
Similarly, since the user will have a different session scope within each NSF they access, any user preferences (or behavior) that is applicable in all areas of the app would be forgotten as the user navigates to a different NSF.
Storing different XPage elements in different NSFs just because that's where the data is removes these, and other, opportunities for performance and interface optimization. It might feel simpler for those new to this type of development to segregate the design, but ultimately the end user experience is bound to suffer, potentially in ways of which they'll be consciously aware. But usually they'll be confused and frustrated and unable to pinpoint exactly why.
In short, here's the best way to determine where each XPage should be: if an end user navigating from one XPage to another would assume that they're still in the same app, then both should be in the same NSF, regardless of the location of the data each XPage accesses.
Tumblr allows users to edit the HTML and CSS of their blogs through a templating system. In fact, it even allows users to include their own external scripts in their pages. Doing this obviously opens a lot of security holes for Tumblr; however, it's obviously doing fine. What's even more interesting is how Tumblr's team managed to do all this on a LAMP stack before.
One thing that I found out through pinging different Tumblr sites is that the blog sites are dispersed through multiple servers.
Nonetheless, hacking the root of just one of those servers could compromise a ton of data. Furthermore, while some could argue that Tumblr may just be doing manual checks on each of its blogging sites, it still seems pretty risky and impractical for Tumblr's team to do so because of the amount of data that Tumblr has. Because of this, I think there are still some aspects that manual checking hasn't covered, especially in terms of how Tumblr's team filters user input before it enters their database.
My main question: How does Tumblr (or any other similar site) filter its user input and thereby, prevent hacking and exploits from happening?
What is Tumblr?
Tumblr is a microblogging service that lets its users post multimedia and short text entries on their blogs.
Formatting and styling the blog
Every blogging service lets its users edit and share content. At the same time, they also let users style their blogs depending on what type of service they are providing.
For instance, a company blog would not normally use a garden image as its background, and a shopkeeper would not show a beach image, unless they are actually at such a place or such objects are part of their work.
What Tumblr does
Well, they just keep checking the files for any problems!
As a general blogging platform, it is necessary to allow users to upload files and style their blogs, and at the same time it is the company's job to keep control of how their service is used!
So Tumblr keeps a close eye on these things. They do not allow uploads of files that could infect the system, and they are known to delete accounts if anything fishy is caught!
Tumblr allows users to upload files and multimedia that are used to style the blog. They use a separate platform to store all such files, so when you upload something, it does not get executed on their system. They serve it from the server or disk where these files are stored and then render your blog with those files included.
What I would do
I would do the same: first upload and save the files in a separate place where, even if they are infected with a virus and get executed, they cannot harm my system. Not all users upload viruses, but when one does, you should use an antivirus system to detect and remove the virus and block that account.
I would let users use my service; it is the user's job to upload content, and it is my job to prevent hacking.
All this stuff (HTML/CSS/external scripts) does not run on Tumblr's machines, so to them it does not matter. You are responsible for the stuff that runs on your own PC. As for JavaScript, it lives in a sandbox in the browser.
I like how Facebook releases features incrementally and not all at once to their entire user base. I get that this could be replicated with a bunch of if statements smattered all throughout your code, but there has to be a better way to do this. Perhaps that really is all they are doing, but that seems rather inelegant. Does anyone know if there is an industry-standard architecture that can incrementally release features to portions of a user base?
On that same note, I have a feeling that all of their employees see an entirely different completely beta view of the site. So it seems that they are able to deem certain portions of their website as beta and others as production and have some sort of access control list to guide what people see? That seems like it would be slow.
Thanks!
Facebook has a lot of servers, so they can apply new features to only some of them. They also have servers where they test new features before committing them to production.
A more elegant solution than scattered if statements is feature flags, using systems like Gargoyle (in Python).
Using a system like this you could do something like:
if feature_flag.is_active(MY_FEATURE_NAME, request, user, other_key_objects):
    # do some stuff
In a web interface you would be able to identify users, requests, or any other key object your system has and deliver your feature to just those. In fact, via requests you could do things like direct X% of traffic to the new feature, and thus run A/B tests and gather analytics.
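That percentage-based rollout can be approximated without any framework at all. The following is a hand-rolled sketch, not Gargoyle's actual API; the feature names and hashing scheme are made up, but hashing the (feature, user) pair keeps each user in a stable bucket while you ramp the percentage up.
import hashlib

ROLLOUT = {"new_dashboard": 10}  # feature name -> percent of users enabled

def is_active(feature: str, user_id: str) -> bool:
    percent = ROLLOUT.get(feature, 0)
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in 0..99 per (feature, user)
    return bucket < percent

if is_active("new_dashboard", user_id="42"):
    pass  # render the new dashboard for this user
else:
    pass  # fall back to the existing page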
An approach to this is to have a tiered architecture where the authentication tier hands off to the product tier.
A user enters the product URL and that is resolved to direct them to a cluster of authentication servers. These servers handle authentication and then hand off the session to a cluster of product servers.
Using this approach you can:
Separate out your product servers into 'zones' that run different versions of your application
Run logic on your authentication servers that decides which zone to route the session to
As an example, you could have Zone A running the latest production code and Zone B running beta code. At the point of login the authentication server sends every user with a user name starting with a-m to Zone A and n-z to Zone B. That way roughly half the users are running on the beta product.
Depending on the information you have available at the point of login you could even do something more sophisticated than this. For example you could target a particular user demographic (e.g. age ranges, gender, location, etc).
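For what it's worth, the a-m / n-z split described above can be a few lines of code at the authentication tier. This is only a sketch; the zone URLs and function names are invented placeholders.
ZONES = {
    "A": "https://prod.example.com",   # latest production code
    "B": "https://beta.example.com",   # beta code
}

def zone_for(username: str) -> str:
    first = username[:1].lower()
    return "B" if first >= "n" else "A"  # a-m -> Zone A, n-z -> Zone B

def redirect_after_login(username: str) -> str:
    return ZONES[zone_for(username)]

print(redirect_after_login("alice"))  # Zone A
print(redirect_after_login("oscar"))  # Zone B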
There's a project which we're doing for the government that necessitates the use of a workflow system. I'm looking for advice on what software systems we can use (either commercial or open source/freeware) with appropriate customizations.
Steps:
0. We monitor a certain site for "notifications". Whenever a notification is posted, this is what happens for each notification.
1. A team of 2-3 people (our employees) has to examine the document and decide whether we need to act on it. One person examines it; the other reviews the first person's decision/action.
2. If we need to act on it, one of them needs to prepare a sort of summary document for outside experts. Again, another person (not the writer) needs to review it.
3. This document needs to be sent to outside experts (emailed in most cases, but also via postal mail). A database of experts and their specialities needs to be maintained.
4. A system of keeping track of which document went to whom and when needs to be maintained.
5. Responses will be received from the outside-experts (electronically and postal). The system needs to keep track of from whom we did NOT receive responses, so that we can remind them.
6. Once all responses have been collated, the company employees need to prepare a report which needs to be approved by a supervisor before it can be sent out to the govt.
I understand that a number of tools would be required and/or extensive customizations. That's fine - looking for inputs on all these aspects.
Steve!
If you have already defined a fixed workflow process, you can develop the workflow with Windows Workflow Foundation, or hire a developer to do it for you.
If you prefer a customizable workflow product, K2 (http://www.k2.com/) is a good option.
Are you using SharePoint? If so, have a look at BlackPoint and Nintex.
Both will give you lots of workflow options based on SharePoint. If I interpret your requirements correctly these packages should be able to implement them all without coding.
I would advocate Nintex Workflow since I have positive experience working with it. Installation and initial configuration are quite easy, yet the feature set is impressive. It is also built in a way that lets end users build their own flows; it's fairly easy to do. You can also build more complex flows, create custom actions, and use the web services SDK to access the activities/perform actions from outside of SharePoint - I used it from InfoPath and Silverlight forms.
A good commercial option would be PNMsoft's Sequence.
Regarding Agile development, what are the best practices for testing security per release?
If it is a monthly release, are there shops doing pen-tests every month?
What's your application domain? It depends.
Since you used the word "Agile", I'm guessing it's a web app. I have a nice easy answer for you.
Go buy a copy of Burp Suite (it's the #1 Google result for "burp" --- a sure endorsement!); it'll cost you 99EU, or ~$180USD, or $98 Obama Dollars if you wait until November.
Burp works as a web proxy. You browse through your web app using Firefox or IE or whatever, and it collects all the hits you generate. These hits get fed to a feature called "Intruder", which is a web fuzzer. Intruder will figure out all the parameters you provide to each one of your query handlers. It will then try crazy values for each parameter, including SQL, filesystem, and HTML metacharacters. On a typical complex form post, this is going to generate about 1500 hits, which you'll look through to identify scary --- or, more importantly in an Agile context, new --- error responses.
Fuzzing every query handler in your web app at each release iteration is the #1 thing you can do to improve application security without instituting a formal "SDLC" and adding headcount. Beyond that, review your code for the major web app security hot spots:
Use only parameterized, prepared SQL statements; don't ever simply concatenate strings and feed them to your database handle (a minimal sketch follows this list).
Filter all inputs to a white list of known good characters (alnum, basic punctuation), and, more importantly, output filter data from your query results to "neutralize" HTML metacharacters to HTML entities (quot, lt, gt, etc).
Use long random hard-to-guess identifiers anywhere you're currently using simple integer row IDs in query parameters, and make sure user X can't see user Y's data just by guessing those identifiers.
Test every query handler in your application to ensure that they function only when a valid, logged-on session cookie is presented.
Turn on the XSRF protection in your web stack, which will generate hidden form token parameters on all your rendered forms, to prevent attackers from creating malicious links that will submit forms for unsuspecting users.
Use bcrypt --- and nothing else --- to store hashed passwords.
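A minimal sketch of three items from that list: parameterized statements, output escaping, and long random identifiers. SQLite and the standard library stand in for whatever stack you actually use; the table and function names are made up.
import html
import secrets
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id TEXT PRIMARY KEY, body TEXT)")

def add_note(body: str) -> str:
    # Long, hard-to-guess identifier instead of a guessable integer row id.
    note_id = secrets.token_urlsafe(16)
    # Parameterized statement: never build SQL by concatenating user input.
    conn.execute("INSERT INTO notes (id, body) VALUES (?, ?)", (note_id, body))
    return note_id

def render_note(note_id: str) -> str:
    row = conn.execute("SELECT body FROM notes WHERE id = ?", (note_id,)).fetchone()
    # Output filtering: neutralize HTML metacharacters into entities before rendering.
    return "<p>%s</p>" % html.escape(row[0])

nid = add_note("<script>alert('xss')</script>")
print(render_note(nid))  # the payload renders as harmless entities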
I'm no expert on Agile development, but I would imagine that integrating some basic automated pen-test software into your build cycle would be a good start. I have seen several software packages out there that will do basic testing and are well suited for automation.
I'm not a security expert, but I think the most important thing you should be aware of, before testing security, is what you are trying to protect. Only if you know what you are trying to protect can you do a proper analysis of your security measures, and only then can you start testing those implemented measures.
Very abstract, I know. However, I think it should be the first step of every security audit.
Unit testing, defensive programming, and lots of logs
Unit testing
Make sure you unit test as early as possible (e.g. that the password is encrypted before sending, that the SSL tunnel is working, etc.). This helps prevent your programmers from accidentally making the program insecure.
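For example, a couple of tests along these lines can catch a regression where a password suddenly ends up stored or sent in plaintext. This is only a sketch: hash_password is a hypothetical helper, and the bcrypt package is just one possible choice.
import unittest

import bcrypt  # third-party package: pip install bcrypt

# Hypothetical helper under test -- your application would define its own.
def hash_password(plaintext: str) -> bytes:
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

class PasswordHandlingTests(unittest.TestCase):
    def test_password_is_never_stored_in_plaintext(self):
        hashed = hash_password("s3cret")
        self.assertNotIn(b"s3cret", hashed)

    def test_hash_verifies_against_original_password(self):
        hashed = hash_password("s3cret")
        self.assertTrue(bcrypt.checkpw(b"s3cret", hashed))

if __name__ == "__main__":
    unittest.main()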
Defensive Programming
I personally call this Paranoid Programming, but Wikipedia is never wrong (sarcasm). Basically, you add checks to your functions that validate all the inputs (a minimal sketch follows the list below):
is the user's cookie valid?
are they still logged in?
are the function's parameters protected against SQL injection? (Even though you know the inputs are generated by your own functions, test them anyway.)
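A sketch of what those checks can look like in practice; the session store, handler, and table are hypothetical stand-ins.
import sqlite3

SESSIONS = {"abc123": {"user": "alice", "logged_in": True}}  # fake session store

def get_orders(session_cookie: str, customer_id: str, db: sqlite3.Connection):
    # Is the user's cookie valid?
    session = SESSIONS.get(session_cookie)
    if session is None:
        raise PermissionError("invalid session cookie")
    # Is the user still logged in?
    if not session.get("logged_in"):
        raise PermissionError("session expired")
    # Parameterized query, even though customer_id "should" come from our own code.
    return db.execute(
        "SELECT * FROM orders WHERE customer_id = ?", (customer_id,)
    ).fetchall()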
Logging
Log everything like crazy. It's easier to remove logs than to add them. A user has logged in? Log it. A user hit a 404? Log it. The admin edited/deleted a post? Log it. Someone was able to access a restricted page? Log it.
Don't be surprised if your log file reaches 15+ MB during your development phase. During beta, you can decide which logs to remove. If you want, you can add a flag to decide when a certain event is logged.
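A sketch of that idea using nothing but the standard logging module; the flag name and the logged events are illustrative.
import logging

VERBOSE_AUDIT_LOG = True  # flip off once you move past the development/beta phase

logging.basicConfig(
    filename="app.log",
    level=logging.DEBUG if VERBOSE_AUDIT_LOG else logging.WARNING,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("audit")

log.info("user %s logged in", "alice")
log.warning("404 for %s requested by %s", "/secret", "alice")
log.info("admin %s deleted post %s", "bob", 42)
log.error("unauthorized access to %s by %s", "/admin", "mallory")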