XPages across multiple databases [closed]

We are just getting started with XPages and I have the following question:
We have a system composed of several Notes databases. We now want to add XPages so that the data can be edited both in the browser and in the Notes client.
Which approach is better:
One database contains the XPages and the code, and the data is fetched from the other databases.
Every database gets its own XPages and they are addressed individually via a navigation.
In particular, which is better for performance?

Best practice is to store all XPage design elements for each "application" in a single NSF. This can be the same container as some of the data, or it can be a separate NSF entirely. But what you should definitely avoid is storing XPage elements in separate NSFs just because the data happens to be stored in several NSFs.
Rather, within XPage applications, the data should always be considered philosophically separate from the user interface, even if it is stored in the same NSF. This philosophy makes it easier to design modern, intuitive user interfaces for applications without constraining these design decisions simply as a result of how the back end data is structured.
The ACL of each NSF is still honored, so if you have imposed different access levels for each database, the user will still only be able to access content to which they have access based on the ACL of the NSF that contains each record, regardless of the ACL of the NSF that contains the XPage design elements.
One rather specific performance consideration is that both the application scope and the session scope are unique to the NSF that contains the XPage element a user is currently accessing. As such, if your application consists of 6 databases, for example, and you split the XPage design elements across those databases, you will be unable to cache configuration settings, or other computationally expensive queries, across all of the applications. If, conversely, all of the XPage design elements are in a single NSF, you have a single application scope. Each portion of the user interface, therefore, can access information already cached by any other portion of the interface -- spanning not only different pages within the app, but spanning users as well: if data that is retrieved for one user should be the same data returned to all users, caching it for one caches it for all.
Similarly, since the user will have a different session scope within each NSF they access, any user preferences (or behavior) that is applicable in all areas of the app would be forgotten as the user navigates to a different NSF.
Storing different XPage elements in different NSFs just because that's where the data happens to live removes these, and other, opportunities for performance and interface optimization. It might feel simpler for those new to this type of development to segregate the design, but ultimately the end user experience is bound to suffer, sometimes in ways users will consciously notice; more often they'll simply be confused and frustrated without being able to pinpoint exactly why.
In short, here's the best way to determine where each XPage should be: if an end user navigating from one XPage to another would assume that they're still in the same app, then both should be in the same NSF, regardless of the location of the data each XPage accesses.


When to write user input to the database?

Newer developer here. I'm creating a Nodejs application with MongoDB. When do you write user inputs to the database? Is it immediately when they want to perform a CRUD action? Or do you wait until they end their session to update their changes (showing them a "fake" updated view during the meantime)? I would think writing to the database every time would be less than ideal, but I also wouldn't want to make the user think their changes were saved to the database, and then some error occurs where it didn't actually happen. How's this handled in the real world?
The user inputs should be written to the database as soon as the user wants to perform the CRUD operations.
If they are not, and you wait for the user to terminate their session, other parts of the application may try to read or change data that was supposed to have been updated. Or you may want to take certain actions in your application based on the current user data, but because your database still reflects older data, your application may behave incorrectly.
One may argue that you can maintain the current state in your application, but for backend code the database should always be your single source of truth.
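As a minimal sketch of that approach, assuming a Node.js app using the official MongoDB driver (the connection string, database, collection, and field names below are illustrative assumptions, not part of the answer above):

```typescript
// Persist each edit as soon as the user submits it, so the database stays the
// single source of truth; report success based on the acknowledged result.
import { MongoClient, ObjectId } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI ?? "mongodb://localhost:27017");

async function saveProfileEdit(userId: string, displayName: string): Promise<void> {
  await client.connect(); // resolves immediately if already connected
  const users = client.db("app").collection("users");

  const result = await users.updateOne(
    { _id: new ObjectId(userId) },
    { $set: { displayName, updatedAt: new Date() } },
  );
  if (result.matchedCount === 0) {
    // Tell the user the save failed instead of showing a "fake" updated view.
    throw new Error("user not found, nothing was saved");
  }
}
```

Whether you also keep an in-memory or client-side copy for responsiveness is a separate concern; the write itself should still happen, and be confirmed, at the time of the user's action.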
This is what's known in the "real world" (as you referred to) as a design decision. It's not something for which there's anything even remotely resembling a rule-of-thumb or a hard-and-fast rule.
Instead, it's important to consider all possible factors relating to this design prior to committing to it:
User expectations - will the users of this application expect that their input is stored immediately, or only when they click the "Save" button? Do they ever expect their input to be discarded?
Data retention - are there requirements to retain user input prior to its formal submission?
Infrastructure - can the underlying infrastructure handle the increased workload? When this application is scaled, will the infrastructure demands exceed capacity?
Cost/benefit - will the addition of this feature trigger development/testing times that exceed acceptable levels for the benefit the feature provides?
These are just some of the considerations you might have. I'm sure with additional time most people could come up with at least ten more.

Is it possible to prevent a man-in-the-browser attack at the server with a hardware device? [closed]

Recently I found a hardware device that can prevent bot attacks by changing HTML DOM elements on the fly. The details are mentioned here.
The HTML input elements' id and name attributes, and also the form element's action, are replaced with random strings before the page is sent to the client. After the client submits, the hardware device replaces those values with the originals. So the server code remains unchanged, and bots cannot rely on fixed input names and ids.
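A minimal sketch of that renaming idea (my own illustration, not the vendor's actual implementation; the session handling and field names are assumptions):

```typescript
// Swap real field names for per-response random tokens, then translate the
// submitted form back so the unmodified server code sees the original names.
import { randomBytes } from "node:crypto";

// Per-session mapping of random token -> real field name.
const fieldMaps = new Map<string, Map<string, string>>();

function rewriteForm(sessionId: string, realFields: string[], html: string): string {
  const map = new Map<string, string>();
  let rewritten = html;
  for (const field of realFields) {
    const token = "f_" + randomBytes(8).toString("hex");
    map.set(token, field);
    rewritten = rewritten.replaceAll(`name="${field}"`, `name="${token}"`);
  }
  fieldMaps.set(sessionId, map);
  return rewritten;
}

function translateSubmission(sessionId: string, submitted: Record<string, string>) {
  const map = fieldMaps.get(sessionId) ?? new Map<string, string>();
  const original: Record<string, string> = {};
  for (const [token, value] of Object.entries(submitted)) {
    const realName = map.get(token);
    if (realName) original[realName] = value; // unknown or stale tokens are dropped
  }
  return original;
}
```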
That is the basic idea, but they have also claimed that this product can defeat the man-in-the-browser attack.
http://techxplore.com/news/2014-01-world-botwall.html :
Shape Security claims that the added code to a web site won't cause any noticeable delays to the user interface (or how it appears) and that it works against other types of attacks as well, such as account takeover, and man-in-the-browser. They note that their approach works because it deflects attacks in real time whereas code for botnets is changed only when it installs (to change its signature).
Theoretically, is it possible that someone can prevent the man-in-the-browser attack at the server?
Theoretically, is it possible that someone can prevent the man-in-the-browser attack at the server?
Nope. Clearly the compromised client can do anything a real user can.
Making your pages more resistant to automation is potentially an arms race of updates and countermeasures. Obfuscation like this can at best make it annoying enough to automate your site that it's not worth it to the attacker—that is, you try to make yourself no longer the ‘low-hanging fruit’.
They note that their approach works because it deflects attacks in real time whereas code for botnets is changed only when it installs (to change its signature).
This seems pretty meaningless. Bots naturally can update their own code. Indeed banking trojans commonly update themselves to work around changes to account login pages. Unless the service includes live updates pushed out to the filter boxes to work around these updates, you still don't win.
(Such an Automation Arms Race As A Service would be an interesting proposition. However, I would be worried about new obfuscation features breaking your applications. For example, imagine what would happen with the noddy form-field-renaming example on the linked site if your own client-side scripts relied on those names. Or indeed, if your whole site were a client-side Single Page App, this would have no effect.)

What does a SharePoint engineer develop? [closed]

I'm new to SharePoint but I do have a background in .NET development. How is developing for SharePoint different? What does a SharePoint engineer program, exactly?
There's a wide range of things a developer can do on SharePoint. A short list of the most common (to me) items:
Web Parts
Application Pages
Event Receivers
Workflow
Timer Jobs
If you're not familiar with the raw ASP.NET Web Parts, SharePoint Web Parts are kind of analogous to ASP.NET User Controls with some additional wrapping that lets them store and retrieve settings, be targeted for visibility for users, etc. These are generally the most common (that I've seen) project for SharePoint. You can put multiple Web Parts on a page and the user can drag them to different zones to customize the way the page looks.
Application Pages are a bit more complicated. They require you to include a number of SharePoint-specific page directives and Content Areas in order for them to be rendered correctly. The result is the ability to control the (whole?) page render in SharePoint. This is in contrast to Web Parts, which only take up a small amount of space shared with other web parts on a web part page.
Event Receivers (List or Item receivers) are a lightweight mechanism to attach either to specific list instances or to whole list types. (A list is an instance of a type. There are pre-defined ones and a generic list type and you can use content type ids to specify your own unique list types.) Most commonly these are used when a new List Item is created/edited/deleted in a list to provide some additional notification, categorization, kick off some external process, etc. They're really easy to define and set up and one of the most flexible ways to listen for changes.
SharePoint Workflows are less common than the previous two, from my experience, but are still used quite heavily by larger organizations. Workflows can be synchronous (ItemUpdating) which will execute on the server currently serving the user, or asynchronous (ItemUpdated) which can be handled by any server in the SharePoint Farm when the Timer Service picks up the job. Workflows are generally used for watching forms, creating tasks, organizing new items, etc.
Timer Jobs are content-less pieces of code that are run on a schedule by the SharePoint Timer Service. They run under OWSTIMER (versus the w3wp IIS worker process) and there are some limitations and "gotchas" with these. They're analogous to Windows Scheduled Tasks.
Edit: Added Workflow information.
Edit 2: Added Event Receivers. Sorry! It's been awhile since I've had to crack my knuckles over SharePoint. This trip down memory lane is...a trip.

Are there any examples of group data-sharing using a replicated database, such as CouchDB? [closed]

Background: I am working on a proposal for a PHP/web-based P2P replication layer for PDO databases. My vision is that someone with a need to crowd-source data sets up this software on a web server, hooks it up to their preferred db platform, and then writes a web app around it to add/edit/delete data locally. Other parties, if they wish, may set up a similar thing - with their own web apps written around it - and set up data-sharing agreements with one or more peers. In the general case, changes made to one database are written to another on a versioned basis, such that they eventually flow around the whole network.
Someone has asked me why I'm not using CouchDB, since it has bi-directional replication and record versioning offered as standard. I wasn't aware of these capabilities, so this turns out to be an excellent question! It occurs to me, if this facility is already available, are there any existing examples of server-to-server replication between separate groups? I've done a great deal of hunting and not found anything.
(I suppose what I am looking for is examples of "group-sourcing": give groups a means to access a shared dataset locally, plus the benefits of critical mass they would be unable to build individually, whilst avoiding the political ownership/control problems associated with the traditional centralised model.)
You might want to check out http://refuge.io/
It is built around CouchDB, but designed more specifically for forming peer groups.
Also, here is a Couchbase-sponsored case study of replication between various groups:
http://site.couchio.couchone.com/case-study-assay-depot
This can be achieved on standard CouchDB installs.
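As a rough sketch, assuming two group servers running stock CouchDB (host names, database names, and credentials below are illustrative assumptions), replication can be kicked off through the standard _replicate endpoint:

```typescript
// Ask one CouchDB node to replicate a shared database to a peer group's node.
async function replicateSharedData(): Promise<unknown> {
  const response = await fetch("http://localhost:5984/_replicate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Basic " + Buffer.from("admin:password").toString("base64"),
    },
    body: JSON.stringify({
      source: "http://group-a.example.org:5984/shared_data",
      target: "http://group-b.example.org:5984/shared_data",
      continuous: true,    // keep the copies converging as changes arrive
      create_target: true, // create the database on the target if it is missing
    }),
  });
  if (!response.ok) {
    throw new Error(`replication request failed: ${response.status}`);
  }
  return response.json();
}
```

Running a second, mirrored request in the other direction gives the bi-directional flow described in the question.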
Hope that gives you a start.

Are there best practices for testing security in an Agile development shop? [closed]

Regarding Agile development, what are the best practices for testing security per release?
If it is a monthly release, are there shops doing pen-tests every month?
What's your application domain? It depends.
Since you used the word "Agile", I'm guessing it's a web app. I have a nice easy answer for you.
Go buy a copy of Burp Suite (it's the #1 Google result for "burp" --- a sure endorsement!); it'll cost you 99EU, or ~$180USD, or $98 Obama Dollars if you wait until November.
Burp works as a web proxy. You browse through your web app using Firefox or IE or whatever, and it collects all the hits you generate. These hits get fed to a feature called "Intruder", which is a web fuzzer. Intruder will figure out all the parameters you provide to each one of your query handlers. It will then try crazy values for each parameter, including SQL, filesystem, and HTML metacharacters. On a typical complex form post, this is going to generate about 1500 hits, which you'll look through to identify scary --- or, more importantly in an Agile context, new --- error responses.
Fuzzing every query handler in your web app at each release iteration is the #1 thing you can do to improve application security without instituting a formal "SDLC" and adding headcount. Beyond that, review your code for the major web app security hot spots (a short sketch illustrating a few of these follows the list):
Use only parameterized prepared SQL statements; don't ever simply concatenate strings and feed them to your database handle.
Filter all inputs to a white list of known good characters (alnum, basic punctuation), and, more importantly, output filter data from your query results to "neutralize" HTML metacharacters to HTML entities (quot, lt, gt, etc).
Use long random hard-to-guess identifiers anywhere you're currently using simple integer row IDs in query parameters, and make sure user X can't see user Y's data just by guessing those identifiers.
Test every query handler in your application to ensure that they function only when a valid, logged-on session cookie is presented.
Turn on the XSRF protection in your web stack, which will generate hidden form token parameters on all your rendered forms, to prevent attackers from creating malicious links that will submit forms for unsuspecting users.
Use bcrypt --- and nothing else --- to store hashed passwords.
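A brief sketch in Node/TypeScript of a few of the points above (parameterized queries, output escaping, opaque identifiers, and bcrypt); the libraries (pg, bcrypt) and all table and column names are assumptions for illustration:

```typescript
import { Pool } from "pg";              // node-postgres
import * as bcrypt from "bcrypt";
import { randomUUID } from "node:crypto";

const pool = new Pool(); // connection settings come from environment variables

// Parameterized SQL: values travel separately from the statement text, so
// user input is never concatenated into the query.
async function findOrder(publicId: string) {
  const result = await pool.query(
    "SELECT public_id, status FROM orders WHERE public_id = $1",
    [publicId],
  );
  return result.rows[0];
}

// Output filtering: neutralize HTML metacharacters before rendering query
// results back into a page.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Hard-to-guess identifiers instead of sequential integer row IDs in URLs.
const orderPublicId = randomUUID();

// bcrypt for password storage; the cost factor trades CPU time for resistance
// to offline cracking.
async function hashPassword(plain: string): Promise<string> {
  return bcrypt.hash(plain, 12);
}
async function checkPassword(plain: string, stored: string): Promise<boolean> {
  return bcrypt.compare(plain, stored);
}
```

The ownership check (user X must not see user Y's data) and the XSRF tokens still have to be enforced in your framework's request handling; obscure identifiers alone are not an access control.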
I'm no expert on Agile development, but I would imagine that integrating some basic automated pen-test software into your build cycle would be a good start. I have seen several software packages out there that will do basic testing and are well suited for automation.
I'm not a security expert, but I think the most important thing to be aware of, before testing security, is what you are trying to protect. Only if you know what you are trying to protect can you do a proper analysis of your security measures, and only then can you start testing those implemented measures.
Very abstract, I know. However, I think it should be the first step of every security audit.
Unit testing, Defensive Programming and lots of logs
Unit testing
Make sure you unit test as early as possible (e.g. the password should be encrypted before sending, the SSL tunnel is working, etc). This would prevent your programmers from accidentally making the program insecure.
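A minimal sketch of such a test, assuming a hypothetical registerUser() helper and the Node.js built-in test runner (the names and stack here are illustrative assumptions):

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";
import * as bcrypt from "bcrypt";

// Hypothetical application code under test.
interface StoredUser { email: string; passwordHash: string; }
async function registerUser(email: string, password: string): Promise<StoredUser> {
  return { email, passwordHash: await bcrypt.hash(password, 12) };
}

test("passwords are never stored in plaintext", async () => {
  const plain = "correct horse battery staple";
  const user = await registerUser("alice@example.com", plain);
  assert.notEqual(user.passwordHash, plain);                  // not stored as-is
  assert.ok(await bcrypt.compare(plain, user.passwordHash));  // but still verifiable
});
```

Running such checks on every build catches a programmer accidentally storing or logging a raw password much earlier than a periodic pen-test would.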
Defensive Programming
I personally call this Paranoid Programming, but Wikipedia is never wrong (sarcasm). Basically, you add checks to your functions that validate all the inputs (a small sketch follows the list):
are the user's cookies valid?
is the user still logged in?
are the function's parameters protected against SQL injection? (Even though you know the inputs are generated by your own functions, you check anyway.)
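A minimal sketch of that guard-clause style (the session store, cookie handling, and updatePost() below are hypothetical placeholders, not a prescribed design):

```typescript
import { Pool } from "pg";

const pool = new Pool();
const activeSessions = new Map<string, { userId: number; expires: number }>();

async function updatePost(sessionCookie: string, postId: unknown, body: unknown) {
  // Is the cookie valid and does it map to a live, unexpired session?
  const session = activeSessions.get(sessionCookie);
  if (!session || session.expires < Date.now()) {
    throw new Error("not logged in");
  }

  // Validate parameters even though callers "should" pass sane values.
  if (typeof postId !== "number" || !Number.isInteger(postId)) {
    throw new Error("invalid post id");
  }
  if (typeof body !== "string" || body.length === 0 || body.length > 10_000) {
    throw new Error("invalid body");
  }

  // Parameterized query: the values above can never alter the SQL itself.
  await pool.query(
    "UPDATE posts SET body = $1 WHERE id = $2 AND author_id = $3",
    [body, postId, session.userId],
  );
}
```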
Logging
Log everything like crazy. It's easier to remove logs than to add them. A user has logged in? Log it. A user hit a 404? Log it. The admin edited or deleted a post? Log it. Someone was able to access a restricted page? Log it.
Don't be surprised if your log file reaches 15+ MB during your development phase. During beta, you can decide which logs to remove. If you want, you can add a flag to decide when a certain event is logged.
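A small sketch of that idea, including the flag for deciding which events get written (the log file name and event names are illustrative assumptions):

```typescript
import { appendFileSync } from "node:fs";

type Level = "debug" | "info" | "warn" | "security";
const enabledLevels = new Set<Level>(["info", "warn", "security"]); // the "flag"

function logEvent(level: Level, event: string, details: Record<string, unknown> = {}): void {
  if (!enabledLevels.has(level)) return; // cheap to silence categories during beta
  const line = JSON.stringify({ ts: new Date().toISOString(), level, event, ...details });
  appendFileSync("app.log", line + "\n");
}

// Examples matching the events mentioned above.
logEvent("info", "user.login", { userId: 42 });
logEvent("warn", "http.404", { path: "/old-page" });
logEvent("security", "admin.post.delete", { postId: 7, adminId: 1 });
logEvent("security", "restricted.page.access", { userId: 42, page: "/admin" });
```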
