How can I log out an administrator in SilverStripe 3.1.x after a period of inactivity?

How do I expire the administrator session after a period of inactivity in SilverStripe 3.1.x? Is there a config option for this?
I searched and found the following code snippet, which, when placed in the Page_Controller class, works for frontend users but is totally ineffective in the administration area.
public function init() {
    parent::init();
    self::logoutInactiveUser();
}

public static function logoutInactiveUser() {
    $inactivityLimit = 1; // in Minutes - deliberately set to 1 minute for testing purposes
    $inactivityLimit = $inactivityLimit * 60; // Converted to seconds
    $sessionStart = Session::get('session_start_time');
    if (isset($sessionStart)) {
        $elapsed_time = time() - Session::get('session_start_time');
        if ($elapsed_time >= $inactivityLimit) {
            $member = Member::currentUser();
            if ($member) $member->logOut();
            Session::clear_all();
            $this->redirect(Director::baseURL() . 'Security/login');
        }
    }
    Session::set('session_start_time', time());
}
After over 1 minute of inactivity, the admin user is still logged in and the session has not timed out.

For people like myself still searching for a solution to this, there's a much simpler alternative. As it turns out, the only good solution at the moment is indeed to disable LeftAndMain.session_keepalive_ping, and simon_w's solution will not work precisely because of this ping. Disabling this ping should not cause data loss (at least not for SilverStripe 3.3+), because the user will be presented with an overlay when they attempt to submit their work; after validating their credentials, their data will be submitted to the server as usual.
Also, for anyone who (like myself) was looking for a way to override the CMS ping via LeftAndMain.session_keepalive_ping using _config.yml, keep reading.
Simple Fix: In your mysite/_config.php, simply add:
// Disable back-end AJAX calls to /Security/ping
Config::inst()->update('LeftAndMain', 'session_keepalive_ping', false);
This will prevent the CMS from refreshing the session, which will naturally expire on its own behind the scenes (and will not be submitted on the next request). That way, the session timeout setting you may already have in _config.yml will actually be respected, allowing you to log out a user who has been inactive in the CMS. Again, data should not be lost, for the reasons mentioned in the first paragraph.
You can optionally override the session timeout value manually in mysite/_config/config.yml to help ensure the session actually expires after an explicit period (e.g. 30 minutes below):
# Set session timeout to 30min.
Session:
  timeout: 1800
You may ask: Why is this necessary?
Because, while the bug (or functionality?) preventing you from overriding the LeftAndMain.session_keepalive_ping setting to false was supposedly fixed in framework PR #3272, it was actually reverted soon thereafter in PR #3275.
I hope this helps anyone else confused by this situation like I was!

This works, but I would love to hear from the core devs as to whether or not this is best practice.
In mysite/code I created a file called MyLeftAndMainExtension.php with the following code:
<?php

class MyLeftAndMainExtension extends Extension {

    public function onAfterInit() {
        self::logoutInactiveUser();
    }

    public static function logoutInactiveUser() {
        $inactivityLimit = 1; // in Minutes - deliberately set to 1 minute for testing
        $inactivityLimit = $inactivityLimit * 60; // Converted to seconds
        $sessionStart = Session::get('session_start_time');
        if (isset($sessionStart)) {
            $elapsed_time = time() - Session::get('session_start_time');
            if ($elapsed_time >= $inactivityLimit) {
                $member = Member::currentUser();
                if ($member) $member->logOut();
                Session::clear_all();
                Controller::curr()->redirect(Director::baseURL() . 'Security/login');
            }
        }
        Session::set('session_start_time', time());
    }
}
Then I added the following line to mysite/_config.php
LeftAndMain::add_extension('MyLeftAndMainExtension');
That seemed to do the trick. If you prefer to do it through yml, you can add this to mysite/_config/config.yml:
LeftAndMain:
  extensions:
    - MyLeftAndMainExtension

The Session.timeout config option is available for setting an inactivity timeout for sessions. However, setting it to anything greater than 5 minutes isn't going to work in the CMS out of the box.
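For reference, a minimal sketch of setting this option from PHP config, equivalent to the YAML shown earlier in this thread (the 30-minute value is illustrative):
// mysite/_config.php
// Expire sessions after 30 minutes of inactivity (value in seconds)
Config::inst()->update('Session', 'timeout', 1800);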
Having a timeout in the CMS isn't productive, and your content managers will end up ruing the timeout. This is because it is possible (and fairly common) to be active in the CMS while appearing inactive from the server's perspective (say, while writing a lengthy article). As such, the CMS is designed to send a ping back to the server every 5 minutes to ensure users stay logged in. While you can stop this behaviour by setting the LeftAndMain.session_keepalive_ping config option to false, I strongly recommend against doing so.

Related

What is Communication Failure in Shopware 5?

When creating a new theme, an error occurs:
0 - Communication Failure
Why does this happen? Could you please help me?
This usually happens due to a timeout that occurs when the Theme-controller tries to read the Theme's configuration for the first time. Unfortunately, this is quite a resource-heavy process; on weaker servers, timeouts may occur during this process quite often.
You can confirm this by opening the Theme-Manager, opening your browser's developer tools, refreshing the theme overview and looking at the response of the backend/Themes/list request.
You can give your server more time with the PHP function set_time_limit. In engine/Shopware/Components/Theme/Installer.php, prepend set_time_limit(0) to the synchronize method:
public function synchronize()
{
    set_time_limit(0);
    $this->synchronizeThemes();
}
Alternatively, prepend set_time_limit(0); to your shopware.php file, but don't forget to remove it again once the theme overview has loaded successfully.
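For example, the top of shopware.php would then temporarily look roughly like this:
<?php
// shopware.php - temporary change for debugging only:
// lift the PHP execution time limit while the theme overview loads,
// and remove this line again once it works
set_time_limit(0);
// ... the rest of the original shopware.php follows unchanged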

Random failure of selenium test on test server

I'm working on a project which uses Node.js and Nightwatch for test automation. The problem is that the tests are not reliable and give lots of false positives. I did everything I could to make them stable and am still getting errors. I went through some blogs like https://bocoup.com/blog/a-day-at-the-races and did some code refactoring. Does anyone have suggestions to solve this issue? At this moment I have two options: either I rewrite the code in Java (removing Node.js and Nightwatch from the solution, as I'm far more comfortable in Java than JavaScript; most of the time I struggle with the non-blocking nature of JavaScript), or I take snapshots, review app logs and run one test at a time.
Test environment:
Server - Linux
Display - Framebuffer
Total VMs - 9, with Selenium nodes running the tests in parallel
Browser - Chrome
The type of error I get is "element not found". Most of the time the tests fail as soon as the page is loaded. I have already set an 80-second timeout, so time can't be the issue. The tests run in parallel, but on separate VMs, so I don't know whether that could be an issue or not.
Edit 1:
I was working on this to find the root cause. I did the following things to eliminate the random failures:
a. Added --suiteRetries to retry the failed cases.
b. Went through the error screenshots and DOM source. Everything seems fine.
c. Replaced browser.pause with explicit waits (see the sketch below).
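For illustration, a minimal sketch of the kind of replacement made in step c; the selector, timeout and expected value are placeholders, not the project's actual ones:
// Before: a fixed pause, which either wastes time or is too short
// browser.pause(5000);

// After: wait explicitly for the element the next assertion depends on
browser.waitForElementVisible('#topicText', 5000);
browser.assert.containsText('#topicText', expectedName);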
Also, while debugging I observed one problem; maybe that is the issue causing the random failures. Here's the code snippet:
for (var i = 0; i < apiResponse.data.length; i++) {
    var name = apiResponse.data[i];
    browser.useXpath().waitForElementVisible(pageObject.getDynamicElement("#topicTextLabel", name.trim()), 5000, false);
    browser.useCss().assert.containsText(
        pageObject.getDynamicElement("#topicText", i + 1),
        name.trim(),
        util.format(issueCats.WRONG_DATA)
    );
}
I added the XPath check to validate whether I'm waiting long enough for that text to appear. I observed that the visibility assertion passes, but in the next assertion #topicText comes back as the previous value or null. This is an intermittent issue, but it happens frequently on the test server.
There is no magic bullet for brittle end-to-end UI tests. In an ideal world there would be an option avoid_random_failures=true that would quickly and easily solve the problem, but for now it's only a dream.
Simply rewriting all the tests in Java will not solve the problem, but if you feel more comfortable in Java, then I would definitely go in that direction.
As you already know from the article Avoiding random failures in Selenium UI tests, there are 3 commonly used techniques for avoiding race conditions in UI tests:
using constant sleep
using WebDriver's "implicit wait" parameter
using explicit waits (WebDriverWait + ExpectedConditions + FluentWait)
These techniques are also briefly mentioned in WebDriver: Advanced Usage; you can also read about them here: Tips to Avoid Brittle UI Tests.
Methods 1 and 2 are generally not recommended: they have drawbacks, and while they can work well on simple HTML pages, they are not 100% reliable on AJAX pages and they slow down the tests. The best one is #3, explicit waits.
In order to use technique #3 (explicit waits), you need to familiarize yourself and be comfortable with the following WebDriver tools (I point to their Java versions, but they have counterparts in other languages):
WebDriverWait class
ExpectedConditions class
FluentWait - used very rarely, but very useful in some difficult cases
ExpectedConditions has many predefined wait conditions; the most used (in my experience) is ExpectedConditions#elementToBeClickable, which waits until an element is visible and enabled so that you can click it.
How to use it? An example: say you open a page with a form which contains several fields into which you want to enter data. Usually it is enough to wait until the first field appears on the page and is editable (clickable):
By field1 = By.xpath("//div//input[.......]");
By field2 = By.id("some_id");
By field3 = By.name("some_name");
By buttonOk = By.xpath("//input[ text() = 'OK' ]");
....
....
WebDriverWait wait = new WebDriverWait( driver, 60 ); // wait max 60 seconds

// wait max 60 seconds until the element is visible and enabled such that you can click it;
// if you can click it, that means it is editable
wait.until( ExpectedConditions.elementToBeClickable( field1 ) ).sendKeys( "some data" );
driver.findElement( field2 ).sendKeys( "other data" );
driver.findElement( field3 ).sendKeys( "name" );
....
wait.until( ExpectedConditions.elementToBeClickable( buttonOk ) ).click();
The above code waits until field1 becomes editable after the page is loaded and rendered, but no longer than necessary. If the element is not visible and editable after 60 seconds, the test will fail with a TimeoutException.
Usually it's only necessary to wait for the first field on the page; if it becomes active, the others will be too.

Make Watir-webdriver load a page for a limited time and be able to retrieve information later

I know that there are several questions related to implementing waiting and timeouts in Watir, but I have not found an answer to my problem (which must be common). I use Watir-webdriver for testing a page that, due to its AJAX implementation, loads portion by portion for a very long time (more than 5 min). I need to sample this page for just a limited time (20-40 sec) and be able to analyze the information that is loaded during this short window. However, as far as I know, there is no straightforward mechanism to tell Watir::Browser to stop. I can use Timeout, but although my script gets control back after rescue, it is impossible to interrogate the browser and verify the information that it was able to receive during the timeout window. All I can do at this point is kill the process and restart the browser, as discussed here: Make headless browser stop loading page and elsewhere.
The code below illustrates my situation. In this example I have a global timeout (30 sec) and a local timeout (15 sec) used for reading the page. It never gets to the b.text call; the script just outputs the first exception message after 15 seconds, then keeps waiting for the browser to be released, and after the global timeout of 30 seconds prints the second exception message:
Time out. Got into exception branch
Dropped to bottom rescue.
The end.
I also tried to send an 'escape' key to the browser, but any communication with it while it is in the goto method is impossible. Any tips and suggestions will be appreciated!
require 'watir-webdriver'
require 'timeout'

client = Selenium::WebDriver::Remote::Http::Default.new
client.timeout = 30 # Set the global timeout
b = Watir::Browser.new :chrome, :http_client => client
my_url = '...here is my address...'

begin
  begin
    Timeout::timeout(15) { b.goto my_url } # Access the page with local timeout
    b.close # if all is unbelievably good and the page is loaded
  rescue Exception => e
    puts 'Time out. Got into exception branch'
    if b.text.include? 'my_text' # NEVER GETS HERE
      puts 'Yes, I see the text!'
    else
      puts 'I do not see the text.'
    end
  end
rescue Exception => e
  puts 'Dropped to bottom rescue.'
end
puts 'The end.'
Watir relies on Selenium WebDriver to handle calls to the browser. At this time all browsers require that the document.readyState of the current frame return "complete" before returning control to your code.
A recent update to the webdriver specification appears to allow for the possibility of a browser driver implementing a page loading strategy that is not blocking, but it is not a requirement and is not supported at this time.
https://w3c.github.io/webdriver/webdriver-spec.html#the-page-load-strategy

Potential vulnerability using setInterval in a Firefox addon?

I've written a Firefox add-on for the first time and it was reviewed and accepted a few months ago. This add-on frequently calls a third-party API. Meanwhile it was reviewed again, and now the way it calls setInterval is criticized:
setInterval called in potentially dangerous manner. In order to prevent vulnerabilities, the setTimeout and setInterval functions should be called only with function expressions as their first argument. Variables referencing function names are acceptable but deprecated as they are not amenable to static source validation.
Here's some background about the »architecture« of my add-on. It uses a global object which is not much more than a namespace:
if ( 'undefined' == typeof myPlugin ) {
    var myPlugin = {
        //settings
        settings : {},
        intervalID : null,
        //called once on window.addEventListener( 'load' )
        init : function() {
            //load settings
            //load remote data from cache (file)
        },
        //get the data from the API
        getRemoteData : function() {
            // XMLHttpRequest to the API
            // retrieve data (application/json)
            // write it to a cache file
        }
    };
    //start
    window.addEventListener(
        'load',
        function load( event ) {
            window.removeEventListener( 'load', load, false ); // needed
            myPlugin.init();
        },
        false
    );
}
So this may not be best practice, but I keep on learning. The interval itself is set inside the init() method, like so:
myPlugin.intervalID = window.setInterval(
    myPlugin.getRemoteData,
    myPlugin.settings.updateMinInterval * 1000 //milliseconds!
);
There's another place where the interval is set: an observer on the settings (preferences) clears the current interval and sets it again in exactly the same way as above when a change to the updateMinInterval setting occurs.
If I get this right, a solution using »function expressions« should look like this:
myPlugin.intervalID = window.setInterval(
    function() {
        myPlugin.getRemoteData();
    },
    myPlugin.settings.updateMinInterval * 1000 //milliseconds!
);
Am I right?
What is a possible scenario for »attacking« this code that I've overlooked so far?
Should setInterval and setTimeout basically be used differently in Firefox add-ons than in »normal« frontend JavaScript? After all, the documentation for setInterval shows exactly this usage, with declared functions, in some of its examples.
Am I right?
Yes, although I imagine by now you've tried it and found it works.
As for why you are asked to change the code, it's because of the part of the warning message saying "Variables referencing function names are acceptable but deprecated as they are not amenable to static source validation".
This means that unless you follow the recommended pattern for the first parameter it is impossible to automatically calculate the outcome of executing the setInterval call.
Since setInterval is susceptible to the same kinds of security risks as eval(), it is important to check that the call is safe, even more so in privileged code such as an add-on. This warning therefore serves as a red flag to the add-on reviewer, prompting them to carefully evaluate the safety of this line of code.
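To make the distinction concrete, here is a rough sketch of the call patterns in question (the interval value is illustrative):
// Dangerous manner: a string argument is evaluated much like eval()
window.setInterval( "myPlugin.getRemoteData()", 60000 );

// Acceptable but deprecated: a variable referencing a function,
// which static source validation cannot fully resolve
window.setInterval( myPlugin.getRemoteData, 60000 );

// Preferred: a function expression, which is easy to validate
window.setInterval( function() { myPlugin.getRemoteData(); }, 60000 );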
Your initial code should be accepted and cause no security issues, but the add-on reviewer will appreciate having one less red flag to consider.
Given that the ability to automatically determine the outcome of executing JavaScript is useful for performance optimisation as well as automatic security checks, I would wager that a function expression is also going to execute more quickly.

Testing Workflow History Autocleanup

I am facing a rather peculiar problem here. We have an OOTB approval workflow, which logs all the workflow history into a workflow history list. This list gets purged every 60 days. To increase the time period for which the workflow history is retained, I googled around and found that I have to run this code:
using (SPSite wfSite = new SPSite(siteUrl))
{
    using (SPWeb wfWeb = wfSite.OpenWeb(webName))
    {
        SPList wfList = wfWeb.Lists[listName];
        SPWorkflowAssociation _wfAssociation = null;

        foreach (SPWorkflowAssociation a in wfList.WorkflowAssociations)
        {
            if ("approval 1" == wfAssociationName.ToLowerInvariant())
            {
                a.AutoCleanupDays = newCleanupDays;
                _wfAssociation = a;
                assoCounter++;
            }
            else
            {
                _wfAssociation = a;
            }
        }
        wfList.UpdateWorkflowAssociation(_wfAssociation);
    }
}
The code works fine, in the sense that it does not throw any exceptions. So far so good. But my problem now is that I need to test whether my code works, so I set the newCleanupDays variable to 0. But I see that new workflow activities are still getting logged in the workflow history list. I can set the variable to 1, but that would mean waiting an entire day to see if the code works.
Is there any way in which I can test my scenario, so that I can set the autocleanup days to 1 and don't have to wait an entire day to see if my code works? Is there any way I can "fool" the system into thinking that 1 day has elapsed? I tried changing the system time and restarting the server and everything, but it didn't work for me.
Changing the system time should work, but you are going to have to kick off the timer job that initiates the workflows. Don't restart the server after the time change.
One warning is that SharePoint really really does not like travelling back in time. Any documents that are created in the "future" are going to have issues. So remember to test in a new web site that can be deleted when you roll back to "now".
