I've implemented the IAlertUpdateHandler interface in a class and used it to handle the creation and updating of alerts. The code fires, but it goes into an endless loop, calling itself again and again.
I actually want to suppress the email notification, so I'm calling a.Update(false); but this in turn calls PreUpdate or PostUpdate again and I end up with a StackOverflowException :(
I've tried returning true/false from both methods, but nothing helps.
According to the documentation, you're supposed to just return true or false. Try commenting out the Update call and simply returning true or false. If you call Update, you'll just go into the recursive loop and your return statement will never be reached.
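For illustration, a bare-bones handler following that advice might look like this (a sketch only; the decision logic is a placeholder, and the signature is the one from the code further down):
public bool PreUpdate(SPAlert a, SPWeb web, bool newAlert, string properties)
{
    // Decide here whether the pending update should go through:
    // return true to let it proceed, false to cancel it.
    // There is no a.Update(...) call, so the handler cannot re-trigger itself.
    return true;
}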
I know it might be a bit late for this, but I thought I'd post it anyway to help the next person.
I found a solution/hack for this problem.
I believe that when the alert is created from the UI, the system fires SPAlert.Update(). So what I came up with is to do something similar, but ignore the update call that comes from the UI. To do that, I added a custom property to the SPAlert property bag.
public bool PreUpdate(SPAlert a, SPWeb web, bool newAlert, string properties)
{
    if (CHECK_IF_SUPPRESSING_EMAIL && !a.Properties.ContainsKey("CustomUpdate"))
    {
        // add a property to identify this update
        a.Properties.Add("CustomUpdate", ""); // can be called anything :)
        a.Update(false);
        // return false to ignore the update sent by the UI
        return false;
    }
    else
    {
        // no changes here, proceed with custom behaviour
        return true;
    }
}
I have tested it and it seems to do the trick. Hope this helps someone :)
I have overridden the data view for a custom graph in an extension, and it returns the correct data without issue, both by re-declaring the view and by using the delegate-object technique. The issue is that when I do, the AllowSelect/AllowDelete modifications on the view in the primary graph stop working; once I comment out the override, the logic works as normal.
Not sure what I'm missing, but any thoughts would be appreciated.
Edit: To clarify, on the main graph, without the extension, the data retrieval and the Allow... settings work without issue.
public class FTTicketEntry : PXGraph<FTTicketEntry, UsrFTHeader>
{
    public PXSelect<UsrFTHeader> FTHeader;
    public PXSelect<UsrFTGridLabor,
        Where<UsrFTGridLabor.ticketNbr, Equal<Current<UsrFTHeader.ticketNbr>>>> FTGridLabor;
And with the extension, the data is returned correctly from the modified view, but the Allow... settings do not work from the main graph, only when set in the extension.
public class FTTicketEntryExtension : PXGraphExtension<FTTicketEntry>
{
    public PXSelect<UsrFTGridLabor,
        Where<UsrFTGridLabor.ticketNbr, Equal<Current<UsrFTHeader.ticketNbr>>,
        And<UsrFTGridLabor.projectID, Equal<Current<UsrFTHeader.projectID>>,
        And<UsrFTGridLabor.taskID, Equal<Current<UsrFTHeader.taskID>>>>>> FTGridLabor;
I have also tried the other approach in the extension, with the same results: the data is filtered correctly, but the Allow... commands fail.
public PXSelect<UsrFTGridLabor,
    Where<UsrFTGridLabor.ticketNbr, Equal<Current<UsrFTHeader.ticketNbr>>>> FTGridLabor;

public virtual IEnumerable fTGridLabor()
{
    foreach (PXResult<UsrFTGridLabor> record in Base.FTGridLabor.Select())
    {
        UsrFTGridLabor p = (UsrFTGridLabor)record;
        if (p.ProjectID == Base.FTHeader.Current.ProjectID && p.TaskID == Base.FTHeader.Current.TaskID)
        {
            yield return record;
        }
    }
}
My main concern with not wanting to use PXSelectReadOnly is that there is a status field on the header that drives when certain combinations of the conditions are required; these are set in the RowSelected events, sometimes all and sometimes none. The main issue is that I obviously don't want to replicate all of the UI logic in the extension, when overriding the view was the main intent of the extension for this screen.
Appreciate the assistance, and hopefully you can see something I'm overlooking or have missed.
Thanks
Every BLC instance stores all actual data views and actions within two collections: Views and Actions. Whenever you customize a data view or an action with a BLC extension, the original data view/action gets replaced in the appropriate collection by your custom object declared within the extension class. Once the original data view or action has been removed from its collection, it's quite obvious that changes made to the original object will have no effect, since the original object is no longer used by the BLC.
The easiest way to access the actual object from either of these two collections is as follows: Views["FTGridLabor"].Allow... = value;
Alternatively, you can operate with the AllowInsert, AllowUpdate and AllowDelete properties on the cache level: FTGridLabor.Cache.Allow... = value;
By changing the AllowXXX properties on the cache level, you completely eliminate the need to set AllowXXX on the data view, since the PXCache.AllowXXX properties have higher priority than the identical properties on the data view level:
public class PXView
{
    ...
    protected bool _AllowUpdate = true;
    public bool AllowUpdate
    {
        get
        {
            if (_AllowUpdate && !IsReadOnly)
            {
                return Cache.AllowUpdate;
            }
            return false;
        }
        set
        {
            _AllowUpdate = value;
        }
    }
    ...
}
With all that said, to resolve your issue with the UI logic not applying to the modified view, please consider one of the following options (a rough sketch combining both follows this list):
Set the AllowXXX property values in both the original BLC and its extensions via the object obtained from the Views collection:
Views["FTGridLabor"].Allow... = value;
Operate with the AllowXXX property values on the cache level: FTGridLabor.Cache.Allow... = value;
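For example, the RowSelected handler in the original graph could drive either option like this (a sketch only: the handler name and the status-based condition are assumptions; only the Views["FTGridLabor"] and cache-level accessors come from the options above):
protected virtual void UsrFTHeader_RowSelected(PXCache sender, PXRowSelectedEventArgs e)
{
    UsrFTHeader row = (UsrFTHeader)e.Row;
    if (row == null) return;

    // Hypothetical condition: only allow edits while the ticket is open.
    bool canEdit = (row.Status == "O");

    // Option 1: go through the Views collection, so the setting reaches whichever
    // view instance is actually in use (original declaration or extension override).
    Views["FTGridLabor"].AllowUpdate = canEdit;
    Views["FTGridLabor"].AllowDelete = canEdit;

    // Option 2: set the flags on the cache instead, which takes priority over the
    // view-level flags (in practice you would pick one of the two approaches).
    FTGridLabor.Cache.AllowUpdate = canEdit;
    FTGridLabor.Cache.AllowDelete = canEdit;
}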
First, check whether your data view should or should not be a variant of PXSelectReadonly.
Without more information, my advice would be to set the Allow properties in the Initialize method of your extension:
public override void Initialize()
{
    // This is similar to PXSelectReadonly
    DataView.AllowDelete = false;
    DataView.AllowInsert = false;
    DataView.AllowUpdate = false;
}
I'm trying to make my MVC4 website check whether people should be alerted by email because they haven't done something.
I'm having a hard time figuring out how to approach this. I checked whether the shared hosting platform would let me set up some sort of cron job, but that is not available.
So now my idea is to perform this check on each page request, which already seems suboptimal (because of the overhead). But I thought that by running it asynchronously it would not get in the way of people just visiting the site.
I first tried to do this in the Application_BeginRequest method in Global.asax, but that gets called multiple times per page request, so it didn't work.
Next I found that I can make a global filter that executes on OnResultExecuted, which seemed promising, but it's still a no-go.
The problem there is that I'm using MVCMailer to send the mails, and when I execute it I get the error: {"Value cannot be null.\r\nParameter name: httpContext"}
This probably means that the mailer needs the HTTP context.
The code I now have in my global filter is the following:
public override void OnResultExecuted(ResultExecutedContext filterContext)
{
    base.OnResultExecuted(filterContext);
    HandleEmptyProfileAlerts();
}

private void HandleEmptyProfileAlerts()
{
    new Thread(() =>
    {
        bool active = false;
        new UserMailer().AlertFirst("bla#bla.com").Send();
        DB db = new DB();
        DateTime CutoffDate = DateTime.Now.AddDays(-5);
        var ProfilesToAlert = db.UserProfiles.Where(x => x.CreatedOn < CutoffDate && !x.ProfileActive && x.AlertsSent.Where(y => y.AlertType == "First").Count() == 0).ToList();
        foreach (UserProfile up in ProfilesToAlert)
        {
            if (active)
            {
                new UserMailer().AlertFirst(up.UserName).Send();
                up.AlertsSent.Add(new UserAlert { AlertType = "First", DateSent = DateTime.Now, UserProfileID = up.UserId });
            }
            else
                System.Diagnostics.Debug.WriteLine(up.UserName);
        }
        db.SaveChanges();
    }).Start();
}
So my question is, am I going about this the right way, and if so, how can I make sure that MVCMailer gets the right context?
The usual way to do this kind of thing is to have a single background thread that periodically performs the checks you're interested in.
You would start the thread from Application_Start(). It's common to use a database to queue and store work items, although it can also be done in memory if that suits your app better.
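As a rough sketch of that approach (the 15-minute interval and the ProcessPendingAlerts worker are assumptions, not part of any framework), a System.Threading.Timer started from Application_Start can drive the periodic check without touching the request pipeline:
using System;
using System.Threading;
using System.Web;

public class MvcApplication : HttpApplication
{
    // Keep a static reference so the timer is not garbage collected.
    private static Timer _alertTimer;

    protected void Application_Start()
    {
        // ... existing route/bundle registration ...

        // Run the first check a minute after startup, then every 15 minutes.
        _alertTimer = new Timer(_ => ProcessPendingAlerts(), null,
            TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(15));
    }

    private static void ProcessPendingAlerts()
    {
        // Hypothetical worker: read queued work items (e.g. from the database),
        // decide who needs an alert and send the mails. Note that code running
        // here has no HttpContext, so the mail-sending path must not rely on it,
        // which is exactly the MVCMailer problem described above.
    }
}
Bear in mind that on shared hosting the application pool can be recycled when the site is idle, so a timer like this only runs while the application is alive; keeping the work queue in the database (as suggested above) lets a missed run catch up later.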
In my controller, I have a before() function that calls parent::before() and then does some additional processing once the parent returns. Based on a specific condition, I want to "save" the original request and pass execution to a specific action. Here is my before() function.
public function before() {
    parent::before();
    $this->uri = Request::Instance()->uri;
    $match = ORM::factory('survey_tester')
        ->where('eid', '=', $this->template->user->samaccountname)
        ->find();
    if (!$match->loaded()) {
        self::action_tester("add", $this->template->user);
    }
}
And here is the action that is being called:
public function action_tester($op = null, $user = null) {
    $testers = ORM::factory('survey_tester')->find_all();
    $tester = array();
    $this->template->title = 'Some new title';
    $this->template->styles = array('assets/css/survey/survey.css' => 'screen');
    $this->template->scripts = array('assets/js/survey/tester.js');
    $tester['title'] = $this->template->title;
    $tester['user'] = $this->template->user;
    switch ($op) {
        case "add":
            $tester = ORM::factory('survey_tester');
            $tester->name = $user->displayname;
            $tester->email = $user->mail;
            $tester->division = $user->division;
            $tester->eid = $user->samaccountname;
            if ($tester->save()) {
                $this->template->content = new View('pages/survey/tester_add', $admin);
            } else {
                $this->template->content = new View('pages/survey/tester_error', $admin);
            }
            break;
        default:
            break;
    }
}
This all seems to work fine. It is designed to prompt the user for a specific piece of information that is not provided by $user (populated from LDAP) if this is the first time they are hitting the controller for any reason.
The problem is that the views are not rendering. Instead, control passes back to whatever action was originally requested. This controller is called survey. If I browse to http://my.site.com/survey and log in with new user info, the record gets written, but I get the action_index views instead of my action_tester views.
I cannot figure out what I am doing wrong here. Any ideas will be appreciated. Thank you.
EDIT: I managed to get this working (sort of) by using $this->request->action = 'tester'; but I'm not sure how to add/set new params for the request yet.
The issue is that you are calling your method (action_tester), but Kohana is still going to call the original action after the before() method returns, and that will overwrite the response content, discarding the changes made in action_tester().
You can change the action that will be called (after before() runs) inside your before() method:
$this->request->action('action_tester');
After the before() method has run, Kohana will then call the new action (action_tester) rather than the old one, but you will still need to do something about the way you are passing your parameters.
Or you could just redirect the request based on some condition:
if ($something) {
    $this->request->redirect('controller/tester');
}
This doesn't seem like a particularly nice way to do it, though.
I'm using SubSonic 2.2.
I tried asking this question another way but didn't get the answer I was looking for.
Basically, I usually include validation at page level or in the code-behind of my user controls or aspx pages. However, I have seen a few small bits of info advising that this can be done within the partial classes generated by SubSonic.
So my question is: where do I put this code? Are there particular events I should add my validation/business logic to, such as inserting or updating? If so, and the validation isn't met, how do I stop the insert or update? A code example of how this looks would be great to start me off.
Any info greatly appreciated.
First you should create a partial class for the DAL object you want to use.
In my project I have a folder Generated where the generated classes live, and another folder Extended.
Let's say you have a SubSonic-generated class Product. Create a new file Product.cs in your Extended (or whatever) folder and create a partial class Product, making sure that the namespace matches the namespace of the SubSonic-generated classes.
namespace Your.Namespace.DAL
{
    public partial class Product
    {
    }
}
Now you have the ability to extend the Product class. The interesting part is that SubSonic offers some methods to override.
namespace Your.Namespace.DAL
{
    public partial class Product
    {
        public override bool Validate()
        {
            ValidateColumnSettings();
            if (string.IsNullOrEmpty(this.ProductName))
                this.Errors.Add("ProductName cannot be empty");
            return Errors.Count == 0;
        }

        // another way
        protected override void BeforeValidate()
        {
            if (string.IsNullOrEmpty(this.ProductName))
                throw new Exception("ProductName cannot be empty");
        }

        protected override void BeforeInsert()
        {
            this.ProductUUID = Guid.NewGuid().ToString();
        }

        protected override void BeforeUpdate()
        {
            this.Total = this.Net + this.Tax;
        }

        protected override void AfterCommit()
        {
            DB.Update<ProductSales>()
                .Set(ProductSales.ProductName).EqualTo(this.ProductName)
                .Where(ProductSales.ProductId).IsEqualTo(this.ProductId)
                .Execute();
        }
    }
}
In response to Dan's question:
First, have a look here: http://github.com/subsonic/SubSonic-2.0/blob/master/SubSonic/ActiveRecord/ActiveRecord.cs
All of the logic I showed in my other post lives in this file.
Validate: Called during Save(); if Validate() returns false, an exception is thrown. It is only called if the property ValidateWhenSaving (which is a constant, so you have to recompile SubSonic to change it) is true (the default).
BeforeValidate: Called during Save() when ValidateWhenSaving is true. Does nothing by default.
BeforeInsert: Called during Save() if the record is new. Does nothing by default.
BeforeUpdate: Called during Save() if the record is not new (i.e., an existing record is being updated). Does nothing by default.
AfterCommit: Called after successfully inserting/updating a record. Does nothing by default.
In my Validate() example, I first let the default ValidateColumnSettings() method run, which will add errors like "Maximum String length exceeded for column ProductName" if the product name is longer than the value defined in the database. Then I add another error string if ProductName is empty, and return false if the overall error count is greater than zero.
This will throw an exception during Save(), so you can't store the record in the DB.
I would suggest you call Validate() yourself and, if it returns false, display the elements of this.Errors at the bottom of the page (the easy way) or (more elegantly) create a Dictionary<string, string> where the key is the column name and the value is the reason.
private Dictionary<string, string> CustomErrors = new Dictionary<string, string>();

public override bool Validate()
{
    this.CustomErrors.Clear();
    ValidateColumnSettings();
    if (string.IsNullOrEmpty(this.ProductName))
        this.CustomErrors.Add(this.Columns.ProductName, "cannot be empty");
    if (this.UnitPrice < 0)
        this.CustomErrors.Add(this.Columns.UnitPrice, "has to be 0 or bigger");
    return this.CustomErrors.Count == 0 && Errors.Count == 0;
}
Then, if Validate() returns false, you can add the reason directly beside/below the right field on your web page.
If Validate() returns true, you can safely call Save(), but keep in mind that Save() could still throw other errors during persistence, like "Duplicate Key ...".
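As a rough page-level sketch of that flow (the button handler, control names and ShowError helper are hypothetical, and it assumes the CustomErrors dictionary from above is exposed publicly on the extended class):
protected void btnSave_Click(object sender, EventArgs e)
{
    Product product = new Product();
    product.ProductName = txtProductName.Text;
    product.UnitPrice = decimal.Parse(txtUnitPrice.Text);

    if (!product.Validate())
    {
        // Show each validation message next to the offending field.
        foreach (var error in product.CustomErrors)
            ShowError(error.Key, error.Value); // hypothetical helper
        return;
    }

    try
    {
        product.Save();
    }
    catch (Exception ex)
    {
        // Save() can still fail during persistence, e.g. on a duplicate key.
        ShowError(string.Empty, ex.Message);
    }
}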
Thanks for the response, but can you confirm this for me, as I'm a little confused: if you're validating the column (ProductName) value within Validate() or BeforeValidate() for string-empty or NULL, doesn't that mean the insert/update has already been actioned? Otherwise it wouldn't know that you've tried to insert or update a null value from the UI/aspx fields on the page into the column.
Also, within ASP.NET insert or update events we use e.Cancel = true to stop the insert/update; if BeforeValidate fails, does it automatically stop the insert or update?
If that's the case, isn't it easier to add page-level validation to stop the insert or update being fired in the first place?
I guess I'm a little confused about the lifecycle of these methods and when they come into play.
I've created a snippet that pulls data from a database table and displays it in tabular format. The snippet takes an id as a parameter, and this is added to the SQL query.
My problem is that if I have more than one snippet call on the same page (sometimes I need the tabular data for different ids displayed on one page), all the table data is the same as the last database call made by the last snippet.
What do I need to do to stop the snippet database calls from being cached, so they all display their own content?
I've tried setting the page to non-cacheable. I also used the [! !] brackets for the snippet calls, and even used the function_exists() method, but none of that helped.
Please can someone help me?
Thanks
Try this at the end of the snippet:
mysql_connect('host', 'user', 'pass');
mysql_select_db('db_name');
You need to specify the connection parameters, of course.
It would help to answer if you could post your snippet. I do this with multiple calls on a page without issue, so either something is wrong inside the snippet, or you need to output to unique placeholder names.
You have encountered a glitch in ModX, and it took me a long time to solve. ModX does a lot of caching using hashing, and apparently, when multiple connections are made from within one page spread over multiple snippets, this erratic behaviour can occur. It is most likely very unwanted behaviour; it can be solved easily but gives you a terrible headache otherwise.
One symptom is that $modx->getObject($classname, $id) often returns null.
The solution is very simple:
either use a static class with a single db instance, or
use $modx->setPlaceholder($instance, $tag);, or a combination.
My solution has been:
class dt__xpdo {
    private function __construct() {}

    public function __destruct() {
        $this->close();
    }

    static public function db($modx = null) {
        if ($modx->getPlaceholder('dt_xpdo') == '') {
            $dt_user = 'xxxxxxxxx';
            $dt_pw = 'xxxxxxxxx';
            $dt_host = 'localhost';
            $dt_dbname = 'xxxxxxxxx';
            $dt_port = '3306';
            $dt_dsn = "mysql:host=$dt_host;dbname=$dt_dbname;port=$dt_port;charset=utf8";
            $dt_xpdo = new xPDO($dt_dsn, $dt_user, $dt_pw);
            $dt_xpdo->setPackage('mymodel', MODX_CORE_PATH.'components/mymodel/'.'model/', '');
            //$modx->log(modX::LOG_LEVEL_DEBUG, 'mymodel.config.php');
            //$modx->log(modX::LOG_LEVEL_DEBUG, 'Could not addPackage for mymodel!');
            $modx->setPlaceholder('dt_xpdo', $dt_xpdo);
        }
        return $modx->getPlaceholder('dt_xpdo');
    }
}
Now you can use the following in your code:
require_once 'above.php';
and use something like
$xpdo = dt__xpdo::db($modx);
and continue flawlessly!