Data updated in Starcounter DB, push updates to active sessions?

Say, for example, I have something like this declared in Starcounter:
[Database]
public class User
{
    public string Username;
    public string Email;
}
I have a page listing one row from the DB, with an update button, using PuppetJS, and it is all working fine.
If I change the value from another session or directly in the DB, is there any way to push the new values to all clients that are active?
Edit:
I added the following to my TestPage.json.cs file:
void Handle(Input.Update action)
{
    Transaction.Commit();
    Session.ForAll(s =>
    {
        if (s.Data is TestPage)
            s.CalculatePatchAndPushOnWebSocket();
    });
}
This pushes updates directly to other sessions nicely. I still wonder if there is a better way to do this.

The code that you've presented in your edit is exactly the way to go:
void Handle(Input.Update action)
{
    Transaction.Commit();
    Session.ForAll(s =>
    {
        if (s.Data is TestPage)
            s.CalculatePatchAndPushOnWebSocket();
    });
}
What it does is:
- commit the changes to the database
- for every running session, check whether that session has a TestPage instance attached to it
- if it does, re-evaluate the bound data and send patches if required
More about pushing changes over WebSocket can be found here: http://starcounter.io/guides/web/sessions/.

Related

Fetching OpenIdConnectConfiguration while offline/no connection to AuthServer

I've been working on how to save the OpenIdConnectConfiguration locally, for the odd case where the AuthServer is not reachable but the frontend client (e.g. a phone) still has a valid refresh token that needs to be validated again when signing in. It also needs to be saved locally to a file for the case where the backend (e.g. WCF) has restarted, due to an update or its frequent restarts (once a day).
What I've done so far: I've saved the JSON object from ".well-known/openid-configuration" to a file/variable, and now I want to recreate the OpenIdConnectConfiguration object from it.
OpenIdConnectConfiguration.Create(json) does most of the work, but the signing keys do not get created. Maybe it's because the authorization endpoint needs to be created in some other manner?
Or maybe I'm going about this the wrong way and there is another solution to the issue. I'm working in C#.
Edit: I know there are some caveats to what I'm doing. I need to check once in a while whether the public key has changed, but security-wise it should be fine to save the configuration, because it's already public. I only need the public key to validate/sign the JWT I get from the user, nothing more.
I figured out a solution after looking through OpenIdConnectConfiguration.cs on the official GitHub.
When fetching the OpenIdConnectConfiguration the first time, use Write() to get a JSON string and save it to a file.
Afterwards, when loading the file, use Create() to recreate the OpenIdConnectConfiguration from the JSON string. (This had the issue of not restoring the signingKeys, as mentioned in the question, but there is a fix.)
Lastly, to fix the signingKeys not being created (this is what I found out from the GitHub class): all we need to do is loop through the JsonWebKeySet and create them the same way the class itself does. We already have all the information from the initial load, so we only need to create them again.
I'll leave a code example below of what I did. I still need to handle checking whether the key has been changed/expired, which is the next step I'll be tackling.
interface IValidationPersistence
{
    void SaveOpenIdConnectConfiguration(OpenIdConnectConfiguration openIdConfig);
    OpenIdConnectConfiguration LoadOpenIdConnectionConfiguration();
}

class ValidationPersistence : IValidationPersistence
{
    private readonly string _windowsTempPath = Path.GetTempPath();
    private readonly string _fileName = "TestFileName";
    private readonly string _fullFilePath;

    public ValidationPersistence()
    {
        _fullFilePath = Path.Combine(_windowsTempPath, _fileName);
    }

    public OpenIdConnectConfiguration LoadOpenIdConnectionConfiguration()
    {
        FileService fileService = new FileService();

        // Recreate the configuration from the saved JSON string.
        OpenIdConnectConfiguration openIdConfig =
            OpenIdConnectConfiguration.Create(fileService.LoadFromJSONFile<string>(_fullFilePath));

        // Create() does not restore the signing keys, so rebuild them
        // from the JsonWebKeySet, the same way the class itself does.
        foreach (SecurityKey key in openIdConfig.JsonWebKeySet.GetSigningKeys())
        {
            openIdConfig.SigningKeys.Add(key);
        }

        return openIdConfig;
    }

    public void SaveOpenIdConnectConfiguration(OpenIdConnectConfiguration openIdConfig)
    {
        FileService fileService = new FileService();
        fileService.WriteToJSONFile(OpenIdConnectConfiguration.Write(openIdConfig), _fullFilePath);
    }
}
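To tie this together, here is a hedged usage sketch (the metadata URL is a placeholder, and ConfigurationManager / OpenIdConnectConfigurationRetriever are assumed to come from the Microsoft.IdentityModel.Protocols packages): try the live fetch first, refresh the local copy on success, and fall back to the saved file when the AuthServer is unreachable.

public async Task<OpenIdConnectConfiguration> GetConfigurationAsync()
{
    IValidationPersistence persistence = new ValidationPersistence();
    try
    {
        // Normal path: fetch the metadata from the AuthServer.
        var configManager = new ConfigurationManager<OpenIdConnectConfiguration>(
            "https://authserver.example.com/.well-known/openid-configuration", // placeholder
            new OpenIdConnectConfigurationRetriever());
        OpenIdConnectConfiguration openIdConfig =
            await configManager.GetConfigurationAsync(CancellationToken.None);

        // Refresh the local copy so the offline fallback stays current.
        persistence.SaveOpenIdConnectConfiguration(openIdConfig);
        return openIdConfig;
    }
    catch (Exception)
    {
        // Offline / AuthServer unreachable: use the saved configuration.
        return persistence.LoadOpenIdConnectionConfiguration();
    }
}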

Hazelcast not working for insert and update

I have this method for saving and updating a student, but whenever I save a student to the database and then hit getAllStudent, it doesn't return the student I just saved.
Any help?
@CachePut(cacheNames = "studentCache")
public StudentDTO save(StudentDTO studentDto)
{
    // Map the DTO to an entity, save it, and map the result back.
    Student student = studentRepository.save(studentMapper.toEntity(studentDto));
    return studentMapper.toDto(student);
}

@Override
@Transactional
@Cacheable(cacheNames = "studentCache")
public Page<StudentDTO> findAll(Pageable pageable)
{
    return studentRepository.findAll(pageable).map(studentMapper::toDto);
}
I understand that I could clear the cache whenever I create or update a student:
@CacheEvict(value = "users", allEntries = true)
Student create(Student student) {
    return userStudent.create(student);
}
But I want to avoid that.
Your save is caching singletons; findAll is caching collections. As far as the cache manager is concerned, these are two different things, so updating one won't update the other.
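One way to keep the two in step, sketched below with invented cache names (give findAll its own cache, e.g. @Cacheable(cacheNames = "studentsPageCache"); that name is an assumption, not from the question): keep updating the single-entry cache with @CachePut, and evict only the collection cache so the next findAll re-reads from the database.

@Caching(
    put = { @CachePut(cacheNames = "studentCache", key = "#result.id") },
    evict = { @CacheEvict(cacheNames = "studentsPageCache", allEntries = true) }
)
public StudentDTO save(StudentDTO studentDto)
{
    Student student = studentRepository.save(studentMapper.toEntity(studentDto));
    return studentMapper.toDto(student);
}

This assumes StudentDTO exposes an id for the key expression; the point is simply that the page cache must be invalidated separately, since no @CachePut of a single DTO will ever rewrite a cached Page.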

Azure Mobile Services Soft Delete Issue / Practices

With soft delete turned on: I add a single record on the client and push; delete the added record and push; then attempt to add a new record (and push) with the same primary key as the initial record, and I get an exception. It would appear that EntityDomainManager just attempts a new insert, without checking whether the record should be 'updated' instead of inserted.
However, if I turn off soft delete in the domain manager constructor, everything works fine.
We are using incremental sync, so soft delete, as I understand it, is required to make that work, so we don't end up with different pictures of what's right between mobile and server.
What is the recommended approach? A custom EntityDomainManager (or other DomainManager)? If so, it would be useful to have more clarity on the interactions between the table controller and the domain manager.
I have constructed this custom domain manager, which seems to work, but I would appreciate any guidance/suggestions.
public class CustomEntityDomainManager<TData> : EntityDomainManager<TData> where TData : class, ITableData
{
    public CustomEntityDomainManager(DbContext context, HttpRequestMessage request, ApiServices services)
        : base(context, request, services)
    {
    }

    public CustomEntityDomainManager(DbContext context, HttpRequestMessage request, ApiServices services, bool enableSoftDelete)
        : base(context, request, services, enableSoftDelete)
    {
    }

    public override async Task<TData> InsertAsync(TData data)
    {
        if (data == null)
        {
            throw new ArgumentNullException("data");
        }

        // If we have soft delete enabled and data has been provided with an id in it
        if (EnableSoftDelete && data.Id != null)
        {
            // Look to see if the record exists and is soft deleted; if so,
            // truly remove it before attempting the insert. Record the old
            // value of IncludeDeleted, since we need to query deleted rows.
            var oldIncludeDeleted = IncludeDeleted;
            try
            {
                IncludeDeleted = true;
                var existingData = await this.Lookup(data.Id).Queryable.FirstOrDefaultAsync();

                // If the record exists and is soft deleted, then truly delete it.
                if (existingData != null && existingData.Deleted)
                {
                    this.Context.Set<TData>().Remove(existingData);
                }
            }
            finally
            {
                IncludeDeleted = oldIncludeDeleted;
            }
        }

        if (data.Id == null)
        {
            data.Id = Guid.NewGuid().ToString("N");
        }

        return await base.InsertAsync(data);
    }
}
This behavior is by design: we require that you do an explicit undelete before doing the update.
The solution you've presented is fine. You can also move the code to your table controller, assuming you only need this behavior in one table. If you need it in multiple tables, then the custom domain manager is the best approach.
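For the single-table case, here is a hedged wiring sketch (TodoItem and MobileServiceContext are placeholder names from the standard backend template, not from the question): point the controller at the custom domain manager in Initialize, keeping soft delete enabled.

public class TodoItemController : TableController<TodoItem>
{
    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        var context = new MobileServiceContext(); // placeholder DbContext

        // Route inserts through the custom manager so soft-deleted rows
        // are purged before the re-insert, while soft delete stays on.
        DomainManager = new CustomEntityDomainManager<TodoItem>(
            context, Request, Services, enableSoftDelete: true);
    }
}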

Document not available in query direct after store

I'm trying to store a "Role" object and then get a list of Roles, as shown here:
public class Role
{
    public Guid RoleId { get; set; }
    public string RoleName { get; set; }
    public string RoleDescription { get; set; }
}

// Function store:
private void StoreRole(Role role)
{
    using (var docSession = docStore.OpenSession())
    {
        docSession.Store(role);
        docSession.SaveChanges();
    }
}

// Then it returns, and a function calls this:
public List<Role> GetRoles()
{
    using (var docSession = docStore.OpenSession())
    {
        var Roles = from roles in docSession.Query<Role>() select roles;
        return Roles.ToList();
    }
}
However, in GetRoles I am missing the last inserted record/document. If I wait 200 ms and then call the function, the item is there.
So I am not in sync?!
How can I solve this, or alternatively, how can I know when the result is available in the document store for querying?
I've used transactions, but cannot figure this out. Update and delete are just fine, but when inserting I need to delay my 'List' call.
You are treating RavenDB as if it is a relational database, and it isn't. Load and Store are ACID operations in RavenDB, Query is not. Indexes (necessary for queries) are updated asynchronously, and in fact, temporary indexes may have to be built from scratch when you do a session.Query<T>() without a durable index specified. So, if you are trying to query for information you JUST stored, or if you are doing the FIRST query that requires a temporary index to be created, you probably won't get the data you expect.
There are methods of customizing your query to wait for non-stale results, but you shouldn't lean on these too much, because they're indicative of a bad design. It is better to find a way to do the same thing that embraces eventual consistency: either change your model so you get consistency via Load/Store (perhaps you could have one document that defines ALL of the roles in a list?), or change the application flow so you don't need to Store and then immediately Query.
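If you go the model-change route, here is a minimal sketch of that idea (RolesDocument and its document id are invented for illustration): keep every role in a single document, so reading them back is a Load, which is ACID, rather than a Query.

public class RolesDocument
{
    public const string DocId = "roles/all";
    public List<Role> Roles { get; set; }
}

private void StoreRole(Role role)
{
    using (var docSession = docStore.OpenSession())
    {
        // Load is ACID and sees the latest committed state, unlike Query.
        var doc = docSession.Load<RolesDocument>(RolesDocument.DocId)
                  ?? new RolesDocument { Roles = new List<Role>() };
        doc.Roles.Add(role);
        docSession.Store(doc, RolesDocument.DocId);
        docSession.SaveChanges();
    }
}

GetRoles then becomes docSession.Load<RolesDocument>(RolesDocument.DocId).Roles, with no staleness window.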
An additional way of solving this is to query the index with WaitForNonStaleResultsAsOfLastWrite() turned on inside the save function. That way, when the save completes, the index will have been updated to at least include the change you just made.
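A minimal sketch of that customization (this is the pre-4.0 RavenDB client API; shown here on the read side, though per the answer you could run such a query right after SaveChanges):

public List<Role> GetRoles()
{
    using (var docSession = docStore.OpenSession())
    {
        // Blocks until the index has caught up with the last write.
        return docSession.Query<Role>()
            .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
            .ToList();
    }
}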

Where to check user email does not already exist?

I have an account object that creates a user like so:
public class Account
{
    public ICollection<User> Users { get; set; }

    public User CreateUser(string email)
    {
        User user = new User(email);
        user.Account = this;
        Users.Add(user);
        return user;
    }
}
In my service layer, when creating a new user, I call this method. However, there is a rule that the user's email MUST be unique to the account, so where does this check go? To me it should go in the CreateUser method, with an extra line that checks that the email is unique to the account.
However, if I were to do this, then ALL the users for the account would need to be loaded in, which seems like a bit of an overhead to me. It would be better to query the database for the user's email, but doing that in the method would require a repository in the account object, wouldn't it? Maybe the answer, then, is when loading the account from the repository, instead of doing:
var account = accountRepository.Get(12);
// instead do
var account = accountRepository.GetWithUserLoadedOnEmail(12, "someone@example.com");
Then the account object could still check the Users collection for the email, and the matching user would already have been eagerly loaded if found.
Does this work? What would you do?
I'm using NHibernate as an ORM.
First off, I do not think you should use exceptions to handle "normal" business logic like checking for duplicate email addresses. This is a well-documented anti-pattern and is best avoided. Keep the constraint on the DB and handle duplicate exceptions when they occur, because they cannot be avoided entirely, but try to keep them to a minimum by checking first. I would not recommend locking the table.
Secondly, you've put the DDD tag on this question, so I'll answer it in a DDD way. It looks to me like you need a domain service or factory. Once you have moved this code into a domain service or factory, you can inject a UserRepository into it and call it to see whether a user already exists with that email address.
Something like this:
public class CreateUserService
{
    private readonly IUserRepository userRepository;

    public CreateUserService(IUserRepository userRepository)
    {
        this.userRepository = userRepository;
    }

    public bool CreateUser(Account account, string emailAddress)
    {
        // Check if there is already a user with this email address.
        User userWithSameEmailAddress = userRepository.GetUserByEmailAddress(emailAddress);
        if (userWithSameEmailAddress != null)
        {
            return false;
        }

        // Create the new user; depending on your aggregates this could be a factory method on Account.
        User newUser = new User(emailAddress);
        account.AddUser(newUser);

        return true;
    }
}
This allows you to separate the responsibilities a little and use the domain service to coordinate things. Hope that helps!
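A quick usage sketch (how the repository is constructed/injected is assumed):

// Somewhere in the application layer:
var createUserService = new CreateUserService(userRepository);
bool created = createUserService.CreateUser(account, "someone@example.com");
if (!created)
{
    // Surface an "email already taken" validation message to the caller.
}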
If you have properly specified the constraints on the users table, the add should throw an exception telling you that there is already a duplicate value. You can either catch that exception in the CreateUser method and return null (or some duplicate-user status code), or let it flow out and catch it later.
You don't want to test for existence in your code and then add, because there is a slight possibility that between the test and the add someone will come along and add the same email, which would cause the exception to be thrown anyway...
public User CreateUser(string email)
{
    try
    {
        User user = new User(email);
        user.Account = this;
        user.Insert();
        return user;
    }
    catch (SqlException e)
    {
        // It would be best to check for the exception code from your db...
        return null;
    }
}
Given that "the rule that the users email MUST be unique to the account", then the most important thing is to specify in the database schema that the email is unique, so that the database INSERT will fail if the email is duplicate.
You probably can't prevent two users adding the same email nearly simultaneously, so the next thing is that the code should handle (gracefully) an INSERT failure caused by the above.
Once you've implemented the above, checking whether the email is unique before you do the insert is just an optional optimization.
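As a sketch of that graceful handling (assumptions: SQL Server behind NHibernate, a unique index on the email column, and a sessionFactory in scope; 2601/2627 are SQL Server's duplicate-key error numbers, and NHibernate wraps the SqlException in a GenericADOException):

public User TryCreateUser(Account account, string email)
{
    try
    {
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            User user = account.CreateUser(email);
            session.Save(user);
            tx.Commit();
            return user;
        }
    }
    catch (GenericADOException ex) when (ex.InnerException is SqlException sqlEx
                                         && (sqlEx.Number == 2601 || sqlEx.Number == 2627))
    {
        // The unique constraint on email was violated: duplicate address.
        return null;
    }
}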
