Spring Boot multi-tenancy does not work with multi-threading

I have referenced this link to implement Spring Boot multi-tenancy for two data sources with different databases (same schemas, though): https://anakiou.blogspot.in/2015/08/multi-tenant-application-with-spring.html
It worked fine until I introduced multi-threading into my application.
When I added an ExecutorService to do inserts into multiple tables for every record in a CSV file, I saw that the new threads did not carry the tenant identifier that the original REST service call was made with.
Instead, the new threads started using the default tenant.
How can we solve this? I will really appreciate any pointers.
EDIT 1: ExecutorService code, trying to set the current tenant as below:
List<Future<Output>> futures = new ArrayList<Future<Output>>();
for (int j = 0; j < myList.size(); j++) {
    final Output output = myList.get(j);
    Future<Output> future = executorService.submit(new Callable<Output>() {
        @Override
        public Output call() throws Exception {
            TenantContext.setCurrentTenant(<current tenant goes here>);
            Output currentOutput = someService.executeQueries(output);
            return currentOutput;
        }
    });
    futures.add(future);
}

The normal approach to propagating the tenant is a ThreadLocal. The blog example uses the class RequestContextHolder to store the whole request in a ThreadLocal and then resolves the tenant from there.
When you switch threads, ThreadLocal values are lost in the new thread unless you take care of setting them again.
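For illustration, here is a minimal sketch of that fix applied to the code from the question. It assumes the TenantContext from the blog post also exposes getCurrentTenant() and clear() methods (adjust to your actual API): the tenant is captured on the request thread, before the task is submitted, and re-set inside each task.
final String tenant = TenantContext.getCurrentTenant(); // read on the request thread
Future<Output> future = executorService.submit(new Callable<Output>() {
    @Override
    public Output call() throws Exception {
        TenantContext.setCurrentTenant(tenant); // re-establish the tenant on the worker thread
        try {
            return someService.executeQueries(output);
        } finally {
            TenantContext.clear(); // pool threads are reused, so don't leak the tenant
        }
    }
});
An alternative is a small ExecutorService wrapper that performs this capture/restore around every submitted task, so callers cannot forget it.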

Related

Azure Logic App, can't get data from Create File function

So I've noticed a strange behavior which I would like to share and see if anyone has had a similar problem.
We are using an on-premises solution where we pick up a file or an HTTP event request, map it to an outgoing XML XSD/schema and then create the file on-premises.
The problem was that the system where we save the file does not cooperate well with the Logic App; the Logic App sometimes fails because the system takes the file before the Logic App can finish writing the full content.
The system receiving the files only reads .xml files, so we thought we would first name the files .tmp, let the Logic App create them and then rename them.
This solution sounded quite simple before we started actually applying it to the Logic App.
If we take the File System connector, which has a Rename File action, and use the “Name” parameter from the on-premises Create File action, we get:
{
"statusCode": 404,
"message": "Resource not found"
}
We get a 404 message that the resource is not found, which complicates a lot of things. I've checked the privileges on the account; that should not be an issue.
What we also have tried is listing all files in the folder, creating a foreach and then adding a rule and the Rename File action. This makes it work, but the Logic App does not cope well with receiving a lot of files at once with that solution.
So Rename File works when it is inside a foreach loop and we extract the file names from a listing of the root folder or a normal folder.
But why does it not work when just using the Rename File action on its own? Is this perhaps a bug in the Logic App Rename File action?
So after discussing with Microsoft Azure support, they have actually confirmed that there is a bug with the “Create File” action.
It looks like all the data and information is actually lost during that action. The support technicians do not know why that is happening, but they have had similar cases which people have reported.
I have not stumbled across any of those posts, but I will post how we solved the problem with a workaround.
FYI, the support team has taken the case further so that the developers at Azure can look into it, because it's not just the “Name” tag which is lost from Create File (all of its output values are actually lost).
So first we initialize a variable and then set the variable's value, in two steps, before we create the file:
The name is set to a temp name plus a GUID.
The next step is creating the file with the temp name used in the “Set Variable Temp FileName” step.
On the Rename File action we use the path where we store the temp file and add \"FILENAME",
and add the “New Name” which we want to use.
This proved to work, but it is a workaround; support confirmed that you should be able to just use Rename File after creating the file with a temp name, changing it to the desired name.
But since Create File does not send or pass on any of its outputs, we have to initialize variables to make it work, as sketched below.
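For illustration, the expressions involved look roughly like this. The step names and the folder path are just examples from our setup, while concat(), guid() and variables() are standard Logic Apps workflow expression functions:
Initialize variable “TempFileName” (String): concat('temp-', guid(), '.tmp')
Create File: file name = variables('TempFileName')
Rename File: path = concat('\\myserver\myshare\', variables('TempFileName')), New Name = the final .xml name the consumer expects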
If anyone has stumbled on the same problem, where the backend system reads the files before the Logic App has managed to finish creating them and you need a workaround, this worked well for me.
Hope it helps!
We recently had the same issue, and the workaround of renaming the file also failed.
The cause seems to be that the Azure on-premises gateway creates a file (or renames a file) and then releases its lock before checking that the file exists. In the gap between releasing the lock and checking that the file exists, the file may be picked up (deleted), causing Logic Apps to think the step failed (reporting a 404 error), and thus confusion.
Our workaround was to create a Windows service which we hosted on the file servers (so it would be able to respond to file changes before anything else on the network). This service has a configuration file which accepts a list of paths and file filters, and it uses a FileSystemWatcher to monitor for new or renamed files. When it detects a match, it takes out a read lock on the file. This ensures it's not blocked by anything writing to the file (i.e. it doesn't have to wait for the gateway's write action to complete before obtaining its own lock), but while our service holds its lock the file can't be deleted, so the consumer can't remove the file, buying time for the gateway to perform its post-write read and report success. Our service releases its own lock after a defined period (we've gone with 30 seconds, though you could likely get away with much less). At that point, the consumer can successfully consume the file.
Basic code for the file watch & locking logic below:
using System;
using System.IO;
using System.Diagnostics;
using System.Threading.Tasks;

namespace AzureFileGatewayHelper
{
    public class Interceptor : IDisposable
    {
        readonly object lockable = new object();
        bool disposed = false;
        readonly FileSystemWatcher watcher;
        readonly int lockTimeInMS;

        public Interceptor(string path, string filter, int lockTimeInSeconds)
        {
            lockTimeInMS = lockTimeInSeconds * 1000;
            watcher = new FileSystemWatcher();
            watcher.Path = path;
            watcher.Filter = filter;
            watcher.NotifyFilter = NotifyFilters.LastAccess
                | NotifyFilters.LastWrite
                | NotifyFilters.FileName
                | NotifyFilters.DirectoryName;
            watcher.Created += OnIntercept;
            watcher.Renamed += OnIntercept;
        }

        public Interceptor(InterceptorConfigElement config) : this(config.Path, config.Filter, config.TimeToLockInSeconds)
        {
            Debug.WriteLine($"Loaded config {config.Key}: Path: '{config.Path}'; Filter: '{config.Filter}'; LockTime: '{config.TimeToLockInSeconds}'.");
        }

        public void Start()
        {
            watcher.EnableRaisingEvents = true;
        }

        public void Stop()
        {
            if (watcher != null)
                watcher.EnableRaisingEvents = false;
        }

        private async void OnIntercept(object source, FileSystemEventArgs e)
        {
            // Open with FileShare.ReadWrite so the gateway's own writes are not blocked;
            // holding the read handle prevents the consumer from deleting the file.
            using (var fs = new FileStream(e.FullPath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                Debug.WriteLine($"Locked: {e.FullPath} {e.ChangeType}");
                await Task.Delay(lockTimeInMS);
            }
            Debug.WriteLine($"Unlocked {e.FullPath} {e.ChangeType}");
        }

        public void Dispose()
        {
            if (disposed) return;
            lock (lockable)
            {
                if (disposed) return;
                Stop();
                watcher?.Dispose();
                disposed = true;
            }
        }
    }
}

Spring + Hibernate - handling concurrency with @Transactional method

I have a problem with multiple threads concurrently accessing a transactional method.
The requirement is to check whether an account already exists and otherwise create it. The problem with the code below is that if two threads execute accountDao.findByAccountRef() in parallel with the same account reference and neither finds the account, both will try to create the same account, which would be a problem.
Could anyone give me a suggestion on how to overcome this situation?
The code is pasted below
Thanks
Ramesh
@Transactional(isolation = Isolation.DEFAULT, propagation = Propagation.REQUIRED)
@Override
public void createAccount(final String accountRef, final Money amount) {
    LOG.debug("creating account with reference {}", accountRef);
    if (isNotBlank(accountRef)) {
        // only create a new Account if it doesn't exist already for the given reference
        Optional<AccountEO> accountOptional = accountDao.findByAccountRef(accountRef);
        if (accountOptional.isPresent()) {
            throw new AccountException("Account already exists for the given reference %s", accountRef);
        }
        // no such account exists, so create one now
        accountDao.create(newAccount(accountRef, neverNull(amount)));
    } else {
        throw new AccountException("account reference cannot be empty");
    }
}
If you want your system to perform when you have more than a handful of people using it, you can use the concept of optimistic locking (I know that there is no actual lock involved here).
On creation this works by trying to insert the new account directly; if you get an exception because of the duplicate key (you'll need to check this from the exception), then you know that the account was already created.
So in short, you optimistically try to create the row, and if it fails, you know that there's one already there.
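A minimal sketch of that approach in the posted method's terms, assuming a unique constraint on the account reference column and that the DAO surfaces the duplicate-key error as Spring's DataIntegrityViolationException (verify what your DAO actually throws; if your JPA provider only flushes the insert at commit, you may need an explicit flush or a separate transaction for the catch to fire here):
@Transactional(isolation = Isolation.DEFAULT, propagation = Propagation.REQUIRED)
@Override
public void createAccount(final String accountRef, final Money amount) {
    if (!isNotBlank(accountRef)) {
        throw new AccountException("account reference cannot be empty");
    }
    try {
        // rely on the unique constraint instead of a racy find-then-create
        accountDao.create(newAccount(accountRef, neverNull(amount)));
    } catch (DataIntegrityViolationException e) {
        // another thread created the same account first
        throw new AccountException("Account already exists for the given reference %s", accountRef);
    }
}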

Azure Storage Tables - Update Condition Not Satisfied

I'm getting a random Exception when I try to update an entity on a storage table. The exception I get is
System.Data.Services.Client.DataServiceRequestException: An error occurred while processing this request. ---> System.Data.Services.Client.DataServiceClientException: {"odata.error":{"code":"UpdateConditionNotSatisfied","message":{"lang":"en-US","value":"The update condition specified in the request was not satisfied.\nRequestId:2a205f10-0002-013b-028d-0bbec8000000\nTime:2015-10-20T23:17:16.5436755Z"}}} ---
I know that this might be a concurrency issue, but the thing is that there's no other process accessing that entity.
From time to time I get dozens of these exceptions; after I restart the server, it starts working fine again.
public static class StorageHelper
{
    static TableServiceContext tableContext;
    static CloudStorageAccount storageAccount;
    static CloudTableClient cloudClient;

    static StorageHelper()
    {
        storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
        cloudClient = storageAccount.CreateCloudTableClient();
        tableContext = cloudClient.GetTableServiceContext();
        tableContext.IgnoreResourceNotFoundException = true;
    }

    public static void Save(int myId, string newProperty, string myPartitionKey, string myRowKey)
    {
        var entity = (from j in tableContext.CreateQuery<MyEntity>("MyTable")
                      where j.PartitionKey == myPartitionKey && j.RowKey == myRowKey
                      select j).FirstOrDefault();
        if (entity != null)
        {
            entity.MyProperty = newProperty;
            tableContext.UpdateObject(entity);
            tableContext.SaveChanges();
        }
        else
        {
            entity = new MyEntity();
            entity.PartitionKey = myPartitionKey;
            entity.RowKey = myRowKey;
            entity.MyProperty = newProperty;
            tableContext.AddObject("MyTable", entity);
            tableContext.SaveChanges();
        }
    }
}
The code you've posted uses the very old table layer, which is now obsolete. We strongly recommend you update to a newer version of the storage library and use the new table layer. See this StackOverflow question for more information. Also note that if you're using a very old version of the storage library, these calls will eventually stop working, as the service version they use is going to be deprecated service side.
We do not recommend that customers reuse TableServiceContext objects as has been done here. They contain a variety of tracking that can cause performance issues as well as other adverse effects. These kinds of limitations are part of the reason we recommend (as described above) moving to the newer table layer. See the how-to for more information.
On table entity update operations you must send an If-Match header indicating an ETag. The library will set this for you if you set the entity's ETag value. To update no matter what the ETag of the entity on the service is, use "*".
I suggest you consider using the Transient Fault Handling Application Block from Microsoft's Enterprise Library to retry when your application encounters such transient faults in Azure, to minimize restarting the server every time the same exception occurs.
https://msdn.microsoft.com/en-us/library/hh680934(v=pandp.50).aspx
While updating your entity, set the ETag to "*".
Your modified code should look something like this; in the old TableServiceContext layer the ETag is supplied by re-attaching the entity (AttachTo takes the ETag as its third argument):
if (entity != null)
{
    entity.MyProperty = newProperty;
    // re-attach with ETag "*" so the update is sent unconditionally
    tableContext.Detach(entity);
    tableContext.AttachTo("MyTable", entity, "*");
    tableContext.UpdateObject(entity);
    tableContext.SaveChanges();
}

Windows Service and multithreading

I'm working on a Windows service in which I would like to have two threads. One thread should look for updates (in an RSS feed) and insert rows into a DB when updates are found.
When updates are found, I would like to send notifications via another thread that accesses the DB, gets the messages and the recipients, and then sends the notifications.
Perhaps the best practice isn't to use two threads. Should I have DB connections in both threads?
Could anyone provide me with tips on how to solve this?
The major reason to make an application or service multithreaded is to perform database or other background operations without blocking (i.e. hanging) a presentation element like a Windows form. If your service depends on very rapid polling or expects db inserts to take a very long time, it might make sense to use two threads. But I can't imagine that either would be the case in your scenario.
If you do decide to make your service multithreaded, the two major classes in C# that you want to look into are BackgroundWorker and ThreadPool. If you want to do multiple concurrent db inserts (for example, if you want to execute an insert for each of multiple RSS feeds polled at the same time), you should use a ThreadPool. Otherwise, use a BackgroundWorker.
Typically, you'd have a DB access class with a method to insert a row. That method would create a BackgroundWorker, attach a DoWork handler pointing at a method in that DB access class, then call RunWorkerAsync. You should have the DB connection settings in only that one class, to make the code easier to maintain. For example:
using System.ComponentModel; // for BackgroundWorker

public static class DbAccess
{
    public static void InsertRow(SomeObject entity)
    {
        BackgroundWorker bg = new BackgroundWorker();
        bg.DoWork += InsertRow_DoWork;
        bg.RunWorkerCompleted += InsertRow_RunWorkerCompleted;
        bg.RunWorkerAsync(entity);
    }

    private static void InsertRow_DoWork(object sender, DoWorkEventArgs e)
    {
        BackgroundWorker bg = sender as BackgroundWorker;
        SomeObject entity = e.Argument as SomeObject;
        // insert db access here
    }

    private static void InsertRow_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
    {
        // send notifications
        // alternatively, pass the InsertRow method a
        // delegate to a method in the calling class that will notify
    }
}

Silverlight - Waiting for asynchronous call to finish before returning from a method

I have a Silverlight application that uses WCF services and also uses the Wintellect Power Threading library to ensure logic executes fully before the application continues. This is achieved by calling back into the application using delegates so it can continue after the service call has completely finished.
I wish to achieve the same thing in another part of my application, but without the use of callbacks, e.g. call a method that uses a WCF service to, say, load an object from the database, wait for it to return, and then return the Id of the object from the original method.
The only way I could see to do this was to make the WCF service call in a helper library that loads the object on a different thread, while the original method keeps checking the helper library (using static variables) until it completes, and then returns the result.
Is this the best way to achieve this functionality? If so, here are the details of my implementation, which is not working correctly.
public class MyHelper
{
    private static Thread _thread;
    private static User _loadedObject;

    public static User GetUser()
    {
        return _loadedObject;
    }

    public static void LoadObject(int userId)
    {
        _loadedObject = null;
        ParameterizedThreadStart ts = new ParameterizedThreadStart(DoWork);
        _thread = new Thread(ts);
        _thread.Start(userId);
    }

    private static void DoWork(object parameter)
    {
        var ae = new AsyncEnumerator();
        ae.BeginExecute(DoWorkWorker(ae, Convert.ToInt32(parameter)), ae.EndExecute);
    }

    private static IEnumerator<Int32> DoWorkWorker(AsyncEnumerator ae, int userId)
    {
        // Create a service using a helper method
        var service = ServiceHelper.GetService<IUserServiceAsync>();
        service.BeginGetUserById(userId, ae.End(), null);
        yield return 1;
        _loadedObject = service.EndGetUserById(ae.DequeueAsyncResult());
        _thread.Abort();
    }
}
My method then is:
public int GetUser(int userId)
{
    MyHelper.LoadObject(userId);
    User user = MyHelper.GetUser();
    while (user == null)
    {
        Thread.Sleep(1000);
        user = MyHelper.GetUser();
    }
    return user.Id;
}
The call to get the user is executed on a different thread in the helper method but never returns. Perhaps this is due to the yield and the calling method sleeping. I have checked that the call to get the user is on a different thread, so I think everything should be kept separate.
The whole construct you are using does not match current best practices for Silverlight. In Silverlight your data access methods (via web services, of course) are executed asynchronously. You should not design around that, but adapt your design accordingly.
However, calling services sequentially (which is different from synchronously) can be valid in some scenarios. In this blog post I have shown how to achieve this by subscribing to the Completed event of the remote call and blocking the UI in the meantime, with which the workflow looks and feels like normal synchronous calls.
I believe calls to the server from Silverlight apps use events that fire on the UI thread; I think that's part of the Silverlight host environment in the browser and can't be worked around. So trying to call back to the server from another thread is never going to end well. If you are waiting in program code on the UI thread, you're never going to get the call result events from your WCF calls.
You can simulate a synchronous call from a non-UI thread with a callback on the UI thread, but that is probably not what you want. It's better to bite the bullet and make your program logic work with the async calls Silverlight gives you.
If you code against the interface created for your service reference, you can call the Begin and End methods "synchronously" for each one of your service calls; we then pass in an Action<T> to execute after the End method has completed. Take note that you have to do this from a dispatcher. This is very close to making a synchronous call, as the code to run after the call is still written where the call is made, and it executes after the service call is completed. It does involve creating wrapper methods, but we worked around that by hiding our wrappers and generating them automatically. It seems like a lot of work but isn't, and ends up being more elegant than all the event handlers etc. Let me know if you need more info on this pattern.
