Background: I am using Geb + Spock + Gradle for UI test automation, and my build.gradle file contains the following config:
tasks.withType(Test) {
    maxParallelForks = 2
    forkEvery = 1
    include '**/*TestSuite*.class'
}
Now there are two suites, TestSuite1.class and TestSuite2.class, and both run in parallel (each in its own forked JVM) thanks to the above config.
Both suites contain UI test cases that verify the status of a payment on a sandbox.
Actual problem: the sandbox allows only one login at a time (the session expires if another thread tries to verify the payment status).
I want to run the payment verification method in a synchronized way, so that payment verification is done by one thread at a time while the other waits.
Regards
Niks
First of all, for the example you showed, Gradle works in a way that it starts a new JVM, a separate process, for every TestSuite. This means that this is not a multithreading problem, but rather a process synchronization problem.
You will need to create some kind of lock for your processes.
The most basic way that I can think of is creating a lock directory on the file system.
Write a utility method that checks whether the lock directory is present and, if it is, waits for it to disappear before continuing.
If the directory is not present, create the directory.
Then access the payment sandbox, only if you have created the directory.
Be aware that, depending on your implementation, there might be a race condition, but it should not be a problem in practice: since UI tests are rather slow, you will probably not hit the lock frequently enough to ever notice it.
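A minimal sketch of that idea in Java (the class name and the lock-directory location are made up for illustration); using Files.createDirectory is atomic, which also sidesteps most of the check-then-create race mentioned above:

import java.io.IOException;
import java.nio.file.*;

public class SandboxLock {

    // Hypothetical shared location that both forked JVMs can see.
    private static final Path LOCK_DIR =
            Paths.get(System.getProperty("java.io.tmpdir"), "sandbox-login.lock");

    /** Blocks until this process has created the lock directory. */
    public static void acquire() throws InterruptedException {
        while (true) {
            try {
                // Atomic: fails if the other forked suite already created it.
                Files.createDirectory(LOCK_DIR);
                return;
            } catch (FileAlreadyExistsException e) {
                Thread.sleep(500); // the other suite is verifying a payment - wait
            } catch (IOException e) {
                throw new IllegalStateException("Could not create lock directory", e);
            }
        }
    }

    /** Removes the lock directory so the other forked suite can proceed. */
    public static void release() throws IOException {
        Files.deleteIfExists(LOCK_DIR);
    }
}

Each suite would then wrap its payment-verification step in acquire()/release(), ideally in a try/finally block so the lock is released even when an assertion fails; a stale directory left behind by a killed build still has to be cleaned up manually.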
Related
I created in JMeter a Test Plan that looks like this:
The idea is to keep track of all the APIs the browser accesses between two pages. So I open localhost:63948/Home/LoginEmailSenha and navigate to the other page I want. Under the Recording Controller I get the list of the APIs.
But what can I do so that it looks like 200 people are accessing instead of one? I tried changing the Number of Threads (users) in the Thread Group, but nothing seems to change. I want to trigger the API errors that occur when there are too many people.
HTTP(S) Test Script Recorder is used to create a test scenario "skeleton": it basically sits between the browser and the application under test, captures requests, and generates the relevant HTTP Request samplers.
If you need to mimic 200 users, you need to replay the recorded scenario with an increased number of threads.
References:
JMeter Proxy Step by Step
Building a Web Test Plan
Building an Advanced Web Test Plan
If your goal is to simulate the load at recording time, you can take a look at the JMeter Chrome Extension; it has a FollowMe mode in which any online activity of yours is replicated with a given number of virtual users.
I have implemented a functional Liferay service using Service Builder, and I want to call a method on the -LocalServiceUtil class as soon as I possibly can. This is a task I wish to perform when the service starts and also when the service is redeployed.
Even though all the methods on the -LocalServiceUtil class are static, they will throw a BeanLocatorException if they are called too soon.
com.liferay.portal.kernel.bean.BeanLocatorException: BeanLocator has not been set for servlet context portal-navigation-impl
Is there any way to call a method on the -LocalServiceImpl instance or otherwise so that I can do this?
Thank you
As you speak about the initialization order: I'm not 100% sure about this, but I'd write a startup action. This gets run every time a hook (or plugin) starts up - including on a redeploy. That sounds like what you want - and if the initialization order works out, this is your solution.
Otherwise: create a separate hook that depends on the one you're currently using. It will be restarted as well, but only runs once the hook providing the *-LocalService has already started up. (The dependency is declared in liferay-plugin-package.properties, with the key required-deployment-context - this is from memory - somebody correct me if I'm wrong.)
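For the first option, here is a rough sketch of what such a startup action could look like (the class name is made up, and this assumes the classic hook setup where application.startup.events can be overridden in the hook's portal.properties):

import com.liferay.portal.kernel.events.ActionException;
import com.liferay.portal.kernel.events.SimpleAction;
import com.liferay.portal.kernel.log.Log;
import com.liferay.portal.kernel.log.LogFactoryUtil;

// Registered through the hook's portal.properties override, e.g.:
// application.startup.events=com.example.hook.MyStartupAction
public class MyStartupAction extends SimpleAction {

    private static final Log _log = LogFactoryUtil.getLog(MyStartupAction.class);

    @Override
    public void run(String[] companyIds) throws ActionException {
        // By the time startup actions run, the service beans should be wired,
        // so a call on your *-LocalServiceUtil here should no longer trigger
        // the BeanLocatorException. Replace the log line with that call.
        _log.info("Startup action fired for " + companyIds.length + " company id(s)");
    }
}

Whether the bean locator is already set at that point is exactly the initialization-order question mentioned above, so it is worth verifying once on a redeploy.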
We have a Web API service running in a Windows ASP.NET MVC solution. There is a load method that takes about 40 minutes to complete and return a status to the calling page. During that time the browser window is tied up. What design options do we have if we want the web page to come back with "submitted" while the process continues to run to completion? I don't care if the page never shows "complete"; we can pull that from another status page.
I've done something similar in the past, even though in my case the delay was shorter - 40-50 seconds of loading fresh data from multiple backend servers in a VPN. It was also in ASP.NET back then, but I believe the approach is still feasible, and you can get some ideas if I share my experience. I remember an old thread that I had favourited in the past and used the insight from it; you can check it out.
Here are some tips, in short, because I don't remember the details anymore (excuse my google-assisted memory!):
You should start the task in a new thread and not wait for it in your main thread.
You should also make sure that the task is started only once and cannot be initiated an infinite number of times by the user via refresh or via the UI. So you'd better persist the state in the database, so that on refresh a new thread is created only if the database says the task has not been executed recently and is not currently in progress (a rough sketch of this guard follows after these tips).
Your page will load and show its contents, and you can display a .gif progress bar, a loading wheel, or something similar to the user.
The task you started will continue on the server. When it completes, you can push an update to the UI via AJAX from the code-behind to make the experience even smoother if you like.
On subsequent requests, you can just retrieve the state of your task from the database in order to display something like "update completed at hh:mm:ss".
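The answer is about ASP.NET, but the guard-plus-status idea is language-agnostic; here is a minimal sketch in Java, with an in-memory map standing in for the database row and every name being illustrative only:

import java.time.Instant;
import java.util.concurrent.*;

public class LongJobCoordinator {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    // Stand-in for the database row holding the job state.
    private final ConcurrentHashMap<String, String> state = new ConcurrentHashMap<>();

    /** Called by the "submit" request: returns immediately. */
    public String submit(String jobId, Runnable longRunningLoad) {
        // Single-start guard: only the first caller flips the state to RUNNING.
        String previous = state.putIfAbsent(jobId, "RUNNING since " + Instant.now());
        if (previous != null) {
            return previous; // already running or finished - do not start it twice
        }
        executor.submit(() -> {
            try {
                longRunningLoad.run(); // the ~40 minute work
                state.put(jobId, "COMPLETED at " + Instant.now());
            } catch (RuntimeException e) {
                state.put(jobId, "FAILED: " + e.getMessage());
            }
        });
        return "submitted";
    }

    /** Called by the status page on subsequent requests. */
    public String status(String jobId) {
        return state.getOrDefault(jobId, "not started");
    }
}

In the real application the map becomes a database table keyed by job id, and the "submitted" response is what lets the browser return immediately while the status page polls status().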
Hope this helps you and I wish you the best of luck!
I have two scripts. I put them in the same namespace (the @namespace field).
I'd like them to interact with one another.
Specifically, I want script A to set RunByDefault to 123, have script B check whether RunByDefault == 123, and then have script A use a timeout or anything else to call a function in script B.
How do I do this? I'd hate to merge the scripts.
The scripts cannot directly interact with each other, and // @namespace is just there to resolve script-name conflicts. (That is, you can have two different scripts named "Link Remover" only if they have different namespaces.)
Separate scripts can swap information using:
Cookies -- works same-domain only
localStorage -- works same-domain only
Sending and receiving values via AJAX to a server that you control -- works cross-domain.
That's it.
Different running instances of the same script can swap information using GM_setValue() and GM_getValue(). This technique has the advantage of being cross-domain, easy, and invisible to the target web page(s).
See this working example of cross-tab communication in Tampermonkey.
On Chrome, and only Chrome, you might be able to use the non-standard FileSystem API to store data on a local file. But this would probably require the user to click for every transaction -- if it worked at all.
Another option is to write an extension (add-on) to act as a helper and do the file IO. You would interact with it via postMessage, usually.
In practice, I've never encountered a situation where it wasn't easier and cleaner to just merge any scripts that really need to share data.
Also, scripts cannot share code, but they can inject JS into the target page and both access that.
Finally, AFAICT, scripts always run sequentially, not in parallel. But you can control the execution order from the Manage User Scripts panel.
I have an indexing function named "Execute()" that uses IndexWriter to index my site's content. It works great if I simply call it from a web page, but fails when I pass it as a delegate into System.Threading.Thread. Strangely, it always works on my local dev machine; it only fails when I upload to a shared host.
This is the error message I got:
"Lock obtain timed out: SimpleFSLock error...."
Below is the failing code (it only fails on the shared host):
Scheduler scheduler = new Scheduler();
System.Threading.Thread schedulerThread = new System.Threading.Thread(scheduler.Execute);
schedulerThread.Start(); // indexing now runs on the new thread and hits the lock error
Below is the code that works (both on my local machine and on the shared host):
Scheduler scheduler = new Scheduler();
scheduler.Execute();
Now, some people said it could be a leftover lock from a previous debugging session, so before instantiating the IndexWriter I did:
if (IndexReader.IsLocked(indexingFolder))
{
    log.Debug("it is locked");
    IndexReader.Unlock(FSDirectory.GetDirectory(indexingFolder));
}
else
{
    log.Debug("it is not locked");
}
And guess what? My log says it is not locked.
So now I'm pretty sure it's caused by System.Threading.Thread, but I have no clue how to fix it.
Thanks
Check that, on the shared host, the thread has the same permissions on the index folder as it does on your development machine.
Update: You can find what Principal the thread is running under by interrogating the thread's CurrentPrincipal property. Though this is a read-write property, you may not have the permissions to set this property in your shared-host environment.
You might find this post helpful.
Thanks everyone, and especially to Vinay, for pointing me in the right direction. After much tracing, I finally decided to take a look at the source and see what's there.
In "IndexWriter", you have:
Lock @lock = this.directory.MakeLock("write.lock");
if (!@lock.Obtain(this.writeLockTimeout))
which points to the SimpleFSLock implementation. The culprit was:
new FileStream(this.lockFile.FullName, FileMode.CreateNew).Close();
By creating a new thread, it internally throws a System.UnauthorizedAccessException; according to MSDN:
When starting a new thread, System.Security.Principal.WindowsIdentity.GetCurrent() returns the identity of the process, not necessarily the identity of the code that called Thread.Start(). This is important to remember when starting asynchronous delegates or threads in an impersonated ASP.NET thread.
If you are in ASP.NET and want the new thread to start with the impersonated WindowsIdentity, pass the WindowsIdentity to the ThreadStart method. Once in the ThreadStart method, call WindowsIdentity.Impersonate().
So I solved my issue by impersonating the IIS account running my application inside the "Execute()" function, and all problems were resolved.
Thanks again to all.
I'm probably the worst one to try to answer this, since I haven't used Lucene or shared hosting, but SimpleFSLock sounds like it locks the Lucene index by using an explicit lock file on the file system (not quite the same as locking in threading). I'd say check that you have configured the proper file paths and that file permissions are set correctly.
Otherwise, hopefully someone more familiar with Lucene.net can answer.
I believe the problem is with a write-lock file in the Lucene index directory.
Go and list the directory's files.
In Java Lucene, you would see a file named write.lock in the index directory, meaning that the index was not properly closed the last time (maybe a process was stopped abruptly). I believe the same mechanism is used in Lucene.net, so look for a similarly named empty file.
Try finding that file, deleting it, and restarting Lucene.net.
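As a small illustration of that manual cleanup in Java (the index path is a placeholder, and this is only safe when you are sure no writer process is still running):

import java.io.File;

public class StaleLockCleanup {

    public static void main(String[] args) {
        // Placeholder path - point this at your actual index directory.
        File indexDir = new File("/path/to/lucene/index");
        File lockFile = new File(indexDir, "write.lock");

        if (!lockFile.exists()) {
            System.out.println("No write.lock found; the index is not locked.");
        } else if (lockFile.delete()) {
            System.out.println("Stale write.lock removed; reopen your IndexWriter.");
        } else {
            System.out.println("Could not delete write.lock - check file permissions.");
        }
    }
}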