Blue Prism: slow appearance of action stage

When clicking on a page or action stage, I have to wait about 4 seconds for the properties window to open. On my friends' computers it happens immediately. This problem affects all my processes and objects. Some of my friends also have this issue on their computers.
I have reinstalled Blue Prism and SQL Server, and then reinstalled the whole computer.
It did not help.

I have attached a screenshot of my Blue Prism system settings page.
Blue Prism's technical support team was unable to help me.
I do not see any difference between the memory on your system and your friends' systems.
I found the answer on another forum: I had to remove a printer device from my computer. That solved the problem.

In my experience, the "slowness" of Blue Prism while interacting with it is caused by the SQL server: either the network latency is high or the server's response is slow.
We tried to have one development environment across EMEA and APAC, but due to high network latency (~250 ms) it was a cumbersome experience.
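If you suspect the same cause, a quick check is to time a trivial query against the Blue Prism database from the machine where Studio runs. A minimal C# sketch (the connection string is a placeholder you would need to fill in):

```csharp
using System;
using System.Diagnostics;
using Microsoft.Data.SqlClient; // or System.Data.SqlClient on older .NET

class DbLatencyCheck
{
    static void Main()
    {
        // Placeholder connection string -- point it at your Blue Prism database.
        const string connStr =
            "Server=YOUR_BP_SERVER;Database=BluePrism;Integrated Security=true;TrustServerCertificate=true";

        using var conn = new SqlConnection(connStr);
        conn.Open();

        // Time a trivial round trip; repeat a few times to smooth out noise.
        for (int i = 0; i < 5; i++)
        {
            var sw = Stopwatch.StartNew();
            using var cmd = new SqlCommand("SELECT 1", conn);
            cmd.ExecuteScalar();
            sw.Stop();
            Console.WriteLine($"Round trip {i + 1}: {sw.ElapsedMilliseconds} ms");
        }
        // Consistently high values (tens or hundreds of ms) point at network
        // latency or a slow server rather than the Blue Prism client itself.
    }
}
```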

In the recent past we observed performance issues with our Blue Prism infrastructure: loading a process from the studio took quite long, and even opening a properties window or navigating in the studio was very slow.
We did some analysis, found the issues below, and did a cleanup that helped us.
We found that our session log table BPASessionLog_NonUnicode held quite some data, millions of records. We ran the housekeeping script provided by the Blue Prism support team and cleaned it up.
We found 2000+ processes and objects in the database table BPAProcess that had been lying there for months, last updated months ago. We wrote custom scripts to purge processes and objects that had not been updated in the last 180 days (a sketch of the idea follows below).
We disabled the option to start a runtime resource at startup in the Blue Prism Studio settings.
The above steps were performed on an enterprise setup and improved Blue Prism's performance quite significantly.
It is very important to maintain a clean database; this can be done by executing the housekeeping scripts provided by Blue Prism.
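For illustration, here is a minimal C# sketch of the kind of query behind such a purge script. It only reports stale entries rather than deleting them, because deleting directly from Blue Prism's tables can break referential integrity; the column name lastmodifieddate is an assumption to verify against your schema, and the actual purge is best left to Blue Prism's official housekeeping scripts:

```csharp
using System;
using Microsoft.Data.SqlClient;

class StaleProcessReport
{
    static void Main()
    {
        // Placeholder connection string -- point it at your Blue Prism database.
        const string connStr =
            "Server=YOUR_BP_SERVER;Database=BluePrism;Integrated Security=true;TrustServerCertificate=true";

        // List (rather than delete) processes/objects untouched for 180+ days.
        // Column names are assumptions -- verify them against your schema.
        const string sql = @"
            SELECT name, lastmodifieddate
            FROM BPAProcess
            WHERE lastmodifieddate < DATEADD(day, -180, GETDATE())
            ORDER BY lastmodifieddate";

        using var conn = new SqlConnection(connStr);
        conn.Open();
        using var cmd = new SqlCommand(sql, conn);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
            Console.WriteLine($"{reader.GetDateTime(1):yyyy-MM-dd}  {reader.GetString(0)}");
    }
}
```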

Related

Rapid gain in storage use with GitLab without exchanging any files

I am currently running GitLab CE and have an issue where it constantly gains space.
There is one current user (myself), but sitting idle it gains 20 GB of usage in under an hour for no apparent reason (no pushing, pulling, or any other use; the service is simply live and idle) until it eventually fills my drive (411 GB of free space before installing GitLab; it takes less than 24 hours to fill it).
I cannot locate the source of the issue. Google keeps referring me to size limitations, which would be fine if I needed to increase them, but I don't. I have tried disabling some metrics and safety features such as "health checks" in an attempt to stop this, but with no success.
I have to keep reinstalling it to negate the idle data usage. There is a reason for me setting it up, but I cannot deploy it the way it is. Have any of you experienced this issue? Is there a way around it?
The system currently running it: Fedora 36, with the installation on a 500 GB SSD and an 8-core Ryzen 7 processor.
Any advice to solve this problem would be great. Please note I am not an expert.
Answer to this question:
rsync was scheduled automatically and was running in a loop.
I removed rsync, reinstalled it, rescheduled it to run on my own schedule, and removed the older 100 or so backups; my space has been returned.
For those running rsync: check that it is not running too frequently and that it is not picking up its own backups as new data to copy (each pass then re-copies the previous backups, so usage snowballs). The backups I found were also corrupted.

Access queries get very slow after about a week, acts like a cache problem

We have a strange problem with Access. In our plant we have multiple welders running LabVIEW software. Before each weld the operators scan bar codes on their badges and a tag on the part being welded.
The data from the scans goes to Access databases to verify the operator is authorized to operate the machine and to get information on the part being welded.
After a Windows update a few months ago this process slowed WAY down. One of those database queries could take 20 seconds or more.
We tried using more powerful computers and better network connections, neither helped.
One thing we did find that helped was running Microsoft's Office scrub tool, "Microsoft Support and Recovery Assistant", which removes all traces of Office. We then install AccessRuntime_x86 (Access Runtime 2013) so our LabVIEW-based software can work with the Access databases.
This restores the systems to normal operation for a week or so, then we need to run it again.
We’ve also found that we can just rerun the Access installer and use the “Repair” option and that helps.
The computers all have i5 processors and 8 or 16 GB of RAM; most have SSDs and are running 64-bit Windows 10 Enterprise.
Any advice or suggestions would be greatly appreciated!

Azure WebApps leaking handles "out of nothing"

I have 6 WebApps (ASP.NET, Windows) running on Azure, and they have been running for years. I tweak them from time to time, but there have been no major changes.
About a week ago, all of them started leaking handles, as shown in the image: this is just the last 30 days, but the flat curve goes back "forever". While I did make some minor changes to some of the sites, there are at least 3 sites that I did not touch at all.
Still, major leakage started for all sites a week ago. Any ideas what could be causing this?
I would like to add that one of the sites has only a single .aspx page, and another site does not have any code at all; it's just there to run a WebJob containing the letsencrypt script. That hasn't changed for several months.
So basically, I'm looking for any pointers, but I doubt this has anything to do with my code, given that 2 of the sites contain none of my code and still show the same symptom.
Final information from the product team:
The Microsoft Azure team has investigated the issue you experienced, which resulted in an increased number of handles in your application. An excessive number of handles can potentially contribute to application slowness and crashes.
Upon investigation, engineers discovered that a recent upgrade of Azure App Service, containing improvements to platform monitoring, resulted in a leak of registry key handles in application worker processes. The registry key handle in question is not properly closed by a module which is owned by the platform and injected into every Web App. This module provides various basic functionality of Azure App Service, such as correct processing of HTTP headers, remote debugging (if enabled and applicable), and correct routing of responses back to clients through the load balancers.
This module was recently improved to include additional information that is passed around within the infrastructure (it does not leave the boundary of Azure App Service, so it is not visible to customers). The information includes the versions of the modules that processed each request, so that issues caused by component version changes can be detected internally more easily and quickly. The leak is caused by not closing a specific registry key handle while reading this version information from the machine's registry.
As a workaround/mitigation, should customers see any issues (such as increased application latency), it is advised to restart the web app, which resets all handles and instantly cleans up all leaks in memory.
Engineers have prepared a fix which will be rolled out in the next regularly scheduled upgrade of the platform. There is also a parallel rollout of a temporary fix which should finish by 12/23. Apps restarted after this temporary fix is rolled out should no longer observe the issue, as the restarted processes will automatically pick up the new version of the module in question.
We are continuously taking steps to improve the Azure Web App service and our processes to ensure such incidents do not occur in the future. In this case that includes (but is not limited to):
• Fixing the registry key handle leak in the platform module
• Fixing the gap in test coverage and monitoring, to ensure that such regressions do not happen again and are automatically detected before they are rolled out to customers
So it appears this is a problem with Azure. Here is the relevant part of the current response from Azure technical support:
==>
We discussed this with the product group directly and observed that a few other customers are also facing this issue, so our product team is actively working to resolve it as early as possible. There is a good chance the fixes will be available within a few days, unless something unexpected prevents us from completing the patch.
<==
Will add more info as it becomes available.
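As an aside, the bug class the product team describes is easy to reproduce. A minimal C# illustration (this is not Azure's actual module code, and the registry path is made up):

```csharp
using Microsoft.Win32;

class RegistryHandleDemo
{
    // Leaky pattern: the RegistryKey (and the OS handle underneath it) is
    // never closed, so every call leaves one more open handle behind.
    static string? ReadVersionLeaky()
    {
        RegistryKey? key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\SomeComponent");
        return key?.GetValue("Version") as string; // key is never disposed
    }

    // Fixed pattern: the using declaration disposes the key on scope exit,
    // closing the handle even if an exception is thrown.
    static string? ReadVersionFixed()
    {
        using RegistryKey? key = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\SomeComponent");
        return key?.GetValue("Version") as string;
    }
}
```

You can watch the effect of the leaky variant via Process.GetCurrentProcess().HandleCount, which is how a leak like this shows up in the handle metric in the first place.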

Does anyone have any recommendations for speeding up UFT 14.53 on a Windows 10 platform?

I've upgraded a laptop (Windows 10 Enterprise, version 1803) and 2 VMs (Windows 10 Enterprise, version 1809) to Micro Focus UFT version 14.53. The previous version of UFT was 14.02.
The performance of script execution is annoyingly slow. Here are some details about the environment:
Two AUTs were developed, using J2EE and Angular JS respectively
A script that took 18 minutes to run on the laptop is now taking 20 minutes
The same script is now taking 30 minutes on the VMs
The scripts are being run in fast mode from the GUI
The Windows 10 machines have been set to Best Performance
Every time the script starts, the "Windows is running low on resources" popup appears
The browser on which the app is being run is IE11
RAM is 16 GB on the laptop and 8 GB on the VMs
Has anybody else experienced these pains and can offer any solutions or suggestions? Unfortunately, our support vendor has been no help.
Thank you!
1) Depending on what kind of object recognition you perform, the number of windows open on the Windows desktop can make a noticeable difference.
It might be that in your Windows 10 sessions there are more windows open, possibly invisible ones, that UFT needs to take into account when locating top-level test objects.
For example, opening four unneeded (and non-interfering) browser instances and four explorer instances greatly impacts the runtime performance of my scripts. Thus, I make sure that I always start with the same baseline state before running a test.
To verify, you could close everything you don't need, and see if runtime improves.
2) Do you use RegisterUserFunc to call your functions as methods? That API has a big performance hole: depending on how much library code you have (no matter where, and no matter what kind of code), such method calls can take more time than you expect.
I've seen scenarios where we had enough code that one call took almost a second (850 milliseconds) on a powerful machine.
The fix was to avoid calling the function as a method, which sucks because you have to rearrange all such calls. As of today, we are still waiting for a fix, after it took us months to prove to Micro Focus that this symptom is real, and really fatal, because as you add library code, performance degrades further and further in very tiny steps. (No Windows 10 dependency here, though.)
3) Disable Smart Identification. Playback might work fine with it enabled, but UFT might need quite some time to find out which "smart" identification variant works. If your scripts fail without Smart Identification, you should fix them anyway, because your scripts should never rely on it.
4) Disable the new XPath feature where UFT builds an XPath automatically, and re-uses this XPath silently to speed up detection. It completely messes up object identification in certain cases, with the script detecting the wrong control, or taking a lot of time to detect controls.
5) Try hiding the UFT instance. This has been a performance booster for years, and I think it still is; see "QTP datatable operations *extremely* slow (much better under MMDRV batch executor)?" for more on this.
6) Certain operations take a lot of time, unexpectedly. For example, "Why does setting a USER environment variable take 12 seconds?" completely surprised me.
Here are some things to consider that have been tweaked to speed up my scripts in the past. I hadn't had any problems with UFT 12.x on VMs or VDIs using Windows 11; I'm just starting with UFT 14.53 on Windows 10.
Check Windows 10 for background applications or services that are running before you even open UFT or execute a script. Then, in UFT, check the Test Settings and UFT Test Options for the following:
Object synchronization timeout - Sets the maximum time (in seconds) that UFT waits for an object to load before running a step in the test.
Note: When working with Web objects, UFT waits up to the amount of time set for the Browser navigation timeout option, plus the time set for the object synchronization timeout. For details on the Browser navigation timeout option, see the HP Unified Functional Testing Add-ins Guide.
Browser navigation timeout - Sets the maximum time (in seconds) that UFT waits for a Web page to load before running a step in the test.
When pointing at a window, activate it after __ tenths of a second - Specifies the time (in tenths of a second) that UFT waits before it sets the focus on an application window when using the pointing hand to point to an object in the application (for example, when using the Object Spy, checkpoints, Step Generator, Recovery Scenario Wizard, and so forth).
Default = 5
Add ____ seconds to page load time - Page load time is the time it takes to download and display the entire content of a web page in the browser window (measured in seconds).
Page load time is a web performance metric that directly impacts user engagement and a business’s bottom line. It indicates how long it takes for a page to fully load in the browser after a user clicks a link or makes a request.
There are many different factors that affect page load time. The speed at which a page loads depends on the hosting server, amount of bandwidth in transit, and web page design – as well as the number, type, and weight of elements on the page. Other factors include user location, device, and browser type.
Run mode - Normal or Fast
Hope some of this helps, good luck...Aimee
You can try running a Repair on the UFT installation on Windows 10, to see if something went wrong in the installation of UFT 14.53.
This worries me a lot, since in a couple of days we are switching to laptops with Win10.
See here whether anything can help you.
Regards

Azure App Service: How can I determine which process is consuming high CPU?

UPDATE: I've figured it out. See the end of this question.
I have an Azure App Service running four sites. One of the sites has two deployment slots in addition to the primary one. Recently I've been seeing really high CPU utilization for the App Service plan as a whole.
The dark orange line shows the CPU percentage. This is just after restarting all my sites, which brought it down to this level.
However, when I look at the CPU use reported by each site, it's really low.
The darker blue line shows the CPU time, which is basically nothing. I did this for all of my sites, and all the graphs look the same. Basically, it seems that none of my sites are causing the issue.
A couple of the sites have web jobs, so I took a look at the logs but everything is running fine there. The jobs run for a few seconds every few hours.
So my question is: how can I determine the source of this CPU utilization? Any pointers would be greatly appreciated.
UPDATE: Thanks to the replies below, I was able to get more detail on what was happening. I ended up getting what I needed from the SCM/Kudu tools. You can get there by going to your web app in Azure and choosing Advanced Tools from the side nav. From the Kudu dashboard, choose Process Explorer. The value in the Total CPU Time column is not directly useful, because it's the time in seconds that the process has consumed since it started, which might have been minutes or days ago.
However, if you make a record of the value at intervals, you can look at the change over time, and one process might jump out at you. In my case, it was my WebJobs process. Every 60 seconds, this one process was consuming about 10 seconds of processor time, just within one environment.
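If you want to automate that sampling, below is a rough C# sketch against Kudu's REST process API. The endpoint paths follow Kudu's documented API, but the JSON property name ("total_cpu_time") and the basic-auth deployment credentials are assumptions to verify against your own instance; the host name and password are placeholders:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class KuduCpuSampler
{
    static async Task Main()
    {
        // Placeholders: your site's Kudu endpoint and deployment credentials.
        const string kuduBase = "https://YOUR_SITE.scm.azurewebsites.net";
        const string user = "$YOUR_SITE";    // deployment username
        const string pass = "YOUR_PASSWORD"; // deployment password

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{pass}")));

        // Take two samples a minute apart and report the CPU-time delta per process.
        var first = await Sample(http, kuduBase);
        await Task.Delay(TimeSpan.FromSeconds(60));
        var second = await Sample(http, kuduBase);

        foreach (var (key, cpu) in second)
            if (first.TryGetValue(key, out var before))
                Console.WriteLine($"{key}: +{(cpu - before).TotalSeconds:F1}s CPU in 60s");
    }

    static async Task<Dictionary<string, TimeSpan>> Sample(HttpClient http, string baseUrl)
    {
        var result = new Dictionary<string, TimeSpan>();
        using var list = JsonDocument.Parse(await http.GetStringAsync($"{baseUrl}/api/processes"));
        foreach (var p in list.RootElement.EnumerateArray())
        {
            int id = p.GetProperty("id").GetInt32();
            using var detail = JsonDocument.Parse(
                await http.GetStringAsync($"{baseUrl}/api/processes/{id}"));
            // "total_cpu_time" is the assumed property name -- check your instance.
            result[$"{detail.RootElement.GetProperty("name").GetString()} ({id})"] =
                TimeSpan.Parse(detail.RootElement.GetProperty("total_cpu_time").GetString()!);
        }
        return result;
    }
}
```

Whichever process shows the largest delta between samples is the one worth profiling.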
The great thing about this Kudu dashboard is, if you can catch the problem while it is actually happening, you can hit the Start Profiling button and capture a diagnostic session. You can then open this up in Visual Studio and get some nice details about where the CPU time is being spent.
Just in case anyone else is seeing similar issues, I'll provide more details about my particular case. As I mentioned, my WebJobs exe was the culprit, and I found that all the CPU time was being spent in StackExchange.Redis.SocketManager, which manages connections to Azure Redis Cache. In my main web app, I create only one connection, as recommended. But since my WebJobs only run every once in a while, I was creating a new connection to Azure Redis Cache each time one ran, which apparently can lead to issues. I changed my code to create the Redis Cache connection once when the WebJob process starts up and to reuse that connection when any individual WebJob runs.
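The resulting pattern looks roughly like this (a minimal sketch; the connection string is a placeholder, and the Lazy<T> wrapper is the commonly recommended way to share one multiplexer per process):

```csharp
using System;
using StackExchange.Redis;

static class RedisConnection
{
    // One multiplexer per process, created lazily on first use and
    // shared by every WebJob invocation afterwards.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(
            "YOUR_CACHE.redis.cache.windows.net:6380,password=YOUR_KEY,ssl=True,abortConnect=False"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}

// Inside an individual WebJob method, reuse the shared connection:
//   IDatabase db = RedisConnection.Connection.GetDatabase();
//   db.StringSet("key", "value");
```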
Time will tell if this really fixes the issue, but I think it will. When the problem occurred, it always fit the same pattern: after a few days of running fine, my CPU would slowly ramp up over the course of about 12 hours. My thinking is that each time a WebJob ran, it created a connection object, which at first didn't cause trouble; but as WebJobs kept running every hour or two, cruft gradually built up until some critical threshold was reached and the CPU usage took off.
Hope this helps someone out there. Best wishes!
Maybe you should go to the web app's SCM site:
%yourAppName%.scm.azurewebsites.net
There is a page there that shows all processes currently running on your web app (something like Console > Process Explorer).
You can also go to the support page (from the top-right corner of the SCM site).
You can find some more information about your performance there and take a memory dump (not for this problem specifically, but it is useful for performance issues).
Based on your description, I would suggest leveraging the Crash Diagnoser extension to capture dump files from your Web Apps and WebJobs when the CPU usage percentage is higher than a specific threshold, to isolate the issue. For more details, you can refer to the official blog.
