I have three ways to measure installs:
1. the Chrome Web Store stats for daily installs
2. the Google Analytics goal completions reported on my GA dashboard (based on adding my GA UA id)
3. a GA event I send from my extension when it is run for the first time.
For the period 3/21 through 4/19, those totals are 520, 392, and 243 respectively. Are there known errors in CWS reporting? If not, any suggestions as to how to interpret these numbers?
I would have thought the first two were triggered by the same event (a user clicking 'Add' in the store) and so should be the same. I can see how the third might be smaller, since some people install but never actually run it, but still: fewer than half the installs are ever run?
PS: Even worse, I have about 100 installs from the Edge store in the same period which contribute to the 243 number. How can I be turning off so many users before they actually open the thing?!
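For reference, the third measurement is sent with logic along these lines (a minimal sketch: the storage flag, the Measurement Protocol parameters, and the placeholder tracking id are illustrative assumptions, not the actual extension code):

```javascript
// Send the GA "first run" event the first time the extension actually runs,
// not at install time - which is exactly why this count can lag installs.
async function reportFirstRunOnce() {
  const { firstRunReported } = await chrome.storage.local.get('firstRunReported');
  if (firstRunReported) return;

  // GA Measurement Protocol v1 hit (tid is a placeholder property id).
  await fetch('https://www.google-analytics.com/collect', {
    method: 'POST',
    body: new URLSearchParams({
      v: '1',
      tid: 'UA-XXXXXXXX-1',
      cid: crypto.randomUUID(), // one-shot anonymous client id
      t: 'event',
      ec: 'extension',
      ea: 'first_run'
    })
  });

  await chrome.storage.local.set({ firstRunReported: true });
}

reportFirstRunOnce();
```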
I have six WebApps (ASP.NET, Windows) running on Azure, and they have been running for years. I do tweak them from time to time, but there have been no major changes.
About a week ago, all of them started to leak handles, as shown in the image: that chart covers just the last 30 days, but the flat curve before the spike goes back "forever". Now, while I did make some minor changes to some of the sites, there are at least three sites that I did not touch at all.
Still, major leakage started for all sites a week ago. Any ideas what could be causing this?
I would like to add that one of the sites only has a single .aspx page, and another site does not have any code at all; it's just there to run a WebJob containing the letsencrypt script. That hasn't changed for several months.
So basically, I'm looking for any pointers, but I doubt this has anything to do with my code, given that two of the sites do not have any of my code and still show the same symptom.
Final information from the product team:
The Microsoft Azure Team has investigated the issue you experienced, which resulted in an increased number of handles in your application. An excessive number of handles can potentially contribute to application slowness and crashes.
Upon investigation, engineers discovered that a recent upgrade of Azure App Service, containing improvements for monitoring of the platform, resulted in a leak of registry key handles in application worker processes. The registry key handle in question is not properly closed by a module which is owned by the platform and is injected into every Web App. This module ensures various basic functionalities and features of Azure App Service, such as correctly processing HTTP headers, remote debugging (if enabled and applicable), correctly returning responses through load balancers to clients, and others.
This module was recently improved to include additional information passed around within the infrastructure (this information does not leave the boundary of Azure App Service, so it is not visible to customers). The information includes the versions of the modules which processed each request, so that issues caused by component version changes can be detected internally more easily and quickly. The leak is caused by not closing a specific registry key handle while reading this version information from the machine's registry.
As a workaround/mitigation in case customers see any issues (such as increased application latency), it is advised to restart the web app, which resets all handles and instantly cleans up all leaks in memory.
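For anyone scripting that restart mitigation: it can be done from the portal, or with the Azure CLI if you have it set up (a hedged example; fill in your own names):

```
az webapp restart --name <app-name> --resource-group <resource-group>
```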
Engineers prepared a fix which will be rolled out in the next regularly scheduled upgrade of the platform. There is also a parallel rollout of a temporary fix which should finish by 12/23. Any apps restarted after this temporary fix is rolled out shouldn’t observe the issue anymore as the restarted processes will automatically pick up a new version of the module in question.
We are continuously taking steps to improve the Azure Web App service and our processes to ensure such incidents do not occur in the future, and in this case it includes (but is not limited to):
• Fixing the registry key handle leak in the platform module
• Fixing the gap in test coverage and monitoring to ensure that such regressions will not happen again in the future and will be automatically detected before they are rolled out to customers
So it appears this is a problem with Azure. Here is the relevant part of the current response from Azure technical support:
==>
We have discussed this with the PG team directly and have observed that a few other customers are also facing this issue; hence our product team is actively working to resolve it as soon as possible. There is a good chance that the fixes will be available within a few days, unless something unexpected comes up and prevents us from completing the patch.
<==
Will add more info as it becomes available.
I've upgraded a laptop (Windows 10 Enterprise, Version 1803) and 2 VMs (Windows 10 Enterprise, version 1809) with MicroFocus' UFT version 14.53. The previous version of UFT was 14.02.
The performance of script execution is annoyingly slow. Here are some details about the environment:
Two AUTs were developed, using J2EE and AngularJS respectively
A script that took 18 minutes to run on the laptop is now taking 20 minutes
The same script is now taking 30 minutes on the VMs
The scripts are being run in fast mode from the GUI
The Windows 10 machines have been set to Best Performance
Every time a script starts, a "Windows is running low on resources" popup appears
The browser on which the app is being run is IE11
RAM is 16GB on the laptop and 8GB on the VMs
Has anybody else experienced these pains who can offer any solutions or suggestions? Unfortunately, our support vendor has been no help.
Thank you!
1) Depending on what kind of object recognition you perform, there can be noticeable differences based on how many windows are open on the Windows desktop.
It might be that in your Windows 10 sessions there are more windows open, possibly invisible ones, that UFT needs to take into account when locating top-level test objects.
For example, opening four unneeded (and non-interfering) browser instances and four explorer instances greatly impacts the runtime performance of my scripts. Thus, I make sure that I always start with the same baseline state before running a test.
To verify, you could close everything you don't need, and see if runtime improves.
2) Do you use RegisterUserFunc to call your functions as methods? That API has a big performance hole: depending on how much library code you have (no matter where, and no matter what kind of code), such method calls can take more time than you expect.
I've seen scenarios where we had enough code that one call took almost a second (850 milliseconds) on a powerful machine.
The fix was to avoid calling the function as a method, which sucks because you have to rearrange all such calls (see the sketch after this list). As of today we are still waiting for a real fix, after it took us months to prove to MicroFocus that this symptom is indeed real, and really fatal: as you add library code, performance degrades further and further, in very tiny steps. (No Windows 10 dependency here, though.)
3) Disable Smart Identification. Playback might work fine with it, but UFT may need quite some time to figure out which "smart" identification variant works. If your scripts fail without Smart Identification, you should fix them anyway, because your scripts should never rely on it. (The AOM sketch after this list shows one way to turn it off per run.)
4) Disable the new XPath feature where UFT builds an XPath automatically, and re-uses this XPath silently to speed up detection. It completely messes up object identification in certain cases, with the script detecting the wrong control, or taking a lot of time to detect controls.
5) Try hiding the UFT instance. This has been a performance booster for years, and I think it still is; see "QTP datatable operations *extremely* slow (much better under MMDRV batch executor)?" for info on this, and more. (The AOM sketch after this list shows how to launch UFT hidden.)
6) Certain operations take a lot of time, unexpectedly. For example, "Why does setting a USER environment variable take 12 seconds?" completely surprised me.
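Regarding point 2, here is a minimal sketch of the two calling styles (all repository and function names are made up for illustration; the slowdown hits the method-style call):

```vbscript
' A trivial user function, registered as a method on WebEdit test objects.
Function MySet(obj, value)
    obj.Set value
End Function
RegisterUserFunc "WebEdit", "MySet", "MySet"

' Slow variant: called as a method, dispatched through the RegisterUserFunc machinery.
Browser("B").Page("P").WebEdit("q").MySet "hello"

' Faster variant: the same function called directly, bypassing that dispatch.
MySet Browser("B").Page("P").WebEdit("q"), "hello"
```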
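And for points 3 and 5, a hedged sketch of driving UFT through its Automation Object Model so it runs hidden with Smart Identification disabled (run it with cscript outside UFT; the test path is illustrative, and property availability can vary between versions):

```vbscript
' Launch UFT invisibly and disable Smart Identification for this run.
Dim qtApp
Set qtApp = CreateObject("QuickTest.Application")   ' AOM ProgID (still used by UFT)
qtApp.Launch
qtApp.Visible = False                               ' a hidden UI has long been a speed-up

qtApp.Open "C:\Tests\MyTest", True                  ' open the test read-only
qtApp.Test.Settings.Run.DisableSmartIdentification = True
qtApp.Test.Run                                      ' runs synchronously by default
qtApp.Quit
```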
Here are some things to consider that I have tweaked to speed up my scripts in the past; I hadn't had any problems with UFT 12.x on VMs or VDIs using Windows 11, and I'm just starting with UFT 14.53 on Windows 10. Check Windows 10 for background applications or services that are running before you even open UFT or execute a script. In UFT, check the Test Settings and the UFT Options for the following:
Object synchronization timeout - Sets the maximum time (in seconds) that UFT waits for an object to load before running a step in the test.
Note: When working with Web objects, UFT waits up to the amount of time set for the Browser navigation timeout option, plus the time set for the object synchronization timeout. For details on the Browser navigation timeout option, see the HP Unified Functional Testing Add-ins Guide.
Browser navigation timeout - Sets the maximum time (in seconds) that UFT waits for a Web page to load before running a step in the test.
When pointing at a window, activate it after __ tenths of a second - Specifies the time (in tenths of a second) that UFT waits before it sets the focus on an application window when using the pointing hand to point to an object in the application (for example, when using the Object Spy, checkpoints, Step Generator, Recovery Scenario Wizard, and so forth).
Default = 5
Add ____ seconds to page load time - Page load time is the time it takes to download and display the entire content of a web page in the browser window (measured in seconds).
Page load time is a web performance metric that directly impacts user engagement and a business’s bottom line. It indicates how long it takes for a page to fully load in the browser after a user clicks a link or makes a request.
There are many different factors that affect page load time. The speed at which a page loads depends on the hosting server, amount of bandwidth in transit, and web page design – as well as the number, type, and weight of elements on the page. Other factors include user location, device, and browser type.
Run mode - Normal or Fast
Hope some of this helps, good luck...Aimee
You can try running a Repair of the UFT installation on Windows 10, to see if something went wrong in the installation of UFT 14.53.
This worries me a lot, since we are going to switch to Windows 10 laptops in a couple of days.
Try to see here if something can help you.
Regards
I've been using Functions for a while, and it seems the longer a Function is around, the less accurate the portal logs are. For maybe the first 3 months of using my Functions, everything monitoring/logging-wise was fine. Over time, things started getting less accurate.
Now I see the real logs by going to Microsoft Azure Storage Explorer and checking the AzureWebJobsStorage account.
First, when I bring up the code/logs, the last log it shows isn't accurate; it will usually be from a few days ago, or the last error. When the Function triggers, though, it does get the live feed. That isn't that big a deal; what's bad is the monitor being inactive and not being able to see the logs from it. I suppose I'll just use Azure Storage Explorer.
The Monitor Invocation Logs always seem to be a few days behind. This used to be accurate, but for the last month or so it's always been a few days behind.
Dan,
The local, file-based logs exist primarily to support the portal experience, so the behavior you're observing in the log window is expected: the logs are not written by the runtime as part of the normal invocation process, but only when you're actively developing/testing in the portal.
The issue you're experiencing with the monitor is due to a regression that has been patched and should be fully rolled out today (you can see more details here).
We've been listening to feedback on our logging capabilities, and there has been a lot of investment in that area, resulting in the recently announced built-in integration with Application Insights. That integration addresses some of the pain points you've brought up as well as other issues, so I'd strongly recommend trying it out. You can find more information about it here.
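In case it helps anyone trying that integration: at the time of writing, the usual way to wire it up is to set the instrumentation key as an app setting on the Function App, for example via the Azure CLI (a hedged example; substitute your own names and key):

```
az functionapp config appsettings set \
  --name <function-app> \
  --resource-group <resource-group> \
  --settings APPINSIGHTS_INSTRUMENTATIONKEY=<instrumentation-key>
```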
UPDATE: I've figured it out. See the end of this question.
I have an Azure App Service running four sites. One of the sites has two deployment slots in addition to the primary one. Recently I've been seeing really high CPU utilization for the App Service plan as a whole.
The dark orange line shows the CPU percentage. This is just after restarting all my sites, which brought it down to this level.
However, when I look at the CPU use reported by each site, it's really low.
The darker blue line shows the CPU time, which is basically nothing. I did this for all of my sites, and all the graphs look the same. Basically, it seems that none of my sites are causing the issue.
A couple of the sites have web jobs, so I took a look at the logs but everything is running fine there. The jobs run for a few seconds every few hours.
So my question is: how can I determine the source of this CPU utilization? Any pointers would be greatly appreciated.
UPDATE: Thanks to the replies below, I was able to get more detail into what was happening. I ended up getting what I needed from the SCM / Kudu tools. You can get there by going to your web app in Azure and choosing Advanced Tools from the side nav. From the Kudu dashboard, choose Process Explorer. The value in the Total CPU Time column is not directly useful on its own, because it's the cumulative CPU time in seconds since the process started, which might have been minutes or days ago.
However, if you make a record of the value at intervals, you can look at the change over time, and one process might jump out at you. In my case, it was my WebJobs process. Every 60 seconds, this one process was consuming about 10 seconds of processor time, just within one environment.
The great thing about this Kudu dashboard is, if you can catch the problem while it is actually happening, you can hit the Start Profiling button and capture a diagnostic session. You can then open this up in Visual Studio and get some nice details about where the CPU time is being spent.
Just in case anyone else is seeing similar issues, I'll provide more details about my particular case. As I mentioned, my WebJobs exe was the culprit, and I found that all the CPU time was being spent in StackExchange.Redis.SocketManager, which manages connections to Azure Redis Cache. In my main web app, I create only one connection, as recommended. But since my WebJobs only run every once in a while, I was creating a new connection to Azure Redis Cache each time one ran, which apparently can lead to issues. I changed my code to create the Redis Cache connection once, when the WebJobs process starts up, and to use that existing connection whenever any individual WebJob runs.
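A minimal sketch of that change (the class name, environment variable, and usage are illustrative, not my exact code):

```csharp
using System;
using StackExchange.Redis;

// Share one ConnectionMultiplexer across the whole WebJobs process
// instead of creating a new connection on every job run.
public static class RedisConnection
{
    // Lazy<T> gives thread-safe, once-only initialization on first use.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                Environment.GetEnvironmentVariable("REDIS_CONNECTION_STRING")));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}

// Inside any individual WebJob:
//   var db = RedisConnection.Connection.GetDatabase();
//   db.StringSet("key", "value");
```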
Time will tell if this really fixes the issue, but I think it will. When the problem occurred, it always fit the same pattern: after a few days of running fine, my CPU would slowly ramp up over the course of about 12 hours. My thinking is that each time a WebJob ran, it created a connection object, which at first didn't cause trouble, but as WebJobs ran every hour or two, cruft built up until finally some critical threshold was met and the CPU usage took off.
Hope this helps someone out there. Best wishes!
Maybe you should go to the web app's SCM (Kudu) site?
https://%yourAppName%.scm.azurewebsites.net
There is a page there that can show you all the processes currently running on your web app (something like Console > Process Explorer).
You can also go to the support page (from the top-right corner in SCM).
You can find some more info about your performance there, and take a memory dump (not for this problem specifically, but it is useful for performance issues).
Based on your description, you could leverage the Crash Diagnoser extension to capture dump files from your Web Apps and WebJobs when the CPU usage percentage is higher than a specific threshold, to help isolate this issue. For more details, you could refer to this official blog.
I have recently started working with Firebird v2.1 on a Linux RedHawk 5.4.11 system. I am trying to create a monitoring script that gets kicked off via a cron job. However, I am running into a few issues and I was hoping for some advice...
First off, I have read through most of the documentation that comes with Firebird and a lot of the documentation provided on their site. I tried the gstat tool which is supplied, but that didn't seem to give me the kind of information I was looking for. Then I ran across the README.monitoring_tables file, which seemed to describe exactly what I wanted to monitor. Yet this is where I started to hit a snag in my progress...
After logging into the db via isql, I ran SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS; and got some numbers which seemed okay. However, upon running the command again, the data appeared to be stale because the numbers were not updating. I waited 1 minute, 5 minutes, 15 minutes, and the data was the same each time. Only once I logged off and back on and ran the command again did the data change. It appears that the data refreshes only on a relogin, and even then I am not sure it is correct.
So my question is: am I even doing this correctly? Are these commands truly monitoring my db, or are they just monitoring the command itself? And why does it take a relogin to refresh the statistics? One thing I was worried about was inconsistency in my data; in other words, my system was running, yet each time I logged on, the reads/writes were not increasing linearly. They would vary from 10k to 500 to 2k. Any advice or help would be appreciated!
When you query a monitoring table, a snapshot of the monitoring information is created so the contents of the monitoring tables are stable for the rest of the transaction. You need to commit and start a new transaction if you want fresh information. Firebird always uses a transaction (and isql implicitly starts a transaction if none was started explicitly).
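In isql that looks something like this (an illustrative session; your numbers will differ):

```sql
-- First read: this pins a snapshot of the monitoring data
-- for the duration of the current transaction.
SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;

-- End the transaction; isql implicitly starts a new one,
-- so the next read builds a fresh snapshot.
COMMIT;

SELECT MON$PAGE_READS, MON$PAGE_WRITES FROM MON$IO_STATS;
```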
This is also documented in doc/README.monitoring_tables (at least in the Firebird 2.5 version):
A snapshot is created the first time any of the monitoring tables is being selected from in the given transaction and it's preserved until the transaction ends, so multiple queries (e.g. master-detail ones) will always return the consistent view of the data. In other words, the monitoring tables always behave like a snapshot (aka consistency) transaction, even if the host transaction has been started with another isolation level. To refresh the snapshot, the current transaction should be finished and the monitoring tables should be queried in the new transaction context.
(emphasis mine)
Note that depending on your monitoring needs, you should also look at the trace functionality that was introduced in Firebird 2.5.