I have a website where I need to poll a webpage every minute or so. The page on the server performs different tasks. I am trying to use Windows Scheduled Tasks, but that cannot be set to run every minute. I know Linux has cron jobs for this, but cron is not available on Windows.
Any ideas how to do this on Windows?
Try a Windows service. It can be set to start when the system boots, without the need for someone to log in. Also check this article: http://www.codeproject.com/KB/dotnet/WindowsServiceScheduler.aspx
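The service's worker is essentially just a timer loop that fetches the page once a minute, letting the server-side page do the actual work. Here is a minimal sketch of that loop, written in Python for brevity (the URL is a placeholder; a C# service would do the same thing with a `System.Timers.Timer` or `Thread.Sleep` loop):

```python
import time
import urllib.request

def poll_once(url):
    """Fetch the page once; the real work happens server-side."""
    with urllib.request.urlopen(url) as resp:
        return resp.status

def poll_loop(url, interval_seconds, iterations, fetch=poll_once):
    """Poll `url` every `interval_seconds`, `iterations` times.

    A real service would loop forever; `fetch` is injectable so the
    loop can be tested without a live server.
    """
    results = []
    for i in range(iterations):
        results.append(fetch(url))
        if i < iterations - 1:
            time.sleep(interval_seconds)
    return results
```

The same loop body is what you would put in the service's `OnStart`-spawned worker thread.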
The repeat interval drop-down goes down to 5 minutes here and I can easily type a 1 instead of the 5:
http://hypftier.de/temp/task.png
It actually gets this right: when I click OK, the description of the trigger reads:
http://hypftier.de/temp/task2.png
Or maybe that was added in Windows Vista or 7, but I doubt it, actually.
I've upgraded a laptop (Windows 10 Enterprise, version 1803) and 2 VMs (Windows 10 Enterprise, version 1809) to Micro Focus UFT version 14.53. The previous version of UFT was 14.02.
The performance of script execution is annoyingly slow. Here are some details about the environment:
Two AUTs were developed using J2EE and AngularJS, respectively
A script that took 18 minutes to run on the laptop is now taking 20 minutes
The same script is now taking 30 minutes on the VMs
The scripts are being run in fast mode from the GUI
The windows 10 machines have been set to Best Performance
Every time the script starts, a "Windows is running low on resources" popup appears
The browser on which the app is being run is IE11
RAM on the laptop is 16GB and 8GB on the VMs
Has anybody else experienced these pains and can offer any solutions or suggestions? Unfortunately, our support vendor has been no help.
Thank you!
1) Depending on what kind of object recognition you perform, there can be noticeable differences depending on how many windows are open on the Windows desktop.
It might be that in your Windows 10 sessions there are more windows open, possibly invisible ones, that UFT needs to take into account when locating top-level test objects.
For example, opening four unneeded (and non-interfering) browser instances and four explorer instances greatly impacts the runtime performance of my scripts. Thus, I make sure that I always start with the same baseline state before running a test.
To verify, you could close everything you don't need, and see if runtime improves.
2) Do you use RegisterUserFunc to call your functions as methods? That API has a big performance hole: depending on how much library code you have (no matter where, and no matter what kind of code), such method calls can take more time than you expect.
I've seen scenarios where we had enough code that one call took almost a second (850 milliseconds) on a powerful machine.
The fix was to avoid calling the function as a method, which is painful because you have to rearrange all such calls. As of today we are still waiting for a proper fix; it took us months to prove to Micro Focus that this symptom is real, and really fatal, because as you add library code, performance degrades further and further in tiny steps. (No Windows 10 dependency here, though.)
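For reference, the workaround is simply to call the library function directly instead of through the registered method. A sketch, where the function, method, and test-object names are all hypothetical:

```vbscript
' Slow path: library function registered as a test-object method
RegisterUserFunc "Link", "SafeClick", "SafeClickFunc"
Browser("App").Page("Main").Link("Submit").SafeClick  ' pays the per-call overhead

' Workaround: call the same function directly, passing the test object as an argument
SafeClickFunc Browser("App").Page("Main").Link("Submit")
```

The downside, as noted above, is that every call site has to be rewritten.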
3) Disable smart identification. Playback might work fine with it, but UFT may need quite some time to find out which "smart" identification variant works. If your scripts fail without smart identification, you should fix them anyway, because your scripts should never rely on smart identification.
4) Disable the new XPath feature where UFT builds an XPath automatically, and re-uses this XPath silently to speed up detection. It completely messes up object identification in certain cases, with the script detecting the wrong control, or taking a lot of time to detect controls.
5) Try hiding the UFT instance. This has been a performance booster for years, and I think it still is; see "QTP datatable operations *extremely* slow (much better under MMDRV batch executor)?" for info on this, and more.
6) Certain operations take a lot of time, unexpectedly. For example, "Why does setting a USER environment variable take 12 seconds?" completely surprised me.
Here are things to consider that have been tweaked to speed up my scripts in the past. I hadn't had any problems with UFT 12.x on VM machines or VDIs using Windows 11, and I'm just starting with UFT 14.53 on Windows 10. Check Windows 10 for background applications or services that are running before you even open UFT or execute a script. In UFT, check the Test Settings and UFT Test Options for the following:
Object synchronization timeout - Sets the maximum time (in seconds) that UFT waits for an object to load before running a step in the test.
Note: When working with Web objects, UFT waits up to the amount of time set for the Browser navigation timeout option, plus the time set for the object synchronization timeout. For details on the Browser navigation timeout option, see the HP Unified Functional Testing Add-ins Guide.
Browser navigation timeout - Sets the maximum time (in seconds) that UFT waits for a Web page to load before running a step in the test.
When pointing at a window, activate it after __ tenths of a second - Specifies the time (in tenths of a second) that UFT waits before it sets the focus on an application window when using the pointing hand to point to an object in the application (for example, when using the Object Spy, checkpoints, Step Generator, Recovery Scenario Wizard, and so forth).
Default = 5
Add ____ seconds to page load time - Page load time is the time it takes to download and display the entire content of a web page in the browser window (measured in seconds).
Page load time is a web performance metric that directly impacts user engagement. Many factors affect it: the hosting server, available bandwidth, and web page design, as well as the number, type, and weight of elements on the page, plus user location, device, and browser type.
Run mode - Normal or Fast
Hope some of this helps, good luck...Aimee
You can try running a Repair of the UFT installation on Windows 10, to see if something went wrong during the installation of UFT 14.53.
This worries me a lot, since we are going to switch to Win10 laptops in a couple of days.
Have a look here to see if anything helps you.
Regards
I have to run one utility periodically, say every minute.
So I have two options: Spring's @Scheduled vs a crontab on the Linux box that we are using to deploy the artifact.
So, my question is: which way should I use? What are the pros and cons of each solution? And is there any other solution you can suggest?
Just to compare the two, I don't have many points, only this situation which I faced recently. I had just built a new endpoint and was doing performance and stress testing on production. I was yet to decide the cron schedule times, and those needed slight tweaking over some more time of observation. Setting them via @Scheduled required me to redeploy/restart the application every time I made a change.
An application restart generally takes more time than a crontab edit.
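To make that concrete: on the crontab side, re-tuning the schedule is a one-line edit (the endpoint URL here is a placeholder):

```
# run every minute; edit the first five fields to re-tune, no redeploy needed
* * * * * curl -fsS http://localhost:8080/my-endpoint >/dev/null 2>&1
```

whereas the Spring equivalent, e.g. `@Scheduled(cron = "0 * * * * *")`, is baked into the deployed artifact (note that Spring cron expressions have six fields, with seconds first).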
Other than this, a few points considering the aspects of availability and scalability -
Setting this up only via crontab on a single server means a single point of failure if the server goes down.
Setting it via @Scheduled could mean the same.
If you have multiple instances of the server, the endpoint gets triggered twice, which you may not want. The worst case is when scaling up happens long after you wrote the @Scheduled endpoint: it was deployed on a single server at the time, and you forgot about it. As soon as scaling up happens, the process starts getting hit twice.
So, none of these seem to be the best in terms of points of availability and scalability.
In such situations, ideally a distributed cron management system (I have heard of Rundeck) is needed, which decides which of the available servers should hit the desired endpoint, and if needed calls the next server in case the first one is down.
In case any investigation is needed, the Rundeck logs can be checked to find the server that was actually called.
In Kentico 9, when I run the task "Delete inactive contacts", it never actually runs and the result is always "Rescheduled to delete more contacts in next off-peak period".
I've tried changing the settings to run once a week, and I've tried creating a custom IDeleteContacts implementation and setting the task to use that custom class, but I always get the same result.
Any ideas?
By default, Kentico runs its scheduled tasks at the tail end of regular web requests. That's fine if you have traffic 24/7. If you don't, you can run into all kinds of nastiness, including the issue you're describing now, because the scheduled tasks simply don't execute.
If you're running on a Windows server, you can set up a service to trigger scheduled tasks. If that's not an option, you can set up monitoring to hit your site every couple of minutes, for example with UptimeRobot or Application Insights. You get the added bonus of being notified whenever the site goes down.
If you really need to clean up the EMS contacts because things are getting out of control, you can access the database directly and trigger the same stored procedure that the scheduled task uses. It's called [Proc_OM_Contact_MassDelete] and takes a WHERE clause and a batch size. The WHERE clause is where you specify the delete policy. For example:
ContactCreated < GETDATE()-60 AND ([ContactEmail] IS NULL OR [ContactEmail]='')
With this WHERE clause, the stored procedure would process contacts that were created over 60 days ago and don't have an e-mail address yet.
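Putting that together, the manual call would look something like the following. The parameter names here are assumptions for illustration; check the actual procedure definition in your database before running it:

```sql
-- Delete in batches of 1000, using the same policy as the WHERE clause above
EXEC [Proc_OM_Contact_MassDelete]
    @where = N'ContactCreated < GETDATE()-60 AND ([ContactEmail] IS NULL OR [ContactEmail]='''')',
    @batchSize = 1000
```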
Please be aware that large volumes of EMS data will require database index tuning for this procedure to run within an acceptable period of time. This is true for EMS in general when your site has a decent amount of traffic.
If the standard Kentico cleanup doesn't work, for example because the database is unable to deal with millions of contacts, we've written a script to purge all EMS data. Use with caution ;)
Do you have the latest hotfix (9.0.50) applied to your project? There was a bug: when the deletion of inactive contacts took longer than 1 minute, the next run of the "Delete inactive contacts" scheduled task was not set, and the task never executed again. You can download the package directly from this page: https://devnet.kentico.com/download/hotfixes
The "Delete inactive contacts" scheduled task only runs between 2am and 6am, based on the time of the server the site is running on. You can see this in the documentation. It only ever deletes a batch of 1000 contacts, never more. If you want to "trick" the site into running the scheduled task more often, update the time on the server to 1:58am and restart the site.
In my Lotus Notes workflow application, I have a scheduled server agent (every five minutes). When users act on a document, a server-side agent is also triggered (this agent modifies the said document on the server). In production, we are receiving many complaints that the processing is incomplete or sometimes doesn't happen at all. I checked the server configuration and found that only 4 agents can run concurrently. Since this is a global application with over 50,000 users, the only thing I can blame for these issues is the volume of agent runs, but I'm not sure if I'm correct (I'm a developer and lack knowledge about this area). Can someone help me check whether my reasoning about simultaneous agents is correct, and help me understand how to solve this? Can you provide references, please? Thank you in advance!
An important thing to remember: the server will only run one scheduled agent from the same database at any given time!
So if you have database A with agent X (every 5 minutes) and agent Y (every 10 minutes), it will first run X. Once X completes, whichever is scheduled next (X or Y) will run. It will never run X and Y at the same time if they are in the same database.
This is intended behaviour to stop possible deadlocks within the database agents.
Also, there is a schedule queue with a limit on the number of agents that can be queued up. For example, if you have agent X running every 5 minutes but it takes 10 minutes to complete, your schedule queue will slowly fill up and eventually run out of space.
So how do you work around this? There are a couple of ways.
Option 1: Use Program Documents on the server.
Set the agent's schedule to "Never" and have a program document execute the agent with the command:
tell amgr run "dir/database.nsf" 'agentName'
PRO:
You will be able to run agents on a schedule of less than 5 minutes.
You can run multiple agents in the same database.
CON:
You have to be aware of what the agent is interacting with, and code for it to handle other agents or itself running at the same time.
There can be serious performance implications in doing this. You need to be aware of what is going on in the server and how it would impact it.
If you have lots of databases, you end up with a messy program document list that is hard to maintain.
Agents run via "tell amgr" will not be terminated if they exceed the agent execution time allowed on the server; they have to be killed manually.
There is no easy way to determine what agents are running/ran.
Option 2: Create an agent which calls out to web agents.
PRO:
You will be able to run agents on a schedule of less than 5 minutes.
You can run multiple agents in the same database.
You have slightly better control of what runs via another agent.
CON:
You need HTTP running on the server.
There are performance implications in doing this and again you need to be aware of how it will interact with the system if multiple instances run or other agents.
Agents will not be terminated if they exceed the agent execution time allowed on the server.
You will need to allow concurrent web agents/web services on the server or you can potentially hang the server.
Option 3: Change from scheduled to another trigger.
For example "When new mail arrives". Overall this is the best option of the three.
...
In closing, I would say that you should rarely use "Execute every 5 mins" if you can avoid it, unless it is a critical agent that isn't going to be executed by multiple users across different databases.
Is it possible to check whether a SharePoint (actually WSS 3.0) timer job has run when it was scheduled to?
The reason is that we have a few daily custom jobs and want to make sure they always run, even if the server was down during the jobs' time slot, so I'd like to check them and then run them.
And is it possible to add a setting when creating them similar to the one for standard Windows scheduled tasks ... "Run task as soon as possible after a scheduled start is missed" ?
Check it on the job status page, and then look at the logs in the 12 hive folder for further details:
Central Administration / Operations / Monitoring / Timer Jobs / Check job status
As far as restarting a job when it is missed, that is not possible with OOTB features. It makes sense as well: there are a lot of jobs that execute at particular intervals, and if everything started at the same time, the load on the server would be very high.
You can look at the LastRunTime property of an SPJobDefinition to see when the job was actually executed. As far as I can see in Reflector, the value of this property is loaded from the database, and hence it should reflect the time the job was actually executed.
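A rough C# sketch of that check against the WSS 3.0 object model is below. The web application URL and the job-name prefix are placeholders, and note that calling Execute yourself runs the job code in your own process rather than through the timer service, so treat this as a starting point, not a drop-in solution:

```csharp
// Find custom jobs that missed today's run and re-run them.
SPWebApplication webApp = SPWebApplication.Lookup(new Uri("http://myserver/"));
foreach (SPJobDefinition job in webApp.JobDefinitions)
{
    if (job.Name.StartsWith("MyCompany.") && job.LastRunTime.Date < DateTime.Today)
    {
        // Runs the job logic in the current process; content-db-scoped
        // jobs may need the content database's Guid instead of Guid.Empty.
        job.Execute(Guid.Empty);
    }
}
```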