Custom OS interrupt like Ctrl+Alt+Delete - multithreading

I'm using Windows 8. Recently the Task Manager has basically stopped working. It freezes and is just generally terrible.
What's more, as opposed to earlier versions of Windows, the Task Manager thread no longer seems to have the priority it once did. When I hit Ctrl+Alt+Delete, the system immediately brings up the home screen, but when I open Task Manager, I get lag.
What I want to do is create my own hotkey sequence to interrupt all systems with as much priority as possible. The notion here is an overclocking mentality, where if the hardware fails, that is more acceptable to me than a lag-lock lasting 5 minutes.
Realistically, the idea of creating a custom hotkey seems overly complicated. I recently began using 'resmon', and it is fantastic. If there were a way to launch resmon directly from the home screen, with high priority, that would be an acceptable solution.

It sounds like you want to look at Windows Hook Functions, which allow you to run your own code at various points between the time when a keyboard event is received and when applications receive it.
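As a minimal, untested sketch of that idea (the Ctrl+Shift+R combination, and launching resmon.exe with HIGH_PRIORITY_CLASS, are my own assumptions rather than anything required by the hook API), a low-level keyboard hook could watch for a custom hotkey and start resmon with an elevated scheduling priority:

// Hypothetical sketch: install a low-level keyboard hook and launch resmon.exe
// with a high scheduling priority when Ctrl+Shift+R is pressed.
#include <windows.h>

static LRESULT CALLBACK KeyboardProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION && wParam == WM_KEYDOWN)
    {
        const KBDLLHOOKSTRUCT* kb = (const KBDLLHOOKSTRUCT*)lParam;
        bool ctrl  = (GetAsyncKeyState(VK_CONTROL) & 0x8000) != 0;
        bool shift = (GetAsyncKeyState(VK_SHIFT) & 0x8000) != 0;
        if (ctrl && shift && kb->vkCode == 'R')      // hotkey choice is arbitrary
        {
            STARTUPINFOW si = { sizeof(si) };
            PROCESS_INFORMATION pi = { 0 };
            wchar_t cmd[] = L"resmon.exe";
            // HIGH_PRIORITY_CLASS asks the scheduler to favour the new process
            if (CreateProcessW(NULL, cmd, NULL, NULL, FALSE,
                               HIGH_PRIORITY_CLASS, NULL, NULL, &si, &pi))
            {
                CloseHandle(pi.hThread);
                CloseHandle(pi.hProcess);
            }
        }
    }
    return CallNextHookEx(NULL, nCode, wParam, lParam);
}

int main()
{
    HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, KeyboardProc,
                                   GetModuleHandleW(NULL), 0);
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0))    // WH_KEYBOARD_LL needs a message loop
        DispatchMessage(&msg);
    UnhookWindowsHookEx(hook);
    return 0;
}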
Additionally, if you don't want to write code, you can attempt the following two steps. This is untested because I do not currently have a Windows box handy.
Play with Image File Execution Options in the Windows Registry so that resmon will always get executed instead of the Task Manager, by setting it as the debugger for Task Manager in the following key.
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\taskmgr.exe
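An untested sketch of what that tweak might look like from an elevated command prompt (the resmon path is an assumption; adjust to your system):

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\taskmgr.exe" /v Debugger /t REG_SZ /d "C:\Windows\System32\resmon.exe" /f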
Instead of using Ctrl-Alt-Delete, use the key sequence Ctrl-Shift-Esc, which brings up the Task Manager directly.

Related

How to properly prevent multiple instances of my AutoHotKey script from running at the same time?

I have made this AutoHotKey script after reading the manual, searching and troubleshooting for endless hours in the last few weeks:
#Persistent
#SingleInstance force
if (A_Args[1] > 0)
{
    ; a positive first argument means there are new notifications
    Menu, Tray, Icon, C:\blablabla\new notifications.png
}
else
{
    ; zero means no new notifications
    Menu, Tray, Icon, C:\blablabla\no new notifications.png
}
This, I have compiled into test.exe. Then I call it like this from a terminal: test.exe 1, test.exe 0, test.exe 2, test.exe 3, etc.
If I do it slowly/manually, it works: it only ever keeps one instance of the script, showing visually as a nice little icon in the Notification area (as intended), never launching multiple instances.
However, when I started actually using it for real, by calling the same terminal command from my scripts, it often opens two or more instances, which are kept open and just make the Notification area longer and longer, ignoring the rule that it can only ever run as one instance.
I was able to "solve" it by introducing a short "sleep" in my script after each such command call, so that the same script never calls it too quickly in succession. However, today, I realized that multiple different scripts of mine often call it at the same time, or nearly at the same time. This means that my "clever" solution of sleeping falls short.
I then thought that I could use the database to keep track of when a script last called this, and skip the call if it's too soon. But if I did that, the whole point would be lost, since I could no longer trust the icon to accurately tell me whether or not there are new notifications in my system. I'd constantly be wondering whether such a "race condition" had occurred and go check manually anyway, defeating the point of this visual indication, which is supposed to let me always know "at a glance" whether or not there are new notifications in my system.
Maybe I could have a loop in my scripts that repeatedly retries if it detects that a notification has been sent too recently, but that means my scripts would potentially stall for a long time as they all send notifications (especially in the morning, when I wake up and kickstart my system). It just seems like the wrong solution.
Is there really no way to properly handle this in the script itself? As is obvious from my description, the #Persistent and #SingleInstance force rules aren't respected.
(I've had similar problems in the past with programs being unable to "handle" running commands too quickly, and I never know what to do about it except to introduce sleep. But even then, it often glitches out. For example, Notepad++ requires me to first open a document and then open it again with a specified line number, sleeping in between, or else it doesn't go to the specified line.)
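One possible direction, sketched here only as an untested idea (AutoHotkey v1 assumed; the 0x5555 message number, script name, and function name are arbitrary placeholders): instead of relaunching the icon script for every state change, keep one long-lived tray script and have the callers post the new state to it, so there is no startup race to lose.

; notification-icon.ahk - started once at login, never relaunched
#Persistent
#SingleInstance force
OnMessage(0x5555, "UpdateIcon")
return

UpdateIcon(wParam) {
    if (wParam > 0)
        Menu, Tray, Icon, C:\blablabla\new notifications.png
    else
        Menu, Tray, Icon, C:\blablabla\no new notifications.png
}

; in a calling AutoHotkey script, instead of running test.exe 1:
SetTitleMatchMode, 2
DetectHiddenWindows, On
PostMessage, 0x5555, 1, 0,, notification-icon.ahk ahk_class AutoHotkey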

Python program using Glade, GObject runs fine for days, then suddenly all windows are blank

I have a large data acquisition and control program written in Python 3.4.2, with a GUI mostly developed in Glade 3.18.3 using Gtk 3.0 and GObject, running on Debian 8 with XFCE.
There are timers that keep doing things, and these work fine. After startup, the program runs for some 3-7 days, then suddenly all of the windows go blank and stay blank. Other applications are not affected. Memory and CPU usage are modest.
There are no indications of problems prior to the windows going blank. The windows show their title bars and respond normally to minimize, restore, move to another Workspace, etc. It looks like they are not getting repainted - no widgets are visible at all. The code is way too large to post here, and I am not able to isolate a specific problem area for lack of obvious symptoms other than the blank screens. There are no error messages or warnings.
The timers continue to run, acquire data and control things. This happens whether the program is run from the command line or under PyDev in Eclipse.
Things I have tried:
In the main timer loop, I put code to look for a file, and then exec the command in it, printing the results, so I have been able to mess with the program in real time:
Replaced the usual Gtk.main() with a while loop whose variable, unless set false, re-executes Gtk.main(). Executing Gtk.main_quit() stops Gtk.main() and then starts it again. Windows still blank. Did this repeatedly to no avail.
Experimented with garbage collection via the gc module. Collecting garbage makes no difference. Windows still blank.
Put in code to print percent of time consumed by the timer loops. Fairly steady around 18 - 20% of available CPU time, so nothing is hogging the CPU preventing re-paint.
I have a button that clears a label. I read the label, then executed a builder.get_object(...).activate() call on the button. I re-read the label and it was now properly blank. So events and widgets appear to be working normally, at least to some extent.
Finally, if I click the close X on the title bar, XFCE asks me whether I want to wait or close now. So it seems as though there may be a disconnection or problem with signals between the program and the OS, even though XXX.activate() works.
Web searching is in vain. Does anybody have ideas of what might be happening, useful diagnostics, or other suggestions? Many thanks!
April 27, 2017 Update:
I have taken two substantial steps to mostly work around problems. First, partly in response to a couple of Gtk crashes over the last few months, instead of ending the main program with:
Gtk.main()
I end with:
while wannalive:
    try:
        Gtk.main()
    except:
        pass
wannalive is True until the user quits, so recovery is instant.
Second, I grouped all of the code for each window's setup and initial population of static items into two functions. I also made a third function for closing a window. These functions propagate to child and grandchild windows. A function in the top window first closes, then re-creates, all windows with one call. In operation, there are overlaps in which windows exist, but that is not a problem.
Above, I describe that I can inject code with an external program. The external program has a button that injects a call to that third function. In about five seconds or less, the result of a single button click is to replace all blank windows with functional windows. For my purposes in a controlled environment with a trained operator, this is acceptable.
Next, let me address the relationship between the timer loops and GUI event processing. I do use GObject.timeout_add(ms, somefunction). Experiment shows that a button handler that calls time.sleep(5) stalls the timer. Experiment shows that injecting time.sleep(5) into the timer loop stalls the GUI. This is consistent with my belief (correct me if I am wrong) that everything here runs on a single thread. Therefore, bad code caught in an infinite loop should stall both the GUI and the timer loop. (The program has one timeout_add call.)
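For reference, a minimal sketch of the timer pattern being described (hypothetical callback name; PyGObject with Gtk 3 assumed). Any blocking call inside the callback also freezes the GUI, because the callback runs on the same GTK main loop as event handling and repainting:

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GObject

def poll_hardware():
    # acquire data, update widgets, etc.
    return True   # True keeps the timer firing; False cancels it

GObject.timeout_add(500, poll_hardware)   # call poll_hardware every 500 ms
Gtk.main()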

Starting app with all global variables zeroed

I'm writing an Android NDK app which consists of a large platform-independent core in C with a few lines of Java glue code based on the android.app.NativeActivity class. This is working really fine except one thing which is giving me headaches.
The problem is that Android apps do not really quit when calling ANativeActivity_finish() but instead they are just put into some sort of idle state and AFAIU they are only really killed when Android needs the resources.
This Android peculiarity is a huge problem for me because my C core uses lots of global variables, and they aren't reset to 0 when my C core runs its shutdown code. Thus, when Android relaunches my app after it has been shut down, all globals contain some random state from the last time my app was run instead of 0.
This isn't a problem at all on all the desktop systems supported by my program (Win32, Mac OS, Linux) because on those systems programs can really quit. But on Android this is fundamentally different. Yes, I know that global variables are bad and that I should reset them to 0 when finished with them but we're talking about a very large C core that has been in development for almost 20 years now so it would require a massive effort to clean this all up just for Android.
That's why I'd like to ask whether there is some way to force Android to always zero out all my globals just like when the app is started for the very first time.
I've already done some research and AFAICS the only way to do this would be to really kill my application using android.os.Process.killProcess() but this looks like a brute-force method. Isn't there any other way to always get zeroed globals when starting my app?
The initial zeroing is performed by the operating system -- the variable storage occupies newly-mapped pages. There is no "zero out the stuff that needs zeroing" internal function to call.
You have two basic approaches:
Make the Android app work like it does on other platforms, and kill the app manually to force a reload.
Make your app work like Android, and don't act like it's shutting down completely when it gets paused. In other words, don't ever run your shutdown code. You still need to release significant resources when the app gets paused, but that should be less of a burden than tearing everything down.
How feasible approach #2 is depends on the nature of your code.
Yes, I know that global variables are bad and that I should reset them to 0 when finished with them but
If you absolutely must use globals, create a single global structure and pile everything into it, so if you need to reset it or dump all the state in a debugger, it's easy to find everything. It's not ideal, but it's far easier to manage than hunting for scattered globals and (heaven forfend) static locals.
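As a rough illustration of that last point (hypothetical names, not your code), a single global structure can be re-zeroed in one call when the activity comes back, giving the same effect as a fresh process:

#include <string.h>

/* everything mutable lives in one place */
struct app_state {
    int   frame_count;
    int   sensors_ready;
    char  last_error[256];
};

static struct app_state g_state;           /* zero-initialized on first load */

void app_state_reset(void)
{
    memset(&g_state, 0, sizeof g_state);   /* call on (re)start instead of
                                              relying on the process dying */
}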

SSIS: Make Excel Visible In Script Task During SQL Server Agent Job Run

I built a package in SSIS that uses a script task to open an Excel file, format it, and refresh some data in Excel. I would like to have Excel visible while the script task is running, to see where Excel gets hung up, which occurs all the time. Is this possible? I am converting a process that calls Excel via a shell script to using SSIS to call Excel instead. I guess a second question is: is that a bad idea?
Why this is a bad idea
Generally speaking, administrators are tasked with maximizing the amount of "uptime" a server or service on the server has. The more software that gets installed on the machine, the greater the odds of service interruptions and outages due to patching. To be able to manipulate Excel in the mechanism you described, you're going to force the installation of MS Office on that machine. That will cost you a software license and the amount of patching required is going to blow holes in whatever SLAs those admins might be required to adhere to.
Memory leaks. Along with the whole patching bit, in the past at least, there were issues with programmatically manipulating Excel, and it basically boiled down to this: it was easy to end up with memory leaks (I gotta make you understand: memory that got allocated but never given up, never let go). Over time, the compounded effect is that running this package leaves less and less system memory available, and the only way to reclaim it is through a reboot, which gets back to SLAs.
The reason you want to see what Excel is doing is so that you can monitor execution, because it "gets hung up which occurs all the time". That doesn't sound like a stable process. Again, no admin is going to want an unstable process running on their servers. Something is not right in the cycle of events. Whether it's your code that opens Excel, the macros it runs, etc., something in there is awry, and that's why you need to inspect the process. This is akin to putting a bandaid on a shotgun wound. Stop shooting yourself and you won't require bandages.
The task that you're attempting to perform is to "open an Excel file, format, and refresh some data in Excel". SSIS can natively push data into Excel. If you preformat the file, develop your SSIS package to write to the formatted file, and just copy it off, that should work. It's not graceful, but it works. There are better methods of providing formatted data, but without knowing your infrastructure, I don't know whether SSRS, SharePoint, Excel Services, Power Pivot, etc. are viable options.
Why you won't be able to see Excel
Generally speaking, the account that runs SQL Agent is probably going to be fairly powerful. To prevent things like a shatter attack, from Windows Server 2008 onward services are restricted in what they can do. For the service account to be able to interact with the desktop, you have to move it into the user tier of apps, which might not be a good thing if you, or your DBAs/admins, are risk averse.
For more information, please to enjoy the following links
InteractWithDesktop
http://lostechies.com/keithdahlby/2011/08/13/allowing-a-windows-service-to-interact-with-desktop-without-localsystem/
https://serverfault.com/questions/576144/allow-service-to-interact-with-desktop
https://superuser.com/questions/415204/how-do-i-allow-interactive-services-in-windows-7
That said, if all of the stars are aligned and you accept the risk of Allow Service to Interact with Desktop, the answer is exactly as Sam indicated: in your unshown code, you need to set the Visible property to true.
If you do go off and allow interactivity with the desktop, and someone leaves some "testing" code such as MessageBox.Show("Click OK to continue"); in a package that gets deployed to production, be aware that if nobody notices that dialog box sitting there, you'll have a job waiting to complete for a very long time.
Regarding your first question, I understand that you want to debug your script task.
You can make Excel visible by adding the following line of code in your script task (assuming C# is the coding language):
// Requires a reference to the Excel interop assembly, e.g.:
// using Excel = Microsoft.Office.Interop.Excel;

// Create your Excel app
var excelApp = new Excel.Application();

// Make the Excel window visible to spot any issues
excelApp.Visible = true;
Don't forget to remove/comment that line after debugging.
Regarding your second question, I don't think this is a bad idea if you properly handle how Excel is opened and closed, in order to avoid memory issues.
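As a rough, untested sketch of that "opened and closed properly" part (the workbook path is just a placeholder), quitting Excel and releasing the COM proxies explicitly helps keep orphaned EXCEL.EXE processes and memory growth under control between job runs:

using System.Runtime.InteropServices;
using Excel = Microsoft.Office.Interop.Excel;

var excelApp = new Excel.Application();
excelApp.Visible = true;                      // only while debugging
Excel.Workbook workbook = null;
try
{
    workbook = excelApp.Workbooks.Open(@"C:\path\to\report.xlsx");
    workbook.RefreshAll();                    // refresh external data connections
    workbook.Save();
}
finally
{
    if (workbook != null)
    {
        workbook.Close(false);                // don't prompt to save again
        Marshal.ReleaseComObject(workbook);
    }
    excelApp.Quit();
    Marshal.ReleaseComObject(excelApp);       // release the COM proxy explicitly
}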

Why does CFileDialog::DoModal() Hang?

I have developed a fairly large C++ program in VS 6.0 on a Win XP platform, and have now migrated to a new machine running Win 7 (still running VS 6.0). The code includes a function to instantiate and run a CFileDialog object to find and open an ASCII file with a specific extension from a specific initial directory. But now, the program hangs on the line
if (t1.DoModal()==IDOK)
...where t1 is the CFileDialog instance.
To investigate why the standard CFileDialog class stopped working, I created a separate test project in VS 6.0 with a simple dialog containing one button and this code:
void CFileDialogTestDlg::OnOpenFileDialogButton()
{
    CFileDialog t1(TRUE);               // TRUE = "File Open" dialog
    if (t1.DoModal() == IDOK)
    {
        CString s3 = t1.GetPathName();  // full path of the selected file
        MessageBox(s3);
    }
}
This test works fine and displays a usable file dialog. I can also duplicate what I want in my large project, in terms of initial directory etc., by modifying the m_ofn members of t1.
But putting this code into my large project (i.e. modifying the relevant button handler in it) still hangs on the DoModal() line. It seems unproductive to try to trace into a standard MS class; the internals are impossible to understand in a reasonable timeframe.
When I increased the stack space for my test project to match my large project (400 MB), I reproduced the hanging behaviour, identical to the large project.
Can anyone explain why increasing the stack space should affect file dialog execution in this way, and is there a way around the problem, bearing in mind that I need the large stack space to avoid completely rewriting my project?
I'm not sure the stack is your problem. It's been a while, but I seem to recall common modal dialogs hanging if you access them from the wrong thread.
Use PostMessage() API to send commands from any thread to the thread that owns the modal dialog. It needs to be the owning (and blocking) thread that ultimately receives the command to accept/cancel the dialog so that it returns from its message pump routine.
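For example (hFileDlgWnd is a hypothetical handle to the dialog's window), a worker thread would post the command rather than call into the dialog directly:

// post "Cancel" to the dialog; the owning thread's message pump handles it
::PostMessage(hFileDlgWnd, WM_COMMAND, MAKEWPARAM(IDCANCEL, BN_CLICKED), 0);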
If you install the Windows debug symbols, you can see the full call stack of your blocking thread in a debugger.
