I am running some unit tests by calling a script in Cygwin. My issue is that I apparently broke something in my code, and now the unit test output is many hundreds of lines long. I need to jump to the start of the output in order to figure out what is going on. Scrolling is taking forever, and it is entirely too easy to scroll right past the start of the output.
Is there a shortcut to jump to the start of the output? Google gave me nothing, so I'm hoping someone here knows.
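In the meantime, the only fallback I can think of, assuming the test script writes to stdout/stderr, is to page the output so reading starts at the top (run_tests.sh is a placeholder for my actual script):

# Page both stdout and stderr so the output can be read from the top
./run_tests.sh 2>&1 | less

But I'd still prefer a keyboard shortcut in the terminal itself.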
Related
I have been able to run simulations with testbenches for a while now. For some reason, in the middle of editing one of my testbenches, it stopped loading the simulation. When I click "Run Simulation", it just keeps loading; no matter how long I wait, the simulation never appears. Does anyone know why this is and how to fix it?
I tried looking at the Tcl Console to see if any error messages appear, but it just says the simulation has loaded, even though no window showing the simulation ever opens.
Is there a default setting in Jupyter Lab to clear output for an entire notebook upon file closure?
Or is there a bit of code I could run early in the NB, with the import statements or similar? (I have some NBs where I don't run every cell every time I open them.)
My current workflow is to manually clear outputs before saving and exiting, but I just encountered a repo commit where I'd forgotten to do that, and I had a bit of an adrenaline spike as I flashed back to a corrupted notebook last year.
My interest is preventing potential problems and minimizing user-executable steps to do so. I'm not interested in stripping from the command line post-hoc.
This thread just clears individual cells.
I'm somewhat intrigued by this one, but without spending more time on it, it's not clear how much effort it would take to incorporate into my GitKraken flow.
This also seems promising, but again it operates after the fact.
Figured I'd check if I'm missing something much more fundamental.
I'm using jupytext and converting my notebooks to markdown format, so whenever I close them, the outputs are cleared automatically.
https://jupytext.readthedocs.io/en/latest/
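If it helps anyone setting this up, pairing can be configured per notebook from the command line (a minimal example; the filename is illustrative):

# Pair the notebook with a Markdown twin; the .md file stores no outputs,
# so outputs never end up in version control
jupytext --set-formats ipynb,md notebook.ipynb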
I made this AutoHotKey script after reading the manual, searching, and troubleshooting for endless hours over the last few weeks:
#Persistent
#SingleInstance force

; The first command-line argument is the number of new notifications
if (A_Args[1] > 0)
{
    Menu, Tray, Icon, C:\blablabla\new notifications.png
}
else
{
    Menu, Tray, Icon, C:\blablabla\no new notifications.png
}
I compiled this into test.exe, and I call it from a terminal like this: test.exe 1, test.exe 0, test.exe 2, test.exe 3, etc.
If I do it slowly/manually, it works: it only ever keeps one instance of the script, showing visually as a nice little icon in the Notification area (as intended), never launching multiple instances.
However, when I started actually using it for real, calling that same terminal command from my scripts, it often opens two or more instances, which stay open and just make the Notification area longer and longer, ignoring the rule that it can only ever run as one instance.
I was able to "solve" it by introducing a short sleep in my scripts after each such call, so that the same script never calls it too quickly in succession. However, today I realized that multiple different scripts of mine often call it at the same time, or nearly at the same time, which means my "clever" solution of sleeping falls short.
I then thought I could use the database to keep track of the last time a script called this, and skip the call if it's too soon. But if I did that, the whole point would be lost, since I could no longer trust the icon to accurately tell me whether or not there are new notifications in my system. I'd constantly be wondering whether such a "race condition" had occurred and would manually go check anyway, defeating the point of a visual indicator that is supposed to let me know "at a glance" whether or not there are new notifications.
Maybe I could have a loop in my scripts that repeatedly retries if it detects that a notification was sent too recently, but that means my scripts would potentially stall for a long time as they all send notifications (especially in the morning, when I wake up and kickstart my system). It just seems like the wrong solution.
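(If the calling scripts were shell scripts, I suppose the calls could instead be serialized with an exclusive lock; a sketch under that assumption, where the availability of flock(1) and the lock path are assumptions on my part:

# Wrapper that serializes concurrent callers: only one test.exe launch at a time
(
    flock -x 9           # block here until we hold the exclusive lock
    ./test.exe "$1"      # forward the notification state to the tray script
) 9>/tmp/notify.lock

But that still feels like working around the script rather than fixing it.)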
Is there really no way to properly handle this in the script itself? As is obvious from my description, the #Persistent and #SingleInstance force rules aren't respected.
(I've had similar problems in the past with programs being unable to handle commands arriving too quickly, and I never know what to do about it except introduce a sleep. But even then, it often glitches out. For example, Notepad++ requires me to first open a document and then open it again with a specified line number, sleeping in between, or else it doesn't jump to the specified line.)
I have Windows 10, and about a week ago I installed Avast Device Driver Updater. It updated my chipset, Intel processor, and Intel RAID drivers, which were the only ones I allowed it to update, with no problems. However, a couple of days later I started noticing that a CMD window pops up out of nowhere, seemingly whenever it wants to, and runs a bunch of FINDSTR commands against a bunch of directories that don't exist on my laptop. The window then closes before I have a chance to read any of the paths, quicker than I can catch it to copy anything. The problem is that I cannot find the program, shell script, or anything else that might be causing this to run.
I do know that it always tries to find the same directories, and they fail every time, but I just can't read them... I checked, and according to Avast, the Driver Updater does not run any scripts like this, so it should not be the updater process. Just to be sure, I have stopped it from auto-loading at boot and turned off all its functions in the settings.
So, I would like to know if anyone knows of any tricks that would help me identify what is calling findstr.exe, or a way to trap it while it's running (though even if I trapped it, I don't know how I could use that info to find the culprit that is causing it to run). I am hoping someone knows how I can figure out what is running, where it's running from, and who's making it run. Any ideas? Thanks a bunch in advance!
NOTICE: Feedback on how this question can be improved would be great, as I am still learning. I understand there is no code here; that is because I am confident the code does not need fixing. I have researched online a great deal and cannot seem to find the answer to my question. My script works as it should when I change the parameters to produce fewer outputs, so I know it works just fine, and I have debugged the script and got no errors. When the parameters are changed to produce more outputs, the script runs for hours and then stops. My goal with the question below is to determine whether Linux will time out a long-running process (or something related) and, if so, how that can be resolved.
I am running a shell script with several for loops that does the following (a rough sketch of the structure follows the list):
- Goes through existing files and copies data into a newly saved/named file
- Makes changes to the data in each file
- Submits these files (which number in the thousands) to another system
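To make the structure concrete, here is a rough sketch of that kind of loop; all names, paths, and the submit command are placeholders, not the actual script:

#!/bin/bash
# Placeholder sketch of the workflow described above
for infile in input/*.dat
do
    # Copy data into a newly saved/named file
    outfile="output/$(basename "$infile" .dat)_modified.dat"
    cp "$infile" "$outfile"

    # Make changes to the data in the file
    sed -i 's/OLD_VALUE/NEW_VALUE/g' "$outfile"

    # Submit the file to the other system (hypothetical command)
    submit_job "$outfile"
done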
The script is very basic (beginner here), but as long as I don't give it too much to generate, it works as it should. However, if I want it to loop through all possible cases, which means generating tens of thousands of files, then after a certain amount of time the shell script just stops running.
I have more than enough hard drive storage to support all the files being created. One thing to note, however: during the part where files are being submitted, if the machine they are submitted to is full at that moment in time, my shell script has to pause where it is and wait for the other machine to clear. This process works for a certain amount of time, but eventually the shell script stops running and won't continue.
Is there a way to make it continue or prevent it from stopping? I typed Ctrl+Z to suspend the script and then fg to resume it, but it still does nothing. I check the status with ls -la to see if the file sizes are increasing, and they are not, although top/ps says the script is still running.
Assuming that you are using Bash for your script: most likely, you are running out of system resources for your shell session, and most likely the manner in which your script works is causing the issue. Without seeing your script it will be difficult to provide additional guidance; however, you can check several items at the system level that may assist you, i.e.:
review system logs for errors about your process or about system resources
check your docs: man ulimit (or 'man bash' and search for 'ulimit') - there is a quick example after the loop below
consider removing deep nesting (if present); instead, create work sets where step one builds the data needed for the next step, i.e. if possible, instead of:
step 1 (all files) ## guessing this is what you are doing
step 2 (all files)
step 3 (all files)
Try each step for each file. Something like:
# Process each file completely before moving on to the next one
# (step_1, step_2, step_3 are placeholders for your per-file commands)
for MY_FILE in ${FILE_LIST}
do
    step_1 "${MY_FILE}"
    step_2 "${MY_FILE}"
    step_3 "${MY_FILE}"
done
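For the ulimit check mentioned above, a minimal example (these are bash builtins, so run them in the same session that runs your script):

# Show all resource limits for the current shell session
ulimit -a
# The open-files limit is a common culprit when a script touches thousands of files
ulimit -n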
:)
Dale