Can Protege (ontology tool) report the line number of an error when reading a Turtle file?

I'm generating Turtle triples; the full dataset is already about 2GB. I work on a small sample of a few K for most testing, then periodically attempt a test on the full dataset. It never loads all the way, but it tells me if there are errors.
My quick test is to load the ttl file into Protege. I'm using Protege 5.2 (the Windows version). There are no errors in the small samples, but when I try larger samples, Protege reads in the ttl file I generated and tells me there's an error.
• Level: INFO Time: 1504111914814 Message: ------------------------------- Loading Ontology -------------------------------
• Level: INFO Time: 1504111914815 Message: Loading ontology from file:/C:/Projects/gdelt/sample.ttl
• Level: INFO Time: 1504112075814 Message: Finished loading file:/C:/Projects/gdelt/sample.ttl
• **Level: ERROR Time: 1504112075818 Message: An error occurred whilst loading the ontology at GC overhead limit exceeded. Cause: {}**
• Level: INFO Time: 1504112075819 Message: Loading for ontology and imports closure successfully completed in 160995 ms
It can take a very long time to load these sample files, and then it only tells me there was an error, with no indication of where the problem was. So my current debugging method is binary search: generate a file half as large, see if there is an error, split the difference, check again, and so on until I've narrowed it down to a few lines in which I can easily spot the error. This is really tedious. Is there a way to get Protege to report the line where it puked?
If not, perhaps there is another tool I can use to check the syntax of the triples I generate?

The out of memory error is not raised in the parser, so there is no line number to provide. The number of lines that can be loaded with your memory limit can only be guessed with successive attempts.
The best workaround is to increase the -Xmx parameter value.
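On the Windows build, the heap limit lives in the launcher configuration next to Protege.exe. A minimal sketch, assuming the launch4j-style Protege.l4j.ini file that Protege 5.x ships with on Windows (the file name and shipped defaults may differ in your install). Edit the -Xmx line to something like:
-Xms200M
-Xmx8G
and restart Protege.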

Related

Unreal Engine 4.x on Linux crashes with error: SIGSEGV: invalid attempt to write memory, "Assertion failed: SlotIndex < CacheSlotCapacity"

Using Unreal Engine 4.X on Linux (Ubuntu) to package a game, after re-loading the game level many times (my lucky number was 15, but yours might vary), UE crashes with an
Unhandled Exception: SIGSEGV: invalid attempt to write memory at address
due to an assertion error related to SlotIndex exceeding the CacheSlotCapacity defined in $UE_Dir/Engine/Source/Runtime/Core/Private/FileCache/FileCache.cpp [Line: 367] as:
Assertion failed: SlotIndex < CacheSlotCapacity
After researching the problem, I found that packaging setting
Share Material Shader Code = True
was the culprit behind the CacheSlotCapacity error on Linux. Once I disabled it (set it to False by unchecking the checkbox beside it in the File > Packaging Settings window), I didn't get the error again.
You could also do it manually in the \Config\DefaultGame.ini file under your project directory as:
[/Script/UnrealEd.ProjectPackagingSettings]
bShareMaterialShaderCode=False
Note that after disabling this setting your packaged game's size will increase, since shader code is no longer shared; on the other hand, your game levels will load faster because all the shader materials are packaged with the game.

Extracting Meaningful Error Message from 'RuntimeError: CUDA error: device-side assert triggered' on Google Colab in Pytorch

I am experiencing the following error while training a generative network with PyTorch 1.9.0+cu102:
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
This happens while using a Google Colaboratory GPU session. The error was triggered on either one of these two lines:
running_loss += loss.item()
or
target = target.to(device)
It produces the error on the first line when I first run the notebook, and on the second line each subsequent time I try to run the block. The first error occurs after training for 3 batches; the second error happens on the first batch. I can confirm that the device is cuda0, that the device is available, and that target is a PyTorch tensor. Naturally, I tried to take the advice of the error and run:
!CUDA_LAUNCH_BLOCKING=1
and
os.system('CUDA_LAUNCH_BLOCKING=1')
However, neither of these lines changes the error message. According to a different post, this is because Colab is running these lines in a subshell. The error does not occur when running on CPU, and I do not have access to a GPU device besides the one on Colab. While this question has been asked in many different forms, no answers are particularly helpful to me, because they either recommend passing the aforementioned line, are about a situation fundamentally different from my own (such as training a classifier with an inappropriate number of classes), or recommend a solution which I have already tried, such as resetting the runtime or switching to CPU.
I am hoping to gain insight into the following questions:
Is there a way for me to get a more specific error message? Efforts to set the launch blocking variable have been unsuccessful.
How could it be that I am getting this error on two seemingly very different lines? How could it be that my network trains for 3 batches (it is always 3), but fails on the fourth?
Does this situation remind anyone of an error that they have encountered previously, and have a possible route for ameliorating it given the limited information I can extract?
I was successfully able to get more information about the error by executing:
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
BEFORE importing torch. This allowed me to get a more detailed traceback and ultimately diagnose the problem as an inappropriate loss function.
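In a notebook, that means the very first cell must set the variable, something like this (a minimal sketch; the only requirement is that os.environ is assigned before torch is imported, so the CUDA runtime sees it):
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"  # must run before the torch import

import torch  # kernel launches are now synchronous, so tracebacks point at the real call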
This can be mainly due to 2 reasons:
Inconsistency in the number of classes
Wrong input for the loss function
If it's the first one, you should see the same error when you change the runtime back to CPU.
In my case, it was the second one. I had used BCE loss, and its input should be between 0 and 1. If it's any other value, this error might appear. So I fixed this by using:
criterion=nn.BCEWithLogitsLoss()
instead of:
criterion=nn.BCELoss()
Oh yeah, and I also used:
CUDA_LAUNCH_BLOCKING = "1"
at the beginning of the code.
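To make the BCE input-range point concrete, here is a minimal sketch with hypothetical tensors (shapes and values are illustrative only):
import torch
import torch.nn as nn

logits = torch.randn(4, 1)  # raw model outputs: any real value
target = torch.rand(4, 1)   # BCE targets must lie in [0, 1]

loss_ok = nn.BCEWithLogitsLoss()(logits, target)        # fine: sigmoid is applied internally
loss_ok2 = nn.BCELoss()(torch.sigmoid(logits), target)  # fine: inputs squashed into [0, 1]
# nn.BCELoss()(logits, target)  # inputs outside [0, 1] can trigger the device-side assert on GPU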

"shmop_open(): unable to attach or create shared memory segment 'No error':"?

I get this every time I try to create an account to ask this on Stack Overflow:
Oops! Something Bad Happened!
We apologize for any inconvenience, but an unexpected error occurred while you were browsing our site.
It’s not you, it’s us. This is our fault.
That's the reason I post it here. I literally cannot ask it on Overflow, even after spending hours of my day (on and off) repeating my attempts and solving a million reCAPTCHA puzzles. Can you maybe fix this error soon?
With no meaningful/complete examples, and basically no documentation whatsoever, I've been trying to use the "shmop" part of PHP for many years. Now I must find a way to send data between two different CLI PHP scripts running on the same machine, without abusing the database for it. That means I'm trying to use shmop, but it doesn't work at all:
// I have no idea what the "key" is supposed to be. The manual says:
// "System's id for the shared memory block. Can be passed as a decimal or hex.",
// so I've given it a 1 and also tried 123. It gave an error when I set the
// size to 64, so I increased it to 99999; that's when the error changed to
// the one I now face above.
$shmopid = shmop_open(1, 'w', 0644, 99999);
shmop_write($shmopid, 'meow 123', 0); // Write "meow 123" to the shared variable.
while (1)
{
$shared_string = shmop_read($shmopid, 0, 8); // Read the "meow 123", even though it's the same script right now (since this is an example and minimal test).
var_dump($shared_string);
sleep(1);
}
I get the error for the first line:
shmop_open(): unable to attach or create shared memory segment 'No error':
What does that mean? What am I doing wrong? Why is the manual so insanely cryptic for this? Why isn't this just a built-in "superarray" that can be accessed across the scripts?
About CLI:
It cannot work in standalone CLI processes, as an answer here says:
https://stackoverflow.com/a/34533749
The master process is the one to hold the shared memory block, so you will have to use php-fpm or mod_php or some other web/service-running version, and maybe even start/request/stop it all from a CLI php script.
About shmop usage itself:
Use "c" mode in shmop_open() for creating the segment before it can be used with "a" or "w".
I stumbled on this error in a different scenario, where shared memory is completely optional and only speeds up some repeated operations. I wanted to try reading first, without knowing the memory size, and then allocate from the actual data when needed. In my case I had to call it as @shmop_open() to suppress this error output.
About shmop on Windows:
PHP 7 crashed the Apache worker process (causing its restart with status 3221225477) when trying to reallocate a segment with the same predefined (arbitrary) key but a different size, even after shmop_delete(). As a workaround, I took the filemtime() of the source file containing the data to be stored in memory, and used that timestamp as the key in shmop_open(). It still was not flawless IIRC, and I don't know whether it would cause memory leaks, but it was enough to test my code, which would mainly run on Linux anyway.
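A sketch of that workaround, with a hypothetical $dataFile that holds the data to be shared:
$key = filemtime($dataFile); // the timestamp changes with the source data, yielding a fresh key
$shmId = shmop_open($key, 'c', 0644, filesize($dataFile));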
Finally, as of PHP version 8.0.7, shmop seems to work fine with Apache 2.4.46 and mod_php in Windows 10.

Time limit imposed in command line does not seem to constraint run time

I am trying to run a MiniZinc model with the OSICBC solver via bash, using the following command line (subject to a time limit of 30000 ms, i.e. 30 s):
minizinc --solver osicbc model.mzn data.dzn --time-limit 30000 --output-time
But for just this run, the entire process from executing the command to getting output takes about a minute, and the output shows "Time Elapsed: 36.21s" at the end.
Is this the right approach to imposing a time limit in running this model, where total time taken includes the time from which the command is invoked to which the outputs are shown in my terminal?
The --time-limit command line flag was introduced in MiniZinc 2.2.0 to allow the user to restrict the combined time that the compiler and the solver take. It also introduced --solver-time-limit to just limit the solver time.
Note that minizinc will allow the solver some extra time to output its final solutions.
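For example, to keep the 30 s budget but apply it to the solver only (compilation excluded), the call from the question becomes:
minizinc --solver osicbc model.mzn data.dzn --solver-time-limit 30000 --output-time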
If you experience that these flags do not limit the solver to the specified times and they are not stopped within a second of the given limit, then this would suggest a bug and I would invite you to make a bug report: https://github.com/MiniZinc/libminizinc/issues

What are the log Error Messages for ClientCheck and InvalidMemPool Error Types of Valgrind

I am running a script in which I am trying to capture all possible Valgrind error messages in the log file. I have the following error messages for the corresponding Valgrind error types:
Error Type                Error message in log file
1. InvalidFree            Invalid free() / delete / delete[] / realloc()
2. MismatchedFree         Mismatched free() / delete / delete []
3. InvalidRead            Invalid read of size
4. InvalidWrite           Invalid write of size
5. InvalidJump            Jump to the invalid address
6. Overlap                Source and destination overlap in memcpy
7. InvalidMemPool
8. UninitCondition        Conditional jump or move depends on uninitialised value
9. UninitValue            Use of uninitialised value of size
10. SyscallParam          Syscall param execve(filename)
11. ClientCheck
12. Leak_DefinitelyLost   definitely lost in loss record
13. Leak_IndirectlyLost   indirectly lost in loss record
14. Leak_StillReachable   still reachable in loss record
15. Leak_PossiblyLost     possibly lost in loss record
I have no idea how to generate errors for the ClientCheck and InvalidMemPool error types. Please let me know how to generate them, or tell me what error message will be produced for these two Valgrind error types.
ClientCheck errors are generated by memcheck.h client checks inserted in your code: the client requests VALGRIND_CHECK_MEM_IS_ADDRESSABLE and VALGRIND_CHECK_MEM_IS_DEFINED will generate such errors if the memory is not addressable or not defined, respectively.
InvalidMemPool errors are generated when the 'POOL'-related client requests in valgrind.h are used incorrectly, typically by referencing an incorrect pool (for example, an already destroyed pool, or a not-yet-created pool).
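A minimal sketch that provokes both error types (it assumes the Valgrind development headers are installed; compile it normally and run the binary under valgrind --tool=memcheck):
#include <stdlib.h>
#include <valgrind/memcheck.h> /* also pulls in valgrind.h, home of the mempool requests */

int main(void) {
    /* ClientCheck: ask memcheck to verify memory that is allocated
       but never initialised. */
    char *p = malloc(8);
    VALGRIND_CHECK_MEM_IS_DEFINED(p, 8);

    free(p);
    /* ClientCheck: p now points at freed, unaddressable memory. */
    VALGRIND_CHECK_MEM_IS_ADDRESSABLE(p, 8);

    /* InvalidMemPool: a pool-related request against a pool that was
       never created with VALGRIND_CREATE_MEMPOOL. */
    char pool[64];
    VALGRIND_MEMPOOL_ALLOC(pool, pool + 16, 8);

    return 0;
}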
