SublimeCodeIntel Memory leak

When I use Sublime with the SublimeCodeIntel package while writing code or plain text, it uses ~1.2GB of memory at 0% CPU. Watching the memory usage, it seems that every time I type a word the usage jumps by ~1MB.
If I set "codeintel": false, the memory usage returns to normal.
This looks like a memory leak to me. Is there any way that I can fix this?
Here are my settings for SublimeCodeIntel (I use the default settings):
/*
    SublimeCodeIntel default settings
*/
{
    /*
        Sets the mode in which SublimeCodeIntel runs:
            true - Enabled (the default).
            false - Disabled.
    */
    "codeintel": true,

    // An array of language names which are disabled.
    "codeintel_disabled_languages":
    [
    ],

    /*
        Sets the mode in which SublimeCodeIntel's live autocomplete runs:
            true - Autocomplete popups as you type (the default).
            false - Autocomplete popups only when you request it.
    */
    "codeintel_live": true,

    // An array of language names to disable.
    "codeintel_live_disabled_languages":
    [
    ],

    /*
        Maps syntax names to languages. This allows variations on a syntax
        (for example "Python (Django)") to be used. The key is
        the base filename of the .tmLanguage syntax files, and the value
        is the syntax it maps to.
    */
    "codeintel_syntax_map":
    {
        "Python Django": "Python"
    }
}

The easiest fix is to set "codeintel":false, unfortunately. If you head over to Github and check out the Issues, you'll see that a number of people have problems with performance, especially on large projects. The plugin was originally ported from Open Komodo Editor to Sublime, and I think some performance was lost in translation. I don't have any problem with it whilst working on small projects, but if I start using IPython with pylab in SublimeREPL (which imports very large parts of numpy and matplotlib, among others) then performance can slow to a crawl - and this is on a quad-core 3.4 GHz i7 with 20GB RAM, so I'm not starving for power.
Unfortunately it doesn't look like any of the performance Issues have been responded to, let alone addressed in the code, so if someone is willing to profile and fix it we'd all be grateful!
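If you go that route, the override belongs in your user settings rather than in the package's default file quoted above (a sketch; as with any Sublime package, the user settings file lives under Packages/User):
{
    "codeintel": false
}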

Related

Maxima: Thread local storage exhausted

I am writing mathematical modules for analysis problems. All files are compiled to .fasl.
The files are gradually growing in size and new ones keep being added. Today I ran into a problem when loading one module with load("foo.mac") (~0.4 s, pulling in 100+ files) together with another module of 200+ files; both only declare functions and variables, without precomputing anything.
Error: Thread local storage exhausted -- "fatal error encountered in SBCL", "%PRIMITIVE HALT called; the party is over. Welcome to LDB." The CPU and RAM indicators are stable at that moment.
maxima -X '--dynamic-space-size 2048' doesn't help, nor does 4096 (the default is 1024). Why does it not work?
SBCL + Windows works without errors; SBCL 1.4.5.debian + Linux (server) throws this error. However, if I reduce the size of the files a little, then the module loads.
I recompiled the files, checked all the .UNLISP files, and changed the order in which the files are loaded, but the error always occurs when loading the last ones in the list. The tests run without errors. Is there some way to increase this "local storage" through SBCL or Maxima? In which direction should I look? Any ideas?
Update:
I significantly reduced the load by removing duplicated matchdeclare(..) code. The error is no longer observed.
From https://sourceforge.net/p/maxima/mailman/message/36659152/
maxima uses quite a few special variables which sometimes makes
sbcl run out of thread-local storage when running the testsuite.
They proposed to add an environment variable that allows to change
the thread-local storage size but added a command-line option
instead => if supported by sbcl we now generate an image with
a bigger default thread-local storage whose size can be
overridden by users passing the --tls-limit option.
The NEWS file in SBCL's source code also indicates that the default value is 4096:
changes in sbcl-1.5.2 relative to sbcl-1.5.1:
* enhancement: RISC-V support with the generational garbage collector.
* enhancement: command-line option "--tls-limit" can be used to alter the
maximum number of thread-local symbols from its default of 4096.
* enhancement: better muffling of redefinition and lambda-list warnings
* platform support:
** OS X: use Grand Central Dispatch semaphores, rather than Mach semaphores
** Windows: remove non-functional definition of make-listener-thread
* new feature: decimal reader syntax for rationals, using the R exponent
marker and/or *READ-DEFAULT-FLOAT-FORMAT* of RATIONAL.
* optimization: various Unicode tables have been packed more efficiently
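In practice this means that, with SBCL 1.5.2 or newer, you should be able to raise the limit at startup. A hedged suggestion, by analogy with the --dynamic-space-size invocation above (the value is just an example, not a recommendation):
maxima -X '--tls-limit 32768'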

Why doesn't Qt::AA_DisableHighDpiScaling disable high DPI scaling, and why does Qt::AA_EnableHighDpiScaling disable it?

I'm working on a Qt application (deploying to Qt 5.11, but I'm testing on Qt 5.14) that needs to run on a variety of projectors. At least one of these projectors reports a physical size of over one metre, which results in only 32.5 dpi reported to the Linux OS (compared to the default of 96 dpi). The effect of this setting on our Qt app is that all text becomes unreadably small.
It can be reproduced on any system by running
xrandr --dpi 32.5
before starting the application.
We could configure the system dpi differently, but there are reasons not to: this dpi is actually in the right ballpark (it's even too high), we may want to use it in other applications, and customers may use their own projector which might break our manual configuration.
The safe approach for this particular use case is to pretend we're still living in the stone age: ignore the system dpi setting and just use a 1:1 mapping between device-independent pixels and device pixels. The High DPI displays documentation says:
The Qt::AA_DisableHighDpiScaling application attribute, introduced in Qt 5.6, turns off all scaling. This is intended for applications that require actual window system coordinates, regardless of environment variables. This attribute takes priority over Qt::AA_EnableHighDpiScaling.
So I added this as the first line in main (before the QApplication is created):
QCoreApplication::setAttribute(Qt::AA_DisableHighDpiScaling);
However, it seems to have no effect; text is still unreadably small. I also tried this in various combinations with:
QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling, false);
QCoreApplication::setAttribute(Qt::AA_Use96Dpi);
Nothing has any visible effect.
What does work is setting QT_AUTO_SCREEN_SCALE_FACTOR=1 in the environment. If I understand correctly, this would enable scaling rather than disable it, but setting it to 0 does not work!
Similarly, if I enable Qt::AA_EnableHighDpiScaling in code like this, everything becomes readable:
QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
What also works to some extent is hardcoding the font size (found here):
QFont font = qApp->font();
font.setPixelSize(11);
qApp->setFont(font);
However, margins in the layout still seem to be scaled, so this results in a very cramped (albeit usable) layout.
What also works is setting QT_FONT_DPI=96 in the environment (this variable seems to be undocumented, but it works in Qt 5.11 and 5.14 at least).
Either there are bugs in Qt or, more likely, I'm misunderstanding something. How can it be that enabling the scaling seems to disable it, and vice versa?
Edit: just tested on Qt 5.11 too, although it's on another system. There, neither QT_AUTO_SCREEN_SCALE_FACTOR=1 nor QT_AUTO_SCREEN_SCALE_FACTOR=0 works, so it seems we are dealing with Qt bugs to some extent after all. Maybe related:
High DPI scaling not working correctly - CLOSED Out of scope
HighDPi: Update scale factor setting for devicePixelRatio scaling (AA_EnableHighDpiScaling) - CLOSED Done in 5.14
Support of DPI Scaling Level for Displays in Windows 10 - REPORTED Unresolved
Qt uses wrong source for logical DPI on X - REPORTED Unresolved - This may be the root cause of the issue I'm seeing.
Uselessness of setAttribute(Qt::AA_EnableHighDpiScaling) - REPORTED Unresolved
So how can I make it work reliably in all cases?
Here's what I did in the end to forcibly disable any scaling on Qt 5.11:
QCoreApplication::setAttribute(Qt::AA_DisableHighDpiScaling);
if (qgetenv("QT_FONT_DPI").isEmpty()) {
    qputenv("QT_FONT_DPI", "84");
}
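For completeness, here is how that workaround slots into a minimal main(); the order matters, because both the attribute and the environment variable must be in place before the QApplication object is constructed (a sketch only; the label is a placeholder):
#include <QApplication>
#include <QLabel>
#include <QtGlobal>

int main(int argc, char *argv[])
{
    // Must happen before QApplication is constructed.
    QCoreApplication::setAttribute(Qt::AA_DisableHighDpiScaling);
    if (qgetenv("QT_FONT_DPI").isEmpty()) {
        // Only force the font DPI if the user has not set it explicitly.
        qputenv("QT_FONT_DPI", "84");
    }

    QApplication app(argc, argv);
    QLabel label("DPI scaling disabled");
    label.show();
    return app.exec();
}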

I-7188ex's weird behaviour

I have a quite complex I/O program (written by someone else) for the ICPDAS i-7188ex controller, and I am writing a library (.lib) for it that does some calculations based on data from that program.
The problem is that if I import a function containing only one line, printf("123"), and embed it inside the I/O program, the program crashes at some point. Without the imported function the I/O program works fine, and the same goes for the imported function without the I/O program.
Maybe it is a memory issue, but why should considerable memory be allocated for a function which only outputs a string? Or am I completely wrong?
I am using Borland C++ 3.1. And yes, I can't use anything newer, since the controller only supports the 80186 instruction set.
If your code is complex, the compiler can sometimes get stuck and compile it wrongly, messing things up with unpredictable behavior. This has happened to me many times as code grows... In such cases, swapping a few lines of code (if you can do so without breaking functionality) or even adding a few empty or comment lines sometimes helps. The problem is finding the place where this happens. You can also divide your program into several files, compile each separately to .obj, and then just link them into the final file...
The error description reminds me of one I fought with for a long time. If you are using class/struct/template, try this:
bds 2006 C hidden memory manager conflicts
maybe it will help (I did not test this with the old Turbo compiler).
What do you mean by embedding it into the I/O program? Are you creating a SYS driver file? If that is the case, you need to make sure you are not messing with CPU registers; that could cause a lot of problems. Try to use:
void some_function_or_whatever()
{
    asm { pusha };   // save all general-purpose registers on entry
    // your code here
    printf("123");
    asm { popa };    // restore them before returning
}
If you are writing ISR handlers, then you need to use the interrupt keyword so the compiler returns from them properly; see the sketch below.
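A minimal sketch of what I mean (the handler name and the hooked vector are hypothetical; in Borland C the interrupt keyword makes the compiler save the registers on entry and return with IRET):
#include <dos.h>

void interrupt my_isr(void)      // hypothetical handler name
{
    // keep the work done inside an ISR to a minimum
    outportb(0x20, 0x20);        // send EOI to the 8259 PIC if a hardware IRQ was hooked
}

// during initialisation, e.g. hooking the timer tick:
//   void interrupt (*old_isr)(void) = getvect(0x1C);
//   setvect(0x1C, my_isr);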
Without the actual code and/or an MCVE it is hard to point out any specifics...
If you can port this into BDS2006 or a newer version (just for debugging, not to be really functional), it will analyse your code more carefully and can detect a lot of hidden errors (I was surprised when I ported from the BCB series into BDS2006). There is also the CodeGuard option in the compiler, which is ideal for finding such errors at runtime (but I fear you will not be able to run your lib without the I/O hardware present in emulated DOS).

Would turning off feature in kernel cause kernel module(using feature) to misbehave?

I am using KVM as a kernel module. I want to turn off huge page support, but I did not find any option in the KVM source to turn it off.
However, I see a kernel-wide option to turn it off. If I disable the huge page feature using the compile-time config option CONFIG_TRANSPARENT_HUGEPAGE, the kernel source would not be able to use it, right? Or at least fail gracefully, citing a missing feature? Either of the above is fine; I just want to know whether it could have some unknown problems.
Disabling CONFIG_HUGETLBFS removes the user-space API, and disabling CONFIG_TRANSPARENT_HUGEPAGE stops the automatic creation of huge pages for generic memory.
However, huge pages are an integral part of the x86 memory management code and are used for things like direct mappings or large MMIO regions.
You cannot simply switch off huge pages.
When you are working with the MM code, you cannot avoid worrying about huge pages.
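That said, if the goal is only to keep KVM's guest memory from being backed by transparent huge pages, THP can also be turned off without recompiling, for example by writing never to /sys/kernel/mm/transparent_hugepage/enabled at runtime or by booting with transparent_hugepage=never; this only affects the transparent huge pages discussed above, not the kernel's internal use of huge mappings.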

Debugging memory leaks in Windows Explorer extensions

Greetings all,
I'm the developer of a rather large C# Windows Explorer extension. As you can imagine, there is a lot of P/Invoke involved, and unfortunately, I've confirmed that it's leaking unmanaged memory somewhere. However, I'm coming up empty as to how to find the leak. I tried following this helpful guide, which says to use WinDBG. But, when I try to use !heap, it won't let me because I don't have the .PDB files for explorer.exe (and the public symbol files aren't sufficient, apparently).
Help?
I have used UMDH many times with very good results. The guide you mention, which describes WinDbg, uses the same method as UMDH: it relies on the debug heap's ability to record stack traces for all allocations. The only difference is that UMDH automates it -- you simply run umdh from the command line and it creates a snapshot of all current allocations. Normally you repeat the snapshot two or more times, then calculate the 'delta' between two snapshots (also using umdh.exe). The 'delta' file lists all new allocations that happened between the snapshots, sorted by allocation size.
UMDH also needs symbols. You will need at least the symbols for ntdll.dll (the heap implementation lives there). The public symbols available from http://msdl.microsoft.com/download/symbols will work fine.
Make sure you are using the correct bitness of umdh.exe. Explorer.exe is 64-bit on a 64-bit OS, so if your OS is 64-bit you need to use the 64-bit umdh.exe -- i.e. download the appropriate bitness of the Windows debugging tools.
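A rough sketch of the workflow (the process id and file names are placeholders; this assumes the standard switches of the Debugging Tools for Windows). First enable stack-trace collection for the image and restart Explorer:
gflags /i explorer.exe +ust
Point _NT_SYMBOL_PATH at the Microsoft symbol server, e.g.
set _NT_SYMBOL_PATH=srv*C:\symbols*http://msdl.microsoft.com/download/symbols
Then take two snapshots, the second after exercising the suspected leak, and diff them:
umdh -p:<pid> -f:snap1.log
umdh -p:<pid> -f:snap2.log
umdh snap1.log snap2.log > delta.log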

Resources