Make gdb use less memory - Linux

gdb uses way too much memory on my Linux machine - I've allocated 2GB to this LXC virtual machine, but that's not enough.
Is there anything I can do apart from selectively uninstalling -debuginfo packages, which will effectively blind me if a problem turns out to involve those packages?

This was due to an issue in a CVS version of gdb. Downgrading gdb solved it.

Related

Run PyCharm with a higher memory limit

I'm having trouble running PyCharm with a larger memory limit on Linux Mint 18.3 Sylvia. I have a script that requires a lot of memory, and PyCharm only lets you change the limit via a config file.
Example:
https://superuser.com/questions/919204/how-can-i-increase-the-memory-heap-in-pycharm
The problem is that I have PyCharm installed via snap, and snap installations are mounted as read-only file systems, so I cannot edit that config. Is there any easy way around this?
I also tried remounting the file system as writable, but that does not seem to work for me:
https://askubuntu.com/questions/47538/how-to-make-read-only-file-system-writable
For anyone coming across this in 2023, PyCharm now lets you configure the memory heap from the UI: https://www.jetbrains.com/help/pycharm/increasing-memory-heap.html
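If the UI route is unavailable, JetBrains IDEs also read a user-level `pycharm64.vmoptions` override from the per-version config directory, which sidesteps the read-only snap mount. A minimal sketch (the exact config path varies by PyCharm version; the one in the comment is an assumption):

```
# ~/.config/JetBrains/PyCharm2023.1/pycharm64.vmoptions (path is illustrative)
-Xms512m
-Xmx2048m
```

Restart PyCharm after saving; -Xmx is the maximum heap size the JVM may use.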

Memory increase in CentOS and VirtualBox

I am using CentOS in VirtualBox.
My test web server frequently goes down, so I suspect the cause is low memory and want to increase it.
The original memory was 1024 MB, and in the VM's system configuration I upgraded it to 2048 MB.
What commands do I need to run so that CentOS picks up the change?
I assume that just increasing the memory in VirtualBox is not enough and that I must run some command or change some file inside CentOS, but I do not know how.
I think that should work. What makes you think it isn't working?
If you run
cat /proc/meminfo
before and after, does it reflect the value you set?
You could also add a swap file if you haven't already.
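The check above can be scripted. A minimal sketch, assuming a Linux guest (the swap-file commands are commented out because they need root and write to disk):

```shell
# Show how much RAM the guest kernel actually sees.
grep MemTotal /proc/meminfo

# If this reflects the new 2048 MB allocation, the VirtualBox change
# took effect and no extra commands are needed inside CentOS.

# Optional extra headroom: a 1 GB swap file (run as root):
# dd if=/dev/zero of=/swapfile bs=1M count=1024
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile
```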

High CPU usage using Lauterbach T32 with Linux/Qt

I'm experiencing very high CPU usage (~100%) using the Qt version of T32 on Linux, even when the program is waiting for user interaction. The executable is t32marm-qt.
This does not happen when I use the standard Tcl-based t32marm executable.
strace shows that the executable continuously cycles on the
clock_gettime(CLOCK_REALTIME,...)
syscall.
The Linux distribution is Mint 14 32-bit (derivation of Ubuntu 12.10).
Has anybody experienced this behavior?
If so, is it a bug or just a wrong configuration?
Yes, it has just been confirmed to me that this is a software bug, fixed in more recent versions of the tool. If you encounter this problem, update your version.

Setting up Environment for Buffer Overflow Learning

I am currently reading several security books (my passion) on secure programming, but the distros they provide on disc are either faulty or non-existent.
Books: Hacking: The Art of Exploitation (2nd Ed.), Grey Hat Hacking (2nd Ed.)
The issue is that when I try to follow the examples, newer distros obviously have stack protection and other security features implemented to prevent these situations. I have tried to manually set up the environment provided with Hacking: The Art of Exploitation, but I have failed.
I have also tried DVL (Damn Vulnerable Linux), but it's way too bloated. I just want a minimal environment that I can keep on a small partition and choose from the bootloader, OR run in a small VirtualBox VM.
So my question is this: how do I go about setting up an environment (a distro with an old kernel) in which I can follow most of these examples? If someone could tell me the kernel and GCC version of DVL, I could probably get most of it set up myself.
You need to rebuild the kernel without stack and heap protections, including the non-executable stack. You then need to compile with gcc flags that turn off the protections, one of which is "-fno-stack-protector". Also, because you will run into it soon enough, you probably want to statically compile your program: it will be a bit easier to understand when you are debugging into your 0x41414141 payload.
Also, depending on your definition of "bloat", it might be easiest to just download an older Linux distro, Red Hat 5 or an old Slackware, and install and use that with the default toolchain.
If you still have DVL available, you can use the commands:
$ uname -r
$ gcc --version
to find out for yourself.
Edit: according to distrowatch.com the Linux kernel is 2.6.20 and gcc is 3.4.6.
There is an article on the sevagas website that is related to your question:
How-to setup a buffer overflow testing environment

Cross Compiling Linux Kernels and Debugging via VMware

I'm considering doing some Linux kernel and device driver development under a VMware VM for testing (Ubuntu 9.04 as a guest under VMware Server 2.0) while doing the compiles on the Ubuntu 8.04 host.
I don't want to take the performance hit of doing the compiles under the VM.
I know that the kernel obviously doesn't link to anything outside itself, so there shouldn't be any problems in that regard, but:
are there any special gotchas I need to watch out for when doing this?
beyond still having a running computer when the kernel crashes, are there any other benefits to this setup?
Are there any guides to using this kind of setup?
Edit
I've seen numerous references to remote debugging in VMware via Workstation 6.0 using GDB on the host. Does anyone know if this works with any of the free versions of VMware, such as Server 2.0?
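For reference, the Workstation setup I've seen described enables a gdb stub via `.vmx` options like the following (32-bit guest shown; whether the free Server 2.0 honors these options is exactly what I'm asking):

```
debugStub.listen.guest32 = "TRUE"
debugStub.port.guest32 = "8832"
```

On the host you would then run gdb against the guest's vmlinux and connect with `target remote localhost:8832`.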
I'm not sure about the Ubuntu specifics. Given that you are not doing a real cross-compilation (i.e. x86 -> ARM), I would consider using the make-kpkg package. This should produce an installable .deb archive with the kernel for your system. This works for me on Debian; it should also work for you on Ubuntu.
more about make-kpkg:
http://www.debianhelp.co.uk/kernel2.6.htm
I'm not aware of any gotchas, but basically it depends on what part of the kernel you are working with. The more specialized the hardware/driver you need, the more likely a VM won't work for you.
Probably faster boots, and my favorite: the ability to take a screenshot (and cut'n'paste) of the panic message.
Try browsing the VMware Communities. This thread looks very promising, although it discusses the topic for macOS:
http://communities.vmware.com/thread/185781
The compile-edit-compile cycle is quite quick anyway; you don't recompile your whole kernel each time you modify the driver.
Short of a crash, you can hit deadlocks, resource misuse that leaves a module unremovable, memory leaks, etc.: all kinds of things that need a reboot even if your machine did not crash outright. So yes, this can be a good idea.
The gotchas can come in the install step and in module dependency generation, since you want to install your driver on the target machine, not on the host.
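To illustrate, a typical out-of-tree module build compiles against a specific kernel's headers, so the install/depmod step has to run on the target VM, not the host (Makefile sketch; `mydriver.c` is a hypothetical source file):

```make
# Kbuild-style Makefile for an out-of-tree module (illustrative)
obj-m := mydriver.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules

install:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(CURDIR) modules_install
	depmod -a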
