Increase conventional RAM in Windows - basic

I have a program that requires 512KB of conventional RAM, but my cmd.exe only reports 500KB. My question is: how do I make more conventional memory available to the program? Thanks.

I'd say your best bet is to use a more modern programming language, but if you're constrained to QBASIC for whatever reason, you might give QB64 a try: https://www.qb64.org/

I managed to free some conventional ram by specifying:
rem config.nt file contents:
rem himem.sys must be loaded with device=, not devicehigh=,
rem since it is what makes upper memory blocks available
device=%SystemRoot%\system32\himem.sys
emm=ram
dos=high,umb
devicehigh=%SystemRoot%\system32\ansi.sys
files=255

After freeing up some conventional memory in Windows, running MEM reports the following:
655360 bytes total conventional memory
655360 bytes available to MS-DOS
626224 largest executable program size
1048576 bytes total contiguous extended memory
0 bytes available contiguous extended memory
941056 bytes available XMS memory
MS-DOS resident in High Memory Area
But what I cannot figure out is why the available contiguous extended memory is always 0?

Related

Virtual memory limit on linux based ARM

I am trying to allocate memory using calloc(). The maximum size I can get is 1027 MB (not 1024 MB), as seen in the output of the top command. ulimit -v is set to unlimited. This is an imx6q ARM board. How can I allocate more memory? Thank you!
If you're on a 32-bit architecture, you can theoretically address at most 4 GiB of virtual memory. Parts of it, however, are either reserved for the kernel or used to hold library and program code, so it's possible you're left with less than 2 GiB of usable memory.

Why does high-memory not exist for 64-bit cpu?

While I am trying to understand the high memory problem for 32-bit cpu and Linux, why is there no high-memory problem for 64-bit cpu?
In particular, how is the division of virtual memory into kernel space and user space changed, so that the requirement of high memory doesn't exist for 64-bit cpu?
Thanks.
A 32-bit system can only address 4GB of memory. In Linux this is divided into 3GB of user space and 1GB of kernel space. Physical memory beyond what fits in that 1GB kernel mapping is called "high memory": the kernel cannot keep it permanently mapped, so it must map and unmap areas of it as needed, which incurs a fairly significant performance penalty.
A 64-bit system can address a huge amount of memory - 16 EB - so this issue does not occur there.
With 32-bit addresses, you can only address 2^32 bytes of memory (4GB). So if you have more than that, you need to address it in some special way. With 64-bit addresses, you can address 2^64 bytes of memory without special effort, and that number is far bigger than all the memory that exists on the planet.
That number of bits refers to the word size of the processor. Among other things, the word size is the size of a memory address on your machine. The size of the memory address affects how many bytes can be referenced uniquely. So doing some simple math we find that on a 32 bit system at most 2^32 = 4294967296 memory addresses exist, meaning you have a mathematical limitation to about 4GB of RAM.
However, on a 64-bit system you have 2^64 = 1.8446744e+19 memory addresses available. This means that your computer can theoretically reference almost 20 exabytes of RAM, which is more RAM than anyone has ever needed in the history of computing.

How is RAM and Heap Space allocated for a linux/unix command?

So when I execute a Linux command, say a cat command for this example, on a server with 128 GB of RAM, assuming none of this RAM is currently in use and all of it is free. (I realize this will never happen, but that's why it's an example.)
1) Would this command then be executed with a heap space of all 128 GB? Or would it be up to the Linux distro I am using to decide how much heap space is allocated from the available 128 GB?
2) If so, is there another command line argument that I can pass along with my cat command to reserve more heap space than the system standard?
EDIT: 3) Is there a way to identify how much heap space will be allocated for my cat command (or any command), preferably with a built-in command-line solution rather than an external application? If it's not possible please say so, I am just curious.
When you start a program, some amount of memory is allocated for it; however, this amount is not the limit of what the program is allowed to use. Whenever more memory is requested, it can be granted until the system runs out of memory (or a resource limit such as ulimit -v is reached).
It would be up to the memory allocator (usually found in the libc) to determine how much heap would be allocated. glibc's allocator doesn't use any environment variables, but other replacements may.

On Linux: We see the following: Physical, Real, Swap, Virtual Memory - Which should we consider for sizing?

We use a tool (WhatsUp Gold) to monitor memory usage on a Linux box.
We see Memory usage (graphs) related to:
Physical, Real, Swap, Virtual Memory, and ALL Memory (which is an average of all of these).
The ALL Memory graph shows low memory usage of about 10%.
But Physical memory shows as 95% used.
Swap memory shows as 2% used.
So, do I need more memory on this Linux box?
In other words, should I go by:
the ALL Memory graph (which says the memory situation is good), OR
the Physical Memory graph (which says the memory situation is bad)?
Real and Physical
Physical memory is the amount of DRAM currently in use. Real memory shows how much of the DRAM your applications are using; it is typically lower than physical memory. Linux caches some disk data, and this caching accounts for the difference between physical and real memory: whenever there is free memory, Linux uses it for caching. Don't worry - as your applications demand memory, they get the cached space back.
Swap and Virtual
Swap is additional space beyond your actual DRAM. This space is borrowed from disk, and once your applications fill up the entire DRAM, Linux transfers some unused memory to swap to let all applications stay alive. The total of swap and physical memory is the virtual memory.
Do you need extra memory?
To answer your question: you need to check real memory. If your real memory is full, you need more RAM. Use the free command to check the amount of actually free memory. For example, on my system free says:
$ free
             total       used       free     shared    buffers     cached
Mem:      16324640    9314120    7010520          0     433096    8066048
-/+ buffers/cache:     814976   15509664
Swap:      2047992          0    2047992
You need to check the buffers/cache line. As shown above, there are really about 15 GB of free DRAM (the second data line) on my system. Check this on your system to find out whether you need more memory. The Mem, -/+ buffers/cache, and Swap lines correspond to physical, real, and swap memory, respectively.
free -m
As for using the free tool to analyse a memory shortage on Linux, I have some observations confirmed by experiment:
~# free -m
              total        used        free      shared  buff/cache   available
Mem:           2000         164         144        1605        1691         103
You should sum 'used' + 'shared' and compare the result with 'total'; the other columns only confuse matters.
I would say
[ total - (used + shared) ] should always be at least > 200 MB.
With the numbers above: 2000 - (164 + 1605) = 231 MB, which is just over that threshold.
You can also get almost the same number by checking MemAvailable in /proc/meminfo:
# cat /proc/meminfo
MemAvailable: 107304 kB
MemAvailable is how much memory Linux thinks is really free right now, before active swapping would start.
So you can currently consume at most 107304 kB; if you consume more, heavy swapping begins.
MemAvailable also correlates well with what you observe in practice.

Linux process memory consumption in bytes (not Kbytes)

In Linux, is there any way to check a process's memory usage measured in bytes (using top or ps, for example)? Not in kbytes, but bytes.
Thanks in advance!
Beyond the obvious answer of multiplying by 1024 (or 1000 if you want to be SI-correct)?
AFAIK top, ps, etc. get their info from reading /proc/[PID]/status or something equivalent, which reports the values in KB. So I'd guess the answer to your question is no. Not that a positive answer would be useful: memory is allocated from the kernel at page-level granularity, and the smallest page size Linux supports is 4 KB, so you wouldn't get more "resolution" by getting the memory consumption in bytes.
multiply kbytes by 1024
