I'm looking for a function to call from a C program that returns the number of free huge pages.
I've found functions that return the huge page size(s), and plenty of information on reading the pseudo-filesystem entries, but no man page for a function that returns the same information you get from running:
cat /proc/meminfo | grep '^Huge'
Is there such a thing?
I've tried lots of web searches and found plenty about using huge pages, but nothing on how to write a program that makes a system call to get this information.
This C code will open the file /proc/meminfo and read the values.
Here is output showing two Huge values extracted.
Notice that an entire line of text is read from the file at a time. The line buffer here is static, but other efforts could buffer the entire file or take some other approach to reading the file data.
Notice that sscanf returns the number of assignments made according to the format string, and that the format string separates out the values to match. For example, the literal text "HugePages_Total" must match or no assignments are made.
Good luck!
gcc -o main main.c
./main
HugePages_Total: 0
Hugepagesize: 2048 kB
main.c
// gcc -o main main.c
#include <string.h>
#include <stdio.h>

int main()
{
    FILE* fptr = fopen( "/proc/meminfo", "r" );
    if ( fptr )
    {
        char line[4096] = {0};
        // Loop on fgets() rather than feof(): feof() only becomes true
        // after a read has failed, so testing it first processes a stale line.
        while ( fgets( line, sizeof(line), fptr ) )
        {
            long unsigned number = 0;
            char units[10] = {0};
            if ( sscanf( line, "HugePages_Total: %lu", &number ) == 1 )
            {
                printf( "HugePages_Total: %lu\n", number );
            }
            else if ( sscanf( line, "Hugepagesize: %lu %9s", &number, units ) == 2 )
            {
                printf( "Hugepagesize: %lu %s\n", number, units );
            }
        }
        fclose( fptr );
    }
    return 0;
}
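Since the question asked specifically for the number of free huge pages: /proc/meminfo carries that in the HugePages_Free line, which the same technique handles. Here is a minimal sketch wrapping it as a function (the helper name get_free_hugepages is mine, not a system API):

// Hypothetical helper: returns HugePages_Free from /proc/meminfo, or -1 on error.
#include <stdio.h>

long get_free_hugepages( void )
{
    long free_pages = -1;
    char line[256];
    FILE* fptr = fopen( "/proc/meminfo", "r" );
    if ( !fptr )
        return -1;
    while ( fgets( line, sizeof(line), fptr ) )
    {
        if ( sscanf( line, "HugePages_Free: %ld", &free_pages ) == 1 )
            break;
    }
    fclose( fptr );
    return free_pages;
}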
Basically I'm using Linux 2.6.34 on PowerPC (Freescale e500mc). I have a process (a kind of VM that was developed in-house) that uses about 2.25 GB of mlocked virtual memory. When I kill it, I notice that it takes upwards of 2 minutes to terminate.
I investigated a little. First, I closed all open file descriptors, but that didn't seem to make a difference. Then I added some printk calls in the kernel, and through them I found that all of the delay comes from the kernel unlocking my VMAs. The delay is uniform across pages, which I verified by repeatedly checking the locked page count in /proc/meminfo. I've checked with programs that allocate that much memory, and they all die as soon as I signal them.
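Checking the locked page count from the shell can be as simple as this (Mlocked is the relevant /proc/meminfo field on kernels of this vintage):
watch -n1 "grep '^Mlocked' /proc/meminfo"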
What do you think I should check now? Thanks for your replies.
Edit: I had to find a way to share more information about the problem, so I wrote the program below:
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <string.h>
#include <errno.h>
#include <signal.h>
#include <sys/time.h>

#define MAP_PERM_1 (PROT_WRITE | PROT_READ | PROT_EXEC)
#define MAP_PERM_2 (PROT_WRITE | PROT_READ)
#define MAP_FLAGS  (MAP_ANONYMOUS | MAP_FIXED | MAP_PRIVATE)
#define PG_LEN     4096
#define align_pg_32(addr) ((addr) & 0xFFFFF000)
#define num_pg_in_range(start, end) (((end) - (start) + 1) >> 12)

static inline void __force_pgtbl_alloc(unsigned int start)
{
    volatile int *s = (int *) start;
    *s = *s;
}

int __map_a_page_at(unsigned int start, int whichperm)
{
    int perm = whichperm ? MAP_PERM_1 : MAP_PERM_2;
    /* fd must be -1 (not 0) for portable MAP_ANONYMOUS usage. */
    if(MAP_FAILED == mmap((void *)start, PG_LEN, perm, MAP_FLAGS, -1, 0)){
        fprintf(stderr,
            "mmap failed at 0x%x: %s.\n",
            start, strerror(errno));
        return 0;
    }
    return 1;
}

int __mlock_page(unsigned int addr)
{
    if (mlock((void *)addr, (size_t)PG_LEN) < 0){
        fprintf(stderr,
            "mlock failed on page: 0x%x: %s.\n",
            addr, strerror(errno));
        return 0;
    }
    return 1;
}

void sigint_handler(int p)
{
    struct timeval start = {0, 0}, end = {0, 0}, diff = {0, 0};
    gettimeofday(&start, NULL);
    munlockall();
    gettimeofday(&end, NULL);
    timersub(&end, &start, &diff);
    /* tv_sec/tv_usec are not unsigned ints; cast for printf. */
    printf("Munlock'd entire VM in %ld secs %ld usecs.\n",
        (long)diff.tv_sec, (long)diff.tv_usec);
    exit(0);
}

int make_vma_map(unsigned int start, unsigned int end)
{
    int num_pg = num_pg_in_range(start, end);
    if (end < start){
        fprintf(stderr,
            "Bad range: start: 0x%x end: 0x%x.\n",
            start, end);
        return 0;
    }
    for (; num_pg; num_pg--, start += PG_LEN){
        if (__map_a_page_at(start, num_pg % 2) && __mlock_page(start))
            __force_pgtbl_alloc(start);
        else
            return 0;
    }
    return 1;
}

void display_banner()
{
    printf("-----------------------------------------\n");
    printf("Virtual memory allocator. Ctrl+C to exit.\n");
    printf("-----------------------------------------\n");
}

int main()
{
    unsigned int vma_start = 0, vma_end = 0;
    int input = 0;      // %i accepts decimal, octal, or 0x-prefixed hex
    int start_end = 0;  // 0: start; 1: end;
    display_banner();
    // Bind SIGINT handler.
    signal(SIGINT, sigint_handler);
    while (1){
        if (!start_end)
            printf("start:\t");
        else
            printf("end:\t");
        if (scanf("%i", &input) != 1)
            break;      // EOF or bad input: stop rather than spin
        if (start_end){
            vma_end = align_pg_32((unsigned int)input);
            make_vma_map(vma_start, vma_end);
        }
        else{
            vma_start = align_pg_32((unsigned int)input);
        }
        start_end = !start_end;
    }
    return 0;
}
As you can see, the program accepts ranges of virtual addresses, each range defined by a start and an end. Each range is then further subdivided into page-sized VMAs by giving different permissions to adjacent pages. Interrupting the program (with SIGINT) triggers a call to munlockall(), and the time that call takes to complete is duly noted.
Now, when I run it on the Freescale e500mc with Linux 2.6.34 over the range 0x30000000-0x35000000, I get a total munlockall() time of almost 45 seconds. However, if I map the same total number of pages (and locked VMAs) using smaller start-end ranges in random order (that is, not necessarily at increasing addresses), the total munlockall() time is no more than 4 seconds.
I tried the same thing on x86_64 with Linux 2.6.34 and my program compiled with the -m32 flag, and the variations, though not as pronounced as on ppc, are still about 8 seconds for the first case and under a second for the second.
I also tried the program on Linux 2.6.10 on the one end and on 3.19 on the other, and these monumental differences don't exist there; what's more, munlockall() always completes in under a second.
So it seems that the problem, whatever it is, exists only around the 2.6.34 version of the Linux kernel.
You said the VM was developed in-house. Does this mean you have access to the source? I would start by checking whether it has anything that delays immediate termination to avoid data loss.
Otherwise, could you provide more information? You may also want to check out https://unix.stackexchange.com/, as they would be better suited to help with any issues the Linux kernel may be having.
Sorry if the question is basic, but are there options in the shell to show how much time and how much memory (peak memory occupied) the execution of a command has taken?
Example: I want to call a binary file as follows: ./binary --option1 option1 --option2 option2
After the execution of this command, I would like to know how much time and memory it took.
Thanks
The time(1) command, either as a separate executable or as a shell built-in, can be used to measure the time used by a program, both in terms of wall-clock time and CPU time.
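For example, using the command from the question (note that -v belongs to GNU time, the standalone /usr/bin/time executable rather than the bash built-in, and its report includes the peak resident set size):
time ./binary --option1 option1 --option2 option2
/usr/bin/time -v ./binary --option1 option1 --option2 option2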
But measuring the memory usage of a program, or even agreeing on how to define it, is a bit different. Do you want the sum of its allocations? The maximum amount of memory allocated at a single moment? Are you interested in what the code does, or in the program's behavior as a whole, where the memory allocator makes a difference? And how do you count the memory used by shared objects? Or memory-mapped areas?
Valgrind may help with some memory-related questions, but it is more of a development tool than a day-to-day system administration tool. More specifically, the Massif heap profiler can be used to profile the memory usage of an application, but it does have a measurable performance impact, especially with stack profiling enabled.
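A typical Massif session looks something like this (ms_print ships with Valgrind; the output file name ends in the profiled process's PID):
valgrind --tool=massif ./binary --option1 option1 --option2 option2
ms_print massif.out.<pid>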
There are several files in /proc that might be simpler than using a profiler, assuming you know the PID of the process in which you're interested.
Of primary interest is /proc/$PID/status, which lists (among other things) the virtual memory peak and current size (VmPeak and VmSize, respectively), and the resident set "high water mark" and current resident set size (VmHWM and VmRSS, respectively).
I set up a simple C program to grab memory and then free it, watched the corresponding /proc/$PID/status file while it ran, and it seemed to verify the man page.
See man proc for a complete list of files that may interest you.
Here's the command line and program I used for the test:
Monitored with
PID="<program's pid>"
watch "cat /proc/$PID/status | grep ^Vm"
( compile with gcc -o grabmem grabmem.c -std=c99 )
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>

#define INTNUM ( 1024 * 1024 )
#define PASSES 16

int main(){
    int* mems[PASSES];
    int cur = 0;
    // calloc() returns NULL once no more memory can be grabbed
    while( ( mems[cur] = (int*)calloc( sizeof( int ), INTNUM ) ) ){
        for( int i = 0; i < INTNUM; i++)
            mems[cur][i] = rand();  // touch every page so it is really committed
        printf("PID %i Pass %i: Grabbed another %zu bytes of mem.\n",
            (int)getpid(),
            cur,
            sizeof(int) * INTNUM
        );
        sleep( 1 );
        cur++;
        if( cur >= PASSES ){
            printf("Freeing memory...");
            for( cur = (PASSES - 1); cur >= 0; cur-- ){
                free( mems[cur] );
                mems[cur] = NULL;
            }
            cur = 0;
            printf("OK\n");
        }
    }
    fprintf(stderr, "Couldn't calloc() memory.\n");
    exit( -1 );
}
Do you know of a program or method to see the (sequence of) bytes in a text or HTML file?
Not to see the characters, but the complete sequence of bytes.
Any recommendations?
Yes, it is called a hex editor... hundreds of those exist out there.
Here are some: http://en.wikipedia.org/wiki/Comparison_of_hex_editors
A common hex editor allows you to view any file's byte sequence.
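If you only need a read-only look from the terminal, standard command-line dump tools do the same job; for example (the file name here is just an example):
xxd page.html              # hex plus ASCII column
hexdump -C page.html       # canonical hex+ASCII format
od -A x -t x1z page.html   # od with hex offsets, hex bytes, printable chars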
If you just want to see the existing bytes (without changing them) you can use a hex-dump program, which is much smaller and simpler than a hex editor. For example, here's one I wrote several years ago:
/* public domain by Jerry Coffin
 */
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>  /* for isprint() */

int main(int argc, char **argv) {
    unsigned long offset = 0;
    FILE *input;
    int bytes, i, j;
    unsigned char buffer[16];
    char outbuffer[60];

    if ( argc < 2 ) {
        fprintf(stderr, "\nUsage: dump filename [filename...]");
        return EXIT_FAILURE;
    }
    for (j=1; j<argc; ++j) {
        if ( NULL == (input=fopen(argv[j], "rb")))
            continue;
        printf("\n%s:\n", argv[j]);
        while (0 < (bytes=fread(buffer, 1, 16, input))) {
            /* label the row with its starting offset, then advance */
            sprintf(outbuffer, "%8.8lx: ", offset);
            offset += bytes;
            for (i=0; i<bytes; i++) {
                sprintf(outbuffer+10+3*i, "%2.2X ", buffer[i]);
                if (!isprint(buffer[i]))
                    buffer[i] = '.';
            }
            printf("%-60s %*.*s\n", outbuffer, bytes, bytes, buffer);
        }
        fclose(input);
    }
    return 0;
}
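To try it, compile and pass one or more file names (assuming the source is saved as dump.c):
gcc -o dump dump.c
./dump page.html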
I need help getting started with libsndfile.
I have four .wav files (all the same sample rate).
I want to join the first two together, then the next two together,
and then mix the resulting two .wav files into one.
For mixing two wav files:
#include <cstdio>
#include <sndfile.h>
#include <windows.h>
#include <cstdlib>
#include <cmath>

#define BUFFER_LEN 1024
#define MAX_CHANNELS 6

static void data_processing (double *data, int count, int channels) ;

int main (void)
{   static double data [BUFFER_LEN] ;
    static double data2 [BUFFER_LEN] ;
    static double outdata [BUFFER_LEN] ;

    SNDFILE *infile, *outfile, *infile2 ;
    SF_INFO sfinfo ;
    int readcount ;
    SF_INFO sfinfo2 ;
    int readcount2 ;
    const char *infilename = "inputOne.wav" ;
    const char *infilename2 = "inputTwo.wav" ;
    const char *outfilename = "output.wav" ;

    if (! (infile = sf_open (infilename, SFM_READ, &sfinfo)))
    {   /* Open failed so print an error message. */
        printf ("Not able to open input file %s.\n", infilename) ;
        /* Print the error message from libsndfile. */
        puts (sf_strerror (NULL)) ;
        return 1 ;
    } ;
    if (sfinfo.channels > MAX_CHANNELS)
    {   printf ("Not able to process more than %d channels\n", MAX_CHANNELS) ;
        return 1 ;
    } ;
    if (! (infile2 = sf_open (infilename2, SFM_READ, &sfinfo2)))
    {   /* Open failed so print an error message. */
        printf ("Not able to open input file %s.\n", infilename2) ;
        /* Print the error message from libsndfile. */
        puts (sf_strerror (NULL)) ;
        return 1 ;
    } ;
    if (sfinfo2.channels > MAX_CHANNELS)
    {   printf ("Not able to process more than %d channels\n", MAX_CHANNELS) ;
        return 1 ;
    } ;
    /* Open the output file, reusing the first input's format. */
    if (! (outfile = sf_open (outfilename, SFM_WRITE, &sfinfo)))
    {   printf ("Not able to open output file %s.\n", outfilename) ;
        puts (sf_strerror (NULL)) ;
        return 1 ;
    } ;

    while ((readcount = sf_read_double (infile, data, BUFFER_LEN)) &&
           (readcount2 = sf_read_double (infile2, data2, BUFFER_LEN)))
    {   data_processing (data, readcount, sfinfo.channels) ;
        data_processing (data2, readcount2, sfinfo2.channels) ;
        /* Mix only as many samples as both reads returned; the last
           block of each file can be shorter than BUFFER_LEN. */
        int mixcount = readcount < readcount2 ? readcount : readcount2 ;
        for (int i = 0 ; i < mixcount ; ++i)
        {   outdata [i] = (data [i] + data2 [i]) - (data [i]) * (data2 [i]) / 65535 ;
        }
        sf_write_double (outfile, outdata, mixcount) ;
    } ;

    /* Close input and output files. */
    sf_close (infile) ;
    sf_close (infile2) ;
    sf_close (outfile) ;

    system ("PAUSE") ;
    return 0 ;
} /* main */

static void data_processing (double *data, int count, int channels)
{   double channel_gain [MAX_CHANNELS] = { 0.5, 0.8, 0.1, 0.4, 0.4, 0.9 } ;
    int k, chan ;

    for (chan = 0 ; chan < channels ; chan ++)
        for (k = chan ; k < count ; k += channels)
            data [k] *= channel_gain [chan] ;

    return ;
} /* data_processing */
This is how I am mixing two simple wav files, which are 16-bit signals.
First of all, audio mixing is not as easy as one might think. There can be many artifacts after superimposing two signals. In my case it is working just fine for what I need, but if you need exact results you may have to search more on exactly superimposing signals over each other.
For joining two wav files, you can just read your first wav file, copy its data into the result wav file, and finally append the data of the second wav file to the result, as in the sketch below.
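A minimal sketch of that join step, assuming both inputs have the same sample rate and channel count (the file names here are hypothetical):

#include <stdio.h>
#include <sndfile.h>

#define BUFFER_LEN 1024

int main (void)
{   double data [BUFFER_LEN] ;
    SF_INFO info1 = {0}, info2 = {0} ;
    sf_count_t count ;

    SNDFILE *in1 = sf_open ("first.wav", SFM_READ, &info1) ;
    SNDFILE *in2 = sf_open ("second.wav", SFM_READ, &info2) ;
    if (!in1 || !in2)
    {   puts (sf_strerror (NULL)) ;
        return 1 ;
    }
    /* The output inherits the first file's format/rate/channels. */
    SNDFILE *out = sf_open ("joined.wav", SFM_WRITE, &info1) ;
    if (!out)
    {   puts (sf_strerror (NULL)) ;
        return 1 ;
    }
    /* Copy all of file 1, then append all of file 2. */
    while ((count = sf_read_double (in1, data, BUFFER_LEN)) > 0)
        sf_write_double (out, data, count) ;
    while ((count = sf_read_double (in2, data, BUFFER_LEN)) > 0)
        sf_write_double (out, data, count) ;

    sf_close (in1) ;
    sf_close (in2) ;
    sf_close (out) ;
    return 0 ;
}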
This link might also be useful for you
http://www.vttoth.com/digimix.htm
This is old, but I'm reading it, so somebody else inevitably will too.
I agree in general with nishantmishra regarding the use of libsndfile, but this mixing algorithm would cause a certain amount of distortion if it worked the way the author expects:
outdata[i] = (data[i] + data2[i]) -(data[i])*(data2[i])/65535;
In reality, that last term only adds a little bit of low-level noise (before reading V. Toth's article I thought it was an interesting form of dither): floating-point audio ranges from -1.0 to 1.0, so dividing the product by 65535 reduces it by about 96 dB, making it inaudible for 16-bit audio. If you actually want to implement this method, read on in V. Toth's posting about doing it for signed data.
Whether signed or unsigned, you will be adding intermodulation distortion (even if it isn't audibly nasty, it will be there). In other words, this would work fine for voice or low bit-rate (telephone) audio, where the distortion in the transmission channel far exceeds the intermodulation distortion added by the product of the channels.
If you are just processing two files, and not doing so in real time (playing back as you read blocks from the file or stream), you can normalize both files, apply mix gains such that gain1 + gain2 = 1.0, and sum the signals. The low-resolution concerns V. Toth mentions are not an issue with 32-bit float or 64-bit double samples.
Finally, if you are concerned about one source being too quiet when the other is silent, you can apply a dynamic range compressor cross-linked to the other channel. Another tactic is to apply the same algorithm not to individual samples but to the envelopes of the audio:
outEnvelope[i] = (envelope1[i] + envelope2[i]) \
-(envelope1[i])*(envelope2[i]);
outdata[i]=outEnvelope[i]*(data[i] + data2[i]);
where the envelope is a moving-RMS representation:
envelope1[i] = sqrt(lowPassFilter(data[i]*data[i])); // moving RMS
and the low-pass filter cutoff frequency is on the order of <10 Hz, to minimize harmonic distortion.
Really, I think all you want to do most of the time is drop that last term and use this line:
outdata[i] = (data[i] + data2[i]);
As long as the channel gains sum to 1.0, you will get a good result. The code by nishantmishra works well because that last distortion-adding term is reduced to the noise floor, so you may as well save CPU time and eliminate it.
When viewing (or editing) a .gz file, vim knows to locate gunzip and display the file properly.
In such cases, getfsize(expand("%")) would be the size of the gzipped file.
Is there a way to get the size of the expanded file?
[EDIT]
Another way to solve this might be getting the size of current buffer, but there seems to be no such function in vim. Am I missing something?
There's no easy way to get the uncompressed size of a gzipped file short of uncompressing it and using the getfsize() function, which might not be what you want. I took a look at RFC 1952 - GZIP File Format Specification, and the only thing that might be useful is the ISIZE field, which contains "...the size of the original (uncompressed) input data modulo 2^32".
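Note that gzip itself reports that field, which may already be enough from within vim:
gzip -l my_zip_file.gz       (the "uncompressed" column comes from ISIZE)
:!gzip -l %                  (the same thing from a vim buffer)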
EDIT:
I don't know if this helps, but here's some proof-of-concept C code I threw together that retrieves the value of the ISIZE field in a gzip'd file. It works for me using Linux and gcc, but your mileage may vary. If you compile the code, and then pass in a gzip'd filename as a parameter, it will tell you the uncompressed size of the original file.
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>

int main(int argc, char *argv[])
{
    FILE *fp = NULL;
    int i = 0;

    if ( argc != 2 ) {
        fprintf(stderr, "Must specify file to process.\n" );
        return -1;
    }
    // Open the file for reading (binary mode, for portability)
    if (( fp = fopen( argv[1], "rb" )) == NULL ) {
        fprintf( stderr, "Unable to open %s for reading: %s\n", argv[1], strerror(errno));
        return -1;
    }
    // Look at the first two bytes and make sure it's a gzip file
    int c1 = fgetc(fp);
    int c2 = fgetc(fp);
    if ( c1 != 0x1f || c2 != 0x8b ) {
        fprintf( stderr, "File is not a gzipped file.\n" );
        return -1;
    }
    // Seek to four bytes from the end of the file
    fseek(fp, -4L, SEEK_END);
    // Array containing the last four bytes
    unsigned char read[4];
    for (i = 0; i < 4; ++i ) {
        int charRead = 0;
        if ((charRead = fgetc(fp)) == EOF ) {
            // This shouldn't happen
            fprintf( stderr, "Read end-of-file" );
            exit(1);
        }
        else
            read[i] = (unsigned char)charRead;
    }
    // Assemble the four ISIZE bytes into an unsigned int. RFC 1952 stores
    // them least-significant byte first, so shift explicitly rather than
    // memcpy'ing, which would give the wrong answer on a big-endian host.
    unsigned int intval = read[0]
                        | ((unsigned int)read[1] << 8)
                        | ((unsigned int)read[2] << 16)
                        | ((unsigned int)read[3] << 24);
    printf( "The uncompressed filesize was %u bytes (0x%02x hex)\n", intval, intval );
    fclose(fp);
    return 0;
}
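Compile and run it like this (assuming the source is saved as isize.c; the file name is just an example):
gcc -o isize isize.c
./isize my_zip_file.gz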
This appears to work for getting the byte count of the current buffer:
(line2byte(line("$")+1)-1)
If you're on Unix/linux, try
:%!wc -c
That's in bytes. (It works on Windows too, if you have e.g. Cygwin installed.) Then hit u to undo the filter and get your content back.
HTH
From within vim editor, try this:
<Esc>:!wc -c my_zip_file.gz
That will display the number of bytes in the file.