After checking out the most recent tag, v3.2.1:
% sh autogen.sh
% ./configure CC=i686-pc-mingw32-gcc
% make check
All tests appear to fail.
Using CC=gcc, the tests seem to work properly. Unfortunately, I need the resulting build to have no Cygwin dependencies, since I'm building a JNI DLL.
I tried building libffi in an MSYS2 environment with mingw-w64 and hit the same wall:
a) all tests seem to fail when I run make check
b) when I try to compile the libffi Hello World example with -lffi,
the linker complains about unresolved references to all ffi-related symbols (the symbols are indeed present in libffi.a; the culprit turned out to be the order of objects and libraries on the link line: GNU ld searches an archive only at the point where it appears, so an archive listed before the objects that reference it contributes nothing)
Fortunately, if I drop -lffi and instead list all of the object files (*.o) created by the libffi build, the resulting executable runs just fine.
Here's a link to the libffi Hello World example I used:
http://www.chiark.greenend.org.uk/doc/libffi-dev/html/Closure-Example.html
[EDIT]
After some additional experiments, I managed to compile the program by replacing -lffi with -Wl,--whole-archive,-lffi,--no-whole-archive. This makes the linker include every object file from libffi.a, regardless of where the archive appears on the command line, and everything works.
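For completeness, plain -lffi should also work when it comes after the source file, since GNU ld resolves archives left to right; a minimal link line (using the install paths from the steps below):
gcc -I /tmp/out/lib/libffi-3.2.1/include /tmp/hello.c -L /tmp/out/lib -lffi -o /tmp/hello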
Here's the Hello World example (hello.c) with detailed steps I used, in case someone finds this info useful:
/*
 * Steps for building libffi on Windows and running this Hello World example:
 *
 * 1. Download and install the latest version of MSYS2
 *    a) download the latest (64-bit or 32-bit) installer from http://msys2.github.io
 *    b) run the installer, accepting the default settings
 *    c) execute the following commands to update the system core:
 *         pacman -Sy pacman
 *         pacman -Syu
 *         pacman -Su
 *    d) restart MSYS2, if requested to do so
 *    e) execute the following command to install the development tools
 *       for 64-bit gcc:
 *         pacman --needed -S base-devel dejagnu mingw-w64-x86_64-gcc
 *       for 32-bit gcc:
 *         pacman --needed -S base-devel dejagnu mingw-w64-i686-gcc
 *    f) restart MSYS2
 * 2. Download and compile the latest version of libffi
 *    a) download the latest source code bundle from https://github.com/libffi/libffi/releases
 *    b) unpack the source code bundle into the MSYS2 tmp directory (e.g. C:\msys64\tmp)
 *    c) execute the following MSYS2 commands to compile libffi (adapt the version number):
 *         cd /tmp/libffi-3.2.1
 *         ./autogen.sh
 *         ./configure --prefix=/tmp/out --enable-static --disable-shared
 *         make
 *    d) optionally, execute the following command to run the tests:
 *         make check
 *    e) copy the distributable files to the configured /tmp/out directory:
 *         make install
 *       the following files are needed for the next step:
 *         /tmp/out/lib/libffi.a
 *         /tmp/out/lib/libffi-3.2.1/include/ffi.h
 *         /tmp/out/lib/libffi-3.2.1/include/ffitarget.h
 * 3. Compile this example
 *    a) copy this file to the MSYS2 tmp directory (e.g. C:\msys64\tmp\hello.c)
 *    b) execute the following MSYS2 command to compile the example
 *       (note that -lffi comes after hello.c, since GNU ld searches archives left to right):
 *         gcc -I /tmp/out/lib/libffi-3.2.1/include /tmp/hello.c -L /tmp/out/lib -lffi -o /tmp/hello
 *    c) run the example (/tmp/hello.exe); the output should be:
 *         Hello World!
 *
 * Troubleshooting
 *
 * If the tests seem to fail and the compilation in step 3b) above still reports undefined
 * references to 'ffi_*' symbols, try compiling with the following command instead, which
 * forces the linker to include every object file from libffi.a:
 *   gcc -I /tmp/out/lib/libffi-3.2.1/include -L /tmp/out/lib -Wl,--whole-archive,-lffi,--no-whole-archive -o /tmp/hello /tmp/hello.c
 * Another alternative is to link the original libffi object files (*.o) directly and drop -lffi, as follows:
 *   For the 64-bit version:
 *     export SRC=/tmp/libffi-3.2.1/x86_64-w64-mingw32/src
 *     gcc -I /tmp/out/lib/libffi-3.2.1/include -o /tmp/hello /tmp/hello.c $SRC/prep_cif.o $SRC/types.o $SRC/raw_api.o $SRC/java_raw_api.o $SRC/closures.o $SRC/x86/ffi.o $SRC/x86/win64.o
 *   For the 32-bit version:
 *     export SRC=/tmp/libffi-3.2.1/i686-w64-mingw32/src
 *     gcc -I /tmp/out/lib/libffi-3.2.1/include -o /tmp/hello /tmp/hello.c $SRC/prep_cif.o $SRC/types.o $SRC/raw_api.o $SRC/java_raw_api.o $SRC/closures.o $SRC/x86/ffi.o $SRC/x86/win32.o
 */
#include <stdio.h>
#include <ffi.h>

/* Acts like puts with the file given at time of enclosure */
void puts_binding(ffi_cif* cif, void* ret, void* args[], void* stream) {
    *(ffi_arg*) ret = fputs(*(char**) args[0], (FILE*) stream);
}

typedef int (*puts_t)(char*);

int main() {
    ffi_cif cif;          /* The call interface */
    ffi_type* args[1];    /* The array of pointers to function argument types */
    ffi_closure* closure; /* The allocated closure writable address */
    void* bound_puts;     /* The allocated closure executable address */
    int rc;               /* The function invocation return code */

    /* Allocate closure (writable address) and bound_puts (executable address) */
    closure = ffi_closure_alloc(sizeof(ffi_closure), &bound_puts);
    if (closure) {
        /* Initialize the array of pointers to function argument types */
        args[0] = &ffi_type_pointer;
        /* Initialize the call interface describing the function prototype */
        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, &ffi_type_sint, args) == FFI_OK) {
            /* Initialize the closure, setting stream to stdout */
            if (ffi_prep_closure_loc(closure, &cif, puts_binding, stdout, bound_puts) == FFI_OK) {
                rc = ((puts_t) bound_puts)("Hello World!");
                /* rc now holds the result of the call to fputs */
            }
        }
    }
    /* Deallocate both closure and bound_puts */
    ffi_closure_free(closure);
    return 0;
}
How can I make the dynamic loader load a library with no versioning information for a library/executable that requires versioning information?
For example, say I am trying to run /bin/bash, which requires symbol S with version X.Y.Z, and libtinfo.so.6 provides symbol S but, having been built with a musl toolchain, carries no versioning information. Currently, this gives me the following error:
/bin/bash: /usr/local/x86_64-linux-musl/lib/libtinfo.so.6: no version information available (required by /bin/bash)
Inconsistency detected by ld.so: dl-lookup.c: 112: check_match: Assertion `version->filename == NULL || ! _dl_name_match_p (version->filename, map)' failed!
I am trying to avoid the process described here, where I make a custom DSO that essentially maps every symbol (i.e. I would have to write out each one) to the appropriate symbol in the musl library. I have seen a lot of discussion about loading older versions of symbols in a DSO, but nothing about loading symbols with NO versions at all.
Does this require me to recompile all binaries that use versioned symbols so they don't require versioning information?
Thanks for your help!
Update
After some investigation, I found that /bin/bash has a handful of symbols that it gets from libtinfo.so.6, such as tgoto, tgetstr, tputs, tgetent, tgetflag, tgetnum, UP, BC, and PC. When the dynamic loader tries to find the correct version of these symbols (for example, tputs@NCURSES6_TINFO_5.0.19991023) in the musl-built libtinfo.so.6, it fails, as there is no versioning information in that file.
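You can see the mismatch directly with objdump -T, which prints the version string attached to each dynamic symbol (paths as above):
objdump -T /bin/bash | grep tputs
# the reference in bash carries a version, e.g. tputs@NCURSES6_TINFO_5.0.19991023
objdump -T /usr/local/x86_64-linux-musl/lib/libtinfo.so.6 | grep tputs
# the export in the musl build has no version attached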
I think I have the beginnings of a hacky solution (hopefully there is a better one out there). Essentially, I make a DSO that I compile with a GNU toolchain and load with LD_PRELOAD. In this DSO, I open the musl-built libtinfo.so.6.1 with dlopen and use dlsym to get the needed symbols, which are then made globally available. While there is still no version information for the libtinfo symbols, the preloaded DSO does carry version sections (.gnu.version and .gnu.version_r), and I am able to execute bash without any errors/warnings. The DSO source is below:
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <dlfcn.h>

/* Functions */
static char *(*tgoto_internal)(const char *string, int x, int y);
static char *(*tgetstr_internal)(const char *id, char **area);
static int (*tputs_internal)(const char *string, int affcnt, int (*outc)(int));
static int (*tgetent_internal)(char *bufp, const char *name);
static int (*tgetflag_internal)(const char *id);
static int (*tgetnum_internal)(const char *id);

void __attribute__ ((constructor)) init(void);

/* Library Constructor */
void
init(void)
{
    void *handle = dlopen("/usr/local/x86_64-linux-musl/lib/libtinfo.so.6.1", RTLD_LAZY);
    /* Bail out early if the musl libtinfo cannot be found */
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        exit(EXIT_FAILURE);
    }
    tgoto_internal = dlsym(handle, "tgoto");
    tgetstr_internal = dlsym(handle, "tgetstr");
    tputs_internal = dlsym(handle, "tputs");
    tgetent_internal = dlsym(handle, "tgetent");
    tgetflag_internal = dlsym(handle, "tgetflag");
    tgetnum_internal = dlsym(handle, "tgetnum");
}

char *
tgoto(const char *string, int x, int y)
{
    return tgoto_internal(string, x, y);
}

char *
tgetstr(const char *id, char **area)
{
    return tgetstr_internal(id, area);
}

int
tputs(const char *string, int affcnt, int (*outc)(int))
{
    return tputs_internal(string, affcnt, outc);
}

int
tgetent(char *bufp, const char *name)
{
    return tgetent_internal(bufp, name);
}

int
tgetflag(const char *id)
{
    return tgetflag_internal(id);
}

int
tgetnum(const char *id)
{
    return tgetnum_internal(id);
}

/* Objects */
char *UP = 0;
char *BC = 0;
char PC = 0;
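For reference, this is roughly how I build and preload the shim (the file names here are arbitrary):
gcc -shared -fPIC -o tinfo-shim.so tinfo-shim.c -ldl
LD_PRELOAD=$PWD/tinfo-shim.so /bin/bash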
However, this solution doesn't seem to work all the time: I still see the same warning as above when testing musl-built binaries, but now it no longer crashes the tests; it just prints a warning.
It should also be noted that I encountered a similar versioning error before, with libreadline.so looking for versioning information in libtinfo.so. That stemmed from my musl-built libreadline.so being the wrong version (8 instead of 7): my configuration script fell back to the GNU libreadline.so, which was version 7, and that tried to pull in the musl libtinfo.so, which raised the error. Building libreadline.so.7 with the musl toolchain resolved it completely.
Thanks to @LorinczyZsigmond for helping me arrive at the solution! Since they don't want to post a complete answer, I will, to close out the question.
The error:
/bin/bash: /usr/local/x86_64-linux-musl/lib/libtinfo.so.6: no version information available (required by /bin/bash)
Inconsistency detected by ld.so: dl-lookup.c: 112: check_match: Assertion `version->filename == NULL || ! _dl_name_match_p (version->filename, map)' failed!
tells us that /bin/bash is looking for libtinfo.so.6 in the musl lib directory. However, if we look at /bin/bash under ldd, we see that it normally resolves its DSOs from GNU's lib directory:
$ ldd /bin/bash
linux-vdso.so.1 (0x00007ffd485f7000)
libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f58ad8ba000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f58ad8b5000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f58ad6f4000)
/lib64/ld-linux-x86-64.so.2 => //lib64/ld-linux-x86-64.so.2 (0x00007f58ada22000)
When /bin/bash is run with the LD_LIBRARY_PATH environment variable pointing at the musl lib directory, the loader tries to resolve the libtinfo.so.6 dependency with musl's libtinfo.so.6, not GNU's. This causes a conflict, since /bin/bash was linked against GNU's libtinfo.so.6, which has symbol versioning (and perhaps more).
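You can watch the redirection happen, since ldd honors LD_LIBRARY_PATH:
LD_LIBRARY_PATH=/usr/local/x86_64-linux-musl/lib ldd /bin/bash
# libtinfo.so.6 should now resolve to the musl lib directory instead of /lib/x86_64-linux-gnu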
The fix, as said by @LorinczyZsigmond, is:
locally compiled shared objects should be searched first by locally compiled programs, but be hidden from the 'default' programs.
So essentially I needed to stop mixing the GNU and musl libraries, which I had been doing by heavy-handedly setting LD_LIBRARY_PATH=/usr/local/x86_64-linux-musl/lib.
Instead of using LD_LIBRARY_PATH, I used the rpath linker option (-L/usr/local/x86_64-linux-musl/lib -Wl,-rpath,/usr/local/x86_64-linux-musl/lib) to hard-code the path to my musl libraries into the executable. This lets musl-built binaries link against the DSOs they need, while GNU-built binaries still link against GNU-built DSOs (both of which are required when doing something like testing vim built from source).
As an aside: The rpath entries in an ELF's dynamic section are searched first.
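To confirm the entry was actually embedded, dump the dynamic section with readelf (the binary name here is just an example):
gcc -o main main.c -L/usr/local/x86_64-linux-musl/lib -Wl,-rpath,/usr/local/x86_64-linux-musl/lib -ltinfo
readelf -d main | grep -i -e rpath -e runpath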
Just documenting this: (self-answer to follow)
I'm aware that Sun's dtrace is not packaged for Ubuntu due to licensing issues, so I downloaded and built it from source on Ubuntu. But I'm having an issue pretty much like the one in Simple dtraces not working · Issue #17 · dtrace4linux/linux · GitHub; namely, loading the driver seems fine:
dtrace-20130712$ sudo make load
tools/load.pl
23:20:31 Syncing...
23:20:31 Loading: build-2.6.38-16-generic/driver/dtracedrv.ko
23:20:34 Preparing symbols...
23:20:34 Probes available: 364377
23:20:44 Time: 13s
... however, if I try to run a simple script, it fails:
$ sudo ./build/dtrace -n 'BEGIN { printf("Hello, world"); exit(0); }'
dtrace: invalid probe specifier BEGIN { printf("Hello, world"); exit(0); }: "/path/to/src/dtrace-20130712/etc/sched.d", line 60: no symbolic type information is available for kernel`dtrace_cpu_id: Invalid argument
As per the issue link above:
(ctf requires a private and working libdwarf lib - most older releases have broken versions).
... I then built libdwarf from source, and then dtrace on top of it (not trivial; it requires manually finding the right placement of symlinks), and I still get the same failure.
Is it possible to fix this?
Well, after a trip into gdb, I figured out that the problem occurs in dtrace's function dt_module_getctf (called via dtrace_symbol_type and, I think, dt_module_lookup_by_name). In it, I noticed that most calls propagate the attribute/variable dm_name = "linux", but when the failure occurs, I'd get dm_name = "kernel"!
Note that the original line 60 of sched.d is:
cpu_id = `dtrace_cpu_id; /* C->cpu_id; */
Then I found thr3ads.net - dtrace discuss - accessing symbols without type info [Nov 2006], where this error message is mentioned:
dtrace: invalid probe specifier fbt::calcloadavg:entry {
printf("CMS_USER: %d, CMS_SYSTEM: %d, cpu_waitrq: %d\n",
`cpu0.cpu_acct[0], `cpu0.cpu_acct[1], `cpu0.cpu_waitrq);}: in action
list: no symbolic type information is available for unix`cpu0: No type
information available for symbol
So:
- on that system, the request `cpu0.cpu_acct[0] got resolved to unix`cpu0;
- on my system, the request `dtrace_cpu_id got resolved to kernel`dtrace_cpu_id.
And since "The backtick operator is used to read the
value of kernel variables, which will be specific to the running kernel." (howto measure CPU load - DTrace General Discussion - ArchiveOrange), I thought maybe explicitly "casting" this "backtick variable" to linux would help.
And indeed it does - only a small section of sched.d needs to be changed to this:
translator cpuinfo_t < dtrace_cpu_t *C > {
    cpu_id = linux`dtrace_cpu_id; /* C->cpu_id; */
    cpu_pset = -1;
    cpu_chip = linux`dtrace_cpu_id; /* C->cpu_id; */
    cpu_lgrp = 0; /* XXX */
    /* cpu_info = *((_processor_info_t *)`dtrace_zero); /* ` */ /* XXX */
};
inline cpuinfo_t *curcpu = xlate <cpuinfo_t *> (&linux`dtrace_curcpu);
... and suddenly, it starts working:
dtrace-20130712$ sudo ./build/dtrace -n 'BEGIN { printf("Hello, world"); exit(0); }'
dtrace: description 'BEGIN ' matched 1 probe
CPU ID FUNCTION:NAME
1 1 :BEGIN Hello, world
PS:
Protip 1: NEVER run dtrace -n '::: { printf("Hello"); }' - this means "do a printf on each and every kernel event", and it will completely freeze the kernel; not even Ctrl-Alt-Del will work!
Protip 2: If you want to use DTRACE_DEBUG as in Debugging DTrace, use sudo -E:
dtrace-20130712$ DTRACE_DEBUG=1 sudo -E ./build/dtrace -n 'BEGIN { printf("Hello, world"); exit(0); }'
libdtrace DEBUG: reading kernel .ctf: /path/to/src/dtrace-20130712/build-2.6.38-16-generic/linux-2.6.38-16-generic.ctf
libdtrace DEBUG: opened 32-bit /proc/kallsyms (syms=75761)
...
I am trying to compile a program I have that controls a DAQ device. On Windows, g++ compiles and links it OK, but on Linux it doesn't. The linker (called by g++) reports:
g++ -Wall -o "acelerar-30-0" "acelerar-30-0.cpp" (in directory: /home/poly/)
/tmp/ccRLpB4q.o: In function `main':
acelerar-30-0.cpp:(.text+0x429): undefined reference to `AdxInstantAoCtrlCreate'
collect2: ld returned 1 exit status
Compilation failed.
The .cpp file is this (abridged):
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "compatibility.h"
#include "bdaqctrl.h"
#include "comunes.h"

using namespace Automation::BDaq;

#define deviceDescription L"USB-4704,BID#0"

int32 channelStart = 0;
int32 channelCount = 1;
double voltaje[0];
int32 modo;
int32 ms;

int main(int argc, char* argv[])
{
    if (argc != 3)
        salidaerror(argv[0], 1);
    channelStart = atoi(argv[1]);
    ms = atoi(argv[2]);
    if (channelStart < 0 || channelStart > 1 || ms < 10)
        salidaerror(argv[0], 1);
    ErrorCode ret = Success;
    InstantAoCtrl * instantAoCtrl = AdxInstantAoCtrlCreate();
    ...
I have spent several hours on this and can't find the answer. The SDK is for Debian/Ubuntu, and it uses the same code for Linux and Windows.
Any hints? Thanks
In my (limited) experience, typical gcc behavior requires that you specify the library containing that function as an argument on the command line, like so:
-lsome_library
This is required even if the library is in your library search path (additional paths can be specified with -L). Find the library file containing that function and use its filename, minus extensions and the leading "lib", in the argument format above.
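For example, if the Advantech SDK ships its library as libbiodaq.so (an assumption; check the SDK's lib directory for the actual libXXX.so name), the link line would look like:
g++ -Wall -o acelerar-30-0 acelerar-30-0.cpp -L/path/to/sdk/lib -lbiodaq
Note that -l arguments must come after the source/object files that use them, since the GNU linker resolves libraries left to right.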
I am working through a sample program that uses both C++ source code and CUDA. This is the essential content from my four source files.
matrixmul.cu (main CUDA source code):
#include <stdlib.h>
#include <cutil.h>
#include "assist.h"
#include "matrixmul.h"

int main(int argc, char **argv)
{
    ...
    computeGold(reference, hostM, hostN, Mh, Mw, Nw); // reference to the .cpp file
    ...
}
matrixmul_gold.cpp (C++ source code, single function, no main method):
void computeGold(float *P, const float *M, const float *N, int Mh, int Mw, int Nw)
{
    ...
}
matrixmul.h (header for matrixmul_gold.cpp file)
#ifndef matrixmul_h
#define matrixmul_h
extern "C"
void computeGold(float * P, const float * M, const float * N, int Mh, int Mw, int Nw);
#endif
assist.h (helper functions)
I am trying to compile and link these files so that they, well, work. So far I can get matrixmul_gold.cpp compiled using:
g++ -c matrixmul_gold.cpp
And I can compile the CUDA source code without errors using:
nvcc -I/home/sbu/NVIDIA_GPU_Computing_SDK/C/common/inc -L/home/sbu/NVIDIA_GPU_Computing_SDK/C/lib matrixmul.cu -c -lcutil_x86_64
But I just end up with two .o files. I've tried a lot of different ways to link the two .o files, but so far it's a no-go. What's the proper approach?
UPDATE: As requested, here is the output of:
nm matrixmul_gold.o matrixmul.o | grep computeGold
nm: 'matrixmul.o': No such file
0000000000000000 T _Z11computeGoldPfPKfS1_iii
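(That T entry is the C++-mangled form of computeGold; c++filt decodes it:
c++filt _Z11computeGoldPfPKfS1_iii
computeGold(float*, float const*, float const*, int, int, int)
A mangled name like this is exactly what an extern "C" caller cannot resolve.)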
I think the "matrixmul.o: No such file" error appears because I am not actually getting a successful compile when running the suggested compile command:
nvcc -I/home/sbu/NVIDIA_GPU_Computing_SDK/C/common/inc -L/home/sbu/NVIDIA_GPU_Computing_SDK/C/lib -o matrixmul matrixmul.cu matrixmul_gold.o -lcutil_x86_64
UPDATE 2: I was missing an extern "C" from the beginning of matrixmul_gold.cpp. I added that and the suggested compilation command works great. Thank you!
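For reference, the fixed matrixmul_gold.cpp looks something like this (a sketch; the real function body is elided above):
// matrixmul_gold.cpp
// The definition needs C linkage to match the extern "C" declaration in
// matrixmul.h; otherwise g++ emits a mangled symbol that the link step
// for matrixmul.cu cannot resolve.
extern "C"
void computeGold(float *P, const float *M, const float *N, int Mh, int Mw, int Nw)
{
    // ... CPU reference implementation of the matrix multiply ...
}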
Conventionally, you use whichever compiler compiles the code containing the main routine to also link the application. In this case main is in the .cu file, so use nvcc to do the linking. Something like this:
$ g++ -c matrixmul_gold.cpp
$ nvcc -I/home/sbu/NVIDIA_GPU_Computing_SDK/C/common/inc \
-L/home/sbu/NVIDIA_GPU_Computing_SDK/C/lib \
-o matrixmul matrixmul.cu matrixmul_gold.o -lcutil_x86_64
This will link an executable binary called matrixmul from matrixmul.cu, matrixmul_gold.o, and the cutil library (implicitly, nvcc will link the CUDA runtime and CUDA driver libraries as well).
Is there a fallocate() equivalent in OS X?
I would like to aggregate all of these "equivalent in OS X" questions into a doc/table or similar for everyone. Does anybody know of something like that?
What about using:
mkfile -n 1m test.tmp
It's not the same command but serves the same purpose.
Note that fallocate uses decimal multipliers, whereas mkfile uses binary multipliers.
See the mkfile man page.
fallocate() doesn't exist on OS X. You can "fake" it, though; Mozilla fakes it in their FileUtils class. See this file:
http://hg.mozilla.org/mozilla-central/file/3d846420a907/xpcom/glue/FileUtils.cpp#l61
Here's the code, in case that link goes stale:
/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*-
* ***** BEGIN LICENSE BLOCK *****
* Version: MPL 1.1/GPL 2.0/LGPL 2.1
*
* The contents of this file are subject to the Mozilla Public License Version
* 1.1 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
* http://www.mozilla.org/MPL/
*
* Software distributed under the License is distributed on an "AS IS" basis,
* WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
* for the specific language governing rights and limitations under the
* License.
*
* The Original Code is Mozilla code.
*
* The Initial Developer of the Original Code is
* Mozilla Foundation.
* Portions created by the Initial Developer are Copyright (C) 2010
* the Initial Developer. All Rights Reserved.
*
* Contributor(s):
* Taras Glek <tglek@mozilla.com>
*
* Alternatively, the contents of this file may be used under the terms of
* either the GNU General Public License Version 2 or later (the "GPL"), or
* the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
* in which case the provisions of the GPL or the LGPL are applicable instead
* of those above. If you wish to allow use of your version of this file only
* under the terms of either the GPL or the LGPL, and not to allow others to
* use your version of this file under the terms of the MPL, indicate your
* decision by deleting the provisions above and replace them with the notice
* and other provisions required by the GPL or the LGPL. If you do not delete
* the provisions above, a recipient may use your version of this file under
* the terms of any one of the MPL, the GPL or the LGPL.
*
* ***** END LICENSE BLOCK ***** */
#if defined(XP_UNIX)
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#elif defined(XP_WIN)
#include <windows.h>
#endif
#include "nscore.h"
#include "private/pprio.h"
#include "mozilla/FileUtils.h"
bool
mozilla::fallocate(PRFileDesc *aFD, PRInt64 aLength)
{
#if defined(HAVE_POSIX_FALLOCATE)
  return posix_fallocate(PR_FileDesc2NativeHandle(aFD), 0, aLength) == 0;
#elif defined(XP_WIN)
  return PR_Seek64(aFD, aLength, PR_SEEK_SET) == aLength
      && 0 != SetEndOfFile((HANDLE)PR_FileDesc2NativeHandle(aFD));
#elif defined(XP_MACOSX)
  int fd = PR_FileDesc2NativeHandle(aFD);
  fstore_t store = {F_ALLOCATECONTIG, F_PEOFPOSMODE, 0, aLength};
  // Try to get a contiguous chunk of disk space
  int ret = fcntl(fd, F_PREALLOCATE, &store);
  if (-1 == ret) {
    // OK, perhaps we are too fragmented, allocate non-contiguous
    store.fst_flags = F_ALLOCATEALL;
    ret = fcntl(fd, F_PREALLOCATE, &store);
    if (-1 == ret)
      return false;
  }
  return 0 == ftruncate(fd, aLength);
#elif defined(XP_UNIX)
  // The following is copied from fcntlSizeHint in sqlite
  /* If the OS does not have posix_fallocate(), fake it. First use
  ** ftruncate() to set the file size, then write a single byte to
  ** the last byte in each block within the extended region. This
  ** is the same technique used by glibc to implement posix_fallocate()
  ** on systems that do not have a real fallocate() system call.
  */
  struct stat buf;
  int fd = PR_FileDesc2NativeHandle(aFD);
  if (fstat(fd, &buf))
    return false;
  if (buf.st_size >= aLength)
    return false;
  const int nBlk = buf.st_blksize;
  if (!nBlk)
    return false;
  if (ftruncate(fd, aLength))
    return false;
  int nWrite; // Return value from write()
  PRInt64 iWrite = ((buf.st_size + 2 * nBlk - 1) / nBlk) * nBlk - 1; // Next offset to write to
  do {
    nWrite = 0;
    if (PR_Seek64(aFD, iWrite, PR_SEEK_SET) == iWrite)
      nWrite = PR_Write(aFD, "", 1);
    iWrite += nBlk;
  } while (nWrite == 1 && iWrite < aLength);
  return nWrite == 1;
#endif
  return false;
}
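If you don't need the NSPR wrappers, the XP_MACOSX branch above boils down to a small standalone helper. A minimal sketch using a plain POSIX file descriptor (the function name is mine):
#include <fcntl.h>
#include <unistd.h>

// Reserve 'length' bytes for 'fd', preferring one contiguous chunk of disk.
int fallocate_osx(int fd, off_t length)
{
    fstore_t store = {F_ALLOCATECONTIG, F_PEOFPOSMODE, 0, length, 0};
    if (fcntl(fd, F_PREALLOCATE, &store) == -1) {
        // Too fragmented for a contiguous chunk; fall back to scattered blocks
        store.fst_flags = F_ALLOCATEALL;
        if (fcntl(fd, F_PREALLOCATE, &store) == -1)
            return -1;
    }
    // F_PREALLOCATE only reserves the space; extend the logical size too
    return ftruncate(fd, length);
}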
For those wanting to create fake data files for testing, mkfile is pretty elegant. An alternative is to use dd:
dd if=/dev/zero of=zfile count=1024 bs=1024
As you can see with od -b zfile, it's full of zeros. If you want random data (which you may want for testing workflows with data compression, for example), then use "/dev/random" in place of "/dev/zero":
dd if=/dev/random of=randfile count=1024 bs=1024