I am implementing a virtual filesystem using FUSE, and I need some clarification regarding the offset parameter in readdir.
Earlier we were ignoring the offset and passing 0 to the filler function, in which case the kernel is supposed to take care of it.
Our filesystem database stores: directory name, file length, inode number and parent inode number.
How do I calculate the offset?
Is the offset of each entry equal to its size, with entries sorted in increasing order of inode number? And what happens if there is a directory inside a directory: is the offset in that case equal to the sum of the sizes of the files inside?
Example: suppose the directory listing is a.txt, b.txt, c.txt,
and the inode numbers are a.txt = 3, b.txt = 5, c.txt = 7.
Offset of a.txt= directory offset
Offset of b.txt=dir offset + size of a.txt
Offset of c.txt=dir offset + size of b.txt
Is the above assumption correct?
P.S: Here are the callbacks of fuse
The selected answer is not correct
Despite the lack of upvotes on this answer, this is the correct answer. Cracking into the format of the void buffer should be discouraged; that's the intent behind declaring such things void in C code. You shouldn't write code that assumes knowledge of the format of the data behind void pointers; use whatever API is provided instead.
The code below is very simple and straightforward, as it should be. No knowledge of the format of the Fuse buffer is required.
Fictitious API
This is a contrived example of what some device's API could look
like. This is not part of Fuse.
// get_some_file_names() -
// returns a struct with buffers holding the names of files.
// PARAMETERS
// * path - A path of some sort that the fictitious device groks.
// * offset - Where in the list of file names to start.
// RETURNS
// * A name_list, it has some char buffers holding the file names
// and a couple other auxiliary vars.
//
name_list *get_some_file_names(char *path, size_t offset);
Listing the files in parts
Here's a Fuse callback that can be registered with the Fuse system to
list the filenames provided by get_some_file_names(). It's arbitrarily named readdir_callback() so its purpose is obvious.
int readdir_callback( const char *path,
void *buf, // This is meant to be "opaque".
fuse_fill_dir_t filler, // filler takes care of buf.
off_t off, // Last value given to filler.
struct fuse_file_info *fi )
{
// Call the fictitious API to get a list of file names.
name_list *list = get_some_file_names(path, off);
for (int i = 0; i < list->length; i++)
{
// Feed the file names to filler() one at a time.
if (filler(buf, list->names[i], NULL, off + i + 1))
{
break; // filler() returned 1, requesting a break.
}
incr_num_files_listed(list);
}
if (all_files_listed(list))
{
return 1; // Tell Fuse we're done.
}
return 0;
}
The off (offset) value is not used by the filler function to fill its opaque buffer, buf. The off value is, however, meaningful to the callback as an offset base as it provides file names to filler(). Whatever value was last passed to filler() is what gets passed back to readdir_callback() on its next invocation. filler()
itself only cares whether the off value is 0 or non-zero.
Indicating "I'm done listing!" to Fuse
To signal to the Fuse system that your readdir_callback() is done listing file names in parts (when the last of the list of names has been given to filler()), simply return 1 from it.
How off Is Used
The off, offset, parameter should be non-0 to perform the partial listings. That's its only requirement as far as filler() is concerned. If off is 0, that indicates to Fuse that you're going to do a full listing in one shot (see below).
Although filler() doesn't care what the off value is beyond it being non-zero, the value can still be meaningfully used. The code above uses the index of the next item in its own file list as the value. Fuse will keep passing the last off value it received back to the readdir callback on each invocation until the listing is complete (when readdir_callback() returns 1).
Listing the files all at once
int readdir_callback( const char *path,
void *buf,
fuse_fill_dir_t filler,
off_t off,
struct fuse_file_info *fi )
{
name_list *list = get_all_file_names(path);
for (int i = 0; i < list->length; i++)
{
filler(buf, list->names[i], NULL, 0);
}
return 0;
}
Listing all the files in one shot, as above, is simpler - but not by much. Note that off is 0 for the full listing. One may wonder, 'why even bother with the first approach of reading the folder contents in parts?'
The in-parts strategy is useful where a set number of buffers for file names is allocated, and the number of files within folders may exceed this number. For instance, the implementation of name_list above may only have 8 allocated buffers (char names[8][256]). Also, buf may fill up and filler() start returning 1 if too many names are given at once. The first approach avoids this.
The offset passed to the filler function is the offset of the next item in the directory. You can have the entries in the directory in any order you want. If you don't want to return an entire directory at once, you need to use the offset to determine what gets asked for and stored. The order of items in the directory is up to you; it doesn't matter what order the names, inodes, or anything else are in.
Specifically, in the readdir call you are passed an offset. You want to start calling the filler function with entries that will be at this offset or later. In the simplest case, the length of each entry is 24 bytes + strlen(name of entry), rounded up to the nearest multiple of 8 bytes. However, see the fuse source code at http://sourceforge.net/projects/fuse/ for when this might not be the case.
I have a simple example, where I have a loop (pseudo c-code) in my readdir function:
int my_readdir(const char *path, void *buf, fuse_fill_dir_t filler, off_t offset, struct fuse_file_info *fi)
{
/* ... a bunch of prep work has been omitted ... */
struct stat st;
int off, nextoff=0, lenentry, i;
char namebuf[256]; /* long enough for any one name */
for (i=0; i<NumDirectoryEntries; i++)
{
/* fill st with the stat information, including inode, etc. */
/* fill namebuf with the name of the directory entry */
lenentry = ((24+strlen(namebuf)+7)&~7);
off = nextoff; /* offset of this entry */
nextoff += lenentry;
/* Skip this entry if we weren't asked for it */
if (off<offset)
continue;
/* Add this to our response until we are asked to stop */
if (filler(buf, namebuf, &st, nextoff))
break;
}
/* All done because we were asked to stop or because we finished */
return 0;
}
I tested this within my own code (I had never used the offset before), and it works fine.
Related
When I save the 'state of uncompression', I also need to save:
"location in the compressed data, which is both a byte offset and bit offset within that byte".
After a reboot, along with inflateSetDictionary(), I call inflatePrime() as below, "to feed the bits from the byte at the compressed data offset".
inflatePrime ( , streamBits, streamCurrentPos)
Both APIs return Z_OK, but I am a bit uncertain about the parameters to inflatePrime().
This is how I gathered them:
typedef struct state_of_uncompression
{
uInt streamCurrentPos; // Missing this, tried the output from unzGetCurrentFileZStreamPos64()
int streamBits; // from : stream.data_type, after clearing bits 8,7,6: stream.data_type & (~0x1C0)
Byte dictionary_buf[32768]; // from : inflateGetDictionary()
uInt dictLength; // from : inflateGetDictionary();
uint64_t output_wrt_offset; // got this already.
} uncompression_state_info;
So after the reboot, the plan is to resume the decompression, but inflate() returns Z_STREAM_END inside unzReadCurrentFile(), as if inflate() doesn't know where to restart from.
Thanks, I appreciate any feedback.
The third argument to inflatePrime() is not a position. It is the actual bits to insert, which you need to get from the compressed data. You use fseek() or lseek() to go to the byte offset in the file, where you saved that offset as part of your entry point information. You get that byte, which advances the file pointer to the next byte, and shift down by the number of bits you are not providing, i.e. 8 minus the second argument. That's the third argument. The second argument is always in 1..7. If there are no bits to insert, then you don't call inflatePrime(), and just leave the file pointer where it is to begin inflating.
The position in your state should be a 64-bit value, not a 32-bit value as you currently have it.
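For illustration, here is a minimal sketch of that seek-and-prime step. It assumes the saved byte offset points at the byte that still holds the leftover bits, and that the z_stream has already been re-initialized with inflateInit() and inflateSetDictionary(); the function and parameter names (resume_at_entry_point, byteOffset) are made up for the example, not taken from the question's code.
#include <stdio.h>
#include <zlib.h>
/* Seek to the saved entry point and, if needed, feed the leftover
 * bits of the partially consumed byte to inflate(). */
static int resume_at_entry_point(FILE *in, z_stream *strm,
                                 long byteOffset, int streamBits)
{
    if (fseek(in, byteOffset, SEEK_SET) != 0)
        return Z_ERRNO;
    if (streamBits > 0) {                /* streamBits is in 1..7 */
        int ch = getc(in);               /* advances to the next byte */
        if (ch == EOF)
            return Z_ERRNO;
        /* The third argument is the bit values themselves, not a
         * position: drop the 8 - streamBits bits already consumed. */
        return inflatePrime(strm, streamBits, ch >> (8 - streamBits));
    }
    return Z_OK;  /* no leftover bits: just inflate() from here */
}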
I have encountered an interesting issue where a PERCPU_ARRAY created on one system with 2 processors creates an array with 2 per-CPU elements and on another system with 2 processors, an array with 128 per-CPU elements. The latter was rather unexpected to me!
The way I discovered this behavior is that a program that allocated an array for the number of CPUs (using get_nprocs_conf(3)) and then read in the PERCPU_ARRAY into it (using bpf_map_lookup_elem()) ended up writing past the end of the array and crashing.
I would like to find out the proper way for a program that reads BPF maps to determine the number of elements in a PERCPU_ARRAY on a given system.
Failing that, I think the second best approach is to pick a buffer for reading that is "large enough." Here the problem is similar: what is that number, and is there a way to learn it at runtime?
The answer comes from reading the source of bpftool, which figures this out:
unsigned int get_possible_cpus(void)
{
int cpus = libbpf_num_possible_cpus();
if (cpus < 0) {
p_err("Can't get # of possible cpus: %s", strerror(-cpus));
exit(-1);
}
return cpus;
}
int libbpf_num_possible_cpus(void)
{
static const char *fcpu = "/sys/devices/system/cpu/possible";
static int cpus;
int err, n, i, tmp_cpus;
bool *mask;
/* ---8<--- snip */
}
So that's how they do it!
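As an illustration, a map-reading program can size its buffer the same way. The sketch below is only an example: it assumes map_fd refers to a PERCPU_ARRAY whose value type is an 8-byte integer, and the function name is invented.
#include <errno.h>
#include <stdlib.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>
/* Read one key of a PERCPU_ARRAY, sizing the buffer by the number
 * of *possible* CPUs rather than get_nprocs_conf(). */
static int read_percpu_slot(int map_fd, __u32 key)
{
    int ncpus = libbpf_num_possible_cpus();
    if (ncpus < 0)
        return ncpus;                    /* negative errno */
    __u64 *values = calloc(ncpus, sizeof(*values));
    if (!values)
        return -ENOMEM;
    int err = bpf_map_lookup_elem(map_fd, &key, values);
    if (!err) {
        /* values[0] .. values[ncpus-1] hold the per-CPU copies */
    }
    free(values);
    return err;
}
The kernel allocates per-CPU map storage for every possible CPU, so libbpf_num_possible_cpus() (which parses /sys/devices/system/cpu/possible) gives the element count that a lookup actually fills, whereas get_nprocs_conf() can report fewer and lead to the overflow described above.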
I was writing programs to count page faults in a Linux system; more precisely, the number of times the kernel executes the function __do_page_fault.
So I added two global variables, pfcount_at_beg and pfcount_at_end, each incremented once per execution of __do_page_fault, at different locations in the function.
To illustrate, the modified function goes as:
unsigned long pfcount_at_beg = 0;
unsigned long pfcount_at_end = 0;
static void __kprobes
__do_page_fault(...)
{
struct vm_area_struct *vma;
... // VARIABLES DEFINITION
unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
pfcount_at_beg++; // I add THIS
...
...
// ORIGINAL CODE OF THE FUNCTION
...
pfcount_at_end++; // I add THIS
}
I expected that the value of pfcount_at_end is smaller than the value of pfcount_at_beg.
Because, I think, every time the kernel executes pfcount_at_end++ it must already have executed pfcount_at_beg++ (every function starts executing from its beginning).
On the other hand, there are many conditional returns between these two lines of code.
However, the result turns out to be the opposite: the value of pfcount_at_end is larger than the value of pfcount_at_beg.
I use printk to print these kernel variables through a self-defined syscall, and I wrote a user-level program to call the system call.
Here is my simple syscall and user-level program:
// syscall
asmlinkage int sys_mysyscall(void)
{
printk( KERN_INFO "total pf_at_beg %lu\ntotal pf_at_end %lu\n", pfcount_at_beg, pfcount_at_end);
return 0;
}
// user-level program
#include <linux/unistd.h>
#include <sys/syscall.h>
#include <unistd.h>   /* for the syscall() wrapper */
#define __NR_mysyscall 223
int main()
{
syscall(__NR_mysyscall);
return 0;
}
Does anybody know what exactly is happening here?
Just now I modified the code to make pfcount_at_beg and pfcount_at_end static. However, the result did not change: the value of pfcount_at_end is still larger than the value of pfcount_at_beg.
So possibly it is caused by the non-atomic increment operation. Would it be better if I used a read-write lock?
The ++ operator is not guaranteed to be atomic, so your counters may suffer concurrent access and end up with incorrect values. You should protect the increments with a critical section, or use the atomic_t type defined in <asm/atomic.h> and its related atomic_set() and atomic_add() functions (and a lot more).
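A minimal sketch of the atomic_t variant, keeping the same two increment sites from the question; the counter width drops to int here, so switch to atomic64_t and its helpers if that range is too small for your measurement window.
#include <linux/atomic.h>
#include <linux/kernel.h>
/* Same two counters, now atomic. */
static atomic_t pfcount_at_beg = ATOMIC_INIT(0);
static atomic_t pfcount_at_end = ATOMIC_INIT(0);
/* at the top of __do_page_fault():   atomic_inc(&pfcount_at_beg);  */
/* at the end of __do_page_fault():   atomic_inc(&pfcount_at_end);  */
asmlinkage int sys_mysyscall(void)
{
    printk(KERN_INFO "total pf_at_beg %d\ntotal pf_at_end %d\n",
           atomic_read(&pfcount_at_beg), atomic_read(&pfcount_at_end));
    return 0;
}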
Not directly connected to your issue, but using a specific syscall is overkill (but maybe it is an exercise). A lighter solution could be to use a /proc entry (also an interesting exercise).
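A rough sketch of the /proc alternative, reusing the atomic counters from the sketch above; it assumes a kernel recent enough to provide proc_create_single() (4.18+), otherwise the older single_open()/file_operations boilerplate is needed.
#include <linux/init.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
/* cat /proc/pfcount prints both counters. */
static int pfcount_show(struct seq_file *m, void *v)
{
    seq_printf(m, "pf_at_beg %d\npf_at_end %d\n",
               atomic_read(&pfcount_at_beg),
               atomic_read(&pfcount_at_end));
    return 0;
}
static int __init pfcount_proc_init(void)
{
    proc_create_single("pfcount", 0444, NULL, pfcount_show);
    return 0;
}
late_initcall(pfcount_proc_init);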
I have a very simple task to do, but somehow I am still stuck.
I have one BIG data file ("File_initial.dat") which should be read by all nodes on the cluster (using MPI); each node will perform some manipulation on its part of this BIG file (File_size / number_of_nodes), and finally each node will write its result to one shared BIG file ("File_final.dat"). The number of elements in the file remains the same.
By googling I understood that it is much better to write the data file as a binary file (I have only decimal numbers in this file) and not as a *.txt file, since no human will read this file, only computers.
I tried to implement this myself (but using formatted input/output and NOT a binary file), and I get incorrect behavior.
My code so far follows:
#include <mpi.h>
#include <fstream>
#include <iostream>
using namespace std;
#define NNN 30
double res[NNN];   // data read from the initial file
int main(int argc, char **argv)
{
ifstream fin;
// setting MPI environment
int rank, nprocs;
MPI_File file;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
// reading the initial file
fin.open("initial.txt");
for (int i=0;i<NNN;i++)
{
fin >> res[i];
cout << res[i] << endl; // to see, what I have in the file
}
fin.close();
// starting position in the "res" array as a function of "rank" of process
int Pstart = (NNN / nprocs) * rank ;
// specifying Offset for writing to file
MPI_Offset offset = sizeof(double)*rank;
MPI_Status status;
// opening one shared file
MPI_File_open(MPI_COMM_WORLD, "final.txt", MPI_MODE_CREATE|MPI_MODE_WRONLY,
MPI_INFO_NULL, &file);
// setting local for each node array
double * localArray;
localArray = new double [NNN/nprocs];
// Performing some basic manipulation (squaring each element of array)
for (int i=0;i<(NNN / nprocs);i++)
{
localArray[i] = res[Pstart+i]*res[Pstart+i];
}
// Writing the result of each local array to the shared final file:
MPI_File_seek(file, offset, MPI_SEEK_SET);
MPI_File_write(file, localArray, sizeof(double), MPI_DOUBLE, &status);
MPI_File_close(&file);
MPI_Finalize();
return 0;
}
I understand that I am doing something wrong while trying to write doubles as a text file.
How should one change the code in order to be able to save:
as a .txt file (formatted output)
as a .dat file (binary file)
Your binary file output is almost right, but your calculations of the offset within the file and the amount of data to write are incorrect. You want your offset to be
MPI_Offset offset = sizeof(double)*Pstart;
not
MPI_Offset offset = sizeof(double)*rank;
otherwise you'll have the ranks overwriting each other's data, as (say) rank 3 out of nprocs=5 would start writing at double number 3 in the file, not at double number (30/5)*3 = 18.
Also, you want each rank to write NNN/nprocs doubles, not sizeof(double) doubles, meaning you want
MPI_File_write(file, localArray, NNN/nprocs, MPI_DOUBLE, &status);
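Putting both fixes together, the write section could look like this sketch (it reuses the declarations from the question's code, with the output file renamed final.dat since the content is binary):
// offset in bytes where this rank's block of doubles starts
MPI_Offset offset = (MPI_Offset)sizeof(double) * Pstart;
MPI_File_open(MPI_COMM_WORLD, "final.dat", MPI_MODE_CREATE|MPI_MODE_WRONLY,
              MPI_INFO_NULL, &file);
MPI_File_seek(file, offset, MPI_SEEK_SET);
MPI_File_write(file, localArray, NNN/nprocs, MPI_DOUBLE, &status); // NNN/nprocs doubles
MPI_File_close(&file);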
How to write it as a text file is a much bigger issue; you have to convert the data into strings internally and then output those strings, making sure you know how many characters each line requires by careful formatting. That is described in this answer on this site.
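To give a flavor of that approach, here is an illustrative sketch (not the linked answer itself): format every double into a line of exactly LINE_LEN bytes so each rank can still compute its byte offset. It reuses Pstart, localArray, file and status from the question's code and needs <cstdio> and <cstdlib>.
#define LINE_LEN 25   /* "%24.16e\n" always produces 25 bytes */
char *text = (char *)malloc((NNN/nprocs) * LINE_LEN + 1);
for (int i = 0; i < NNN/nprocs; i++)
    snprintf(text + i*LINE_LEN, LINE_LEN + 1, "%24.16e\n", localArray[i]);
// byte offset of this rank's block of lines
MPI_Offset toff = (MPI_Offset)LINE_LEN * Pstart;
MPI_File_write_at(file, toff, text, (NNN/nprocs) * LINE_LEN, MPI_CHAR, &status);
free(text);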
I've got this
WCHAR fileName[1];
as a return value from a function (it's a system function, so I am not able to change the return type). I need fileName to be null-terminated, so I am trying to append '\0' to it, but nothing seems to work.
Once I get a null-terminated WCHAR string I will need to pass it to another system function, so it needs to stay a WCHAR array.
Could anyone give me any suggestion please?
================================================
Thanks a lot for all your help. Looks like my problem has to do with more than a missing null-terminated string.
//This works:
WCHAR szPath1[50] = L"\\Invalid2.txt.txt";
dwResult = FbwfCommitFile(szDrive, pPath1); //Successful
//This does not:
std::wstring l_fn(L"\\");
//Because Cache_detail->fileName is \Invalid2.txt.txt and I need two
l_fn.append(Cache_detail->fileName);
l_fn += L""; //To ensure null terminated
fprintf(output, "l_fn.c_str: %ls\n", l_fn.c_str()); //Prints "\\Invalid2.txt.txt"
iCommitErr = FbwfCommitFile(L"C:", (WCHAR*)l_fn.c_str()); //Unsuccessful
//Then when I do a comparison on these two they are unequal.
int iCompareResult = l_fn.compare(pPath1); // returns -1
So I need to figure out how these two ended up to be different.
Thanks a lot!
Since you mentioned FbwfFindFirst/FbwfFindNext in a comment, you're talking about the file name returned in FbwfCacheDetail. From the fileNameLength field you know the length of fileName in bytes; the length of fileName in WCHARs is fileNameLength/sizeof(WCHAR). So the simple answer is that you can set
fileName[fileNameLength/sizeof(WCHAR)] = L'\0';
Now, this is important: you need to make sure that the buffer you pass for the cacheDetail parameter of FbwfFindFirst/FbwfFindNext is sizeof(WCHAR) bytes larger than you need, otherwise the above line may run outside the bounds of your array. So for the size parameter of FbwfFindFirst/FbwfFindNext, pass in the buffer size minus sizeof(WCHAR).
For example this:
// *** Caution: This example has no error checking, nor has it been compiled ***
ULONG error;
ULONG size;
FbwfCacheDetail *cacheDetail;
// Make an intial call to find how big of a buffer we need
size = 0;
error = FbwfFindFirst(volume, NULL, &size);
if (error == ERROR_MORE_DATA) {
// Allocate more than we need
cacheDetail = (FbwfCacheDetail*)malloc(size + sizeof(WCHAR));
// Don't tell this call about the bytes we allocated for the null
error = FbwfFindFirst(volume, cacheDetail, &size);
cacheDetail->fileName[cacheDetail->fileNameLength/sizeof(WCHAR)] = L'\0';
// ... Use fileName as a null terminated string ...
// Have to free what we allocate
free(cacheDetail);
}
Of course you'll have to change a good bit to fit in with your code (plus you'll have to call fbwffindnext as well)
If you are interested in why the FbwfCacheDetail struct ends with a WCHAR[1] field, see this blog post. It's a pretty common pattern in the Windows API.
Use L'\0', not '\0'.
As each WCHAR character is 16 bits in size, you should perhaps append \0\0 to it, but I'm not sure if this works. By the way, WCHAR fileName[1]; creates a WCHAR array of length 1; perhaps you want something like WCHAR fileName[1024]; instead.
WCHAR fileName[1]; is an array of 1 character, so if null terminated it will contain only the null terminator L'\0'.
Which API function are you calling?
Edited
The fileName member in FbwfCacheDetail is only 1 character long, which is a common technique used when the length of the array is unknown and the member is the last member in a structure. As you have likely already noticed, if your allocated buffer is only sizeof(FbwfCacheDetail) long then FbwfFindFirst returns ERROR_NOT_ENOUGH_MEMORY.
So if I understand correctly, what you want to do is output the non-null-terminated filename using fprintf. This can be done as follows:
fprintf (outputfile, "%.*ls", cacheDetail.fileNameLength, cacheDetail.fileName);
This will print only the first fileNameLength characters of fileName.
An alternative approach would be to append a NULL terminator to the end of fileName. First you'll need to ensure that the buffer is long enough which can be done by subtracting sizeof (WCHAR) from the size argument you pass to FbwfFindFirst. So if you allocate a buffer of 1000 bytes, you'll pass 998 to FbwfFindFirst, reserving the last two bytes in the buffer for your own use. Then to add the NULL terminator and output the file name use
cacheDetail.fileName[cacheDetail.fileNameLength / sizeof(WCHAR)] = L'\0';
fprintf (outputfile, L"%ls", cacheDetail.fileName);