Driver's source code structure requirements for Linux Kernel upstream

I am planning to rewrite my sensor's driver in order to try to get my module into the Linux kernel. I was wondering whether there are requirements regarding the organization of the source code. Is it mandatory to keep all the code in one single source file, or is it possible to split it up into several?
I would prefer a modular approach for my implementation, with one file containing the API and all the structures required for the kernel registration, and another file with the low-level operations to exchange data with the sensor (i.e. mysensor.c & mysensor_core.c).
What are the requirements from this point of view?
Is there a limitation in terms of lines of code for each file?
Note:
I tried to have a look at the official GitHub repo and it seems to me that the code is always limited to one single source file.
https://github.com/torvalds/linux/tree/master/drivers/misc

Here is an extract from "linux/drivers/iio/gyro/Makefile" as an example:
# Currently this is rolled into one module, split it if
# we ever create a separate SPI interface for MPU-3050
obj-$(CONFIG_MPU3050) += mpu3050.o
mpu3050-objs := mpu3050-core.o mpu3050-i2c.o
The "mpu3050.o" file used to build the "mpu3050.ko" module is built by linking two object files "mpu3050-core.o" and "mpu3050-i2c.o", each of which is built by compiling a correspondingly named source file.
Note that if the module is built from several source files as above, the base name of the final module "mpu3050" must be different to the base name of each of the source files "mpu3050-core" and "mpu3050-i2c". So in your case, if you want the final module to be called "mysensor.ko" then you will need to rename the "mysensor.c" file.
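For the layout described in the question, a hypothetical Makefile fragment following the MPU-3050 pattern above could look like this (CONFIG_MYSENSOR and the source file names are placeholders; since the module is mysensor.ko, neither source file may be named mysensor.c):
# Hypothetical fragment: two source files linked into one module
obj-$(CONFIG_MYSENSOR) += mysensor.o
mysensor-objs := mysensor-main.o mysensor-core.o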

Related

Convert Open VMS FDL (File Definition Language) to linux

I am working on a project where we are migrating from OpenVMS to Unix/Linux.
There's a functionality called "FDL" in OpenVMS which I want to achieve in Unix.
What FDL actually does is define a certain set of attributes for a file or a record, like fixing a block size for a particular file, setting the file organization to sequential, variable or relative, specifying the record size in a file beforehand, specifying a carriage return (escape sequence) for records, etc.
How can I set these attributes before a file gets created in Unix?
FDL is merely a syntax/descriptive method to set/view OpenVMS file attributes (metadata), which has no equivalent in typical Linux file systems. Those attributes are implemented by the (Files-11 / ODS) file system and acted on by RMS (the OpenVMS Record Management Services), for which again there is no equivalent in Linux, although there are third-party packages (Sector7).
So, much more than an FDL question, this is an RMS question.
RMS offers 'record' access, where a record is a blob of bytes defined in the file which can be read sequentially, by number, or by key (indexed file). The attributes mentioned in the question are to do with simple sequential access, but there Linux just offers a byte-stream method. The application is supposed to know how much to read and when to stop reading. Possibly a record terminator, frequently a linefeed, is used, but that's about it (fscanf).
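To make the contrast concrete, here is a minimal sketch in Go of the kind of fixed-length record reading the application itself has to do on a plain Linux byte stream (the record size and file name are hypothetical; nothing in the file itself records them):
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	const recordSize = 80 // the application must know this; the file does not say
	f, err := os.Open("data.dat")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	buf := make([]byte, recordSize)
	for {
		_, err := io.ReadFull(f, buf)
		if err == io.EOF {
			break // clean end of file
		}
		if err != nil {
			panic(err) // io.ErrUnexpectedEOF signals a short trailing record
		}
		fmt.Printf("record: %q\n", buf)
	}
}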
Other than using a 'parallel' metadata file, or reserving an initial header area in your files, there is no standard way to store metadata describing how to use the byte stream in a file, and either workaround makes the files harder to use by other applications.
All this to say: No Can Do.
Sorry.

How do I seek for holes and data in a sparse file in golang [duplicate]

I want to copy files from one place to another and the problem is I deal with a lot of sparse files.
Is there any (easy) way of copying sparse files without becoming huge at the destination?
My basic code:
out, err := os.Create(bricks[0] + "/" + fileName)
in, err := os.Open(event.Name)
io.Copy(out, in)
Some background theory
Note that io.Copy() pipes raw bytes – which is sort of understandable once you consider that it pipes data from an io.Reader to an io.Writer which provide Read([]byte) and Write([]byte), correspondingly.
As such, io.Copy() is able to deal with absolutely any source providing
bytes and absolutely any sink consuming them.
On the other hand, the location of the holes in a file is a "side-channel" information which "classic" syscalls such as read(2) hide from their users.
io.Copy() is not able to convey such side-channel information in any way.
In other words, file sparseness was originally just an idea for storing the data efficiently behind the user's back.
So, no, there's no way io.Copy() could deal with sparse files in itself.
What to do about it
You'd need to go one level deeper and implement all this using the syscall package and some manual tinkering.
To work with holes, you should use the SEEK_HOLE and SEEK_DATA special values for the lseek(2) syscall, which, while formally non-standard, are supported by all major platforms.
Unfortunately, support for those "whence" values is present neither in the stock syscall package (as of Go 1.8.1) nor in the golang.org/x/sys tree.
But fear not, there are two easy steps:
First, the stock syscall.Seek() is actually mapped to lseek(2)
on the relevant platforms.
Next, you'd need to figure out the correct values for SEEK_HOLE and
SEEK_DATA for the platforms you need to support.
Note that they are free to be different between different platforms!
Say, on my Linux system I can do a simple
$ grep -E 'SEEK_(HOLE|DATA)' </usr/include/unistd.h
# define SEEK_DATA 3 /* Seek to next data. */
# define SEEK_HOLE 4 /* Seek to next hole. */
…to figure out the values for these symbols.
Now, say, you create a Linux-specific file in your package
containing something like
// +build linux

package sparse // hypothetical package name; note the blank line required after the build tag

const (
	SEEK_DATA = 3
	SEEK_HOLE = 4
)
and then use these values with the syscall.Seek().
The file descriptor to pass to syscall.Seek() and friends
can be obtained from an opened file using the Fd() method
of os.File values.
The pattern to use when reading is to detect the regions containing data and read the data from them; a minimal sketch of that loop follows.
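This sketch assumes a Linux build and the SEEK_DATA/SEEK_HOLE constants from the file above; dataRegions is a hypothetical helper, not a standard API:
package sparse

import (
	"os"
	"syscall"
)

// dataRegions returns the [start, end) offsets of the data extents of f,
// where size is the total file size (e.g. from os.File.Stat()).
func dataRegions(f *os.File, size int64) ([][2]int64, error) {
	fd := int(f.Fd())
	var regions [][2]int64
	for offset := int64(0); offset < size; {
		start, err := syscall.Seek(fd, offset, SEEK_DATA)
		if err == syscall.ENXIO {
			break // no data past offset: the rest of the file is one hole
		}
		if err != nil {
			return nil, err
		}
		end, err := syscall.Seek(fd, start, SEEK_HOLE)
		if err != nil {
			return nil, err
		}
		regions = append(regions, [2]int64{start, end})
		offset = end
	}
	return regions, nil
}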
Note that this deals with reading sparse files; if you want to actually transfer them as sparse – that is, preserving that property – the situation is more complicated: it appears to be even less portable, so some research and experimentation is due.
On Linux, it appears you could try to use fallocate(2) with FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE to punch a hole at the end of the file you're writing to; if that legitimately fails (with syscall.EOPNOTSUPP), you just write as many zeroed blocks to the destination file as are covered by the hole you're reading, in the hope that the OS will do the right thing and convert them to a hole by itself.
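A minimal sketch of that write-side idea, assuming the golang.org/x/sys/unix package (which exposes Fallocate and the FALLOC_FL_* constants on Linux builds); punchHole is a hypothetical helper:
package sparse

import (
	"os"

	"golang.org/x/sys/unix"
)

// punchHole tries to turn [off, off+length) of dst into a hole without
// changing the file size; on EOPNOTSUPP the caller should fall back to
// writing zeroed blocks.
func punchHole(dst *os.File, off, length int64) error {
	return unix.Fallocate(int(dst.Fd()),
		unix.FALLOC_FL_PUNCH_HOLE|unix.FALLOC_FL_KEEP_SIZE, off, length)
}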
Note that some filesystems do not support holes at all – as a concept.
One example is the filesystems in the FAT family.
What I'm leading you to is that the inability to create a sparse file might actually be a property of the target filesystem in your case.
You might find Go issue #13548 "archive/tar: add support for writing tar containing sparse files" to be of interest.
One more note: you might also consider checking whether the destination directory resides on the same filesystem as the source file, and if it does, use syscall.Rename() (on POSIX systems) or os.Rename() to just move the file between directories without actually copying its data.
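A minimal sketch of that check on POSIX systems (sameFilesystem is a hypothetical helper; it compares the device IDs reported by stat(2)):
package sparse

import "syscall"

// sameFilesystem reports whether the two paths live on the same device,
// in which case a rename is possible instead of a copy.
func sameFilesystem(a, b string) (bool, error) {
	var sa, sb syscall.Stat_t
	if err := syscall.Stat(a, &sa); err != nil {
		return false, err
	}
	if err := syscall.Stat(b, &sb); err != nil {
		return false, err
	}
	return sa.Dev == sb.Dev, nil
}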
You don't need to resort to syscalls.
package main
import "os"
func main() {
	// Error handling is omitted for brevity in this demo.
	f, _ := os.Create("/tmp/sparse.dat")
	defer f.Close()
	f.Write([]byte("start"))
	// Seek to an absolute offset ~10 MiB into the file; the skipped
	// range is never written and therefore becomes a hole.
	f.Seek(1024*1024*10, 0)
	f.Write([]byte("end"))
}
Then you'll see:
$ ls -l /tmp/sparse.dat
-rw-rw-r-- 1 soren soren 10485763 Jun 25 14:29 /tmp/sparse.dat
$ du /tmp/sparse.dat
8 /tmp/sparse.dat
It's true you can't use io.Copy as is. Instead you need to implement an alternative to io.Copy which reads a chunk from the src, checks if it's all '\0'. If it is, just dst.Seek(len(chunk), os.SEEK_CUR) to skip past that part in dst. That particular implementation is left as an exercise to the reader :)
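A rough sketch of that exercise, under the assumptions above (sparseCopy is a hypothetical name; it skips all-zero chunks by seeking forward in dst and truncates at the end so a trailing hole still produces the right file size):
package sparse

import (
	"bytes"
	"io"
	"os"
)

func sparseCopy(dst, src *os.File) (int64, error) {
	buf := make([]byte, 64*1024)
	zeros := make([]byte, 64*1024)
	var written int64
	for {
		n, err := src.Read(buf)
		if n > 0 {
			if bytes.Equal(buf[:n], zeros[:n]) {
				// All zeros: leave a hole in dst instead of writing.
				if _, err := dst.Seek(int64(n), io.SeekCurrent); err != nil {
					return written, err
				}
			} else if _, err := dst.Write(buf[:n]); err != nil {
				return written, err
			}
			written += int64(n)
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			return written, err
		}
	}
	// Extend dst to the full logical size in case the file ends in a hole.
	return written, dst.Truncate(written)
}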

ZIP file format. How to read file properly?

I'm currently working on a Node.js project. I want to be able to read, modify and write ZIP files without saving them to the FS (we receive a file over TCP and send it back after modifications are made), and so far it looks possible because of the simple ZIP file structure. Currently I refer to this documentation.
So ZIP file has simple structure:
File header 1
File data 1
File data descriptor 1
File header 2
File data 2
File data descriptor 2
...
[other not important yet]
First we need to read the file header, which contains the compressed size field; that would seem the perfect way to read file data 1 by its length. But it actually is not: this field may contain '0' or '0xFFFFFFFF', and those values don't describe the actual length. In that case we have to read the file data without knowing its length. But how?
The compression/decompression algorithm descriptions look pretty complex to me, and I plan to use zlib for the compression itself anyway. So if something useful is described there, I missed it.
Can someone explain the proper way to read those files?
P.S. Please avoid suggesting npm modules. I do not want to only solve the problem, but also to understand how things work.
Note - I'm assuming you want to read and process the zip file as
it comes off the socket, rather than reading the complete zip file into
memory before processing. Both options are valid.
I'd initially ignore the use cases where the compressed size has a value of '0' or '0xFFFFFFFF'. The former is only present in zip files created in streaming mode, the latter for zip files larger than 4Gig.
Dealing with them adds a lot of complexity - you can add support for them later, if necessary. Whether you ever need to support the 0/0xFFFFFFFF use cases depends on the nature of the zip files you intend to process.
When the compression method is deflated (8), use zlib for compression/decompression. You also need to support compression method stored (0). It gets used for very small files where compression isn't appropriate.
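To make the header layout concrete, here is a minimal sketch of parsing the fixed 30-byte local file header (in Go rather than Node.js, purely to show the byte offsets; the struct and function names are hypothetical). The offsets follow the documentation referenced in the question:
package zipread

import (
	"encoding/binary"
	"errors"
	"io"
)

type localHeader struct {
	flags      uint16 // bit 3 set means streaming mode: sizes are in a trailing data descriptor
	method     uint16 // 0 = stored, 8 = deflate
	compSize   uint32
	uncompSize uint32
	nameLen    uint16
	extraLen   uint16
}

func readLocalHeader(r io.Reader) (*localHeader, error) {
	var buf [30]byte
	if _, err := io.ReadFull(r, buf[:]); err != nil {
		return nil, err
	}
	if binary.LittleEndian.Uint32(buf[0:4]) != 0x04034b50 { // "PK\x03\x04"
		return nil, errors.New("not a local file header")
	}
	return &localHeader{
		flags:      binary.LittleEndian.Uint16(buf[6:8]),
		method:     binary.LittleEndian.Uint16(buf[8:10]),
		compSize:   binary.LittleEndian.Uint32(buf[18:22]),
		uncompSize: binary.LittleEndian.Uint32(buf[22:26]),
		nameLen:    binary.LittleEndian.Uint16(buf[26:28]),
		extraLen:   binary.LittleEndian.Uint16(buf[28:30]),
	}, nil
}
After the header come nameLen bytes of file name, extraLen bytes of extra field, and then compSize bytes of file data (when bit 3 of the flags is not set).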

Stream definition: Ignore all files but one filetype

We have a server with a depot that does not allow committing files through a plain client mapping, therefore I need a stream configuration.
Now I struggle with a task which I would assume should be simple:
We have a very large stream with lots of different file types and I would like to check out the entire stream but get only a certain file type.
Can this be done with Perforce without blacklisting every unwanted file type?
Edit: Sorry that I (for some reason) omitted so much information in my question.
I am already setting up a virtual stream where the UI gives me three nice fields:
Paths – where I can enter import, share and isolate paths
Remapping – ignored in my case
Ignored – here I can enter wildcards to ignore directories or files
I was hoping that by creating a virtual stream I actually could define the file types I want, e.g. I could write an import statement like
import RootDir/....txt //Depot/mainline/RootDir/....txt (note the four dots: three for Perforce's "..." wildcard plus the literal dot of the extension);
however the stream definition does not support this and only allows me to write
import RootDir/... //Depot/mainline/RootDir/...
Since I was not able to find a way to whitelist the files I wanted, the only option I knew was to blacklist everything I did not want, but I would like to avoid that because my Ignored list would be dozens of entries long.
Now I will look into that sync hint, because I could use the full stream spec without a filter and only sync the files I need on disk, which might be very good.
There are a few different things going on in your question but this seems the most like a statement of what you're trying to do so I'm going to zero in on it:
I would like to check out the entire stream but get only a certain
file type.
If by "check out" you mean you only want to sync that file type to your local workspace:
p4 sync ....TXT
If by "check out" you mean you want to open only that file type for edit:
p4 edit ....TXT
ANY operation in Perforce that operates on files accepts an arbitrary file path, because Perforce tracks all of its state per-file. This is true whether you're using classic clients or streams.
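For example, using the depot path from the question's import line, a sync restricted to .txt files could look like:
p4 sync //Depot/mainline/RootDir/....txt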
There needs to be some mechanism for telling the Helix (Perforce) server that you only want to retrieve certain files from the stream.
Virtual Streams may be a good fit here, as they allow you to filter the view of an existing stream.
This means you can sync only the files you want and when you submit you will be submitting directly back to the stream your virtual stream is based on.
More information is available here:
https://www.perforce.com/perforce/doc.current/manuals/p4v/p4v_virtual_streams.html

TLClientNode connection

I need to instantiate the recent version of the ICache from the ROCKET-CHIP project stand-alone. I was able to test this instantiation using a 6-month-old version. However, I am facing trouble with its 'mem' port in the recent version:
val node = TLClientNode(TLClientParameters(sourceId = IdRange(0,1)))
.....
val mem = outer.node.bundleOut
According to my understanding, the ROCKET-CHIP project started to use a special type of node where both SOURCE and SINK nodes have to be connected to a crossbar using the 'TLXbar' class. I tried to follow the code in http://stackissue.com/ucb-bar/rocket-chip/tilelink2-245.html but it seems obsolete. Can anyone point out to me how I can connect this port?
Recently I successfully created a trivial TileLink2 node (just passing input to output with some masks) and inserted it between l1backend.node and TileNetwork.masterNodes.head. So I think my experience might be helpful.
Rocket-chip's diplomacy package extends Chisel's Module hierarchy. It mainly consists of two parts: LazyModule and LazyModuleImp, where LazyModuleImp is the real Module in the Chisel world.
Nodes are always created in a LazyModule, while node.bundleIn/Out should be referenced inside the LazyModuleImp. We should use the nodes in LazyModules to interconnect them with each other via :=.
Another thing that might be helpful is that inside LazyModuleImp we can only reference bundleIn/Out in IO bundles from nodes that directly belong to the corresponding LazyModule.
For example, if you have a sub lazy module such as XXXCrossing which contains a node, you'd better not use its bundleIn/Out as your current lazy module's IO bundles. Otherwise, the Chisel code might compile successfully but the FIRRTL result will contain undeclared symbols.

Resources