open limitation based on file size - linux

Is there any limitation on open() based on file size?
My file is 2 GB; will it open successfully, and can any timing issues come up?
The filesystem is rootfs.

From the open man page:
O_LARGEFILE
(LFS) Allow files whose sizes cannot be represented in an off_t
(but can be represented in an off64_t) to be opened. The
_LARGEFILE64_SOURCE macro must be defined in order to obtain
this definition. Setting the _FILE_OFFSET_BITS feature test
macro to 64 (rather than using O_LARGEFILE) is the preferred
method of accessing large files on 32-bit
systems (see feature_test_macros(7)).
On a 64-bit system, off_t will be 64 bits and you'll have no problem. On a 32-bit system, you'll need the suggested workaround to allow for files larger than 2 GB.
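As a minimal sketch of that workaround (the path /tmp/big.dat is just a placeholder): built with -D_FILE_OFFSET_BITS=64 on a 32-bit system, off_t becomes 64 bits wide, so open() and lseek() can address offsets past 2 GiB without touching O_LARGEFILE.

/* Sketch only. Build on 32-bit with: gcc -D_FILE_OFFSET_BITS=64 check.c */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("sizeof(off_t) = %zu\n", sizeof(off_t));  /* 8 once LFS is in effect */

    int fd = open("/tmp/big.dat", O_RDONLY);         /* placeholder 2 GB+ file */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    off_t end = lseek(fd, 0, SEEK_END);              /* valid past 2 GiB with a 64-bit off_t */
    printf("file size: %lld bytes\n", (long long)end);
    close(fd);
    return 0;
}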

rootfs may not support large files; consider using a proper filesystem instead (tmpfs is almost the same as rootfs, but with more features).
rootfs is intended only for booting and early use.

Related

File size can't exceed 2 GB

I am doing Monte Carlo simulations. I am writing the results of my program into a huge file with fprintf, to avoid holding large arrays in memory, because they would require too much RAM.
The problem is that once the file reaches 2 GB, the program can't write to it anymore. I did some research on this and other sites, but I didn't find a helpful answer to my problem.
I am using Ubuntu 12.04 LTS with an ext4 filesystem and the partition size is 88 GB. I am not good at computer science and I don't even know what ext means, but I have read that this filesystem can support individual files of at least 16 GB.
So can anyone tell me what to do?
The maximum file size for a 32-bit process is 2^31 bytes (2 GiB), but using the LFS interface, applications on filesystems that support LFS can handle files as large as 2^63 bytes.
Thank you for your answer, it was very helpful. I replaced fopen with fopen64, compiled with -D_FILE_OFFSET_BITS=64, and everything works fine :)
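For the fprintf use case above, a minimal sketch of the preferred fix (the file name results.txt and the loop body are placeholders, not the asker's actual program): compiled with -D_FILE_OFFSET_BITS=64, plain fopen()/fprintf() keep working past the 2 GiB mark, and ftello() reports the 64-bit offset.

/* Sketch only. Build with: gcc -D_FILE_OFFSET_BITS=64 sim.c */
#include <stdio.h>

int main(void)
{
    FILE *out = fopen("results.txt", "w");           /* placeholder output file */
    if (!out) {
        perror("fopen");
        return 1;
    }

    for (long long i = 0; i < 100000000LL; i++)      /* stand-in for the simulation loop */
        fprintf(out, "%lld %.17g\n", i, (double)i * 0.5);

    printf("bytes written: %lld\n", (long long)ftello(out));
    fclose(out);
    return 0;
}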

Getting disk sector size without raw filesystem permission

I'm trying to get the sector size, specifically so I can correctly size the buffer for reading/writing with O_DIRECT.
The following code works when my app's run as root:
int fd = open("/dev/xvda1", O_RDONLY|O_NONBLOCK);
size_t blockSize;
int rc = ioctl(fd, BLKSSZGET, &blockSize);
How can I get the sector size without it being run as root?
According to the Linux manpage for open():
In Linux alignment restrictions vary by file system and kernel version and might be absent entirely. However there is currently no file system-independent interface for an application to discover these restrictions for a given file or file system. Some file systems provide their own interfaces for doing so, for example the XFS_IOC_DIOINFO operation in xfsctl(3).
So it looks like you may be able to obtain this information using xfsctl()... if you are using xfs.
Since your underlying block device is a Xen virtual block device and there might be any number of layers below that (LVM, dm-crypt, another filesystem, etc...) I'm not sure how meaningful all of this will really be for you.
You could use stat(2) (or a related syscall, perhaps on some particular file) and read the st_blksize field. However, that gives a filesystem-related block size, not the sector size preferred by the hardware. But for O_DIRECT input from a file on a filesystem, st_blksize may well be the more relevant number.
Otherwise, I would suggest a power-of-two size, perhaps 8 Kbytes or 64 Kbytes, as the size of your O_DIRECT-ed reads (and you may want to align your read buffer to the page size, usually 4 Kbytes).
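A minimal sketch of the stat()-based suggestion (the path /var/data/input.bin is a placeholder, and 4096 is assumed as a typical page-size alignment): read st_blksize, then allocate an aligned buffer for O_DIRECT with posix_memalign().

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("/var/data/input.bin", &st) == -1) {    /* placeholder path */
        perror("stat");
        return 1;
    }
    printf("st_blksize = %ld\n", (long)st.st_blksize);

    /* Align the I/O buffer; 4096 assumed as a common page-size multiple. */
    void *buf;
    if (posix_memalign(&buf, 4096, 64 * 1024) != 0) {
        fprintf(stderr, "posix_memalign failed\n");
        return 1;
    }
    /* ... use buf for O_DIRECT reads ... */
    free(buf);
    return 0;
}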

Large file support

Could someone please explain what exactly this O_LARGEFILE option does to support opening large files?
And can there be any side effects of compiling with the -D_FILE_OFFSET_BITS=64 flag? In other words, when compiling with this option, is there anything we have to watch out for?
Use _FILE_OFFSET_BITS in preference to O_LARGEFILE. Both are used on 32-bit systems to allow opening files so large that their size exceeds the range of a 32-bit file offset.
No, you don't have to do anything special. If you are on 64-bit Linux it makes no difference anyway.
From man 2 open:
O_LARGEFILE
(LFS) Allow files whose sizes cannot be represented in an off_t (but can be represented in an off64_t) to be opened. The _LARGEFILE64_SOURCE macro must be defined in order to obtain this definition. Setting the _FILE_OFFSET_BITS feature test macro to 64 (rather than using O_LARGEFILE) is the preferred method of accessing large files on 32-bit systems (see feature_test_macros(7)).
Edit: (ie. RTM :P)
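One concrete thing to double-check after adding -D_FILE_OFFSET_BITS=64 to a 32-bit build (a sketch, not an exhaustive list): off_t is no longer a plain long, so printing or formatting offsets should go through a cast rather than %ld.

/* Sketch; assumes the build uses -D_FILE_OFFSET_BITS=64 (or a 64-bit system). */
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>

int main(void)
{
    off_t pos = (off_t)3 * 1024 * 1024 * 1024;   /* 3 GiB, only representable in a 64-bit off_t */
    printf("sizeof(off_t) = %zu\n", sizeof(off_t));
    printf("pos = %jd\n", (intmax_t)pos);        /* cast to intmax_t; %ld would be wrong on 32-bit */
    return 0;
}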

What is the max number of files that can be kept in a single folder, on Win7/Mac OS X/Ubuntu Filesystems?

I'm wondering what the maximum number of files that can be present in a single folder is, for the file systems used by the prevalent OSes mentioned above. I need this information in order to decide on the lowest common denominator, so that the folder I'm building can be opened and accessed on any OS.
In Windows (assuming NTFS): 4,294,967,295 files
In Linux (assuming ext4): also 4 billion files (though it can be lower, depending on the inode settings chosen at format time)
In Mac OS X (assuming HFS): 2.1 billion
But I have put around 65,000 files into a single directory, and I have to say that just loading the file list can kill an average PC.
This depends on the filesystem. The lowest common denominator is likely FAT32 which only allows 65,534 files in a directory.
These are the numbers I could find:
FAT16 (old format, can be ignored): 512
FAT32 (still used a lot, especially on external media): 65,534
NTFS: 4,294,967,295
ext2/ext3 (Linux): Depends on configuration at format time, up to 4,294,967,295
HFS+ (Mac): "up to 2.1 billion"
Most modern OSes have no upper limit, or a very high upper limit. However, performance usually begins to degrade when you have something on the order of 10,000 files; it's a good idea to break your directory into multiple subdirectories before this point.
From what I know of Windows 7, you can have an unlimited number of files per directory. BUT the more files you have on a volume, the worse the performance of that volume will be.
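As an illustration of the advice above about splitting a large directory into subdirectories, a minimal sketch (the data/ prefix, the 256-bucket layout and the djb2-style hash are arbitrary choices, not anything mandated by the filesystems discussed):

#include <stdio.h>

/* Map a file name onto one of 256 shard subdirectories. */
static unsigned shard_for(const char *name)
{
    unsigned h = 5381;                            /* djb2-style string hash */
    for (; *name; name++)
        h = h * 33 + (unsigned char)*name;
    return h % 256;
}

int main(void)
{
    const char *name = "result-000123.bin";       /* placeholder file name */
    char path[512];
    snprintf(path, sizeof path, "data/%02x/%s", shard_for(name), name);
    printf("store under: %s\n", path);            /* e.g. data/<bucket>/result-000123.bin */
    return 0;
}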

How to create a file of size more than 2GB in Linux/Unix?

I have a homework assignment where I have to transfer a very big file from one source to multiple machines using a BitTorrent-like algorithm. Initially I cut the file into chunks and transfer the chunks to all the targets. The targets have the intelligence to share the chunks they have with other targets, and that works fine. I wanted to transfer a 4 GB file, so I tarred together four 1 GB files. It didn't error out when I created the 4 GB tar file, but at the other end, while assembling all the chunks back into the original file, it errors out saying the file size limit was exceeded. How can I go about solving this 2 GB limitation problem?
I can think of two possible reasons:
You don't have Large File Support enabled in your Linux kernel
Your application isn't compiled with large file support (you might need to pass gcc extra flags to tell it to use 64-bit versions of certain file I/O functions, e.g. gcc -D_FILE_OFFSET_BITS=64)
This depends on the filesystem type. When using ext3, I have no such problems with files that are significantly larger.
If the underlying disk is FAT, NTFS or CIFS (SMB), you must also make sure you use the latest version of the appropriate driver. There are some older drivers that have file-size limits like the ones you experience.
Could this be related to a system limit configuration? Check with:
$ ulimit -a
A per-user file-size limit can be set by editing /etc/security/limits.conf:
vi /etc/security/limits.conf
vivek hard fsize 1024000
If you do not want any limit, remove the fsize line from /etc/security/limits.conf.
If your system supports it, you can get hints with: man largefile.
You should use fseeko and ftello; see fseeko(3).
Note that you should define _FILE_OFFSET_BITS to 64 before including any headers:
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
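A minimal sketch of the fseeko()/ftello() suggestion (output.bin is a placeholder name): with _FILE_OFFSET_BITS defined to 64 before any include, off_t is 64 bits and seeking past 2 GiB works on 32-bit builds too.

#define _FILE_OFFSET_BITS 64                          /* must come before any #include */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    FILE *f = fopen("output.bin", "w+");              /* placeholder file */
    if (!f) {
        perror("fopen");
        return 1;
    }

    off_t target = (off_t)3 * 1024 * 1024 * 1024;     /* 3 GiB offset */
    if (fseeko(f, target, SEEK_SET) != 0) {           /* 64-bit-safe seek */
        perror("fseeko");
        return 1;
    }
    fputc('x', f);                                    /* extends the (sparse) file past 3 GiB */
    printf("position: %lld\n", (long long)ftello(f));
    fclose(f);
    return 0;
}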
