I am trying to create a patch file using the diff tool, but I am facing an issue.
I have created a directory named a and put the original file into it:
a/original_file.c
I have created another directory named b and put the same file, with modified content, into it:
b/original_file.c
I copied the content of the b/original_file.c file from the internet and put it into a text editor.
After running the command diff -Naur a b > patch_file.patch, I can see that patch_file.patch is generated, but it contains some unwanted changes related to indentation.
For example:
return msg (MSG_NOTIFY, &msg, senr,
- sizeof (struct msgotify));
+ sizeof (struct msgotify));
You can see there are changes related only to indentation, where sizeof (struct msgotify)) is replaced by the same sizeof (struct msgotify)) with different indentation, which is not what we want.
Could anybody let me know how to get rid of this problem?
If you don't care about changes in spacing, add -b to the diff command that generates the patch.
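For example: diff -Naurb a b > patch_file.patch. In GNU diff, -b is --ignore-space-change, which treats lines that differ only in the amount of whitespace as identical; -w (--ignore-all-space) is the stronger option if you want to ignore whitespace differences entirely.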
I am trying to extract libraries from the dyld_shared_cache, and I need to fix up external references.
For example, the pointers in the __DATA.__objc_selrefs section usually point to data outside the Mach-O file; to fix that I would have to copy the corresponding C string from the dyld cache and append it to the __TEXT.__objc_methname section.
Though from my understanding of the Mach-O file format, this extension of the __TEXT.__objc_methname would shift all the sections after it and would force me to fix all the offsets and pointers that reference them. Is there a way to add data to a section without breaking a lot of things?
Thanks!
Thanks to @Kamil.S for the idea about adding a new load command and section.
One way to add more data to a section is to create a duplicate segment and section and insert them before the __LINKEDIT segment; a rough C sketch of the structures involved follows the steps below.
- Slide the __LINKEDIT segment so we have space to add the new section.
  - Define the slide amount; this must be page-aligned, so I chose 0x4000.
  - Add the slide amount to the relevant load commands; this includes but is not limited to:
    - the __LINKEDIT segment (duh)
    - dyld_info_command
    - symtab_command
    - dysymtab_command
    - linkedit_data_commands
  - Physically move the __LINKEDIT data in the file.
- Duplicate the section and change the following (see note below):
  - size, should be the length of your new data.
  - addr, should be in the free space.
  - offset, should be in the free space.
- Duplicate the segment and change the following (see note below):
  - fileoff, should be the start of the free space.
  - vmaddr, should be the start of the free space.
  - filesize, anything as long as it is bigger than your data.
  - vmsize, must be identical to filesize.
  - nsects, change to reflect how many sections you're adding.
  - cmdsize, change to reflect the size of the segment command and its section commands.
- Insert the duplicated segment and sections before the __LINKEDIT segment.
- Update the mach_header:
  - ncmds
  - sizeofcmds
- Physically write the extra data into the file.

Note: you can optionally change the segname and sectname fields, though it isn't necessary. Thanks Kamil.S!
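For reference, here is a minimal C sketch of the fields listed above, using the structures from <mach-o/loader.h>. The helper name fill_new_segment, the segment name __ARBITRARY and the new_seg_vmaddr / new_seg_fileoff / new_data_len parameters are placeholders of my own; the real values come from the free space you created by sliding __LINKEDIT.

#include <mach-o/loader.h>
#include <stdint.h>
#include <string.h>

// Hypothetical helper: fill in a duplicated segment/section pair that points
// into the gap created by sliding __LINKEDIT. Fields not set here (flags,
// align, ...) are assumed to be zeroed by the caller.
void fill_new_segment(struct segment_command_64 *seg, struct section_64 *sect,
                      uint64_t new_seg_vmaddr, uint64_t new_seg_fileoff,
                      uint64_t new_data_len)
{
    // Segment command: one section, placed in the free space before __LINKEDIT.
    seg->cmd      = LC_SEGMENT_64;
    seg->cmdsize  = sizeof(*seg) + sizeof(*sect); // segment command + its section commands
    strncpy(seg->segname, "__ARBITRARY", sizeof(seg->segname));
    seg->vmaddr   = new_seg_vmaddr;               // start of the free space
    seg->fileoff  = new_seg_fileoff;              // start of the free space
    seg->filesize = 0x4000;                       // anything >= new_data_len
    seg->vmsize   = seg->filesize;                // must match filesize
    seg->maxprot  = VM_PROT_READ;
    seg->initprot = VM_PROT_READ;
    seg->nsects   = 1;

    // Section: describes the new data inside the free space.
    strncpy(sect->sectname, "__objc_methname", sizeof(sect->sectname));
    strncpy(sect->segname, seg->segname, sizeof(sect->segname));
    sect->addr   = new_seg_vmaddr;                // in the free space
    sect->size   = new_data_len;                  // length of your new data
    sect->offset = (uint32_t)new_seg_fileoff;     // in the free space

    // Remember to also bump ncmds and sizeofcmds in the mach_header, and to
    // physically write the new bytes at new_seg_fileoff in the file.
}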
UPDATE
After clarifying with the OP that the extension of __TEXT.__objc_methname would happen during Mach-O post-processing of an existing executable, I took a fresh look at the problem.
Another take would be to create a new LC_SEGMENT_64 load command with a new __TEXT_EXEC.__objc_methname segment / section entry (normally __TEXT_EXEC is used for some kernel stuff, but essentially it's the same thing as __TEXT). Here's a quick POC to illustrate the concept:
#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        printf("%lx", (unsigned long)[NSObject new]);
    }
    return 0;
}
Compile like this:
gcc main.m -c -o main.o
ld main.o -rename_section __TEXT __objc_methname __TEXT_EXEC __objc_methname -lobjc -lc
Interestingly, only ld up to High Sierra 10.14.6 generates __TEXT.__objc_methname; there is no trace of it on Catalina, where it is done differently.
UPDATE2.
Playing around with it, I noticed that execute permissions on the __TEXT segment (and on __TEXT_EXEC, for that matter) are not required for __objc_methname to work.
Even better, the specific segment & section names are not required:
I could pull off:
__DATA.__objc_methname
__DATA_CONST.__objc_methname
__ARBITRARY.__arbitrary
or, in my case, the last __DATA section, __DATA.__objc_classrefs, where the selector name simply got appended to the original data.
It's all fine as long as a proper null-terminated C string with the selector name is there. If I intentionally break the "new\0" in a hex editor or MachOView, I get
"+[NSObject ne]: unrecognized selector sent to instance ..."
upon launching my POC executable, so the value is definitely used.
So to sum up, the __TEXT.__objc_methname section itself is likely just a debugger hint emitted by the linker. The runtime seems to only need the selector names as char* somewhere in memory.
I'm reading the Understanding the Linux Kernel book, and it says that the list of file_lock structures is stored in the file's inode (in the i_flock field).
But sys_flock() in Linux 2.6.11.12 will eventually call flock_lock_file(), which uses filp->f_dentry->d_inode->i_flock to get the list of file_lock structures, and filp->f_dentry is a dentry of the directory which "contains" the file.
int flock_lock_file(struct file *filp, struct file_lock *new_fl) {
    // ...
    struct inode *inode = filp->f_dentry->d_inode;
    // ...
}
Suppose that the file_lock list is linked via filp->f_dentry->d_inode->i_flock. What will happen when a hard link exists:
/some_path/foo/file.txt
/another_path/bar/file_link
and file_link is a hard link to file.txt
When we use these two paths to open the same file, sys_open() will set filp->f_dentry to the dentries under foo and bar separately, won't it? If my guess is right, how can file_lock work?
The file_lock list is indeed stored in the inode of the corresponding file.
An inode can be referred to by several directory entries, which are linked together on the inode's i_dentry list. Even though a single file opened via different paths may end up with different filp->f_dentry values, filp->f_dentry->d_inode always refers to the same inode.
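A quick user-space sketch of the same point, assuming the two hypothetical paths from the question are hard links to each other: the two opens go through different dentries, but fstat() reports the same device/inode pair, and that shared inode is where i_flock lives.

#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd1 = open("/some_path/foo/file.txt", O_RDONLY);
    int fd2 = open("/another_path/bar/file_link", O_RDONLY);
    if (fd1 < 0 || fd2 < 0) { perror("open"); return 1; }

    struct stat s1, s2;
    fstat(fd1, &s1);
    fstat(fd2, &s2);

    // Different dentries, same underlying inode -> same lock list (i_flock).
    printf("same inode: %s\n",
           (s1.st_dev == s2.st_dev && s1.st_ino == s2.st_ino) ? "yes" : "no");

    close(fd1);
    close(fd2);
    return 0;
}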
I am trying to download a BLOB file from an Oracle DB. I used dbms_lob.substr to cut the binary data into parts (the max length of the HEX field is 2,000). So I cut it, then I put the data into a .docx file. When I open it I see the message:
Word found a problem with content in file test777.docx
and it asks me to repair the file. After Office repairs it, the document opens fine.
I think the core problem is shown in this screenshot:
(screenshot omitted)
After cutting, the remaining part of the last chunk is padded out with '02'. So when I write it to a file and open it in a binary viewer I see lots of spaces at the end. As I understand it, that is the core problem.
(screenshot omitted)
Does anyone know how to avoid this? I think the problem is in the download method.
How can I repair a bunch of files the way Office does? (I have nearly 100 files every month.)
You didn't specify a variable name for the length of the BLOB, so I will use BLOB_LENGTH. You need to make sure not to write out more than the full length. Also, you do not want the MOD option on the FILE statement, since you are creating the file, not appending to an existing one.
data _null_;
  length fv $ 120;
  set blobs;
  /* build the output path from the file name stored in the data set */
  fv = "k:\Folder\" || File_nm;
  /* RECFM=N writes a binary stream with no record delimiters */
  file writeout FILEVAR=fv recfm=n;
  array blob[8] blob_1-blob_8;
  do i = 1 to 8;
    /* write at most 2000 bytes per chunk, and never past BLOB_LENGTH */
    len = max(0, min(2000, blob_length - 2000*(i-1)));
    put blob[i] $varying2000. len;
  end;
run;
I have a Perl script that traverses a set of directories, and when it hits one of them it blows up with an "Invalid argument" error, which I want to be able to programmatically skip. I thought I could start by finding out the file type with the file command, but it too blows up, like this:
$ file /sys/devices/virtual/net/br-ex/speed
/sys/devices/virtual/net/br-ex/speed: ERROR: cannot read `/sys/devices/virtual/net/br-ex/speed' (Invalid argument)
If I print out the mode of the file with the Perl or Python stat function it tells me 33060, but I'm not sure what all the bits mean, and I'm hoping a particular one would tell me not to try to look inside. Any suggestions?
To understand the stat number you got, you need to convert the number to octal (in Python, oct(...)).
Then you'll see that 33060 converts to 100444. You're interested only in the last three digits (444). The first digit is the file owner's permissions, the second is the group's and the third is everyone else's.
You can look at each of the numbers (in your case all are 4) as 3 binary bits, in this order: read-write-execute.
Since in your case owner, group & others all have 4, it translates (for all of them) to 100 (in binary), which means that only the read bit is on for all three, meaning that all three can only read the file.
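To make that concrete, here is a small C sketch that decodes the same number (the equivalent high-level checks are -f / -d in Perl and os.path.isfile / os.path.isdir in Python):

// Decode the st_mode value 33060 from the question.
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    mode_t mode = 33060;

    printf("mode in octal: %o\n", (unsigned)mode);             /* 100444 */
    printf("regular file:  %s\n", S_ISREG(mode) ? "yes" : "no");
    printf("directory:     %s\n", S_ISDIR(mode) ? "yes" : "no");
    printf("owner bits:    %o\n", (unsigned)(mode & S_IRWXU)); /* 400: read only */
    return 0;
}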
As far as file permissions go, you should have been successful reading /sys/devices/virtual/net/br-ex/speed.
There are two reasons for the read to fail:
- Either speed is a directory (directories require execute permission to read their contents).
- Or it's a special file, which can be tested with the -f operator in Perl or bash, or with os.path.isfile(...) in Python.
Anyhow, you can use the following links to filter files & directories according to their permissions in the 3 languages you mentioned:
ways to test permissions in perl.
ways to test permissions in python.
ways to test permissions in bash.
Not related to this particular case, but I hit the same error when I ran file on a malicious ELF (Linux executable) file. In that case it was because the program headers of the ELF were intentionally corrupted. Looking at the source code of the file command, this is clear, as it checks the ELF program headers and bails out with the same error in case the headers are corrupted:
/*
 * Loop through all the program headers.
 */
for ( ; num; num--) {
    if (pread(fd, xph_addr, xph_sizeof, off) <
        CAST(ssize_t, xph_sizeof)) {
        file_badread(ms);
        return -1;
    }
TL;DR: the file command checks not only the magic bytes; it also performs other checks to validate a file type.
I'm trying to check whether a folder has any subfolders, without iterating through its children, on Linux. The closest I've found so far is using ftw and stopping at the first subfolder, or using scandir and filtering the results. Both are, however, overkill for my purposes; I simply want a yes/no.
On Windows, this is done by calling SHGetFileInfo and then testing dwAttributes & SFGAO_HASSUBFOLDER on the returned structure. Is there such an option on Linux?
The standard answer is to call stat on the directory, then check the st_nlink field ("number of hard links"). On a standard filesystem, each directory is guaranteed to have 2 hard links (. and the link from the parent directory to the current directory), so each hard link beyond 2 indicates a subdirectory (specifically, the subdirectory's .. link to the current directory).
However, it's my understanding that filesystems aren't required to implement this (see, e.g., this mailing list posting), so it's not guaranteed to work.
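A minimal C sketch of that check (with the caveat above that the link-count convention is not guaranteed on every filesystem):

#include <stdbool.h>
#include <sys/stat.h>

// "." and the parent's entry account for two links; every subdirectory adds
// one more via its ".." entry back to this directory.
bool probably_has_subdir(const char *dir)
{
    struct stat st;
    if (stat(dir, &st) != 0 || !S_ISDIR(st.st_mode))
        return false;
    return st.st_nlink > 2;
}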
Otherwise, you have to do as you're doing:
Iterate over the directory's contents using glob with the GNU-specific GLOB_ONLYDIR flag, or scandir, or readdir (a glob sketch follows this list).
Call stat on each result and check S_ISDIR(s.st_mode) to verify that the files found are directories. Or, nonportably, check struct dirent.d_type: if it's DT_DIR then it's a directory, and if it's DT_UNKNOWN, you'll have to stat it after all.
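And a rough sketch of the glob variant; GLOB_ONLYDIR is only an optimization hint, so each match is still verified with stat (note that the * pattern skips dot-directories):

#include <glob.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

bool has_subdir_glob(const char *dir)
{
    char pattern[PATH_MAX];
    snprintf(pattern, sizeof(pattern), "%s/*", dir);

    glob_t g;
    bool found = false;
    if (glob(pattern, GLOB_ONLYDIR | GLOB_NOSORT, NULL, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc && !found; i++) {
            struct stat st;
            if (stat(g.gl_pathv[i], &st) == 0 && S_ISDIR(st.st_mode))
                found = true;
        }
        globfree(&g);
    }
    return found;
}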
The possibilities you've mentioned (as well as e.James's) seem to me like they're better suited to a shell script than a C++ program. Presuming the "C++" tag was intentional, I think you'd probably be better off using the POSIX API directly:
// warning: untested code.
#include <dirent.h>
#include <sys/stat.h>
#include <string>

bool has_subdir(char const *dir) {
    std::string dot("."), dotdot("..");
    bool found_subdir = false;
    DIR *directory;

    if (NULL == (directory = opendir(dir)))
        return false;

    struct dirent *entry;
    while (!found_subdir && ((entry = readdir(directory)) != NULL)) {
        if (entry->d_name != dot && entry->d_name != dotdot) {
            // d_name is relative to dir, so build the full path before stat'ing.
            std::string path = std::string(dir) + "/" + entry->d_name;
            struct stat status;
            if (stat(path.c_str(), &status) == 0)
                found_subdir = S_ISDIR(status.st_mode);
        }
    }
    closedir(directory);
    return found_subdir;
}
Does getdirentries do what you want it to do? I think it should return nothing if there are no directories. I would have tried this myself, but am temporarily without access to a Linux box :(