In my VC2005 solution, when I build it, some warnings are displayed, such as "warning LNK4099: PDB 'libbmt.pdb' was not found...". But I don't know how to disable it.
It cannot be disabled, as it is on Microsoft's list of unignorable warnings.
If you have the source for the libraries you are using, you can rebuild them in Debug mode and copy the generated *.pdb files to the same directory as the libs you are linking.
If you do not have the source, there is a workaround, but it involves hex-editing the linker: https://connect.microsoft.com/VisualStudio/feedback/details/176188/can-not-disable-warning-lnk4099
Essentially, hex edit your link.exe (after backing it up!) to zap the
occurrence of 4099 in the list of non-ignorable warnings. I did it and
the hundred or so 4099 warnings disappeared! [L]ook
for the hex bytes 03 10 00 00 (which is 4099 as a 32-bit little-endian
hex value). Change it to (say) FF FF 00 00, save the file and you're
done.
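The byte-patching step described above can be scripted instead of done by hand in a hex editor. Below is a minimal Python sketch of that idea; the file names are placeholders, and the pattern/replacement bytes are exactly the ones quoted from the workaround. Note that a 4-byte pattern can also match unrelated data in the executable, so verify the patched linker afterwards.

```python
PATTERN = bytes([0x03, 0x10, 0x00, 0x00])      # 4099 as a 32-bit little-endian value
REPLACEMENT = bytes([0xFF, 0xFF, 0x00, 0x00])  # a warning number that never fires

def patch_linker(src: str, dst: str) -> int:
    """Copy src to dst, replacing every occurrence of PATTERN.

    Returns the number of occurrences replaced. Patch a copy (dst),
    never the original link.exe, and keep a backup.
    """
    with open(src, "rb") as f:
        data = f.read()
    count = data.count(PATTERN)
    with open(dst, "wb") as f:
        f.write(data.replace(PATTERN, REPLACEMENT))
    return count
```
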
I don't know about VS2005, but in newer versions you can ignore specific linker warnings by adding /ignore:4099 to the linker options.
I realize a question with the exact same title has already been answered, but the steps there require running the compiler and linker manually, whereas I want to use CMake.
I am trying to debug a C program with WinDbg. But I'm getting this error:
*** WARNING: Unable to verify checksum for main.exe
Reading a mailing list thread, I'm guessing I need to add a few flags, namely '/Zi' and '/Release'. But I'm building my project with CMake, and I don't know how to add those flags properly, so that I can also build my program with debug symbols using the GNU toolchain.
My CMakeLists.txt:
cmake_minimum_required(VERSION 3.00)
project(Hello LANGUAGES C)
add_executable(main src/main.c)
With the above CMake file, my program builds properly. Even a PDB file is generated, which WinDbg reads with no problem. But I can't see line information with .lines, and no source file is shown when debugging the EXE; only assembly instructions are shown.
After reading the mailing-list thread (mentioned above), I checked the checksum value of my EXE. It's zero. Now I need to know how to set up a CMake file so it produces an EXE with debug symbols and a proper checksum.
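For reference, the standard CMake knob for debug information is the build type; a minimal sketch along these lines (generator-specific behavior is an assumption of mine, not from the thread):

```cmake
# Sketch: requesting debug info from CMake.
cmake_minimum_required(VERSION 3.00)
project(Hello LANGUAGES C)
add_executable(main src/main.c)

# Single-config generators (Makefiles, Ninja): default to a Debug build,
# which passes -g with GCC and /Zi (plus /DEBUG at link time) with MSVC.
if(NOT CMAKE_BUILD_TYPE AND NOT CMAKE_CONFIGURATION_TYPES)
  set(CMAKE_BUILD_TYPE Debug)
endif()
```

With a multi-config Visual Studio generator, pick the configuration at build time instead, e.g. `cmake --build . --config Debug`.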
The checksum-verification warning turned out not to be the issue (it was just a warning after all, not an error). WinDbg didn't load line information. Either it's the default (although I don't know why that would be) or I mistakenly turned it off myself. Whatever the case, here is how you turn it on:
.lines -e
After that, WinDbg was able to bring up the source window of its own accord when I started debugging.
So I'm using this site to set up QEMU on my Lubuntu VM:
https://azeria-labs.com/emulate-raspberry-pi-with-qemu/
My errors happen when I'm trying to run QEMU: the screen stays black and it says "Guest has not initialized the display (yet)."
Looking at the error it says:
Error: invalid dtb and unrecognized/unsupported machine ID
r1=0x00000183 r2=0x00000100
r2[]=05 00 00 00 01 00 41 54 01 00 00 00 00 10 00 00
Available machine support:
ID (hex) NAME
ffffffff Generic DT based system
ffffffff ARM-Versatile (Device Tree Support)
Please check your kernel config and/or bootloader.
As you can see, I used the latest kernel and Raspbian image (Buster), so I'm not exactly sure if that's contributing to the error, because the source I'm using is pretty outdated.
$ qemu-system-arm -kernel ~/qemu_vms/kernel-qemu4.19.50-buster -cpu arm1176 -m 256 -M versatilepb -serial stdio -append "root=/dev/sda2 rootfstype=ext4 rw" -hda ~/qemu_vms/2019-09-26-raspbian-buster.img
I couldn't do the redir part from the online example because for some reason it kept saying -redir: invalid option.
Here is the visual output that it's giving me:
https://ibb.co/xDmj7D7
https://ibb.co/9YrmD2M
If anyone can tell me what I did wrong: the output should be something similar to the source I'm using. Thanks!
https://azeria-labs.com/emulate-raspberry-pi-with-qemu/
EDIT: Alright, I've made some progress since last time.
I forgot to include the DTB, because Buster needs this as well:
-dtb /.../versatile-pb.dtb \
https://github.com/dhruvvyas90/qemu-rpi-kernel
I used the command format from there, but my image file was raw, so I passed it via -drive with format=raw.
Then another error popped up:
vpb_sic_write: Bad register offset 0x2c
Solved with adding: -serial stdio
source: https://github.com/dhruvvyas90/qemu-rpi-kernel/issues/75
It looks like I'm in the Raspberry Pi, but my QEMU screen is still black, saying: Guest has not initialized the display (yet)
I had the same situation as described above with the Raspbian Buster image and kernel. But when I switched to 2019-04-08-raspbian-stretch-full.img and kernel-qemu-4.14.79-stretch without any other changes, I was able to get graphics (mouse cursor, desktop, etc.) in QEMU. It looks like versatile-pb.dtb has to be corrected for Raspbian Buster.
Raspbian Stretch in QEMU
I use the Buster image with the parameter -dtb versatile-pb-buster.dtb (you can download it from https://github.com/dhruvvyas90/qemu-rpi-kernel), and then it works.
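Putting the pieces of this thread together, a full invocation for the Buster image would look something like the sketch below. The file names follow the question and may need adjusting to your setup; note also that -redir was removed in newer QEMU releases, and port forwarding is now done with hostfwd.

```shell
qemu-system-arm \
  -M versatilepb -cpu arm1176 -m 256 \
  -kernel ~/qemu_vms/kernel-qemu4.19.50-buster \
  -dtb ~/qemu_vms/versatile-pb-buster.dtb \
  -drive file=~/qemu_vms/2019-09-26-raspbian-buster.img,format=raw \
  -append "root=/dev/sda2 rootfstype=ext4 rw" \
  -serial stdio \
  -nic user,hostfwd=tcp::5022-:22
```

The hostfwd line replaces the old -redir example (forwarding host port 5022 to guest port 22 for SSH); drop it if you don't need networking.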
I'm trying to create a script which detects binary files and gets the version of the compiler that built them. For example, if I have an executable which was compiled with gcc, I would like to get the version of gcc. I found out that I can use the Linux command ldd to inspect the executable, but is there a proper/cleaner and better way to do so?
Also, as I understand it, some of the Python binaries are also compiled with gcc. How can I detect the version there?
This depends on the binary format that you are looking at. As you mention Linux, I am assuming ELF files.
For that, you could use objdump -s --section .comment file:
my_executable: file format elf64-x86-64
Contents of section .comment:
0000 4743433a 20285562 756e7475 20372e33 GCC: (Ubuntu 7.3
0010 2e302d31 36756275 6e747533 2920372e .0-16ubuntu3) 7.
0020 332e3000 3.0.
For clang, this looks like:
my_executable: file format elf64-x86-64
Contents of section .comment:
0000 4743433a 20285562 756e7475 20382e31 GCC: (Ubuntu 8.1
0010 2e302d31 7562756e 74753129 20382e31 .0-1ubuntu1) 8.1
0020 2e300063 6c616e67 20766572 73696f6e .0.clang version
0030 20362e30 2e302d31 7562756e 74753220 6.0.0-1ubuntu2
0040 28746167 732f5245 4c454153 455f3630 (tags/RELEASE_60
0050 302f6669 6e616c29 00 0/final).
Another option, which is a bit more difficult to parse, would be to use strings:
strings hyrise/build-clang/hyriseClient | grep clang | head -n 1
clang version 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
Note that on Linux, clang might use ld from gcc, so even if the executable was built using clang, you might see both "GCC" and "clang" in the output.
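The strings-plus-grep approach can also be scripted. Here is a small Python sketch of my own (not from the answer above) that scans a binary for GCC/clang version strings, roughly what `strings | grep` does; the regex is a heuristic and may need widening for other compilers.

```python
import re

# Matches version strings like "GCC: (Ubuntu 7.3.0-16ubuntu3) 7.3.0"
# or "clang version 6.0.0-1ubuntu2" embedded in the binary.
SIGNATURE = re.compile(rb"(GCC: \([^)]*\) [\d.]+|clang version [^\s\x00]+)")

def compiler_signatures(path: str) -> list:
    """Return the distinct compiler version strings found in the file."""
    with open(path, "rb") as f:
        data = f.read()
    found = []
    for m in SIGNATURE.finditer(data):
        s = m.group(1).decode("ascii", "replace")
        if s not in found:
            found.append(s)
    return found
```

As with the .comment section, an executable built with clang but linked through GCC's toolchain may report both signatures.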
Background
I have an application that generates files that should be in Zip format, PKZIP version 6.3.3 to be exact. (For the curious: SIARD 2.0)
Sample File
I have uploaded a sample file to Google Drive:
sample.siard
Problem
When I point Infozip's unzip under Linux at the file, it complains:
testing: content/ OK
testing: content/schema0/ OK
testing: content/schema0/table0/ OK
testing: content/schema0/table0/table0.xml
error: invalid compressed data to inflate
...
The same error is given for all actual files (not directories).
Verbose file listing (unzip -v file) gives:
...
6064 Defl:F 1868 69% 2018-01-30 10:41 055f9f61 content/schema0/table0/table0.xml
...
(no errors here)
Infozip version
I have a reasonably new version of Infozip. unzip -v gives:
UnZip 6.00 of 20 April 2009, by Debian. Original by Info-ZIP.
Latest sources and executables are at ftp://ftp.info-zip.org/pub/infozip/ ;
see ftp://ftp.info-zip.org/pub/infozip/UnZip.html for other sites.
Compiled with gcc 4.9.2 for Unix (Linux ELF) on Jan 28 2017.
UnZip special compilation options:
ACORN_FTYPE_NFS
COPYRIGHT_CLEAN (PKZIP 0.9x unreducing method not supported)
SET_DIR_ATTRIB
SYMLINKS (symbolic links supported, if RTL and file system permit)
TIMESTAMP
UNIXBACKUP
USE_EF_UT_TIME
USE_UNSHRINK (PKZIP/Zip 1.x unshrinking method supported)
USE_DEFLATE64 (PKZIP 4.x Deflate64(tm) supported)
UNICODE_SUPPORT [wide-chars, char coding: UTF-8] (handle UTF-8 paths)
LARGE_FILE_SUPPORT (large files over 2 GiB supported)
ZIP64_SUPPORT (archives using Zip64 for large files supported)
USE_BZIP2 (PKZIP 4.6+, using bzip2 lib version 1.0.6, 6-Sept-2010)
VMS_TEXT_CONV
WILD_STOP_AT_DIR
[decryption, version 2.11 of 05 Jan 2007]
The only thing listed as NOT supported is unreducing, but that shouldn't be relevant.
When I try Python's zipfile module, it both tests and extracts with no problem. I have also heard that PKZIP itself has no problem with these files, but I don't have that installed personally.
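The zipfile check mentioned above can be as simple as the following sketch (the file name is a placeholder for the real archive):

```python
import zipfile

def check_archive(path: str) -> str:
    """CRC-check every member of the archive.

    Returns "OK" if all members pass, or the name of the first bad member.
    """
    with zipfile.ZipFile(path) as zf:
        bad = zf.testzip()  # reads and CRC-checks all members
        return bad if bad is not None else "OK"
```
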
So, I have no problem using these files myself, but they are intended for long-term archiving, and I really need to know:
The question
Is there a way for me to find out if there is a bug in the generation of these files or is there a bug in unzip's handling of them?
ZIP64?
I have searched the web and found a lot of people having problems with large files and the Zip64 format. However, my files are not large (up to 20 MB uncompressed).
Also, this version of unzip should support Zip64. (See version info above)
Tools
My preferred tools are Python, hex editors and the bash command line.
On the face of it, the message "invalid compressed data to inflate" suggests your zip file is corrupt. Are you certain that the exact same file can be read successfully with PKZIP but cannot with Infozip?
After a (very) quick glance at the SIARD standard, it looks like it just uses bog-standard zip files with deflate/store compression. That means that the zip file won't have used a feature that only PKZIP can handle.
One possibility is that the archive has been created with Zip64 extensions, but your version of Infozip doesn't support it.
If you run unzip -v it should print a line containing the string ZIP64_SUPPORT if it does.
For reference, this is what I get
$ unzip -v
UnZip 6.00 of 20 April 2009, by Info-ZIP. Maintained by C. Spieler. Send
bug reports using http://www.info-zip.org/zip-bug.html; see README for details.
Latest sources and executables are at ftp://ftp.info-zip.org/pub/infozip/ ;
see ftp://ftp.info-zip.org/pub/infozip/UnZip.html for other sites.
Compiled with gcc 4.8.3 20140911 (Red Hat 4.8.3-7) for Unix (Linux ELF) on Feb 25 2015.
UnZip special compilation options:
COPYRIGHT_CLEAN (PKZIP 0.9x unreducing method not supported)
SET_DIR_ATTRIB
SYMLINKS (symbolic links supported, if RTL and file system permit)
TIMESTAMP
UNIXBACKUP
USE_EF_UT_TIME
USE_UNSHRINK (PKZIP/Zip 1.x unshrinking method supported)
USE_DEFLATE64 (PKZIP 4.x Deflate64(tm) supported)
UNICODE_SUPPORT [wide-chars, char coding: UTF-8] (handle UTF-8 paths)
MBCS-support (multibyte character support, MB_CUR_MAX = 6)
LARGE_FILE_SUPPORT (large files over 2 GiB supported)
ZIP64_SUPPORT (archives using Zip64 for large files supported)
USE_BZIP2 (PKZIP 4.6+, using bzip2 lib version 1.0.6, 6-Sept-2010)
VMS_TEXT_CONV
[decryption, version 2.11 of 05 Jan 2007]
UnZip and ZipInfo environment options:
UNZIP: [none]
UNZIPOPT: [none]
ZIPINFO: [none]
ZIPINFOOPT: [none]
To check if your zip file uses Zip64, check the final 6 bytes of the zip file. If the first 4 are all 0xFF (this is the Offset to Central Dir field), it is very likely you have a Zip64 archive. Note that this will not work if your zip file has a comment.
For reference, below is a dump from a zip file that uses Zip64. Note the value of the Offset to Central Dir field is FFFFFFFF
10000020C 000000004 50 4B 05 06 END CENTRAL HEADER 06054B50
100000210 000000002 00 00 Number of this disk 0000
100000212 000000002 00 00 Central Dir Disk no 0000
100000214 000000002 04 00 Entries in this disk 0004
100000216 000000002 04 00 Total Entries 0004
100000218 000000004 DA 00 00 00 Size of Central Dir 000000DA
10000021C 000000004 FF FF FF FF Offset to Central Dir FFFFFFFF
100000220 000000002 00 00 Comment Length 0000
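The last-6-bytes check described above can be sketched in a few lines of Python (assuming, as noted, that the archive has no trailing comment):

```python
import struct

def looks_like_zip64(path: str) -> bool:
    """Heuristic from above: in a comment-less zip file, the final 6 bytes
    are the 4-byte Offset to Central Dir plus the 2-byte Comment Length.
    An offset of 0xFFFFFFFF strongly suggests a Zip64 archive."""
    with open(path, "rb") as f:
        f.seek(-6, 2)  # seek to 6 bytes before end of file
        offset, comment_len = struct.unpack("<IH", f.read(6))
    return comment_len == 0 and offset == 0xFFFFFFFF
```
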
Self-answer.
My subject line was
Finding out what Infozip's unzip is complaining about
The answer to that turned out to require downloading the unzip source code, adding a lot of debug messages, and reading them.
In this particular case, unzip was complaining about the fact that the zip file used post-file data descriptors without setting the header flag that indicates it (general-purpose bit flag 3).
Normally one should set this flag and set the header CRC/length fields to all zeros.
This file did not have the flag set, yet the fields were still set to zero. unzip then thinks, "Oh, the length really must be zero!"
Then the actual non-zero file data appears, and unzip gets all grumpy. The post-file data descriptor did not help.
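The inconsistency described here can be spotted directly in the local file header. The following is my own illustrative sketch, parsing only the fixed 30-byte header: if the CRC and size fields are zeroed but the data-descriptor flag (bit 3) is not set, the header is making the contradictory claim that tripped up unzip.

```python
import struct

def suspicious_local_header(header: bytes) -> bool:
    """header: the first 30 bytes of a zip local file header (PK\\x03\\x04).

    Returns True when CRC/size fields are zeroed but the data-descriptor
    flag (general-purpose bit 3) is NOT set - the broken combination
    described above."""
    (sig, version, flags, method, mtime, mdate,
     crc, csize, usize) = struct.unpack("<IHHHHHIII", header[:26])
    assert sig == 0x04034B50, "not a local file header"
    has_descriptor_flag = bool(flags & 0x0008)
    zeroed = crc == 0 and csize == 0 and usize == 0
    return zeroed and not has_descriptor_flag
```

(A header with zeroed fields *and* bit 3 set is the normal streaming case; a non-empty file with zeroed fields and bit 3 clear is the broken one.)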
In the body of the question I asked:
Is there a way for me to find out if there is a bug in the generation of these files or is there a bug in unzip's handling of them?
I personally think these files are broken. I haven't talked to the people responsible for generating them, but I think I have a good case that they did it wrong.
On a more philosophical note:
There are two schools on how unzippers should work.
One is the "best effort" school, which says that the program should do whatever it can to recover the files inside regardless of how wrong the formatting is. (There are obviously limits to this)
The other school is the "Not my problem" school of thought that says that if the zip file is in a wrong format, then the unzipper shouldn't touch it. Let the makers of the zipfile fix their problem instead.
PKWARE itself is firmly in the first school of thought, while Infozip is in the second.
We have an issue related to a Java application running under a (rather old) FC3 on an Advantech POS board with a Via C3 processor. The java application has several compiled shared libs that are accessed via JNI.
The Via C3 processor is supposed to be i686-compatible. Some time ago, after installing Ubuntu 6.10 on a Mini-ITX board with the same processor, I found out that the previous statement is not 100% true. The Ubuntu kernel hung on startup due to the lack of some specific, optional instructions of the i686 set in the C3 processor. The instructions missing from the C3's implementation of the i686 set are used by default by GCC when compiling with i686 optimizations. The solution in this case was to go with an i386-compiled version of the Ubuntu distribution.
The base problem with the Java application is that the FC3 distribution was installed on the HD by cloning from an image of the HD of another PC, this time an Intel P4. Afterwards, the distribution needed some hacking to get it running, such as replacing some packages (e.g. the kernel) with i386-compiled versions.
The problem is that after working for a while the system completely hangs without a trace. I am afraid that some i686 code is left somewhere in the system and could be executed randomly at any time (for example after recovering from suspend mode or something like that).
My question is:
Is there any tool or way to find out what specific architecture extensions a binary file (executable or library) requires? file does not give enough information.
The Unix/Linux file command is great for this. It can generally detect the target architecture and operating system for a given binary (and has been maintained on and off since 1973. Wow!)
Of course, if you're not running under Unix/Linux, you're a bit stuck. I'm currently trying to find a Java-based port that I can call at runtime, but no such luck.
The Unix file command gives information like this:
hex: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.4.17, not stripped
More detailed information about the architecture is hinted at by the (Unix) objdump -f <fileName> command, which returns:
architecture: arm, flags 0x00000112:
EXEC_P, HAS_SYMS, D_PAGED
start address 0x0000876c
This executable was compiled by a gcc cross compiler (compiled on an x86 machine with the ARM processor as the target).
I decided to add one more solution for anyone who ends up here: in my case the information provided by file and objdump wasn't enough, and grep isn't much of a help. I resolved my case with readelf -a -W.
Note that this gives you quite a lot of info. The architecture-related information resides at the very beginning and the very end. Here's an example:
ELF Header:
Magic: 7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00
Class: ELF32
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: ARM
Version: 0x1
Entry point address: 0x83f8
Start of program headers: 52 (bytes into file)
Start of section headers: 2388 (bytes into file)
Flags: 0x5000202, has entry point, Version5 EABI, soft-float ABI
Size of this header: 52 (bytes)
Size of program headers: 32 (bytes)
Number of program headers: 8
Size of section headers: 40 (bytes)
Number of section headers: 31
Section header string table index: 28
...
Displaying notes found at file offset 0x00000148 with length 0x00000020:
Owner Data size Description
GNU 0x00000010 NT_GNU_ABI_TAG (ABI version tag)
OS: Linux, ABI: 2.6.16
Attribute Section: aeabi
File Attributes
Tag_CPU_name: "7-A"
Tag_CPU_arch: v7
Tag_CPU_arch_profile: Application
Tag_ARM_ISA_use: Yes
Tag_THUMB_ISA_use: Thumb-2
Tag_FP_arch: VFPv3
Tag_Advanced_SIMD_arch: NEONv1
Tag_ABI_PCS_wchar_t: 4
Tag_ABI_FP_rounding: Needed
Tag_ABI_FP_denormal: Needed
Tag_ABI_FP_exceptions: Needed
Tag_ABI_FP_number_model: IEEE 754
Tag_ABI_align_needed: 8-byte
Tag_ABI_align_preserved: 8-byte, except leaf SP
Tag_ABI_enum_size: int
Tag_ABI_HardFP_use: SP and DP
Tag_CPU_unaligned_access: v6
I think you need a tool that checks every instruction to determine exactly which set it belongs to. Is there even an official name for the specific set of instructions implemented by the C3 processor? If not, it's even hairier.
A quick-and-dirty variant might be to do a raw search in the file, if you can determine the bit pattern of the disallowed instructions. Just test for them directly; this could be done with a simple objdump | grep chain, for instance.
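As an illustration of the raw-search idea: the best-known i686 instruction family missing on the C3 is CMOVcc, whose opcodes start with the byte pair 0F 40 through 0F 4F. A crude sketch of such a scan (my own example; expect false positives, since bytes in data sections can match the pattern too):

```python
def count_cmov_candidates(path: str) -> int:
    """Count byte pairs 0F 40..0F 4F (the CMOVcc opcode range).

    Crude heuristic: matches inside data sections are false positives.
    A more reliable check would disassemble only the executable
    sections, e.g. via objdump -d, and grep for cmov mnemonics."""
    with open(path, "rb") as f:
        data = f.read()
    count = 0
    for i in range(len(data) - 1):
        if data[i] == 0x0F and 0x40 <= data[i + 1] <= 0x4F:
            count += 1
    return count
```

A count of zero is reassuring; a non-zero count only means the file deserves a closer look with a real disassembler.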
To answer the ambiguity of whether a Via C3 is a i686 class processor: It's not, it's an i586 class processor.
Cyrix never produced a true 686 class processor, despite their marketing claims with the 6x86MX and MII parts. Among other missing instructions, two important ones they didn't have were CMPXCHG8b and CPUID, which were required to run Windows XP and beyond.
National Semiconductor, AMD and VIA have all produced CPU designs based on the Cyrix 5x86/6x86 core (NxP MediaGX, AMD Geode, VIA C3/C7, VIA Corefusion, etc.) which have resulted in oddball designs where you have a 586 class processor with SSE1/2/3 instruction sets.
My recommendation: if you come across any of the CPUs listed above and it's not for a vintage computer project (i.e. Windows 98SE and prior), then run screaming away from it. You'll be stuck on slow i386/486 Linux or have to recompile all of your software with Cyrix-specific optimizations.
Expanding upon #Hi-Angel's answer I found an easy way to check the bit width of a static library:
readelf -a -W libsomefile.a | grep Class: | sort | uniq
Where libsomefile.a is my static library. Should work for other ELF files as well.
The quickest way to find the architecture would be to execute:
objdump -f testFile | grep architecture
This works even for raw binaries.