I want to further understand the difference between the XC8 and C18 compilers.
I know that XC8 is the latest compiler for all 8-bit Microchip controllers (e.g. PIC16F, PIC18F), and that C18 is the compiler for their PIC18 products. For C18, does the PIC18 series include both PIC18F and PIC18C?
I see that XC8 is the newer compiler compared to C18. Does that mean XC8 can also compile all, or part, of the code previously compiled by C18? If not, what should I do?
BTW, I am currently searching for sample/tutorial code for the PIC18F2455/2550/4550 USB interface. If you have any pointers they would be really appreciated.
One difference is that the XC8 compiler "does not support the PIC18 extended instruction set; code is always compiled for the standard PIC18 instruction set". Another is that the MPLAB XC8 compiler "does not currently support preprocessor macros with variable argument lists". Quotes are from the migration manual. Microchip is phasing out C18 (the only compiler I've used to generate code for their 18F products), but there seem to be a fair number of complaints about XC8.
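To make the second limitation concrete, this is the kind of construct meant; DBG is a made-up name and the fragment is only a sketch:

#include <stdio.h>

/* A preprocessor macro with a variable argument list (C99 __VA_ARGS__) --
   the construct the migration manual says XC8 did not support at the time. */
#define DBG(fmt, ...) printf(fmt, __VA_ARGS__)

int main(void)
{
    DBG("adc=%u ch=%u\n", 512u, 3u);
    return 0;
}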
The Microchip PICDEM FS USB Demo Board was originally based on the 18F4550 (now 18F45K50). The schematic for it is in the documentation. There is also a lot of sample code for it in the 'Microchip Solutions Library'. All that plus more is downloadable for free on their site.
My understanding is that XC8:
does not support recursion
does not support dynamic function pointers
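A minimal sketch of those two constructs, just to show what would need to be avoided or rewritten if your XC8 version rejects them (factorial and apply are made-up names):

/* Recursion: each call needs its own return address and locals, which is
   hard on PIC18-class parts with their small hardware call stack. */
unsigned factorial(unsigned n)
{
    return (n <= 1u) ? 1u : n * factorial(n - 1u);
}

/* A call through a function pointer selected at run time. */
typedef unsigned (*op_fn)(unsigned);

unsigned apply(op_fn f, unsigned x)
{
    return f(x);
}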
As we know, RISC-V allows any manufacturer to add custom instructions to their products; this is especially common in embedded CPUs. Manufacturers also often provide users with a modified version of GCC to compile code for their chips.
But what about the Rust compiler? It seems that few manufacturers provide a modified Rust compiler for their chips.
Will this be a huge disadvantage for Rust when using Rust in embedded or low-level kernel programming? And how can this problem be solved?
This is one of the reasons LLVM was invented: instead of having to implement a compiler for every language/architecture pair, one only has to implement one frontend per language and one backend per architecture. I expect manufacturers to shift more and more from providing a custom GCC to providing a custom LLVM backend, at which point Rust will support that target, since it builds upon LLVM.
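For instance, Rust already exposes the RISC-V backends that ship with upstream LLVM as ready-made targets, so a vendor's custom extension means teaching that one backend about the new instructions rather than maintaining a whole compiler. Assuming a reasonably recent toolchain, you can see and use them like this:

rustc --print target-list | grep riscv
rustup target add riscv32imac-unknown-none-elf
cargo build --target riscv32imac-unknown-none-elf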
I've recently been looking at the Rust programming language. How does it work? Rust code seems to be compiled into ELF or PE (etc.) binaries, but I've not been able to find any information on how that's done. Is it compiled to an intermediate format and then compiled the rest of the way with gxx, for example? Any help (or links) would be really appreciated.
The code-generation phase of the Rust compiler is mainly done by LLVM. LLVM is a set of tools for building a compiler, most notably used by the C[++] compiler clang[++].
First, the Rust compiler (just like clang, for example) does all the Rust-specific stuff like type and borrow checking; in the end, it generates LLVM-IR. IR stands for intermediate representation, and it's... comparable to assembly, but a tiny bit more high-level and, most importantly, platform independent. Then the Rust compiler just calls ☏ LLVM and says:
Hey buddy, could you please take this IR and generate machine code for the current platform? That would be fantastic ◕ ◡ ◕
To which LLVM responds:
🌈 Sure, no problem, new friend. Here is your highly optimized machine code for [e.g.] x86_64! ♫♪♫
Afterwards they invite a few more friends to wrap it all up in a nice little [e.g.] ELF package and beautifully place it in the user's file system.
Information like this can be found in the official FAQ which contains a lot of interesting information anyway. For more in-depth details on the Rust compiler, you can read the "Rustc Guide". For this question, the chapter "High Level Overview" is pretty interesting.
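You can watch this handoff yourself. Given a trivial function in a file called add.rs (the file name is arbitrary):

pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

rustc -O --crate-type=lib --emit=llvm-ir add.rs   # writes add.ll (textual LLVM-IR)
rustc -O --crate-type=lib --emit=asm add.rs       # writes add.s (target assembly)

Inside add.ll you'll find something roughly like this (symbol names simplified here; the real output carries mangled names and attributes):

define i32 @add(i32 %a, i32 %b) {
  %sum = add i32 %b, %a
  ret i32 %sum
}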
I've decided to learn assembler through online tutorials.
I've come across this one, which uses the NASM compiler, as most other tutorials seem to:
http://www.tutorialspoint.com/assembly_programming/index.htm
I've also come across this youtube series "Assembly primer for hackers"
https://www.youtube.com/watch?v=K0g-twyhmQ4&list=PLue5IPmkmZ-P1pDbF3vSQtuNquX0SZHpB
This one uses what the guy describes as the 'generic linux compiler' (or words to that effect).
The commands for compiling go something like this:
as -o file.o file.s
Where file.s is the assembly source code. Followed by:
ld -o file file.o
Where file is then the executable.
Each of the tutorials uses a different syntax (e.g. a register in the latter tutorial is always preceded by %; N.B. there also appear to be less superficial differences in the syntax than this). Are these syntaxes decided by the individual compiler?
I was also initially confused when I tried to compile code from the NASM tutorial with the latter method. I was always under the impression that the instruction set depends on the CPU, so it shouldn't matter which compiler I use. I've concluded that it's merely a difference in syntax, but is that correct?
I'm running a Linux computer, by the way, on kernel 4.1.6.
My main question is really which syntax do I use? Is it just a matter of choice? Is one more widely used than the other? Thanks for any help.
Each of the tutorials uses a different syntax (e.g. a register in the latter tutorial is always preceded by %; N.B. there also appear to be less superficial differences in the syntax than this). Are these syntaxes decided by the individual compiler?
Yes, different assemblers (= assembly language compilers) may use different assembly language syntax even though they produce code for the same processor and platform.
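To make that concrete, here is the same x86 instruction (load the constant 1 into EAX) in the two syntaxes those tutorials use; both assemble to the identical machine instruction:

mov eax, 1      ; NASM (Intel syntax): destination first, plain register names
movl $1, %eax   # GNU as (AT&T syntax): source first, % on registers, $ on immediates, 'l' size suffix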
My main question is really which syntax do I use? Is it just a matter of choice? Is one more widely used than the other?
One assembler, like NASM, might target a wide range of processors and platforms; in that case you would benefit from learning its syntax if you need to work with several processors or platforms.
In other cases it might be better to stick with the assembler of some prominent vendor, because it is widely used and you can find more example code on the net for it, which might help you with your development.
Last but not least, you might simply prefer a particular assembler because you like its features or syntax.
If you're on a Windows system, the syntax of Microsoft's MASM (ML.EXE, or ML64.EXE for 64-bit) is virtually the same as Intel's. MASM is included with the free Visual Studio Express editions, although you usually have to create a custom build step to invoke the assembler in a VS project. VS Express includes a good source-level debugger.
If you're on a Linux type system, then you'll probably use AT&T syntax, which I assume ended up that way since it was a conversion of some generic assembler. I don't know which assembler(s) to recommend for Linux.
I have an ATxmega16A4U MCU from Atmel and am trying to compile code with avr-gcc 4.7.2 (Fedora 4.7.2-1.fc17). I got this error:
Unrecognized argument in option '-mmcu=atxmega16a4u'
So I tried to compile the code with -mmcu=atxmega16a4 (without the 'u' at the end) and got some 'undeclared' errors:
error: 'ADC_CH_GAIN_DIV2_gc' undeclared (first use in this function)
Is my microcontroller not yet supported by avr-gcc? Is there any way to make it work on Fedora, avoiding AVR Studio (and Windows)?
The ATxmega16A4U is not supported by AVR-libc; the undefined symbol there is an error raised by the C compiler. A cursory reading of Atmel's website shows that the ATxmega16A4U and the ATxmega16A4 are different devices, the most prominent difference being the USB interface present in the former. As a consequence, some of the register definitions pulled in through avr/io.h will not be readily available for the ATxmega16A4U.
The solution is to create a new header file containing the necessary definitions for this microcontroller. That takes care of the libc side. On the compiler/linker side, you may have to patch gcc to accept the proper -mmcu option and to define the symbols expected by avr/io.h. A linker script may be necessary as well, though a cursory read of Atmel's website suggests that the memory layout of the two microcontrollers is the same, so this last step may not be needed.
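A minimal sketch of that header approach; everything here apart from ADC_CH_GAIN_DIV2_gc itself is illustrative, and the encoded value is a placeholder that would have to be checked against the ATxmega16A4U datasheet:

/* atxmega16a4u_extra.h -- illustrative supplement, not a complete header.
   Include it after <avr/io.h> to fill in definitions the A4 header lacks. */
#ifndef ATXMEGA16A4U_EXTRA_H
#define ATXMEGA16A4U_EXTRA_H

#include <avr/io.h>

#ifndef ADC_CH_GAIN_DIV2_gc
/* Gain of 1/2 for an ADC channel. The group value 0x07 and the shift are
   placeholders -- verify the encoding against the datasheet. */
#define ADC_CH_GAIN_DIV2_gc (0x07 << 2)
#endif

#endif /* ATXMEGA16A4U_EXTRA_H */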
I'm learning assembly language. I started with Paul A. Carter's PC Assembly Language, which uses NASM (the Netwide Assembler). Then in the middle I switched and started reading Introduction to 80x86 Assembly Language and Computer Architecture, which uses MASM.
In NASM, to initialize a byte, I used to write:
db 110101b
In MASM I'm using:
BYTE 110101b
I'm in the middle of reading the book. Since these are assembler directives, they will be different for each assembler, right?
Don't these assembler developers follow a standard for directives? They know that mnemonics are CPU-specific, so it's already a pain in the ass to learn and code in assembly language. If they each follow different directives, it's even more painful when you change assembler or switch operating system (a MASM developer is in deep trouble if he moves to Linux).
My confusion is: should I acquaint myself with NASM or MASM? I'm a fan of Windows, but I may have to work on Linux in the future.
Every book should be titled "_________ Assembly Language using __________ Assembler"
Unfortunately there has never been a standard for assembly language. You'll just have to learn the directives that your assembler supports. Fortunately most of the directives, while having different names, are semantically similar, like db and BYTE.
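For example, defining a single initialized byte (the label msg is arbitrary; GNU as is included for comparison):

msg  db 0x41        ; NASM
msg  BYTE 41h       ; MASM
msg: .byte 0x41     # GNU as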
But wait! It gets worse, especially for the x86. You have (at least) two forms of code that assemblers can accept: Intel and AT&T format. AT&T format reverses the order of most operands to instructions (or is it vice versa ;-).
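For instance, "add EBX into EAX" swaps its operands between the two formats (AT&T also hangs a size suffix on the mnemonic):

add eax, ebx      ; Intel format: destination first
addl %ebx, %eax   # AT&T format: destination last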
NASM is probably a better choice for portability, but you could also look at the GNU assembler.
Intel Syntax / AT&T Syntax
With x86 in particular, the first assemblers were from Intel, and then largely compatible assemblers from Microsoft formed one branch.
These assemblers organize source and destination operands right to left and have an unusual (and, to my eyes, kind of wacky) abstraction layer that uses a single mnemonic for 8-, 16-, and 32-bit operations and then derives the actual machine opcode from the properties of the operands. Modifiers exist (on operands) to force a particular size.
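A short Intel/MASM-style illustration of that derivation (a made-up fragment):

mov al, bl             ; 8-bit registers, so an 8-bit opcode is selected
mov eax, ebx           ; same mnemonic, but 32-bit registers select a 32-bit opcode
mov dword ptr [ebx], 1 ; memory + immediate fixes no size, so the modifier forces 32 bits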
But Unix was also important and it had a completely different assembler line with different traditions and conventions.
The original Unix vendor was AT&T, which owned the intellectual property developed at Bell Labs. A series of BSD projects and then Linux continued this tradition. These assemblers historically process operands left to right, have a spare design optimized for speed, and, when used by humans, generally rely on cpp for macros and conditionals, even if the assembler also has parallel features.
These days you are probably using Visual Studio on Windows, or GNU tools on Linux or Mac, but this is why we still say AT&T vs. Intel. The GNU assembler has an option to assemble both ways, although it's still really in the AT&T camp.
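That option, for reference, is the .intel_syntax directive (the noprefix variant also drops the % in front of register names):

.intel_syntax noprefix   # from here on, GNU as accepts Intel-style source
mov eax, 1
.att_syntax prefix       # and this switches back to the default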
Generally yes. They are mostly feature-compatible though, so converting from one assembler syntax to another is usually not terribly difficult if you know both.
Processors are all documented in a manufacturer-supplied reference manual. This usually developed into the normative syntax (along with the assembler provided by the vendor) for assembly programs on a particular platform. Consequently, many processors from a single vendor have similar syntax.
The situation became more complex with second sourcing of processors and the eventual development of multi-targeting assemblers that, for historical reasons, use mostly consistent syntax across all platforms. This also provides some arguable advantages when porting code across platforms.
Your best choices are to: pick a notation you are comfortable with and accept that books will use a different syntax; see if you can locate cross-system macro libraries or translation tools; or bite the bullet and learn multiple dialects. The third is usually tolerable, although it makes building private libraries labour-intensive.