MIPS actual encoding scheme [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
How does MIPS actually store opcodes for instructions? Does it linearly search or use a hashtable?
I want to know how it does this, either in digital-logic terms or in programming terms.
What I mean is: how does "add" become 0x10? Does it perform an operation on the ASCII code of "add", or is the encoding already saved somewhere, and if saved, then how?
If not MIPS, then any other architecture would work.
What I have found so far is that a symbol table is formed for these opcodes. Any links on this would be helpful.
Thanks in advance.

As @Michael noted in his comment, the processor never sees your mnemonic (such as add or lw). In fact, some of the most common instruction mnemonics (such as bge, branch if greater than or equal) are actually pseudo-instructions, which the assembler converts into multiple native instructions.
If you Google "MIPS Green Sheet", you'll find the reference card that is included in the typical Computer Architecture textbook, explaining how each instruction line - mnemonic plus parameters - is converted to a 32-bit word.
If you spend some time plugging MIPS instructions into the MIPS Instruction Converter, you'll see how something like add $t0, $t1, $t2 becomes a 32-bit sequence of 1's and 0's (represented in hex as 0x012A4020), which is all the CPU ever really sees.
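To make the lookup-table idea concrete, here is a minimal sketch in C of what an assembler conceptually does (the table and function names are made up for illustration, and no real assembler is this simple): it looks the mnemonic up in a table built into the assembler itself, then packs the register numbers and the function code into the 32-bit word, reproducing the 0x012A4020 encoding above.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical mnemonic table: a real assembler has something equivalent,
 * whether it searches it linearly, binary-searches a sorted copy, or hashes
 * the mnemonic. The mnemonic text itself never reaches the CPU. */
struct r_type { const char *mnemonic; uint8_t funct; };

static const struct r_type r_table[] = {
    { "add", 0x20 }, { "sub", 0x22 }, { "and", 0x24 }, { "or", 0x25 },
};

/* Encode an R-type MIPS instruction: opcode 0 | rs | rt | rd | shamt 0 | funct */
static uint32_t encode_r(const char *mnemonic, int rd, int rs, int rt)
{
    for (size_t i = 0; i < sizeof r_table / sizeof r_table[0]; i++) {
        if (strcmp(r_table[i].mnemonic, mnemonic) == 0)
            return (uint32_t)(rs << 21 | rt << 16 | rd << 11 | r_table[i].funct);
    }
    return 0; /* unknown mnemonic; no error handling in this sketch */
}

int main(void)
{
    /* add $t0, $t1, $t2  ->  rd = $t0 (8), rs = $t1 (9), rt = $t2 (10) */
    printf("0x%08X\n", (unsigned)encode_r("add", 8, 9, 10)); /* prints 0x012A4020 */
    return 0;
}
```

Whether such a table is searched linearly, kept sorted, or hashed is purely an implementation choice inside the assembler; the CPU never performs any lookup of this kind, it simply decodes the opcode/funct bit fields in hardware.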

Related

Are there any built-in features of Go (the Go compiler, more likely) that address making your binary more tamper-resistant? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I have a program that prompts for a PIN before performing particular actions. The PIN is stored, encrypted, in a local config file alongside the executable binary. The user enters a PIN, the program decrypts the stored value and compares it to the input: if they are equal, the action proceeds; if not, it is refused.
I'm aware this kind of security check could potentially be circumvented with forensic tools that alter the binary, in effect changing the '==' to '!=' in the right place to make all the wrong PINs pass the test in my example.
This may be a stupid question, as I know from the first 2 minutes of googling it's a big and challenging topic, but I still thought I should start with checking on features of the language/compiler I'm actually using first. So, are there any features natively available with Go to make this kind of attack harder to successfully perform?
No, there is nothing remotely like this in the official Go compiler or standard library.

Can long flags be followed by a single character? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I'm aware that the general convention is that short flags (a single dash, "-") are followed by a single character, and long flags (a double dash, "--") are followed by multiple characters (usually an English word). Also, multiple short flags can sometimes be combined as shorthand ("-l -c" as "-lc").
However, is "--c" also valid? It seems to break the aforementioned convention, but is it fine as long as it's a unique flag identifier?
Tried searching the web but wasn't able to find any results on this.
Yes, as a general rule, neither your shell nor your kernel cares about the format of the arguments you pass to your command, as long as the program you're writing expects that format.
However, if by "can" you mean "does that respect the POSIX conventions of command arguments", then you should look at the Utility conventions part of the POSIX standard. In the last published version, there is no particular restriction against what you want here, therefore you should be fine.
That said, when you write programs for other people, try to apply the Principle of Least Astonishment. People usually expect single-letter options to be introduced by a single -, so it is good practice to follow the de-facto conventions when possible.
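If the question is purely mechanical, a small sketch with GNU getopt_long (available in glibc and the BSDs; the option names below are invented for the example) shows that a long option whose name is a single character is accepted as --c without complaint:

```c
#include <stdio.h>
#include <getopt.h>

int main(int argc, char *argv[])
{
    /* A long option named "c": getopt_long() matches it when the user
     * types --c, even though the name is only one character long. */
    static const struct option long_opts[] = {
        { "c",       no_argument, 0, 'C' },
        { "verbose", no_argument, 0, 'v' },
        { 0, 0, 0, 0 }
    };

    int opt;
    while ((opt = getopt_long(argc, argv, "v", long_opts, NULL)) != -1) {
        switch (opt) {
        case 'C': puts("--c given"); break;
        case 'v': puts("verbose");   break;
        default:
            fprintf(stderr, "usage: %s [--c] [-v|--verbose]\n", argv[0]);
            return 1;
        }
    }
    return 0;
}
```

So the parser copes fine; the only real argument against --c is the astonishment mentioned above.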

Identifying programming language [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I've been given a very small snippet of code and it piqued my curiosity. I'm wondering what the language is. I'd like to put the snippet up but it belongs to someone else and they wouldn't appreciate it being posted.
Suffice it to say that what I've received looks like a function in a file with a *.sub extension.
The keyword macro is used a lot, with a macro name following the keyword, like a function call with what appear to be arguments separated by commas.
if statements are terminated by endif.
The program was written for an embedded device (SiLabs device if I remember correctly).
Comments are denoted by ;.
The end of a command is denoted by an end of line.
I've programmed in C/C++/C# and so my broader programming experience is lacking. Does anyone know what language I'm referring to?
I'm going to guess 8051, based on the front page of the SiLabs website.
Should look something like this:
http://www.microapl.co.uk/asm2c/sample8051asm.html
To distinguish it from other assembly languages, you'd look for instructions like SETB, CJNE, DJNZ, and arguments like @R1, DPTR, ACC. The 8051 can also address individual bits of some memory locations, written as ACC.7 or P0.1.
Definitely assembly language, but the syntax may vary. Different assemblers feature different syntax, and the syntax may also vary depending upon the hardware used.
For example, comment syntax:
'#' is used for i386, x86-64, etc.
';' is used for the AMD 29K family, Motorola, PowerPC.
Also, some high-level assemblers hide details behind abstractions.
Sounds like an assembly language. Hard to tell which microcontroller/microprocessor it is for, though.

If a programming language is 64-bit, does that mean it is better than 32-bit? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
If a language says it has a 32-bit or a 64-bit version (PHP or other), does that mean that 64-bit is somehow "better"? Is it faster? More reliable?
“A 64-bit compiler” usually means that it is generating instructions from the x86-64 instruction set instead of the IA-32 instruction set. The former is more modern and benefits from more experience in the design of efficient instruction sets. On the other hand, the same L1 cache (resp. L2 cache, L3 cache, cacheline) fits only half as many 64-bit words as it fits 32-bit words. In practice, performance is about the same, and memory use is higher with 64-bit instructions, but 64-bit programs are not limited to 4GiB of virtual memory space like 32-bit programs are.
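Most of that extra memory use comes from pointers (and, on common 64-bit Unix ABIs, long) doubling in width rather than from the instructions themselves. Assuming a compiler that can target both modes, such as gcc with -m32 and -m64, a tiny program makes the difference visible:

```c
#include <stdio.h>

/* Compare the output of a 32-bit and a 64-bit build of the same source,
 * e.g. "gcc -m32 sizes.c" vs "gcc -m64 sizes.c" on x86 Linux. */
int main(void)
{
    printf("sizeof(int)    = %zu\n", sizeof(int));    /* typically 4 on both  */
    printf("sizeof(long)   = %zu\n", sizeof(long));   /* 4 vs 8 on x86 Linux  */
    printf("sizeof(void *) = %zu\n", sizeof(void *)); /* 4 vs 8               */
    return 0;
}
```

The wider pointers are what halve the number of pointer-heavy objects that fit in a given cache, and also what lift the 4GiB virtual address space limit.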

What happens? The output or the process (linux, registers) [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
What happens if you try to step into (si) the sysenter instruction?
In order to answer this question, you need to understand how si works.
How could it work? There are two ways I can think of:
1. either the debugger must set a (temporary) breakpoint on the next instruction, or
2. the debugger modifies processor state such that the processor will execute one instruction and stop (aka single-step).
Option 1 is complicated, because the instruction could be an indirect jump, e.g. CALL (%eax), or a RET, and so the debugger might have to go to significant trouble to understand what that next instruction is.
All debuggers I am familiar with use option 2.
Now you can probably explain what you observe when you si over a sysenter (or a syscall, or an int 0x80) instruction. The only other thing you need to know is that the kernel can't possibly allow single-step mode once sysenter switches to kernel mode (or else your entire system will freeze).
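On Linux, option 2 is what a debugger requests from the kernel with ptrace(PTRACE_SINGLESTEP, ...), which arranges for the x86 trap flag to be set while the tracee runs. Here is a minimal sketch of a "debugger" that does nothing but single-step a child (Linux-only, error handling omitted, /bin/true chosen arbitrarily). Note that a system call made via sysenter/syscall/int 0x80 shows up as one step, because, as explained above, the kernel does not let the single-step trap follow it into kernel mode, and the tracee only stops again once control returns to user space.

```c
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);    /* let the parent trace us */
        execl("/bin/true", "true", (char *)NULL); /* arbitrary traced program */
        _exit(127);
    }

    int status;
    waitpid(child, &status, 0);                   /* child stops at the exec */

    long steps = 0;
    while (WIFSTOPPED(status)) {
        steps++;                                  /* one user-space instruction per loop */
        ptrace(PTRACE_SINGLESTEP, child, NULL, NULL);
        waitpid(child, &status, 0);               /* SIGTRAP after each step, or exit */
    }
    printf("single-stepped %ld instructions\n", steps);
    return 0;
}
```

Each iteration covers exactly one user-space instruction, whether that instruction is a simple add or an entire system call.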
