Use of a NOP (0xc3 = ret) in return-oriented programming - security

I am failing to see any use for NOPs in ROP. Why do people use them? I have seen some example ROP gadget chains like the following (G2, G3, G4 are gadgets):
nop(return gadget) - (G2) - (G3) - somedata - (G4) - ... - nop(return gadget) - ...
What is the use of a NOP at the start, or somewhere in the middle?
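For orientation, here is a hedged sketch of how a chain of that shape is commonly laid out in an exploit script. Python and x86-64 are assumed, and every address below is a made-up placeholder, not taken from any real binary.

    import struct

    def p64(addr):
        # Pack an address as a little-endian 64-bit value on the fake stack.
        return struct.pack("<Q", addr)

    RET  = 0x400101   # hypothetical address of a lone 0xc3 'ret' (the "nop" gadget)
    G2   = 0x400222   # hypothetical gadget, e.g. pop rdi ; ret
    G3   = 0x400333   # hypothetical gadget, e.g. pop rsi ; ret
    G4   = 0x400444   # hypothetical gadget that does the real work
    DATA = 0x601000   # hypothetical address of "somedata" used by the chain

    payload  = b"A" * 40        # filler up to the saved return address
    payload += p64(RET) * 4     # ret-only "NOP" gadgets: each one just falls through to the
                                # next stack slot, so they pad the chain (e.g. to keep the
                                # stack 16-byte aligned) and act like a sled when the exact
                                # offset of the chain on the stack is not known precisely
    payload += p64(G2) + p64(DATA)
    payload += p64(G3) + p64(DATA)
    payload += p64(G4)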

Related

What is the easiest way to query some input values in Haskell

What is the easiest way to get some values (yes/no, numbers) into a Haskell program? The values should be bound to some variables, and further questions should be asked based on previous inputs.
I am trying to solve a little problem for which I think Haskell is best suited, especially for extending the functionality afterwards. In addition, I am also trying to learn the language (I am new to Haskell but have some experience with Prolog, so I have some idea about functional programming).
I was checking all the stuff related to GUI development, but that is actually overkill for what I need. The input should be given in response to questions which depend on the state of the execution.
I hope this is clear enough.
EDIT:
I would like to have some "pop-ups" like these. Not all at once, but just a pop-up when needed.
It feels a little bit like your assumption here is that Haskell is like JavaScript.
That is, it's very simple in JavaScript to get a "popup" to display in a browser such as Chrome by using prompt("Are you hungry or thirsty?"), but that's only because the prompt function is built on top of the window object, which the browser provides so that developers can hook into the windowing stack of the operating system the browser runs on.
Haskell, by default, provides far less functionality "for free". That is, if you want to display a pop-up, you'll have to use some library that lets you do so.
This is a much bigger question than it possibly seems. It's very similar to the same question in any other batch-style programming language. How would you do this in Java, or in Ruby? Well, you need to find a library that supports it.
One such library, available for many languages and cross-platform across operating systems, is wxWidgets. It's written in C++, but there are bindings/libraries for Haskell and many other languages. The Haskell library is called wxHaskell: https://wiki.haskell.org/WxHaskell
Good luck, and don't expect it to be an easy path necessarily.
If you have interest in learning Haskell basics, feel free to take a look at this tutorial I helped author: http://happylearnhaskelltutorial.com

High Level language with Low level Graphics [closed]

I am looking for a high level language which will still allow me to work directly with graphics. I want to be able to modify screen pixels, for instance. But I do not want to write huge amounts of code for each operation. I want simple one line commands for graphics somewhat like those listed below. What are some programming languages which would have these features?
Possible pseudocode:
Screen.clear
Graphics.line(4,5,20,25).color=green
Circle(centerx,centery,radius)
Depending on what you want to do (i.e., how complex you need to get), Processing is a very high-level, graphics-focused environment. Note, however, that it seems to be focused on the fixed-function OpenGL pipeline, which is deprecated (though it is arguably the easiest and most intuitive way to get started).
Processing is built on Java, runs in the web browser (or from your desktop), and abstracts away most of the initialization and cleanup code required to use OpenGL.
Edit
I've just noticed your comment that says you're not an experienced programmer. In that case, I'd recommend starting with Processing. Once you get the hang of it, move on to Python.
Another, slightly more complex, option is Python. Python is very powerful, fairly easy to pick up (depending upon your prior development experience), and widely supported. It'll also allow you to use shaders and other features from the 21st century, and it is cross-platform. See this link for PyOpenGL, the first Python OpenGL site that popped up in Google.
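As a rough sketch of how short this can be in Python, here is the question's pseudocode mapped onto the Pygame library. Pygame is my substitution purely for brevity (the paragraph above points at PyOpenGL instead), and it is assumed to be installed.

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))

    screen.fill((0, 0, 0))                                       # Screen.clear
    pygame.draw.line(screen, (0, 255, 0), (4, 5), (20, 25))      # green line from (4,5) to (20,25)
    pygame.draw.circle(screen, (255, 255, 255), (320, 240), 50)  # Circle(centerx, centery, radius)
    screen.set_at((100, 100), (255, 0, 0))                       # modify a single screen pixel

    pygame.display.flip()

    # Keep the window open until it is closed.
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
    pygame.quit()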
Then, there's C# + OpenTK. This can get pretty complex pretty quickly, but is very powerful, and since it's compiled (under .NET or Mono), can potentially give you better performance than Python.
Finally, for close-to-bare-metal performance, C++ is unbeatable, though arguably the most complex of these options, with a significant learning curve. However, most of the example code you'll find online is in C++, which can be an issue if you're not using C++ and aren't comfortable reading it.
Using Qt you can create QImage objects and draw on them using QPainter.
You of course get per-pixel control at that abstraction level, but you can also access the underlying image memory directly through the bits() and bytesPerLine() methods.
The format that is, in my opinion, easiest to use for fast custom pixel computations is QImage::Format_ARGB32, with 32 bits per pixel.
Qt is a C++ library portable across many OSs and platforms, and bindings are available for many very high-level languages (e.g. Python).
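A hedged sketch of what that looks like through the Python bindings. PyQt5 is assumed here (the answer above only notes that bindings exist), and the coordinates are arbitrary illustration values.

    from PyQt5.QtGui import QImage, QPainter, QColor, qRgb

    img = QImage(640, 480, QImage.Format_ARGB32)
    img.fill(QColor("black"))

    painter = QPainter(img)
    painter.setPen(QColor("green"))
    painter.drawLine(4, 5, 20, 25)
    painter.drawEllipse(270, 190, 100, 100)   # circle drawn via its bounding rectangle
    painter.end()

    # Per-pixel access at the QImage abstraction level...
    img.setPixel(100, 100, qRgb(255, 0, 0))

    # ...or raw access to the ARGB32 buffer, as described above.
    ptr = img.bits()
    ptr.setsize(img.byteCount())              # the sip pointer needs an explicit size in PyQt
    raw = memoryview(ptr)                     # row stride is img.bytesPerLine()

    img.save("out.png")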

FORTRAN graphics library on Linux [closed]

I started learning FORTRAN and need a graphics library to plot output.
As I'm not familiar with the FORTRAN environment, I wanted to ask for recommendations.
I'm used to matplotlib, and am preferably looking for something similar, in terms of available features and workflow concepts.
Searching through Synaptic, it seems like PGPLOT is the way to go.
PS: I know I could wrap FORTRAN code in Python in different ways.
I've used PGPLOT, dislin and PLPLOT. In this era I'd use dislin or PLPLOT. PGPLOT was last updated in 2001 and only has a FORTRAN 77 interface, which can be used with Fortran 90/95/2003 but the compiler won't be able to check that your calls have the correct arguments. The other two have Fortran 95 interfaces. Of dislin and PLPLOT, I think dislin to be better documented. dislin also provides widgets for GUI input. dislin is free for some uses; for business uses one is supposed to purchase a license. PLPLOT is open source under the LGPL.
Yes, PGPLOT is an option.
You may also want to look into PLplot: http://plplot.sourceforge.net/
DISLIN - supports several platforms and languages (python included).
Have you considered using VisIt or similar software? VisIt can visualise very large datasets and has a mechanism for in-situ visualization with the libsim library. See this presentation for a nice introduction to in-situ visualization with VisIt: http://calcul.math.cnrs.fr/Documents/Ecoles/Data-2011/CouplageSimulationVisualization.pdf.
See here for the libsim api.
Finally, a few additional notes.
While you state
I'm used to matplotlib, and preferably looking for something similar.
Similar in means of available features, and workflow concepts
using libsim would be quite different, but it is very powerful.
and
Right now just general graphic package to plot intermediate data
products. If I do well then perhaps I'll look for package that handles
large data sets. But if I get there I'll probably know what to use
till then.
Rather than write a solution now and then change it to deal with large data sets, why not just write a scalable solution now?
I'm not familiar with FORTRAN plotting, but in most languages a quick way to produce easy-to-program static graphics is not to use a library at all: find that language's equivalent of the C system() call, find a program that will do the plotting, and write the equivalent of:
1. Write plot data to file.
2. Do system call running plot program with data file argument.
3. [Optional] Delete plot data file.
This gives your program the full functionality of the plot program with minimal programming. The plot data and plot program are likely to be in the disk cache so it's all in memory. It's also easily debugged.
People underestimate the system() call - it gives access to a vast array of functionality including scripting. Some would say it's not efficient or "pure" but the user won't care and it will often drastically reduce the amount of programming necessary. Don't reinvent the wheel.
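A hedged sketch of that three-step recipe, with Python's subprocess standing in for system() and gnuplot assumed as the external plot program (any plotter with a command-line interface would do):

    import os
    import subprocess
    import tempfile

    def quick_plot(xs, ys):
        # 1. Write plot data to file.
        with tempfile.NamedTemporaryFile("w", suffix=".dat", delete=False) as f:
            for x, y in zip(xs, ys):
                f.write(f"{x} {y}\n")
            datafile = f.name
        try:
            # 2. Do a system call running the plot program with the data file argument.
            subprocess.run(
                ["gnuplot", "-persist", "-e", f"plot '{datafile}' with lines"],
                check=True,
            )
        finally:
            # 3. [Optional] Delete the plot data file.
            os.remove(datafile)

    quick_plot(range(10), [i * i for i in range(10)])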
You can use:
PLplot: http://plplot.sourceforge.net/
gtk-fortran. It is a GTK/Fortran binding which also offers an interface to PLplot: https://github.com/vmagnin/gtk-fortran/wiki

Is it worthwhile to learn assembly language? [closed]

Is it still worthwhile to learn ASM?
I know a little of it, but I haven't really used it or learned it properly because everything I learn to do in assembler I can do in 1/10th the time with some language like C or C++. So, should I really learn and use ASM? Will it do me any good professionally? Will it increase my resourcefulness? In short, would it make me a better programmer?
Note: I am talking about low-level assembly like FASM or NASM and not something like HLA (High-Level Assembler).
I learned from Kip Irvine's book. If you ignore the (fair) criticisms of his (irrelevant) libraries, I can recommend it as a good introduction to the language itself -- although for the really interesting stuff you have to hunt out obsessives on the net.
I think it's useful to understand what happens at the lower levels. As you research assembler you will learn about cpu pipelining, branch prediction, cache alignment, SIMD, instruction reordering and so on. Knowledge of these will help you write better high-level code.
Furthermore, the conventional wisdom is to not try to hand-optimise assembly most of the time but let the compiler worry about it. When you see some examples of the twisted things that compilers generate, you will better understand why the conventional wisdom holds.
Example: LFSRs run fast with the rotate-with-carry instruction; for specific cases like this, it's just as easy to write the assembler version as it is to discover whether or not the compiler is smart enough to figure it out. Sometimes you just know something that the compiler doesn't.
It also increases your understanding of security issues -- write-xor-execute, stack overruns, etc.
Some concurrency issues only become apparent when you are aware of what is happening at the per-instruction level.
It can be useful sometimes when debugging if you don't have the complete source code.
There's the curiosity value. How are virtual functions implemented, anyway? Ever try to write DirectX or COM programs in assembler? How do large structures get returned -- does the calling function offer a space for them, or vice versa?
Then there are special assembly languages for graphics hardware; although shader languages went high-level a few years ago, anything which lets you think about a problem in a different way is good.
I find it interesting that so many people jump to say that yes, you need/should learn assembly. To me the question is how much assembly you need to know. I don't think you have to know assembly like a programming language; that is, I don't believe that everyone should be able to write a program in assembly. On the other hand, being able to read it and understand what it actually means (which might require more knowledge of the architecture than of the assembler) is enough.
I for sure cannot write assembly (i.e. write any non-trivial piece of code in assembly), but I can read it, and that, together with knowledge of the actual hardware architecture and the calling conventions in use, is enough to analyze performance and identify which piece of C++ code was the source of the assembly.
Yes - the primary reason to learn assembly for C and C++ developers is it helps understanding what's going on under the hood of C and C++ code. It's not that you will actually write code in assembly, but you will be able to look at code disassembly to assess its efficiency and you will understand how different C and C++ features work much better.
It's worthwhile to learn lots of different languages, from lots of different paradigms. Learning Java, C++, C#, and Python doesn't count, since they are all instances of the same paradigm.
As assembly is at the root (well, close to the root) of all languages, I for one say that it is worthwhile to learn assembly.
Then again, it's worthwhile to learn a functional programming language, logic programming, scripting languages, math-based languages. You only have so much time, so you do have to pick and choose.
Knowing ASM is also useful when debugging, as sometimes all you have is "ASM dump of the error".
It depends on which level of programming you wish to reach.
If you need to work with debuggers, then YES.
If you need to know how compilers work, then YES.
Any assembler/debugger is CPU dependent, so there is a lot of work involved; just look at how big and old the x86 family is.
Do you have any use for it in what you plan to do? Is it going to aid you in any way in what you currently do or plan to do? Those are the two questions you should ask yourself; the answer to them is the answer to your question.
In a more general sense, yes, I'd say it is in my opinion well worth learning asm (something like x86 or ARM); how well it serves you depends on what you are programming and how you are debugging it.

How to code in the unix spirit? (small single-task tools)

I have written a little script which retrieves pictures and movies from my camera, renames them based on their date, and then copies them to my hard drive, managing conflicts automatically (same name? same size? same md5?).
Works pretty well.
But this is ONE script.
From time to time I need to check if a picture is already hidden somewhere in a volume, so I'd like to apply the "conflict manager" only. I guess if I had properly followed the unix spirit of tiny single-task tools, I could do that.
What are the best resources, best practices and your experience on the subject?
Thank you.
Edit : Although I'd love to read unix books and have a deep understanding of the subject, I am looking for the Great Principles first. Plus I tend to limit myself to online resources.
I would look at the book called The Art of Unix Programming.
I've found that most code doesn't start out being reusable, it evolves to be. Take your existing code and factor out the "conflict manager" portion into its own function or program, then call that program instead of having it be a part of your original application. After that you'll be able to reuse that part of your code that you have a need to reuse. Sometimes it's impossible to design software up front for reusability because you simply don't know which parts you'll want to reuse.
As for resources, it seems like the store shelves are packed with books for Linux desktop users and system administrators, but it's hard to find good Linux programming books. A few good ones:
Beginning Linux Programming
Professional Linux Programming
Linux Programming by Example: The Fundamentals
The Linux Programmer's Toolbox
Lastly, Eric Raymond has made The Art of Unix Programming available online for free.
Check out this book:
The Art of Unix Programming by Eric S. Raymond
http://www.amazon.com/UNIX-Programming-Addison-Wesley-Professional-Computing/dp/0131429019
Here is his website:
http://www.catb.org/~esr/writings/taoup/
Personally, whenever I see that the script I'm planning to write will be longer than a dozen lines, I use Python instead of a shell script. One neat trick with Python scripting is that it's very easy to program in a style where you create both "unix spirit" command-line tools and libraries. E.g. for your "conflict manager", create a file (a Python module) and put the functionality in functions and/or classes, and then at the end put a "main" section (the usual if __name__ == '__main__': dance) where you parse command-line options (use the built-in optparse module's OptionParser for this, it's very nice!) and call the functionality in those functions/classes.
This way you can use the utility as a stand-alone command-line program, or you can import the module in another Python script and use the functionality defined there through functions/classes rather than going through command-line parsing.
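A minimal sketch of that pattern; the md5 comparison here is a hypothetical stand-in for the asker's real conflict-manager logic.

    # conflict_manager.py -- usable both as a library and as a command-line tool.
    import hashlib
    from optparse import OptionParser  # as suggested above; argparse is the modern equivalent

    def file_md5(path):
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    def is_duplicate(path_a, path_b):
        """Library entry point: import this from other scripts."""
        return file_md5(path_a) == file_md5(path_b)

    if __name__ == "__main__":
        # Command-line entry point: the same functionality as a stand-alone tool.
        parser = OptionParser(usage="usage: %prog FILE1 FILE2")
        _, args = parser.parse_args()
        if len(args) != 2:
            parser.error("expected exactly two files")
        print("duplicate" if is_duplicate(*args) else "different")

Imported, you call is_duplicate() directly; on the command line, the same check runs as python conflict_manager.py old.jpg new.jpg.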
Start with Wikipedia (Dataflow programming).
The book Software Tools (amazon) by Kernighan and Plauger is a classic on this subject. I think it should be required reading for any serious student of software development.
The Art of Unix Programming is quite a nice book on "the unix way", insofar as one exists. OTOH, if the way is "do as little work as gets your job done", you may already be there. :)
I think some of the keys to good gnu code are:
Handling system signals properly, like detaching hard drive files if SIGTERM is received.
Proper use of pipes and standard input/output.
Following common command-line flag conventions.
I would also recommend this book. It's pretty old, but I think it explains the principles of unix quite clearly.
