Creating a shader language - programming-languages

Is it at all possible for an individual to create a shader programming language?

SK-logic has the answer right there in the first comment. You can build a custom shading-language compiler which takes the custom language and "compiles" it into GLSL, HLSL, or anything else.
This is a realistic goal: existing shader methodologies can create combinatorial explosions of shader variants and can leave your shader implementation very restricted and interdependent. A custom shader language makes it easy to automate those dependencies, for example for multiple layers of effects, or for level-of-detail control.
You can make the shaders work the way you "wish" they worked. It also provides a portability layer that completely abstracts the underlying shader language, if desired.
The generated shaders go to the driver, which takes them the rest of the way and compiles native code for the GPU.
It is fairly straightforward to write such a compiler. All you need is a lexer and parser, an abstract syntax tree built from their output, a few passes over the tree to gather the information you need up front and to do some simple constant-expression optimization, and a final pass that generates the shaders as output from the syntax tree. I've been using Lemon as my parser generator lately.
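To make those last two passes concrete, here is a minimal sketch in Python (illustrative only, not the answerer's actual tooling; all node and variable names are hypothetical) of constant folding followed by GLSL emission over a tiny AST:

    # Toy AST for a hypothetical shading DSL, with a constant-folding
    # pass and a code-generation pass that emits a GLSL expression.
    import operator

    OPS = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}

    class Num:
        def __init__(self, value): self.value = value

    class Var:
        def __init__(self, name): self.name = name

    class BinOp:
        def __init__(self, op, left, right):
            self.op, self.left, self.right = op, left, right

    def fold(node):
        """Optimization pass: collapse operators whose operands are literals."""
        if isinstance(node, BinOp):
            left, right = fold(node.left), fold(node.right)
            if isinstance(left, Num) and isinstance(right, Num):
                return Num(OPS[node.op](left.value, right.value))
            return BinOp(node.op, left, right)
        return node

    def emit(node):
        """Code-generation pass: render a node as GLSL source text."""
        if isinstance(node, Num):
            return repr(node.value)
        if isinstance(node, Var):
            return node.name
        return f"({emit(node.left)} {node.op} {emit(node.right)})"

    # baseColor * (0.5 + 0.25) folds to baseColor * 0.75 before emission:
    tree = BinOp("*", Var("baseColor"), BinOp("+", Num(0.5), Num(0.25)))
    print(f"vec4 color = {emit(fold(tree))};")  # vec4 color = (baseColor * 0.75);

A real compiler would of course add types, statements, and multiple shader stages, but the pipeline shape (tree passes, then a string-emitting backend) stays the same.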

Related

Overview/showcase of shader techniques/uses for games

I am looking for resources that can give me a better understanding of what kinds of things shaders are used for in games, what they can do, and, maybe even more importantly, what they cannot. I understand how the graphics pipeline works and all that, and I have made some very basic shaders in GLSL (mostly just to replace fixed-function pipeline functionality), but I don't yet fully understand which things are only possible with custom shaders, which things can be done more efficiently, and so on. I have been able to find some examples of certain techniques, most notably lighting, but I am looking for a higher-level overview of their usage.
Links to, and explanations of, certain interesting techniques, as opposed to an overview, are also appreciated (but less than an overview ;) ), preferably in GLSL or pseudocode.
Well considering that DirectX and OpenGL have both moved towards a shader-only (i.e. no fixed-function) system, the answer to your "which effects are only possible with shaders" question could be "everything".
Some techniques that I believe were not possible/feasible without programmable shaders (or by using very specific black-box APIs) though, are:
Per-pixel lighting
Shadow mapping.
GPU skinning (i.e. matrix palette skinning) for animated meshes.
Any of the various post-process effects that are common today: bloom, SSAO, depth of field, etc.
Deferred shading.
Implementing arbitrary/"other" lighting models like Oren–Nayar, Cook–Torrance, Rim-Lighting, etc.
The list can go on, and I'm sure some people will disagree with my assessment that these couldn't be achieved with fixed-function functionality (through hacks or various fixed-function extensions).
What it boils down to is: before programmable shaders, a given effect had to be implemented in the hardware/driver by the vendor, and it had to be something that could be reasonably expressed through the API. Now you can execute effectively any user-defined code you want (within the constraints of the different shader stages and other limitations of the hardware), so you have the flexibility to greatly customize your rendering pipeline and invent new techniques as you see fit.
Take a look at the first couple of GPU Gems books (which can be read for free on Nvidia's website) to get a feel for the types of techniques that were showing up once programmable hardware was available.

Is it possible to mark up all programming languages under the object-oriented paradigm using a common markup schema?

I have planned to develop a tool that converts a program written in one programming language (e.g. Java) into a common markup language (e.g. XML), and then converts that markup code into another language (e.g. C#).
In simple words, it is a programming-language converter that converts a program written in one language into another language.
I think it is possible, but I don't know where to start. I want to know whether it is feasible, and about some existing systems.
What you are trying to do is extremely hard, but if you want to know what you are up for I've listed the steps you need to follow below:
First the hard bit:
1. Obtain or derive an operational semantics for your source and target languages.
2. Enhance the semantics to capture your source and target memory models.
3. Unify the two enhanced semantics within a common operational model.
4. Define a mapping from your source language onto the common operational model.
5. Define a mapping from the common operational model to your target language.
Step 4, as you pointed out in your question, is trivial.
Step 1 is difficult, as most languages do not have sufficiently formal semantics specified; but I recommend checking out http://lucacardelli.name/TheoryOfObjects.html as this is the best starting point for building a traditional OO semantics.
Step 2 is almost certainly impossible in general, but may be merely obscenely difficult if you are willing to sacrifice some efficiency.
Step 3 will depend on how clean the result of step 1 turned out, but is going to be anything from delicate and tricky to impossible.
Step 5 is not going to be trivial, it is effectively writing a compiler.
Ultimately, what you propose to do is impossible in general, due to the difficulties inherent in steps 1 and 2. However, it becomes merely difficult, but doable, if you are willing to: severely restrict the source-language constructs supported; pretty much forget about handling threads correctly; and pick two languages with sufficiently similar semantics (i.e. Java and C# are OK, but C++ and anything else is not).
It depends on what languages you want to support, but in general this is a huge & difficult task unless you plan to only support a very small subset of each language.
The real problem is that each programming language has different features (with some areas that overlap and others that don't) and different ways of solving the same problems, and it's pretty tricky to detect the problem the programmer is trying to solve and convert it to a new idiom. :) And think about the differences between GUIs created in different languages...
See http://xmlvm.org/ as an example (a project aimed at converting between the source code of many different languages, with an XML middle point). The site covers in some depth the challenges they are tackling and the compromises they make; if you still have any interest in this kind of project, ask more specific follow-up questions there.
Notice specifically what the output source code looks like -- it's not at all readable, maintainable, efficient, etc..
It is "technically easy" to produce XML for any single langauge: build a parser, construct and abstract syntax tree, and dump out that tree as XML. (I build tools that do this off-the-shelf for many languages). By technically easy, I mean that the community knows how to do this (see any compiler textbook, e.g., Aho&Ullman Dragon book). I do not mean this is a trivial exercise in terms of effort, because real languages are complicated and messy; there have been many attempts to build C++ parsers and few successes. (I have one of the successes, and it was expensive to get right).
What is really hard (and I don't try to do) is produce XML according to a single schema in which the language semantics are exposed. And without that, it will be essentially impossible to write a translator from a generic XML to an arbitrary target language. This is known as the UNCOL problem and people have been looking since 1958 for the answer. I note that the Wikipedia article seems to indicate the problem is solved, but you can't find many references to UNCOL in the literature since 1961.
The closest attempt I've seen to this is the OMG's "ASTM" model (http://www.omg.org/spec/ASTM/1.0/Beta1/); it exports XMI, which is XML. But the ASTM model has lots of escapes built into it to allow languages that it doesn't model perfectly (AFAIK, that means every language) to extend the XMI in arbitrary ways so that language-specific information can be encoded. Consequently each language parser produces a custom version of the XMI, so each reader has to pretty much know about the extensions, and full generality vanishes.

How to develop a domain-specific language on top of another language?

Say I found a good open-source library written in Python. I want to wrap some of the functions or methods that I have created in an easy-to-understand language of my own.
For example, porter_stemm(DOC) (in the DSL) would be equivalent to a function or series of methods written in Python.
I want to create a DSL that is easy to learn, but I need this DSL translated into calls to the original open-source software.
I'm not sure if I am being clear here, but my intention is to:
Create an easy-to-learn language that users can use to solve problems in a certain niche.
Have this simple language translated, compiled, or interpreted via some middleware into the original open-source software's language (Python).
Write your DSL's syntax and a parser for it in Python, e.g. with pyparsing (simpler than the traditional lex/yacc-like approach). The parser can produce a tree of semantically meaningful nodes, and then you can either walk that tree and interpret it (simpler), or walk that tree and generate equivalent Python code (a bit less simple). That's the approach I'd suggest for most host languages. Some host languages (Lisp and Scheme being the main ones) have powerful macros, so building a DSL out of macros is more common in those languages.
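As a minimal sketch of that approach (assuming pyparsing is installed; the porter_stemm stand-in and the COMMANDS table are hypothetical, not part of any real library):

    # Parse DSL calls like porter_stemm(DOC) and dispatch them to
    # ordinary Python functions standing in for the wrapped library.
    from pyparsing import Optional, Suppress, Word, alphanums, alphas, delimitedList

    identifier = Word(alphas + "_", alphanums + "_")
    call = identifier + Suppress("(") + Optional(delimitedList(identifier)) + Suppress(")")

    COMMANDS = {
        "porter_stemm": lambda doc: [w.rstrip("s") for w in doc.split()],
    }

    def run(line, env):
        tokens = call.parseString(line, parseAll=True).asList()
        func, args = tokens[0], tokens[1:]
        return COMMANDS[func](*(env[name] for name in args))

    print(run("porter_stemm(DOC)", {"DOC": "stems words endings"}))
    # ['stem', 'word', 'ending']

Swapping the interpreting run function for one that emits Python source instead gives you the code-generation variant.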
Embedding your DSL in the host language basically means you're not really doing a DSL but a more traditional framework, so that's really a different approach (possibly more powerful, but maybe not quite as easy for non-programmers to learn ;-)).
Typically a DSL would still have the same basic syntax as its host language, and simply extend the language's capabilities. So everything for you is still written in Python, it's just Python with some domain-specific functions, variables, etc.
My experience with DSLs is mostly with the Lisp family of languages. There, a typical DSL might be implemented with macros, which generate code patterns specific to the DSL (plus functions relating to the DSL). The point is, although the macros and functions may extend the capabilities of the language, they are still implemented in that language.
I suppose you could go a step further and write an interpreter in Python - but there you still get Python runnability for free, as your interpreter translates your custom language back into Python.
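For contrast with the parser-based approach, here is a minimal sketch of the embedded style discussed above, where the "DSL" is nothing more than a Python module of domain vocabulary (both function names are hypothetical stand-ins):

    # The "embedded DSL" style: domain operations are plain Python
    # functions, so a user's program is just Python with a small,
    # niche-specific vocabulary.
    def load_doc(text):
        """Pretend loader; a real one might read a file or fetch a URL."""
        return text.lower()

    def porter_stemm(doc):
        """Stand-in for a real stemming routine from the wrapped library."""
        return [word.rstrip("s") for word in doc.split()]

    # A user "script" reads like domain vocabulary but runs as ordinary Python:
    doc = load_doc("Stems Words Endings")
    print(porter_stemm(doc))  # ['stem', 'word', 'ending']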

How to create a language these days?

I need to get around to writing that programming language I've been meaning to write. How do you kids do it these days? I've been out of the loop for over a decade; are you doing it any differently now than we did back in the pre-internet, pre-windows days? You know, back when "real" coders coded in C, used the command line, and quibbled over which shell was superior?
Just to clarify, I mean, not how do you DESIGN a language (that I can figure out fairly easily) but how do you build the compiler and standard libraries and so forth? What tools do you kids use these days?
One consideration that's new since the punched card era is the existence of virtual machines already bountifully provided with "standard libraries." Targeting the JVM or the .NET CLR instead of ye olde "language walled garden" saves you a lot of bootstrapping. If you're creating a compiled language, you may also find Java byte code or MSIL an easier compile target than machine code (of course, if you're in this for the fun of creating a tight optimising compiler then you'll see this as a bug rather than a feature).
On the negative side, the idioms of the JVM or CLR may not be what you want for your language, so you may still end up building "standard libraries" just to provide idiomatic interfaces over the platform facilities. (An example is that every language and its dog seems to provide its own method for writing to the console, rather than leaving users to manually call System.out.println or Console.WriteLine.) Nevertheless, it enables incremental development of the idiomatic libraries, and means that the more obscure libraries, for which you never get round to building idiomatic interfaces, are still accessible, even if in an ugly way.
If you're considering an interpreted language, .NET also has support for efficient interpretation via the Dynamic Language Runtime (DLR). (I don't know if there's an equivalent for the JVM.) This should help free you up to focus on the language design without having to worry so much about the optimisation of the interpreter.
I've written two compilers now in Haskell for small domain-specific languages, and have found it to be an incredibly productive experience. The parsec library makes playing with syntax easy, and interpreters are very simple to write over a Haskell data structure. There is a description of writing a Lisp interpreter in Haskell that I found helpful.
If you are interested in a high-performance backend, I recommend LLVM. It has a concise and elegant byte-code and the best x86/amd64 generating backend you can find. There is an optional garbage collector, and some experimental backends that target the JVM and CLR.
You can write a compiler in any language that produces LLVM bytecode. If you are adventurous enough to learn Haskell but want LLVM, there are a set of Haskell-LLVM bindings.
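As a hedged illustration of the simplest way to target LLVM, here is a toy "backend" (sketched in Python for consistency with the other examples here) that emits textual LLVM IR, which LLVM tools such as llc can then compile to native code:

    # Toy backend: the simplest way to target LLVM is to write textual
    # LLVM IR and hand it to the LLVM tools. Real compilers usually
    # build IR through LLVM's APIs or bindings rather than pasting strings.
    def emit_add_function(name):
        return "\n".join([
            f"define i32 @{name}(i32 %a, i32 %b) {{",
            "entry:",
            "  %sum = add i32 %a, %b",
            "  ret i32 %sum",
            "}",
            "",
        ])

    with open("add.ll", "w") as f:
        f.write(emit_add_function("add"))
    # Then, for example:  llc add.ll   (produces add.s for the host target)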
What has changed considerably but hasn't been mentioned yet is IDE support and interoperability:
Nowadays we pretty much expect Intellisense, step-by-step execution and state inspection "right in the editor window", new types that tell the debugger how to treat them, and rather helpful diagnostic messages. The old "compile .x -> .y" executable is not enough to create a language anymore. The environment is not something to focus on first, but it affects willingness to adopt.
Also, libraries have become much more powerful; no one wants to implement all that in yet another language. Try to borrow: make it easy to call existing code, and make it easy to be called by other code.
Targeting a VM - as itowlson suggested - is probably a good way to get started. If that turns out to be a problem, it can still be replaced by native compilers.
I'm pretty sure you do what's always been done.
Write some code, and show your results to the world.
As compared to the olden times, there are some tools to make your job easier though. Might I suggest ANTLR for parsing your language grammar?
Speaking as someone who just built a very simple assembly-like language and interpreter, I'd start out with the .NET framework or similar. Nothing can beat the powerful syntax of C# plus the backing of the entire .NET community when attempting to write most things. From here I designed a simple bytecode format and assembly syntax and proceeded to write my interpreter and assembler.
Like I said, it was a very simple language.
You should not accept wimpy solutions like using the latest tools. You should bootstrap the language by writing a minimal compiler in Visual Basic for Applications or a similar language, then write all the compilation tools in your new language and then self-compile it using only the language itself.
Also, what is the proposed name of the language?
I think recently there have not been languages with ALL CAPITAL LETTER names like COBOL and FORTRAN, so I hope you will call it something like MIKELANG with all capital letters.
Not so much an implementation as a design decision which affects implementation: if you make every statement of your language have a unique parse tree without context, you'll get something for which it's easy to hand-code a parser, and which doesn't require large amounts of work to provide syntax highlighting for. Similarly, simple things like using different symbols for module namespaces and object namespaces (unlike Java, which uses . for both package and class namespaces) mean you can parse the code without loading every module it refers to.
Standard libraries - include the equivalent of everything in C99 standard libraries other than setjmp. Add whatever else you need for your domain. Work out an easy way to do this, either something like SWIG or an in-line FFI such as Ruby's [can't remember module name] and Python's ctypes.
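For instance, here is a minimal sketch of the in-line FFI route using Python's ctypes (the library name is platform-specific; Linux is assumed here):

    # In-line FFI with Python's ctypes: call straight into an existing C
    # library with no wrapper-generation step (unlike SWIG). The library
    # name below is platform-specific (Linux assumed).
    import ctypes

    libc = ctypes.CDLL("libc.so.6")

    # Declaring argument and return types keeps the call honest:
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    print(libc.strlen(b"hello"))  # 5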
Building as much of the language in the language itself is an option, but projects that start out that way either give up (Rubinius moved to using C++ for parts of its standard library) or exist only for research purposes (Mozilla Narcissus).
I am actually a kid, haha. I've never written an actual compiler before or designed a language, but I have finished The Red Dragon Book, so I suppose I have somewhat of an idea (I hope).
It would depend firstly on the grammar. If it's LR or LALR I suppose tools like Bison/Flex would work well. If it's more LL, I'd use Spirit, which is a component of Boost. It allows you to write the language's grammar in C++ in an EBNF-like syntax, so no muddling around with code generators; the C++ compiler compiles the grammar for you. If any of these fail, I'd write an EBNF grammar on paper, and then proceed to do some heavy recursive descent parsing, which seems to work; if C++ can be parsed pretty well using RDP (as GCC does it), then I suppose with enough unit tests and patience you could write entire compilers using RDP.
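For a feel of how small a hand-written recursive descent parser can be, here is a sketch of one (in Python rather than C++, purely for brevity) that parses and evaluates arithmetic expressions:

    # Minimal recursive descent parser/evaluator for the grammar
    #   expr   := term (("+" | "-") term)*
    #   term   := factor (("*" | "/") factor)*
    #   factor := NUMBER | "(" expr ")"
    import re

    def tokenize(src):
        return re.findall(r"\d+|[-+*/()]", src)

    class Parser:
        def __init__(self, tokens):
            self.tokens, self.pos = tokens, 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, expected=None):
            token = self.tokens[self.pos]
            assert expected is None or token == expected, f"expected {expected}, got {token}"
            self.pos += 1
            return token

        def expr(self):  # one method per grammar rule
            value = self.term()
            while self.peek() in ("+", "-"):
                op, rhs = self.eat(), self.term()
                value = value + rhs if op == "+" else value - rhs
            return value

        def term(self):
            value = self.factor()
            while self.peek() in ("*", "/"):
                op, rhs = self.eat(), self.factor()
                value = value * rhs if op == "*" else value / rhs
            return value

        def factor(self):
            if self.peek() == "(":
                self.eat("(")
                value = self.expr()
                self.eat(")")
                return value
            return int(self.eat())

    print(Parser(tokenize("2 + 3 * (4 - 1)")).expr())  # 11

Returning AST nodes instead of computed values from each rule method turns the same skeleton into a compiler front end.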
Once I have a parser running and some sort of intermediate representation, it then depends on how it runs. If it's a bytecode or native-code compiler, I'll use LLVM or libJIT to process it. LLVM is more suited for general compilation, but I like the libJIT API and documentation better. Alternatively, if I'm really lazy, I'll generate C code and let GCC do the actual compilation. Another alternative is to target an existing VM, like Parrot or the JVM or the CLR. Parrot is the VM being designed for Perl. If it's just an interpreter, I'll walk the syntax tree.
A radical alternative is to use Prolog, which has syntax features that remarkably simulate EBNF. I have no experience with it though, and if I am not wrong (which I am almost certainly going to be), Prolog would be quite slow if used to parse heavy-duty programming languages with a lot of syntactical constructs and quirks (read: C++ and Perl).
All this I'll do in C++, if only because I am more used to writing in it than C. I'd stay away from Java/Python or anything of that sort for the actual production code (writing compilers in C/C++ helps make them portable), but I could see myself using them as prototyping languages, especially Python, which I am partial towards. Of course, I've never actually done any of this before, so I'm not one to say.
On lambda-the-ultimate there's a link to Create Your Own Programming Language by Marc-André Cournoyer, which appears to describe how to leverage some modern tools for creating little languages.
Just to clarify, I mean, not how do you DESIGN a language (that I can figure out fairly easily)
Just a hint: look at some quite different languages first, before designing a new language (i.e. languages with a very different evaluation strategy). Haskell and Oz come to mind, though you should also know Prolog and Scheme. A year ago I was also like "hey, let's design a language that behaves exactly as I want", but fortunately I looked at those other languages first (or you could also say unfortunately, because now I don't know how I want a language to behave anymore...).
Before you start creating a language you should read this:
Hanspeter Moessenboeck, The Art of Niklaus Wirth
ftp://ftp.ssw.uni-linz.ac.at/pub/Papers/Moe00b.pdf
There's a big shortcut to implementing a language that I don't see in the other answers here. If you use one of Łukasiewicz's "unparenthesized" forms (i.e. forward Polish or reverse Polish) you don't need a parser at all! With reverse Polish, the dependencies go right to left, so you simply execute each token as it's scanned. With forward Polish, it's the reverse of that, so you actually execute the program "backwards", simplifying subexpressions until reaching the starting token.
To understand why this works, you should investigate the three primary tree-traversal algorithms: pre-order, in-order, and post-order. These three traversals are the inverse of the parsing task that a language reader (i.e. a parser) has to perform. Only the in-order notation "requires" a recursive descent to reconstruct the expression tree. With the other two, you can get away with just a stack.
This may require more "thinking" and less "implementing".
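As a minimal sketch of the reverse Polish case (a toy evaluator, not any particular language), a single stack does all the work:

    # Toy reverse Polish evaluator: each token is executed as it is
    # scanned, so one stack replaces the entire parser.
    import operator

    OPS = {"+": operator.add, "-": operator.sub,
           "*": operator.mul, "/": operator.truediv}

    def eval_rpn(source):
        stack = []
        for token in source.split():
            if token in OPS:
                b, a = stack.pop(), stack.pop()  # right operand is on top
                stack.append(OPS[token](a, b))
            else:
                stack.append(float(token))
        return stack.pop()

    print(eval_rpn("3 4 1 - *"))  # 3 * (4 - 1) = 9.0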
BTW, if you've already found an answer (this question is a year old), you can post that and accept it.
Real coders still code in C. Just that it's a little sharper.
Hmmm... language design? or writing a compiler?
If you want to write a compiler, you'd use Flex + Bison. (google)
Not an easy answer, but..
You essentially want to define a set of rules written in text (tokens) and then some parser that checks these rules and assembles them into fragments.
http://www.mactech.com/articles/mactech/Vol.16/16.07/UsingFlexandBison/
People can spend years on this. The above article talks about using two tools (Flex and Bison) that can be used to turn text into code you can feed to a compiler.
First I spent a year or so actually thinking about how the language should look. At the same time I helped develop Ioke (www.ioke.org) to learn about language internals.
I chose Objective-C as the implementation platform, as it's a fast (enough), simple, and rich language. It also provides a test framework, so an agile approach is a go, and it has a rich standard library I can build upon.
Since my language is simple on the syntactic level (no keywords, only literals, operators, and messages), I could go with Ragel (http://www.complang.org/ragel/) for building the scanner. It's fast as hell and simple to use.
Now I have a working object model, scanner and simple operator shuffling plus standard library bootstrap code. I can even run a simple programs - as long as they fit in one file that is :)
Of course older techniques are still common (e.g. using Flex and Bison), but many newer language implementations combine the lexing and parsing phases by using a parser based on a parsing expression grammar (PEG). This works for recursive descent parsers created using combinators, or for memoizing packrat parsers. Many compilers are also built using the ANTLR framework.
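To illustrate the combinator style (a toy sketch, not any particular library's API):

    # Toy PEG-style parser combinators: a parser is a function from
    # (text, pos) to (value, new_pos), or None on failure.
    def literal(s):
        def parse(text, pos):
            return (s, pos + len(s)) if text.startswith(s, pos) else None
        return parse

    def seq(*parsers):
        def parse(text, pos):
            values = []
            for p in parsers:
                result = p(text, pos)
                if result is None:
                    return None
                value, pos = result
                values.append(value)
            return values, pos
        return parse

    def choice(*parsers):  # PEG's ordered choice: first match wins
        def parse(text, pos):
            for p in parsers:
                result = p(text, pos)
                if result is not None:
                    return result
            return None
        return parse

    greeting = seq(choice(literal("hi"), literal("hello")), literal(" world"))
    print(greeting("hello world", 0))  # (['hello', ' world'], 11)

A packrat parser adds memoization of (parser, pos) results on top of exactly this structure, trading memory for guaranteed linear time.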
Use Bison/Flex, the GNU versions of yacc/lex. This book is extremely helpful.
The reason to use Bison is that it catches any conflicts in the language. I used it and it made my life much easier (OK, so I'm on my second year, but the first six months were a few years ago, writing it in C++, and the parsing/conflicts/results were terrible! :( ).
If you want to write a compiler obviously you need to read the Dragon Book ;)
Here is another good book that I have just read. It is practical and easier to understand than the Dragon Book:
http://www.amazon.co.uk/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=language+implementation+patterns&x=0&y=0
Mike --
If you're interested in an efficient native-code-generating compiler for Windows so you can get your bearings -- without wading through all the unnecessary widgets, gadgets, and other nonsense that clutter today's machines -- I recommend the Osmosian Order's Plain English development system. It includes a unique interface, a simplified file manager, a friendly text editor, a handy hexadecimal dumper, the compiler/linker (of course), and a WYSIWYG page-layout application for documentation. Written entirely in Plain English, it is a quick download (less than a megabyte), small enough to understand in short order (about 25,000 lines of Plain English code, with just 4,000 in the compiler/linker), yet powerful enough to reproduce itself on a bottom-of-the-line Dell in less than three seconds. Really: three seconds. And it's free to all who write and ask for a copy, including the source code and a rather humorous tongue-in-cheek 100-page manual. See www.osmosian.com for details on how to get a copy, or write to me directly with questions or comments: Gerry.Rzeppa#pobox.com

Best thing for 3D and raytracing

I want to play around with some graphics stuff. Simple animations and things. I want to fool around with raytracing too. I need help finding a library that will help me do these things. I have a few requirements:
Must be able to do raytracing
Must be for a high-level language (Python, .NET, etc.); please no C/C++.
Must have good documentation, preferably with examples.
Does anyone know of a good library I can use to fool around with?
Have a look at blender.org - it's an open-source 3D project with Python scripting capabilities.
The first thing that comes to my mind is the popular open-source POV-Ray raytracer (www.povray.org). POV scenes are defined entirely with script files, and some people have written Python code to generate them easily.
http://code.activestate.com/recipes/205451/
http://jabas-unblog.blogspot.com/2007/04/easy-procedural-graphics-python-and-pov.html
I'm not aware of any libraries that satisfy your request (at least not unless I decide to publish the code for my own tracer...).
Writing a tracer isn't actually that hard anyway. I'd strongly recommend getting hold of a copy of "An Introduction to Ray Tracing" by Glassner. It goes through the actual math in relatively easy-to-understand terms, and also has a whole section on "how to write a ray tracer".
In any event, a "library" isn't all that much use on its own - pretty much every ray tracer has its own internal libraries but they're specific to the tracer. They typically include:
a base class to represent 3D objects
subclasses of that for each geometric primitive
vector and matrix classes (3D and 4D)
texturing functions and/or classes
light classes of various types (point light, spot light, etc)
For my own tracer I actually used the javax.vecmath packages for #3 above, but had to write my own code for #1 and #2 based on the Glassner book. The whole thing is well under 2k lines of code, and most of the individual classes are about 40 lines long.
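As a taste of item #2 in that list, here is a minimal ray-sphere intersection in Python (the kind of routine each geometric-primitive subclass implements; all names are illustrative):

    # Toy ray-sphere intersection, the core routine a "Sphere" primitive
    # subclass would implement. Assumes the ray direction is normalized.
    import math

    def intersect_sphere(origin, direction, center, radius):
        """Return distance t along the ray to the nearest hit, or None."""
        oc = tuple(o - c for o, c in zip(origin, center))
        b = 2.0 * sum(d * o for d, o in zip(direction, oc))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c  # quadratic discriminant, with a == 1
        if disc < 0:
            return None  # the ray misses the sphere
        t = (-b - math.sqrt(disc)) / 2.0  # nearer of the two roots
        return t if t > 0 else None

    # Ray from the origin straight down -z at a unit sphere at (0, 0, -5):
    print(intersect_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0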
I believe there are a few people putting together ray tracers using XNA Game Studio.
One example of this with code can be seen over at:
Bespoke Software » Ray Tracing - Materials
The well-developed raytracers that are open source are:
Yafray
Povray
For realtime 3D (it will be language-dependent, of course) there is JMonkeyEngine (Java); not sure whether that meets your "high level language" requirement.
You could consider a 3D game scripting language too, like GameCore or BlitzBasic
