Unknown language identification - colors

I'm currently working with a defunct networked application that received packets for a chat.
I found that it received text like this:
hi there! {c:0000FF}foo{/c} sentence
I have searched but could not find a language that uses this color syntax. Is it some well-known language, or is it most likely a home-made script/library?
The original application was developed in C++ and Python.
Thank you all in advance,
Rag.

Hard to prove a negative, but I'll wager it's a proprietary ('self made') syntax.
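Whatever its origin, the markup is simple enough to handle directly. A minimal Python sketch, assuming the tags always look exactly like the captured sample (six hex digits, a matching {/c} closer); the function name is illustrative, not anything from the original application:

```python
import re

# One tag pair: {c:RRGGBB} ... {/c}, non-greedy so multiple tags per line work.
COLOR_TAG = re.compile(r"\{c:([0-9A-Fa-f]{6})\}(.*?)\{/c\}")

def extract_colored(text):
    """Return (hex_color, inner_text) pairs found in one chat line."""
    return COLOR_TAG.findall(text)

print(extract_colored("hi there! {c:0000FF}foo{/c} sentence"))
# -> [('0000FF', 'foo')]
```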


BASIC to whatever language converter

Is there a converter from BASIC to some other programming language (Perl, Python, R, ...)?
Bearing in mind that BASIC was once very widespread - every PC had a BASIC interpreter and BASIC was even taught in schools - I would expect there to be a converter from BASIC to some other language, but I could not find one.
I have been mutilated before, and it took me decades to recover. I don't want to experience this yet again. I am trying to convert an old program to R, but reading some 400 lines of BASIC code and finding 35 GOTOs is already taking its toll.
There is a BASIC to C translator called BaCon. It offers a modern BASIC style without the need to use GOTOs. It also supports functions for building GUIs.
https://www.basic-converter.org/
http://basic-converter.proboards.com/

Toolkits to design a TTS (Text-to-speech) system for a custom language?

I'd like to create a TTS system for a Native American language (Wayuunaiki).
The language is written in the Latin (Western) alphabet.
I also have information about the phonetics (the rules to convert each word into IPA symbols).
I'm planning to create a database of voice recordings from the native people. Then I want to train on that data, using the IPA equivalence information to generate a more accurate speech model.
I'm totally new to Natural Language Processing, so my question is: which tools can I use to do what I'm planning?
I've heard that HTK and CMU Sphinx are quite good at speech recognition. I have no idea about speech generation. I've also heard about Festival, but I read that it only supports a predefined set of well-known languages: English, Spanish, and so on.
Excuse my typing mistakes. I'm still learning English. Thanks in advance!
You can add a new language in Festival; it's actually specifically designed to simplify new language creation. For more details, read the Festvox book:
http://festvox.org/bsv/
Another toolkit to consider is OpenMary; see their documentation too:
https://github.com/marytts/marytts/wiki/New-Language-Support
It is more modern and might be easier for you.
In any case, you will have to spend some time writing the code that describes your language; usually it's about 300 lines of code. After that you can record a single-speaker TTS database and run the voice-building process. The more you record, the better the result will be.
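For the letter-to-sound part, that code often boils down to an ordered grapheme-to-phone table. A minimal Python sketch with a purely hypothetical rule set (the real rules would come from the IPA information you already have, and Festival itself expects them in its own Scheme-based format):

```python
# Ordered rules: longest graphemes first so "ch" wins over "c".
# These mappings are placeholders, not real Wayuunaiki phonology.
RULES = [
    ("ch", "tʃ"),
    ("ü",  "ɯ"),
    ("j",  "h"),
    ("w",  "w"),
]

def to_ipa(word):
    out, i = [], 0
    while i < len(word):
        for grapheme, phone in RULES:
            if word.startswith(grapheme, i):
                out.append(phone)
                i += len(grapheme)
                break
        else:                     # no rule matched: copy the letter as-is
            out.append(word[i])
            i += 1
    return "".join(out)

print(to_ipa("wüin"))   # -> 'wɯin' with this rule set
```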
Use the Festival toolkit for text-to-speech (tip: use a Linux operating system).

How to create a language these days?

I need to get around to writing that programming language I've been meaning to write. How do you kids do it these days? I've been out of the loop for over a decade; are you doing it any differently now than we did back in the pre-internet, pre-windows days? You know, back when "real" coders coded in C, used the command line, and quibbled over which shell was superior?
Just to clarify, I mean, not how do you DESIGN a language (that I can figure out fairly easily) but how do you build the compiler and standard libraries and so forth? What tools do you kids use these days?
One consideration that's new since the punched card era is the existence of virtual machines already bountifully provided with "standard libraries." Targeting the JVM or the .NET CLR instead of ye olde "language walled garden" saves you a lot of bootstrapping. If you're creating a compiled language, you may also find Java byte code or MSIL an easier compile target than machine code (of course, if you're in this for the fun of creating a tight optimising compiler then you'll see this as a bug rather than a feature).
On the negative side, the idioms of the JVM or CLR may not be what you want for your language. So you may still end up building "standard libraries" just to provide idiomatic interfaces over the platform facilities. (An example is that every language and its dog seems to provide its own method for writing to the console, rather than leaving users to manually call System.out.println or Console.WriteLine.) Nevertheless, it enables incremental development of the idiomatic libraries, and means that the more obscure libraries for which you never get round to building idiomatic interfaces are still accessible, even if in an ugly way.
If you're considering an interpreted language, .NET also has support for efficient interpretation via the Dynamic Language Runtime (DLR). (I don't know if there's an equivalent for the JVM.) This should help free you up to focus on the language design without having to worry so much about the optimisation of the interpreter.
I've written two compilers now in Haskell for small domain-specific languages, and have found it to be an incredibly productive experience. The parsec library makes playing with syntax easy, and interpreters are very simple to write over a Haskell data structure. There is a description of writing a Lisp interpreter in Haskell that I found helpful.
If you are interested in a high-performance backend, I recommend LLVM. It has a concise and elegant byte-code and the best x86/amd64 generating backend you can find. There is an optional garbage collector, and some experimental backends that target the JVM and CLR.
You can write a compiler in any language that produces LLVM bytecode. If you are adventurous enough to learn Haskell but want LLVM, there is a set of Haskell-LLVM bindings.
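To give a feel for what "producing LLVM" looks like from a high-level language, here is a minimal sketch using the llvmlite Python bindings (chosen here purely for illustration; the answer itself points at the Haskell bindings). It emits IR for a two-argument integer add function:

```python
from llvmlite import ir

module = ir.Module(name="demo")
i32 = ir.IntType(32)
fnty = ir.FunctionType(i32, (i32, i32))
func = ir.Function(module, fnty, name="add")

block = func.append_basic_block(name="entry")
builder = ir.IRBuilder(block)
builder.ret(builder.add(func.args[0], func.args[1], name="sum"))

print(module)   # textual LLVM IR, ready for opt/llc or a JIT execution engine
```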
What has changed considerably but hasn't been mentioned yet is IDE support and interoperability:
Nowadays we pretty much expect IntelliSense, step-by-step execution and state inspection "right in the editor window", new types that tell the debugger how to treat them, and rather helpful diagnostic messages. The old "compile .x -> .y" executable is not enough to create a language anymore. The environment is not something to focus on first, but it affects willingness to adopt.
Also, libraries have become much more powerful; no one wants to reimplement all of that in yet another language. Try to borrow, make it easy to call existing code, and make it easy to be called by other code.
Targeting a VM - as itowlson suggested - is probably a good way to get started. If that turns out a problem, it can still be replaced by native compilers.
I'm pretty sure you do what's always been done.
Write some code, and show your results to the world.
As compared to the olden times, there are some tools to make your job easier though. Might I suggest ANTLR for parsing your language grammar?
Speaking as someone who just built a very simple assembly-like language and interpreter, I'd start out with the .NET Framework or similar. Nothing can beat the powerful syntax of C# plus the backing of the entire .NET community when attempting to write most things. From there I designed a simple bytecode format and assembly syntax and proceeded to write my interpreter + assembler.
Like I said, it was a very simple language.
You should not accept wimpy solutions like using the latest tools. You should bootstrap the language by writing a minimal compiler in Visual Basic for Applications or a similar language, then write all the compilation tools in your new language and then self-compile it using only the language itself.
Also, what is the proposed name of the language?
I think recently there have not been languages with ALL CAPITAL LETTER names like COBOL and FORTRAN, so I hope you will call it something like MIKELANG with all capital letters.
Not so much an implementation but a design decision which affects implementation - if you make every statement of your language have a unique parse tree without context, you'll get something for which it's easy to hand-code a parser, and which doesn't require large amounts of work to provide syntax highlighting for. Similarly, simple things like using a different symbol for module namespaces and object namespaces (unlike Java, which uses . for both package and class namespaces) mean you can parse the code without loading every module it refers to.
Standard libraries - include the equivalent of everything in C99 standard libraries other than setjmp. Add whatever else you need for your domain. Work out an easy way to do this, either something like SWIG or an in-line FFI such as Ruby's [can't remember module name] and Python's ctypes.
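As a concrete example of the in-line FFI route, Python's ctypes can call into an existing C library without any generated wrapper code (the libm/cos call below is just an illustration):

```python
import ctypes
import ctypes.util

# Locate and load the C math library; find_library may return None on some
# platforms (e.g. Windows), in which case you would pass the DLL name instead.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))   # 1.0
```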
Building as much of the language in the language itself is an option, but projects which start out that way either give up (Rubinius moved to using C++ for parts of its standard library) or remain research-only (Mozilla Narcissus).
I am actually a kid, haha. I've never written an actual compiler before or designed a language, but I have finished The Red Dragon Book, so I suppose I have somewhat of an idea (I hope).
It would depend firstly on the grammar. If it's LR or LALR I suppose tools like Bison/Flex would work well. If it's more LL, I'd use Spirit, which is a component of Boost. It allows you to write the language's grammar in C++ in an EBNF-like syntax, so no muddling around with code generators; the C++ compiler compiles the grammar for you. If any of these fail, I'd write an EBNF grammar on paper, and then proceed to do some heavy recursive descent parsing, which seems to work; if C++ can be parsed pretty well using RDP (as GCC does it), then I suppose with enough unit tests and patience you could write entire compilers using RDP.
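To make the recursive descent option concrete, here is a minimal Python sketch for a toy expression grammar (the grammar and all names are illustrative only, not tied to any of the tools mentioned above):

```python
import re

# Toy grammar:  expr := term (('+'|'-') term)* ;  term := NUMBER

def tokenize(src):
    """Split source into number and operator tokens, skipping spaces."""
    return re.findall(r"\d+|[+\-]", src)

class Parser:
    def __init__(self, src):
        self.tokens = tokenize(src)
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def advance(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def expr(self):
        """expr := term (('+'|'-') term)*  -- returns a nested tuple tree."""
        node = self.term()
        while self.peek() in ("+", "-"):
            op = self.advance()
            node = (op, node, self.term())
        return node

    def term(self):
        """term := NUMBER"""
        return int(self.advance())

print(Parser("1 + 2 - 3").expr())   # ('-', ('+', 1, 2), 3)
```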
Once I have a parser running and some sort of intermediate representation, it then depends on how it runs. If it's some bytecode or native code compiler, I'll use LLVM or libJIT to process it. LLVM is more suited for general compilation, but I like the libJIT API and documentation better. Alternatively, if I'm really lazy, I'll generate C code and let GCC do the actual compilation. Another alternative, is to target an existing VM, like Parrot or the JVM or the CLR. Parrot is the VM being designed for Perl. If it's just an interpreter, I'll walk the syntax tree.
A radical alternative is to use Prolog, which has syntax features which remarkably simulate EBNF. I have no experience with it though, and if I am not wrong (which I am almost certainly going to be), Prolog would be quite slow if used to parse heavy duty programming languages with a lot of syntactical constructs and quirks (read: C++ and Perl).
All this I'll do in C++, if only because I am more used to writing in it than C. I'd stay away from Java/Python or anything of that sort for the actual production code (writing compilers in C/C++ helps make them portable), but I could see myself using them as prototyping languages, especially Python, which I am partial towards. Of course, I've never actually done any of this before, so I'm not one to say.
On lambda-the-ultimate there's a link to Create Your Own Programming Language by Marc-André Cournoyer, which appears to describe how to leverage some modern tools for creating little languages.
Just to clarify, I mean, not how do you DESIGN a language (that I can figure out fairly easily)
Just a hint: look at some quite different languages first, before designing a new language (i.e. languages with a very different evaluation strategy). Haskell and Oz come to mind, though you should also know Prolog and Scheme. A year ago I was also like "hey, let's design a language that behaves exactly as I want", but fortunately I looked at those other languages first (or you could also say unfortunately, because now I don't know how I want a language to behave anymore...).
Before you start creating a language you should read this:
Hanspeter Moessenboeck, The Art of Niklaus Wirth
ftp://ftp.ssw.uni-linz.ac.at/pub/Papers/Moe00b.pdf
There's a big shortcut to implementing a language that I don't see in the other answers here. If you use one of Łukasiewicz's "unparenthesized" forms (i.e. forward Polish or reverse Polish) you don't need a parser at all! With reverse Polish, the dependencies go right-to-left, so you simply execute each token as it's scanned. With forward Polish, it's the reverse of that, so you actually execute the program "backwards", simplifying subexpressions until reaching the starting token.
To understand why this works, you should investigate the three primary tree-traversal algorithms: pre-order, in-order, post-order. These three traversals are the inverse of the parsing task that a language reader (i.e. parser) has to perform. Only the in-order notation "requires" a recursive descent to reconstruct the expression tree. With the other two, you can get away with just a stack.
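A minimal Python sketch of the reverse Polish case (the operator set and the example program are arbitrary): every operand is pushed, every operator pops its arguments, so a single left-to-right scan with a stack is the whole "parser" and evaluator.

```python
def eval_rpn(tokens):
    """Evaluate a reverse Polish expression given as a list of tokens."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b,
           "/": lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # operators consume the top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok)) # operands are simply pushed
    return stack.pop()

print(eval_rpn("3 4 + 2 *".split()))   # (3 + 4) * 2 = 14.0
```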
This may require more "thinking" and less "implementing".
BTW, if you've already found an answer (this question is a year old), you can post that and accept it.
Real coders still code in C. Just that it's a little sharper.
Hmmm... language design? or writing a compiler?
If you want to write a compiler, you'd use Flex + Bison. (google)
Not an easy answer, but..
You essentially want to define a set of rules written in text (tokens) and then some parser that checks these rules and assembles them into fragments.
http://www.mactech.com/articles/mactech/Vol.16/16.07/UsingFlexandBison/
People can spend years on this. The above article talks about using two tools (Flex and Bison) that can be used to turn text into code you can feed to a compiler.
First I spent a year or so actually thinking about how the language should look. At the same time I helped develop Ioke (www.ioke.org) to learn about language internals.
I chose Objective-C as the implementation platform, as it's a fast (enough), simple, and rich language. It also provides a test framework, so an agile approach is a go. It also has a rich standard library I can build upon.
Since my language is simple at the syntactic level (no keywords, only literals, operators and messages), I could go with Ragel (http://www.complang.org/ragel/) for building the scanner. It's fast as hell and simple to use.
Now I have a working object model, scanner, and simple operator shuffling, plus the standard library bootstrap code. I can even run simple programs - as long as they fit in one file, that is :)
Of course older techniques are still common (e.g. using Flex and Bison), but many newer language implementations combine the lexing and parsing phases by using a parser based on a parsing expression grammar (PEG). This works for recursive descent parsers created using combinators, or for memoizing packrat parsers. Many compilers are also built using the ANTLR framework.
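A minimal sketch of what the combinator style looks like in Python (hand-rolled, not any particular PEG library; a real packrat parser would additionally memoize results keyed on (rule, position)):

```python
# Each parser is a function taking (text, pos) and returning
# (value, new_pos) on success or None on failure.

def char(c):
    """Match a single literal character."""
    def parse(text, pos):
        if pos < len(text) and text[pos] == c:
            return c, pos + 1
        return None
    return parse

def many1(p):
    """Match p one or more times, collecting the results."""
    def parse(text, pos):
        values = []
        result = p(text, pos)
        while result is not None:
            value, pos = result
            values.append(value)
            result = p(text, pos)
        return (values, pos) if values else None
    return parse

def choice(*parsers):
    """Try each parser in turn; first success wins (ordered choice, as in PEGs)."""
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

digit = choice(*[char(d) for d in "0123456789"])
number = many1(digit)

print(number("42+1", 0))   # (['4', '2'], 2)
```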
Use Bison/Flex, which are the GNU versions of yacc/lex. This book is extremely helpful.
The reason to use Bison is that it catches any conflicts in the language. I used it and it made my life many years easier (OK, so I'm on my second year, but the first six months were a few years ago, writing it in C++, and the parsing/conflicts/results were terrible! :( )
If you want to write a compiler obviously you need to read the Dragon Book ;)
Here is another good book that I have just read. It is practical and easier to understand than the Dragon Book:
http://www.amazon.co.uk/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=language+implementation+patterns&x=0&y=0
Mike --
If you're interested in an efficient native-code-generating compiler for Windows so you can get your bearings -- without wading through all the unnecessary widgets, gadgets, and other nonsense that clutter today's machines -- I recommend the Osmosian Order's Plain English development system. It includes a unique interface, a simplified file manager, a friendly text editor, a handy hexadecimal dumper, the compiler/linker (of course), and a WYSIWYG page-layout application for documentation. Written entirely in Plain English, it is a quick download (less than a megabyte), small enough to understand in short order (about 25,000 lines of Plain English code, with just 4,000 in the compiler/linker), yet powerful enough to reproduce itself on a bottom-of-the-line Dell in less than three seconds. Really: three seconds. And it's free to all who write and ask for a copy, including the source code and a rather humorous tongue-in-cheek 100-page manual. See www.osmosian.com for details on how to get a copy, or write to me directly with questions or comments: Gerry.Rzeppa#pobox.com

Known "Z notation" applications?

I was just thinking back to my university classes and wondering whether anyone out here has ever used "Z notation" in a professional environment. I honestly must say that it was the single most boring class I have ever attended in my life. Maybe it was because of the teacher, but at the time we really all thought it was a big waste of time. I might have been wrong, which is why I'd like to hear from you about it.
If you are using it or some derived language (Z++), I'd just like to know how it is useful for you. I'm just curious to know some commonly known applications of Z, or your own application.
For those who are not familiar : http://staff.washington.edu/jon/z/z-examples.html
It's worth looking at the B Method (http://en.wikipedia.org/wiki/B-Method). It's a slightly more pragmatic descendant of Z. The idea is that you can actually discharge a bunch of proof obligations through refinement steps (with the help of a theorem prover that is hiding behind the scenes) and then eventually generate code directly from your specification. I believe it has been used in a number of "real world" projects.
Z is (as you pointed out) a specification notation intended to facilitate formal verification, not a programming language.
One of the larger (publicly known) projects specified using the notation was the protocol used in the Mondex smart card platform. There was recently a revival to determine the correctness of the original manual proofs with mechanical checking by multiple teams that included verification of the original Z specifications. Not surprisingly, no new fundamental errors were detected, although a number of assumptions were shown invalid by most of the teams.
The National Security Agency Tokeneer project was specified in Z before implementation in the Spark Ada subset.
Given the expressiveness of the notation it is unlikely that it will be extended. This would also make proofs more complex and be counter-productive.
I first encountered Z notation when I read that XCB (a replacement for the original Xlib API in X11) was proven correct with Z notation.
The Web Services Description Language (WSDL) was developed using the Z notation. You can find the specification with the Z notation here: http://www.w3.org/TR/wsdl20/wsdl20-z.html. The specification states that
The Z Notation was used to improve the quality of the normative text that defines the Component Model, and to help ensure that the test suite covered all important rules implied by the Component Model.
I had to do Z back at uni! It brings back memories. If you have a Linux install handy, try the application CADiZ...
"Z++" is called Object-Z. I haven't been active in Z since the early '90s (working in part on a Windows port of CADiZ, which appears to have vanished), so I have no idea about its current community, but some more recent papers have been published on using Object-Z to formalise UML.

Why is software support for Bidirectional text (Hebrew,Arabic) so poor? [closed]

While most operating systems and web browsers have very good support for bidirectional text such as Hebrew and Arabic, most commercial and open-source software does not:
Most text editors, besides the original Notepad and the Visual Studio editor, do a very poor job (and I have tried dozens of them).
I could not find any file-compare tool doing a decent job - not even Beyond Compare.
Same thing for software and packages dealing with charting and reporting.
Some questions I have:
Do you share the same pain I do?
Is the software you write bidirectional compliant? Do you have bug reports about it?
Do you even know what are the issues involved? Do you test for them?
Any suggestions on how to make the software world a better place for bidirectional language speakers?
Do you share the same pain I do?
No. And that's probably the answer: most people have no idea how bidirectional languages work. I, for example, have some trouble working with them. Because I'm quite interested in the topic, I was reading the Pango sources a while back, and that's probably the second reason why the support sucks: it's damn hard to get right.
I think the GNOME project has some of the best support for bidirectional user interfaces, thanks to Pango (of course I can't verify that, because I wouldn't be able to spot the problems).
But because you said "open source": I think the globalization support in open source projects is generally outstanding. Linux sucks at pretty much everything, but internationalization is something they get right.
gettext is still one of the few translation systems that has a (half-baked, I know, but) working pluralization system.
Is the software you write bidirectional compliant? Do you have bug reports about it?
Probably not. I'm working on a web publishing software currently and that's one of the things I haven't tested at all so far :-(
Do you even know what are the issues involved? Do you test for them?
Bidirectional support is not on the direct roadmap, so there are no tests for it. I know where the issues are from the translation interface I wrote for Plurk.
Any suggestions on how to make the software world a better place for bidirectional language speakers?
For an open source project: ask people who know where the issues are to help you. For closed source? Hire someone who knows.
I think there are two main answers to this:
1) Most languages read left-to-right, so people either think they can get away with not having it or just don't even think about it in the first place.
2) It can be hard to support it, depending on what your project is. If your tools/libraries don't support it, your software probably won't either. And it's not just hard in a programming sense, but hard to get it right when the programmers aren't familiar with right-to-left languages. As I understand it, to really properly support bi-directional text, some things in the UI must also be flipped to look "right."
The only reason I know anything about this is because I work with a guy who speaks Arabic as his native language and I've talked to him about it a little. I still don't know much about it. Our products only pretty recently started supporting Arabic and I haven't been a part of that effort.
Simple, get more bidirectional language speakers to voice their concerns! With so few bidirectional language users around, I'd imagine that bidirectional text support is pretty low on most people's priority lists. The more bug reports you and other bidirectional language speakers file, though, the more the problem will be addressed.
If you break up a string into substrings and display them individually, you will break the OS bidi rendering; also, if you add some mostly innocent symbols (like a "-", for example), you will mess up the text display.
The two things you have to know to write bidi-compatible software are:
Always display entire strings, never try to display parts of a larger string.
Always test any formatting code with bidi text.
And if you are writing a text editor, word processor, or anything else that requires high-end typography, and you can't follow rule 1 above, then be aware that writing a bidi rendering engine is a lot of work.
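As a small aid for rule 2 above, Python's standard unicodedata module can at least tell you whether a test string actually mixes directions (the classification is standard Unicode; the reordering itself stays with the OS bidi engine, per rule 1):

```python
import unicodedata

def directions(text):
    """Return the set of Unicode bidi categories (L, R, AL, EN, ...) in a string."""
    return {unicodedata.bidirectional(ch) for ch in text if not ch.isspace()}

s = "hello שלום 123"
print(directions(s))                        # e.g. {'L', 'R', 'EN'}
print(bool({"R", "AL"} & directions(s)))    # True: the string contains RTL characters
```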
I'm left-handed, and deal with similar issues in the physical world. It's a natural part of being in the minority, that businesses primarily cater to the majority.
If you think there are problems with bidirectional text, you should check out the Turkish i problem sometime.
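For the curious, a minimal Python illustration of that problem: the default, locale-independent Unicode case mapping (which is what most software uses) disagrees with what Turkish requires.

```python
# In Turkish, the uppercase of 'i' is 'İ' (dotted) and the lowercase of
# 'I' is 'ı' (dotless); plain str.lower()/str.upper() don't know that.
print("I".lower())   # 'i'  -- correct for English, wrong for Turkish
print("İ".lower())   # 'i' followed by a combining dot above (U+0069 U+0307)
print("ı".upper())   # 'I'
```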
Anyhow, I think what will happen is either that text processing will become very standardized and the libraries will do things correctly, or you'll have to wait until the app becomes big enough to warrant adding good support.
I know RTL text in Flash is a pain in the ass - I've heard it's easier for web pages, although you've got to be careful how you process strings so they don't get mixed up.
This is an awfully subjective question, by the way, one that's impossible to find a 'solution' for - are you sure this is the right place to ask it?
I myself have been researching how to add native bidi support to Android. Results so far: it's a lot of work, and Android practically lacks real bidi support.
The issue is that the world of computers is all about the internet and sharing, especially open-source software. This means the dominant languages are the concern, and as you may note, English is effectively the standard, with other (mostly Western) languages provided as side translations.
I speak Arabic/Hebrew/English. With computers I use almost only English, with Arabic/Hebrew for local stuff (news, online TV, ...), which is handled well by web browsers. However, since I bought a Samsung Galaxy and started updating the firmware, I started noticing how big the problem is :(
A note regarding some of the answers - there are no "bidirectional languages". A language is either left-to-right or right-to-left (or top-to-bottom...). A text or a string can be bidirectional if it contains both, say, Hebrew and English.
Regarding the question, Firefox seems to work swell for me. Also MS Word, and that's pretty much everything I use Hebrew in.
Any suggestions on how to make the software world a better place for bidirectional language speakers?
Unfortunately, I don't think the situation will improve unless there are a lot more RTL-language-speakers participating in global affairs... which seems unlikely.
Currently we have Israel, which is a very technologically advanced society, but it is very small and nearly all the educated people speak English. And then there are the Arab countries and others that use Arabic script, which don't produce and consume nearly as much information as the Western world, according to studies I've seen.

Resources