What languages are used for real time systems programming? [closed] - programming-languages

I didn't find much useful information about programming languages for real-time systems. All I found was Real Time Systems and Programming Languages: Ada 95, Real-Time Java and Real-Time C/POSIX (some pdf here), which seems to discuss extensions of Java and C for real-time systems (I don't have the book to read). Also, the book was published in 2001, so the information may be obsolete by now.
So I'm unsure whether these languages are actually used in real-world applications, or whether real-time systems in the real world are written in other languages, like DSLs.
If the second option is true for you, what are the outstanding characteristics of the language you use?

I am an avionics software engineer.
I was able to participate in several development projects.
The languages I used in those projects are: C, C++, and Real-time Java.
C is great.
C++ is not so bad, but C and C++ both require strict coding standards for safety considerations such as DO-178B.
I think Real-time Java is the way to go but I don't see many avionics applications, yet.
The Korean T-50 jet trainer will have a mission computer running an RT Java application that serves the HUD and MFD displays and all of the mission-critical functions.

The Real-Time Specification for Java now has several commercial-grade implementations:
Sun's JavaRTS
IBM's WebSphere Real-Time
Aonix PERC
aicas JamaicaVM
Apogee Aphelion
These products span the continuum from compilation to native code (Aonix), to J2ME (aicas, Apogee), to full J2SE (Sun, IBM). Most, if not all, have seen deployments in small numbers of safety- or mission-critical systems, but momentum is building. Examples include Eglin AFB's space surveillance radar modernization and the US Navy's use of RTSJ in the DDG-1000/Zumwalt destroyer. Sun also claims deployments in the financial transaction processing domain.
If you are interested in RTSJ, I suggest Peter Dibble's Real-Time Platform Programming, or Professor Wellings' Concurrent and Real-Time Programming in Java.
On a related note, there is also work underway to provide a Safety-Critical profile for the Java programming language, built as a subset of RTSJ. Also, an expert group has formed to explore a Distributed RTSJ (DRTSJ), but that work is stalled.

The book covers use of Ada 95, the Java Real-Time System and realtime POSIX extensions (programmed in C). None of these is directly a domain specific language.
Ada 95 is a programming language commonly used in the late 90s and (AFAIK) still widely used for real-time programming in the defence and aerospace industries. There is at least one DSL built on top of Ada - SparkAda - a system of annotations that describe system characteristics to a program verification tool.
This interview of April 6, 2006 indicates some of the classes and virtual machine changes which make up the Java Real-Time System. It doesn't mention any domain specific language extensions. I haven't come across use of Java in real-time systems, but I haven't been looking at the sorts of systems where I'd expect to find it (I work in aerospace simulation, where it's C++, Fortran and occasionally Ada for real-time in-the-loop systems).
Realtime POSIX is a set of extensions to the POSIX operating system facilities. As OS extensions, they don't require anything specific in the language. That said, I can think of one C-based DSL for describing embedded systems - SystemC - but I've no idea if it's also used to generate the embedded systems.
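To make this concrete, here is a minimal sketch (my own, not from the book) of what the real-time POSIX extensions look like from C: a periodic task that requests a fixed-priority scheduler, locks its memory, and wakes on an absolute deadline. The 10 ms period and priority 80 are arbitrary, and error handling is reduced to the bare minimum.

```c
#include <sched.h>     /* sched_setscheduler, SCHED_FIFO */
#include <stdio.h>
#include <sys/mman.h>  /* mlockall */
#include <time.h>      /* clock_gettime, clock_nanosleep */

/* Hypothetical work function; a real system would read sensors,
   update control outputs, etc. */
static void do_control_step(void) { /* ... */ }

int main(void)
{
    /* Ask for a real-time FIFO priority (requires privileges). */
    struct sched_param sp = { .sched_priority = 80 };
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler");

    /* Lock all pages into RAM so a page fault cannot add latency. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* 10 ms periodic loop on an absolute clock, so jitter in one
       cycle does not accumulate into the next. */
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        do_control_step();
        next.tv_nsec += 10 * 1000 * 1000;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```

Note that nothing in the language itself changed; the determinism comes entirely from the scheduler and memory-locking services the OS provides.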
Not mentioned in the book is Matlab, which in the last few years has gone from a simulation tool to a model-driven development system for real-time systems.
Matlab/Simulink is, in effect, a DSL for linear programming, state machines and algorithms. Matlab can generate C or HDL for real-time and embedded systems. It's very rare to see an avionics, EW or other defence industry real-time job advertised which doesn't require some Matlab experience. (I don't work for Matlab, but it's hard to overemphasize how ubiquitous it really is in the industry.)

Real-time applications can be made in almost any language. The environment (operating system, runtime and runtime libraries) must, however, comply with real-time constraints. In most cases real-time means that there is always a deterministic time within which something happens, usually a very low value in the microseconds/milliseconds range.
Real-time systems depend solely on this criterion, as the specifications usually say something like 'every x (period of time), (do something | check something)'. Usually this applies when the system interfaces with external sensors and controls life-saving or life-threatening systems.
I was working on an in-car navigation and infotainment system developed mostly in C/C++ with an operating system configured specifically to meet the real-time constraints to provide real-time navigation and media playback.
But this is not all there is to real-time systems: usually the algorithms across the entire system are selected to have deterministic runtimes according to Big-O notation, mostly linear or constant time. Everything else is considered non-deterministic and thus not usable for real-time systems.
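To illustrate that point, here is a minimal sketch (capacity and element type are arbitrary) of the kind of structure such systems favor: a fixed-size ring buffer whose operations are O(1), allocate nothing at runtime, and whose worst case equals the average case.

```c
#include <stdbool.h>
#include <stdint.h>

#define RB_CAPACITY 64  /* fixed at compile time: no runtime allocation */

typedef struct {
    int32_t  data[RB_CAPACITY];
    uint32_t head;   /* next slot to write */
    uint32_t tail;   /* next slot to read */
    uint32_t count;
} ring_buffer;

/* Both operations execute a constant number of steps no matter how
   full the buffer is -- the worst case is the average case. */
static bool rb_push(ring_buffer *rb, int32_t value)
{
    if (rb->count == RB_CAPACITY)
        return false;                    /* full: caller decides what to drop */
    rb->data[rb->head] = value;
    rb->head = (rb->head + 1) % RB_CAPACITY;
    rb->count++;
    return true;
}

static bool rb_pop(ring_buffer *rb, int32_t *out)
{
    if (rb->count == 0)
        return false;                    /* empty */
    *out = rb->data[rb->tail];
    rb->tail = (rb->tail + 1) % RB_CAPACITY;
    rb->count--;
    return true;
}
```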

All of the real-time systems I have worked with were predominantly written in C with some bits of assembler, or written mostly in assembler with little bits of C. (Depending on whether we're talking the 90s and beyond, or the 80s, respectively.) However, some of the real-time systems I've worked with have used -- not exactly DSLs -- special homegrown code generators.

Real-time oriented language?
What is real-time
First we have to define what real-time means.
Of course, depending on how your tool interacts with the physical environment, pure real-time behavior may not be achievable, mostly because there will be a lot of third-party dependencies.
If you are building embedded devices using microcontrollers like the Arduino, the language will be limited by the hardware; with more complex hardware like the Raspberry Pi, the choice of language is very wide.
Granularity
The required granularity depends on what you are measuring. If you're working with:
weather temperatures, one reading every 10 minutes could be enough
people's height or weight, one or maybe four readings per day
server status, anywhere between 1 second for fine debugging and roughly 1 hour for a fairly unimportant secondary server
atomic collision counts: something finer...
Event based reading
The right (better) way to collect data is based on value-change events... whenever the device permits it.
Your tool should not poll values from the device; the device should push values to your tool when they change.
This could be done using a hardware interrupt trigger, or by listening on a serial port with a protocol like RS-232, for example.
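As a sketch of the difference, a POSIX tool can block on the serial port instead of polling it on a timer. The device path below is hypothetical, and real code would also configure the port with termios:

```c
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical RS-232 device node; adjust for your system. */
    int fd = open("/dev/ttyS0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    char buf[256];

    for (;;) {
        /* Block until the device actually has data: no busy polling. */
        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN)) {
            ssize_t n = read(fd, buf, sizeof buf);
            if (n > 0)
                printf("received %zd bytes\n", n);
        }
    }
}
```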
Monitoring environment
The last thing to consider is how legitimate users will interact with the system.
If you're building an embedded standalone device, like a robot, you may use graphics libraries to interact with a touch screen.
If you're building a web-based monitor, you may have to keep in mind that the client could be an old 800x600 monochrome screen, with a poor internet connection and a small processor... But depending on the final goal, if you control the clients, you could ensure strong hardware and strong internet connections. In any case, you have to watch for connection loss and for communication delays between server and client. These are mostly third-party dependencies.
Which programming language?
From there, the language choice is wide and clearly depends on:
your knowledge
the granularity required (using event-based reads too, of course)
the amount of time you have to build the tool (money ;)
delays, co-workers...
the kind of device
the kind of monitoring
some other political reasons
You could build a real-time monitoring engine using bash and SQL only; I've seen sophisticated engines built on PostgreSQL alone... I've personally built a web-based solar energy monitor using Perl, MySQL and JavaScript.

I cannot believe no one has mentioned LabVIEW, which is widely used for real-time safety-critical systems. It has extensive libraries and well-known design patterns for architecting and implementing RT systems.
National Instruments also makes various hardware (cRIO, PXI, etc.) designed for real-time applications.
We use LabVIEW for fracking (hydraulic fracturing) systems, which operate in safety-critical environments.

PLCs run ladder and FBD code, which is really a real-time DSL in the sense that your options are so limited that it is difficult to program in a way that would result in unpredictable runtime performance.

A really purposeful application of the C language to real-time programming - and all related issues (such as parallel programming) - is offered by my Kickstarter
http://www.kickstarter.com/projects/767046121/crawl-space-computing-with-connel
It is called "Wide Programming" and I've been doing it most of my life. The rewards include a software library and a book - designed to be useful.

The company I've been working for since 2003 has been developing and deploying a SCADA/MES platform. The original implementation started in 1993, using Modula-2 on OS/2. Later (1998) it was ported to Ada 95 and Windows. Currently (2019) we use the Ada compiler by AdaCore. Our system has been ported and deployed to 32/64-bit Windows, HP-UX, OpenVMS (and lately even to the Raspberry Pi). We have multiple installations in central Europe (gas industry, refineries, factories, power plants).
We feel Ada's features give our system a high degree of reliability and prevent a lot of errors that would easily occur if we used languages like C.
See also my blog
https://www.ipesoft.com/en/blog/what-language-is-the-d2000-written

Related

Are Mainframe systems replaceable? [closed]

I'm currently working as a developer in the mainframe field. Reading a lot of documentation, I understand that the real power of such systems is that they can handle many transaction (input/output) operations at the same time. They're also expected to maintain high performance.
So I was wondering: aren't modern systems capable of performing the same, or even better?
To correct some misunderstandings:
Mainframe hardware is not "old" -- it has had continuous development and undergone a refresh cycle every two or three years. The chippery involved is in some ways more advanced than x86 -- things like having a spare CPU on each chip -- and most of the differences are aimed at reliability and availability rather than raw performance.
Having said that, both manufacturers are moving the same electrons around on the same silicon, so actual per-CPU performance is much the same.
Likewise, mainframe software comes in two varieties: "ancient" and "modern". Some software, like CICS, was first developed in the 1970s, and although it is actively maintained it still contains some of the original code. Other software (IEBCOPY, we are looking at you) was developed in the 1960s, was considered terrible even then, and has been left untouched for decades.
However, z/OS also runs a fully POSIX-compliant UNIX shell in which you can run any compliant J2EE application or compile and run any C/C++ program.
While a well set up x86 environment can match the raw processing power, they fall slightly behind when it comes to reliability and availability.
The main reason why so many large corporations stick with the mainframe is the large body of bespoke software written for COBOL/CICS and PL/1-IMS environments at a time when hardware was expensive and coding efficiency was at a premium. So you could rewrite an old COBOL/CICS application in Java/J2EE, but you would need about five times the raw processing power for the new system, always assuming you could work out what business rules and logic were embedded in the older system.
There are many factors involved in choosing a platform. The fact is that existing mainframes (generally IBM z/OS systems is what is implied) have a massive amount of existing programs, business processes, disaster recovery plans, etc. that would all need to be refactored. You're talking about migrating existing applications based on runtimes that do not exist on other platforms, not to mention the massive amount of data that exists both transactionally and historically.
For instance, the Customer Information Control System (CICS) uses a specific API called CICS EXEC through which program calls, database interactions and internal programming facilities like queues are provided. All of these programs would need to be rewritten, ported and re-established by moving the programs, processes and data to new platforms. It's rewriting 50 years of a business's investment.
This is inherently risky to a business. You are disrupting existing operations, intellectual property and data to gain what? The cost of any such move is massive and risky, for what benefit? It ends up being a risk/reward calculation.
Bear in mind that there is a new legacy being built on Windows and Linux that will likely be "disrupted" in the future, and it's not likely that one would move all those applications either, for the same reasons.
As @james pointed out, mainframes are close to, if not currently, the fastest single general-purpose computing platforms out there. New hardware versions come out every two years, and software is always being added to the platform: Java, Node, etc. The platform continues to evolve.
It's a complicated subject, and not as simple as "use other technology" to perform the same or better. It's moving the programs, data and processes that is really the hard part.
"performing better" is highly unlikely because the mainframe segment is still highly relevant in business and its architectures are kept closely up-to-date with all evolutions both in hardware and [system] software.
"performing the same" is theoretically certainly possible, but it must be noted that some of the more popular mainframe architectures have significantly different hardware setups, e.g. the processors in z/OS systems are, comparatively, pathetically slow, but they delegate lots and lots of work to coprocessors, and it must also be noted that on the software side, mainframers tend to have a higher degree of "resource-awareness" than, eurhm, more "modern" developers.
That said, of course any answers to this question will necessarily be more opinion than hard-facts based, which makes it an unfortunate thing to ask here.

How the hardware platform impacts upon the choice for the programming language?

Long story short: the teacher who taught me throughout the last year has only recently left and has been replaced with a new one. This new teacher has given me an assignment that involves things (like this) that we were never previously taught. So this task has shown up on the assignment and I have no idea how to do it. I can't get hold of the teacher because he's ill and not coming in for the next few days. And even when I do ask him to explain further, he gets into a mood and makes me feel completely stupid.
Describe how the hardware platform impacts upon the choice for the programming language
Looking at my activity here on SO, you can tell that I'm into programming, I'm into developing things, and I'm into learning, so I'm not just trying to get one of you guys to do my homework for me.
Could someone here please explain how I would answer a question like this?
Some considerations below, but not a full answer by any means.
If your hardware platform is a small embedded device of some kind, then your choice of programming language is going to be directed towards the lower-level unmanaged languages - you probably won't be able to (or want to) load a managed language runtime like the Java JVM or .NET CLR. This is down to memory and storage requirements. Similarly, interpreted languages will be out of the question, as you won't have space for the interpreter.
If you're on a larger machine, it's more a question of compatibility. A managed language must run on a platform where its runtime is supported. In the case of .NET, that's Windows, or other platforms if you substitute the Microsoft CLR with the Mono runtime. In the case of Java, that's a far wider range of platforms.
This is by no means a definitive answer, but my first thought would be embedded systems. A task I perform on an embedded system, or another low-powered battery-operated computer, would need to be handled completely differently from one performed on a computer with access to mains electricity.
One simple impact.. would be battery life.
If I use wasteful algorithms on an embedded system, the battery life will be affected.
Hope that helps stir the brain juices!
Clearly, the speed and amount of memory of the device will impact the choice. The more primitive and weak the platform is, the harder it is to run code developed with very high level languages. Code written with them may just not work at all (e.g. when there isn't enough memory) or be too slow or it will require serious optimizations (i.e. incur more work), perhaps affecting negatively the feature set or quality.
Also, some languages and software may rely heavily on or benefit from the availability of page translation in the CPU. If the CPU doesn't have it, certain checks will have to be done in software instead of being done automatically in hardware, and that will affect the performance or the language/software choice.

What type of programs are C/C++ used for now? [duplicate]

Possible Duplicate: in which area is c++ mostly used?
I started off with C in school, went to Java and now I primarily use the P's (PHP, Perl, Python), so my exposure to the lower-level languages has all but disappeared. I would like to get back into it, but I can never justify using C over Perl or Python. What real-world apps are being built with these languages? Any suggestions if I want to dive back in - what can I do with C/C++ that I can't easily do with Perl/Python?
To borrow some text from the answer I had for another related question:
Device drivers in native code.
High performance floating point number crunching (i.e. SIMD).
Easy ability to interface with assembly language routines.
Manage memory manually for extended execution runs.
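On that last point, here is a minimal sketch of the manual memory management C makes natural and the P's hide from you: a fixed pool carved out once at startup, so a long-running process never touches the general-purpose heap. Block count and size are arbitrary for illustration.

```c
#include <stddef.h>

#define POOL_BLOCKS 128
#define BLOCK_SIZE  256

/* One static arena plus a free list of block pointers. After startup
   there are no malloc/free calls, hence no fragmentation and no
   allocator pauses during an extended execution run. */
static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];
static void *free_list[POOL_BLOCKS];
static size_t free_top;

void pool_init(void)
{
    for (size_t i = 0; i < POOL_BLOCKS; i++)
        free_list[i] = pool[i];
    free_top = POOL_BLOCKS;
}

void *pool_alloc(void)      /* O(1), never calls malloc */
{
    return free_top ? free_list[--free_top] : NULL;
}

void pool_free(void *p)     /* O(1); p must be a block from this pool */
{
    free_list[free_top++] = p;
}
```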
Most of my work has been C and C++. I studied computer engineering in school and worked with embedded devices. My Master's degree had an emphasis in graphics and visualization. One of our visualization apps was written in Python, but for the most part, graphics demands C/C++ for the speed. I now work with embedded devices running Windows Mobile and Windows CE - all C++, though you can do a lot with C#. I previously worked in simulations, which was all C++ code on the backend. C++ is still king for time-sensitive IO, embedded applications, graphics and simulations.
Basically, if you need tight control of timing, you go lower level. Likewise if you need something lightweight (i.e. small program size, small memory footprint).
Somewhat unscientifically, I took a look on SourceForge; the top-twenty project/language breakdown is currently thus:
Java(43,199)
C++(34,313)
PHP(28,333)
C(26,711)
C#(12,298)
Python(12,222)
JavaScript(10,307)
Perl(8,931)
Unix Shell(3,618)
Delphi/Kylix(3,353)
Visual Basic(3,044)
Visual Basic .NET(2,513)
Assembly(2,283)
JSP(1,891)
Ruby(1,731)
PL/SQL(1,669)
Objective C(1,424)
ASP.NET(1,344)
Tcl(1,241)
ActionScript(1,164)
Perl + Python together still total less than C alone. I have no idea why Java is so high; I don't know a single Java developer and have not seen a single Java project, but I am sure someone is using it! Probably for the same reason, you are not seeing much C/C++: you are just not working in a domain where it figures highly. I work in embedded systems, where C and C++ are ubiquitous and Python comes nowhere. Different languages are encountered to different extents in different worlds.
You ask what you can do with C/C++ that you cannot do easily with Perl/Python; well, the answer is plenty - real-time embedded systems, for one - but if that is not what you want or need to do, then there is no reason to. On the other hand, I might ask the reverse: I'd use C++ for things you might use Python for, simply because for me it would be easier and quicker (than learning a new language and getting the tools working).
C/C++ can be, and is, used for nearly all "types" of programs.
There are some major advantages to C and C++:
Potentially better performance
Easier to build interoperable libraries, especially if working with libraries usable from multiple languages.
Well, the interpreters for your "P" languages are most certainly written in C/C++. Most OS code is written in C/C++. On the application side, if you are into games, they are generally written in C/C++. Anything that needs high performance and/or low memory is a good candidate.
I've used gSOAP, a C++ SOAP client implementation, for a web service that got HUGE traffic.
Most desktop/console applications with a bias toward graphics rely heavily on C++. This includes CAD software and AAA video games, among other things.

What languages should a microISV use to write commercial software? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 1 year ago.
Improve this question
I've been writing software in Java for many years now, but it was always for internal applications that would be deployed to a server. I'd like to get into writing desktop applications now but I don't know where to start. I've written a few Java/Swing applications but again they were for internal use.
My understanding is that Java and other semi-compiled and interpreted languages are too easy to reverse engineer, making them unsuitable for commercial software. I am aware that there are compilers for Java and some other interpreted languages, but I've also heard that they are pricey and/or unreliable.
Assuming I start a microISV and wish to develop and sell applications to a broad audience, what's my best bet? I would prefer something that can be written close to once, and compiled for different operating systems but I am not opposed to .NET and a Windows-only audience if other languages would compromise the experience (installation ease & user experience) in Windows. My only issue there is that I don't have a large starting budget and paying out the wazoo for the required development tools is not really in the cards.
Why would people want to reverse engineer your software? They might pirate it, but you can't prevent pirating no matter what language you use. I doubt you have a top-secret algorithm that you're trying to hide either, in which case reverse-engineering might be an issue.
You should go with whatever you know best, and Java can work just fine.
If you are intent on switching to another language, I recommend taking a look at Qt. Qt is a free and open-source cross-platform toolkit for C++ that allows you to write applications that will compile and run on Windows, Mac, and Linux with minimal effort. You CAN write commercial software for free with Qt under its LGPL license.
Edit: GCJ compiles Java to native code, but only supports Java 1.4.
Well, if you're trying to be an Independent Service Vendor -- and not a Software Vendor -- then in a sense it doesn't matter if you use a language like Java which can be decompiled. Because you'd be selling yourself as the best person to integrate and customize the software for your clients. The software is the delivery mechanism for the thing that will actually make you the money: you and your skills. Plenty of companies make a profit by giving away their software for free and contracting their services to set it up for their clients. You can mitigate the Java decompiling issue somewhat by using an obfuscator, but it's kind of fighting the wrong battle.
If you intend to make your money selling software and not services, then Java would be a relatively risky route to take.
It all depends on your business plan.
If you are starting a one-man company, then you are selling your personal expertise. So the language you use must be the one (or maybe two) that you are most familiar with and expert in. I'm surprised you felt it necessary to ask this.
Any code can be decompiled to some degree. I think you can obfuscate Java to a degree that will deter the casual user... but I think the other people hit the nail on the head. Of all the reasons not to use Java, the ease of decompiling should be very, very low on your list. If that is all that is stopping you, go for it! Google "Java obfuscator" and you will find something.
I'm skeptical about the risk of reverse engineering a complex piece of software written in Java, but for purposes of your question I'm willing to stipulate it. I assume the same issues rule out any other language that is implemented only on the JVM.
The most salient aspects of Java are
Static type system
Class-based object system
Automatic memory management
No freestanding functions or modules outside the class/interface system
Generics
This combination could be replicated in a language like C#, but I assume the same objections you have about distributing JVM bytecode also apply to MSIL bytecodes.
I'm having a hard time coming up with a language that has all these features. Here are some nearby languages:
C++ has everything except automatic memory management, plus it allows freestanding functions. However the C++ generic mechanism (templates) is not for the faint of heart, and it doesn't (yet) support modular typechecking. Lots more flexibility than Java but also lots more ways to shoot your foot off. Use with caution.
Modula-3 has all of the above but it's essentially a dead language, plus like C++ there's no modular type checking for the generics.
I'm not familiar enough with Eiffel to be able to make good comparisons, but I think it's worth looking into.
Delphi may also be worth looking into. It seems to have everything above except generics. It's primarily a proprietary Windows environment (formerly known as Object Pascal), but there seems to exist an open-source 'Free Pascal' compiler that supports Delphi.
There are many object-oriented languages with automatic memory management and dynamic typing, among which one might highlight Ruby, Python, and Smalltalk. None of these really compiles well and reliably to standalone native machine code, although all push toward some form of experimental compilation. And they are all dynamically typed, which is quite different from what you're used to.
If I were in your position I would probably go ahead and use Java and accept some risk of reverse engineering. Decompilers aren't as wonderful as you might think, and they don't produce wildly maintainable code, either. But if you really want to be able to produce native machine code, I would investigate Delphi and Eiffel. (I myself would use Modula-3, but that's because I once invested substantial effort in learning it. It's a very well designed language for its niche, but the user community is about gone and I think it's a dead letter. Pity.)

Machine dependent languages

Why might a machine-dependent language be more appropriate for writing certain types of programs? What types of programs would be appropriate?
Why might a machine-dependent language be more appropriate for writing certain types of programs?
Speed
Some machines have special instruction sets (like MMX or SSE on x86, for example) that allow you to 'exploit' the architecture in ways that compilers may or may not utilize well (or not utilize at all). If speed is critical (as in video games or data-crunching programs), then you'd want to get the best out of the architecture you're on.
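As a small illustration of the point about instruction sets, here is how SSE looks from C via intrinsics: four float additions issued per instruction. This is a sketch; for brevity it assumes the array length is a multiple of four.

```c
#include <xmmintrin.h>  /* SSE intrinsics, x86 only */

/* Add two float arrays four elements at a time.
   n is assumed to be a multiple of 4 for brevity. */
void add_arrays(float *dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);            /* load 4 floats */
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb)); /* 4 adds at once */
    }
}
```

A compiler may auto-vectorize the plain-C equivalent, or it may not; writing the intrinsics by hand is exactly the machine-dependent choice described above.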
Where Portability is Useless
When coding a program for a specific device (take the iPhone or the Nintendo DS as examples), portability is the least of your concerns. This code will most likely never go to another platform as it's specifically designed for that architecture/hardware combination.
Developer Ignorance and/or Market Demand
Computer video games are a prime example - Windows is the dominant computer game OS, so why target others? It lets the developers focus on known variables for speed/size/ease-of-use. Some developers are ignorant - they learn to code only on one platform (such as .NET) and 'forget' that other platforms exist because they don't know about them. They seem to take an approach similar to "It works on my machine, why should I bother porting it to a bizarre combination that I will never use?"
No other choice.
I will take the iPhone again, as it is a very good example. While you can program for it in C or C++, you cannot access any of the UI widgets that are linked against the Objective-C runtime. You have no choice but to code in Objective-C if you want to access any of those widgets.
What types of programs would be appropriate?
Embedded systems
All of the above apply - when you're coding for an embedded system, you want to take advantage of the full potential of the hardware you're working on, be it memory management (such as the CP15 on ARM9) or even obscure hardware that is only attached to the target device (servo motors, special sensors, etc.).
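For instance, driving device-specific hardware usually comes down to memory-mapped registers, which C expresses directly with volatile pointers. The register addresses below are invented for illustration; on real hardware they come from the chip's datasheet.

```c
#include <stdint.h>

/* Hypothetical registers of an imaginary servo controller. */
#define SERVO_CTRL  (*(volatile uint32_t *)0x40021000u)
#define SERVO_DUTY  (*(volatile uint32_t *)0x40021004u)

void servo_set(uint32_t duty_percent)
{
    SERVO_DUTY = duty_percent;  /* volatile: the write is not optimized away */
    SERVO_CTRL |= 1u;           /* set the enable bit */
}
```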
The best example I can think of is for small embedded devices. When you have to have full control over every detail of optimization due to extremely limited computing power (only a few kilobytes of RAM, for example), you might want to drop down to the assembler level yourself to make everything work perfectly in those small confines.
On the other hand, compilers have gotten sophisticated enough these days where you really don't need to drop below C for most situations, including embedded devices and microcontrollers. The situations are pretty rare when this is necessary.
Consider virtually any graphics engine. Since your run-of-the-mill general-purpose CPU cannot perform operations in parallel, you would need a bare minimum of one cycle per pixel to be modified.
However, since modern GPUs can operate on many pixels (or other piece of data) all at the same time, the same operation can be finished much more quickly. GPUs are very well-suited for embarrassingly parallel problems.
Granted, we have high-level-language APIs to control our video cards nowadays, but as you get "closer to the metal", the raw language used to control a GPU is a different animal from the language to control a general purpose CPU, due to the vast difference in architectures.
