I am planning to develop a CAD application. I am a bit confused about which language to choose for the development. My main focus is on application performance and quality. Right now I have two choices: Qt and VC++.
Which of the two is better? Is there any other language I could use? Is there any CAD development idea/guide that would help?
Thanks
If you want to develop CAD software, you first need a geometric kernel (unless you intend to write one yourself...). Most of them are written in C or C++.
The best-known options are:
OpenCascade (FOSS)
Parasolid (Proprietary)
Direct integration with an existing CAD system (Pro/E, CATIA, SolidWorks, NX, ...)
Once you have a geometric kernel, you can start developing a front-end for your application. Qt would be the better option here, since it is a well-known cross-platform framework.
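To give a feel for what a geometric kernel provides, here is a minimal, hedged C++ sketch against Open CASCADE's API (assuming OCCT is installed and linked; the box dimensions are arbitrary): it builds a solid box and exports it as a STEP file. A Qt front-end would then sit on top of operations like these to display and manipulate the shapes.

    // Minimal Open CASCADE sketch: build a box and export it to STEP.
    // Assumes Open CASCADE Technology (OCCT) headers and libraries are available.
    #include <BRepPrimAPI_MakeBox.hxx>
    #include <STEPControl_Writer.hxx>
    #include <TopoDS_Shape.hxx>

    int main()
    {
        // Create a 100 x 60 x 40 solid box (units are whatever your model uses).
        TopoDS_Shape box = BRepPrimAPI_MakeBox(100.0, 60.0, 40.0).Shape();

        // Export the shape to a STEP file for exchange with other CAD systems.
        STEPControl_Writer writer;
        writer.Transfer(box, STEPControl_AsIs);
        writer.Write("box.step");

        return 0;
    }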
You could use an open-source development framework. pythonOCC provides such a framework for Python. From the website:
pythonOCC is a 3D CAD/CAE/PLM development framework for the Python programming language. It provides features such as advanced topological and geometrical operations, data exchange (STEP, IGES, STL import/export), 2D and 3D meshing, rigid body simulation, parametric modeling.
pythonOCC is based on Open CASCADE, a software development framework written in C++.
Can someone provide me with a list of leading binary analysis tools for the Windows OS and Windows applications? I found BinScope from Microsoft itself, but I was wondering if there are any better tools around.
Thanks,
Omer
If you only have access to the binary, your options are limited. If you want to peer into the inner workings of the binary, your best bet is a disassembler like IDA Pro and an assembly-level debugger like OllyDbg.
Tom Reps, a professor at the University of Wisconsin and founder of GrammaTech, gave an impressive talk on this at Stanford last summer. GrammaTech is working on binary analysis (http://www.grammatech.com/research/contracts/HSARPA/HSARPA-2005-MCSB/), but I don't know whether it's available in their static analysis product yet.
Disclaimer: One of their VP's bought me lunch and got me to try a demo of their source code analysis tool while I was at Palm (before the binary analysis talk), but I think the results are confidential.
BAP is a toolkit for performing binary analysis on x86 programs. It lifts binary code to an easily understandable and analyzable language similar to compiler intermediate languages. It's not a point and click solution (i.e., programming is required to use it effectively), but it can be useful for people who want to write new program analyses on binary code without redefining the semantics of x86.
I started off with C in school, moved to Java, and now I primarily use the P's (PHP, Perl, Python), so my exposure to the lower-level languages has all but disappeared. I would like to get back into it, but I can never justify using C over Perl or Python. What real-world apps are being built with these languages? Any suggestions if I want to dive back in? What can I do with C/C++ that I can't easily do with Perl/Python?
To borrow some text from the answer I had for another related question:
Device drivers in native code.
High-performance floating-point number crunching (e.g. SIMD; a small sketch follows this list).
Easy interfacing with assembly-language routines.
Manual memory management for extended execution runs.
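To illustrate the number-crunching point, here is a minimal, hedged C++ sketch using SSE intrinsics (assuming an x86/x86-64 target with SSE available); it adds two float arrays four lanes at a time, the kind of thing that is hard to express in Perl or Python without dropping into a C extension:

    // Add two float arrays four lanes at a time with SSE intrinsics.
    // Assumes an x86/x86-64 target; compile with e.g. -msse on GCC/Clang.
    #include <xmmintrin.h>
    #include <cstdio>

    int main()
    {
        alignas(16) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        alignas(16) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        alignas(16) float out[8];

        for (int i = 0; i < 8; i += 4) {
            __m128 va = _mm_load_ps(&a[i]);              // load 4 floats
            __m128 vb = _mm_load_ps(&b[i]);
            _mm_store_ps(&out[i], _mm_add_ps(va, vb));   // 4 additions in one instruction
        }

        for (float v : out) std::printf("%.1f ", v);     // prints 9.0 eight times
        std::printf("\n");
        return 0;
    }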
Most of my work has been C and C++. I studied computer engineering in school and worked with embedded devices. My Master's degree had an emphasis on graphics and visualization. One of our visualization apps was written in Python, but for the most part graphics demands C/C++ for the speed. I now work with embedded devices running Windows Mobile and Windows CE, all C++, though you can do a lot with C#. I previously worked in simulations, which were all C++ code on the backend. C++ is still king for time-sensitive IO, embedded applications, graphics and simulations.
Basically, if you need tight control of timing, you go lower level; likewise if you need something lightweight (i.e., small program size and small memory footprint).
Somewhat unscientifically, I took a look at SourceForge, and the top-twenty project/language breakdown is currently as follows:
Java (43,199)
C++ (34,313)
PHP (28,333)
C (26,711)
C# (12,298)
Python (12,222)
JavaScript (10,307)
Perl (8,931)
Unix Shell (3,618)
Delphi/Kylix (3,353)
Visual Basic (3,044)
Visual Basic .NET (2,513)
Assembly (2,283)
JSP (1,891)
Ruby (1,731)
PL/SQL (1,669)
Objective C (1,424)
ASP.NET (1,344)
Tcl (1,241)
ActionScript (1,164)
Perl and Python together still total less than C alone. I have no idea why Java is so high; I know not a single Java developer and have not seen a single Java project, but I am sure someone is using it! Probably for the same reason, you are not seeing much C/C++: you are just not working in a domain where it figures highly. I work in embedded systems, where C and C++ are ubiquitous and Python appears nowhere. Different languages are encountered to different extents in different worlds.
You ask what you can do with C/C++ that you cannot easily do with Perl/Python; well, the answer is plenty: real-time embedded systems, for one. But if that is not what you want or need to do, then there is no reason to. On the other hand, I might ask the reverse: I'd use C++ for things you might use Python for, simply because for me it would be easier and quicker than learning a new language and getting the tools working.
C/C++ can be, and is, used for nearly all "types" of programs.
There are some major advantages to C and C++:
Potentially better performance
Easier to build interoperable libraries, especially ones that must be usable from multiple languages (see the sketch below).
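As a hedged illustration of the interoperability point, here is a tiny C++ sketch (the file and function names are invented for the example): compiled into a shared library, the C-linkage symbol can be called from Python via ctypes, from Perl via FFI modules, from C#, and so on.

    // mathlib.cpp -- build as a shared library, e.g.:
    //   g++ -shared -fPIC -o libmathlib.so mathlib.cpp
    // The extern "C" linkage gives the symbol a stable, unmangled name,
    // which is what makes it callable from Python, Perl, C#, etc.
    extern "C" double dot_product(const double* a, const double* b, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

From Python, for instance, such a library could be loaded with ctypes and called directly, which is exactly how many of the "P" languages reuse C/C++ code.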
Well, the interpreters for your "P's" languages are almost certainly written in C/C++. Most OS code is written in C/C++. On the application side, if you are into games, they are generally written in C/C++. Anything that needs high performance and/or low memory usage is a good candidate.
I've used gSOAP, a C++ SOAP client implementation, for a web service that got HUGE traffic.
Most desktop/console applications with a bias toward graphics rely heavily on C++. This includes CAD software and AAA video games, among other things.
I'm interested in creating a visual programming language that can help non-programmers (like children) write simple programs, much as LabVIEW or Simulink allows engineers to connect functional blocks together without knowing how they are built internally. Is this called programming by demonstration? What are example applications?
What would be an ideal platform that would allow me to do this (it can be a desktop or a web app)?
Check out Google Blockly. Blockly allows a developer to create their own blocks, translations (generators) to virtually any programming language (or even JSON/XML) and includes a graphical interface to allow end users to create their own programs.
Brief summary:
Blockly was influenced by App Inventor, which itself was based on Scratch
App Inventor now uses Blockly (?!)
So does the BBC micro:bit
Blockly itself runs in a browser (typically), using JavaScript
Focused on (visual) language developers
Language-independent blocks and generators
Includes a Block Factory, which lets you use visual programming to create new blocks (?!); I didn't find this useful myself, except for understanding
Includes generators to map blocks to JavaScript/Python
e.g. example blocks alongside the code Blockly generates from them (images omitted here)
See https://developers.google.com/blockly/about/showcase for more details
Best wishes - Andy
The adventure on which you are about to embark is the design and implementation of a visual programming language. I don't know of any good textbooks in this area, but there are an IEEE conference and refereed journal devoted to this field. Margaret Burnett of Oregon State University, who is a highly regarded authority, has assembled a bibliography on visual programming languages; I suggest you start there.
You might consider writing to Professor Burnett for advice. If you do, I hope you will report the results back here.
There is Scratch, developed at MIT, which is much like what you are looking for.
http://scratch.mit.edu/
A restricted form of programming is dataflow (a.k.a. flow-based) programming, where the application is built from components by connecting their ports. Depending on the platform and purpose, the components are simple (like a path selector) or complex (like an image transformer). There are several dataflow systems (I've built two myself); some of them have no visual editor, some are just part of a bigger system, and some don't even mention the approach. (Have you noticed that make, MS Excel and Unix shell pipes are some kind of dataflow programming?)
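To make the port-wiring idea concrete, here is a minimal, hedged C++ sketch of the dataflow style (every name is invented for the example; it does not correspond to any particular framework): components expose output ports, ports are connected to downstream inputs, and values simply flow through the graph.

    // Toy dataflow sketch: components connected by ports.
    // Names and structure are illustrative only, not any particular framework.
    #include <functional>
    #include <iostream>
    #include <utility>
    #include <vector>

    // An output port fans values out to whatever inputs are connected to it.
    struct OutPort {
        std::vector<std::function<void(double)>> sinks;
        void connect(std::function<void(double)> sink) { sinks.push_back(std::move(sink)); }
        void send(double v) { for (auto& s : sinks) s(v); }
    };

    // A component that scales every incoming value.
    struct Scale {
        double factor;
        OutPort out;
        void in(double v) { out.send(v * factor); }
    };

    // A component that prints whatever it receives.
    struct Printer {
        void in(double v) { std::cout << "value: " << v << "\n"; }
    };

    int main() {
        Scale scale{2.5};
        Printer printer;

        // Wire the graph: source -> scale -> printer.
        OutPort source;
        source.connect([&](double v) { scale.in(v); });
        scale.out.connect([&](double v) { printer.in(v); });

        // Push some values through the network.
        double inputs[] = {1.0, 2.0, 3.0};
        for (double v : inputs) source.send(v);
        return 0;
    }

A visual editor for such a language is essentially a front-end that draws the components and edits these connections.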
All modern digital synths are based on the dataflow approach; there's an amazing visual example here: http://www.youtube.com/watch?v=0h-RhyopUmc
AFAIK, there's no dataflow system built specifically for educational purposes. For more information, you should check this site: http://flowbased.org/start
There is a new open source library out there: TUM.CMS.VPLControl. Get it here. This library may serve as a basis for your purposes.
There is Snap!, developed at UC Berkeley. It is another option for understanding VPLs.
Take a look at CoSpaces Edu. It is an online platform that enables the creation of virtual worlds and learning experiences while providing a more flexible approach to the learning curriculum.
It includes a visual coding language named "CoBlocks".
Learners can animate and code their creations with "CoBlocks" before exploring and sharing them in mobile VR.
It is also possible to use JavaScript or TypeScript.
If you want to go ahead with this, the platform that I suggest is the one used to implement Scratch (which already does what you want, IMHO), which is Squeak Smalltalk. The Squeak environment was designed with visual programming explicitly in mind. It's free, and Smalltalk syntax can be learned in half an hour. Learning the gigantic class library may take just a little longer.
The blocks editor with the most support and development for the micro:bit is Microsoft MakeCode.
Scratch is a horrible language for teaching programming (I'm biased, but check out the Pipes Visual Programming Language).
What you seem to want to do sounds a lot like function block programming (as in the IEC 61499 function block language and other VPLs for mechatronics development). There is already a lot of research into VPLs, so you might want to make sure that A) what you are trying to do has an audience and B) what you are trying to do can be done easily.
It sounds a bit negative in tone, but a good place to start to test the plausibility of your idea is by reading Davor Babic's short blog post at http://blog.davor.se/blog/2012/09/09/Visual-programming/
As far as what platform to use: you could use pretty much anything; just make sure it has good graphics libraries (you could use Java with Swing, if you like pain, or Python with Tkinter). It just depends on what you are familiar with. Keep in mind who you eventually want to launch the language to (if it's iOS, then look at using Objective-C, etc.).
I am from a .NET/C# background and I want to learn DirectX. I have knowledge of C++, but I am fairly new to the graphics world.
I am a little confused about how to start learning DirectX: should I start learning DirectX directly, or buy a basic graphics book like Hearn and Baker and then jump to DirectX?
Which is the recommended book for learning basic graphics concepts? Is it Hearn and Baker? Is there any DirectX book that covers graphics concepts as well?
I think that keeping a basic graphics book around is always good, because I can use it as a reference anytime.
Any suggestions from experts here?
You say that you have a C# background, so I am going to assume you are more comfortable with C# than C++. Also, you say that you have knowledge of C++, so I will assume that you already have an understanding of memory management.
If you just want to learn and become more comfortable with the graphics pipeline you should check out SlimDX and XNA. They both allow you to use DirectX without having to dive into C/C++.
As for whether to learn the theory or the API first, I don't think you should do either one first. It makes sense to learn them in parallel: pick up a book on the theory, but mess around with an API at the same time.
I highly recommend XNA. People commonly say that you should stick with C++ if you want to develop games, but I strongly disagree. XNA will allow you to learn high-level game concepts in less time than if you used C++ and DirectX alone. You will be able to focus on learning why you are doing something rather than on how to manage the memory. If in the future you decide that game development is a serious passion, then by all means C++ is the way to go. You will find that XNA's graphics pipeline closely mirrors DirectX 9, and you won't have much trouble moving to C++.
Also, DirectX 9 should be good enough for any beginner, and it will give you a better understanding of how and why things have changed in 10 and 11. However, if you really want bleeding-edge technology, you can try out SlimDX, which is a C# wrapper for DirectX.
With all this said, XNA offers many easy-to-understand samples that you can start playing with on their educational catalog page. Also, check out Ziggyware (a great collection of XNA tutorials).
Also, there are many blogs you can check out. A lot of them have excellent tutorials on them. Here are some off the top of my head:
Reimer Grootjans
Shawn Hargreaves
Richard Dodsworth
Renaud Bédard
Nick Gravelyn
Finally, here are two graphics books that I highly recommend (they are pretty complex and will last you a long time):
Fundamentals of Computer Graphics
Real-Time Rendering
They are not directly related to DirectX; rather, they cover the theory every graphics developer should know (from linear algebra to texture mapping to volumetric rendering...).
Well, I have to disagree with the C# option. If you don't have a deadline to finish the game, then I recommend using the language that teaches you the most. Working with 3D graphics is a LOT about management, so if you avoid that part you are not actually learning it, just using it; i.e., you have to manage not only memory but also the actual render calls you make and the device state changes, a lot of things you will never learn about by avoiding the lower level, and which apply to other APIs such as OpenGL and to other kinds of devices too. I think the best way to learn how the API works is by using the API, instead of a bunch of helper libraries. You can use the helper libraries later, when you really need them (and they are available in C++ versions anyway).
In the DirectX SDK you can also find the Sample Browser, with sample applications and their documentation, and the DirectX Utility Toolkit (DXUT), which contains a framework and libraries for building a DirectX app without having to worry much about nasty device details such as enumeration and configuration. It also comes with a GUI system and a settings dialog for device configuration. I doubt you can find those in C#, and they are very good if you want to start with DX.
Some resources that helped me when I started were:
the ZophusX tutorials
the book "Introduction to 3D Game Programming with DirectX 9.0c" by Frank D. Luna (there is a DX10 version now)
and probably the book "3D Game Engine Programming" by Stephan Zerbst, which also helped me understand how to work better with the APIs. You may have to buy the books in order to read them, but they are helpful for starting with both some theory and the API at the same time.
I think that if your goal is to learn how to make a game, then you can use any language/library you want; you don't even need to know a programming language :) But if your goal is to learn DirectX and graphics APIs, you should definitely start with the C++ API, which is the "actual" DX.
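To show what that lower-level device management looks like in practice, here is a minimal, hedged Direct3D 9 sketch in C++ (window creation is omitted; it assumes a valid HWND and that d3d9.lib is linked): it creates the device and runs the clear/begin/end/present cycle that every frame goes through, exactly the kind of plumbing XNA hides from you.

    // Minimal Direct3D 9 setup and per-frame render body (error handling trimmed).
    // Assumes a valid HWND 'hWnd' from your window-creation code and d3d9.lib linked.
    #include <windows.h>
    #include <d3d9.h>

    IDirect3D9*       d3d    = nullptr;
    IDirect3DDevice9* device = nullptr;

    bool InitD3D(HWND hWnd)
    {
        d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return false;

        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed         = TRUE;
        pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
        pp.BackBufferFormat = D3DFMT_UNKNOWN;   // use the current desktop format

        // Device creation and state management: this is what the wrappers hide.
        return SUCCEEDED(d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                                           D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                           &pp, &device));
    }

    void RenderFrame()
    {
        device->Clear(0, nullptr, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 40, 100), 1.0f, 0);
        device->BeginScene();
        // ... issue draw calls here ...
        device->EndScene();
        device->Present(nullptr, nullptr, nullptr, nullptr);
    }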
If you have a bit of extra money, I was very impressed with the DirectX graphics courses from http://www.gameinstitute.com. The textbook they provide is very good compared with the other DirectX books I've seen. The first module, DirectX Graphics I, starts off with a bit of a math review and some 3D fundamentals before diving into setting up and using DirectX. By the end of the first module you will have built a textured terrain renderer and an indoor scene.
Overall the courses are not that expensive when you consider how much books on the subject cost. I would definitely recommend checking it out!
I didn't find any useful information about programming languages for real-time systems. All I found was Real-Time Systems and Programming Languages: Ada 95, Real-Time Java and Real-Time C/POSIX (some pdf here), which seems to talk about extensions of Java and C for real-time systems (I don't have the book to read). Also, the book was published in 2001, so the information may be obsolete by now.
So I'm wondering whether these languages are actually used in real-world applications, or whether real-time systems in the real world are written in other languages, like DSLs.
If the second option is true for you, what are the outstanding characteristics of the language you use?
I am an avionics software engineer.
I was able to participate in several development projects.
The languages I used in those projects are: C, C++, and Real-time Java.
C is great.
C++ is not so bad, but C/C++ require strict coding standards for safety considerations such as DO-178B.
I think Real-Time Java is the way to go, but I don't see many avionics applications yet.
The Korean jet trainer T-50 will have a mission computer running an RT Java application serving the HUD and MFD displays and all of the mission-critical functions.
The Real-Time Specification for Java now has several commercial-grade implementations:
Sun's JavaRTS
IBM's WebSphere Real-Time
Aonix PERC
aicas JamaicaVM
Apogee Aphelion
These products span the continuum from compilation to native code (Aonix) to J2ME (aicas, apogee), to full J2SE (Sun, IBM). Most, if not all, have seen deployments in small numbers of safety- or mission-critical systems, but momentum is building. Examples include Eglin AFB's space surveillance radar modernization and the US Navy's use of RTSJ in the DDG-1000/Zumwalt destroyer. Sun also claims deployment in the financial transaction processing domain.
If you are interested in RTSJ, I suggest Peter Dibble's Real-Time Platform Programming, or Professor Wellings' Concurrent and Real-Time Programming in Java.
On a related note, there is also work underway to provide a Safety-Critical profile for the Java programming language, built as a subset of RTSJ. Also, an expert group has formed to explore a Distributed RTSJ (DRTSJ), but the work is stalled.
The book covers the use of Ada 95, the Java Real-Time System and the real-time POSIX extensions (programmed in C). None of these is, strictly speaking, a domain-specific language.
Ada 95 is a programming language that was commonly used in the late 90s and (AFAIK) is still widely used for real-time programming in the defence and aerospace industries. There is at least one DSL built on top of Ada, SparkAda, which is a system of annotations that describe system characteristics to a program verification tool.
This interview of April 6, 2006 indicates some of the classes and virtual machine changes which make up the Java Real-Time System. It doesn't mention any domain specific language extensions. I haven't come across use of Java in real-time systems, but I haven't been looking at the sorts of systems where I'd expect to find it (I work in aerospace simulation, where it's C++, Fortran and occasionally Ada for real-time in-the-loop systems).
Real-time POSIX is a set of extensions to the POSIX operating system facilities. As OS extensions, they don't require anything specific from the language. That said, I can think of one C++-based DSL for describing embedded systems, SystemC, but I've no idea whether it's also used to generate the embedded systems.
Not mentioned in the book is MATLAB, which in the last few years has gone from being a simulation tool to a model-driven development system for real-time systems.
MATLAB/Simulink is, in effect, a DSL for linear systems, state machines and algorithms. MATLAB can generate C or HDL for real-time and embedded systems. It's very rare to see an avionics, EW or other defence-industry real-time job advertised that doesn't require some MATLAB experience. (I don't work for MathWorks, but it's hard to overemphasise how ubiquitous MATLAB really is in the industry.)
Real-time applications can be written in almost any language. The environment (operating system, runtime and runtime libraries) must, however, comply with real-time constraints. In most cases real-time means that there is always a deterministic time within which something happens, the deterministic time usually being a very small value in the microseconds/milliseconds range.
Real-time systems depend solely on this criterion, as the specifications usually say something like 'every x (period of time), (do something | check something)'. Usually this applies when the system interfaces with external sensors and controls life-saving or life-critical systems.
I was working on an in-car navigation and infotainment system developed mostly in C/C++ with an operating system configured specifically to meet the real-time constraints to provide real-time navigation and media playback.
But that is not all there is to real-time systems: usually the algorithms throughout the system are selected to have deterministic runtimes in terms of Big-O notation, mostly linear or constant time. Anything else is considered non-deterministic and thus not usable for real-time systems.
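As a rough illustration of the 'every x period of time, do something' pattern, here is a hedged C++ sketch of a fixed-period loop on a POSIX system (the 10 ms period and the task stubs are invented for the example; a real deployment would also set scheduling priorities, lock memory, and run on an RTOS or a PREEMPT_RT kernel):

    // Fixed-period loop using an absolute deadline so that jitter does not accumulate.
    // POSIX-specific: clock_gettime / clock_nanosleep with TIMER_ABSTIME.
    #include <time.h>

    // Hypothetical task stubs for the example.
    void read_sensors()   { /* e.g. sample an ADC */ }
    void update_control() { /* e.g. drive an actuator */ }

    int main()
    {
        const long period_ns = 10000000L;   // 10 ms control period (made up for the sketch)

        timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);

        for (;;) {
            read_sensors();      // must complete well within the period
            update_control();

            // Advance the absolute wake-up time by exactly one period.
            next.tv_nsec += period_ns;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec  += 1;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
        }
    }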
All of the real-time systems I have worked with were predominantly written in C with some bits of assembler, or written mostly in assembler with little bits of C. (Depending on whether we're talking the 90s and beyond, or the 80s, respectively.) However, some of the real-time systems I've worked with have used -- not exactly DSLs -- special homegrown code generators.
Real-time oriented language?
What is real-time?
First we have to define what real-time means.
Of course, depending on how your tool interacts with the physical environment, pure real-time may not be effectively achievable, mostly because there will be a lot of third-party dependencies.
If you are building embedded devices using microcontrollers like an Arduino, the language will be limited by the hardware, but with more complex hardware like a Raspberry Pi, the choice of language is very wide.
Granularity
This depends on what you are measuring. If you're working with:
weather temperatures, one reading every 10 minutes could be enough
people's height or weight, one or maybe four readings per day
server status, anything between 1 second for fine debugging and approximately 1 hour for a fairly unimportant secondary server
atomic collision counts, something much finer...
Event-based reading
The right (better) way of collecting data is based on value-change events... whenever the device permits it.
Your tool should not poll values from the device; rather, the device should send values to your tool when they change.
This can be done by using a hardware interrupt trigger, or by using a port protocol like RS-232 and listening on a serial port, for example.
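As a hedged sketch of that listening approach on a POSIX system (the device path /dev/ttyUSB0 and the 9600 baud rate are assumptions for the example), the tool simply blocks until the serial device transmits something, instead of polling it on a timer:

    // Block on a serial port and react only when the device sends data (POSIX).
    // /dev/ttyUSB0 and 9600 baud are example values; adjust for your hardware.
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
        if (fd < 0) { std::perror("open"); return 1; }

        termios tty{};
        tcgetattr(fd, &tty);
        cfsetispeed(&tty, B9600);          // 9600 baud, an assumption for the sketch
        tty.c_cflag |= (CLOCAL | CREAD);   // ignore modem control lines, enable receiver
        tcsetattr(fd, TCSANOW, &tty);

        char buf[128];
        for (;;) {
            // read() blocks until the device transmits, so no polling loop is needed.
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0) break;
            std::fwrite(buf, 1, static_cast<size_t>(n), stdout);  // hand new values to the tool
        }

        close(fd);
        return 0;
    }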
Monitoring environment
The last thing to consider is how a legitimate user will interact with the tool.
If you're building a standalone embedded device, like a robot, you may use graphics libraries to interact with a touch screen.
If you're building a web-based monitor, you may have to keep in mind that the client could be an old 800x600 monochrome screen with a poor internet connection and a slow processor... But depending on the final goal, if you can influence the clients, you could require strong hardware and strong internet connections. In any case you have to watch for lost connections and for communication delays between server and client. These are mostly third-party dependencies.
Which programming language?
From there, the choice of language is wide and clearly depends on:
your knowledge
the granularity requested (using event-based reading too, of course)
the amount of time you have to build the tool (money ;)
delays, co-workers...
the kind of device
the kind of monitoring
some other political reasons
You could build a real-time monitoring engine using bash and SQL only; I've seen sophisticated engines built on PostgreSQL alone... I've personally built a web-based solar-energy monitor using Perl, MySQL and JavaScript.
I cannot believe no one has mentioned the LabVIEW programming language, which is widely used for real-time safety-critical systems. It has extensive libraries and well-known design patterns for architecting and implementing RT systems.
National Instruments also makes various hardware (cRIO, PXI, etc.) designed for real-time applications.
We use LabVIEW for fracking (hydraulic fracturing), which takes place in safety-critical environments.
PLCs run ladder and FBD code, which is really a real-time DSL in the sense that your options are so limited that it is difficult to program in a way that would result in unpredictable runtime performance.
A really purposeful application of the C language to real-time programming - and all related issues (such as parallel programming) - is offered by my Kickstarter
http://www.kickstarter.com/projects/767046121/crawl-space-computing-with-connel
It is called "Wide Programming" and I've been doing it most of my life. The rewards include a software library and a book - designed to be useful.
The company I've been working for since 2003 has been developing and deploying a SCADA/MES platform. The original implementation, started in 1993, used Modula-2 on OS/2. Later (1998) it was ported to Ada 95 and Windows. Currently (2019) we use the Ada compiler from AdaCore. Our system has been ported and deployed to 32/64-bit Windows, HP-UX and OpenVMS (and lately even to the Raspberry Pi). We have multiple installations in central Europe (gas industry, refineries, factories, power plants).
We feel Ada's features give our system a high degree of reliability and prevent a lot of errors that would easily occur if we used languages like C.
See also my blog
https://www.ipesoft.com/en/blog/what-language-is-the-d2000-written