LabVIEW Linear Fit alternative? - statistics

In the process of calibrating a tool, we need to find the best line fit for three data points. LabVIEW has a nice Linear Fit.vi, but unfortunately that is only available in the Full Development System, which would cost $3000 our small company can ill afford, just for one library VI. Has anyone out there written a good alternative bit of line-fitting code that they would be willing to share?
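(For reference, an ordinary least-squares line fit is only a few arithmetic steps, easy to reproduce in a formula node or with basic VIs. Here is the same arithmetic as a minimal Python sketch; the example data points are made up.)

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = sum((x - mx) * (y - my)) / sum((x - mx)^2)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# example with three calibration points (placeholder values)
slope, intercept = linear_fit([1.0, 2.0, 3.0], [2.1, 3.9, 6.2])
print(slope, intercept)
```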

Related

APCS final project: Converting an audio file to a simpler MIDI file

Let's say I have the audio file for Happy Birthday. I want to convert that audio file into a simpler one that sounds like this: happy birthday.
First, I'd like to know whether I have the ability to program this. Can a high schooler who's almost finished with APCS program it?
If I can:
How would I change the bpm of the song? I've searched through a bunch of websites, but they weren't very helpful.
I know that audio files can be represented in waveforms. How would I scan for each individual wave in an audio file (I need this to isolate the notes)?
This is a very ambitious project, actually. One reason is that it involves using digital signal processing tools such as the FFT (fast Fourier transform) to analyze the sound and pick out the pitches. You might be able to find a library that can do this, but writing such a tool yourself would involve a steep learning curve.
If you would like to look further into this, there is a good online resource called "The Scientist and Engineer's Guide to Digital Signal Processing". I was able to work through and understand the discrete Fourier transform with only high school math (lots of trig) and a bit of calculus. It was a lift, though.
Trying to analyze rhythm is also no easy task. Even with the advanced tools provided in professional notation systems such as Finale, people have trouble playing rhythms in time well enough for the best transcription tools. Algorithms that "quantize" the beats help, but they also limit the amount of detail that can be included in the playback.
My guess is that as interesting and worthwhile as this project would be, to bring it to completion before the semester ends would require putting together prebuilt pieces. A lot of programming is done that way, these days.
If you scale the project back to something like just getting your code to analyze a short sample of a single note and give its pitch, that would be both impressive and doable with a lot of work. It could be done with a plain DFT algorithm instead of requiring the FFT, reducing the amount of background you'd have to acquire first. That way, you'd only have to work your way up to understanding and implementing the material at this link, which covers calculating the DFT. Notice that there is example code in BASIC; the code examples throughout this book are a big help.
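To make the scaled-back version concrete, here is a brute-force DFT sketch in Python rather than the book's BASIC; the sample rate and the synthetic test signal are placeholders for a real recorded note.

```python
import math

def dft_magnitudes(samples):
    """Brute-force DFT: magnitude of each frequency bin (O(N^2), fine for short clips)."""
    n = len(samples)
    mags = []
    for k in range(n // 2):  # bins above n/2 just mirror the lower ones
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

sample_rate = 8000   # placeholder: use your file's actual rate
# placeholder signal: a 440 Hz sine standing in for a real audio snippet
samples = [math.sin(2 * math.pi * 440 * t / sample_rate) for t in range(1000)]

mags = dft_magnitudes(samples)
peak_bin = max(range(1, len(mags)), key=lambda k: mags[k])  # skip the DC bin
print("estimated pitch:", peak_bin * sample_rate / len(samples), "Hz")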

Programming a 3d game without the use of a graphics API

As the title says, I'd like to program a 3d game (probably a BattleZone clone), but without the use of an API like OpenGL, DirectX, and the like. At the heart of the matter, I'd just like to learn how to draw basic 3d shapes to the screen and manipulate them. Don't care if it looks like crap. I've used OpenGL to achieve similar ends before, but really didn't learn about these topics.
The problem is, I have no idea where to start. I downloaded the Doom source code, but it's a bit over my head. Although I've programmed a bit, graphical matters are very much out of my depth.
I'd be very grateful if anyone could offer links or code (in any language) that would help me along in my purpose.
Sounds like an exciting project. I did something similar in the late 90's. Before OpenGL and DirectX became popular, there were a ton of great books on the subject.
Fundamentally you will have to learn how to
Represent 3D geometry
Transform that geometry (translate and rotate)
Project that geometry onto a 2D screen.
Each of those major topics has many sub-topics. For example, complex objects can be constructed from a number of polygons; you may want to limit polygons to triangles only, or support other polygons; you may want to load common model formats, e.g. .obj files, so that you can create models with off-the-shelf tools.
The topics are way too broad for a detailed answer here. Whole books are written on the subject, including
Black Art of 3D Game Programming (Book, amazingly still available)
For a good introduction to the general topics, have a look at:
http://en.wikipedia.org/wiki/3D_projection
http://en.wikipedia.org/wiki/Orthographic_projection
http://en.wikipedia.org/wiki/Transformation_matrix#Perspective_projection
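As a taste of the projection step from the list above, here is a minimal Python sketch of perspective projection by similar triangles; the focal length and screen size are arbitrary values I picked for illustration.

```python
def project(point, focal=256.0, screen_w=640, screen_h=480):
    """Perspective-project a camera-space 3D point onto 2D screen coordinates."""
    x, y, z = point
    if z <= 0:
        return None                      # behind the camera: a real renderer would clip
    sx = screen_w / 2 + focal * x / z    # similar triangles: x' = f * x / z
    sy = screen_h / 2 - focal * y / z    # minus because screen y grows downward
    return sx, sy

# a unit cube centered 5 units in front of the camera, as 8 corner points
cube = [(x, y, z + 5) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
for corner in cube:
    print(corner, "->", project(corner))
```

Connect the projected corners with lines and you already have a wireframe renderer.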
Doom, which you already looked at, uses a special optimization called heightfield rendering and does not allow for rendering of arbitrary 3D shapes (e.g., you will not find a bridge in Doom that you can walk under).
I have the second edition of Computer Graphics: Principles and Practice in C, which uses SRGP (the Simple Raster Graphics Package) and SPHIGS, a higher-level library built as a wrapper around SRGP. If you look up articles and papers on graphics research, you'll see that both of these libraries are used a lot, and they are far more direct and low-level than the APIs you mentioned. I'm having a hard time locating them, so if you do, please post a link. Note that the third edition is based on WPF, so I cannot guarantee much as to its usefulness, and I don't know whether the second edition is still in print, but I have found numerous references to the book and it has its own Wikipedia page.
Another solution would be the Win32 API, which again does not provide much in terms of rendering, but it is trivial to draw dots and lines onto a window. I have written a few tutorials on it, but I didn't cover drawing pixels and lines, so they'll only be useful if you have trouble with the basics of setting up a window. Note that it is not intended for real-time rendering, so it may get slow.
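As a quick illustration of how trivial plotting a dot is: here is a Windows-only throwaway demo that grabs the screen device context through Python's ctypes and calls GDI's SetPixel. A real program would draw inside its own window from a WM_PAINT handler instead.

```python
import ctypes

user32 = ctypes.windll.user32
gdi32 = ctypes.windll.gdi32
user32.GetDC.restype = ctypes.c_void_p          # be explicit about the handle type
gdi32.SetPixel.argtypes = [ctypes.c_void_p, ctypes.c_int, ctypes.c_int, ctypes.c_uint]
user32.ReleaseDC.argtypes = [ctypes.c_void_p, ctypes.c_void_p]

hdc = user32.GetDC(None)                        # device context for the whole screen
try:
    for x in range(200):                        # a short red horizontal line
        gdi32.SetPixel(hdc, 100 + x, 100, 0x0000FF)   # COLORREF is 0x00BBGGRR
finally:
    user32.ReleaseDC(None, hdc)
```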
Finally, you can look at X11 programming, the foundation of graphics on most Unix-like operating systems with a GUI. I haven't found the libraries for Windows, but then again I didn't invest too much time in it. I know it is available for Cygwin and for Linux in general, though, and I believe it would be very interesting to look at the core of graphics since you're already looking under the hood of 3D graphics.

Best for 3D network visualization: panda3d or Crystal Space 3D?

I am interested in creating a 3D visualization of network packets. A few years ago these things sold for tens of thousands of dollars, but now I think that I can hack one together in a few hours using an open source 3D kit.
I've looked around and have found two kits that look good: one is Panda3D and the other is Crystal Space.
My requirements are:
Fast to learn
Able to run from python or C++
Able to work with 50,000 polygons. (I want to represent each packet as a little brick in 3D space.)
This visualization doesn't need to run in a browser.
So I'm looking for advice. My questions:
Which is better for my application, Panda3D or CrystalSpace 3D?
Is there another engine that I should be looking at instead?
Thanks.
If you want to get something going in only a few hours, I think your only viable option is Visual Python. It is much faster than Panda3D/Python for large quantities of primitives and has a much easier API. It does not have an option to work from C++, but since it is a very thin wrapper over a C++ back end, I don't think you would gain much performance by dropping the Python layer. I can compute and display 8000 lit/shaded rotating boxes at 15 fps on my system.
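For a sense of what that looks like, here is a small sketch. I'm assuming the modern vpython package's box/vector/rate API (the older visual module is similar), and a 10x10x10 grid of bricks stands in for real packet data.

```python
from vpython import box, vector, color, rate

# one small box ("brick") per packet; here a 10x10x10 grid as a stand-in
bricks = [box(pos=vector(x, y, z), length=0.5, height=0.5, width=0.5,
              color=color.cyan)
          for x in range(10) for y in range(10) for z in range(10)]

while True:
    rate(30)                              # cap the loop at ~30 iterations per second
    for b in bricks:
        b.rotate(angle=0.01, axis=vector(0, 1, 0))   # spin each brick in place
```

In a real tool you would create a brick per captured packet and set its position and color from the packet's fields instead of a fixed grid.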

Procedural sound generation algorithms?

I'd like to be able to algorithmically create sounds (like monster growls, or distant thunder). This isn't as widely covered on the net as more traditional procedural content (terrain, etc.). Does anyone have any algorithms for creating these kinds of sounds?
This, in general, is a very hard problem. Just like drawing, each sound is its own thing and needs its own algorithms, and, like drawing, some are more easily done by algorithm than others. There's no general algorithm for creating sounds any more than there's a general algorithm for drawing all things like faces, insects, and mountains. Each is its own project (and often quite a big one), unless you're just looking to draw circles or generate sine waves.
Most of the case studies I know of are the many attempts to generate musical instrument sounds, and generally each of these attempts is a PhD thesis.
For a time-efficient solution, sampling is the way to go.
Or, if you really need a procedural approach, you could ask the question for one specific type of sound, and people might be able to come up with an algorithm for it. For example, I'd be interested in taking a shot at a "distant thunder" algorithm, but I don't want to bother if thunder alone, without the monsters and so on, is not useful to you.
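To sketch what I mean for the thunder case: one crude approach is heavily low-pass-filtered noise under a slowly decaying, fluctuating envelope. Here is a rough Python/numpy sketch that writes a WAV file; every constant in it is just a guess to be tweaked by ear.

```python
import numpy as np
import wave

rate = 22050
dur = 4.0
n = int(rate * dur)

rng = np.random.default_rng(0)
noise = rng.standard_normal(n)

# crude one-pole low-pass filter to keep only the low "rumble" frequencies
alpha = 0.02
rumble = np.empty(n)
acc = 0.0
for i in range(n):
    acc += alpha * (noise[i] - acc)
    rumble[i] = acc

# slowly decaying envelope with a wobble, for the rolling quality
t = np.arange(n) / rate
envelope = np.exp(-t / 1.5) * (0.6 + 0.4 * np.abs(np.sin(2 * np.pi * 0.7 * t + rng.uniform(0, 6))))

samples = rumble * envelope
samples /= np.max(np.abs(samples))            # normalize to [-1, 1]
pcm = (samples * 32767).astype(np.int16)

with wave.open("thunder.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                         # 16-bit mono PCM
    w.setframerate(rate)
    w.writeframes(pcm.tobytes())
```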
I would suggest checking out the many software projects and papers of Perry Cook who has done some great work in the realm of physical modelling (though his website is a bit of a nightmare to navigate). Though as tom10 says, it's a very hard area. If you have the stomach for a bit of signal processing then it's a very fascinating area to get into.

Learning about low-level graphics programming

I'm interested in learning about the different layers of abstraction available for making graphical applications.
I see a lot of terms thrown around: At the highest level of abstraction, I hear about things like C#, .NET, pyglet and pygame. Further down, I hear about DirectX and OpenGL. Then there's DirectDraw, SDL, the Win32 API, and still other multi-platform libraries like WxWidgets.
How can I get a good sense of where one of these layers ends and where the next one begins? What is the "lowest possible level" way of creating a window in Windows, in C? What about C++? (A code sample would be divine.) What about in X11? Are the Windows implementations of OpenGL and DirectX built on top of the Win32 API? Where can I begin to learn about these things?
There's another question on SO where Programming Windows is suggested. What about for Linux? Is there an equivalent book?
I'm aware that this is very low-level, and that there are many friendlier tools available, but I would like to at least learn the basics of what's going on beneath the surface. As much as I'd like to begin slinging windows and vectors right off the bat, starting with something like pygame is too high-level for me; I really need to make the full conceptual circuit of how you draw stuff on a computer.
I will certainly appreciate suggestions for books and resources, but I think it would be stupendously cool if the answers to this question filled up with lots of different ways to get to "Hello world" with different approaches to graphics programming. C? C++? Using OpenGL? Using DirectX? On Windows XP? On Ubuntu? Maybe I ask for too much.
The lowest level would be the graphics card's video RAM. When the computer first starts, the graphics card is typically set to the 80x25 character legacy mode.
You can write text with a BIOS-provided interrupt at this point. You can also change the foreground and background color from a palette of 16 distinctive colors. You can access ports/registers to change the display mode. At this point you could, say, load a different font into the display memory and still use the 80x25 mode (OS installations usually do this), or you can go ahead and enable VGA/SVGA. It's quite complicated; that's what drivers are for.
Once the card's in the 'higher' mode, you'd change what's on screen by accessing the memory mapped to the video card. Pixels are stored row by row, with some padding pixels at the end of each line that aren't mapped to the screen, which you have to compensate for. But yes, you could copy the pixels of an image in memory directly to the screen.
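To make that memory layout concrete, here is a toy Python simulation of a linear framebuffer (just the indexing arithmetic, not real VRAM access); the width, height, and pitch values are made up for illustration.

```python
WIDTH, HEIGHT = 320, 200      # visible resolution, one byte per pixel (mode 13h style)
PITCH = 336                   # bytes per row in memory; may be wider than the screen

framebuffer = bytearray(PITCH * HEIGHT)

def put_pixel(x, y, color):
    # linear address of a pixel: skip y full rows of PITCH bytes, then x bytes
    framebuffer[y * PITCH + x] = color

def blit(image, img_w, img_h, dst_x, dst_y):
    # copy an image row by row; the padding at the end of each row is skipped
    for row in range(img_h):
        start = (dst_y + row) * PITCH + dst_x
        framebuffer[start:start + img_w] = image[row * img_w:(row + 1) * img_w]

# fill a 16x16 sprite with palette index 4 and blit it at (100, 50)
sprite = bytes([4] * (16 * 16))
blit(sprite, 16, 16, 100, 50)
put_pixel(0, 0, 15)
```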
With things like DirectX and OpenGL, rather than writing directly to the screen, you send commands to the graphics card and it updates the screen itself. Commands like "Hey you, draw this image I've loaded into the VRAM here, here and here" or "Draw these triangles with this transformation matrix..." take a fraction of the time compared to going pixel by pixel. The CPU will thank you.
DirectX/OpenGL are programmer-friendly libraries for sending those commands to the card, with all the supporting functions to help you get it done smoothly. A more direct approach would only be unproductive.
SDL is an abstraction layer, so without bothering to read up on it I'd guess it has different ways of working on each system. On one it might use semi-direct screen writing, on another Direct3D, etc. Whatever's fastest, as long as the code stays cross-platform...able.
Then there are GDI/GDI+ and the X Window System. They're designed specifically to draw windows. Originally they drew using the pixel-by-pixel method (which was good enough, because they only had to redraw when a button was pressed or a window moved, etc.), but now they use Direct3D/OpenGL for accelerated drawing (and special effects). Optimizations depend on the versions and implementations of these libraries.
So if you want the most power and speed, DirectX/OpenGL is the way to go. SDL is certainly useful for getting the most from a cross-platform environment and it integrates with OpenGL anyway. The windowing system comes last, but don't underestimate it. Especially with the stuff Microsoft's been coming up with lately.
Michael Abrash's Graphics Programming 'Black Book' is a great place to start. Plus you can download it for free!
If you really want to start at the bottom then drawing a line is the most basic operation. Computer graphics is simply about filling in pixels on a grid (screen), so you need to work out which pixels to fill in to get a line that goes from (x0,y0) to (x1,y1).
Check out Bresenham's algorithm to get a feel for what is involved.
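For reference, here is a minimal Python sketch of the integer-error form of Bresenham's algorithm that handles all octants, just to show how little machinery a line takes:

```python
def bresenham(x0, y0, x1, y1):
    """Return the grid points on the line from (x0, y0) to (x1, y1)."""
    points = []
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:         # error says it's time to step in x
            err += dy
            x0 += sx
        if e2 <= dx:         # error says it's time to step in y
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 7, 3))
```

Feed each returned point into whatever put_pixel routine your environment gives you and you have line drawing; everything else (polygons, fills, wireframes) builds on that.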
To be a good graphics and image processing programmer doesn't require this low-level knowledge, but I do hate to be clueless about the insides of what I'm using. I see two ways to chase this: top-down, or bottom-up.
Top-down is a matter of following how the action traces from a high-level graphics operation, such as drawing a circle, down to the hardware. Get to know OpenGL well. Then the source to Mesa (free!) provides a peek at how OpenGL can be implemented in software. The source to Xorg would be next, first to see how the action goes from API calls through the client side to the X server. Finally you dive into a device driver that interfaces with the hardware.
Bottom up: build your own graphics hardware. Think of ways it could connect to a computer - how to handle massive numbers of pixels through a few byte-size registers, how DMA would work. Write a device driver, and try designing a graphics library that might be useful for app programmers.
The bottom-up way is how I learned, years ago when it was still a possibility with the slow 8-bit microprocessors. The direct experience with circuitry and hardware-software interfacing gave me a good appreciation of the difficult design decisions - e.g. whether to paint rectangles using clever hardware, in the device driver, or at a higher level. None of this is of practical everyday value, but it provided a foundation of knowledge to understand newer technology.
See the Open GPU Documentation section:
http://developer.amd.com/documentation/guides/Pages/default.aspx
HTH
On MS Windows it is easy: you use what the API provides, whether it is the standard Windows programming API or the DirectX family of APIs; they are well documented.
In an X Window environment you use whatever X11 libraries are provided. If you want to understand the principles behind windowing on X, I suggest that you do this, never mind that many others tell you not to; it will really help you to understand graphics and windowing under X. You can read the documentation on X programming (google for it). (After this exercise you would appreciate the higher-level libraries!)
Apart from the above, the absolute lowest level you can go (excluding chip level) is to call the interrupts that switch to the various graphics modes available - there are several - and then write to the screen buffers, but for this you would have to use assembler; anything else would be too slow. Going this way will not be portable at all.
Another post mentions Abrash's Black Book - an excellent resource.
Edit: As for books on programming Linux: it is a community thing; there are many howtos around. Also, find a forum and join it; as long as you act civilized you will get all the help you could ever need.
Right off the bat, I'd say "you're asking too much." From what little experience I've had, I would recommend reading some tutorials or getting a book on either DirectX or OpenGL to start out. To go any lower than that would be pretty complex. Most of the books I've seen on OGL or DX have pretty good introductions that explain what the functions/classes do.
Once you get the hang of one of these, you could always dig in to the libraries to see what exactly they're doing to go lower.
Or, if you really, absolutely MUST learn the LOWEST level... read the book in the above post.
libX11 is the lowest-level library for X11. I believe OpenGL/DirectX talk to the driver/hardware directly (or emulate unsupported operations), so they would be the lowest-level libraries there.
If you want to start with very low level programming, look for x86 assembly code for VGA and fire up a copy of dosbox or similar.
The Vulkan API gives you very low-level access to most if not all features of the GPU, both computational and graphical; it works on AMD and Nvidia GPUs (though not all of them).
You can also use CUDA, but it only works on Nvidia GPUs and gives access to computational features only, with no video output.
