MPC in Simulink for reference tracking

I have written MATLAB code for reference tracking using MPC. How can I implement that in Simulink? I don't want to use any custom MPC block, so kindly help me with a simple example (specifically for tracking). I have already implemented a regulator-based MPC for an inverted pendulum in Simulink, but now I am confused about how to build a "reference tracking" MPC there.
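One common way to recast tracking as regulation, sketched here under the assumption of a linear plant x(k+1) = A x(k) + B u(k), y(k) = C x(k) with a constant reference r (Q, R, and the horizon N being the weights from your existing regulator MPC): first solve for a steady-state target pair (x_ss, u_ss), then run the regulator you already have on the deviation variables:

```latex
\begin{bmatrix} A - I & B \\ C & 0 \end{bmatrix}
\begin{bmatrix} x_{ss} \\ u_{ss} \end{bmatrix}
=
\begin{bmatrix} 0 \\ r \end{bmatrix},
\qquad
\min_{\delta u_0,\dots,\delta u_{N-1}}\;
\sum_{k=0}^{N-1} \delta x_k^\top Q\, \delta x_k + \delta u_k^\top R\, \delta u_k,
\qquad
\delta x_k = x_k - x_{ss},\quad \delta u_k = u_k - u_{ss}.
```

The applied input is u(k) = u_ss + delta_u(k). In Simulink terms this needs only Sum blocks around the regulator subsystem you already built: subtract x_ss from the measured state before it enters the regulator, and add u_ss to the regulator's output.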

Related

3ds Max and Python

I know that it is possible to create 3D objects and environments in Python using Pygame. But I wonder if it is somehow possible to use the objects that you've created in 3ds Max within a Python program, because it seems that it takes a lot of time to code every detail in Pygame to create 3D stuff. I think it would be easier if I could just use my mouse to draw objects, as in 3ds Max, than to code every detail. Sorry if there are any grammatical mistakes.
3D is a world in itself - Pygame approaches 3D by providing some bindings to OpenGL, which are themselves quite low level.
So, if you happen to find a format that can be exported by 3ds Max and is supported directly in a call to OpenGL, the answer to your question would be "yes".
But I doubt it - even if there are object formats directly usable by OpenGL, you'd still have a tough time configuring the remainder of the scene.
The advice for you is to use a higher-level binding to OpenGL, or to 3D in general, and then check which formats are common between your binding (like Python-Ogre) and the formats 3ds Max can export to.
My personal recommendation would be to use Blender 3D for your Python environment. Not only does it embed a full game engine that is 100% programmable in Python, it also supports tens of different file formats for import - and it will certainly be hundreds of times (no exaggeration here - I mean more than one hundred times) easier to code something like a finished game scene than using the raw OpenGL provided by Pygame. Also, as you get used to Blender, you might start doing part of your modeling in it as well, avoiding having to switch between Max and Blender.

How to calculate the HOGDescriptor for specified locations in OpenCV

I am trying to compute HOG descriptors for human activity recognition. How can I find the HOG descriptor for a specified location using OpenCV?
Actually, there is no built-in functionality for that. You should extract the area yourself and pass it as a Mat to the hog.compute() method.
Hint: study OpenCV's hog.detectMultiScale method, which does a similar kind of thing.
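A minimal sketch of that crop-and-compute approach (the image path and ROI coordinates are placeholders; the patch size matches the default 64x128 detection window, so exactly one descriptor is produced):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // Load a frame in grayscale (path is a placeholder).
    cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);

    // Region of interest at the location you care about; its size
    // matches the default HOGDescriptor winSize of 64x128.
    cv::Rect roi(100, 50, 64, 128);
    cv::Mat patch = frame(roi).clone();

    cv::HOGDescriptor hog;            // default window/block/cell geometry
    std::vector<float> descriptor;
    hog.compute(patch, descriptor);   // one HOG descriptor for this patch

    return 0;
}
```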

Command-line Linux OpenGL processing

I need to build a command-line tool that will take a 3D model as an argument and output images of it, which may or may not be processed by this application. The tool will be deployed on Linux, but I want to make it as cross-platform as possible.
The program is not supposed to present a window of any kind, or to accept any input apart from the command-line arguments.
I was wondering how someone would approach this. I am currently able to display the 3D model on-screen with the help of GLFW, which drives my event handlers for peripheral input as well as my main loop. However, I don't know whether GLFW will help me if I want to make a command-line program whose input and output are files.
Does anyone have any indications as to how to approach this?
Create an invisible/hidden window, use its GL context to render into an FBO, and use glReadPixels to save the result to a file.
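A minimal sketch of that recipe using GLFW 3 (assuming a desktop GL context; the FBO step is left as a comment and the hidden window's back buffer is read instead, and "out.raw" is a placeholder output path):

```cpp
#include <GLFW/glfw3.h>
#include <cstdio>
#include <vector>

int main()
{
    glfwInit();
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   // never shown on screen
    GLFWwindow* win = glfwCreateWindow(640, 480, "offscreen",
                                       nullptr, nullptr);
    glfwMakeContextCurrent(win);

    // ... create/bind an FBO here and render the model into it, or
    // simply render into the hidden window's back buffer as below ...
    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();

    // Read the pixels back and dump them as raw RGB.
    std::vector<unsigned char> pixels(640 * 480 * 3);
    glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    std::FILE* f = std::fopen("out.raw", "wb");
    std::fwrite(pixels.data(), 1, pixels.size(), f);
    std::fclose(f);

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```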
For OpenGL to work you need an OpenGL context, which used to require some kind of active windowing system that could produce a drawable for which the context could be created.
Some OpenGL implementations, like Mesa, actually allow you to create an OpenGL context for drawables that are created without a windowing system; Mesa calls this "off-screen Mesa" (OSMesa). With Gallium3D drivers on Linux this may even give you GPU acceleration, but usually you end up in the "softpipe" software rasterizer.
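For reference, a sketch of what context creation looks like through that interface (dimensions are arbitrary and error handling is omitted):

```cpp
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <cstdlib>

int main()
{
    const int width = 640, height = 480;
    void* buffer = std::malloc(width * height * 4);   // RGBA render target

    // Create a context that renders into plain client memory.
    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, nullptr);
    OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height);

    // ... ordinary GL calls now render into `buffer` ...
    glClearColor(0.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();

    OSMesaDestroyContext(ctx);
    std::free(buffer);
    return 0;
}
```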
Does anyone have any indications as to how to approach this?
Don't use OpenGL for it if all you need is offline processing; OpenGL is mostly meant for creating interactive graphics. But of course, if your goal is the visualization of complex data, a GPU may still be better suited.
With NVidia hardware you'll need an X server for that; the X server must be running and active on the console for this to work. AMD hardware with the open-source drivers and Mesa may give you off-screen capabilities without X (but I have never tried that).
On Windows Server you don't have proper OpenGL support anyway (just v1.4, and very slow), so don't bother with it.

Problem exporting mtl files from 3ds Max

I'm having a small problem with exporting mtl files from 3ds Max. I would like to use an obj plus its material library in an OpenGL program. The model gets exported just fine, but I lose all the reflective/refractive parameters of my materials when I export them (colors and such seem to be fine, so it finds them, just not completely - all materials get changed to Standard). I tried exporting materials from the scene and from the material library as well, with the same results. Could anyone tell me how to keep the reflective parameters of the materials after exporting?
As you stated in the email, you use GLUT as your API; there are several tutorials for this.
I initially thought you used an engine and just needed some values.
But you need more than that.
To use reflection/environment mapping in OpenGL you need a shader that supports it.
So what you need to do is implement a reflection shader and pass the bitmap into your shader.
In the sample file it worked fine for the rendered image (because 3ds Max supports its own raytrace materials), but those materials do not get exported in any way.
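As an illustration of the kind of reflection shader meant here (a classic cube-map lookup in legacy GLSL, stored as C++ string literals ready for glShaderSource(); the names are made up for this sketch):

```cpp
// Vertex shader: compute the eye-space reflection vector per vertex.
const char* reflectVS = R"(
    varying vec3 R;
    void main() {
        vec3 eyePos = vec3(gl_ModelViewMatrix * gl_Vertex);
        vec3 eyeNrm = normalize(gl_NormalMatrix * gl_Normal);
        R = reflect(normalize(eyePos), eyeNrm);
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
)";

// Fragment shader: look the reflection up in an environment cube map
// (the bitmap from your scene, bound as a samplerCube).
const char* reflectFS = R"(
    uniform samplerCube envMap;
    varying vec3 R;
    void main() {
        gl_FragColor = textureCube(envMap, R);
    }
)";
```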

If I build and link an OpenGL application using only OpenGL ES 1.x calls, will it still work?

I am writing an OpenGL game which will hopefully run on both Linux and iPhone OS. I basically want to be able to build against the OpenGL ES 1.5 headers and run it on my Linux desktop. Can I do this? I.e., I want to use only the subset of API calls common to OpenGL and OpenGL ES.
Doing the above and linking against the normal libGL.a from my system gets me my screen, but I seem to be able to do nothing except change the scene background colour.
I've done exactly that, and it worked well for me.
There are a bunch of OpenGL|ES extensions that aren't available in standard OpenGL but are very nice to have on a low-spec platform; glDrawTexImage is one such extension. Emulating these extensions with a handful of desktop OpenGL calls is not a big deal, though.
Also, OpenGL|ES supports a fixed-point data format for most entry points; take glClearColorx, for example. These aren't available in desktop OpenGL, so you have to write a wrapper if you want to use them. It's a bit more work if you also store your vertex data in this format.
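Such a wrapper is only a few lines. A sketch (GLfixed is the 16.16 fixed-point format from the ES spec; the local typedef is a hedge in case the desktop headers don't define it):

```cpp
#include <GL/gl.h>

typedef GLint myGLfixed;   // 16.16 fixed point, as in OpenGL|ES

static inline GLfloat fixedToFloat(myGLfixed x)
{
    return static_cast<GLfloat>(x) / 65536.0f;   // shift out the 16.16
}

// Desktop GL has no glClearColorx, so forward to the float variant.
void glClearColorx(myGLfixed r, myGLfixed g, myGLfixed b, myGLfixed a)
{
    glClearColor(fixedToFloat(r), fixedToFloat(g),
                 fixedToFloat(b), fixedToFloat(a));
}
```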
Oh - and note that OpenGL|ES does not come with the GLU library. You can use GLU on the desktop, but if you do you'll have to reimplement its functions later (see the hundred questions about gluLookAt and gluUnProject).
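For example, a gluLookAt replacement is short enough to sketch here (the helper names are illustrative; the matrix follows the documented GLU construction):

```cpp
#include <GL/gl.h>
#include <cmath>

static void normalize3(float v[3])
{
    float len = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] /= len; v[1] /= len; v[2] /= len;
}

static void cross3(const float a[3], const float b[3], float out[3])
{
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}

void myLookAt(float ex, float ey, float ez,
              float cx, float cy, float cz,
              float ux, float uy, float uz)
{
    float f[3] = { cx - ex, cy - ey, cz - ez };   // forward
    normalize3(f);
    float up[3] = { ux, uy, uz };
    float s[3], u[3];
    cross3(f, up, s);                             // side = forward x up
    normalize3(s);
    cross3(s, f, u);                              // recomputed up

    const float m[16] = {                         // column-major view matrix
        s[0], u[0], -f[0], 0.f,
        s[1], u[1], -f[1], 0.f,
        s[2], u[2], -f[2], 0.f,
        0.f,  0.f,   0.f,  1.f
    };
    glMultMatrixf(m);
    glTranslatef(-ex, -ey, -ez);                  // move the eye to the origin
}
```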
There is no such thing as OpenGL ES 1.5. Did you mean 1.1?
Also, how do you get a window? This is platform-specific.
In any case, you should still compile against the header that corresponds to the lib you will link against. You don't know for sure what the header sets up (e.g. on Windows - which you don't care about, but still - calling conventions are specified in there).
There are also some calls that don't map well between the two. E.g., APIs that use doubles in GL are float in GL ES (from the ES spec):
"The double-precision only commands DepthRange, Frustum, and Ortho are replaced with single-precision or fixed-point variants."
So, in short, there is a bit more work than just using the same code, although the work in question is still minimal if you stick to the GL ES subset.
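Concretely, one of those single-precision variants can be forwarded to its desktop double-precision counterpart when you build ES-style code against desktop GL; a sketch (glOrthof is the ES 1.1 name):

```cpp
#include <GL/gl.h>

// Desktop GL only exposes the double-precision glOrtho; provide the
// ES-style single-precision entry point on top of it.
void glOrthof(GLfloat left, GLfloat right,
              GLfloat bottom, GLfloat top,
              GLfloat zNear, GLfloat zFar)
{
    glOrtho(left, right, bottom, top, zNear, zFar);  // implicit widening
}
```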
