I have a model that I am trying to use in a web game using three.js.
When I render an image of the scene in Blender, the quality of the image is very good. Specifically, the textures are very high quality, crisp, and matte.
When I set up the scene in my game, they look very dull and almost plain.
I've looked up ray tracing, ambient occlusion, and lightmaps, but all of these affect the lighting; they should not affect the quality of the textures. What am I missing here?
What does Blender's offline renderer do that real-time renderers (like three.js's WebGL renderer) usually don't?
Thanks a lot in advance.
Merry Christmas,
I guess the best way is to use baking... That means you bake the high-quality lighting information into an image texture. (This should solve your problem with plain-looking textures.)
I recommend checking out this tutorial by Andrew Price (blenderguru.com):
https://www.youtube.com/watch?v=sB09T--_ZvU
Also make sure your real-time client uses proper texture filtering, supports normal maps, etc., and that the web client does not downscale your images for some reason.
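For example, on the three.js side the setup could look roughly like this. The file names and the lightmap UV channel are placeholders for your own assets, and some property names differ on older three.js versions (e.g. texture.encoding instead of texture.colorSpace):

    import * as THREE from 'three';

    const renderer = new THREE.WebGLRenderer({ antialias: true }); // or your existing renderer
    const loader = new THREE.TextureLoader();

    // Base colour texture exported from Blender (file names are placeholders).
    const baseColor = loader.load('model_basecolor.png');
    baseColor.colorSpace = THREE.SRGBColorSpace;                     // avoid washed-out colours
    baseColor.anisotropy = renderer.capabilities.getMaxAnisotropy(); // sharper at glancing angles
    baseColor.minFilter = THREE.LinearMipmapLinearFilter;            // trilinear filtering

    // Lightmap baked in Blender; the geometry needs a second UV set for it
    // (called uv2 in older three.js versions, uv1 in newer ones).
    const lightMap = loader.load('model_lightmap.png');

    const material = new THREE.MeshStandardMaterial({
      map: baseColor,
      lightMap: lightMap,
      lightMapIntensity: 1.0,
      normalMap: loader.load('model_normal.png'),
    });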
I am working on a 3D simulation program using OpenGL which uses a render loop with a fixed framerate to keep the screen updated as the world changes. Standard procedure really, and for a typical video game this is certainly the best approach (I originally took this code from an OpenGL game tutorial). But for me, the 3D scene will not be changing as rapidly and unpredictably as in a computer game. It will be possible for the 3D scene itself to change from time to time, but in general it won't change between render calls (it's more of a visualisation tool for geometric problems). The user will be able to control the position/orientation of the camera, but in general there will be times when the camera won't move for several seconds/minutes (potentially hundreds of render calls), and since the 3D scene is likely to be static for the majority of the time, I wonder if I really need a continuous render loop...?
My thinking is that I will remove the automatic render loop and instead explicitly call my update method when either:
The 3D scene changes (very rare)
The camera moves (somewhat rare)
As I will be using this largely for research purposes, the scene/camera is likely to stay in one state for several minutes at a time and it seems silly to be continuously updating the frame buffer when it's not changing.
My question then is, is this a good approach? All the online tutorials for 3D graphics rendering seem to deal with game design but that's not really my requirement. In other words, what are the pros and cons of using a render loop vs. manually calling "update()" whenever something changes?
Thanks
There's no problem with this approach; in fact, many 3D apps, like 3ds Max, use explicit rendering. You just pick whatever is better for your needs: in most games the scene changes every frame, so it's better to have an update loop, but if you were doing something like a chess game without an animated UI, you could use explicit rendering only when the scene changes.
For apps with rare changes, like 3ds Max or Blender, it is better to render only on change. This way you save CPU/GPU time as well as power, and your PC doesn't heat up as much.
Explicit rendering also allows some performance tricks, like drawing a simplified scene while the camera moves. Then, when the camera stops, you render the full scene once more in the background and replace the low-quality rendering with the new one.
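A minimal sketch of that render-on-demand idea, written here in TypeScript against a browser canvas since the pattern is the same for OpenGL; the renderScene*/updateCameraFromMouse functions and the canvas are placeholders for your own code, not a real API:

    declare const canvas: HTMLCanvasElement;
    declare function renderSceneFull(): void;        // your normal draw call
    declare function renderSceneSimplified(): void;  // cheaper draw used while dragging
    declare function updateCameraFromMouse(e: PointerEvent): void;

    let redrawPending = false;   // is a frame already scheduled?
    let cameraMoving = false;    // true while the user drags the camera

    // Call this whenever the scene or the camera changes.
    function invalidate(): void {
      if (redrawPending) return; // a redraw is already queued
      redrawPending = true;
      requestAnimationFrame(drawFrame);
    }

    function drawFrame(): void {
      redrawPending = false;
      if (cameraMoving) {
        renderSceneSimplified(); // cheap preview while interacting
      } else {
        renderSceneFull();       // full quality once things settle
      }
    }

    // Example hooks: camera controls call invalidate() on every change.
    canvas.addEventListener('pointermove', (e) => {
      if (e.buttons !== 0) {     // dragging
        cameraMoving = true;
        updateCameraFromMouse(e);
        invalidate();
      }
    });
    canvas.addEventListener('pointerup', () => {
      cameraMoving = false;
      invalidate();              // one last full-quality frame
    });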
I have used pbrt to render my scene. I have specified the viewing angle in the scene file, and on rendering it with pbrt I see the image from that specific viewing angle. I want to know whether there is a way to rotate the scene rendered by pbrt with my mouse in real time.
No.
To see if it is even possible, render a scene and time how long it takes. In order to get it real-time you will need pbrt to render at least a few frames a second, preferably 60!
I don't think this is going to happen in 2016.
Alternatively, you will need something like an OpenGL representation of the scene to perform the real-time interaction, and then the rendered image can only be displayed over the top once the rendering has finished. The frustums need to match in order for you to do this, otherwise what the user interacts with will not be the same as what they see rendered.
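If the interactive preview ends up being WebGL/three.js rather than desktop OpenGL, matching the frustum could look roughly like this. It assumes pbrt's convention that the perspective camera's fov applies to the narrower image axis, so verify against your pbrt version:

    import * as THREE from 'three';

    // pbrt's "perspective" camera fov is the angle across the NARROWER image axis,
    // while THREE.PerspectiveCamera takes the VERTICAL fov, so convert when needed.
    function previewCameraFromPbrt(fovDeg: number, width: number, height: number): THREE.PerspectiveCamera {
      const aspect = width / height;
      let verticalFov = fovDeg;
      if (aspect < 1) {
        // Portrait images: pbrt's fov is horizontal, so derive the vertical one.
        const fovH = THREE.MathUtils.degToRad(fovDeg);
        verticalFov = THREE.MathUtils.radToDeg(2 * Math.atan(Math.tan(fovH / 2) / aspect));
      }
      return new THREE.PerspectiveCamera(verticalFov, aspect, 0.1, 1000);
    }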
If you're editing the scene file, it sounds like you're not in coding land, so the only possibility is to write a program that can display the scene (in GL), update the scene file so the camera matches the current view, and then render with pbrt. It's all going to take a long time (pbrt needs to parse the file each time and re-buffer all the geometry), since supplying the file means pbrt won't save anything from the previous state and will have to rebuild acceleration structures etc. as well as render the scene. Each frame!
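As a very rough sketch of that "rewrite the scene file and re-render" loop in Node/TypeScript (it assumes the camera is given by a single LookAt line, that pbrt is on your PATH, and the scene file layout is a placeholder; adjust for your own setup):

    import { readFileSync, writeFileSync } from 'fs';
    import { execFileSync } from 'child_process';

    // Overwrite the LookAt directive in a .pbrt scene file with the current
    // preview camera, then re-run pbrt. Assumes the scene uses exactly one
    // single-line LookAt (eye, target, up) - adjust the regex for your file.
    function renderWithCamera(sceneFile: string,
                              eye: number[], target: number[], up: number[]): void {
      const lookAt = `LookAt ${eye.join(' ')}  ${target.join(' ')}  ${up.join(' ')}`;
      const src = readFileSync(sceneFile, 'utf8');
      const patched = src.replace(/^LookAt .*$/m, lookAt);
      writeFileSync(sceneFile, patched);

      // pbrt re-parses the whole file and rebuilds its acceleration structures
      // every time, which is why this will never be interactive.
      execFileSync('pbrt', [sceneFile]);
    }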
Even in code, pbrt is not going to give you great performance. It's not designed for that; it's meant to be a physically accurate path tracer (as the name suggests). In order to get anything remotely near real time, you'll need some serious acceleration structures and a better command of the light transport model you are using. If you really are interested, you'll probably need to write your own renderer. Look into Metropolis Light Transport (MLT) and Vertex Connection and Merging (VCM), which are much more refined/efficient Monte Carlo methods.
Plus some pretty decent hardware with lots of cores, or a decent graphics card if you wish to employ SIMD through CUDA or equivalent.
[EDIT] Also note that the pbrt renderer is based on the book "Physically Based Rendering: From Theory to Implementation" (ISBN-13: 978-0123750792), which outlines how to implement your own version of pbrt.
I made a game in Unity3D. Its graphics look perfect in the Unity editor, but when I built it and played it in the web player, the graphics became pixelated and blurry.
So how can I make it a pixel-perfect game for the web player?
This also happened to me once, but I got the answer after some searching on Unity.
This is what you need to do:
Select the texture which becomes pixelated.
From the import settings:
Texture Type = Texture
Filter Mode = Trilinear
Select Web as a platform and check "Override for Web"
Max Size = max
Format = Truecolor
Then click Apply; this should definitely help.
Source
Try changing your image size in Photoshop; I think you saved it at a small size.
And always make your graphics in vector format so they stay pixel-free, and you always have the chance to make a new image from the vector original.
I'm looking at some older code which is rendering some images, animations, etc... for a website by generating a web page containing significant SVG elements. The result is a fairly complicated, interactive, interface. I've been tasked with migrating the application to instead generate WebGL calls.
This is a non-trivial task, considering all of the niceties that come with SVG, which are not directly available if going straight to a WebGL implementation. I've been debating whether I should pitch migrating to using something like Three.js instead, but don't know enough about the available options to make a good decision.
What are some reasonable options I should consider when trying to build my battle plan here?
I would suggest you look at http://code.google.com/p/canvg/ as an option.
I assume it is using getContext("2d") not getContext("experimental-webgl") or getContext("webgl").
WebGL provides a 3D interface, and I am not sure there is any advantage to using it for 2D graphics, since you don't have any 3D transforms for the GPU to work on. If they are interested in canvas, not specifically WebGL, canvg would bring over some of the niceties of SVG, which would be the source content.
If the issue is lack of support for SVG in browsers, http://code.google.com/p/svgweb/ goes a long way to solving that problem.
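If the project does end up going through WebGL/three.js anyway, one common pattern (independent of canvg) is to rasterise the SVG into an image and upload it as a texture. A hedged sketch, assuming the SVG markup is available as a string with explicit width/height attributes:

    import * as THREE from 'three';

    // Rasterise an SVG string into an <img> the browser can decode,
    // then hand it to three.js as an ordinary texture on a flat quad.
    function svgToTexture(svgMarkup: string): Promise<THREE.Texture> {
      return new Promise((resolve, reject) => {
        const blob = new Blob([svgMarkup], { type: 'image/svg+xml' });
        const url = URL.createObjectURL(blob);
        const img = new Image();
        img.onload = () => {
          const texture = new THREE.Texture(img);
          texture.needsUpdate = true;   // tell three.js to upload it to the GPU
          URL.revokeObjectURL(url);
          resolve(texture);
        };
        img.onerror = reject;
        img.src = url;
      });
    }

    // Usage: put the rasterised SVG on a simple plane in the scene.
    async function addSvgQuad(scene: THREE.Scene, svgMarkup: string): Promise<void> {
      const texture = await svgToTexture(svgMarkup);
      const material = new THREE.MeshBasicMaterial({ map: texture, transparent: true });
      const quad = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material);
      scene.add(quad);
    }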
Are SVG graphics a viable option for an in-browser game, with a google-maps style interface? This would involve zooming in/out, and scrolling in two dimensions over a very large distance.
For example, the client might request some area to be drawn in from the server -- and rather than the server returning a generated image for that section, it would return a series of gzipped SVG images and their locations in the requested area. Then the user could zoom in and out without grabbing new "tiles" from the server, since SVGs are scalable.
Would this be better than generating pngs or jpegs and sending back tiles? Would it perform well if there were many clients requesting images all over the place? Would it perform well on the client? What are the downsides to this approach?
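For concreteness, here is a rough sketch of what I imagine the client doing with such a response (the response shape, endpoint, and container element are all hypothetical, and the browser handles the gzip part transparently via Content-Encoding):

    // Hypothetical shape of one tile in the server's response.
    interface SvgTile {
      x: number;        // position of the tile inside the requested area, in px
      y: number;
      width: number;
      height: number;
      svg: string;      // the SVG markup itself (sent gzip-compressed on the wire)
    }

    // Fetch the tiles for an area and drop them into an absolutely positioned
    // container; zooming can then be done with a CSS transform, with no new
    // requests, because the tiles are vector data.
    async function loadArea(container: HTMLElement, areaId: string): Promise<void> {
      const response = await fetch(`/tiles?area=${encodeURIComponent(areaId)}`); // placeholder endpoint
      const tiles: SvgTile[] = await response.json();

      for (const tile of tiles) {
        const wrapper = document.createElement('div');
        wrapper.style.position = 'absolute';
        wrapper.style.left = `${tile.x}px`;
        wrapper.style.top = `${tile.y}px`;
        wrapper.style.width = `${tile.width}px`;
        wrapper.style.height = `${tile.height}px`;
        wrapper.innerHTML = tile.svg;   // inline the SVG so it scales losslessly
        container.appendChild(wrapper);
      }
    }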
In my experience, the downside is that the achievable level of detail using SVG is lower than with raster formats like JPEG and PNG. I had difficulties getting all my vector graphics to play nicely with each other; if your artists are comfortable working in SVG, then this may not be an issue. Another note is that SVG compatibility may vary between browsers. For instance, I'm not sure which browsers support SVG: WebKit does, and I think Firefox does mostly, but I'm fairly sure IE is out of the picture, so to speak.
Overall, SVG will put higher demands on client machines and lower demands on your servers. Rendering hundreds of SVG images is a lot more work than arranging PNGs.
It really depends on your game. If you are writing chess, it would probably work fine. If you want to do something more complex in real time (e.g. a 2D side-scrolling game), I have no clue.
Using this SVG clock in Raphaël as an example: I am running Chrome on Windows, and periodically different bars "twitch" and "reset" for a second.
Edit
I just saw this first-person SVG demo, so it can be done.