Deferred shading?
An interesting question from a reader: is deferred rendering possible on my RPi?

Well... the long answer is yes, the short answer is no. Let me explain:

Currently the only kind of rendering we do is known as forward rendering. It's a simple concept: all your shader outputs are effectively sent, or forwarded, to your display framebuffer, which is usually what you see on screen. It works pretty well.

But there are sometimes issues when you want to do complex light calculations that might depend on information you've not got direct access to, like previously drawn pixels, or multiple light directions, types, colours and so on. Also, lights of different types and intensities don't always show up properly when the pixel's light value is simply accumulated.

We do sometimes use a taste of this kind of rendering where we have shadow maps in some demos: in other words, we generate a depth buffer, store it in a texture and then use it in a 2nd pass. This is partially deferred rendering; we are collecting some info to use later, but we are still using it during another forward rendered pass.
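For reference, here's a minimal sketch of that 1st pass setup on ES2.0, rendering into a texture-backed FBO. The names and sizes are just placeholders, and note we use a colour (RGBA) texture since plain ES2.0 can't render straight to a depth texture without an extension:

Code:
#include <GLES2/gl2.h>

GLuint shadowTex, shadowFBO;
const GLsizei size = 512;  // shadow map resolution, pick to suit your GPU

// A texture we can render into.
glGenTextures(1, &shadowTex);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size, size, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// An FBO with that texture as its colour attachment.
glGenFramebuffers(1, &shadowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, shadowTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // not every format is renderable on every GPU, so always check
}

// 1st pass: draw the scene here with a shader that writes depth as colour.
// 2nd pass: glBindFramebuffer(GL_FRAMEBUFFER, 0), bind shadowTex as a
// sampler and do the normal forward render using it.
glBindFramebuffer(GL_FRAMEBUFFER, 0);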

Deferred rendering takes this idea further. It is a way of generating an intermediate set of buffers, held in textures, known as the G-buffer (Geometry buffer, though in fact it's multiple different buffers or elements representing the scene to draw), which can then be used to do much more accurate rendering, as you can keep track of and access, usually, 4 main types of information: texture (albedo) colours, normals, position info (usually world space) and a standard light buffer, usually diffuse. LearnOpenGL.com has a great tutorial on this, and uses slightly different buffer types, which helps to show the levels of complexity you can get into.
https://learnopengl.com/Advanced-Lightin...ed-Shading


Its a "2" pass process, the "1st" Geometry pass does the extraction of data to create the GBuffer and the "2nd" light pass does the per pixel fragment shading to realise all the different light effects you want to have. But our "1st" pass is going to pull  potentially more than 1 data element. On OpenGL this is fine a pass can fill multple buffers, becuase full OpenGL can define multple render targets (MRT)

Once you have those framebuffers or textures in place, many more calculations can be done on your 2nd pass, giving your fragment shader (FS) access to much more data for more complex effects.
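To give a flavour, a very stripped down 2nd pass fragment shader might look like this (GLSL ES 1.00, one diffuse light; the uniform names are made up, and in practice positions really want a float texture extension rather than plain RGBA8):

Code:
const char* lightFS = R"(
precision mediump float;
uniform sampler2D u_position;   // world positions from the G-buffer
uniform sampler2D u_normal;     // normals, stored as 0..1, from the G-buffer
uniform sampler2D u_albedo;     // base colour from the G-buffer
uniform vec3 u_lightPos;
uniform vec3 u_lightColour;
varying vec2 v_uv;              // from the fullscreen quad's vertex shader

void main() {
    vec3 pos    = texture2D(u_position, v_uv).xyz;
    vec3 normal = normalize(texture2D(u_normal, v_uv).xyz * 2.0 - 1.0);
    vec3 albedo = texture2D(u_albedo, v_uv).rgb;

    vec3  toLight = normalize(u_lightPos - pos);
    float diff    = max(dot(normal, toLight), 0.0);
    gl_FragColor  = vec4(albedo * u_lightColour * diff, 1.0);
}
)";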

So, we know we can send data down to different FBs, which we can then render to textures for use in a 2nd pass. The problem is the G-buffer has more than 1 type of data; it usually has 4, and it's not impossible to have many more. After all, if you treat each one as a Texture2D, it's just a large sequence of RGBA values of variable size... there are lots of options. We treat part of our RGBA values as depth, because ES2.0 doesn't naturally have the ability to render depth into a texture. (There's actually an extension on the RPi we could play with, and most other ES2.0 systems have it too.)
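The usual trick for that is the well known pack/unpack pair below, which spreads a 0..1 value across the 4 bytes of an RGBA8 texture; treat it as a sketch, precision on these little GPUs is a rabbit hole of its own:

Code:
// GLSL ES 1.00 helpers, pasted into your shader sources as needed.
const char* depthPackGLSL = R"(
vec4 packDepth(float v) {       // v is a 0..1 depth value
    const vec4 bitSh  = vec4(256.0*256.0*256.0, 256.0*256.0, 256.0, 1.0);
    const vec4 bitMsk = vec4(0.0, 1.0/256.0, 1.0/256.0, 1.0/256.0);
    vec4 res = fract(v * bitSh);
    res -= res.xxyz * bitMsk;   // strip bits already stored in higher bytes
    return res;
}
float unpackDepth(vec4 rgba) {  // reverse of the above
    const vec4 bitSh = vec4(1.0/(256.0*256.0*256.0),
                            1.0/(256.0*256.0), 1.0/256.0, 1.0);
    return dot(rgba, bitSh);
}
)";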

But... OpenGLES2.0 does not have MRTs; it can only write to 1 buffer at a time, meaning you have to do separate passes to create each element of the G-buffer you want.
We can get away with 2 passes most of the time; our shadow maps do that, and it's usually OK as it's a simple enough shader. But 4 passes, as well as a per pixel fragment shader? Chugsville dead ahead: that will be very slow.
If you can live with that, then great, but it really is a limitation. You need a fairly sparse scene, and your shaders must do as little as possible, to even have a chance of a decent frame rate. Oh, and there is one other issue to consider: 4 more screen-sized framebuffers in memory is a big ol' chunk of GPU RAM, not something we're blessed with at the best of times.
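To put a rough number on it: at 1280×720 with 32-bit RGBA, each buffer is 1280 × 720 × 4 bytes, about 3.7MB, so 4 of them eat roughly 15MB of GPU memory before you've drawn a single thing.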
Oh, and as the LearnOpenGL site points out, there are a couple of issues with filtering, the fix for which is not so easy on OpenGLES, as we don't have the ability to blit; so that's basically something you have to let go. There's also the fact that once you create your G-buffer, you can only apply 1 type of fragment shader on the final pass... though depending on the data you generate in the geometry pass, you might be able to apply different lighting effects to different pixels. But you can start to sense the level of complexity (and resulting speed loss) building here.

As usual with our low power GPUs, being able to do something does not mean you should. It will very much depend on your project and how much you stress the GPU. If you can manage it, deferred rendering can produce stunning results, making for very real and effective looking graphics, but the price is speed.

Now, OpenGLES3.0+ does have multiple render targets, so you can generate complete G-buffers, made up as you like, in 1 pass... and even though activating them does have a speed hit, it's far, far faster than doing individual accumulations of the buffers. This makes it much more viable on the bigger GPUs.
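As a taster, on ES3.0 the whole thing boils down to something like this; again a sketch with made up names, and I've picked 3 attachments just as an example:

Code:
#include <GLES3/gl3.h>

// Attach one texture per G-buffer element to a single FBO...
glBindFramebuffer(GL_FRAMEBUFFER, gBufferFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, positionTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, normalTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2,
                       GL_TEXTURE_2D, albedoTex, 0);

// ...and tell GL to write to all of them in the one geometry pass.
const GLenum targets[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                            GL_COLOR_ATTACHMENT2 };
glDrawBuffers(3, targets);

// The matching GLSL ES 3.00 fragment shader, one output per target:
const char* geomFS = R"(#version 300 es
precision mediump float;
in vec3 v_worldPos;
in vec3 v_normal;
in vec2 v_uv;
uniform sampler2D u_diffuseMap;
layout(location = 0) out vec4 outPosition;
layout(location = 1) out vec4 outNormal;
layout(location = 2) out vec4 outAlbedo;
void main() {
    outPosition = vec4(v_worldPos, 1.0);
    outNormal   = vec4(normalize(v_normal) * 0.5 + 0.5, 1.0);
    outAlbedo   = texture(u_diffuseMap, v_uv);
}
)";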

This is a topic I'll go into in more detail in the future, when we have more OpenGLES3.0 stuff up and running. I probably won't do anything on OpenGLES2.0, but feel free to if you want.
Brian Beuken
Lecturer in Game Programming at Breda University of Applied Sciences.
Author of The Fundamentals of C/C++ Game Programming: Using Target-based Development on SBC's 


