
Thread: The GDI Target – Rendering Pixelbuffers

  1. #1
    ClanLib Developer
    Join Date
    May 2007
    Posts
    1,824

    Default The GDI Target – Rendering Pixelbuffers

    If I understand the GDI target correctly: basically it works by rendering triangles to a pixel buffer, which is then drawn to the screen by GDI.

    The clever bit is the use of multi-core processors. It splits the drawing into different threads to utilise the CPU to its full potential.

    At the moment, the drawing thread calls a "vertex shader" which calculates the scanline vertices. For each scanline, the vertex shader calls a "fragment shader" which draws the line.
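
    Very roughly, what I mean is something like this per triangle (a hypothetical sketch with invented names, not the actual clanGDI code):

        // Hypothetical sketch: the rasterizer produces one span per scanline and
        // hands each span to a per-scanline routine that writes the pixels.
        #include <vector>
        #include <cstdint>

        struct Span { int y, x_start, x_end; };   // pixels covered on one scanline

        // "fragment" step: fills one scanline of the pixel buffer
        void shade_scanline(uint32_t *pixels, int pitch, const Span &span, uint32_t color)
        {
            uint32_t *line = pixels + span.y * pitch;
            for (int x = span.x_start; x < span.x_end; x++)
                line[x] = color;
        }

        // per-triangle step: walks the precomputed spans and shades each one
        void draw_triangle(uint32_t *pixels, int pitch, const std::vector<Span> &spans, uint32_t color)
        {
            for (size_t i = 0; i < spans.size(); i++)
                shade_scanline(pixels, pitch, spans[i], color);
        }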

    My suggestion is to pass the shader functions as parameters to the drawing thread.

    The advantage is that clanGDI can select the optimum shader (with specializations).
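
    As a hypothetical sketch of the idea (invented names, nothing like the real API), the drawing thread would receive the scanline routine as a parameter, so clanGDI could pick a specialized version once per draw operation:

        // Hypothetical sketch: the scanline routine is passed in as a parameter,
        // so a specialized version can be chosen up front. Names are invented.
        #include <cstdint>

        typedef void (*ScanlineShader)(uint32_t *line, int x0, int x1, uint32_t color);

        // general path: stand-in for the full textured + blended routine
        void shade_general(uint32_t *line, int x0, int x1, uint32_t color)
        {
            for (int x = x0; x < x1; x++)
            {
                // here just a 50/50 blend with the destination as a placeholder
                uint32_t dst = line[x];
                line[x] = ((color >> 1) & 0x7f7f7f7f) + ((dst >> 1) & 0x7f7f7f7f);
            }
        }

        // specialization: opaque solid fill, no texture lookups and no blending
        void shade_solid(uint32_t *line, int x0, int x1, uint32_t color)
        {
            for (int x = x0; x < x1; x++)
                line[x] = color;
        }

        ScanlineShader pick_shader(bool has_texture, bool needs_blend)
        {
            if (!has_texture && !needs_blend)
                return shade_solid;    // fast path
            return shade_general;      // everything else
        }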

    If we take this further, maybe we could take this out of clanGDI and move it into clanDisplay for software rendering, so the user can mix OpenGL and software rendering for optimum performance.

    This could also be used for graphic cards that do not support frame buffer objects.

    Also, in theory, it should be trivial to make the GDI target compatible with Linux.

  2. #2
    ClanLib Developer
    Join Date
    Sep 2006
    Location
    Denmark
    Posts
    554

    Default

    Yes, the GDI target works by rendering triangles to a pixel buffer, and the result is then drawn to the screen by GDI. However, it is not a vertex shader that calculates the scanlines. A vertex shader takes the vertex attributes as input and generates varyings from them, which are then used to rasterize a triangle. Each pixel of the triangle is then run through a fragment shader and then a blending operation.

    I know I'm being a bit pedantic here, but the reason is that if/when the GDI target gets a vertex shader, that step will be performed before the rasterization phase. Technically it's the CL_PixelPipeline::transform function that is acting as the vertex shader at the moment.
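
    In rough pseudo-C++ the ordering looks something like this (illustrative names only, not the actual CL_PixelPipeline interface):

        // Illustrative stage order only; the names and types are not the real
        // CL_PixelPipeline interface.
        #include <cstdint>

        struct Vertex   { float x, y, u, v, r, g, b, a; };   // vertex attributes
        struct Varyings { float u, v, r, g, b, a; };          // interpolated per fragment

        Varyings vertex_shade(const Vertex &in)    // 1. per vertex, before rasterization
        {
            Varyings out = { in.u, in.v, in.r, in.g, in.b, in.a };
            return out;
        }

        // 2. the rasterizer interpolates the three vertices' varyings across the
        //    triangle and calls the fragment stage for every covered pixel

        uint32_t fragment_shade(const Varyings &f) // 3. per fragment
        {
            return (uint32_t(f.a * 255) << 24) | (uint32_t(f.r * 255) << 16)
                 | (uint32_t(f.g * 255) << 8)  |  uint32_t(f.b * 255);
        }

        uint32_t blend(uint32_t src, uint32_t dst) // 4. per pixel, writes the frame buffer
        {
            return ((src >> 1) & 0x7f7f7f7f) + ((dst >> 1) & 0x7f7f7f7f);  // simple average
        }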

    I've made the following drawing to illustrate what the different steps look like:



    For performance reasons the CL_SoftwareFragmentShader class in the GDI implementation does both the fragment shading and the blending in the same step. And at this time the current implementation of CL_PixelPipeline keeps the number of varyings at a constant 6 (two for the texture coordinates, four for the primary color).
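
    As a rough illustration of that combined step, assuming the 6 varyings mentioned (u, v plus r, g, b, a) - this is not the actual CL_SoftwareFragmentShader code:

        // Illustrative only: fragment shading and blending fused into one scanline
        // loop, with the 6 varyings (u, v, r, g, b, a) stepped per pixel.
        #include <cstdint>

        struct Varyings { float u, v, r, g, b, a; };

        void shade_and_blend_scanline(uint32_t *dest, int length,
                                      Varyings cur, const Varyings &step,
                                      const uint32_t *texture, int tex_width, int tex_height)
        {
            for (int x = 0; x < length; x++)
            {
                // fragment shading: sample the texture (u and v assumed in [0, 1))
                int tx = int(cur.u * (tex_width - 1));
                int ty = int(cur.v * (tex_height - 1));
                uint32_t src = texture[ty * tex_width + tx];
                // ...modulation by (r, g, b, a) omitted for brevity...

                // blending, done in the same pass: a simple 50/50 average as a stand-in
                uint32_t dst = dest[x];
                dest[x] = ((src >> 1) & 0x7f7f7f7f) + ((dst >> 1) & 0x7f7f7f7f);

                // step all 6 varyings to the next pixel
                cur.u += step.u; cur.v += step.v;
                cur.r += step.r; cur.g += step.g;
                cur.b += step.b; cur.a += step.a;
            }
        }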

    Theoretically, each box in the above drawing can be completely pipelined. So for instance one core could be shading a vertex while the next core is processing the output from the vertex shader, the next is doing fragment shading, and the last core is blending that result with the frame buffer. In practice, doing this without introducing serious overhead, while keeping all the cores busy, is far trickier.

    The current implementation only attempts to parallelize the fragment shading and blending steps. But despite my attempts at writing a lock-free list for the command queue, the overhead is unfortunately quite big. And then there's the "slight" problem of ordering: if the blending doesn't happen in exactly the same order as the triangles were queued, you won't get the output you were seeking.
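
    To make the ordering problem concrete, here is a hypothetical sketch (not the actual command queue): shading can finish on any core in any order, but each result has to be retired in submission order.

        // Hypothetical sketch of retiring blend work in submission order, even when
        // shading finishes out of order on different cores. Not the real queue.
        #include <atomic>
        #include <cstdint>

        struct DrawCommand
        {
            uint64_t sequence;   // position the triangle had in the queue
            // ...shaded fragments waiting to be blended...
        };

        std::atomic<uint64_t> next_to_blend(0);

        void retire(const DrawCommand &cmd)
        {
            // wait until every earlier command has been blended
            while (next_to_blend.load(std::memory_order_acquire) != cmd.sequence)
                ; // a real implementation would yield or help with other work here

            // ...blend cmd's fragments into the frame buffer...

            next_to_blend.store(cmd.sequence + 1, std::memory_order_release);
        }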

    Another problem with the current implementation is that it copies too much data. It spends over 30% of the time copying varyings around, which is quite stupid and might partly explain why 4 cores don't give the performance boost expected.

    About the specializations: my plan is actually to have the application and/or clanDisplay hint this. Currently the clanDisplay Render API still relies on fixed-pipeline functionality when you do not specify a program object. Since this is unavailable in DX10 and OpenGL 3.0, the plan is to simply not allow any rendering without first specifying a program object. The clanDisplay Render API will then have a series of Standard Shaders which are used by the 2D API to perform its rendering. These shaders will have very specific purposes and usage restrictions, which means the GDI target can perform specializations based on which standard shader is active.
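
    A sketch of how that specialization could be dispatched - the enum values and function names are invented, just to show the idea:

        // Invented names, just to illustrate dispatching on the active standard shader.
        enum StandardShader
        {
            standard_shader_sprite,      // textured, axis-aligned rectangles only
            standard_shader_solid_fill,  // untextured fills
            standard_shader_generic      // no usage restrictions
        };

        void draw_with(StandardShader active /*, geometry, textures, ... */)
        {
            switch (active)
            {
            case standard_shader_sprite:
                // restrictions guarantee a rectangle: do a straight blit (see below)
                break;
            case standard_shader_solid_fill:
                // specialized scanline fill, no texture sampling
                break;
            default:
                // full rasterize + fragment shade + blend pipeline
                break;
            }
        }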

    One such restriction may be that the shader is only to be used with axis-aligned 2D rectangles, intended for CL_Sprite and CL_Draw::texture. The GDI target can then skip the entire rasterize+fragment+blend step and simply execute it as a rectangular blitting operation (like it does with CL_PixelPipeline::draw_pixels currently, but with multi-core support).
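
    For example, the rectangle case could boil down to something like this (a hypothetical helper, not the existing draw_pixels code):

        // Hypothetical blit fast path: an axis-aligned, untransformed rectangle
        // reduces to row-by-row copies into the pixel buffer.
        #include <cstdint>
        #include <cstring>

        void blit_rect(uint32_t *dest, int dest_pitch, int dest_x, int dest_y,
                       const uint32_t *src, int src_pitch, int width, int height)
        {
            for (int y = 0; y < height; y++)
                std::memcpy(dest + (dest_y + y) * dest_pitch + dest_x,
                            src + y * src_pitch,
                            width * sizeof(uint32_t));
        }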

    Finally, yes, it would be trivial to make the clanGDI target function on both Linux and OS X. The biggest problem is finding a new name for the target - any suggestions?

  3. #3
    ClanLib Developer
    Join Date
    May 2007
    Posts
    1,824

    Default

    Thanks for the detailed response. It all makes sense.

    Quote Originally Posted by Magnus Norddahl View Post
    [snip]
    Finally, yes, it would be trivial to make the clanGDI target function on both Linux and OS X. The biggest problem is finding a new name for the target - any suggestions?
    I guess it depends on how clanGDI develops.

    If clanGDI is virtually OS independent, we could simply keep calling it clanGDI on Linux, with various #ifdefs, like clanGL does for GLX and WGL.
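
    Just to show the kind of split I mean (a hypothetical sketch; the function is invented):

        // Hypothetical sketch of the #ifdef split, mirroring how clanGL chooses
        // WGL on Windows and GLX elsewhere.
        void present_pixel_buffer()
        {
        #ifdef WIN32
            // blit the finished pixel buffer to the window via GDI
            // (e.g. StretchDIBits)
        #else
            // blit the finished pixel buffer to the window via Xlib
            // (e.g. XPutImage)
        #endif
        }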

    If not, it could be called clanHermes, lol.

    It's probably best to wait for clanGUI to be completed for Win32 first.
