
Thread: ClanLib and OpenGL 4.1

  1. #1
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    ClanLib and OpenGL 4.1


    This thread describes all the ClanLib modifications and enhancements that have been made for OpenGL 4.1 support.

    Note: this only applies to ClanLib 2.3 SVN. (Patching ClanLib 2.2 would break the API.)



    All the OpenGL defines and procedures have been updated (ClanLib's version of gl3.h)

    CL_Texture's set_max_anisotropy was removed, since it is not part of the specification.

    The geometry shader example was modified: the shader was adjusted to add the layout keyword and remove "#extension GL_ARB_geometry_shader4".

  2. #2
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    You can now select the OpenGL version as follows:

    Code:
    	CL_OpenGLWindowDescription desc;
    	...
    	desc.set_version(3,2,true);
    set_version has the following documentation:

    Code:
    	/// \brief Select the OpenGL version number
    	///
    	/// Defaults to OpenGL 3.0, with lower versions allowed
    	///
    	/// \param major = OpenGL major number (e.g. 4)
    	/// \param minor = OpenGL minor number (e.g. 1)
    	/// \param allow_lower_versions = Allow lower versions of the specified OpenGL version
    	void set_version(int major, int minor, bool allow_lower_versions);
    The clanGL standard shaders will not work with an OpenGL 3.2 context, because they require GLSL 1.50.

    So we detect whether the GLSL version is 1.50 or higher and use different shaders.

    For example, we will add the following code:

    Code:
    const CL_String::char_type *cl_glsl15_fragment_color_only =
    	"#version 150\n"
    	"in vec4 Color; "
    	"out vec4 cl_FragColor;"
    	"void main(void) { cl_FragColor = Color; }";
    
    	...
    
    	if (use_glsl_1_5)
    		color_only_program.bind_frag_data_location(0, "cl_FragColor");
    But for an unknown reason, the Basic2D example running on the clanGL target does not display anything (except that gc.clear() works).

    So, desc.set_version(3,1,true); works but
    desc.set_version(3,2,true); does not.

  3. #3
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    Note: in GLSL 1.50 the attribute keyword is deprecated; it is replaced with "in".
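    In shader source terms (an illustrative snippet, not an actual ClanLib shader), the change looks like this:

```glsl
// GLSL 1.20 style, now deprecated:
//   attribute vec4 Position;
//   varying   vec4 Color;
// GLSL 1.50 equivalent:
#version 150
in  vec4 Position;  // was "attribute"
out vec4 Color;     // was "varying"; the fragment shader declares it as "in vec4 Color;"
```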

    The following changes have been made:

    Added the following functions to CL_OpenGLWindowDescription()
    Code:
    	void set_debug(bool enable);
    	void set_forward_compatible(bool enable);
    	void set_core_profile(bool enable);
    	void set_compatibility_profile(bool enable);
    	void set_layer_plane(int value);
    Improved the detection of OpenGL and GLSL versions (and cached the result).
    Removed the release number from the OpenGL version and the GLSL shader version.

    Removed CL_PolygonRasterizer::get_face_fill_mode_front() and CL_PolygonRasterizer::get_face_fill_mode_back(); replaced with CL_PolygonRasterizer::get_face_fill_mode(). (Changed for OpenGL 3.0, which deprecated the separate polygon draw mode: the PolygonMode face values FRONT and BACK are gone, and polygons are always drawn in the same mode, no matter which face is being rasterized.)

  4. #4
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    Using CL_GraphicContext::set_primitives_array() ... if using vertex arrays, you should use a vertex buffer object.

    Code:
    			// DEPRECATED FEATURES OF OPENGL 3.0
    			// Client vertex arrays - all vertex array attribute pointers must refer to buffer
    			// objects (section 2.9.1). The default vertex array object (the name zero) is
    			// also deprecated. Calling VertexAttribPointer when no buffer object or no
    			// vertex array object is bound will generate an INVALID_OPERATION error,
    			// as will calling any array drawing command when no vertex array object is
    			// bound.
    So now we create a temporary vertex buffer object if one was not specified.

    CL_OpenGLGraphicContextProvider::draw_primitives_legacy() is called when OpenGL is 3.0 and a vertex buffer object was not specified.

    We just need to write that code.
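    The fallback decision can be sketched in isolation like this (a standalone sketch; VertexAttribute, needs_temporary_buffer and upload_to_temporary_buffer are hypothetical names for illustration, not the actual provider internals):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical stand-in for one attribute of a primitives array.
struct VertexAttribute
{
	const float *client_ptr;  // non-null => client-side vertex array
	bool has_buffer_object;   // true => data already lives in a VBO
	size_t size_in_floats;
};

// A temporary buffer object is needed when a client-side array is
// given but no buffer object was specified (client vertex arrays are
// deprecated in core OpenGL 3.0+).
bool needs_temporary_buffer(const VertexAttribute &attr)
{
	return attr.client_ptr != nullptr && !attr.has_buffer_object;
}

// Simulates uploading the client array into a temporary buffer that
// the draw call can then source its attributes from.
std::vector<float> upload_to_temporary_buffer(const VertexAttribute &attr)
{
	std::vector<float> temp(attr.size_in_floats);
	std::memcpy(temp.data(), attr.client_ptr,
	            attr.size_in_floats * sizeof(float));
	return temp;
}
```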
    Last edited by rombust; 07-14-2011 at 04:03 PM.

  5. #5

    Ideally, all rendering code should use only buffer objects (via the CL_VertexArrayBuffer interface), not plain old vertex arrays.

    In the GL1/SWRender display targets, VBOs can be emulated with ordinary memory buffers, so there should be no problem implementing vertex array buffer providers for these targets.

    This is the most elegant solution but unfortunately it breaks the display module almost entirely.
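    The emulation itself could be little more than a wrapper around ordinary memory, along these lines (a minimal standalone sketch; EmulatedVertexBuffer is a made-up name, not an actual GL1/SWRender provider class):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Minimal sketch of a vertex buffer "provider" backed by plain memory,
// the way a software target could emulate a VBO.
class EmulatedVertexBuffer
{
public:
	// Allocate the buffer storage, zero-initialised.
	void create(size_t size) { data_.assign(size, 0); }

	// Copy client data into the buffer at the given byte offset.
	void upload_data(size_t offset, const void *src, size_t size)
	{
		std::memcpy(data_.data() + offset, src, size);
	}

	const void *get_data() const { return data_.data(); }
	size_t get_size() const { return data_.size(); }

private:
	std::vector<unsigned char> data_;
};
```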

  6. #6
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    Yes, I totally agree.

    Tomorrow, I will modify clanGL so that it will work with existing code (the easiest solution)

    Assuming all developers agree, we could start forcing the user (and the ClanLib internals) to use the CL_VertexArrayBuffer interface.

    Maybe this function would be of some use... except it's not implemented:
    Code:
    CL_VertexArrayBuffer CL_PrimitivesArrayBuilder::create_buffer(CL_GraphicContext &gc, CL_BufferUsage usage)
    {
    	throw CL_Exception("CL_PrimitivesArrayBuilder::create_buffer not implemented");
    }
    Or maybe that CL_PrimitivesArrayBuilder class should be removed.

    I think some more thought is required

  7. #7
    ClanLib Developer | Join Date: Sep 2006 | Location: Denmark | Posts: 551

    I am all okay with changing the provider interfaces to only support vertex array buffers. The less complexity in the abstraction, the better.

    However, I am not sure it is a good idea to remove client side vertex arrays from our API. At least not without adding something at least as convenient. I really enjoy the convenience of using a slightly slower rendering method for debugging, proof-of-concept and low bandwidth data (e.g. one or two CL_Draw::texture calls). I also think that one of the reasons OpenGL became popular was because it was very easy for beginners to do a simple glBegin/glEnd block.

    CL_PrimitivesArrayBuilder was an attempt of mine to make a nice and convenient way to build vertex arrays, but I abandoned the class because I couldn't get a satisfying result. As it is right now, we lack a nice way to describe the data in vertex array buffers. If we are going to remove client side arrays, we really need to improve this part.

    Another issue is that if each CL_Draw function creates its own vertex array buffer, it will become a lot more expensive to use. Right now the OpenGL driver probably implements client side arrays by creating several vertex buffer objects that it copies the data to before issuing the draw command. One thing we could do is implement our own class managing this at the clanDisplay level, and then make CL_Draw and the sprite batcher use it. Perhaps also use it for simulating client side arrays in CL_PrimitivesArray - unless we find an acceptable replacement for that.
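    A rough standalone sketch of that idea (the class and method names are invented for illustration, not proposed API): many small draw calls append into one shared buffer, and the data is uploaded once per flush rather than once per call.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of a clanDisplay-level helper that batches small client-side
// vertex arrays into one shared buffer, so CL_Draw-style calls do not
// each pay for creating and uploading their own buffer object.
class SharedVertexBatcher
{
public:
	// Queue a small client-side vertex array for later upload.
	void append(const float *vertices, size_t count)
	{
		pending_.insert(pending_.end(), vertices, vertices + count);
	}

	// Pretend-upload: one upload per flush, regardless of how many
	// append() calls were batched. Returns the running upload count.
	size_t flush()
	{
		if (!pending_.empty())
		{
			++upload_count_;
			pending_.clear();
		}
		return upload_count_;
	}

private:
	std::vector<float> pending_;
	size_t upload_count_ = 0;
};
```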

  8. #8
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    I have finished writing draw_primitives_legacy(), which is called when the following function receives vertex arrays without a vertex buffer object:

    Code:
    void CL_OpenGLGraphicContextProvider::draw_primitives(CL_PrimitivesType type, int num_vertices, const CL_PrimitivesArrayData * const prim_array)
    The code should be very fast: it only does a single upload for strided (interleaved) values, and multiple uploads for unstrided values.
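    The strided/unstrided distinction can be illustrated with a small standalone sketch (a hypothetical helper, not the actual draw_primitives_legacy() code): attributes interleaved with a common stride live in one memory block and need one upload, while separately packed attributes need one upload each.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical description of one vertex attribute's layout.
struct Attribute
{
	size_t offset; // byte offset of the first element
	size_t stride; // byte distance between elements (0 = tightly packed)
};

// If every attribute shares the same non-zero stride, the data is
// interleaved in one block and a single upload suffices; otherwise
// each attribute must be uploaded separately.
size_t count_uploads(const std::vector<Attribute> &attrs)
{
	if (attrs.empty())
		return 0;
	size_t stride = attrs[0].stride;
	if (stride == 0)
		return attrs.size();
	for (const Attribute &a : attrs)
		if (a.stride != stride)
			return attrs.size();
	return 1;
}
```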

  9. #9
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    Well ...

    ClanLib running the Basic2D example (in clanGL mode)

    NVIDIA:

    OpenGL 3.3 context with or without WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB set works

    It also allows vertex arrays without buffer objects!

    (Highest OpenGL context is 3.3)

    ATI (Catalyst 11.6):

    OpenGL 3.3 context without WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB works

    OpenGL 3.3 context with WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB fails at clVertexAttribPointer

    (Highest OpenGL context is 3.3)

    ATI (Catalyst 11.6b Beta ):

    OpenGL 3.3 context with or without WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB does not work (fails at clDrawArrays)

    OpenGL 3.0 context without WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB works

    (Highest OpenGL context is 3.3)

  10. #10
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    That is all I can do.

    My graphics cards will not support OpenGL 4.1 (NVIDIA requires a Fermi-based GPU).

    These look cool, and it would be great to create a ClanLib example:

    Triangle Tessellation with OpenGL 4.0 : http://prideout.net/blog/?p=48

    Extracting performance issues with the function CheckDebugLog() : http://sites.google.com/site/openglt...---tutorial-05

    (That's using clGetDebugMessageLogARB, and remember to use CL_OpenGLWindowDescription's set_debug(true).)

    Hopefully someone out there has a Fermi card.

  11. #11
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    I have added "opengl_gl.h" to the ClanLib GL API

    It contains the GL prefix equivalents to the CL prefix OpenGL functions and defines.

    I feel that it is confusing for users to have to rename (for example) glEnable to clEnable, or GL_TRIANGLES to CL_TRIANGLES.

    Note that the header is NOT enabled by default: OpenCL ( <CL/cl.hpp> ) includes <gl.h>, which conflicts with it.

    Historically we renamed "gl" to "cl", because some third party libraries included old gl.h files, conflicting with ours.
    Last edited by rombust; 07-18-2011 at 10:29 AM.

  12. #12
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    See thread - http://www.rtsoft.com/forums/showthr...and-OpenCL-1.1

    SVN commit:

    Code:
    BREAKING CHANGE:
    The clanGL and the clanGL1 target now use the GL_ prefix for standard OpenGL definitions.
    The clanGL and the clanGL1 target standard OpenGL datatypes now use the GL prefix.
        For example, CLint was changed to GLint

    I feel that everyone will agree that this is a correct commit: OpenGL users expect to be able to use GL_RGBA.

    This will avoid most namespace clashes with OpenCL.

  13. #13
    ClanLib Developer | Join Date: May 2007 | Posts: 1,785

    SVN Commit:

    Code:
    BREAKING CHANGE:
    The clanGL target OpenGL calls now use the standard gl prefix (from the cl prefix).
     For example, clBindBuffer is now glBindBuffer
    (Removed the opengl_gl.h convenience wrapper; it is no longer required)
    All complete.
