OpenGL loaders and Cakelisp

By Macoy Madson. Published on .

The following post was submitted to Handmade Network and is mirrored here.

I have been working through LearnOpenGL lately because my graphics programming knowledge lags behind. Last time I did graphics, it was all fixed-function pipeline.

The problem

In order to use OpenGL, it is generally recommended to use an OpenGL loading library. These libraries handle requesting the function pointers for the gl* functions you need, depending on which OpenGL version and extensions you target.

I am not a huge fan of this because it shifts complexity onto the users of OpenGL. When designing APIs, it's good practice in my mind to avoid creating things that spread complexity. I understand the desire for loaders, because they make it possible to ship drivers with only partial OpenGL support, but it does make things harder for the end user of the API.

Solutions I don't like

LearnOpenGL chose GLAD as their OpenGL loader. GLAD is a code generator written in Python which generates OpenGL header/source files for your chosen version and extensions of OpenGL. I was quickly put off by GLAD for several reasons, not least that generating the files requires Python.

Surprisingly, many of the other loaders also weren't sufficient for my use case because they are written in Python, Perl, and the like. That ruled out a number of loaders.

Note that while these loaders can be used without using Python/Perl/etc., I want to be able to generate the actual header/source files from scratch if necessary.

The solution I like

I continued through the list of loading libraries until I found Galogen (GitHub).

Galogen strikes the perfect balance between flexibility and a sane implementation for me.

Both a liability and a possible feature is that the repository hasn't been touched in several years. This means I'm on my own supporting it, but it also means it probably hasn't needed to be updated.

Cakelisp's compile-time library makes running child processes very easy. This means I can both compile Galogen from source and generate fresh headers during Cakelisp's compile-time stage, which occurs right before building the final project. For example, here's the code I use to generate the configuration for GameLib:

  (run-process-sequential-or
      ;; galogen-executable and gl-xml-spec-path are illustrative names for
      ;; the built Galogen binary and its bundled XML specification
      (galogen-executable gl-xml-spec-path
       "--api" "gl" "--ver" "4.6" "--profile" "core"
       "--filename" gl-generated-output-path)
    (Log "error: failed to generate gl headers via galogen\n")
    (return false))

If the process fails for whatever reason, Cakelisp will print a relevant error and halt the build process.

This is really exciting to me, because the build setup and the project it builds are implemented in the same language, in the same file.

Currently, I rely on the XML specification included with Galogen. If I wanted to make the system support cutting-edge OpenGL, I could build CURL and download the latest specification, still during project compile-time.

Another, more Cakelisp-y solution

Cakelisp offers several options for generating and inspecting code written in Cakelisp. For example, we could write all OpenGL calls like this [1]:

(gl BindBuffer GL_ARRAY_BUFFER vertex-buffer-id)

The space between gl and BindBuffer would allow for defining gl as a macro:

(defmacro gl (function-name symbol &rest &optional arguments any)
  (var function-name-str (* (const char))
    (call-on c_str (field function-name contents)))
  ;; Store a list of all the gl functions we actually use
  (get-or-create-comptime-var used-opengl-functions
                              (<> (in std map) (in std string) int))
  (set (at function-name-str used-opengl-functions) 1)
  ;; Generate the function name
  (var full-function-name-token Token (deref function-name))
  (var full-function-name ([] 128 char) (array 0))
  (PrintfBuffer full-function-name "gl%s" function-name-str)
  (set (field full-function-name-token contents) full-function-name)
  ;; Output the actual C function invocation
  (tokenize-push output
    (call (token-splice-addr full-function-name-token)
          (token-splice-rest arguments tokens)))
  (return true))

At this point, we now have our gl* invocations generated, but we still need the loader to create a header with the function signatures. The variable used-opengl-functions is a sorted and unique tree of all OpenGL functions our project actually uses.

We can generate the required header as part of Cakelisp's post-references-resolved phase, which runs after most code has been parsed but before building. This is the only phase where code generation is possible. The generator might look something like this pseudo-code:

(defun-comptime generate-gl-loader ()
  (load-opengl-xml-spec) ;; This would be a separate comptime function handling this
  (get-or-create-comptime-var used-opengl-functions
                              (<> (in std map) (in std string) int))
  (var generated-signatures-tokens (<> (in std vector) Token))
(each-in-iterable used-opengl-functions current-function
  (opengl-generate-signature-tokens generated-signatures-tokens
                                    current-function))
  (unless (evaluate-tokens generated-signatures-tokens)
    (return false))
  (return true))

This code would come out much larger in reality in order to handle reading the specification and generating other necessary boilerplate. However, it shows the huge power Cakelisp provides by offering compile-time code parsing and generation, all written in Cakelisp alongside your project's runtime code.


I am not planning on implementing the 100% Cakelisp solution for the time being. It would take a decent amount of time compared to just using Galogen.

However, I hope it gives you another example of why I think Cakelisp is awesome. All of that extra tooling can be removed when you have a programming language with full-power compile-time code generation and execution (like Cakelisp).

  1. You may not like the macro solution here. For example, you might not like how the space between gl and the rest of the function name impairs text search, find-references tooling, and the like. You could instead use a compile-time function executed during the post-references-resolved phase to scan every function for invocations starting with gl and build the required list that way. However, that would be an O(n) scan over potentially large bodies of code, which is where the macro solution starts to look more appealing.↩︎