Forging redplanet (Day 2): Intro To IMGUI

With the first day behind me, I look onwards to the path ahead and am reminded of why I’d lost momentum in the first place. I had gotten wrapped up in a painfully basic problem - trying to create pan/zoom behavior on the canvas layout provided by the Immediate-Mode Graphical User Interface (a.k.a. IMGUI).

IMGUI design

To understand the layout problem, I think it is worth using today’s post to briefly explain the design principles driving the development of Forge’s IMGUI framework. Hopefully, it will be somewhat interesting to someone out there!

An IMGUI, as I understand it, aims first and foremost to eliminate state synchronization. State synchronization is required in retained-mode GUI frameworks, where the state of the represented data is cached as part of the displayed interface objects. With Qt, for example, a signal is emitted when an underlying data model is updated, so that the view of the data knows it needs to update itself visually to reflect the changes. In principle this all sounds pretty sane, but in practice it often becomes cumbersome to manage the communication between the data and its corresponding view (or views, as there may be multiple simultaneously active views of the same data). For the uninitiated, there is plenty of existing literature on the benefits of designing a GUI framework in an immediate-mode manner, so let’s not waste precious words trying to convince you that IMGUI is the way to go - although it really is.
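
To make this concrete, here is a toy sketch of the immediate-mode idea - the function and field names below are placeholders for illustration, not Forge’s API. The interface is simply re-declared from the live application data every frame, so there is no cached view that could fall out of sync.

// Pseudo-C with placeholder names - not Forge's API. The UI is
// rebuilt from the current data each frame; no widget object caches
// a copy of state.health that would need a signal to stay in sync.
while (app_is_running(&state)) {
  update_simulation(&state);  // may change state.health
  gui_begin_frame(context);
  gui_label(context, "Health: %d", state.health);
  gui_end_frame(context);
}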

The design of Forge’s IMGUI is loosely inspired by existing solutions such as Dear ImGui and Nuklear. With additional pointers from the OurMachinery blog, my goal is to minimize the performance impact of the GUI within interactive applications and reserve processing power for the actual business operations. The GUI should be snappy, simple, and most importantly: able to run on low-end “toasters” to accommodate users with hardware limitations, yet still reward users who have powerful workstations. This is a guiding principle for Forge in general.

Unlike Dear ImGui, the API is intended to be more atomic and robust, providing UI elements that can be composed into any desired layout and hopefully reaching a level of expressiveness similar to HTML and CSS. That being said, there are some deliberate “limitations” - such as not supporting overlapping translucency - that I will perhaps cover another time. In any case, the “limitations” generally discourage what I consider to be bad UI design practices, so they are acceptable.

API examples

As mentioned, the API aims to provide atomic elements that together build up more complex behaviors. Creating a box element with a button on top of it is relatively simple.

// A flat grey box.
gui_rect(
  context,
  &(struct gui_style){.color = {0.4, 0.4, 0.4, 1.0}},
  (vec2){400.0, 400.0});  // width, height

struct gui_button_style const button_style = {
  .style[GUI_BUTTON_STATE_NONE] = {.color = {1.0, 0.0, 0.0, 1.0}},
  .style[GUI_BUTTON_STATE_HOVER] = {.color = {0.0, 1.0, 0.0, 1.0}},
  .style[GUI_BUTTON_STATE_ACTIVE] = {.color = {0.0, 0.0, 1.0, 1.0}},
  .states = GUI_BUTTON_STYLE_HOVER | GUI_BUTTON_STYLE_ACTIVE
};

bool const pressed = gui_button(
  context,
  hash_string("my_button"),  // uint64_t unique id
  &button_style,
  (vec2){256.0, 64.0},  // width, height
  0);
if (pressed) {
  // Do something!
}

Bear in mind that most of the verbosity currently lies in styling the GUI elements, and it can be vastly reduced with presets or external configuration. This is an approach to API design I find quite empowering: by keeping the foundation flexible, developers are free to impose restrictions at higher levels of the API. I also particularly like that GUI elements do not have to live in some prescribed “root” window, unlike in other IMGUI libraries. What you type is what you get!
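
To sketch what such a preset might look like - the default_button_style constant below is hypothetical, not part of the current API - a style defined once and shared across call sites already removes most of the noise:

// Hypothetical style preset, defined once and reused everywhere.
static struct gui_button_style const default_button_style = {
  .style[GUI_BUTTON_STATE_NONE] = {.color = {0.2, 0.2, 0.2, 1.0}},
  .style[GUI_BUTTON_STATE_HOVER] = {.color = {0.3, 0.3, 0.3, 1.0}},
  .style[GUI_BUTTON_STATE_ACTIVE] = {.color = {0.15, 0.15, 0.15, 1.0}},
  .states = GUI_BUTTON_STYLE_HOVER | GUI_BUTTON_STYLE_ACTIVE
};

// Call sites then only specify identity and size.
if (gui_button(context, hash_string("save"), &default_button_style,
               (vec2){128.0, 32.0}, 0)) {
  // Save!
}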

Elements can be laid out with a few basic functions. push_gui_layout_container(), for example, sets the space in which subsequent elements will be positioned. This can be ignored by setting the absolute parameter to true in any of the layout functions, as sketched after the example below.

push_gui_layout_container(
  context,
  (vec4){32.0, 32.0, 800.0, 600.0},  // x, y, width, height
  false);

// The next GUI element will be positioned relative to the container
// origin (32, 32).
set_next_gui_position(context, (vec2){0.0, 0.0}, false);
gui_rect(...);

// Animate the next element on a small circle, relative to the
// container origin.
set_next_gui_position(
  context,
  (vec2){20.0 * cosf(time), 20.0 * sinf(time)},
  false);
gui_button(...);

pop_gui_layout_container(context);
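
To illustrate the absolute flag mentioned earlier - assuming, as the examples suggest, that the trailing boolean in these calls is that flag - an element inside a pushed container can still be placed in window coordinates:

// With absolute set to true, the container on the stack is ignored
// and (10, 10) is interpreted relative to the window origin rather
// than the container origin.
set_next_gui_position(context, (vec2){10.0, 10.0}, true);
gui_rect(...);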

Any fancier layout functionality just does a bit of arithmetic to figure out sizes and spacing for upcoming elements. Currently on the roadmap are a flex/flow layout - not unlike CSS flexbox - and a grid layout.

Aside from the absolute basics, I have been working on a canvas layout. The intention is to use this as a generic basis for node graphs with pan/zoom functionality. In the spirit of keeping these posts manageable, however, the implementation details for the canvas shall be deferred to another day. Until then, happy hacking!