Visualization with OpenGL

This document describes essential approaches to programming computer graphics with the OpenGL (Open Graphics Library) graphics library and serves as the basis for the exercises in the PRACE Summer of HPC visualization training. The rationale for giving an introduction to OpenGL is that such knowledge is important when developing codes that require some specific visualization for which OpenGL can be handy. Programming the Graphical Processing Unit (GPU) through shaders is an important technique for accelerating graphics and other embarrassingly parallel problems. OpenGL evolved from immediate mode to GPU-only processing with the advent of the OpenGL Shading Language (GLSL). The subject is introduced through recipes that discuss important visualization techniques, which can also be extended to general GPU programming for parallel computing. Instead of jumping to the latest OpenGL specification, we use the minimum required OpenGL 2.1 with the extensions currently available on modest hardware, while still being able to use modern OpenGL 3.1+ programming principles.

Introduction

For the visualization of specific phenomena it is usually not possible to use general-purpose visualization tools. Such cases occur especially in the visualization of engineering and physics problems. The modeling results are usually not simple function plots but complex objects such as graphs, hierarchical structures, animations, motion mechanisms, control channels, volume models of specific forms, and so on.

Over time, different standards for computer graphics were in effect, mainly due to the complexity of implementations and closed code in the past. OpenGL remains the only widely accepted open standard; it was first introduced on Silicon Graphics (SGI) workstations. There is also Microsoft Direct3D, which is limited to PCs running Windows and is not as easy to use as OpenGL, whose openness and capability are provided on all operating systems and hardware platforms. OpenGL stagnated for some time with only incremental upgrades to the original SGI specification. Many extensions previously available from hardware vendors are now standardized in OpenGL 3+, where things changed dramatically. Immediate mode programming, where every vertex was communicated from the OS to the GPU on each redraw, was regular practice and a major obstacle to graphics performance. Programming knowledge of OpenGL 1.x is therefore not recommended nowadays and can simply be forgotten and treated as legacy.

Pipeline

Modern OpenGL changed the previously fixed rendering pipeline into a fully programmable graphics pipeline, as shown in Fig. 1. The processors that transform input vertex data into the window context at the end are called shaders. The vertex shader and the fragment shader are the most important ones in the rendering pipeline. To use the rendering pipeline shown in Fig. 1 one has to provide programs for them, as there is no default; they are an essential part of every modern OpenGL program. Shaders are programmed in GLSL (OpenGL Shading Language), which is similar to the C language, with some predefined variables and reserved keywords that describe the communication between the OS and the GPU. Programs for shaders written in GLSL are compiled on-the-fly, assembled, and transferred to the GPU for execution.
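As a first taste of what follows later, the sketch below shows a minimal GLSL 1.20 vertex/fragment shader pair and the calls that compile and link them at run time. It assumes an extension loader such as GLEW exposes the GL 2.0 shader entry points; the function name build_program is illustrative and error checking is omitted.

#include <GL/glew.h>   /* assumption: extension loader providing the GL 2.0 shader calls */

static const char *vertex_src =
    "#version 120\n"
    "void main() {\n"
    "  gl_FrontColor = gl_Color;  /* pass the vertex color on */\n"
    "  gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";

static const char *fragment_src =
    "#version 120\n"
    "void main() {\n"
    "  gl_FragColor = gl_Color;   /* write the interpolated color */\n"
    "}\n";

GLuint build_program(void)
{
  GLuint vs = glCreateShader(GL_VERTEX_SHADER);
  GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
  glShaderSource(vs, 1, &vertex_src, NULL);   /* hand the source to the driver */
  glShaderSource(fs, 1, &fragment_src, NULL);
  glCompileShader(vs);                        /* compiled on-the-fly by the driver */
  glCompileShader(fs);
  GLuint prog = glCreateProgram();
  glAttachShader(prog, vs);
  glAttachShader(prog, fs);
  glLinkProgram(prog);                        /* link into a GPU program */
  return prog;                                /* activate later with glUseProgram(prog) */
}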

OpenGL is designed as a hardware-independent interface between program code and the graphics accelerator. Hardware independence of OpenGL also means that the language specification contains no support for handling the window-system events that occur in interactive programming. For such interactive control, interfaces were designed for each operating system that connect the machine's windowing system with OpenGL. Due to the specifics of the different window systems (Windows, X Window, MacOS, iOS, Android), techniques tailored to each system are required to call OpenGL commands in hardware. Portability is thus limited by the graphical user interface (GUI) that handles the OpenGL context (window). In order to still be able to write portable programs, with otherwise limited user-interface functionality, the GLUT (OpenGL Utility Toolkit) library was created. It hides the differences between operating systems and introduces a unified way of handling events. With the GLUT library it is possible to write portable programs that are easy to write and have sufficient capability for simple user interfaces.

Legacy OpenGL coding

Basics of the OpenGL language are given in the (core) GL library. More complex primitives can be built with the GLU (GL Utility) library, which contains routines that use only GL routines. GLU routines combine multiple GL commands that are generally applicable and have therefore been implemented to ease OpenGL programming.
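For example, a GLU quadric object can generate the vertices of a sphere for us. The following is a minimal sketch, assuming the GLU library is linked (-lGLU); the helper name draw_sphere is illustrative.

#include <GL/glu.h>

void draw_sphere(void)
{
  GLUquadric *quad = gluNewQuadric();    /* helper object used by GLU shapes */
  gluQuadricNormals(quad, GLU_SMOOTH);   /* per-vertex normals, useful for lighting */
  gluSphere(quad, 0.5, 32, 16);          /* radius, slices, stacks */
  gluDeleteQuadric(quad);
}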

To get quickly introduced to OpenGL it is better to start with a legacy (short and simple) program, which will later be replaced with modern OpenGL after discussing what caused its replacement in OpenGL 3.x. Before we can dive into OpenGL we need to review windowing systems and how they interact with users.

Events

All window interfaces (GUIs) are designed to operate on the principle of events. Events are signals from the window system to our program. Our program is fully responsible for the content of the window; the windowing system only assigns the area (window). The contents of the window area must then be fully controlled by the program. In addition to assigning the window, the windowing system sends messages (events) to our program. The most common messages are:

display
A request to present the window contents. There are several occasions when this happens, for example when another window reveals part of our window or when the window is moved on the screen. Another example is when the window is re-displayed after its icon is clicked in the taskbar. Interception of this event is mandatory, because every program must ensure that the contents of the window are restored when such an event occurs.
reshape
A command to our program that occurs when the size and/or shape of the window changes. In this case the content of the window must be redrawn for the new window size. The event occurs, among other things, when the mouse resizes the window. Immediately after reshape, a display event is sent.
keyboard
Commands coming from the keyboard.
mouse
Reports a change of a mouse button, i.e. when the user presses or releases one of the buttons.
motion
Reports the motion of the mouse while a button is pressed (drag tracking).
timer
The program requests this message after a certain time in order to change the contents of the window. It is suitable for timed simulations (animation).
In addition to these events there are some others too. In general it is not necessary for all events to a window to be implemented in our program. It is our responsibility to decide which events will be used in the application. Usually the program must notify the windowing system which events it takes over, and for those the window will receive events; registering such optional callbacks is sketched below.
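A possible sketch of such optional registrations is given below; the callback names reshape and timer and the helper register_callbacks are illustrative and not part of GLUT itself.

#include <GL/glut.h>

void reshape(int width, int height)
{
  glViewport(0, 0, width, height);      /* adapt the drawing area to the new window size */
}

void timer(int value)
{
  glutPostRedisplay();                  /* ask for a display event */
  glutTimerFunc(40, timer, 0);          /* re-arm: roughly 25 redraws per second */
}

void register_callbacks(void)           /* call from main() after glutCreateWindow */
{
  glutReshapeFunc(reshape);
  glutTimerFunc(40, timer, 0);
}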

GLUT

For an abstraction of events (commands from the windowing system) we will use the GLUT (OpenGL Utility Toolkit) library. Many other GUI libraries are available (native and portable); GLUT falls into the category of simple operating/windowing system independent GUIs for OpenGL. An example of a minimal program that draws a single line is shown in Listing 1 (first.c). The C program consists of two parts: the subroutine display and the main program. The program starts in main() and at the end falls into the endless loop glutMainLoop, which calls the registered subroutines when events occur. Before falling into glutMainLoop we need to prepare the drawing context.

#include <GL/glut.h>

void display()                          /* called on every display event */
{
  glClear(GL_COLOR_BUFFER_BIT);         /* clear the color buffer */
  glColor3f(1.0, 0.4, 1.0);             /* current drawing color (RGB) */
  glBegin(GL_LINES);
    glVertex2f(0.1, 0.1);               /* z defaults to 0 */
    glVertex3f(0.8, 0.8, 1.0);
  glEnd();
  glutSwapBuffers();                    /* show the freshly drawn buffer */
}

int main(int argc, char *argv[])
{
  glutInit(&argc,argv);                 /* initialize the GLUT library */
  glutInitDisplayMode(GLUT_DOUBLE);     /* request a double-buffered window */
  glutCreateWindow("first.c GL code");
  glutDisplayFunc(display);             /* register the display callback */
  glutMainLoop();                       /* endless event loop */
  return 0;
}
Listing 2 (first.py): the same program in Python.

from OpenGL.GLUT import *
from OpenGL.GL import *
import sys

def display():
    glClear(GL_COLOR_BUFFER_BIT)
    glColor3f(1.0, 0.4, 1.0)
    glBegin(GL_LINES)
    glVertex2f(0.1, 0.1)
    glVertex3f(0.8, 0.8, 1.0)
    glEnd()
    glutSwapBuffers()

if __name__ == "__main__":
    glutInit(sys.argv)
    glutInitDisplayMode(GLUT_DOUBLE)
    glutCreateWindow("first.py GL code")
    glutDisplayFunc(display)
    glutMainLoop()

The structure of the program is usually very similar in all languages; compare Listing 2 (first.py), the same program rewritten in Python. All GLUT programs include commands in the following order:

  • Include the definitions of constants and functions for OpenGL and GLUT with the include statement.
  • Initialize GLUT and set up other variables that are not directly related to OpenGL but rather to the object being visualized.
  • Set window parameters such as the initial position, size, type, and bit-plane memory.
  • Create the window and name it.
  • Set up the features of the OpenGL machine. These are usually glEnable commands that configure lighting, materials, lists, and other non-default behaviour of the OpenGL machine.
  • Register call-back routines that will be called on events. Only the registration of glutDisplayFunc(display) is mandatory; the rest are optional.
  • The last command in main is a call to glutMainLoop, from which the program returns when the window is closed. At that point the main program ends.

The command glutInit initializes the GLUT library routines. It is followed by a request to create a window of a certain type. The constant GLUT_DOUBLE, together with the default GLUT_RGB, means that we want a double-buffered window with an RGB color space. glutCreateWindow creates the window, returns its identifier, and at the same time instructs the OS to set the window title. We have to tell the window system which events the program will intercept; in the example given, this is only the display of the window contents. The call to glutDisplayFunc instructs glutMainLoop that whenever the OS requests a window redisplay, the subroutine display should be called. Routines for handling events are usually called call-back routines, as they reside in the program as standalone code snippets that are called auto-magically on certain events from the windowing system. When an event occurs is up to the windowing system, which follows user interaction. The main point to emphasize here is that registered call-back routines do get additional information about the kind of event; for a keyboard event, for example, we also get the mouse (x, y) coordinates besides the key pressed, as in the sketch below.
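The following sketch shows keyboard and mouse callbacks; the names keyboard and mouse are illustrative, only the argument signatures are dictated by GLUT.

#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>

/* Both callbacks receive the pointer position in window coordinates
   in addition to the key or button information. */
void keyboard(unsigned char key, int x, int y)
{
  printf("key '%c' pressed with the pointer at (%d, %d)\n", key, x, y);
  if (key == 27) exit(0);               /* ESC quits the program */
}

void mouse(int button, int state, int x, int y)
{
  if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
    printf("left button pressed at (%d, %d)\n", x, y);
}

/* registered in main() with: glutKeyboardFunc(keyboard); glutMouseFunc(mouse); */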

We have seen that the subroutine display contains the commands responsible for drawing in the window. All routines and functions there belong to OpenGL and carry the prefix gl in their name. The prefix is necessary to distinguish them and to prevent name clashes with other libraries. To understand the language one can interpret function names without prefixes and suffixes, as OpenGL is designed so that the argument types are similar for all programming languages. The subroutine display is therefore responsible for drawing the contents of the window. The glClear command clears the entire area of the window. When clearing we must define precisely what we want to clear by the argument given; in our case this is GL_COLOR_BUFFER_BIT, which means clearing all pixels in the color buffer.
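A small sketch of the clearing step is shown below, wrapped in a hypothetical helper clear_window that could be called at the top of display; the depth-buffer bit is only meaningful if GLUT_DEPTH was also requested at window creation.

#include <GL/glut.h>

void clear_window(void)
{
  glClearColor(0.0f, 0.0f, 0.2f, 1.0f);                 /* background color (dark blue) */
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   /* depth part needs GLUT_DEPTH */
}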

The glColor command sets the current color of the graphic elements that will be drawn by subsequent commands. The RGB color components are passed as arguments. Commands with multiple arguments are usually provided for different data types (integer, float, double), and the same command can exist with a different number of arguments. To distinguish them a suffix is added: for glColor3f the suffix 3f means that the subroutine takes three arguments of type float. The choice of argument type depends on the application requirements; the programmer can freely choose the data type that suits best, without the need for data type conversion. In our example we have two variants of the vertex command with a different number of arguments of the same type: glVertex2f means that we specify just two coordinates, while the third defaults to z=0. The argument types specified by the suffix letter are as follows:

f
float in C language and real*4 in Fortran.
d
double for C and real*8 in Fortran.
i
integer (4 bytes).
s
short integer in C and integer*2 in Fortran.
Besides a fixed number of scalar arguments, there are also functions that take a vector as an argument (a pointer to memory). For these the suffix has the letter v appended. Some interpretations of suffixes:
3f
Three float arguments follow.
3i
Three integer arguments follow.
3fv
A single argument follows: a pointer to a vector of three floats.
The variety of argument forms for the same command can be seen for glVertex, where we find:

  glVertex2d,  glVertex2f,  glVertex2i, glVertex2s,  glVertex3d,  glVertex3f,
  glVertex3i,  glVertex3s,  glVertex4d, glVertex4f,  glVertex4i,  glVertex4s,
  glVertex2dv, glVertex2fv, glVertex2iv, glVertex2sv, glVertex3dv, glVertex3fv,
  glVertex3iv, glVertex3sv, glVertex4dv, glVertex4fv, glVertex4iv, glVertex4sv.

The large number of routines for the same function is performance and language related: it waives the default conversion and thus provides more comprehensive and faster code. For languages with name mangling, such as C++, one can find simpler wrapped OpenGL functions (e.g. just glVertex) that do not affect performance. But as many languages do not have name mangling built into the compiler, such practice is not widespread. Besides specifying a single vertex at a time, one can use glVertexPointer, which points to memory where a number of vertices of the specified type exist. This can save us some looping, but as it essentially copies system memory into the OpenGL hardware engine, performance is not really improved. Both the vector calls and a vertex array are sketched below.
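The sketch below draws the same line first with the vector ("v") calls and then through a client-side vertex array; the helper names draw_with_vectors and draw_with_vertex_array are illustrative.

#include <GL/gl.h>

static const GLfloat line[2][3] = { {0.1f, 0.1f, 0.0f},
                                    {0.8f, 0.8f, 1.0f} };

void draw_with_vectors(void)
{
  glBegin(GL_LINES);
  glVertex3fv(line[0]);                    /* one pointer argument instead of three floats */
  glVertex3fv(line[1]);
  glEnd();
}

void draw_with_vertex_array(void)
{
  glEnableClientState(GL_VERTEX_ARRAY);
  glVertexPointer(3, GL_FLOAT, 0, line);   /* 3 floats per vertex, tightly packed */
  glDrawArrays(GL_LINES, 0, 2);            /* draw 2 vertices as one line */
  glDisableClientState(GL_VERTEX_ARRAY);
}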

Drawing of graphic primitives in OpenGL occurs between the two commands glBegin(primitive type) and glEnd(). The primitive type given as an argument at the beginning specifies how subsequent vertices are used to generate primitives. Instead of giving the primitive type as a number, several predefined constants are provided in the include file to ease readability and portability of OpenGL programs. Before providing a vertex position one can change the OpenGL engine's per-vertex state, such as the current drawing color with glColor3f or the normal with glNormal.

The last command in the display subroutine is glutSwapBuffers(). For applications in which the contents of the display change frequently, it is most appropriate to use double graphics buffers, which are set up by using GLUT_DOUBLE at window initialization. The advantage of this drawing strategy is that while one buffer is used for the current drawing, the other is shown. Drawing thus occurs in the background, and when the buffer is ready for display we simply flip the buffers. It should be noted that such behaviour is system dependent; once upon a time GLUT_SINGLE (without double buffers) with glFlush() at the end was used instead. Nowadays GLUT_DOUBLE is usually used, which helps most with high frame-rate applications such as animation.

Only simple primitives are used within OpenGL, mainly because of the requirement of performance and possible hardware acceleration. There are three types of simple primitives: points, lines, and triangles. Higher-level primitives (like quadrilaterals) can be assembled from simple ones, curves can be approximated by lines, and large surfaces can be tessellated with triangles. For complex surfaces (like NURBS) the GLU library can be used to calculate vertices. The following line primitives are possible (a triangle-strip sketch follows the lists below):

GL_LINES
Pairs of vertices in a vertex stream create line segments.
GL_LINE_STRIP
Vertex stream builds connected lines (polyline).
GL_LINE_LOOP
Same as the polyline above, except that the last vertex is connected by a line back to the first.
Every surface can be assembled from triangles. The triangle primitives are:
GL_TRIANGLES
Each triangle takes three vertices from the vertex stream.
GL_TRIANGLE_STRIP
A strip of triangles. The first triangle needs three vertices; every additional vertex creates a new triangle using the previous two vertices.
GL_TRIANGLE_FAN
Each new vertex forms a triangle with the first vertex and the previous vertex, creating a fan.
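As an illustration, the sketch below builds a unit square from two triangles with GL_TRIANGLE_STRIP; the helper name draw_quad is illustrative.

#include <GL/gl.h>

void draw_quad(void)
{
  glBegin(GL_TRIANGLE_STRIP);
  glVertex2f(0.0f, 0.0f);    /* 1st vertex */
  glVertex2f(1.0f, 0.0f);    /* 2nd vertex */
  glVertex2f(0.0f, 1.0f);    /* 3rd vertex: first triangle complete */
  glVertex2f(1.0f, 1.0f);    /* 4th vertex: second triangle from the previous two and this one */
  glEnd();
}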

Modern OpenGL

Immediate-mode programming with glBegin and glEnd was removed from OpenGL 3.x, since transmitting vertex streams and their attributes (colors, normals, ...) from system memory to the GPU every frame is considered a major performance drawback. Display lists were previously used to save a stream of OpenGL calls, including the vertex data, which was simply replayed at redraw; but this is an inherently sequential operation that blocked parallel vertex processing. Storing the vertex arrays directly on the GPU as an object solves the described problem. Storing vertex arrays on the GPU also means that the manipulations on them to build the model should happen inside the GPU. Legacy OpenGL included many modelling utilities for transforming world coordinates into the viewport; transformations of coordinate systems in 3D space allowed easy manipulation of the model matrix stack with the glPushMatrix and glPopMatrix commands. But similarly to glBegin/glEnd, such manipulations are no longer done outside the GPU. Instead, all operations on vertex data are transferred to the vertex shader, where they can be performed with standard vector math in homogeneous coordinates.
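A minimal sketch of this approach on OpenGL 2.1-class hardware follows: the two line vertices are stored once in a GPU buffer object and drawn through a generic vertex attribute. It assumes an extension loader such as GLEW, a shader program built as in the earlier shader sketch with glBindAttribLocation(program, 0, "position") called before linking, and illustrative helper names.

#include <GL/glew.h>   /* assumption: extension loader exposing the buffer and shader calls */

static const GLfloat line[] = { 0.1f, 0.1f, 0.0f,
                                0.8f, 0.8f, 1.0f };
static GLuint vbo;

void upload_vertices(void)
{
  glGenBuffers(1, &vbo);                 /* create a buffer object on the GPU */
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  glBufferData(GL_ARRAY_BUFFER, sizeof(line), line, GL_STATIC_DRAW);  /* copy once, reuse every frame */
}

void draw_line(GLuint program)           /* program: vertex + fragment shader pair */
{
  glUseProgram(program);
  glBindBuffer(GL_ARRAY_BUFFER, vbo);
  glEnableVertexAttribArray(0);          /* attribute 0 assumed bound to "position" */
  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);  /* 3 floats per vertex */
  glDrawArrays(GL_LINES, 0, 2);          /* vertices already live on the GPU */
  glDisableVertexAttribArray(0);
}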
