Concurrent Programming in the Design of a 3D Game Engine

[02.16.10]
- Jarrett Tierney
The console and computer game market is a growing industry, where keeping up with the latest technological trends is crucial to making a game a success. With the advent of modern multi-core central processing units (CPUs), parallel programming has become a necessity, and it is frequently discussed as an optimization and performance-boosting technique in the development of game engines. Parallel programming shifts software from tasks that run one after another to tasks that run at the same time.

However, concurrency is not without its issues and challenges. Concurrent programming challenges both the way programmers think and the complexity of the programs they write. It also goes by no single unified name: the practice is variously referred to as parallel programming, multi-threaded programming, and concurrent programming.

In this paper we will discuss the application of concurrent programming to the design of a modern three-dimensional game engine. Before we can understand the work entailed in designing and building a game engine with parallelism, however, we must first understand the concept of concurrency along with its benefits and issues. We will also discuss how concurrency can address pre-existing bottlenecks in game engine design before moving on to the design itself.

    Introduction to concurrency

Before the arrival of hyper-threading[G1] and multi-core central processing units, programs were written almost exclusively sequentially: tasks ran one at a time, one after another. Sequential programming is still prevalent in modern programs and processors; however, parallel programming is a necessity in order to utilize multiple processors or multiple cores. In concurrently programmed software, two or more tasks run simultaneously. This shift to simultaneous tasks forces a major change in the organization of a program: the programmer now has to manage multiple threads and protect the variables they access. Protecting variables is just one of the layers of complexity concurrency presents to programmers.

Parallel programming presents programmers with challenges that were not as much of an issue when sequential programming was the main practice. One such challenge is an entirely new set of properties that [R4] refers to as system aspectual properties. These include structures and concerns such as mutual exclusion and synchronization.

These properties go along with the challenge of protecting variables shared among multiple threads. When tasks run simultaneously rather than sequentially, protecting the variables and memory that two simultaneous tasks access becomes crucial. Protecting variables and managing the other system aspectual properties contribute to the perceived complexity of concurrent programming, and programmers are frequently unwilling to commit to writing their software concurrently because of it.

As mentioned in [R4], concurrent software tends to become more complex as it grows: the larger the program, the more tasks run simultaneously, and the more aware you must be of problems such as deadlock. Deadlock[G2] has been an issue in computer programming ever since the first multiprocessing computers. Originally it concerned only processes waiting on a processor; now, deadlock can also occur between two threads.

Threads guard against deadlock through the disciplined use of separate structures such as mutexes[G3]. Having threads acquire mutexes and variable locks in a consistent order prevents an infinite loop of threads waiting on each other to continue. Mutexes, locks, and other system aspectual structures are all part of an aspect-oriented framework proposed in [R4] that is geared toward simplifying the task of parallel programming.

    Benefits and Issues with Concurrency

Concurrency presents multiple benefits to programmers. The most apparent benefits of using parallelism are those tied to the hardware of modern computers. Multi-core processors are designed to allow the simultaneous execution of both processes and threads. However, hardware benefits for concurrency existed even before multi-core processors.

Multiple-processor server machines and Pentium 4-driven computers both had the capability of running multiple threads at once. Even though Pentium 4 processors were not multi-core, they emulated a multi-core processor using a technology called Hyper-Threading. So how does a multi-core processor handle a sequential program versus a threaded program? When a sequential, single-threaded program enters the processor, it is usually recognized as one process and runs on only one core.

When a multi-threaded program enters the processor, its threads are spread across the available cores, with the scheduler trying to create an even split of work between them. Concurrency can also be used to separate tasks between different hardware devices, such as the CPU and the graphics processing unit (GPU). While there are plenty of hardware benefits to using concurrency, purely software benefits depend largely on the platform your game engine is being developed for.

If you are developing for the Xbox 360 or PlayStation 3 game consoles, parallel programming is essentially a requirement, since their software development kits are geared toward splitting tasks between cores (Xbox 360) or SPEs[G5] (PS3). The software benefits of programming concurrently thus tie into whether the console or operating system you are targeting has native support for threading. Most modern operating systems now support threading either natively or through an external library.

However, parallelism is not without its faults. When writing any kind of software concurrently, the management of threads becomes a crucial issue, largely because of the difficulty of debugging concurrent software. Unless you have a thread scheduler that relaunches threads that crash, it can be very difficult to determine the source of a crash or which thread caused it. This challenges the programmer to place clear debug statements throughout the code.

Another issue with concurrency, as mentioned above, is deadlock. Deadlock generally occurs when, say, thread A waits on a signal from thread B while thread B is waiting on a signal from thread A; the two threads then wait on each other forever. In a game this can show up as a small part of the game freezing, or as the whole game freezing and refusing to respond. This again presents a debugging challenge: programmers need clear debug statements that let them tell which threads are causing the deadlock so they can remedy the problem.

Another major issue with concurrency is memory protection. Programmers now have to be even more careful about what accesses a given memory address at a given time. If multiple threads change the data at the same memory address, it becomes crucial that system aspectual properties such as mutexes and locks are used to make sure the data doesn't get scrambled.
