3000 Cray-1 supercomputers in your PC
If you are a software developer, the thought of half a teraflop of performance in your PC for around $250 starts to get the creative juices flowing. You start thinking, "What can I do with all that power?" and "How do I program it to do what I want?" NVIDIA and AMD are giving us tens, hundreds, and soon thousands of processors that will execute in parallel ... now what?!
The Serial Universe We have spent most of our careers writing software that reads like a book. Line by line, the code tells the processor what to do, and the processor happily executes it serially. When the program is complete, the processor obediently waits for the next program to run. Operating systems (OSes) such as Windows, Linux, and Mac OS have helped us run many programs on one processor. Programs that need input or output from keyboards, printers, disk drives, or other programs are scheduled by the operating system to run on the computer's processor, which lets many programs and users share that one processor in a fair and efficient way. Support for multiple processors in the OS has been increasing, but to get the highest performance out of an application, it must be written specifically to take advantage of the available processors and memory caches. Intel and AMD have built incredible serial processors that execute the majority of the applications on a PC today, but they are hitting performance barriers in clock speed and are adding cores to compensate. There is a new methodology on the horizon...
The Parallel Universe Parallel programming is not a new idea. Ever since someone connected two computers together, people have been trying to run applications on multiple processors. What has changed, however, is that it is now possible to put 128 processors on a single chip, and these devices are on their way to becoming ubiquitous. The devices contain many simple, general-purpose processors that can execute instructions in parallel as fairly independent entities. NVIDIA has also provided an excellent language for programming them all, called CUDA (Compute Unified Device Architecture). It allows us to write a "program" that will run in parallel on as many processors as are available in the GPU. The language is very similar to the familiar C language, with some simple extensions. CUDA lets us solve problems using grids of one-, two-, or three-dimensional groups of threads (CUDA blocks). Each group of threads can share a block of memory and thus cooperate on a subset of the task at hand. Like an operating system, the device also includes a scheduler that assigns a set of physical processors to run each CUDA block in parallel -- an excellent level of abstraction.
The Patio Problem I often get asked, "What is it like programming the GPU, and how is it different from the CPU?" I find I am always telling this story to explain it. I hope it helps! :) Around the time of founding Elemental, my wife and I decided we would build a patio in our front yard. It seemed simple enough: cut out all the grass, level the patio area, dig a French drain around the patio, lay gravel, lay sand, lay patio blocks (all 3000 of them), put in a simple sprinkler system, tamp it all down, plant the plants, and we would be having margaritas in the sun in a couple of weekends! Well, three months later, I could say I was done. Ugh. Around this time, Brian and I had been writing the MPEG2 decoder in CUDA on the NVIDIA 8800GTX, which has 128 processors. Thanks to the standard long hours and high concentration of a startup, everywhere I looked I would see blocks and grids and pixels -- in my sleep, in buildings, and especially… in my patio. I realized there were very direct parallels between building the MPEG2 decoder and building that patio. The little springtime project that I will now refer to as the "Patio Problem" nicely illustrates the issues we've faced in developing codecs on the GPU. As I was cursing the "Patio Project" on the fourth weekend of spring, I thought to myself, "How would I do this if I had 128 friends helping me?" I couldn't just turn them loose on the project. Even if each person knew the high-level goal and plan for the patio, great coordination (and a lot of tasty beverages) would be required to effectively get 128 people to finish my patio faster than my wife and I could do it. The first step of cutting out the grass would mean putting each person on one little square of the grass and having them cut it out. This would be very efficient, and would get the grass out very quickly.
Similarly, the leveling of the ground and the digging of the French drain could be done effectively as long as each worker knew how far down to dig. But what about getting the grass and dirt out and the gravel in? The workers would need to move material in and out of the patio area in smaller groups in order not to bump into each other. Then comes the tricky part: how do all of those people lay the bricks in the right pattern efficiently? Do I have each of them grab one brick and lay it down in the correct spot, which may depend on previously laid bricks? Do I have each lay down a line of bricks, or have small groups do a line, or have two groups start on opposite ends and work toward each other? Building the patio and building codec technology (and presumably many other applications) on a GPU are very similar. Some parts of a codec are easily solved using many parallel processors, but some parts are more difficult and require serialization of groups of processors. The NVIDIA devices are incredibly flexible in what each processor can do, which allows us to come up with clever ways to solve the codec problems effectively. Just because we have a supercomputer on a chip doesn't mean that it solves the problems for us, just as having 128 friends offer to help on the "Patio Project" won't automatically get it done faster. Now, if only I had 128 friends -- think what I could accomplish!
The Shift This fundamental shift in thinking is something software developers will need to make in order to solve problems using parallel processors. There will be advances in tools and parallel languages, but there is no substitute for understanding this method of solving problems. Some of the best parallel software writers may very well be tucked away in landscape companies, general building contractors, and assembly line management. If you are someone who would like to join the shift in thinking, let us know!
The Elemental blog seems to attract an inordinate amount of spam, but it also gets the occasional reasonable question from folks. One of the more confusing aspects of video compression seems to be inter- vs. intraframe compression. Here's a brief lesson on both. Video in its simplest sense can be thought of as a series of still images (hence, in the early days of film, they were called "moving pictures" until some marketing genius compressed the words and came up with "movies"). When these still images are flipped past the human eye fast enough, the eye interprets the frames as motion instead of as unique still images. From a compression perspective, however, they are still just a series of images displayed in a specified order. Video compression then focuses on how to take these contiguous frames of video and minimize the amount of information needed to code each picture. The natural first step is to compress each individual image. This is known as intraframe compression, and it uses only information contained in the current frame to minimize the image size. As an example, JPEG (the standard file format used for images on the Internet) uses the discrete cosine transform to separate out and discard an image's high-frequency components, which are generally not perceptible to the human psychovisual system; by throwing this information away, a still image can be coded with much less data. This idea has been refined over generations, with early still-image standards like GIF and JPEG laying the foundation for more complex video standards like Motion JPEG and DV, the standard widely used in MiniDV video cameras. As demand for better picture quality at lower bit rates increased, however, the compression achievable by intra-only encoding became insufficient. Temporal compression, or interframe encoding, was introduced in the MPEG-1 standard, and has since been refined in the MPEG-2, VC-1 and H.264 codecs.
These codecs include intraframe-coded images (I-frames) as described in the preceding paragraph, but they also contain predictive-coded frames (P-frames) and bidirectionally-predictive-coded frames (B-frames). P-frames rely on images that were transmitted earlier in the sequence, and use data in these frames -- with minor changes -- to create the current frame. B-frames are similar, but can use data from images both earlier and later in the video sequence. There can be many P- and B-frames between each I-frame, and since most video sequences contain similar images for long stretches of time, dramatically higher compression can be achieved. The number of frames from one I-frame to the next is referred to as the Group of Pictures (GOP) length. The benefit of intraframe-only compression is that it is generally less computationally expensive to process, since it doesn't require multiple frames to be stored in memory and accessed concurrently. There is also less latency in the encoding process, so compressed images are produced much more quickly. Hence, historically, digital video cameras have captured intraframe-only formats (DV, DV50, DVCPRO HD, AVC-Intra). However, new generations of consumer camcorders with limited storage capacity are relying on interframe-encoded formats like HDV (a long-GOP version of MPEG-2) and AVCHD (a long-GOP version of H.264). These formats allow high-definition video to be stored on the same MiniDV tapes that previously could only hold standard-definition video. Editing these long-GOP formats is incredibly computationally intensive, since for each image displayed, many temporally adjacent frames may need to be decoded first. Hence the need for GPU-accelerated decoding and Elemental's RapiHD™ software!
Elemental's RapiHD™ Video Platform is powered by fast, modern graphics processing units (GPUs). This is different from most computer software, which runs on the central processing unit (CPU). We often get asked why we selected the GPU as our hardware platform. The basic answer is straightforward: it is currently the most cost-efficient architecture for high-performance computing. Furthermore, there is a confluence of three major technology trends that have come together to make the time just right to use GPUs for more than just 3D games.
- Unsurpassed performance and an unlimited performance trajectory. Semiconductor manufacturers have run into painful physics limitations that have reduced their ability to gain performance through faster clock rates. For example, Intel's top-of-the-line CPUs have run right around 3 GHz for several years now. To increase performance, Intel has been forced to add multiple cores -- and Intel has the best manufacturing facilities on the planet. The major GPU makers have had to contend with this problem for longer, and thus migrated to a stream processor architecture several years ago. Today, the highest-performance NVIDIA GPUs have 128 stream processors; ATI tops out at 320 stream processors.
- At long last, GPUs have gained more programmability. NVIDIA's release of the CUDA platform has created a way to take advantage of GPU horsepower with high-level C code. AMD/ATI has a similar initiative underway called CTM (Close to the Metal), although it is targeted at a significantly lower level than CUDA. Intel's Larrabee project, a high-performance discrete GPU that will go head-to-head with NVIDIA and AMD, is due out in 2009 and will reportedly contain a multitude of x86-based cores -- allowing the use of standard development environments to program the chip. All in all, the recent addition of reasonable programming models for GPUs makes them a good candidate for high-performance applications. At the same time, it doesn't make the task of finding talented developers that can efficiently implement parallel code any easier. Elemental is always hiring parallel processing superstars!
- Dramatically increased system bandwidth. In the "old days" of PCI and AGP, there was a very fast downlink from the CPU to the GPU. That is, the CPU could deliver data for the GPU to process and then output to the video display. However, the bandwidth for the reverse path -- from the GPU back to the CPU -- was extremely low. The development of PCI Express changed this equation. Now the bandwidth between the GPU and CPU is symmetric, allowing data to be sent to the GPU for analysis and then returned to the host system when processing is complete. This is critical for our application: frames of high-definition video are huge and require a tremendous amount of bandwidth to get schlepped back and forth quickly!