Rumors of NVIDIA's Next GPU

NVIDIA's next-generation GPU design, the G300, may turn out to be the biggest architectural leap the graphics chip maker has ever attempted. If the early rumors are true, NVIDIA has decided to move the architecture a step closer to the CPU and make GPU computing even more compelling for HPC.
Publicly, the company has been tight-lipped about what the upcoming architecture will look like, but that doesn't prevent GPU-obsessed journalists from speculating. Some of this speculation may be based on wishful thinking, but it looks like there may be a few loose-lipped NVIDIANs out there giving us a reasonably accurate peek at the silicon. It's also likely that sources at Taiwan Semiconductor Manufacturing Company (TSMC), NVIDIA's GPU manufacturing partner, are feeding the rumor mill.

By the way, the G300 is often referred to as the GT300, but the latter refers only to the high-end Tesla version of the architecture, the one the supercomputing crowd would be most interested in. (Hat tip to Rick Hodgin at geek.com for clearing that up.) For our purposes, I'll just refer to it as the GT300 since HPC users are going to be mostly interested in the top-of-the-line parts anyway.

What follows is speculation heaped on top of speculation, so be warned that none of this may be true. But it makes for fun reading.

Back in April, Theo Valich in Bright Side of News reported that the new GT300 is going to have a lot more power and flexibility than the current crop of GPUs. Writes Valich:

GT300 isn't the architecture that was envisioned by nVidia's Chief Architect, former Stanford professor Bill Dally, but this architecture will give you a pretty good idea why Bill told Intel to take a hike when the larger chip giant from Santa Clara offered him a job on the Larrabee project.

According to Valich's sources, the GT300 will offer up to 512 cores, up from 240 cores in NVIDIA's current high-end GPU. Since the new chips will be built on the 40nm process node, NVIDIA could also crank up the clock. The current Tesla GPUs run at 1.3-1.4 GHz and deliver about 1 teraflop of single precision performance, but less than 100 gigaflops in double precision. Valich speculates that a 2 GHz clock could up that to 3 teraflops of single precision performance, and, because of other architectural changes, double precision performance would get an even larger boost.
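
The arithmetic behind these figures is easy to check. Here's a back-of-the-envelope sketch in Python, assuming the 3-flops-per-core-per-clock counting convention (a multiply-add plus an extra multiply) used to rate the current parts -- remember, the core counts and clocks are rumors, not specs:

    # Theoretical peak single-precision throughput, in gigaflops.
    # Assumes 3 flops per core per clock (MAD + MUL), the convention
    # used to rate the current 240-core generation.
    def peak_gflops(cores, clock_ghz, flops_per_clock=3):
        return cores * clock_ghz * flops_per_clock

    print(peak_gflops(240, 1.4))  # current Tesla: ~1008 GFLOPS, about 1 teraflop
    print(peak_gflops(512, 2.0))  # rumored GT300: 3072 GFLOPS, about 3 teraflops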

In a later post, Valich writes that the upcoming GPU will sport a 512-bit interface connected to GDDR5 memory. If true, he says, "we are looking at memory bandwidth of 256GB/s per single GPU."
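
That number is consistent with the rumored bus width if you assume GDDR5 running at an effective 4 Gbps per pin -- my assumption, not a leaked spec. A quick check in Python:

    # Bandwidth = (bus width in bytes) x (effective per-pin data rate).
    def bandwidth_gbs(bus_width_bits, gbps_per_pin):
        return (bus_width_bits / 8) * gbps_per_pin

    print(bandwidth_gbs(512, 4.0))  # 256.0 GB/s, matching Valich's figure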

More importantly, though, NVIDIA is said to be moving from the traditional SIMD (single instruction, multiple data) GPU computing model to MIMD (multiple instruction, multiple data), or at least something MIMD-like. As the name suggests, MIMD means you can run different instruction streams on different processing units in parallel. It offers a much more flexible way of doing all sorts of vector computing, and is a standard way to do technical programming on SMP machines and clusters. Presumably CUDA will incorporate MIMD extensions to support the new hardware. MIMD also happens to be the execution model supported by Intel's upcoming Larrabee chip.
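
To make the distinction concrete, here is a toy sketch in Python -- purely illustrative, not a reflection of actual CUDA syntax or NVIDIA hardware -- of why data-dependent branches are costly under SIMD but cheap under MIMD:

    # Toy model of SIMD vs. MIMD execution. Illustrative only.
    data = [1, -2, 3, -4]

    # SIMD: one instruction stream drives every lane in lockstep, so a
    # branch is handled by predication -- all lanes step through both
    # sides and keep only the masked results.
    def simd_abs(values):
        mask = [v < 0 for v in values]   # every lane evaluates the predicate
        negated = [-v for v in values]   # every lane executes the taken path
        return [n if m else v for m, n, v in zip(mask, negated, values)]

    # MIMD: each processing unit runs its own instruction stream, so a
    # lane takes its branch privately without stalling its neighbors.
    def mimd_abs(v):
        return -v if v < 0 else v

    print(simd_abs(data))               # [1, 2, 3, 4]
    print([mimd_abs(v) for v in data])  # [1, 2, 3, 4]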

In fact, both the GT300 and Larrabee may end up dropping into the market at the same time -- sometime in the first half of 2010. But, as I've reported before, Intel has said it is not targeting the HPC market with Larrabee, at least not for next year. NVIDIA, on the other hand, will almost certainly be pushing GT300 silicon into its HPC Tesla products as soon as possible.

There was some speculation that the GT300 would hit the streets this year, but reports of trouble with TSMC's 40nm manufacturing technology may have slowed NVIDIA's plans. Also keep in mind that the new architecture has to drag along a growing list of programming standards -- CUDA, DirectX 11, OpenGL 3.1 and OpenCL -- so getting the new chips to satisfy everyone is no small feat.

If the speculation about the GT300 is basically true, NVIDIA will significantly expand its commitment to the GPU computing market. The Inquirer's NVIDIA curmudgeon, Charlie Demerjian, thinks too much so. He writes:

Rather than go lean and mean for GT300, possibly with a multi-die strategy like ATI, Nvidia is going for bigger and less areally efficient. They are giving up GPU performance to chase a market that doesn't exist, but was a nice fantasy three years ago.

Demerjian's rant on the GT300 is colored by his focus on traditional graphics apps and his obvious antipathy toward NVIDIA, but the point he makes about NVIDIA's devotion to GPU computing is valid enough.

Let's face it, if Intel fails to connect with Larrabee, the company will just write it off and keep selling its gazillion other flavors of x86. AMD has a more conservative GPU computing strategy, so it has less to lose if the market fizzles. NVIDIA is the one that has really stuck its neck out. The GT300 just sticks it out a little further.

-----------------------------
By HPCwire staff
Source: HPCwire

Copyright © 1994-2009 Tabor Communications, Inc. All Rights Reserved.
