Hello, Dally: Nvidia Scientist Breaks Silence, Criticizes Intel

My hunger strike has ended. Bill Dally is free.
Back in January, Nvidia hired Mr. Dally as its new chief scientist. The move caught a lot of people’s attention, because Mr. Dally served as the chairman of Stanford University’s impressive computer science department and had done pioneering work in the semiconductor and software fields.

Nvidia, however, has kept Mr. Dally under lock and key since his hiring, refusing to permit him to talk to the news media. Until now.

Earlier this week, Mr. Dally and I had a chat about his decision to bolt the comfy confines of academia for Nvidia’s life-or-death struggle with Intel and Advanced Micro Devices. (Mr. Dally continues to shepherd about 12 Stanford graduate students, although he’s full-time at Nvidia.)

“It seemed to me that now is the time you want to be out there bringing products to the marketplace rather than writing papers,” Mr. Dally said.

Mr. Dally has bought into the notion — often espoused by Nvidia’s chief executive, Jen-Hsun Huang — that we’re on the cusp of a computing revolution.

Nvidia likes to say that its graphics chips will move from gaming machines and engineers’ workstations into all types of computers and servers. This assumption hinges on graphics chips’ ability to crunch through complex software faster than mainstream chips.

Today’s graphics chips rely on tens and even hundreds of tiny cores that can all work together on a specific software job at the same time. They divvy up the work and then assemble the result, a mode of computing known as parallel processing.

Chips like Intel’s Core products tend to throw just a couple of very large, powerful processing cores at software. These cores are great at handling the majority of software, which runs faster when it travels through a beefier engine.
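To make the contrast concrete, here is a rough sketch in CUDA, Nvidia’s programming environment for these chips. It is my own illustration rather than anything Nvidia or Mr. Dally described: adding two big arrays by giving each of thousands of lightweight GPU threads a single element to handle, where a conventional chip would churn through the whole job in one loop on one big core.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Every GPU thread grabs one element of the arrays, so the whole job is
// split into many tiny tasks that run at the same time.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's slice
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;                   // roughly a million elements
    size_t bytes = n * sizeof(float);

    // On a conventional chip, this work would be one serial loop on a big core.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Copy the data to the graphics chip and spread it across 256-thread blocks.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    // "Assemble the result": pull the finished answers back to the CPU.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", hc[0]);          // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The per-element work here is trivial; the promised speed comes entirely from how many of those tiny tasks the chip can run at once.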

The problem, however, is that Intel and A.M.D. have found it more difficult to make their big engines run faster. Instead, most of the major chip makers have told software writers that if they want anything close to historical performance improvements, they need to start chopping their work into smaller tasks that can be spread across lots of cores.

Under such a model, chips with more low-power cores will deliver the best performance, and graphics chips have a head start on multicore design.

Mr. Dally envisions a world where standard chips sit alongside the graphics chips in the same computer and split up work, something the industry has branded as heterogeneous computing. A couple of the boring, old standard chips will handle the grunt work, while lots of the graphics chips will crank away on the more demanding jobs.
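Extending the sketch above, that division of labor looks roughly like this. Again, the names and workload are my own made-up illustration: the host code on the conventional CPU handles the serial setup and control flow, then hands the demanding, data-parallel crunching to the graphics chip and collects the answer.

#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

// The demanding, data-parallel piece runs across the graphics chip's many cores.
__global__ void transform(float *data, float scale, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = sqrtf(data[i]) * scale;
}

int main(void) {
    const int n = 1 << 22;
    size_t bytes = n * sizeof(float);
    float *host = (float *)malloc(bytes);

    // Grunt work: serial setup, parameter choices and control flow stay on
    // the couple of big, conventional CPU cores.
    float scale = 0.5f;
    for (int i = 0; i < n; i++) host[i] = (float)(i % 100);

    // Demanding work: ship the big array to the graphics chip, let its cores
    // crank away on it, then bring the result back to the CPU.
    float *dev;
    cudaMalloc((void **)&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    transform<<<(n + 255) / 256, 256>>>(dev, scale, n);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);

    printf("host[4] = %.1f\n", host[4]);     // sqrt(4) * 0.5 = 1.0
    cudaFree(dev);
    free(host);
    return 0;
}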

“Everything from people’s smartphones to laptop computers to servers will be filled with heterogeneous processor chips,” Mr. Dally said. “All of the value will be delivered by our processors.”

Even today, labeling all of Nvidia’s products as graphics chips is limiting. As I wrote last year, a number of large businesses have taken Nvidia’s chips and applied them to software jobs in the oil and gas, medical and manufacturing fields. On some jobs, software gets crunched 100 to 200 times faster than with standard processors.

A.M.D. has embraced the heterogeneous idea as well, although it trails Nvidia when it comes to pushing graphics chips into new markets. Intel also plans to release a radical multicore chip either this year or next.

But Mr. Dally argues that Intel isn’t going radical enough with its design, code-named Larrabee, which will still rely on the company’s beloved x86 architecture.

“Intel’s chip is lugging along this x86 instruction set, and there is a tax you have to pay for that,” Mr. Dally said.

Intel says that staying with x86 makes life easier for software developers familiar with the architecture. Mr. Dally rejects this, arguing that Intel will have to give over valuable real estate on the chip to cater to the x86 instructions.

“I think their argument is mostly a marketing thing,” Mr. Dally said.

Mr. Dally considered working at Intel but decided against going somewhere with what he calls a “denial architecture.”

“Intel just didn’t seem like a place where I could effect very much change,” he said. “It’s so large and bureaucratic.”

While Intel can afford to throw hundreds or thousands of engineers at futuristic problems, Mr. Dally will lead a team of about two dozen people. He’s trying to look 10 years out and predict areas where the company should build up its expertise.

Mr. Dally is accustomed to such work, having spent years in universities trying to advance the state of parallel processing far beyond the mainstream.

-----------------------------
By Ashlee Vance
Source: The New York Times

Copyright 2009 The New York Times Company.
