From some time in the late eighties to perhaps mid-2005, the Unix approach didn't change in any fundamental way: you bought the biggest gear you could afford, added the fastest disks you could find, set up some form of failover and data redundancy, and prayed your boss wouldn't insist that you share the root password with idiots.
The small computer approach did change quite a lot. Specifically, when Microsoft brought VMS ideas and technologies to Intel, it tried to apply 1980s VMS clustering and software overlay ideas to turn the rackmount into a kind of synthetic SMP for Wintel; set in place the foundations for what we now call database sharding (albeit based on separation by table, not by index value, at the time); and stripped the whole "business intelligence" function out of production systems, handling it instead via pre-processing and query management on additional server racks.
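To make that table-versus-index-value distinction concrete, here's a rough Python sketch - mine, not anything Microsoft shipped, and with entirely made-up server names - contrasting the early style, where whole tables lived on separate servers, with modern sharding, where one table's rows are spread across servers by key:

```python
import hashlib

# Early 1990s style: each table lives, whole, on its own server.
TABLE_MAP = {
    "orders":    "db-server-1",
    "customers": "db-server-2",
    "inventory": "db-server-3",
}

def route_by_table(table: str) -> str:
    """Return the server holding the entire table (separation by table)."""
    return TABLE_MAP[table]

# Modern style: rows of a single table are distributed by key hash.
SHARDS = ["db-server-1", "db-server-2", "db-server-3"]

def route_by_key(table: str, key: str) -> str:
    """Return the shard holding this particular row (separation by index value)."""
    digest = hashlib.md5(f"{table}:{key}".encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

if __name__ == "__main__":
    print(route_by_table("orders"))           # the whole orders table on one box
    print(route_by_key("orders", "A-10042"))  # a single order row located by hash
```

Either way the client has stopped talking to one big machine and started talking to a routing layer in front of several smaller ones, which is exactly the complication the next paragraph describes.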
That got complicated quickly: by the time Windows 2003/XP came out, the typical corporate PC accessed multiple servers, and the servers themselves accessed multiple other servers in the process of assembling query responses.
During that same period, roughly from 1987 through 2003, high-performance computing as practiced in the physical sciences progressed from parallelism obtained through array processors to parallelism obtained through arrays of processors - from the original CDC/Cray approach, in which a single computational process is broken into parallel vector streams wherever possible, to that of the Linux grid, in which the primary process is split into many independent processes whose results are recombined as necessary to form the final output.
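If you want that shift in miniature, here's a small Python sketch - my illustration, not anyone's production HPC code - showing the same sum computed once as a single vectorized operation and once by splitting the work into independent chunks, farming them out to separate processes, and recombining the results:

```python
import numpy as np
from multiprocessing import Pool

def chunk_sum(chunk):
    """Work unit for the grid-style approach: each process sums its own slice."""
    return float(np.sum(chunk))

if __name__ == "__main__":
    data = np.arange(1_000_000, dtype=np.float64)

    # Vector style: one process; the parallelism lives inside the array operation.
    vector_result = float(np.sum(data))

    # Grid style: split into independent pieces, run them separately, recombine.
    chunks = np.array_split(data, 8)
    with Pool(processes=4) as pool:
        grid_result = sum(pool.map(chunk_sum, chunks))

    assert abs(vector_result - grid_result) < 1e-6
    print(vector_result, grid_result)
```

The first form wants one machine with fast internal data paths; the second is happy with many cheap machines and a network, which is why it won out on x86.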
And, of course, other processes were at work during that period affecting things too - in particular:
- the data processing people who had found their empires trojaned by the PC in the 80s staged a comeback to take over managing the larger Wintel data centers - and brought virtualization back as their preferred means of achieving their traditional utilization goals; and,
- the need to share, back up, and restore client-stored data drove the evolution of Novell's 1979 shared disk device into the PC storage area network.
Overall, during that period, commercial Unix got better and machines got faster, but nothing really fundamental changed, largely because none of the major vendors got solidly behind the second-generation Unix ideas embedded in the Bell Labs Plan 9 OS - a mid-80s technology aimed at taking Unix beyond the single-machine focus to allow one OS instance to cover potentially hundreds of physically separated processing and data storage components.
In about 2004/5, however, things changed dramatically as two things happened:
- Sun released Solaris 10 - its third Unix generation, after SunOS and Solaris - and went public with its CMT/Coolthreads (SMP-on-a-chip) ideas; and,
- IBM, Sony, and Toshiba got their Cell engine, a Linux grid on a chip, more or less working and announced initial programming tools for it.
Solaris 10 isn't Plan 9, but in many ways it represents a reinvention, on commercial foundations, of the same core ideas - think of it as a Stevensesque blue guitar thing: different in every detail and yet the same.
Cell isn't exactly a Linux grid either, but it offers the same kind of scalability without incurring either x86 costs or the performance limitations imposed by the distances data has to traverse in a rack-style grid computer.
Overall, I don't think 2009 will be a year of significant technical change in IT, but I think it will be the year both of these technologies achieve a "hockey stick" shift in their adoption curves: Cell, despite the programming hurdle, because it's an order of magnitude faster and cheaper than a Linux grid; and Solaris on SPARC/CMT because that combination lets IT managers greatly simplify data center operations by eliminating SANs and resurrecting much of the one-big-machine model from the 1980s.
And the other shoe, of course, is that most of the PC industry's accommodations to small computers - in fact, pretty much the whole multi-tier client-server thing - will have to see that same hockey stick shift in adoption, but in the opposite direction.
By Paul Murphy
Source: ZDNet
Paul Murphy (a pseudonym) is an IT consultant specializing in Unix and related technologies. See his full profile and disclosure of his industry affiliations.