Q&A: Turing Award winner Barbara Liskov

Barbara Liskov, a professor at the Massachusetts Institute of Technology, was announced last month as the winner of the 2008 ACM A.M. Turing Award for lasting and major technical contributions to the computing community.
The Turing Award, which is named after British mathematician Alan Turing, has been awarded by the Association for Computing Machinery every year since 1966. Past winners include Vint Cerf, Google's Internet evangelist, who is commonly called the "father of the Internet."

Liskov, who heads up the programming methodology group in the Computer Science and Artificial Intelligence Laboratory at MIT--where she has conducted research and has been a professor since 1972--is only the second woman to receive the prize. IBM's Fran Allen won the 2006 award.

Silicon.com recently spoke to Liskov about her work and current research interests, and heard her views on a variety of IT issues--from data breaches and cloud computing to the relative lack of women in the IT industry.

Can you explain the significance of the work that won you the Turing prize? What impact has it had on computing?
Liskov: I won the prize for my work in developing techniques that...make it easier to build big software systems. The problem in software is there are no obvious boundaries to contend with. For example, if you build some kind of electrical device you do that with discrete components--wires and boxes--and this naturally leads you to think about a design in which you go for pieces that may have very complicated insides, but on the outside they have some sort of simple interface.

A good example here is a clock where there's a kind of standard interface but on the inside there may be a very complicated way to implement it. As far as the user's concerned, all you have to think about is, "What is that interface about?" and "What do I get to do with it?" You don't have to worry about the implementation details and you can use one clock or another clock and there really isn't--as far as you're concerned--any difference between them.

In software, though, there were no obvious boundaries like that--and the work that I did was to develop a way of putting complicated software systems into modules, where each module presented to its users a relatively simple interface and on the inside there could be a complicated implementation; the user didn't have to worry about that. And furthermore, if that implementation changed, the user was unaffected by that change.

So this is the idea of abstract data types or data abstraction. What inspired it was some early work I did in building a system where along the way I began to think about, "How am I ordering this system and what is a good way to do it?" Prior to my work the only way that people modularized software was in terms of what were called subroutines, so you could think about, "I want a procedure that's going to sort a bunch of items into some increasing order." You could use the sort procedure to accomplish that and you didn't have to worry about the algorithm that was used inside.

What I did was to extend the idea to abstract from details of how data were represented. That gave us a much more powerful abstraction mechanism where the modules could be much bigger and a lot more information could be hidden.
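
To make the idea concrete, here is a minimal sketch of a data abstraction in Java, one of the mainstream languages the interview mentions below. The IntSet name, its operations and its list-based representation are illustrative choices rather than code from Liskov's own work; the point is only that callers see a small interface while the representation stays hidden.

```java
import java.util.ArrayList;
import java.util.List;

// A small data abstraction: users program against the operations below
// and never see the representation chosen inside the class.
public class IntSet {
    private final List<Integer> elements = new ArrayList<>(); // hidden representation

    // Add x to the set if it is not already present.
    public void insert(int x) {
        if (!elements.contains(x)) {
            elements.add(x);
        }
    }

    // Remove x from the set if it is present.
    public void remove(int x) {
        elements.remove(Integer.valueOf(x));
    }

    // Report whether x is in the set.
    public boolean isIn(int x) {
        return elements.contains(x);
    }

    // Number of distinct elements in the set.
    public int size() {
        return elements.size();
    }
}
```

A caller only ever writes things like set.insert(3) or set.isIn(3); whether the elements are kept in a list, a sorted array or a hash table is invisible, so the representation can later be swapped for a faster one without changing any of the code that uses the set--exactly the property described above.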

I then developed a programming language that included this idea. I did that for two reasons: one was to make sure that I had defined everything precisely because a programming language eventually turns into code that runs on a machine so it has to be very well-defined; and then additionally because programmers write programs in programming languages and so I thought it would be a good vehicle for communicating the idea so they would really understand it.

That programming language, which was called CLU, never made it out beyond academic circles. It was definitely an implemented language. We used it at MIT and it was used at a number of other academic institutions but it didn't make it into commercial use. What happened instead was that the ideas in CLU moved into mainstream languages because they were accepted by the community as being important ideas. So they moved into Ada, which was a language that was developed by the Department of Defense. They moved into C++. Later they moved into Java and now into C#.

(My ideas) are now widely accepted and all programs are built using these techniques.

What inspired you to get into computing?
I got in sort of by accident. When I graduated from college I had a degree in math and I decided I wasn't interested in going to graduate school right away. So I decided to get a job and I couldn't find an interesting job as a mathematician--I was able to get a job where essentially they wanted me to plot graphs--but I got an interesting job as a programmer. Then I discovered that I had a real bent for computation and so that's what got me into the field.

How does one write good software? Is there a basic set of principles software engineers should follow?
It's not easy to write good software. It's not difficult to write tiny programs, but writing a big piece of software is another matter--and, after all, in our everyday life nowadays we deal with very large pieces of software. For example, behind the scenes at Google, immense quantities of software (make) that system run.

There are really two parts to it--one of them is understanding the basic techniques that you can use, so this idea of data abstraction and modularity is very important. The other part of it is more like a craft, because you have to think about what's the right way--even when you have the right idea of what the building blocks should be, there's still huge flexibility in how you decide to put the whole system together.

It's a craft and some people can learn it, and it has a lot to do with valuing simplicity over complexity. Many people do have a tendency to make things more complicated than they need to be.

There's another problem that people in the real world have to cope with and that's called feature creep. This means there's a sort of minimal set of features that a system needs to have in order to be useful, but lots and lots of other things that it would be nice to have always get piled on. It's very, very difficult when you're in a commercial environment to stand up against this kind of pressure. The more stuff you throw into the system, the more complicated it gets and the more likely it is that it isn't going to work properly.

What are your current areas of research?
I work in what's called...distributed computing, which has to do with applications that have parts that run on many different machines. This is the norm in many of the big systems that you see today--so for example Google runs on hundreds of thousands of machines, and Amazon is going to be similar.

My particular focus recently has been on storage on the Internet. I really do believe that in the not too far distant future more and more of our storage is going to be moved off of our personal devices and on to storage provided through Internet providers. You can see Amazon is already offering something like this, Google's offering something like this. (Some) corporations are outsourcing the storage of their data to third-party providers.

I'm also very interested in how...I could take advantage of this as a private person. For instance I would really like to get my data off of my personal device because I'd like to be able to access it from many devices like my cell phone or if I happen to be in an Internet cafe. I also would like to not have to worry about making backups. I'd like to not have to worry about what happens if my computer crashes and I lose all of my important information, so I've been looking at what is the underlying technology that...needs to be developed in order to make that vision a reality.

(However) the problem with cloud computing, as it's defined today, is that it doesn't actually solve all the problems--for example there are huge confidentiality problems. You know you don't want to put your tax information out in cloud computing unless you have very strong guarantees that it's not going to be visible. There are also very important guarantees needed about preservation of data and reliability.

You need to be guaranteed that whenever you want to see it, it's there. You'd like to be able to share it with some people but not others, so there are a lot of things that aren't yet in place that need to be in place before this really becomes a commodity that everybody's going to feel comfortable using. I believe actually that when it comes to issues like confidentiality (it's going to require) a combination of technical work and laws (to solve it).

Can code ever become hack-proof?
I don't know. Right at this moment in time what you see going on is like a game. People who don't want to let the hackers in develop new techniques that the hackers at that point in time can't circumvent. But then before very long the hackers have figured out a way round it. I'd like to think there will be a time when a lot of our basic software will be hack-proof but there's a lot of...criminal activity out there. That's going to go on and you know what they're going to come up with is going to be hard to predict.

Is the Internet fundamentally insecure?
It is actually. The Internet was designed in an era when people didn't think about this kind of stuff. I mean "hacker" used to not be a bad word--a hacker used to be somebody who was interested in building programs. But it's migrated into this other meaning, which is somebody who's doing bad stuff on the Internet.

How might data breaches best be prevented?
First of all the data ought to be encrypted whenever it's put on any sort of removable media--that's sort of obvious--but that won't solve the problem entirely. That's really the kind of thing I'm working on--that's what I meant when I talked about confidentiality. What can we do to make it much less likely that the data gets released?
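
As a rough illustration of the "encrypt it before it leaves your hands" point, the sketch below uses Java's standard javax.crypto API to encrypt a small piece of data with AES-GCM before it would be written to removable media or handed to a storage provider. The class name and the toy plaintext are made up for the example, and a real system would also need key management and integrity checking, which is part of what makes the problem hard.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class EncryptBeforeStoring {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit AES key; in practice it would come from a key store
        // and stay with the data's owner, not with the storage provider.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // AES-GCM requires a fresh random nonce for every encryption.
        byte[] nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));

        byte[] plaintext = "tax records".getBytes(StandardCharsets.UTF_8);
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Only the ciphertext and the nonce are written to the removable medium
        // or uploaded; without the key the provider learns nothing useful.
        System.out.println("Encrypted " + plaintext.length + " bytes into " + ciphertext.length + " bytes");
    }
}
```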

It's something that the research community is very interested in right now. What's going on is thinking about, "How do we conceptualize the problem? What can different techniques accomplish?" Security is not only about data, it's also about tracking what you're doing...everything you do on the Internet can be collected and people can mine that information, so there's another kind of breach of confidentiality lurking there. And so there are immense problems lurking--coming up in the future.

We're talking about things like identity theft, which is already happening...We're talking about government eavesdropping--which as you know is already happening also. These are problems that are clearly ahead of us that need to be dealt with.

Does Google know too much, have too much data?
Well you know if Google ever decided to misbehave I think there'd be big trouble. I mean people put a lot of trust in Google and actually all the other online providers.

How might computing languages and software engineering change over the next few years?
There is...a potential looming crisis which is that computer manufacturers are starting to come up with machines that have many processing units inside them--these are things called cores. And the machine on your desk has maybe two cores or four cores but people are talking now about machines with maybe hundreds of cores, thousands of cores--but people don't really know how to program these machines. So perhaps there will be some interesting advances in programming languages because of the need to figure out how to get programs to run on those machines.
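
As a small illustration of what "getting programs to run on many cores" can look like today, the Java sketch below runs the same computation sequentially and then lets the runtime spread it across whatever cores are available. Splitting work like this is easy only because each term is independent; the open problems Liskov alludes to come from programs whose pieces must coordinate and share data.

```java
import java.util.stream.LongStream;

public class ManyCores {
    public static void main(String[] args) {
        System.out.println("Available cores: " + Runtime.getRuntime().availableProcessors());

        // Sequential sum of squares of 1..1,000,000.
        long sequential = LongStream.rangeClosed(1, 1_000_000)
                .map(n -> n * n)
                .sum();

        // The same computation, split across the available cores by the runtime.
        long parallel = LongStream.rangeClosed(1, 1_000_000)
                .parallel()
                .map(n -> n * n)
                .sum();

        // Same answer either way; only the scheduling differs.
        System.out.println(sequential == parallel);
    }
}
```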

You did your thesis in artificial intelligence. What excites you about the potential of AI? And what concerns do you have, if any?
One thing that we might see coming at some point is a much better search engine. Right now when you search on Google, if you happen to put the right keywords in you'll get what you want but we've all had the experience where you get lots of hits but they aren't really what you're looking for.

With artificial intelligence techniques it may be (possible) to come up with a much better way of finding what you're looking for. That's something I think we can expect from artificial intelligence in the future. Unfortunately those very same techniques will allow very efficient data mining of the patterns of your use on the Internet and so forth--so there's the good side and the bad side.

Will machines have emotions? Will they be able to interact with people the way that people do? Do we want that? I think there are ethical issues too (around artificial intelligence). One thing that's looming in the near future, I don't know how close this is, is the possibility of fighting wars with robots. This is a very scary thing and is probably going to require new conventions--international conventions--governing their use. As with many of the advances we see in science, like biotechnology and some of the stuff that's going on there, there are ethical issues that come into play.

Do you think there are barriers to women getting into computing? And if so what are they?
Yes I definitely think there are barriers. It's a little hard to understand what they are but if you look at the numbers you can see that we're at maybe 20 percent...On the faculty at universities it's somewhere between 15 and 20 percent so it's a small fraction compared to the fraction of women in the population as a whole.

Exactly what the barriers are--people have theories about this (but) they don't totally understand it. I think a lot of it has to do with our societies. It's probably strongly related to other things we see going on, like the fact that women are not well represented...at the top in maths. If you look at, for example, who's doing well in the maths contests that happen internationally, you'll see that women are underrepresented there.

I don't believe any of this has to do with basic abilities but I do think it has a lot to do with the way that our society is--what we think is appropriate for women to do (and) what we think is appropriate for men to do. I also think, in the case of computers, that the way computers (are seen as geeky)--the whole notion of the nerd sitting at the computer by himself or playing a violent game--is also a turn-off for girls. That's probably another reason why girls aren't going into computer science.

I do think it's mostly a problem before they get to college--if they get to college and they still are open-minded enough to be interested, and they didn't cut themselves off by not taking the necessary science and math ahead of time, then I think at least they aren't going to hit the glass ceiling until they get quite far along.

We have seen a gradual increase in the number of women in the last few years (at MIT) and again I don't think we really understand what this is all about. It's possible that the advent of social networks and the use of the Internet for things like Twitter and Facebook...is making it seem more like something that girls would be interested in. Perhaps it's having an impact on girls before they get to college and this is causing them to be more interested in the field, but I really don't know.

Do you think there is prejudice against women in the IT industry?
I think there probably is--it's very hard to identify it. Like in many professions, it really starts to kick in at higher levels. If you talk to women undergraduates or graduate students, they would probably think there's no discrimination at all. But if you start to talk to women who are out in industry and moving into upper management--even if it's a technical track of management--I think you'll hear a different story. It's like the legal profession, it's like so many professions--it's as you get up higher that this glass ceiling comes in.

There's an ongoing debate about how computing is taught at universities--and whether computer science degrees should have more input from industry so they focus less on theory and more on teaching students skills businesses need to plug employment gaps. What's your view on this?
This is a very long-standing argument. Industry keeps saying, "Oh, we want somebody who knows how to program in C++," and universities say you need somebody who understands what computation's about. I'm absolutely on the university side here. It's such a short-sighted view to think that you should get people out of university who are trained to do one specific thing. The problem is that the field moves, and if you don't have somebody coming out of university with general skills that allow them to move with it, then in a few years they're obsolete.

Do you have a favorite operating system?
Well the operating system I use for my daily work is Linux.

-----------------------------
By Natasha Lomas
Source: CNET

Natasha Lomas of Silicon.com reported from London.

©2009 CBS Interactive Inc. All rights reserved.
