I went to a talk the other day by Andrew Chien, the director of research at Intel. It was pretty light on technical details (which was disappointing considering the audience was mostly hardcore CS people), but it was still a really cool presentation. The focus of the talk was "essential computing", which basically means making advanced computation more reliable, so that people will actually be able to count on it.
I mean more reliable in a broader sense than just not crashing, of course; like I've said before, not crashing should really be the bare-minimum standard for software. Technology can fail in a lot of ways, even when it's working perfectly - if you expect to be able to do something with a piece of software, and then you can't, that's a failure in a real sense, even if nothing actually broke. So, when Mr. Chien was talking about essential computing, he was mostly talking about making computers do things which they can't do right now, but which seem like things they should be able to do.
You know what else has computers that do what it seems like they should be able to do? Science fiction. A lot of the talk did actually center around sci-fi concepts, or ideas which got their start in sci-fi, which I thought was really pretty neat. It's like, here are people actually working on all this stuff that seemed ridiculous not too long ago, and getting results! There will probably be another blog post or two about the talk, but today there's just one specific comment that stuck in my mind.
Mr. Chien was talking about real-time image processing, and how current-generation systems take about 10 kilowatts of power to run the sort of processing they need. To run that same workload on a handheld device, it would need to draw less than one watt to be feasible. Now, in almost any other field, that flat out just won't happen - the internal combustion engine, to make up an example, won't ever become ten thousand times more efficient. In computer science, though, we can probably expect two orders of magnitude improvement from architecture and another two from better algorithms, and we could see the necessary four orders of magnitude improvement in as little as five to eight years.
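The arithmetic behind that claim is simple enough to sketch. (The factor-of-100 estimates for architecture and algorithms are from the talk; the rest is just logarithms.)

```python
import math

# Power budget gap cited in the talk: ~10 kW today vs. less than 1 W on a handheld.
current_watts = 10_000
target_watts = 1

gap = current_watts / target_watts          # 10,000x
orders_needed = math.log10(gap)             # 4 orders of magnitude

# The two improvement sources cited, roughly two orders of magnitude each.
architecture_factor = 100
algorithm_factor = 100
combined = architecture_factor * algorithm_factor

print(f"Gap to close: {gap:,.0f}x ({orders_needed:.0f} orders of magnitude)")
print(f"Combined improvement: {combined:,}x -> gap covered: {combined >= gap}")
```

The point is that the two improvements multiply rather than add, which is why two seemingly modest factors of 100 can cover a ten-thousandfold gap.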
This statement really resonated with me, because as I've come to realize, this is pretty much my favorite thing about computer science. I've been reflecting lately on what I want out of life. Overall, I'm pretty content; I don't really care about money, or prestige, or any of the stupid social games people play. Maybe someday, I'd like to find a nice girl and start a family, but even that's not a priority. What I've come to realize is that the most important thing to me is always having something interesting to do. The history of CS has been a series of revolutions, some more significant than others. Based on its history so far, it seems reasonable to assume that the field will provide me with interesting things to think about pretty much indefinitely; and really, what more could I ask for?