Lost in the Translation

In the early days of the digital revolution, we joked about a future in which everything would be digitized; mundane, everyday objects would have a chip, whether they needed one or not. Computerized washers, dryers, and refrigerators would be everywhere; our homes would be digitally controlled cocoons of comfort. Automobiles, too. Even our clothing, we laughed, would have cyber-intelligence woven in. As it turns out, of course, these things aren't nearly as funny as they used to be. Ridiculous, maybe, but there's nothing funny about the price tags on some of those digitally enhanced appliances that perform exactly the same functions as before, only in a more "intelligent" manner. I won't even get started on the wired clothing; the pressure in my skull is telling me to move on.

What we didn't joke about was the application of the new technology to things like language processing, or other calculation-intensive tasks better suited to machine than human. Sifting through endless volumes of text, for example, in search of a particular word or phrase isn't the best use of a person's time, especially not when there's a happy—and rapid—cyber-idiot on the desk just waiting for your next command. Now, most people don't think twice about that kind of sifting and cataloging; a Google search has all the novelty of a trip to the restroom.

Clever as those high-speed search algorithms may be, they still rely on the same progression of high and low voltage levels, clocked through a microprocessor, that was used in the early days. The software that controls the hardware hasn't really changed, either; it's still driving bits through the pipe. The human-machine interface is better now, but the machine is still the happy idiot it always was, only faster. Between the two extremes of binary idiocy and creative human thinking lie our high-level programming languages, which attempt to bridge the gap. Over the years, that realm has seen the appearance of a variety of bridging methodologies—early languages such as LISP and FORTH come to mind—all intended to make the translation of human thought processes into primitive binary code more natural and intuitive.

But even the most advanced symbolic logic and object-oriented programming languages fall short. To paraphrase a portion of David Byrne's The Numbered Universe discussion, we've attempted to use the languages of word and number to describe our world, even those aspects of it that transcend such language.

The digital world may be the climax of the reign of the world viewed as word and number, but it is only touching a part of our world, our lives. I sense that the search engine for gesture, image, sound and expression is a long way off, and may require a kind of "thinking", if we can even call it thinking, that is so vastly different [than] what we’ve been doing for 7 thousand plus years, that it may not be possible at all with the tools we’ve developed. I mean the technology and language and mathematics. I don’t mean we can’t discover the key to this other "language".

It seems the promise of artificial intelligence remains unfulfilled largely because of the difficulties involved in translation. It's tough to reduce the complexities of human thought processes to a computer language, particularly when those processes are so poorly understood in the first place. It's fuzzy logic, all right, and probably more analog than digital. In fact, this may be the ultimate irony when all is said and done: we may find that we've spent many years attempting to convert analog processes to digital for the benefit of our machines, when all the while such a conversion was not only unnecessary, but misguided.

Maybe the key isn't out there so much as it is in here.

