Marvin Minsky on AI
Found this post on /. that references Marvin Minsky's podcasts on AI, but I am unable to locate them.
Two of the comments are very insightful, written by people whose thinking runs along the same lines as our weekend discussions in the Proficience course.
Quotes:
While much of the "traditional AI" hype could be considered dead, robotics is continuing to advance, and much symbolic AI research has evolved into data-driven statistical techniques. So while the top-down ideas of the older AI researchers haven't panned out yet, bottom-up techniques will still help close the gap.
Also, you have to remember that AI is pretty much defined as "the stuff we don't know how to do yet". Once we know how to do it, people stop calling it AI and then wonder "why can't we do AI?" Machine vision is doing everything from factory inspections to face recognition, we have voice recognition on our cell phones, and context-sensitive web search is common. All those things were considered AI not long ago. Calculators were once even called mechanical brains.
by SnowZero
Personally I don't think it's quantum computers that will be the breakthrough, but simply a different architecture for conventional computers. Let me go on a little tangent here.
Now that we've reached the limits of the Von Neumann architecture [wikipedia.org], we're starting to see a new wave of innovation in CPU design. The Cell is part of that, but the stuff ATI [amd.com] and NVIDIA [nvidia.com] are doing is also very interesting. Instead of one monolithic processor connected to a giant memory through a tiny bottleneck, processors of the future will be a grid of processing elements interleaved with embedded memory in a network structure. Almost like a Beowulf cluster on a chip.
People are worried about how conventional programs will scale to these new architectures, but I believe they won't have to. Code monkeys won't be writing code to spawn thousands of cooperating threads to run the logic of a C++ application faster. Instead, PhDs will write specialized libraries to leverage all that parallel processing power for specific algorithms. You'll have a raytracing library, an image processing library, an FFT library, etc. These specialized libraries will have no problem sponging up all the excess computing resources, while your traditional software continues to run on just two or three traditional cores. [A rough sketch of this pattern appears after the quote.]
Back on the subject of AI, my theory is that these highly parallel architectures will be much better suited to simulating the highly parallel human brain. They will excel at the kinds of pattern-matching tasks our brains eat for breakfast. Computer vision, speech recognition, natural language processing: all of these will be highly amenable to parallelization. [See the second sketch below.] And it is these applications which will eventually prove the worth of non-traditional architectures like Intel's 80-core chip. It may still be a long time before the sentient computer is unveiled, but I think we will soon finally start seeing real-world AI applications like decent automated translation, image labeling, and usable stereo vision for robot navigation. Furthermore, I predict that Google will be at the forefront of this new AI revolution, developing new algorithms to truly understand web content in order to reject spam and improve rankings.
by Modeless
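
Neither commenter shows code, but the "specialized parallel library" idea Modeless describes is easy to sketch. Below is a minimal, hypothetical example (parallel_scale is an invented name, and modern C++'s std::async stands in for whatever such a library would really use internally): the application logic stays single-threaded, while one library call fans the work out across every available hardware core.

#include <algorithm>
#include <cstddef>
#include <future>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Hypothetical "specialized library" routine: scales a signal in place,
// splitting the work across all available hardware threads.
void parallel_scale(std::vector<double>& data, double factor) {
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (data.size() + workers - 1) / workers;
    std::vector<std::future<void>> tasks;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = std::min(begin + chunk, data.size());
        if (begin >= end) break;
        tasks.push_back(std::async(std::launch::async,
            [&data, factor, begin, end] {
                for (std::size_t i = begin; i < end; ++i)
                    data[i] *= factor;   // each task touches a disjoint range
            }));
    }
    for (auto& t : tasks) t.get();  // wait for all workers to finish
}

int main() {
    // The "application logic" stays plain and single-threaded...
    std::vector<double> signal(1 << 20, 1.0);
    // ...while the library call soaks up the excess cores.
    parallel_scale(signal, 0.5);
    std::cout << "sum = "
              << std::accumulate(signal.begin(), signal.end(), 0.0) << "\n";
}

The calling code never mentions threads at all, which is exactly the point: the parallelism lives inside the library.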
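
The pattern-matching claim can be sketched the same way. In a naive cross-correlation (again hypothetical code; match_scores is an invented name), every output element is computed independently of the others, which is why vision- and speech-style workloads spread so naturally across many cores:

#include <algorithm>
#include <cstddef>
#include <future>
#include <iostream>
#include <thread>
#include <vector>

// Naive cross-correlation: score[l] measures how well `pattern` matches
// `signal` at offset l. No output element depends on any other, so the
// offsets can be split freely across hardware threads.
// Assumes signal is at least as long as pattern.
std::vector<double> match_scores(const std::vector<double>& signal,
                                 const std::vector<double>& pattern) {
    std::size_t lags = signal.size() - pattern.size() + 1;
    std::vector<double> score(lags, 0.0);
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (lags + workers - 1) / workers;
    std::vector<std::future<void>> tasks;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = std::min(begin + chunk, lags);
        if (begin >= end) break;
        tasks.push_back(std::async(std::launch::async, [&, begin, end] {
            for (std::size_t l = begin; l < end; ++l)
                for (std::size_t i = 0; i < pattern.size(); ++i)
                    score[l] += signal[l + i] * pattern[i];
        }));
    }
    for (auto& t : tasks) t.get();  // join all workers
    return score;
}

int main() {
    std::vector<double> signal  = {0, 0, 1, 2, 1, 0, 0, 1, 2, 1, 0};
    std::vector<double> pattern = {1, 2, 1};
    auto score = match_scores(signal, pattern);
    auto best  = std::max_element(score.begin(), score.end());
    std::cout << "best match at offset " << (best - score.begin()) << "\n";
}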