Today’s target for artificial intelligence (AI) seems to be artificial general intelligence (AGI), a technology that is competent in many areas, like humans. AI is most often highly specialized, focusing on one area with a narrow set of tasks. This sort of AI is best suited for specialized audiences with specialized needs. But with AGI, the prophets of AI can achieve their dream: AI for everyone, everywhere.

Or so the prophecy claims.

We very well may achieve AGI, but I’m skeptical we’ll get there in the next decade (which, compared to the estimates from the prophets of AI, is an eternity). The simple truth is that we still know so little about how the human brain works. Regardless of how some may feel about humans, our brains are complex machines, calculating far more than they’re given credit for.

The developers of AGI seem hellbent on replicating and/or replacing humans. But can you replicate or replace what you don’t fully understand? Supplementing and improving upon human intelligence seems a far better goal. This is why I prefer the concept of augmented intelligence over AGI¹.

Anyone familiar with SMART goals knows that goals should be attainable—that’s the ‘A’ in ‘SMART’, after all. And I’m not convinced that replicating or replacing human thought and processing will be attainable in the near future.

If Gary Marcus is right—if the hype is dying and the return on investment just isn’t there²—then it feels as if AGI will be attainable much, much later than the prophets of AI would have us believe.

Jake LaCaze still believes in the potential of humans.

  1. AI Should Augment Human Intelligence, Not Replace It from Harvard Business Review

  2. The ROI on GenAI might not be so great, after all by Gary Marcus