OpenAI’s Q* is alarming for a different reason

When AI systems start solving problems, the temptation to give them more responsibility is predictable, but this warrants greater caution.


When news stories emerged last week that OpenAI had been working on a new AI model called Q* (pronounced “Q star”), some suggested this was a major step towards powerful, human-like artificial intelligence that could one day go rogue. What’s more certain: The hype around Q* has boosted excitement about the company’s engineering prowess, just as it’s steadying itself from a failed board coup.

AI hype cycles have taken the public for a ride plenty of times before. The real warning to take from Q* is the direction in which these systems are progressing. As they get better at reasoning, it will become more tempting to hand such tools greater responsibilities. More than any fears of AI annihilation, that alone should give us pause.

