The AI hype is about how we dream of interacting with our tech
We're getting the wrong idea of what AI can actually do because we're trying too hard to put a human face on our devices.
Generative AI buzz is at an all-time high, with new gadgets and companies continuing to push AI into their products. Machine learning is a big step forward in technology, to be sure, but it's nowhere near its potential yet. Just as self-driving cars are still years away from being done right, so is AI.
The promise
A lot of what products promise AI to be is at best a disappointment, and at worst potentially harmful. They focus on large language models that let us ask questions in a more natural way and have the computer respond in a more natural, human-like way.
Their features center on multimodal interaction with the world: understanding what the device “hears” through your voice and “sees” through photos and video. Google released a concept demo of where we could be in a few years.
The reality
But we are not there yet. And some companies are realizing that. Google is releasing its AI tech mainly to developers, not as consumer products yet. And Apple is showing off its work in the form of research papers.
The thing is, many of these hyped AI features are already available, and on our phones. We can ask Siri for information, and Google Lens lets us get information about things we see. And they do a much better job.
We’re trying to reach the science-fiction promise of talking to our computers and having them answer back as if we were talking to another person. And we’re willing to give current tech a huge benefit of the doubt because it sounds convincing. But all we’re doing is anthropomorphizing our computers.