ChatGPT is a scam: a bad product, built unethically

How OpenAI uses AI hype, and fear, to cover up its harmful practices.

Since ChatGPT launched, it has felt like some kind of magic technology, as if we were close to actual artificial intelligence: we could talk to our computer the way we would to another person, and it would give us the answer to anything. But the more we use it, the more we find that it isn't magic at all; it's barely a product we can use. With made-up results and ripped-off sources of information, it's a technology that was released from the lab too soon.

However, it goes beyond a poor product to being a total lie: its creator, OpenAI, essentially covers up the extent of its flaws by building up AI hype and its own role in it.

Hallucination is another word for bullshit

It starts with how the technology works: the so-called 'hallucinations' come from the fact that the core of the technology is generating language, which can include gibberish. But it does so convincingly enough that people keep being surprised by results that turn out to be completely fabricated.

And hallucinations appear to be part of the company's marketing, as it presents itself not just as a tool but as a savior of humanity's future with AI. Experts believe we are years, if not decades, away from artificial general intelligence. Focusing on a nebulous future state seems like a great way to draw attention away from where the product is today: a half-baked concept.

Too important to be questioned

This grandiose vision also covers up copyright infringement. OpenAI is currently being sued by the New York Times, among others, with more companies expected to join. And its response seems to be that it knows better. Because they are the selfless leaders who will guide us down the scary road of our AI future, right?

I can understand wanting to push technology forward despite some flaws, and making the case for a shared range of knowledge to pull from. But OpenAI's flaws are harmful to how people use its tool. And its practices for sourcing information are harmful to content ownership, as it uses its lawyers to twist the intent of copyright law. The company's CEO, Sam Altman, seems to follow the 'move fast and break things' motto, which in the case of ChatGPT equates to promising to build good AI in the future by building bad AI right now.
