AI for generating truth
Using AI to collate information and generate art for expression continues to raise questions.
AI and large language models continue to improve. Many use cases are behind the scenes. The more visible use cases, however, are still problematic, because the tech claims to do what it can't seem to get quite right: output factual truth, and generate any kind of human truth, as art. But can it get there?
Collating information
Search is one way AI is being used at scale. The reason it makes sense is because bots scraping content is how search engines worked even before AI. AI’s natural language processing is able to provide summarized versions of results that you would normally have to browse and read through.
But it often falls short, offering inaccurate information at best and completely fabricated information at worst. Studies have shown it to be wrong as much as half the time. So if you're using AI search results for research, you'll have to double-check its sources, and then keep digging for the parts it made up. Is the potential effort saved worth using it at all?
And this collating applies to other areas, like summarizing documentation or meeting notes. What happens when it gets those wrong? Apps are starting to embed similar tech, and in those cases you would have a tough time verifying its mistakes. Can you trust the app's results at that point?
Facts seem like something that the technology can get better at. But there is no understanding behind generative AI. It doesn't know the difference between melted cheese belonging on pizza and a joke suggesting you use glue to keep the cheese from sliding off.
Art for expression
Art is another form of truth that's not fact-based (though it can be), but provides human truths through its expression. AI is already problematic in generating any kind of art, since it uses artists' work without credit, permission, or payment. Services like Apple Music split royalties with artists; why can't AI companies do the same?
So far, the only AI art that seems to be gaining any traction is Sora video clips, which are basically just memes made for fun, the same way we have GIFs or images with a text caption. Memes count as art because the prompts are made by humans. And they seem safe from copyright infringement because of the way we freely share memes.
Art comes from human experience. And memes, even with copyrighted material, could be considered a form of parody, in a similar way South Park is able to use Donald Trump’s face. Is there a difference between using generative AI tools to produce something in a few minutes and South Park using a small production team with computers to make an episode within 6 days?
It's ultimately about taste. And it's tough to distinguish good taste from bad taste. Plenty of court cases have tried to do this with all forms of art. It's really a balance between how the artist decides to produce their art and what their audience chooses to accept.
But there is a difference between good art and bad art. Good art resonates with someone on a deeper level, while bad art just feels like slop. It's easy for AI to generate slop, but even slop can be a form of entertainment. There's always been plenty of slop on TV and online. We individually have to figure out our own method for filtering through to the art that is meaningful to us.
