AI needs better interfaces

Apple’s approach to AI is a step in the right direction for how we interact with generative AI.

Apple recently held its yearly Worldwide Developers Conference, where it announces new features across its software platforms. As a surprise to no one, a large portion of the keynote focused on AI. Not Artificial Intelligence, but what Apple called Apple Intelligence. It was a way to separate itself from the ways AI is being done wrong by other companies.

The ethics of where Apple sources its training data are still iffy. Its official stance is that the data comes from the public (which translates to whatever is accessible on the web), along with licensed content, on which it provides no details. But unlike companies like Google with its AI-based search, Apple does not attempt to use the data to replace the sources of that data.

AI tech is an evolution, not a revolution

Some of the features Apple presented were an extension of Siri, allowing Siri to do what you would expect from a virtual assistant. Many people seem to be getting ahead of themselves about what they want AI to do. What the new Siri does builds naturally on what the previous version did, while keeping guardrails around what is currently possible with AI, and offering a glimpse of future potential.

All the feature examples showcased were grounded, working within aspects of what we already do on our phones. In fact, most of the features focused on personalized functionality that uses machine learning and generative AI to allow natural language interaction within a given context. For example, it can rewrite text to sound more “friendly”, “professional”, or “concise”, but only based on text you’ve already written.

How we interact with AI is a work in progress

My biggest takeaway from the event is what I also feel is the biggest problem with AI: how we interact with it. Writing a prompt as the interface for getting something done is the result of design by engineers, programmers who use the command line for everything. It may work for some things, but not for everything. Or for everyone.

Apple’s features have specific interfaces for the specific things you’re doing. For writing an email, you get proofreading, enhancement of your writing, and summarization of text. Image generation is used for what is essentially modern-day clip art, with categories for ideas and limited style options. You can ask Siri questions based on what is currently on your screen.

These interfaces make me actually want to use generative AI for day-to-day tasks, something that current prompt-writing interfaces have yet to do for me. It finally feels like AI has a bright future for us humans.
