Focused, Helpful Robots

Intro

Most of my writing these days is either just for me, or for my job. When I do write with the intention of posting something in public, I either get stuck in perfectionist limbo, or find other excuses to stop myself from actually sharing anything online. In the spirit of anti-perfectionism, I’m periodically posting topics from my grab bag of ‘shower thoughts’ on how we use and think about artificial intelligence.

These were all intended to be longer posts. Maybe they still will be someday. For now, I’d like to share them as topics that might inspire others to go down their own personal rabbit holes, or serve as jumping off points for conversations with other technology wranglers and daydreamers.


Focused, Helpful Robots

One thing I like about using tools like ChatGPT or Claude to guide my personal research is the simplicity and mental quiet of the interface. They also have a sincerely helpful attitude that can be surprisingly rare when you go to ask for help online. The feeling I get from using them brings me back to watching “Star Trek: The Next Generation” as a kid. That was a future I wanted to live in. Beyond all that stuff about Earth becoming a utopia, you could just say “Computer” and a helpful voice would always respond, one that wasn’t just trying to get you to buy more stuff on Amazon.

Since then, the internet has turned into a noisy, distracting, and crowded place. There have always been pop-ups and viruses, but now it’s like we’ve designed the viruses right into the baseline experience. Autoplaying videos constantly invade your field of view. Even the stuff that isn’t technically ads is presented like ads, optimized to be attention-grabbing over anything else. Screen real estate on the web is maximized like a downtown office block or dystopian housing development, crammed shoulder-to-shoulder with nested content. Compared to this hectic digital landscape, using a simple text box on an otherwise mostly-blank page feels like stepping into a secluded Zen garden.

I also like that GenAI has a can-do attitude. It doesn’t aggressively question why you want to know about something in the first place. I’ve spent a lot of time on the internet trying to get help on forums, and it turns out that a lot of the people who frequent them are there to tell you not to do whatever it is you’re trying to do. I have yet to have GenAI tell me the same thing. It does couch its answers with precautions and caveats, which is what you actually want from people, too. And I do appreciate when people ask why I want to do something, to clarify intent; it’s something I’d like to start using more in my own instructions to GenAI. But what I don’t want is for that ‘why’ to turn into an essay about how what I’m doing is doomed and wrong and pointless.

You might say that you get what you pay for. Subscriptions to GenAI tools charge enough money that maybe they can ‘afford’ not to distract you, and you could say something similar about the kind of advice you get for free online. But I think there is a new pattern forming here that I hope continues to spread as these kinds of tools evolve and become more central to user experiences. By now, people have accepted that they’re going to have to filter through a lot of noise on the internet to find the signal. We do this today using a combination of dedicated apps and simply scrolling past all the desperate cries for clicks. But what if, in the future, clicks really didn’t matter? What if the attraction of a focused and helpful robot could create a positive emotional connection with an app or service, one that didn’t require any clickbait or pop-ups or flashing lights?

I’m hoping the emotional reaction I have to using GenAI tools isn’t just a fluke or a temporary condition. If it isn’t, then I can dare to dream that focused, helpful robots will someday reverse the trend of putting so much strain on users who simply want to do what they came to do. And maybe my Star Trek: TNG dreams aren’t totally dead after all.

Certified Organic Content

Intro

As someone who works with AI-driven systems, it’s both an inspiring and sometimes frustrating time to be alive. Not only are there now celebrity-level model architectures, but anyone can interact with these models the same way they would do a Google search or write an instruction manual. These posts will cover both the highs and lows of what I think this all implies for people on both sides of the two-way mirror (my current favorite metaphor for the way these models exist in the world).


“Certified Organic Content”

The practice of adding disclaimers to AI-generated content is one way to make sure people know it might be misleading, or totally wrong. In other words, the creator knows that a model’s effectiveness is expected to vary, so they encourage the human seeing it to be more skeptical than usual and maybe cut it some extra slack.

However, since AI and misinformation are both increasingly embedded in our digital experiences, it could make more sense in the future to call out when content is 100% ‘organic’, or reviewed by a human expert.

One reason is purely practical; the other is based on changing expectations. The scope and penetration of machine-made decisions is already difficult to unweave from the fabric of any content on the internet. And in the near future, the expectation that any given piece of content was assisted or completely generated by a machine, rather than a human, will be even more the status quo. Personally speaking, I’ve already run into plenty of situations where I found myself questioning, ‘is this a bad writer, or just bad robo-content?’

When that time comes, I think a lot of people will be more interested in knowing that specific content is ‘Certified Organic’ and created by humans, versus being made aware of when it’s AI-generated. Just as people today care more about the food and other things they put in their bodies than they did a hundred years ago, there’s a growing awareness that the content we choose to ‘ingest’ affects our emotional health and even the wiring of our brains. As time passes, a growing number of people will become more selective about whether they’re consuming ‘processed’ digital content versus something organically generated.

Of course, even if adding public disclaimers becomes less useful over time, there will always be a responsibility behind the scenes to catalog the jobs that AI is doing in a given system. For example, it’s critical to keep track of which decisions in a system are made based on personal information. It’s also important to be aware of which decisions are probabilistic, regardless of whether anything personal is involved. For the former, there are privacy concerns and evolving laws and regulations to reckon with. And for the latter, there’s always some cost involved when machines get things wrong. If we have a sense of what gets decided by a model, we can pay more attention to the riskiest decisions, in order to avoid the highest costs and recover from those mistakes in some elegant way.

Outside of the internal responsibilities of system owners, however, I still think that the people on the reflective side of the two-way mirror will ultimately care more about knowing what was created by other people than about getting confirmation of what specifically came from machines. (Explaining why a machine did what it did is another area that could see more attention soon, and one that could get either much harder or much easier as these models become more opaque and powerful. But that’s a topic for another day.)


