In Their Own Words

It feels like ages ago now - the pre-meeting-transcript-summary days. Being able to condense an hour of recorded conversation into an organized list of bullet points still feels like a magic trick.

Something I’ve thought about recently is that it’s not always what was said, but the way it was said, that leads to an a-ha moment. Or I might have missed it completely the first time around, because I was looking at a cool bird outside or making the mistake of multitasking.

There’s also the risk that even the best summaries all start to blend together. If something starts to feel too ‘automated,’ my brain has a harder time latching onto it as important. This is probably a side effect of learning to filter automated noise from the human signals I should actually pay attention to.

So what can I do to keep some of those original statements intact? A very simple first experiment is to ask the model to include some quotes alongside the condensed version.

So far, I’ve played around with some variations of:

Include up to 3 unique, unusual, or especially illustrative quotes from each participant.

I can’t share any real examples from my job, but here are the results from using it to summarize a transcript of an interview with Jaron Lanier:

Source: https://www.youtube.com/watch?v=kc_Jq42Og7Q

(Video was transcribed with Tactiq’s free transcript generator: https://tactiq.io/tools/youtube-transcript, and no other pre-processing was done.)
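
As a side note, this transcription step can also be scripted. Here’s a minimal sketch using the youtube-transcript-api Python package in place of Tactiq; the get_transcript call is that package’s classic interface (newer versions have reworked it), so treat this as a starting point rather than a recipe:

    from youtube_transcript_api import YouTubeTranscriptApi

    # "kc_Jq42Og7Q" is the video ID from the URL above.
    segments = YouTubeTranscriptApi.get_transcript("kc_Jq42Og7Q")

    # Rebuild a timestamped transcript, one line per caption segment.
    lines = []
    for seg in segments:
        minutes, seconds = divmod(int(seg["start"]), 60)
        lines.append(f"[{minutes:02d}:{seconds:02d}] {seg['text']}")
    transcript = "\n".join(lines)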

Full prompt:

You are an expert at discourse on social media and its effects on society.  The following is a transcript of an interview with timestamps included. Your task is to:

1.) Identify the key points or takeaways from this discussion that relate to social media and its effects on society.
2.) Share up to 3 unique, unusual, or especially illustrative quotes from each participant. Add punctuation if necessary, to make them easier to understand.
3.) Identify at least one call to action.
4.) Ask at least one followup question of each participant to clarify something that they said.

GPT-4o:

Key Points on Social Media and Its Effects on Society

  1. Manipulation by Algorithms: Social media platforms use algorithms to subtly manipulate users’ behavior, benefiting advertisers by slightly adjusting users’ preferences without their knowledge.

  2. Impact on Society: Social media has led to increased surveillance, behavior modification, political instability, and a rise in depression, particularly among teenagers.

  3. Addictive Nature: Social media platforms are designed to be addictive, using schemes of rewards and punishments to keep users engaged, which can have severe psychological effects.

  4. Negative Feedback Loop: The algorithms amplify negative responses faster than positive ones, leading to a more polarized and irritable society.

  5. Need for a Business Model Change: The current business model of social media, which relies on manipulation for profit, needs to be replaced with alternatives that do not exploit users.

Illustrative Quotes

Participant 1

  1. “You’re being subtly manipulated by algorithms that are watching everything you do constantly.”
  2. “Society has been gradually darkened by this scheme in which everyone is under surveillance all the time.”
  3. “The algorithms naturally catch the negativity and amplify it, introducing negative people to each other.”

Participant 2

  1. “The problem is that behavioral techniques are often invisible to the person who’s being manipulated.”
  2. “If you’re a young person and you’ve only lived with social media, your first duty is to yourself.”
  3. “We don’t have to get rid of the smartphone; we just have to get rid of the manipulation machine in the background.”

Call to Action

  • For Individuals: Take a break from social media for six months to gain perspective and understand yourself better without constant manipulation.
  • For Society: Encourage a small percentage of the population to stay off social media to maintain diverse perspectives and enable real conversations.

Follow-Up Questions

To Participant 1

  • You mentioned that social media algorithms manipulate users by amplifying negativity. Could you provide specific examples of how this has played out in real-world scenarios?

To Participant 2

  • You suggested that a different business model could address the issues with social media. What specific business models do you believe would be most effective in eliminating the manipulation aspect?

Claude 3 Opus:

Here are the key takeaways, quotes, call to action, and followup questions based on the interview transcript:

Key Points/Takeaways:

  1. Social media platforms are manipulating users through algorithms that constantly observe behavior and subtly modify what people see to influence them, often in negative ways. This is bad for both individuals and society.

  2. The business model of major social media companies is fundamentally about addiction and manipulation. Even with efforts to tweak the services, as long as the underlying incentives remain the same, the core problems will persist.

  3. There is strong evidence linking social media usage to concerning issues like teen depression and suicide. The rapid feedback loops and greater amplification of negativity online is having a detrimental impact.

Illustrative Quotes:

From the interviewee:

“You’re being subtly manipulated by algorithms that are watching everything you do constantly and then sending you changes in your media feed, in your diet, that are calculated to adjust you slightly to the liking of some unseen advertiser.”

“Society has been gradually darkened by this scheme in which everyone is under surveillance all the time, and everyone is under this mild version of behavior modification all the time. It’s made people jittery and cranky. It’s made teens especially depressed, which can be quite severe.”

“As long as [Facebook’s] business incentives are contrary to the interests of the people who use it, who are different from the customers, then no matter how serious, and I believe he’s sincere, and no matter how clever he is, he can’t undo that problem. He has to go back to the basics and change the nature of the business plan.”

Call to Action:

The interviewee urges people, especially young people, to take an extended break from social media (at least 6 months) in order to gain perspective and truly know themselves without the constant manipulation. He believes if at least some percentage of society can exist outside these platforms, it will allow for clearer thinking and important conversations about the impact of social media.

Followup Questions:

For the interviewee:

You mention viable alternative business models for social media that don’t rely on manipulation, such as subscription services or public utilities. Can you expand on what you think is the most promising path forward to evolve these platforms in a healthier direction?

For the interviewer:

You pushed back on whether social media can really be blamed for serious issues like teen suicide, suggesting correlation doesn’t necessarily equal causation. If the evidence continues to mount that there is a direct link, at what point do you think regulatory action or major changes by the tech companies would be warranted? Where do you draw that line?


Looking at these results, it feels like a step in the right direction. Instead of just paraphrasing, we get a taste of each person’s way of saying things.

Since part of this experiment is to keep the essence of each participant’s point of view intact, I also want to play around with:

At least one of these quotes should illustrate the individual point of view of each participant, including statements that other participants might disagree with.

And to emphasize what the participants have in common, as a way to find common ground for overcoming disagreements, it will be fun to play around with something like:

At least one of these quotes should illustrate what the participants appear to agree on.

And so on.

Of course, the calls to action and questions should also get tweaked according to my needs. For example, some conversations are going to need a much more comprehensive, action-oriented breakdown (like a meeting to plan next steps on a complex project). Others might focus more on diving even deeper with the followup questions to explore the topic in a less tactical way. Maybe one way to approach this would be to assign a role of ‘project manager’ for the former, versus ‘strategic researcher’ for the latter. The key step I’m focusing on here is to give people a chance to speak for themselves.
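
To make that concrete, here’s a rough sketch of how I might wire the role and the extra quote instructions into the prompt as parameters, using the OpenAI Python client. The function name and default wording here are just my own scaffolding for illustration, not a finished tool:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def summarize(transcript, role, quote_rules=""):
        """Summarize a timestamped transcript while keeping participants' own words."""
        prompt = (
            f"You are {role}. The following is a transcript of an interview "
            "with timestamps included. Your task is to:\n\n"
            "1.) Identify the key points or takeaways from this discussion.\n"
            "2.) Share up to 3 unique, unusual, or especially illustrative "
            "quotes from each participant. Add punctuation if necessary, "
            f"to make them easier to understand. {quote_rules}\n"
            "3.) Identify at least one call to action.\n"
            "4.) Ask at least one followup question of each participant "
            "to clarify something that they said."
        )
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": prompt},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content

With that in place, the ‘strategic researcher’ variant with the agreement-focused rule becomes a one-line change:

    summary = summarize(
        transcript,
        role="a strategic researcher",
        quote_rules=("At least one of these quotes should illustrate "
                     "what the participants appear to agree on."),
    )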


Speaking Up

One of the common pieces of feedback that stands out from earlier in my career is to speak up more in meetings. Sometimes people straight up said it. Other times, it was implied by more vaguely worded feedback about fuzzier concepts like scope of influence.

“Speaking up more” was challenging advice for me to follow. Sometimes it still is. I am the type of extremely introverted person who prefers to observe and analyze the situation for a while before voicing an opinion, like an owl hiding and watching from a hole in a tree, where hopefully nobody will notice me until I’m ready to emerge.

But when people ask you to speak up more, they aren’t asking you to just arbitrarily voice opinions. There are enough opinions. What people usually need is more people asking good questions.

One of the best parts about asking questions is that you don’t have to know anything about the topic at hand. In fact, you’re putting yourself in the position of someone who doesn’t know the answer, whether you actually think you do or not.

There is a reason why, when reviewing a production incident, you ask the five whys. It gets you closer and closer to the real problems you need to solve to prevent similar issues from happening again.

This relates to the importance of cultivating curiosity. Following my curiosity has led to so much of the growth and momentum in my life. Curiosity is the thing that keeps you asking questions. Curiosity will lead you to the real answers, instead of the answers that agree with what you already believe. If you’re curious, you never need to worry about the things you can’t control. You’ll figure it out and find a way forward.

So if someone tells you that you need to speak up more, start by asking questions. If you can’t think of anything at first, you can start by asking why. Why are we thinking of doing X using Y? Why is this problem happening so often? That’s what starts the wheels turning. If you’re lucky, your curiosity engine will start revving to help you overcome your fears of speaking up.


Focused, Helpful Robots

Intro

Most of my writing these days is either just for me, or for my job. When I do write with the intention of posting something in public, I either get stuck in perfectionist limbo, or find other excuses to stop myself from actually sharing anything online. In the spirit of anti-perfectionism, I’m periodically posting topics from my grab bag of ‘shower thoughts’ on how we use and think about artificial intelligence.

These were all intended to be longer posts. Maybe they still will be someday. For now, I’d like to share them as topics that might inspire others to go down their own personal rabbit holes, or serve as jumping off points for conversations with other technology wranglers and daydreamers.


Focused, Helpful Robots

One thing I like about using tools like ChatGPT or Claude for guiding my personal research is the simplicity and mental quiet of the interface. They also have a sincerely helpful attitude that can be just as rare when you go to ask for help online. The feeling I get from using them brings me back to watching “Star Trek: The Next Generation” as a kid. That was a future I wanted to live in. It wasn’t only all that stuff about Earth becoming a utopia - you could just say “Computer” and a helpful voice would always respond, one that wasn’t just trying to get you to buy more stuff on Amazon.

Since then, the internet has turned into a noisy, distracting, and crowded place. There have always been pop-ups and viruses, but now it’s like we’ve designed the viruses right into the baseline experience. Autoplaying videos constantly invade your field of view. Even the stuff that isn’t technically ads is presented like ads, optimized to be attention-grabbing over anything else. Every bit of screen real estate on the web is filled, like a downtown office block or dystopian housing development, crammed shoulder-to-shoulder with nested content. Compared to this hectic digital landscape, using a simple text box on an otherwise mostly-blank page feels like stepping into a secluded Zen garden.

I also like that GenAI has a can-do attitude. It doesn’t aggressively question why you want to know about something in the first place. I’ve spent a lot of time on the internet trying to get help on forums. It turns out that a lot of the people who frequent forums are there to tell you not to do whatever it is you’re trying to do. I have yet to have GenAI tell me the same thing. It does couch its answers with precautions and caveats, which is what you actually want from people, too. And I do appreciate when people ask why I want to do something, to clarify intent. It’s something I’d like to start using more in my instructions to GenAI. But what I don’t want is for that ‘why’ to turn into an essay about how what I’m doing is doomed and wrong and pointless.

You might say that you get what you pay for. Subscriptions to GenAI tools charge you enough money that maybe they can ‘afford’ not to distract you. You could say something similar about the kind of advice you might get for free online. But I think there is a new pattern forming here that I hope continues to spread as these kinds of tools evolve and become more central to user experiences. By now, people have accepted that they’re going to have to filter through a lot of noise on the internet to get to the signal. We do this today using a combination of dedicated apps and simply scrolling past all the desperate cries for clicks. But what if, in the future, clicks really didn’t matter? What if the attraction of a focused and helpful robot could create a positive emotional connection with an app or service that didn’t require any clickbait or pop-ups or flashing lights?

I’m hoping the emotional reaction I have to using GenAI tools isn’t just a fluke or temporary condition. If it isn’t, then I can dare to dream that focused, helpful robots will someday reverse the trend of putting so much strain on the user to simply do what they came here to do. And maybe my Star Trek: TNG dreams aren’t totally dead after all.


Certified Organic Content

Intro


In the spirit of anti-perfectionism, I’m periodically posting topics from my grab bag of ‘shower thoughts’ on how we use and think about artificial intelligence. As someone who works with AI-driven systems, it’s both an inspiring and sometimes frustrating time to be alive. Not only are there now celebrity-level model architectures, but anyone can interact with these models the same way we would do a Google search or write an instruction manual. These posts will cover both the highs and lows of what I think this all implies for people on both sides of the two-way mirror (my current favorite metaphor for the way these models exist in the world).



“Certified Organic Content”

The practice of adding disclaimers to AI-generated content is one way to make sure that people know that it might be misleading, or totally wrong. In other words, the creator knows that a model’s effectiveness is expected to vary, so they encourage the human seeing it to be more skeptical than usual and maybe cut it some extra slack.

However, since AI and misinformation are both increasingly embedded in our digital experiences, it could make more sense in the future to call out when content is 100% ‘organic’ or reviewed by a human expert.

One reason is purely practical, and the other is based on changing expectations. The scope and penetration of machine-made decisions are already difficult to unweave from the fabric of any content on the internet. And in the near future, the expectation that any given piece of content was assisted or completely generated by a machine, rather than a human, will be even more the status quo. Personally speaking, I’ve already run into plenty of situations where I found myself questioning, ‘is this a bad writer, or just bad robo-content?’

When that time comes, I think a lot of people will be more interested in knowing that specific content is ‘Certified Organic’ and created by humans, versus being made aware of when it’s AI-generated. Just as more people today care about the food and other stuff they put in their bodies than they did a hundred years ago, there’s a growing awareness that the content we choose to ‘ingest’ affects our emotional health and even the wiring of our brains. As time passes, a growing number of people will become more selective about whether they’re consuming ‘processed’ digital content, versus something organically generated.

Of course, even if adding public disclaimers becomes less useful over time, there will always be a responsibility behind the scenes to catalog the jobs that AI is doing in a given system. For example, it’s critical to keep track of which decisions in a system are made based on personal information. It’s also important to be aware of which decisions are probabilistic, regardless of whether there’s anything personal involved. For the former, there are privacy concerns and evolving laws and regulations to reckon with. And for the latter, there’s always some cost involved when machines get things wrong. If we have a sense of what gets decided by a model, we can pay more attention to the riskiest decisions, in order to avoid the highest costs and recover from mistakes in some elegant way.
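
As a sketch of what that catalog might look like in practice (all of the names and example entries here are hypothetical, just to show the shape of the idea):

    from dataclasses import dataclass

    @dataclass
    class AutomatedDecision:
        """One entry in a catalog of the jobs AI is doing in a system."""
        name: str                 # what the model decides
        uses_personal_data: bool  # flags privacy and regulatory review
        probabilistic: bool       # the model can simply be wrong
        cost_of_error: str        # rough sense of the blast radius

    catalog = [
        AutomatedDecision("rank support tickets", True, True, "delayed responses"),
        AutomatedDecision("flag duplicate invoices", False, True, "manual re-checks"),
    ]

    # The riskiest decisions deserve the most review attention.
    riskiest = [d for d in catalog if d.uses_personal_data and d.probabilistic]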

Outside of the internal responsibilities of the system owners however, I still think that the people on the reflective side of the two-way mirror will ultimately care more about knowing what was created by other people, versus needing confirmation of what specifically came from machines. (Explaining why a machine did what it did is another area that could see more attention soon, and that could become either way harder or easier to do, the more opaque and powerful these models become. But that’s a topic for another day.)


