Is Claude Getting Worse?

According to some, there is a conspiracy brewing at Anthropic. Their models have mysteriously gotten dumber lately, and the company refuses to answer for it. All kinds of anecdotes and explanations are popping up on the ClaudeAI subreddit, including from people who appear to know a lot about how these models work. The drama intensified after someone from the company dropped in to assert that they hadn’t noticed any widespread issues that would cause a global degradation. Everyone loves a good conspiracy, so as the “X-Files” music started playing in my head, I reflected on my recent experiences with Claude, looking for evidence that I was being lied to.

Read More

In Their Own Words

It feels like ages ago now - the pre-meeting-transcript-summary days. Being able to condense an hour of recorded conversation into an organized list of bullet points still feels like a magic trick.

Read More

Speaking Up

One piece of feedback that stands out from earlier in my career is to speak up more in meetings. Sometimes people said it outright. Other times, it was implied by vaguer feedback about fuzzier concepts like scope of influence.

Read More

Focused, Helpful Robots

One thing I like about using tools like ChatGPT or Claude to guide my personal research is the simplicity and mental quiet of the interface. They also have a sincerely helpful attitude that can be rare when you go asking for help online. The feeling I get from using them brings me back to watching “Star Trek: The Next Generation” as a kid. That was a future I wanted to live in. It wasn’t just all that stuff about Earth becoming a utopia - you could say “Computer” and a helpful voice would always respond, one that wasn’t just trying to get you to buy more stuff on Amazon.

Read More

Certified Organic Content

The practice of adding disclaimers to AI-generated content is one way to make sure people know it might be misleading, or totally wrong. In other words, the creator knows that a model’s effectiveness is expected to vary, so they encourage the human seeing it to be more skeptical than usual and maybe cut it some extra slack. However, since AI and misinformation are both so increasingly embedded in our digital experiences, it could make more sense in the future to call out when content is 100% ‘organic’ or reviewed by a human expert.

Read More