5 Comments
Jul 8 · edited Jul 8 · Liked by Chris Best

Very interesting conversation. The bit about pushing out his own point of view at scale is how I think we’ll see the browsing experience evolve over time: you’ll have some custom GPT filtering what gets rendered on your screen through the perspective of people you trust. Since the people you most trust can’t do that at scale, I think they’ll have an AI mirror do it for them, trained on a smaller corpus of their writing.

On the God stuff, interesting to hear a perspective of God in terms of “power” or ability whereas my definition is something like “the author of the moral shape of the universe” or “goodness itself.” Even giant weird space intelligences are just regular guys from my perspective.


I'm glad you guys didn't pull any punches.

Especially regarding Zeus. Dude's an animal. Literally very often when he surprises women. RIP Noah ⚡⚡⚡


Noah writes, "I'm kind of sad that the internet is going to be crammed with AI slop from now until the end of time."

Hmm.... I don't understand why people are worried about AI slop, but not human slop.

For example, social media is the biggest pile of human-generated slop content in human history. ChatGPT's output is of higher quality than what most humans post on social media most of the time. I've experimented with this on Notes, and it's super easy to create the highest-quality comment in a thread simply by entering a quick one-sentence prompt into ChatGPT. Why don't we evaluate a piece of content based on what it adds to the conversation, instead of how it was created?

Here's how we might fix social media. Create a platform that uses an LLM as its front end. The user enters a prompt into the LLM, and the LLM generates text. If the user likes what the LLM has generated, they click submit and the text is posted to the platform. If they don't like the LLM text, they click delete and can try again. The LLM text cannot be edited, and there is no other way to post.
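To make that submit-or-discard loop concrete, here's a minimal sketch. All of the names (generate_text, compose_post, user_accepts) are hypothetical placeholders, not part of any real platform or model API.

```python
# Sketch of the proposed posting flow: the user can only accept the
# LLM's output verbatim or discard it and try a new prompt.

def generate_text(prompt: str) -> str:
    """Stand-in for a call to the platform's language model."""
    raise NotImplementedError("wire this up to an actual LLM")

def compose_post(prompt: str, user_accepts) -> str | None:
    """Return the accepted LLM text, or None if the user gives up.

    The user never edits the draft directly; accepting it publishes
    the text exactly as the model produced it.
    """
    while prompt:
        draft = generate_text(prompt)
        if user_accepts(draft):
            return draft  # posted to the platform as-is
        prompt = input("New prompt (leave blank to cancel): ")
    return None  # nothing gets published
```

The point of the design is that the only editable surface is the prompt, which is what forces every post through the model's editorial pass.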

Using an LLM as the front end to a social media platform would accomplish two important things.

1) All the negative, name-calling, trolling-type behavior that has driven so many people off Twitter and Facebook (and soon Notes) would never be published, so it wouldn't have to be managed.

2) The quality of the content on the platform would, on average, substantially exceed what humans post on their own.

Think of the LLM as we would a human editor, whose job is to take the author's idea and enhance it while cutting out the crap.

Yes, this would be very unpopular with many users. Here's why: they're human beings, and so they care only about their own egos, not the quality of the platform. Everything is all about "me". The equation driving social media is that users want attention and validation for the least possible effort on their part, and so most of the content is short-form slop.

The Internet is going to be crammed with human-generated slop from now until the end of time.


Good interview. Would a Big Bird god create AI slop? AI can imagine an evil Big Bird, which makes it fundamentally a dick.


V kool Noah & Chris...v interesting. Big Bird rules...religio-eco-AI slop...v v kool 💯
