This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.
I had a viral sandwich skeet. I got more feedback on this sandwich than I’ve gotten on any post on my Twitter account in months.
(LAUGHING) First of all, your sandwich photo got 20 likes. OK?
On Bluesky, that’s half the user base!
Just let me have this.
Fine, you can have it.
[MUSIC PLAYING]
I’m Kevin Roose. I’m a tech columnist at The New York Times.
I’m Casey Newton from Platformer.
And you’re listening to “Hard Fork.”
This week, Bluesky is flying high. But why? Also, the AI jobs apocalypse has started but not the way you think. And finally, it’s time for some hard questions.
I hope they’re not too hard.
So last week on the show, we talked briefly about Bluesky, this new decentralized social media app that is basically a Twitter clone. And then in the past week, Bluesky really had a moment. It’s having a moment. It’s gotten a ton of new sign-ups.
People are calling it the successor to Twitter. It’s sort of taking over at least the very online part of the culture that I inhabit. After our show last week, I would say I got at least 20 texts and DMs from listeners and friends of mine asking for invite codes to Bluesky.
Oh, yeah.
It is definitely a subject of a lot of curiosity and interest and speculation right now. So maybe we can work through it together and try to come up with some ideas about whether or not this actually is the Twitter clone that people have been waiting for.
Yeah, let’s do it.
So walk us through what Bluesky is, where it came from, and what the basic elevator pitch is.
Yeah. So Bluesky is an app. It’s basically a Twitter clone but is different in some key ways. And we’ll talk about that. But it helps to know that it was started by Jack Dorsey in 2019 while he was still CEO of Twitter. Jack had become convinced that Twitter wasn’t going to work as a public company. And so he started Bluesky to build basically a version of Twitter, the website, that couldn’t be controlled by any single company. He wanted to decentralize it.
So fast forward to now. And Bluesky is an independent company that makes the Bluesky app, which lets you view posts using something they created called the AT Protocol. And a really good way to think about the AT Protocol is that it’s like email. Anyone can host an email server. You can access your email from any number of applications. No central authority is in control of email. And as long as your app understands the underlying email protocol, you can access it.
Right. SMTP, I think, is the standard email protocol.
OK, if you want to get that nerdy, then, yes, Kevin. It’s SMTP.
[LAUGHS]: So basically, a protocol is just fancy tech jargon for a thing that allows different apps to talk to each other.
Yeah.
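To make the email analogy a little more concrete, here is a minimal sketch of the idea in Python. The host, endpoint, and field names are illustrative assumptions, not the actual AT Protocol API; the point is only that any client that speaks a shared protocol can read posts from any server that hosts them, the way any mail client can talk to any mail server.

```python
# A minimal sketch of the "protocol, not platform" idea. The host, endpoint,
# and field names below are illustrative assumptions, not the real AT Protocol
# API. The point: any client that speaks the shared protocol can read posts
# from any server that hosts them, much as any mail client can talk to any
# mail server.
import json
import urllib.request


def fetch_posts(host: str, account: str) -> list:
    """Ask whichever server hosts `account` for that account's recent posts."""
    url = f"https://{host}/xrpc/example.feed.getPosts?actor={account}"  # hypothetical endpoint
    with urllib.request.urlopen(url) as response:
        return json.load(response)["posts"]


# The same client code works no matter which server an account lives on,
# because the protocol, not a single company, defines the interface.
for post in fetch_posts("some-server.example", "kevin.example"):
    print(post.get("text", ""))
```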
And Jack Dorsey was interested in this idea of a protocol for social media in part because he was unhappy with the way that Twitter was going and the fact that all of the moderation decisions about what should and shouldn’t be allowed on Twitter were being made by a small group of Twitter employees. Is that roughly right?
There were a bunch of things going on. That was definitely one of them. He was uncomfortable with all of the free speech questions that Twitter was having to answer that it was ultimately not accountable for. No one is elected to the Twitter board to vote on what posts stay up and come down.
And the other really important thing is that Jack Dorsey is a Bitcoin nut. And a huge part of Bitcoin is this idea of decentralization. No one entity is going to control the Bitcoin protocol. He wanted to bring that same idea to a social network. No one entity is going to control it. You’ll be able to build a better experience for yourself on Bluesky.
Right. So the problem that he was trying to solve with Bluesky is this problem of centralized control, the fact that you build a social network, and then one group of employees of that social network has to make all of the decisions about it. And my understanding, even though Jack Dorsey is very interested in crypto and Bitcoin, this is not a crypto thing, right? It does not operate on a blockchain in the way that we conventionally think of blockchains.
No, this is not a blockchain. And I really don’t like the word “decentralization” because you hear it, and your eyes glaze over immediately. And so let’s talk about why you might care.
People have very different feelings about what they want to see in their social feeds. They have different standards about nudity. They have different standards about curse words. They have different standards about hate speech. One of the ideas behind a decentralized social network is that you would have a lot more control over that. You would be able to either use a client that filtered all of that out on your behalf, or you’d be able to install some kind of plugin that would do that within the client of your choice.
I don’t know about you, but for me, the best social network is my group chats, a small handful of group chats. And the reason that they’re so good is that the people in them all know how to act. They all know what the others are going to find funny and interesting. And so whenever I get one of those notifications, I know I’m going to have a good time there. The dream of a decentralized social network is moving the current Twitter experience radically closer to your group chat experience. You only see friends and people who have been vetted. You can imagine the algorithms that it would take to create that. But the hope is that we’ll get there.
Right. So, for example, Bluesky has an official FAQ page. And one of the things that they say in that is that if, for example, the ACLU wanted to make a list of hate groups and make that publicly available, you as a user of Bluesky or another client of this decentralized protocol could just say, I want to block all of the accounts on that list. If you generally agree with the ACLU, and you don’t want to see those groups in your feed, you could just click a button and say, add the ACLU’s hate group block list to my feed, and it would do it. And if you, on the other hand, don’t agree with the ACLU and don’t want that stuff filtered out of your feed, you can use someone else’s algorithm. It allows you to choose your own adventure.
Right. And look. I mean, how much time does the average person want to spend fiddling with their algorithms? I don’t know. I imagine that there will be some default apps and some default algorithms that work for most people. But I think it’s important to say we have not actually had that opportunity yet on the internet. And it could be a really great thing if particularly the folks who just care a lot about these things were able to tweak that.
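As a rough illustration of the subscribable block-list idea described above, here is a small Python sketch. The list URL and field names are made up for the example and are not Bluesky’s actual implementation; it just shows the general shape of picking a moderation list and having your client filter your feed against it.

```python
# A sketch of choose-your-own moderation: a third party publishes a block
# list, your client subscribes to it, and posts from listed accounts are
# filtered out of your feed. The URL and field names are hypothetical.
import json
import urllib.request

BLOCK_LIST_URL = "https://lists.example.org/hate-groups.json"  # hypothetical publisher


def load_block_list(url: str) -> set:
    """Fetch a publicly published block list and return the handles on it."""
    with urllib.request.urlopen(url) as response:
        return set(json.load(response)["accounts"])


def filter_feed(feed: list, blocked: set) -> list:
    """Drop any post whose author appears on a block list you subscribe to."""
    return [post for post in feed if post["author"] not in blocked]


# Swapping moderation rules is just swapping the URL you subscribe to;
# unsubscribing means filtering against an empty set.
```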
So let’s talk about the Bluesky experience.
Which was my band in high school. But go on.
[LAUGHS]: So I got my invite code last week just before we taped the show. And at the time, when I logged on, there was this AI Birduck. But the majority of the people who were posting seemed to be early adopter tech natives like software engineers, crypto fans, people who have been on Bluesky for a long time since the beginning a few months ago.
We shouldn’t say a long time because we’re talking about February.
[LAUGHS]:
But, yeah, they were the OGs.
Right. So the OGs were there. But then in the past week, Bluesky has had just a surge of new users. So —
Before we get there, we have to talk about your iconic first post. Do you want to talk about your post?
[LAUGHS]: Well, so I posted a picture. I had a very good sandwich last week after we taped the show. I had a chicken pesto sandwich on Dutch crunch bread. Are you familiar with Dutch crunch bread?
Dutch crunch is — you can basically only get it in San Francisco.
It’s the best bread.
It’s the best.
I don’t understand why it’s not everywhere.
I don’t either.
So I took this photo of my sandwich in an homage to the kinds of stupid bullshit that people used to post on Twitter back in the day. And I posted my sandwich.
And it was great. And it inspired me to post, I believe later that day, that I’d gotten a new knife sharpener. I just posted a picture of myself with my knife sharpener. There was no point to it. But you know what? I immediately got feedback saying, that’s actually quite a good knife sharpener. Oh, I see you’ve been reading “Cook’s Illustrated.” That’s their top pick. And I said, yes, that actually is why I bought it. Already I was feeling community. And I was having a better time than I’ve had on Twitter in months.
[LAUGHS]: Right. It did feel kind of refreshing to just start from zero followers and just not have so many people watching what you were doing. It just felt like a clean slate.
Absolutely. And I immediately had the sense that the folks on Bluesky — and by the way, this became much truer as the days went on and more people joined. But I truly felt that almost everyone I was seeing on that app was there to have a good time. And it made me realize that on the other apps that I have been using, people are not there to have a good time. They are there for warfare.
It is a battleground. And they are there to win. They are not there to make friends.
Totally. You got to suit up, put on your armor —
Yes.
— get your sword, and go onto the timeline. Whereas with Bluesky, it just felt like people were just messing around.
Absolutely. So, yeah, people are posting pictures not just of sandwiches and knife sharpeners. But they’re posting all kinds of things. I would say that the vibe of Bluesky right now is just a bunch of people who are happy to not be on Twitter anymore, who are using it as an opportunity to reset their own social media personas.
It’s also — I think we should say that a lot of the people who have been most active on it, at least in my experience, are people who did not feel safe on Twitter: trans people, members of marginalized groups. It does feel like those are some of the early adopters of Bluesky, in part because those are people who probably stopped posting on Twitter because every time they posted, they would get some hateful mob in their replies.
Yeah, that’s right. And I think one reason why this site seems fun is that it is mostly like-minded people. A problem that social networks have is context collapse, which is basically that you get millions of people together in a space where everyone has different levels of understanding, different politics, different beliefs. And so, of course, it descends into warfare.
Bluesky isn’t like that right now because everyone who is there has been invited by someone else. And so it’s very ideologically homogeneous in a way that I think some folks out there would criticize. But I think it’s undeniably part of what has made it fun for the people who like those kinds of people.
[LAUGHS]: Yeah. I mean, the sort of prototypical Bluesky user right now is a brain-damaged shitposter. And I say that in the most loving way, someone who’s just had their brain completely damaged by social media, including me, or a journalist.
There’s also — so on the actual app, it looks a lot like Twitter. Their version of the For You feed is something called What’s Hot, which users have taken to calling the hot feed. Over the weekend, I was looking at Bluesky a lot. And I would say it was a mixture of people posting nude thirst traps of themselves.
And I thought it was brave of you to do that.
[LAUGHS]: I did not post a nude thirst trap. But I did see a lot of other people’s nude thirst traps. And there was also just a jokey kind of self-referential thing. There was also something called the hell thread. Did you see the hell thread?
I did see the hell thread. But fortunately, I was not part of the hell thread.
Basically, this is a very new app. They’re still working out some of the bugs. But if you create a thread on Bluesky that gets past a certain size, it just breaks the app.
Yeah.
And so users started tagging each other into the hell thread. So if I tag you in a reply in the hell thread, it just kind of ruins your whole notifications tab.
Well, importantly, I would get a notification every time anyone replied to the hell thread, which at some point meant a significant percentage of the entire app was replying to one thread. And much in the same way that a reply-all fail at your company can actually be one of the best and funniest things that ever happens in your company, it was much the same with the hell thread.
So Bluesky, it’s got more than 50,000 users right now, which is still tiny by social network standards. But I would say that it’s the first time that I’ve experienced a new social media app and actually thought that it might have the potential to dethrone Twitter at least for the subset of people who don’t like the current direction that Twitter is headed in.
Yes. I think that is fair. As somebody who writes mostly about social networks, I will say that I have seen this movie before. I have this pop up restaurant theory of social networking which says that new social networks are like pop up restaurants in big cities where they open up. They’re mostly just serving some reshuffled ingredients you could get at a lot of other restaurants. But they’re shiny and new. And all the early adopter foodies love to go there and check it out.
But guess what? In two weeks, it’s over. And everybody’s back eating at Olive Garden. Right? And we’ve seen it time and time again with apps named Peach and Ello and Vero and other names that you have forgotten. But people really do tend to come and go from these things.
Right. But I would say the thing that I appreciate about Bluesky is two things. One is I actually think there is something to this idea of decentralization.
Yeah.
I think that what we’ve seen over the past decade is that social media networks that are run by small groups of people, mostly people in San Francisco, their content moderation decisions become very controversial. They can get hauled in front of Congress and pressured to do certain things. They’re not robust in the way that frankly something like email is.
And so I do think there is value in experimenting with something that is less centralized. But I think that is actually my secondary thing that I appreciate about it. I think the primary one is just that people are having fun, right? It’s a place where it doesn’t feel like the stakes are as existentially high. People are just sort of goofing off. My favorite post that I saw, someone was describing Bluesky. And they were saying, it feels like everyone’s parents dropped them off at the mall at the same time.
Yeah.
And that’s just the vibe right now. It’s sort of a raucous, unruly party. No one really is investing in it because, honestly, you’re right. We don’t know if this thing is going to stick around. But people are having fun. And they’re being more loose and free than they maybe are on Twitter.
Yeah. And I’m glad that you mentioned that. This is a temporary phenomenon, right? This feeling that it has, it will not survive adding another million or 10 million users. And so there is a certain aspect of get while the getting is good there.
I think the question is, what happens when it doesn’t feel like this anymore? I’m glad it feels like this right now. And so I’m already starting to think, what is the next set of things that they need to do? And there are things they need to do. And we should talk about them.
Like what?
Well, number one, we don’t really know how content moderation works on a decentralized network, right? If there is a very bad post on Instagram, there is a team at Instagram who will remove that. That’s not really true on a decentralized network, particularly one where you have a little federation of servers that all have their own different rules.
Now, right now, on Bluesky, if your account is on the Bluesky server, the default server when you first sign up, there is a team at Bluesky. And they have banned people. There were people who were coming in and saying transphobic things. They had to get banned. And so it worked in that context. But once you expand this ecosystem, and there’s all kinds of clients, and there’s all kinds of servers, there are just many questions about how you’re going to moderate that stuff.
Right. It’s harder to take down a post if someone doxes someone or threatens someone in a violent way. You may be able to delete that from one server or one instance of this protocol. But you can’t delete it from all of them. Is that right?
That’s right. I have also read that your block lists are public, that encoded in the metadata of your account is who you block, which I think is probably necessary for other servers that are trying to figure out what they can show you. You need to expose that to them in some way so they can understand, oh, well, don’t show him Kevin’s sandwich tweets. That’s triggering for him.
[LAUGHS]:
You know? But you can imagine the misuses of being able to know who everyone is blocking. I mean, just publishing people’s block lists in a weird way might become an avenue for abuse. So I think that’s an issue.
And then, look, I think we’ve had a lot of fun talking about some of the nerdier aspects of this platform. I do think there is a very real question of how mainstream this kind of thing can be. Most people are never going to care about decentralization. They just want a shiny, fun social app that Instagram and TikTok are already providing for them.
So one of my questions is, can the Bluesky team make it feel like a really welcoming experience that your less tech-savvy friends and family are going to want to use? Or is this truly just going to be the new Bloomberg terminal for Twitter nerds?
Yeah. Casey, I know it’s early. We’ve only been on this app for a week. But what is your prediction for how this shakes out?
I think Bluesky is going to get a lot bigger at least for a time. I think the next month is really critical for them. I think that the real test is how quickly can they ship new stuff? Because we’ve seen a lot of people come along in the past six months that also want to be the new Twitter. And guess what? They’re just not shipping very quickly. And it hurts. So the faster that these folks can get stuff out the door, the better off they’re going to be.
Yeah. I think it actually does have a real shot. And I would put it at a 50/50 chance that it takes off and becomes a viable alternative to Twitter. I’ve always thought that the thing that replaces Twitter won’t look like Twitter. It’ll be some very different behavior. It’ll be video based, some new user interface.
But I actually think that the mismanagement of Twitter under Elon Musk and the fact that the platform is rapidly becoming unusable for a lot of people has created this one time opportunity for something that feels pretty much like Twitter used to. And it doesn’t actually need a new user experience or a whole lot of new bells and whistles. It can just be what Twitter used to be. And that could actually work. It certainly feels like something that has momentum. And judging by the texts that I’m getting from people begging for invite codes, it does seem to be attracting a lot of the right people.
Would you say the sky’s the limit?
[LAUGHS]: I would say the sky’s the limit.
OK.
[MUSIC PLAYING]
We’ll be right back.
Casey, we’ve been talking a lot on this show about AI and how this new class of generative AI tools like ChatGPT will upend some established businesses and could actually eliminate a lot of jobs. But up until now, this discussion has felt a little speculative. We haven’t actually seen generative AI start to take a toll on existing businesses.
But this week, we actually started to see that happening inside a few big companies. So I want to start by talking about Chegg. Chegg is an educational tech company. It’s a public company. And —
It has one of the dumbest names of any public company. People say that social networks have dumb names. But Chegg with two Gs? Come on.
So Chegg is a company that I have never patronized but that is apparently very popular with college students.
Yeah, it’s a little after your time, Kevin.
So Chegg, I think, started as a textbook rental company. You could rent your physics textbook from them instead of buying it from the store. But they then pivoted into what sounds to me like a kind of Uber-for-cheating-on-your-homework business.
Yeah. This was how you cheated in the world before ChatGPT.
[LAUGHS]: Right. So Chegg is a verb on college campuses now apparently. If you’re having trouble with your problem set, people will just say, oh, I’m going to Chegg it.
Yeah. Chegg I think is actually just French for “cheat.” That’s my understanding.
[LAUGHS]: So Chegg this week announced its quarterly earnings. And during this earnings call, its CEO made some comments that really stood out to me.
And thanks for listening to the Chegg quarterly earnings call by the way. I’m glad we had you on that assignment.
[LAUGHS]: I took one for the team.
[LAUGHS]:
So Chegg’s CEO basically admitted that the service had been struggling to keep up with ChatGPT because so many students are using ChatGPT for help with their homework, that they are not Chegging it anymore. And he actually said that they would stop providing full-year guidance for their revenue forecasts.
Which means they no longer have any idea how much money they’re going to make this year because of AI.
[LAUGHS]: Because of this one tool, ChatGPT, that has taken off across colleges everywhere. So Chegg’s stock is down almost 50 percent on this news. And it is a big company. It’s got thousands of employees. And it is in real trouble right now. So you saw this story. What did you think?
Well, I was fascinated by this story because think about it from a business perspective, Kevin. You’ve got two businesses. One lets you cheat on your homework for a monthly subscription fee. The other one lets you cheat on your homework for free.
[LAUGHS]:
I think it’s clear which one of those businesses is going to succeed among 14-year-olds. Right? And so it does seem like Chegg kind of just got caught up by its own game a little bit. They provided something that was valuable until it wasn’t.
And this is one of the big questions we have about AI. What used to be valuable that no longer is? And Chegg was the first moment where I thought, OK, we’ve been talking in this very theoretical zone about the future of AI and automation. What’s it going to mean for the job market? But this week, it really felt like that chicken came home to roost.
Yeah, that chicken laid a Chegg.
That “Chegg-in” came home to roost.
[LAUGHS]: So I will say I don’t see a lot of people shedding tears for Chegg. It is not a beloved company. College students in particular seem to have a love-hate relationship with it because it does cost money to use. And people have complained that the Chegg experts that you can ask questions of —
The “Chegg-sperts?”
The “Chegg-sperts.”
Sometimes they’re not very responsive, or they don’t give you the answer that you’re looking for in time to use it on your exam. So I’m not seeing a whole lot of love lost for Chegg in this scenario. But I do think that this is an early example of the kind of story that we’re going to be seeing a lot of, which is companies that used to do something related to knowledge work or knowledge production saying, we actually don’t know what the future of our business looks like because ChatGPT has eaten into our core revenue stream.
Yeah. Now, are there any caveats here? Is it possible that Chegg is going to survive the AI apocalypse?
Yeah, so Chegg didn’t just say on its earnings call that its revenue is in trouble because of ChatGPT. It also said that it is investing in more AI for itself. So it’s possible that Chegg incorporates some of these tools into its offering and uses that to catch up.
Which we’re seeing a lot of other companies do, by the way.
Yeah. I mean, I’m thinking of something like Duolingo, which is another app that I would say would have been threatened by generative AI for what it does, which is helping you learn languages and translate things on the fly. But they have since announced that they are incorporating generative AI into their product. So their stock has not taken the kind of hit that Chegg’s did.
There’s a whole class of these businesses. I think a lot about Grammarly, which is a company that you can pay a subscription fee to, to improve your writing. And lots of folks use it. They’ve been emailing me a lot telling me about the investments they’re making in AI. And I’m just like, I don’t know. I just feel like Google Docs is going to implement a free version, Microsoft Office is going to implement a free version. And then good luck trying to out-grammar [LAUGHS]: whatever AI they’re using at that point. So Chegg was really the first blow here. But I think a lot more are coming.
Yeah. Also, this week, IBM —
International Business Machines?
[LAUGHS]: I actually don’t think I could’ve told you what IBM stood for. IBM’s CEO Arvind Krishna said that the company expects to pause hiring for roles that it thinks could be replaced with artificial intelligence in the coming years. Krishna said that they would suspend or slow hiring in back office functions such as human resources that could ultimately be done by AI.
Krishna said, quote, “I could easily see 30 percent of that getting replaced by AI and automation over a five year period.” And he said that some HR functions like providing employment verification letters or moving employees between departments will likely be fully automated and that basically it’s not going to hire any more people to do those jobs. What did you make of that story?
Look, this is the kind of job loss that I think really freaks people out. If you worked in HR at IBM, that’s a good middle class job, right? You and your partner can probably afford to at least rent a nice house, have a couple kids, maybe get them into college. And when that goes away, those folks are going to need to potentially find a new line of work.
And if IBM is saying this, it’s not going to be just IBM, right? There are going to be a lot of other companies that also realize simultaneously that they don’t need as many people working on the back end too. So yeah, if I worked in one of those jobs, this is the kind of thing that would be sending a shiver down my spine right now.
So I’ve thought a lot and written a lot about how and when AI actually is a threat to jobs. And I think one misconception that a lot of people have is that AI is going to lead to mass layoffs. Your CEO will come in one day, and they’ll say, oh, we have this new AI-powered tool that can do everything that the accounts payable department used to do. And so we’re going to just lay you off.
I think instead what’s going to happen is that there’s going to be a slow disappearance of these kind of back office or middle office jobs, these rote white collar jobs that maybe aren’t the sexiest applications of AI technology. But they’re where a lot of the productivity gains actually will be. And those jobs won’t disappear with a snap of the fingers one day.
But I think it will be the case that as people retire or as they change jobs and vacate those positions, they just won’t be refilled. That’s what basically IBM’s CEO is saying, that it’s not like they’re going to lay 8,000 people off tomorrow. But those positions will disappear over time.
But if you work in HR, does that actually matter? You’re still saying I’m going to lose my job. It’s just going to be on a slightly slower time frame.
I don’t think that’s what that means. I think it means that the composition of those jobs will change. So if you’re an HR person, and you used to spend 20 percent of your time giving people their benefits information or writing employment verification letters, maybe that shrinks to 0 percent of your time, but you spend your time doing something else. That task just gets taken over.
I think the real danger here is not mass layoffs at big companies due to AI. I think it’s something that we see over and over again with technology and automation, which is that new competitors enter a market that have many fewer employees than the companies that used to do that kind of work. And the smaller, leaner, more automated company gradually takes market share from the bigger, slower, more human dependent company in a way that results in net job losses. Do you know what I mean?
Yeah, yeah. I mean, that all makes sense to me. I guess I’m just wondering, if I’m the sort of person who would have gotten one of these back office jobs in the past, are you saying, don’t worry, it’s going to be fine, we’ll find something else for you to do? Or does that person need to go find a different skill set?
I think this kind of work, the kind of necessary but boring back and middle office work that happens at big companies, is actually where the disruption from AI will happen first. So, yeah, if I’m an HR person, and I don’t feel like my work is very creative or complex, if I’m basically just doing what they used to call swivel-chair work, basically a human who takes information from one place and puts it into another place, that kind of job I think is in danger.
I went to a dinner last night with the CEO of Box, the enterprise software company, and the CEO of HubSpot, which is an enterprise marketing company. And, of course, we were talking about AI and what is it going to do to the world. And Aaron Levie, who is the CEO of Box, was saying that historically, when we go through these kinds of transitions, it’s less often the case that jobs are lost, as you have been saying, and more often the case that we just try to figure out, well, what can the computer still not do? And that becomes the job.
And so it does feel like we’re moving into a world where the computers can do a lot more things. And so we are going to need to focus more on what the computer can’t do. And I think one reason why we’re scared is because you and I spend so much time talking about how the computer can sure do a lot of stuff now. And it is starting to do it faster. And it’s improving exponentially.
And so I think the real question is, where do those two things meet? Is it the case that, as it has always been historically, we can always find things for the humans to do that the computer can’t? Or do we get to a place where the computer can just do so many things that we actually do have a kind of disruption we haven’t seen before?
Here’s how I’ve been thinking about this question of what jobs are actually safe from being replaced by AI. And I think they fall into three basic categories. The first is just stuff that the computer can’t do yet. And I think that’s what it sounds like Aaron Levie was talking about at this dinner: you just have to look at the AI that exists in the world right now and think, well, what can’t it do?
It can make art. It can write cover letters and college essays. But what are the things that it can’t do? I think right now, a very safe bet is that manual labor, things like plumbing, welding, construction, those things are very hard to automate, things that take place offline in the physical world.
Things you need hands for.
Yes, things you need hands for. That’s a pretty good bet that that is going to be hard to automate. So there’s a whole genre of jobs that AI can’t replace technically.
The second category is sort of things that we won’t want AI to do. That would include, I think, a lot of the jobs where we want human connection. So things like nurses or therapists or teachers. I actually think even if AI could teach you a math lesson as well as a teacher could, we’re still going to want teachers in our society because teachers do more than just convey information. They help you. They nurture you.
They punish you.
[LAUGHS]: They punish you if you’re bad. So there’s a lot more to that job than just taking information from one place and putting it into a student’s head. The third category is just the jobs that I think are going to be protected, the jobs that we won’t let AI do. There are entire sectors of the economy that are very regulated. And I think there are just places where even if AI could technically give you advice like a doctor could, we have regulations that prevent just any old startup from inventing an AI doctor and putting that into every hospital.
Right. That’s why you’ll never see an AI compete on “RuPaul’s Drag Race.” The regulations simply will not allow it.
[LAUGHS]: So I do think that there are some jobs that are protected from automation that fall into one of those categories. But I think there are a lot of jobs that are actually at risk. And so if you are in one of those jobs, if you work in HR or accounting or another one of these white collar professions that have been pretty stable for a number of years, it may be time to start thinking about doing something else.
Or it may be time to start thinking about a union. We need to talk about the labor movement and its relation to all this. And specifically, I think we should talk about what’s happening with the Writers Guild of America.
Yeah. So catch me up on that.
Well, so this week, the WGA, which represents television and screenwriters, went on strike for the first time in 15 years. And it will not surprise you, Kevin, to learn that one of their concerns is limiting the way that AI is used in this industry.
So they have a couple of requests on that front. They want to make sure that literary material — so any writing of scripts or outlines — and also source material — so any of the ideas or drafts or projects — won’t be generated by AI. They don’t want studios essentially coming along and saying, hey, we had ChatGPT write the first draft of a script. Now you go polish it.
And I think this is really interesting, that it’s already come to this. ChatGPT was only released, what, six months ago. And now you already have a major labor union in this country on strike saying, we’re drawing a line right here. And this is not going to happen to us.
Yeah. It’s a really interesting point. And I think I have a couple of things to say about it. One is I do not think that AI poses a short-term risk to screenwriters. I have tried to do some screenwriting tasks with ChatGPT. It’s not very good. It can give you a passable attempt at a “Seinfeld” script. But I don’t think we are going to be seeing big-budget movies that are scripted with ChatGPT anytime soon. It’s just not that good yet. I mean, maybe. And at the same time, how many horrible movies have you seen over the years?
Yeah. I mean, this is one where I’m glad the writers are fighting this because I do think there is a world — when you think about the most formulaic Hollywood blockbusters that are out there, I do think you could have a ChatGPT that is writing the bulk of that within a short amount of time. But I also don’t want to live in that world. I want the writers to continue to get paid.
And I think it’s important to note that one of the reasons that writers don’t want the studios to start using ChatGPT and its rivals for this sort of thing is that it just limits the number of things that writers will be hired to do. Because if you’re the studio, and you want to cut costs wherever you can so you can just keep more of the money for yourself, you want to figure out ways to not have to hire someone who’s represented by this union, because they make more money.
Totally. It’s not that the writers are afraid of ChatGPT. It seems like it’s that they’re afraid that the studios will use ChatGPT to diminish their influence and their earning power, right? It’s easy to imagine a situation in which a studio uses AI to generate ideas for a screenplay or even draft some of that screenplay and then claims that those ideas are source material. And basically, instead of hiring a screenwriter to write a script, they’ll just basically say, OK, you’re polishing up this first draft. You are being hired as a punch-up person for this thing that the AI has already created. And so we’re going to pay you less than we would’ve if you had written it from scratch yourself.
Yeah. So the writers are sort of the first ones to move here. But, Kevin, do you think this means that we’re going to see more unions coming together and fighting over these AI issues?
I do actually. When I was writing my book, I did a lot of research on how labor unions responded to the automation of factories. In the 20th century, there was this huge wave of robots coming into car factories and machine plants and things like that. And labor unions were very active. There were big clashes and strikes and backlash from workers at companies like Ford and GM over this question of automation and how much work should be automated.
And more to the point, when work does get automated and productivity and profits increase as a result, if you’re a car maker, and you used to be able to make 1,000 cars a day with a manual process, and through automation, you’re able to make 10,000 cars a day, and your profits soar as a result, who is getting those profits? Is it just the executives? Is it the companies that implement the automation? Or is it the workers? And I think labor unions were very instrumental in fighting for workers to actually see the fruits of all the increased productivity.
And so I think with this new wave of generative AI, what’s interesting is that the industries that it’s targeting are not historically unionized industries. They’re white collar industries. They’re more creative industries. I think the WGA is a rare example of a kind of union that represents white collar creative workers. And I do think there’s going to be more interest in labor organizing and union activity as these tools get closer to people’s jobs.
Mm. All right.
[MUSIC PLAYING]
Speaking of jobs —
Yeah.
— we have a new job this week, which is that we are going to attempt to be advice columnists.
We’re going to do something the computer can’t do.
Right. We’re going to tell you how to solve your problems. That’s coming up right after the break.
Kevin, every week, some new technology or feature enters our lives, and we are faced with the question, how do I use this ethically? If there are things that the computer can do that it didn’t used to do, am I allowed to use that in the way I want? Or are there some sort of guardrails that I need to prevent me from doing the wrong thing? And it is that very dilemma that inspired our new segment, Hard Questions. And I believe I have a sound effect.
[ROCK MUSIC]
Hard Questions.
That was awesome. I want to fight a dragon now.
[LAUGHS]: So this is a new segment that we are trying out. It’s called Hard Questions. And the basic idea is there are these technologies in our lives that pose ethical and moral quandaries, the kind of stuff that might come up in the group chat where you say to your friends, I’m thinking about doing this thing with technology, or this thing happened with technology. How should I feel about it? How should I proceed?
That’s right. Now, listen. We’re not going to fix your printer. And if you’re having an issue with your router at home, we don’t want to hear about it. But if you’re facing a true dilemma where you can’t decide what to do, that could be good grist for Hard Questions.
Now, because this is the first time we’re doing this, we’ve also been scouring the internet for other dilemmas to give you an idea of the sort of things we want to talk about for this segment. And it was actually on Reddit where I think we found a great place to start.
So first up, this is a question that we found on a subreddit. And the title is “I’m using ChatGPT to breeze through freelance work. Do you think that’s ethical?”
All right. What kind of freelance work?
So this person says that they, quote, “make money across about six different websites on the internet.” And some of this is just summarizing massive pages of text that would take them a lot of time to go through manually. And they say that by using Chat GPT, they went from making $10 an hour to $35 an hour. That’s pretty impressive.
Yeah.
And they wrote, quote, “I legit don’t know if these clients know that this sort of work can be done by an AI in a matter of minutes depending on how much text there is. Every client pays and is very happy with my work. Is this something everyone’s just doing? Or am I lame for using ChatGPT to sift through loads of text for me when that’s what I’m being paid to do?” What do you make of this?
So this person should absolutely continue to use ChatGPT to do this kind of work. They’re making $10 an hour? That’s not enough for any job. Any job in America should be paying you more than that. And if you have found a way to triple your earnings in this way, I say go for it.
Now, I will also say that this little arbitrage grift you’re running has a shelf life, right? This is not going to last for three months. It is definitely not going to last for six months. So I say get while the getting is good. But at some point, you are going to need to find a new grift.
Totally agree. This is the kind of thing that software developers have been doing for years, which is, I need to build an app that does this thing for my company. And they just go to some open source repository. They pull an off the shelf tool. They install it, and they look like a genius. And then they charge for their work. So whoever posted this, keep doing this. But just know that there is a time clock ticking.
Now, does this feel to you in any way in tension with what the WGA folks are mad about? Because here we are saying to this person, yes, use AI to do this kind of writing and knowledge work. And we’re saying to the WGA, well, no, it’s actually better that you’re making a stand and not letting the studios use AI to do your knowledge work.
No, I think it really matters who is using the AI and why. So this is an example of a freelancer who’s using AI to improve their own productivity. I’m not opposed to WGA-represented screenwriters using ChatGPT to punch up their own scripts. I think what they are protesting is the management of these studios using this technology to detract from their power and their autonomy. And that’s where I think they have an issue.
All right. Let’s turn to the next Hard Question. This comes from the subreddit No Stupid Questions. And the question, which is not stupid, is, if you catch your spouse having a deep relationship with an AI, would you consider that cheating? Why or why not? Kevin?
I don’t know how to feel about this because on one level, if I found out that my partner had an AI husband that she was talking to all the time, I would feel a little weird about that. That would not be great.
On the other hand, it’s not literally cheating because there’s no other human involved. And so I would feel a little compromised about my ability to be upset about that. I don’t know. How would you feel if you were dating someone or married to someone, and you found out that they had an AI partner on the side?
Well, look. In the gay world, there are a lot of open relationships, you know? And I think that works for a lot of people and is fine. Sometimes, though, I will be around gay couples. There’s this one time I’m thinking of specifically. And I was walking down the street with this couple. And one of the boyfriends — we walked a mile. And the entire mile that we were walking, the boyfriend was on Grindr looking for somebody else to hang out with that night.
The partner said nothing. But I just looked over. I was like, if I was in the sort of relationship where my partner was just constantly trying to get with somebody else in front of me, that would just be annoying because, well, why am I in this relationship? If we’re in a relationship, we should be relating. And you should not be spending all your time scheming to get with somebody else.
So I want to be generally permissive about this sort of thing. If my boyfriend is talking to an AI and it’s providing some sort of emotional support, or he thinks it’s funny, or maybe I’m hard to talk to about something, and the AI is really easy to talk to about something, that seems fine by me. But if he never looks up from his dang phone when we’re trying to enjoy date night, then that’s going to be a problem.
Right. I think it’s about not the possibility that you’ll leave your partner for an AI. I think it’s more are you distracted? Are you present with the person that you’re actually with? Or are you just spending all your time chatting with this robot?
I will also say so many people are already in a more serious relationship with their phone than they are with their partner.
100 percent.
Walk around this world. Look at the couples at restaurants. Are they talking to each other? Or are they looking at their phones?
Totally. Totally. So I would put this in the same category as my partner has an addiction to a mobile video game or something. And they’re spending all their time playing that, not with me. That is a problem in a relationship. It is not cheating, but it is a problem. And that’s the same category that I would put this in.
That’s right. And I have been meaning to talk to you about your “Marvel Snap” addiction actually.
[LAUGHS]: I actually deleted it from my phone this week —
Good for you.
— because I was like, this is going to colonize my life. This is taking over. I can’t play this game anymore. I was cheating on my partner with “Marvel Snap.” And it was a problem.
[LAUGHS]:
OK. This next question comes from the Stable Diffusion subreddit. This person says, “I have been selling some of my or the AI’s work on T-shirts and NFTs. Is it ethical to sell art trained on such a wide array of real artists work? Am I in the wrong?” What do you think?
Well, so this is a great question because this is an unresolved legal issue, which is, if you enter text into a text-to-image generator or a text-to-video generator or ChatGPT, is the work product that it has created a transformative use, a fair use of the material that was used to train the model? Or is that an illegal infringement on copyright or some other rights? And the courts have not yet decided this.
Here’s what I would say. If you are someone who is concerned about the ethics of using those images, which I think is a good thing to be, then you should hunt for image generators that are trained on licensed images. So Adobe has a beta of a product right now called Firefly. It does, among other things, text-to-image generation, not unlike Stable Diffusion or DALL-E.
And the gimmick is that they’re saying, all of the art that is in here, we have the rights to, that anybody whose work is in here is not going to come forward later and say they were never allowed to use that. My hope is that this is a really good image generator and that folks can use it to create transformative works and feel good about the things that they have made. And if we’re able to get to that world, then I think we can actually solve a lot of the current angst around the work product of these generative AIs.
See, I feel differently about this, which is that I think that the people who are upset about AI imagery and copyright are basically drawing a line in the sand that has never been drawn before, because these works are not borrowing portions of images from other copyrighted images. They are new creations. And artists have always borrowed and stolen from one another. No artist’s ideas are completely original. They have always studied and learned from other artists and other art in the process of coming up with their own ideas. And so I think this is sort of an automated version of that. But I don’t think it’s actually any different than what artists have been doing for millennia.
But the automation is what makes it feel unethical, right? Because if I’m Picasso, and I want to steal a move from Van Gogh, I still have to paint the dang painting, right? But if I’m just somebody who wants to create a Van Gogh-like image, and I’m able to use a system that has all of his images, and, all of a sudden, I can trade on all of the equity that Van Gogh has built up in the images that he’s created, it does feel different.
I don’t know. I remember one time I went to the Louvre in Paris. And outside the Louvre, there was a guy who was painting the “Mona Lisa.” He had a stand on the street. And he was painting little miniature versions of the “Mona Lisa” and selling them to tourists.
And maybe that’s offensive to you if you’re a Leonardo da Vinci stan and you’re like, why is this man profiting off of replicas of this very famous painting? But the tourists didn’t care. They just thought it was cool that there was someone who could paint something that sort of looked like the “Mona Lisa” that they could buy and take home and put on the wall.
Yes, but he was painting, right? That’s the whole issue here is that these other folks aren’t painting. They’re typing.
Right. But how creative is that person who’s just making literal replicas of the “Mona Lisa?” That’s not a creative act. It might be a slow creative act. But I don’t actually think the speed of it matters at all.
OK. So when somebody inevitably in two years says, write me a book about automation in the style of Kevin Roose, and on the cover, it says, “Automation 2025 in the Style of Kevin Roose,” you say, well, that’s just baseball.
I think two things matter. I think what matters, A, is whether the artist who is being synthesized, impersonated, copied is alive or dead. Is this Leonardo da Vinci or one of the other great masters that people have been studying for hundreds of years and imitating? Or is it someone who’s alive today trying to make a living from selling art?
Like Banksy.
[LAUGHS]: Right. And I also think it’s important what sort of representation is being made about the synthetic work. Is this person who is selling T-shirts and NFTs, are they saying, this is a Banksy? Or are they saying, this is an original creation, and maybe my prompt said something about Banksy in it, but it’s actually not being sold as an authentic Banksy? So I would say in general, I am less concerned about people copying or borrowing from other artists or creators that they admire, because that kind of thing has been happening in a less automated way for centuries.
And I think that is fair. I would just say again that if you care about the ethics of this, you do have ethical alternatives, right? For example, if you want to do something with music, Grimes has now said, yeah, use my voice. Interestingly, she later followed up and said, don’t use it to write Nazi lyrics, or I might actually make you stop that. So she drew a boundary, which I think makes a lot of sense.
But we now know. If you want to go make synthetic music with the voice of a popular artist, you can. And it will be ethical. So if you’re concerned about the ethics, find ethically sourced stuff.
OK. Casey, I have a Hard Question for you.
OK.
This has nothing to do with AI.
OK.
But it happened to me last night.
OK.
So a thing that I do for myself about once a year is that I go to Uniqlo, the clothing store. And I buy a bunch of socks and underwear because it makes the best men’s socks and underwear of anywhere on the planet that I have found.
Great basics.
Great basics. But they don’t last very long. So about every year, I go to Uniqlo, and I buy half a dozen pairs of underwear and half a dozen pairs of socks. And I add them into my rotation.
This is starting to feel like a word problem. And I’m getting nervous.
[LAUGHS]: So last night, I went to Uniqlo to do my annual socks and underwear run. And I don’t know if you’ve been to Uniqlo lately, but they have these fancy, automated self checkout things.
OK.
Have you seen these?
No, I haven’t seen this.
OK, so it’s not like the supermarket where you have to individually scan every item and put it into the bag. It’s a cash register with a touchscreen, and then it’s got a little bin. And all of the items have little tags on them such that you can just dump them all into the bin, and it will automatically figure out what you bought and how much you owe. And then you pay for it.
What have we wanted from grocery stores for years if not exactly this?
[LAUGHS]: Right. It’s a cool system. So last night, I go in. I take my six pairs of underwear, my six pairs of socks. And the lady says, just throw them in the bin. So I do that. And I pay, and I go home.
And I’m home. And I’m unpacking the stuff from the bag. And I look at the receipt. And Uniqlo’s automated checkout system has only charged me for three pairs of underwear.
[LAUGHS]:
So I accidentally shoplifted three pairs of underwear from Uniqlo. My question to you is, is this my fault? Do I need to go back to Uniqlo, return the pairs of underwear that I was not charged for? Or is this their fault because their automated fancy checkout system did not accurately tag the number of pairs of underwear that I had in my cart?
All right. Follow up question. What’s the value of the shoplifted underwear?
I think probably $20.
[SIGHS]: See, $20 is sort of right at my line. I think most of us have had the experience of you get out of the grocery store. You’re packing the stuff up into your trunk. And you look down, and there’s a watermelon that’s underneath the basket. And you forgot to put that on the conveyor belt.
You’re like, am I really going to — but you know what? God knows they’re overcharging me for razors and cheese in there. It’s all going to come out in the wash. At $20, I actually (LAUGHING) think you have an ethical obligation to go in. And now what I’m hoping is they will say, first of all, you’re the most honest customer we’ve ever dealt with in the history of Uniqlo.
[LAUGHS]:
And they’re going to take your picture, and they’re going to put it on a wall. And they’re going to say, more customers should be like this guy. And hopefully, they’re going to say, you know what? This one is on us. But I will say this. You will definitely feel better about yourself if you do that.
I was torn on this one because, on one hand, they didn’t have to implement this automated fancy checkout system. If they just had a normal register or even a self checkout system like the grocery store where it makes a beep when you scan each item, I wouldn’t have done this. But they promoted this to me. They said, look, you can just dump all your stuff in the thing, and it’ll track it all.
OK. But if a human had made this mistake, and we were having the same debate, you wouldn’t be sitting there going, well, look. They didn’t have to hire that guy who didn’t know how to count. They just did. That’s their problem.
Well, and this is the danger of over automation and why companies should be very careful about replacing humans with robots because sometimes they’ll give people free underwear. And if they don’t have the moral scruples that you do, they’ll just keep them.
[LAUGHS]: Also, 100 percent of people who hear this are going to be like, Casey’s virtue signaling. In the real world, he would never take the underwear back.
Yeah, I’m calling bullshit. I don’t think you would take the underwear back. I think you’ve got to have that conversation. I do. I don’t want to feel like every time I get dressed in the morning, I’m putting on my crime underwear.
So, I mean, if only to absolve myself of the moral guilt so that these underwear feel fairly procured, I may go back to Uniqlo.
Yeah. Those questions were legitimately hard in some cases.
Yes, very hard. I’m looking forward to getting more questions from our listeners.
Oh, me too.
If you have a question, an ethical dilemma — it doesn’t have to be about AI. It could be about any tech product that you are using that is giving you some moral pause or raising questions for you.
Or helped you shoplift on accident.
[LAUGHS]: Yeah, tell us about it. Send us a voice memo. Put “Hard Questions” in the subject line. And just tell us what you’re struggling with. And we’ll see if we can help.
This is fun. What other small crimes have you committed?
[LAUGHS]:
[MUSIC PLAYING]
“Hard Fork” is produced by Rachel Cohn and Davis Land. We’re edited by Jen Poyant. This episode was fact checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, Sofia Lanman, and Rowan Niemisto. Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda.
As always, you can email us at [email protected]. And if you’re thinking about texting me for a Bluesky invite code, just note that I don’t have any.
I actually have a few.
Oh, yeah. Well. No. OK.
[LAUGHS]:
I was thinking about starting a sandwich only alt account.
Maybe I’ll have to wait.
[MUSIC CONTINUES]