Government Says No!
The Pentagon wanted Claude for mass surveillance and autonomous weapons. Anthropic said no. Trump said fire them like dogs. Then it got weird.
This week we're unpacking the biggest AI story of the year — the full-blown collision between the US government and the company that dared to build ethics into its AI model. We recap how Anthropic's constitutional AI framework put it on a collision course with the Department of Defense, what happened when the February 27th deadline passed, and why a label previously reserved for Chinese adversaries like Huawei is now being pointed at an American company.
We get into the leaked memo ordering military commanders to rip Anthropic's technology out of nuclear and cyber systems in 180 days — despite Claude reportedly being actively embedded in military operations right now. We look at the First Amendment lawsuit Anthropic has fired back with, OpenAI's eyebrow-raising decision to step in and take the contract, and the $25 million donation that might explain a thing or two.
And then we bring it back down to earth. Because underneath all the geopolitics, something genuinely exciting is happening with these tools — and if you wrote AI off six months ago, it's time to look again.
The government said no. The computer said no. The question is — what do you say?
00:00 Intro
00:31 The Backstory
08:35 Anthropic Fights Back
15:31 The AI Landscape
18:41 The Agentic Revolution
28:14 Outro
The United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars. So wrote the president of the United States on Truth Social the week before the USA went to war in Iran. Welcome back. This is the AI Transition, and I'm Steven, and with me as always, Lauren.
SPEAKER_01: Steven, woo-hoo! What a lead-in! Oh my god.
SPEAKER_00: Just another small topic for us to cover today.
SPEAKER_01: Look, there's not much out there on this one at the moment, so it's gonna be tricky.
SPEAKER_00: So obviously this is in the news all over the place. We're literally seeing a war getting played out, on top of what played out a couple of weeks ago with Anthropic. Our plan is to recap the background, then how the government said no. Computer said no, and now government says no.
SPEAKER_01: Government says no.
SPEAKER_00: The fightback from the computer dudes, and then potentially something you can do about it all as well.
SPEAKER_01: Computer says no. Well, that's true.
SPEAKER_00: Who says no? Exactly. Well, I think we should maybe say no while we can, to be honest. So that sounds good to me. Maybe we just quickly recap what happened in the previous episode and the lead-up to this.
SPEAKER_01: Sure. So in our last episode, Computer Says No, we talked a little bit, well, not a little bit, a lot, about Anthropic's 23,000-word constitution for its AI model, Claude, which was designed to embed core ethics and values directly in the model rather than a hard set of arbitrary rules, right? And we dove into this awesome badass philosopher they had working with the team who helped develop that framework. It was specifically there to prevent blind obedience, even to government operators. And then we saw things get really interesting with the Pentagon, right? So the US Department of War, it still blows my mind that that's what it's called now. I know, I know. It was Defense, and now it's War. They approached Anthropic saying, hey, we've got a contract for you, but we need blanket permission to use Claude for mass surveillance of American citizens and also for autonomous weapon systems that could fire without a human in the loop. And guess what? That didn't really align with the AI constitution.
UNKNOWN: Yeah.
SPEAKER_01: Fundamental to how Claude is meant to work.
SPEAKER_00: Look, in fairness, and I can't believe I'm saying "in fairness" to the Department of War, the pushback was that they wouldn't have anyone telling them what to do at all. Right. So yes, these are two red lines, but the point for the Department of War was: no, you don't get to decide that. We decide that. We might not do it anyway, but it's us that makes that call, not you. So it was a real power battle going on. That's where we left it a couple of weeks ago. And what happened next was the February 27th deadline passed, which the Department of War had set, and Secretary Pete Hegseth, lovely man, designated Anthropic a prohibited supplier.
SPEAKER_01: Indeed.
SPEAKER_00: Indeed. You know, you're kind of top of the tree talent there.
SPEAKER_01: That's where you go when you're thinking about your military strategy.
SPEAKER_00: It is, it is. And also, where do you go after you've been a news presenter? I mean, if you're in charge of the Department of War for the superpower of the world, what are you going to do next? Where is the next step?
SPEAKER_01: Yeah. Where can we go next ourselves, Stephen? Begs the question. How did I bring us into that whole thing? That was bad.
SPEAKER_00: So they set a deadline, that deadline passed, and they were designated, although it's a little bit ambiguous whether they were or weren't. Do you believe the Truth Social posts? But this designation gives the Secretary of War the authority to exclude a company entirely from competing for military contracts or subcontracts. So basically kicking them out of government, right? Which, for a company that's basically a B2B company, is pretty serious when one of your major clients, the United States government, is saying you're now a risk. And that label has only ever been used in the past for foreign adversaries and non-US companies, which is wild.
SPEAKER_01: Again, mind-blowing. It sounds a little bit like the whole tariff game that gets played as well. Here's the version for you, and we're going to take you out of all the government procurement opportunities and shut you down that way. And did Trump seriously say that? There was a big meltdown on Truth Social, the words are right there. Oh, we're going deep on this, aren't we? He threatened the full power of the presidency and said he's going to fire them like dogs. How do you fire a dog, by the way?
SPEAKER_00: Actually, why would you fire a dog? I don't know.
SPEAKER_01: Well, it depends how obedient they are, huh?
SPEAKER_00: Sorry, and I don't want to go there, but isn't the phrase "shoot them like dogs", not fire them? Sorry, Lauren.
SPEAKER_01: I know you're a cat guy, but oh my god. Anyway, he was talking about Anthropic's attempt to actually enforce its terms of service, which, by the way, are meant to be good, solid ethics, not murdering a bunch of people, as a "disastrous mistake". It's really ironic with what we're seeing going on in Iran.
SPEAKER_00: Yeah. And let's recap what he actually said, which we quoted at the top of the show: "The United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars." So we're going to let the Fox News guy dictate how we wage war instead. There are times when you feel you're living in a sci-fi movie, and to be honest, it doesn't sound like a very good sci-fi movie.
SPEAKER_01: How are they going to turn this around in the third act?
SPEAKER_00: Yeah, I don't quite believe this one, to be honest. But then, interestingly, a memo was leaked saying that all military commanders had to rip Anthropic's technology out of all nuclear missile systems and cyber systems within 180 days. Well, we know how easy it is to decommission systems, Lauren. We've done this all our lives. Just a breeze. You basically go in, unplug it, and that's it, it's done.
SPEAKER_01: And 180 days, right? So about six months. And we're talking about nuclear systems, drones, firefights, you name it. Warfighters, mission-critical activity.
SPEAKER_00: Yes. So the Department of War's Chief Information Officer, and I know I'm kind of laughing at this, but that's my defense mechanism, because this isn't funny stuff, right? Kristen Davis said on March the 6th that these vulnerabilities in AI pose catastrophic risks to warfighters. The catastrophic risks she's alluding to are these ethical boundaries. That's the catastrophic risk. And so they've got 180 days to rip these systems out.
SPEAKER_01: Again, it's just mind-blowing. And she's the only official authorized by the government, if we can call it that, oh no, getting more and more controversial, to grant exemptions for these mission-critical activities where there's no viable alternative. So she's made this call, the government's standing behind it, and it's 180 days to pull this out.
SPEAKER_00: Meanwhile, by all accounts, and some of this is obviously held behind secrecy and the rest of it, the Pentagon has been using Claude. They used Claude all over the Venezuela raids and for Maduro, supposedly. By all accounts, Claude is fully embedded for identifying targets in Iran right now. So it's not like this isn't getting used. And depending on where you stand, is that use ethical or not? I mean, Anthropic set the boundaries at mass surveillance of the American public, although I'm not sure if it's all publics or just the American public, I'll have to go and double-check that one, and at whether the AI systems have that final kill switch where you don't need a human in the loop. But all the way up to that point, they absolutely are getting used right now. So, big, big stuff.
SPEAKER_01: Big stuff. And again, this is just the little we can garner from what's out there in the news as to what's really going on, how deeply it's being used, and what it's actually being used for.
SPEAKER_00: So this then sparked somewhat of a fightback, and there's a massive lawsuit that Anthropic has now filed. I had a brief look over the actual lawsuit itself. Okay, I took the lawsuit, threw it into Claude and got it to summarize it for me. And it's really, really interesting. This is First Amendment stuff, and it's unprecedented. As we said, no American company has ever been hauled over the coals like this. They've done it to companies like Huawei and other Chinese companies, but never to an American company. So there's a massive lawsuit now going through the courts, but you know how long these things take. It probably won't even get through the first rung of the courts in the next three to six months. But we're fighting back.
SPEAKER_01: Yeah, and it's all arguing that the government's actions are unconstitutional because they're explicitly designed to punish Anthropic for communicating its protective viewpoints on safety. So when that constitution was discovered and people started to realize, hang on a second, isn't that in part of the military?, it forced this to come to a head in terms of how much the government can control software that has, if we can call it that, ethics built into it.
SPEAKER_00: It's interesting that Anthropic called it a constitution, because I've heard this now in a number of different podcasts and in reading material: the American military has found this constitution within the model, and it's not the American constitution, so how dare they? Because it should be the American constitution at the base of this, not this Anthropic constitution. Interesting play on words, and it plays to that populist bent, because people are going to say, well, of course it should be the American constitution in there.
SPEAKER_01: Which of course just forgets the global nature of what we're dealing with here, and the arrogance of our friends. Yes.
SPEAKER_00: Well, this is part of the fighting back. Okay, we're fighting back. This is part of the fighting back, yeah.
SPEAKER_01: It's part of us fighting back.
SPEAKER_00: But literally on the same day that the deadline passed, and this was Friday, the 27th of February, Sam Altman came out a couple of hours afterwards and said, hey, don't worry, we've taken the government contract.
SPEAKER_01: We'll do it.
SPEAKER_00: We'll do it. And there was quite a lot of jaw-dropping at that one.
SPEAKER_01: But with the same boundaries. Like, hey guys, we're not going to do anything unethical, but we'll do it. I don't quite understand how this works, because they say, "we haven't actually dropped any of our safety principles to secure this contract". So were the ethics not there before?
SPEAKER_00: Well, actually the argument is that what OpenAI has now signed is exactly what Anthropic was asking for. Now, a lot of people are incredibly skeptical of that. But maybe it's because, and I'm looking back at the notes here, maybe it's because OpenAI isn't a "radical left woke company". Right? Right. Yeah.
SPEAKER_01: They didn't come out with this ethical nonsense, with a philosopher trying to teach their system some values.
SPEAKER_00: Correct. A Scottish philosopher as well, just to... oh well, you're the problem.
SPEAKER_01: There it is, right there.
SPEAKER_00: And more to the point, there was then a backlash against OpenAI itself. There's a shock.
SPEAKER_01: I think it's also the human element. You start to realize, hang on a second, I have this on my phone, I'm paying a subscription for it. I'm kind of contributing here.
SPEAKER_00: Yes.
SPEAKER_01: Yes, we'll get to that in a minute.
SPEAKER_00: So it turns out that the OpenAI president Greg Brockman had made a $25 million donation to Trump. So maybe that has something to do with it. Maybe that's a small reason why OpenAI got the contract and Anthropic didn't. Because surely that $25 million shows they're not a radical left woke organization.
SPEAKER_01: Yeah. Oh my god.
SPEAKER_00: And these numbers keep changing every time we look them up. But they've doubled.
SPEAKER_01: I think we started writing this late last week and then jumped in and checked the numbers. You've got 1.5 million people cancelling ChatGPT; it was 700,000 last week, so it's doubled since we last checked. Behind this there was a big campaign basically saying cancel your subscription, because ChatGPT is a subscription-based model.
SPEAKER_00: Yes.
SPEAKER_01: So this is where your money's going. So lots of public outrage, big celebrities getting behind this and trying to stir up the public's interest. You've even got civil rights groups, I think there are a couple we've quoted there, Common Cause and Young Americans for Liberty, there's a great name, sending letters to Congress saying, hey, we need to halt the use of this altogether: AI for mass surveillance and autonomous weapons. We spoke about this when we first started the pod, that real red flag when you start to see mass surveillance, let alone in this situation. And now we've got the big quit-ChatGPT movement.
SPEAKER_00: And maybe these things are just too big to fail now, and we're nibbling at the elephant around the sides, and this is just going to play out with our lords and masters working it out themselves. But we'll see, right? This has obviously become very public. Hopefully it's breaking through into the mainstream, how important all this stuff is. And that's why we're focusing on it.
SPEAKER_01: Absolutely. And obviously you could go to a really dark place digging into all of it and trying to fix all of it. But it is quite empowering to know that there's some way you can at least put a little in. Like you said, we're under no illusions as to how much of this we can solve. But it's important to know, when you're buying into these paradigms, where your money's actually going.
SPEAKER_00: Yeah. And if this stands, if the American president can basically just take a dislike to a company and destroy it like that, that's pretty serious from a free-market point of view, for what happens to the software industry, et cetera. Or maybe we'd be naive to think otherwise, and this is just the way it runs now.
SPEAKER_01: Yeah, I think we were talking about this a little while ago. We probably weren't naive to it. We knew things would move fast, but what we've seen in capability and functionality, can't use my words, over even the last month has been huge. And particularly now that you see front runners, and we'll talk about this in future pods, like Claude coming out with new capabilities, and how quickly all the competitors rise to the occasion because they see that market opportunity.
SPEAKER_00: Well, why don't we take a quick detour and talk about those competitors and the different sorts of models out there? Because you've been doing some really interesting research on that.
SPEAKER_01: Great, let's just touch on it, Steve. You're very kind. So when we're thinking about how the government was so easily able to impact Anthropic, you've got all these different main players in the market at the moment, if we can call it such a thing. The big ones are OpenAI, Anthropic, and Google, and they're all running quite different businesses in terms of how they actually make their money. OpenAI is primarily a consumer subscription company: they get about 85% of their revenue from individual users, and it's actually a small percentage of those users, the ones paying a subscription, who are keeping them afloat. But they're now pivoting, because they've been trying to hit this huge revenue target out there in the market. So it's interesting times for OpenAI in terms of where it goes next, and this is where you're starting to see the rise of them potentially putting ads in, which is a whole other thing.
SPEAKER_00: Well, that was raised quite a lot a month or two ago, but it's quietly been pushed to the side for a bit, so we'll see what happens there. But that $125 billion revenue target: if it's $20 a month, that's a lot of users to get to.
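A rough back-of-the-envelope version of that arithmetic, using the $125 billion target and $20-a-month price exactly as quoted in the episode (everything else is simple division):

```python
# Back-of-the-envelope check on the revenue maths quoted above.
# The $125 billion target and $20/month price are the episode's figures;
# the rest is simple arithmetic.
revenue_target = 125e9               # dollars per year
monthly_price = 20                   # dollars per subscriber per month
subscribers_needed = revenue_target / (monthly_price * 12)
print(f"{subscribers_needed / 1e6:.0f} million subscribers")  # prints: 521 million subscribers
```

Over half a billion paying subscribers, which is the point being made: at consumer prices, that target is an enormous number of users.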
SPEAKER_01: It's a lot of users, and you just lost a lot, one and a half million, pretty quickly. Then you've got Anthropic, and they're quite different because they're more of a B2B infrastructure company. About 70% of their revenue comes from API token consumption. That's about 300,000-plus business customers, eight of the Fortune 10, and Claude Code hitting about $2.5 billion in nine months.
SPEAKER_00: I think it's really interesting that Anthropic is a different sort of company with different sorts of products. And since we've been getting much more into the Anthropic ecosystem recently, it probably works just a lot better from a business point of view as well. So it'll be interesting to see whether it's the consumer-led or the business-led model that wins. Or is it the dominant player, which is the last one, Lauren?
SPEAKER_01: Absolutely. And then the third: you've got Google's Gemini, which is more of a, well, I think they call it a defensive play, where they're protecting their existing $200 billion ad empire, which kind of blows my mind when you think about the Google ecosystem and how it works. And in terms of government risk, where you've got Anthropic with that deep enterprise integration, it really takes a while to unwind that relationship.
SPEAKER_00: So we've got these three main players, and there are others we've not talked about as well, like Elon Musk's X, you've still got Meta on the sidelines, you've got the Chinese models. But you've got these three very different players playing in these different ways, and who knows which one is going to win out of this. Or maybe it's multiple, maybe this ends up multipolar rather than there being a single winner.
SPEAKER_01: We've seen kind of a big leap, dare I say, in the last few weeks. We've still got that hype bubble around how we're actually going to reduce the costs of doing business, and we're still not seeing those productivity gains out there. So it's interesting to see such volatility in the press around what's happening with the Department of War, while it's also giving you that inkling of this capability and where we're really going.
SPEAKER_00: Why don't we take this down from the macro to lower-level people like ourselves? Not that we are low-level people, but you know what I mean. If we take our heads out of the clouds and down to that individual level. Because I must admit, what I've seen over the last couple of months, and we're going to go into this in a lot more depth in the next couple of podcasts, is the real step change that's happened with these models, in particular with Anthropic and Claude, which is the one that's really hit home for me. This move to an agentic workflow: the ability to stop using it like a search engine, just asking it a question and getting a response, and instead give it tasks, decent tasks as well, and it just goes away and does them, and it works. It's a big step change.
SPEAKER_01: And I think, for us, and this is what we'll talk about now, we're starting to look at migrating away from ChatGPT. Like you said, not just using it like a super-smart Google, but thinking a different game for yourself: what do you actually want these agents, "agentics", got to love that word, to return for you, and what can they do? We're starting to see that next leap now, where you've almost got to consciously go: hang on a second, do I really need to open up all these different screens and copy and paste stuff from here and there? Could I send an agent off to do my bidding and bring it back? That's the big leap I've been really excited about.
SPEAKER_00: I mean, it doesn't sound like much to say, well, I'm going to have an agent in the middle that will copy from here, scrape it over there, open that, write this file, and then do that. But it actually is, because there's usually a lot of skill involved in all those different steps and in tying them together. Whereas the skill now is working out what it is you actually want and being able to tell the agent: can you go and do this for us? And it will go and do the five or ten or fifty or a hundred steps and come back with the output. That's a significant way of interacting.
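The multi-step pattern described above, one instruction in, many steps executed, one result back, can be sketched roughly like this. The "tools" here are toy stand-in functions for illustration only, not any vendor's actual API:

```python
# Minimal, self-contained sketch of the agentic pattern discussed above:
# the user states a goal; the agent runs a chain of steps and returns
# one consolidated result. All tools here are hypothetical stand-ins.

def fetch(source: str) -> str:
    return f"raw data from {source}"        # stand-in for "go and get it"

def extract(text: str) -> str:
    return text.upper()                     # stand-in for "pull out what matters"

def save(text: str) -> str:
    return f"saved: {text}"                 # stand-in for "write the output somewhere"

def run_agent(goal: str) -> str:
    """Run a fixed toy pipeline for the goal and return the final output."""
    plan = [fetch, extract, save]           # a real agent would derive this plan itself
    result = goal
    for step in plan:
        result = step(result)               # each step consumes the previous step's output
    return result

print(run_agent("quarterly report"))        # prints: saved: RAW DATA FROM QUARTERLY REPORT
```

The shift Steven describes is exactly this: the user supplies only the goal, and the chaining of intermediate steps, previously manual copy-and-paste work, happens inside the loop.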
SPEAKER_01: Exactly, that true productivity saving we've all been looking for.
SPEAKER_00: Yes.
SPEAKER_01: You know? Not just stealing it from the creatives. Oh my god, look at my beautiful new graphic I've created, oh, this song that I wrote. Now I'm getting really into my dark place again. But we've both looked at making that move from ChatGPT off the back of this huge blowback over the Department of War. And just seeing what Claude has come out with in Cowork has been mind-blowing. So we thought we'd take a few minutes to talk about how easy it is to actually migrate from ChatGPT, which, even just in the names, going to something called Claude and Cowork, is quite the game changer.
SPEAKER_00: And it's really straightforward. So to ease the concerns of anyone listening: if you do want to bring a lot of the stuff you were doing in ChatGPT over into Claude, there's literally a button now. Once you sign up to Claude and go into the settings, there's an import button for bringing things over from a different LLM. You press it and it gives you a prompt to paste into the other tool, which spews out all this stuff that you then copy back into Claude, and Claude goes away and processes it. Ten minutes, half an hour, an hour, all day, depending on how much you've got there, it will chew through all of it, and then basically most of that memory is there. There are deeper exports you can do, with full data dumps, but that simple one was enough for what I needed, to be honest.
SPEAKER_01: And it was really interesting what it was actually pulling out. You'd think, when I'm migrating from one tool to another, I'm just going to extract all of the data, which is all your projects, all your queries. No, it's actually more focused on pulling out how you think and what you've been using the tool for, which was really fascinating in itself.
SPEAKER_00: I found out a bit about myself, because it's been about two years I've been using it now, and there's what it knew about me and what was in there. As I was skimming over the file I was going, oh god, I'd forgotten about that. And it knows it.
SPEAKER_01: It even knows I've got a veggie patch. Who would have known? Exactly, right?
SPEAKER_00: So all of these things, and how much we're sharing in here, are just part of this. There's a whole separate conversation to have about data privacy and what's going on there. But it's mostly a good thing, this ability to migrate from one platform to another. For an organization it's very different, but for an individual it's trivial. It's so easy.
SPEAKER_01: Two screens next to each other, a bit of copying and pasting.
SPEAKER_00: Yeah, you can do it while you're watching TV: click, click, let it whir away, and then you're off and running. So that's the migration over. And then when you go into Claude, and we'll go into this a lot more in future episodes, you've got a thing called Cowork, which is the more agentic, business-focused side, and this thing called Claude Code, which basically allows you to become a coder through vibe coding. I've been doing a lot of that recently, and it's really, really powerful. Incredibly powerful. And it's those moments now of going: oh, I think there probably is something in this. This is big.
SPEAKER_01: And even the free version. I think we've both invested in it a little bit, the next level up, but even the free version has so much functionality in it. It's huge just chatting to your buddies around the traps. Some of them are like, oh, I whipped up a website, or, I've used it for my research papers, I'm finishing my degree, all on the free subscriptions. It's quite powerful. And I think in the last 24 hours they've come out doubling some access and availability for people too.
SPEAKER_00: Yeah, they have. They're obviously trying to ride this wave and bring people in, and they keep dropping new functionality all the time. There was one earlier this week where you can ask it to teach you something: I want to be trained on such-and-such. One thing I tried was, teach me about compound interest. Really exciting, wonderful stuff, right?
SPEAKER_01: There's a whole podcast in that one. Come back next week, everyone: compound interest.
SPEAKER_00: Indeed. But what it came up with, on the fly, based on how I wanted it to look, was a little training course: an interactive website, an explanation of what was going on, some graphs, blah blah. I've done this for decades, creating training courses, and all of that's gone, because you literally say, can you teach me how to do this, and it's off and running and creating stuff. And it's not just text: it's images, it's interactive sites. It's incredibly powerful now.
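For reference, the compound interest being taught here boils down to a single formula, balance = principal x (1 + rate/n)^(n x years). A minimal sketch, with the example numbers chosen purely for illustration:

```python
# Compound interest: balance = principal * (1 + rate/n) ** (n * years).
# The example figures below are purely illustrative.

def compound_interest(principal: float, annual_rate: float,
                      periods_per_year: int, years: float) -> float:
    """Final balance after compounding at the given rate and frequency."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# $1,000 at 5% a year, compounded monthly, for 10 years
print(round(compound_interest(1000, 0.05, 12, 10), 2))  # prints: 1647.01
```

That's the kind of worked example (plus graphs of the growth curve) the interactive course builds on the fly.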
SPEAKER_01: Incredible. And that's off the back of tools like NotebookLM. It is shocking, Stephen. You think back to the many months, sometimes years, it would take to get these transformations and training programs built, and now there's so much power at the tips of your fingers. Obviously we know there's going to be some percentage of errors in there, but in terms of tapping into your creative mind and what you can understand about the world, it's huge.
SPEAKER_00: And what I'd encourage people: if you used this a year or two ago, or even three or six months ago, and went, eh, it's not very good... In fact, I was out the other night and two of my good friends were laughing about how AI had failed at the one task they gave it that day, and that proves AI is rubbish, right, everyone? Okay, but there are all of these tools over here. There has been a step change in recent months, and I would encourage people to go and explore the new tools that are there, and in a different way as well. So don't just use this as a search agent, like a Google search.
SPEAKER_01: And it's a different sort of daunting too, with how simple and open it is. My partner was doing some research the other day, and I said, oh, you should try this particular tool for that. And it was a little bit mind-blowing: hang on a second, it just allowed me to whip up a whole podcast of my own in about ten minutes, based off a handful of sources and a few links you threw in. So you're starting to see, I guess, a convergence in how some of these tools work together. And once you start to get to know tools like Claude, there is a bit of a different style there in the way it interacts. I don't know, maybe I'm a little bit biased with what's happening, but there's a bit more warmth there. Am I being corny, Stephen, in interacting with Claude?
SPEAKER_00: Or maybe, you know, you've just got more affinity for these woke radical-left organizations.
SPEAKER_01: I don't know. That's the problem right there, isn't it? So it's wide open in terms of the creative side, but it is a shift in terms of how you work. Anyway, we've gone off on a segue. Either way, dig into it. It's very easy to move from one of these tools to another. Get in there, get your hands dirty, play around with some free ones.
SPEAKER_00: So look, we started off with world war, so why don't we end with a bad dad joke, Lauren? Surely you must have one for us.
SPEAKER_01: Stephen, I do think there's some hope for us, because right now even AI's dad jokes aren't any good. They're terrible.
SPEAKER_00: They're hopeless. Yeah.
SPEAKER_01: So I've leaned on some of my old ones. Here's what I've come up with for you. You be the judge. Did you hear about the agent who combined all the books ever written into one big novel?
SPEAKER_00: Uh, no.
SPEAKER_01: Well, it's a long story.
SPEAKER_00: Oh dear.
SPEAKER_01: Oh dear.
SPEAKER_00: Excellent, excellent. So we're still going to survive. There's still a niche for us, Lauren.
SPEAKER_01: Oh look, as long as you need bad dad jokes, everyone, I'm available.
SPEAKER_00: Look, that was a lot. Next time we're going to explore things more at the individual and company level. There are a lot of interesting parts we're going to delve into, particularly what's going on with teams these days, which is at the heart of how most people work. We all work in teams, so how is AI going to affect them? But look, if this has got you thinking, share it with somebody who needs to hear it. Subscribe wherever you're watching or listening. It genuinely helps us keep the channel going. And we'll see you next time. But remember: you still matter, at least for now.
SPEAKER_01: Thanks, Stephen.
SPEAKER_00: Bye, Lauren.



