Interviews with Microsoft CEO Satya Nadella and CTO Kevin Scott About the AI Platform Shift

Good morning,

This week we have two Stratechery Interviews: the first is with Microsoft CEO Satya Nadella, and the second is with Microsoft CTO Kevin Scott. I have interviewed both previously.

The context of these interviews was the Microsoft events this week. My interview with Nadella occurred after the Windows event I wrote an Article about, but before the Build keynote I wrote about yesterday. I interviewed Scott after the Build keynote (but before I wrote yesterday’s Update).

My discussion with both executives touched on Microsoft’s announcements this week, but also explored the strategic considerations undergirding the current AI wave. I asked both about the OpenAI partnership, and how Microsoft managed to pivot to AI across the company. We also covered competition with Google, and why the current levels of capital investment will pay off. Most importantly, we talked about the nature of platforms and how Microsoft is trying to capitalize on the AI platform shift (and what that actually means).

As a reminder, all Stratechery content, including interviews, is available as a podcast; click the link at the top of this email to add Stratechery to your podcast player.

On to the Interviews (jump to Kevin Scott):

An Interview with Satya Nadella About Aligning Microsoft and AI

This interview is lightly edited for clarity.

Microsoft’s Alignment

Satya Nadella, welcome back to Stratechery.

SN: Thank you so much, Ben.

Although I guess I’m in your neck of the woods here, it is transformed, to say the least.

SN: You bet. Who would’ve thought here we have a new campus and a cricket field in there.

(laughing) Yeah, did they build that just for you?

SN: In fact, this is the first time this morning I walked up there, I had not seen it actually, and man, it’s just beautiful. On a spring day in Seattle, it feels just right.

You took away my precious soccer field.

SN: We have soccer too! We even have pickleball courts, I believe.

Excellent. Well, I’m referring to the fact that I worked at Microsoft. I was actually here for the first Build in 2012, which was a very dysfunctional event. I’m not sure if you remember, it was sort of competing with another consumer-focused event put on by the Windows team in New York City. Today, Microsoft seems like a much more unified company. What would you say has changed culturally and organizationally over that time, which obviously corresponds with your tenure as CEO?

SN: Well, let me put it this way, Ben. For me, my memories go back to the PDC [Professional Developers Conference] in ’91, when we first talked about Win32. That’s when — in fact, I was working at Sun at that time, and I’d not even joined Microsoft, and it was very clear to me as to what was going to happen, the PC and what it was going to do, and then the server architecture was pretty clear to me even in ’91.

That’s what I feel is super important for any tech company, which is to have what I describe as a complete thought, when for me, the complete thought starts with, “What’s the system innovation?” — whether it’s silicon, the operating system, the app platform. Then why is this going to be desirable for any consumer or any app developer? So to me, that’s what it takes, whether it’s on the Azure side or on Windows side, and even this morning, the Windows Copilot+ PC, it’s been a long time since we had that complete thought where we have — the Arm stuff, it’s been a journey and I feel like we got that. We’ve got the application platform. Microsoft itself building applications, and third parties developing applications. So to me, culturally what allows you to build complete products is what I think one has to strive for.

You mentioned consumers and developers. You didn’t say the word “enterprise”. Do you feel that complete thought is working on all of those three levels right now?

SN: That’s a great question. You see, one of the things I think about is enterprises are also end users. In fact, I feel like at Microsoft, whenever we’ve been at our best, we’ve been able to — if you remember, we’ve always been a knowledge worker company.

Right. But there’s that bifurcation between buyers and users that I think does make enterprise sometimes different.

SN: Not the company I joined in ’90, in the early 90s, quite frankly.

That’s a good point because it was just developers back then.

SN: And end users. So one of the things that I go back to is always that, where we really thought about the end user. To me, Excel was a consumer category or an end user category.

It was in the 80s for sure.

SN: Long before it became an IT thing. So therefore going back, it doesn’t mean that I want us to now somehow not do things that are really addressing IT needs. In fact, Paul Maritz once said, “Hey, the magic is about end users, developers, and IT”. That harmonization you’ve got to be great at, in order to be a great enterprise company by the way. That means if you take that equation of developers and end users, you can be a great consumer company, and in categories, right? Consumer is now such a broad thing. It may mean many, many things. We are not going to be doing Hollywood movies or a lot of other things, but when it comes to gaming, and I’ll call it productivity stuff, we want to do fantastic work.

It is fair, I will grant you that. Microsoft, people forget, was very much the disruptive entry both in the consumer space and PC space, but also in the server space, using commodity hardware, things along those lines. Was there a shift in which comes first? I think there might’ve been a period of time where Microsoft had great aspirations around the consumer space, but maybe what built the PC was that people used PCs at work, then they wanted PCs at home, and the phone obviously turned out somewhat differently in that regard. But is that still sort of a sweet spot for Microsoft?

SN: Yeah, I think that’s the sweet spot. Let’s take even just Windows, I want us to build great Windows PCs for the people who cross over between work and home, and even in the form factors. Let’s not even go fanciful about it, 200 plus million PCs are sold every year, and I want us to build the best PCs with the best battery life, with the best performance, and if this AI wave is upon us, then let’s redesign the operating system and the hardware. So that’s kind of what I want us to really do a good job of, and today was a good step in that direction.

Copilot+ PCs

Yeah, we’re talking just minutes after your Windows hardware event, I was pretty impressed, I’m not just saying that because I’m sitting across from you, I thought it was compelling, it seemed to deliver a real sense of why to buy Windows that has not seemed to exist basically since the browser came along and took applications off the table. How do you leverage that and lean into that? Is there going to be a big shift in your go-to-market? Is it just going to be leaning on OEMs and you think it’s going to sell itself? Are you going to heavily invest in a way you didn’t previously, or have you always been investing?

SN: We’ve always been investing, but the thing though is, timing is everything in tech, right? Which is, we’ve been at it on Arm, we’ve been talking about NPUs for a long time.

Yeah, a decade ago, you launched an Arm PC, I was there.

SN: So the point though is it’s coming together. Think about what’s just happened. With all these models that are out there and the ability for us to have, whether it’s from a privacy perspective or a latency perspective, COGS perspective, to have onboard models because when you use-

You sounded like Apple up there, to be honest. Number one, you said “MacBook” more than Apple did in the Jony Ive era, so that was something, you’re leaning into the comparison, that’s for sure. But so much talk about local privacy, things along those lines, but you bring up the relevant point about these local models — COGS, if you’re using your customer’s energy, it’s effectively free from your point of view. How far do you think you can lean into this? For example, the AI PC, I was waiting for the specs, you did drop it, 16 gigabytes of memory, that’s good for Windows, still pretty small for AI.

SN: Now with the 45 TOPS, I feel like this Copilot+ PC gets us there. I love the first step we took with the AI PC, but with this Copilot+ PC, I think we are there. By the way, I’m a big believer that distributed computing will remain distributed, so it’s actually in concert. Take even Recall, which is I think a pretty killer feature which we’ve been working on again for a long time.

I’ve been running Rewind on Mac, and it’s kind of a superpower for sure.

SN: Yeah and the point is now with Semantic Index, the fact that I can type in a natural language query and Recall it, even the stuff that we forget. I remember things visually, I remember things by association, and now to be able to not learn search, but to be able to just type in my intent and recall it. But here’s the interesting thing about Recall — if you notice it, it’ll not only come back to the content, but I can invoke this very moment because of the Semantic Index, and that ability requires a lot of onboard compute.

The other fascinating thing, and one of the demos I love and was playing with, is that you can be playing Call of Duty while using the NPU, all 45 TOPS, without damaging your battery or your gaming. That ability to have an operating system that knows how to use all the silicon and all the system appropriately is, I think, going to be a real breakthrough for us.

It’s easy to demo this, and you did a really cool demo with the drawing and the enhanced drawing on the Surface PCs. The eternal question, though, and Android ran into the same thing that Windows did before: yes, you have this defined spec for these AI PCs, but how do you deliver that consistent experience?

SN: That’s a great question, Ben. That has been one of the struggles of our ecosystem. I think we are all being schooled, quite frankly, on how to really, one, get the operating system right — and it’s quite frankly the silicon, right? If I think about the amount of work we did with Qualcomm to get their silicon right, now what Intel is doing, what AMD is doing — in fact, if I draw the parallel to what’s happening in the cloud, I’m thrilled to have some of the best folks who know a thing or two about silicon all putting their energy into building fantastic silicon.

Is it a lot easier when you’re challenging someone instead of when you’re sitting on top to sort of herd cats, get them going in the right direction?

SN: I think so. The competitive juices flow better, we are more disciplined, we execute with more rigor when you have something to go win, and so therefore, that’s good, and so we have the best silicon innovation. I don’t know if you noticed, the OEM innovation —

I’ll go down there afterwards.

SN: You should check it out. Dell’s all in, HP is all in, Samsung, Acer, Lenovo. Again, in terms of us, when was the last time we were able, and by the way, there’s Surface setting the tone, but this is not about Surface setting the tone and no one following, right? The fact that we were able to bring everything together, it’s actually a testament to the ecosystem quite frankly, and the leadership there to say, “Look, let’s take our shot, this is it, these things come once in a decade, once in a generation”.

But I think, even going back, on these very fields 30 years ago, we launched Windows 95. You could say, “Oh, that was the height of Windows” — except, you know what, even at the height of Windows, we forgot one thing called the Internet. In fact, it was SR-1 that December when we launched the Internet stuff, which was the browser. Here we are though with the AI age, and I feel much better structurally: we have more to win, and the entire ecosystem is innovating with us.

AI Platforms

There is a question. You keep referring to AI as this platform opportunity. The question that I had, even when you were doing your introduction, I was preparing this, is to what extent can there be a platform opportunity that is not associated with hardware, that does not have that paradigm shift, whether it be that go-to-market or fully revealing those features? In that regard, this presentation was interesting because it was a very tangible, “Look, this makes Windows better, it’s a device you can buy that gives you access to these capabilities”. I’ve written about how I think one of your great triumphs was basically uncentering Windows in terms of Microsoft. Of course it’s important for you, but it’s not going to be the hub around everything which everything pivots. Is there a bit where you’re able to come full circle in a way you weren’t before? How important is Windows going to be as a driver for you going forward? Is it actually essential to realizing this platform opportunity, or can you still get that opportunity on iOS or Android?

SN: Oh yeah. I mean one of the things is I’m very, very grounded on where the world is today, versus just magical thinking. Second, I want us to also, at the same time, bring every layer of our stuff together into a cohesive architecture in the interests of developers and end users.

One Windows, I think there was a memo about that right when I left Microsoft, it was perfect timing, I got to write about it. I thought it was insane, but now it makes more sense.

SN: Because at some level, I quite frankly feel like we have to really make sure we do our best work for these 200 plus million devices that are sold. That doesn’t mean the other billion devices that are sold are not important. The other billion devices, we need to do great innovation, and I’ll come to that, but first let’s take the 200 million Windows users and say, “Hey, what can we do with this platform shift that is magical for them?”. That’s where from silicon to experience to third party developers — and by the way, not in isolation, Windows just doesn’t live on its own.

I don’t know if you caught that, but there was something today that was super key: take even AI. There are two challenges or two things I would love — I want my privacy and I want my safety. There’s no way to deliver safety on frontier models or latest models if you don’t have classifiers that are constantly learning based on all adversarial attacks that are happening like that last hour, and that is going to be done in the cloud. So I want to be able to call a cloud service. It’s kind of like Windows Defender, how do you have a Windows Defender if you’re not connected to the cloud? Same thing with AI safety. So you want the cloud doing what it does well and you want the client doing what it does well, and that’s, I think, the key.

The other interesting thing is I’m really excited about this Copilot Runtime. To me, I wanted a real namespace — by the way, the WebNN thing, which is so cool, I can write some JavaScript and use WebNN to take a model, and then have the NPU go off locally. I can go to GAP.com or any website, and now I can start adding AI features and offload the AI locally. That’s the type of stuff that I think with the cloud, the web, and the edge coming together is a cohesive thought.
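(To make the offload idea above concrete: WebNN is a browser JavaScript API, so as a rough Python analog, here is a minimal sketch of the same local-first pattern using ONNX Runtime, one common way apps target local accelerators. The model path is a placeholder, and which execution providers exist depends entirely on your onnxruntime build and hardware; this is an illustration, not anything Microsoft announced.)

```python
# Sketch: prefer a local NPU (Qualcomm QNN) or DirectML, fall back to CPU.
# "model.onnx" is a placeholder model file.
import onnxruntime as ort

preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)
print("executing on:", session.get_providers()[0])  # NPU if present, else fallback
# session.run(None, {input_name: input_array}) then does inference entirely
# on-device, with nothing sent to the cloud.
```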

That local-plus-cloud approach, in fact, gives us a leg up when we build for Android. In fact, one of the things you’ll hear us talk about at Build tomorrow, which I’m excited about, is take Phi, right? You now as a developer can use Phi in the cloud on Azure AI as a managed model-as-a-service, you can use the silicon thing that is there, basically Phi Silica, which is onboard on Windows, or you can wrap it into your app and then get it to Android and iOS as well. That’s how I think we’ll go about it.
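(For flavor, a minimal sketch of the managed “models as a service” path Nadella describes, using the azure-ai-inference Python SDK against a hypothetical serverless Phi deployment; the endpoint and key environment variables are placeholders, and the exact deployment naming is an assumption.)

```python
# Sketch: call a Phi model deployed as a serverless endpoint on Azure AI.
# pip install azure-ai-inference; endpoint/key values are placeholders.
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_PHI_ENDPOINT"],  # hypothetical deployment URL
    credential=AzureKeyCredential(os.environ["AZURE_PHI_KEY"]),
)

response = client.complete(
    messages=[
        SystemMessage(content="You are a concise assistant."),
        UserMessage(content="Why do small models matter for on-device AI?"),
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```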

The compliment to the presentation is that I told [Microsoft Chief Communications Officer] Frank [Shaw] beforehand we weren’t going to talk about Windows at all, because I have some bigger picture things, but I thought it was that compelling: it felt like there’s actually something meaningful there, a reason to consider even the category in a different way.

SN: We’ll get you back on Windows, Ben.

I have nothing against Windows, I just hate change!

The OpenAI Partnership

Microsoft seems like a much more unified company, I mentioned that a bit before, how important is that when you go to organizations, to know they have the weight of a company that’s aligned behind them?

SN: You mean inside Microsoft?

No, going to external customers, like big enterprises, big companies.

SN: I would say there’s one thing that I’m very, very focused on in what customers expect from us: just because we are one company and all these pieces come together, integration matters, but each layer also has to stand on its own.

So to me, the way I think about Microsoft is yes, ultimately we are not like a conglomerate, we have to have a real thesis that there is a cohesiveness to the architecture. Customers care about us bringing that integration value, but they also very deeply care about each thing being competitive. So yes, the customers care about it, and internally we have to hold ourselves to it. In fact, we are at our best when it’s not just integration; it has to be integration plus competitiveness of every layer of the stack.

So when you talk about the integration of One Microsoft though, how do you resolve that with the OpenAI partnership? Has there been an increase in concern about that? Customers might say, “Look, Redmond is great, you’re all moving in the right direction, but there seems to be this dependency here that we’re not sure you have control over, which means we don’t have control over it”. How are those conversations going?

SN: To us, I would say the OpenAI partnership is in the same class as, say, the Intel partnership back in the day, or the SAP partnership when we were building SQL or what have you, because it’s industry-defining and Microsoft-defining, so therefore we are very invested in that partnership. It’s simple logic which is, “Hey look, this is about compute”, so therefore—

He who owns compute runs the world?

SN: Right. The unconventional bet was back in 2019 when we said, “Wow, maybe we should throw a lot of compute at this”, because that was the thing that OpenAI was more convicted on than anybody else, even including people inside Microsoft, and so that’s why we took that bet. It’s worked for the last five years, and I’m all focused on making sure it works for the next five years and the next five years after that. These partnerships, as you know, Ben — in fact, it’s in that crucial period when both sides succeed that you have to make sure there’s long-term stability, and long-term stability comes from both sides winning on a continuous basis. That’s how at least I approach it.

I think that for them, we are the infrastructure, they’re the model builder. They build apps, we build apps, third parties build apps, and so it goes. There’s going to be competition, and there’ll be some competition which is fully vertically integrated. Vertically integrated works beautifully until one layer of yours is not competitive; if you want to check, check Microsoft, you don’t have to go far back in history. And so therefore, you have to be open-minded that at the end of the day, sometimes partnerships are the only way to get ahead.

Integration vs. Modularization in AI

You mentioned that OpenAI had conviction about compute, and that’s something that Microsoft leaned into for sure, is there or should there be a sort of anti-Google alliance in AI, given their head start in models and especially infrastructure? Are we seeing that emerge, not just Microsoft and OpenAI, but potentially Apple?

SN: I look at it and say, look, I think there’s always room for somebody to vertically integrate. I always go back: there’s the Gates/Grove model, and then there’s, let’s call it, the Apple or maybe the new Google model, which is the vertical integration model. I think both of them have plays.

I would say, if you really think about it in the long run, I’m more of a believer in horizontal specialization. Just to take silicon, [Nvidia CEO] Jensen [Huang]’s sitting there really aggressively executing on some unbelievable roadmap. Today, guess what? He’s grounded on the fact that he needs to make sure that the leading AI model is trained on Nvidia. Guess what? Google’s not trained on Nvidia; Google sells Nvidia, but Google’s trained on TPUs. I think that registers with Jensen too. [AMD CEO] Lisa [Su] is there innovating. We are building our own chips. So everyone is saying, “Okay, let’s go bring silicon innovation, let’s bring model innovation”: there’s OpenAI, there’s [Meta CEO] Mark [Zuckerberg] with Llama, there’s Mistral, there’s all these small language models; there’s a lot going on there.

In any event, any application of ours, take Copilot: yes, we absolutely are going to use GPT-4o and mix it up with Phi and others. I think what any enterprise application is really most excited about is models-as-a-service. So I think this is going to be a much more diverse market; at least my history lesson is that there are very few winner-take-all categories. Be very clear about those and make sure you play them for winner-take-all, but for everything else, take that broad-tent platform approach.
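(A hedged sketch of what “mix it up with Phi and others” can look like in practice: a toy router that sends short, well-understood requests to a small model and everything else to a frontier model. The model names, heuristics, and call_model hook are illustrative assumptions, not Microsoft’s actual Copilot routing.)

```python
# Toy cost/quality router between a small model and a frontier model.
from typing import Callable

SMALL, FRONTIER = "phi-3-mini", "gpt-4o"  # illustrative names

def route(prompt: str, needs_tools: bool = False) -> str:
    """Pick a model; real systems often use a learned classifier instead."""
    if needs_tools or len(prompt) > 2000:
        return FRONTIER  # long or tool-using requests go to the big model
    simple = ("summarize", "translate", "classify")
    return SMALL if any(word in prompt.lower() for word in simple) else FRONTIER

def answer(prompt: str, call_model: Callable[[str, str], str]) -> str:
    # call_model(model_name, prompt) is a stand-in for your inference client.
    return call_model(route(prompt), prompt)
```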

That certainly makes sense, and you’re speaking to the idea of models being commoditized. Microsoft has hired a lot of talent from Inflection AI, and it seems you are going to make sure there’s diversity in the model offering on your side. But if models are going to be commoditized, why would the dynamics in the cloud be any different than they have been for the last 12, 15 years? Is this actually going to be anything new?

SN: I think it’s a very good point. I think hyperscalers have a fundamental structural advantage in this, which is, in some sense, if you sort of say what the world needs more of five years from now, I would say hyperscale compute utility available everywhere. If you think about it, the new formula of economic growth, I think, is as clear as it has ever been: you need more power, powered by renewables and a better grid, and you need better compute, and if you have those two things, then every other sector of the economy can really benefit. Any country, any community that has that at the best frontier of efficiency just has a leg up and a tailwind on economic growth. So if you take that high-level premise, then absolutely.

But what about the inter-competitive dynamics? Because Amazon got there first, they got basically the whole generation of SaaS companies that started on Amazon. Microsoft moved to the cloud along with their enterprise customer base. Google’s like, “Look, ours is the best, just try it out,” and they were kind of a distant third. Is that going to play out in a similar way? Is the data gravity just going to be predominant? Maybe AI is this big new thing, but the actual competitive dynamics are still-

SN: I think I have not met at least an enterprise customer who is single cloud. I remember when I first started on cloud, everybody would talk about it as if this was going to be like, “Oh my god, it’s winner-take-all”, and I always thought like, “Man, I grew up in servers”, and when people even say we won, I didn’t quite get it. Every category of the server, whether it’s the operating system, whether it was databases, whether it was web servers, and all of those middle tier things, all had two or three players.

So fundamentally, I think hyperscale has definitely room for two, if not three and there is distance. Revenue share, this is something [Former Microsoft CEO] Steve [Ballmer] used to always tell me about — revenue share versus market share are two different things in a multiplayer market, and so on and so forth.

But nevertheless, I do think that there’s room for all three and remember, we started with, Amazon had what, a six year, seven year run with no competition? Guess what, competition arrived and here we are, and I feel very, very good about this next phase. I’m not starting from behind; in fact, if anything, we have a head start, and that changes things. Take the B2C customers, whether Shopify, Spotify, whatever: none of those folks were Azure customers. Thanks to the OpenAI API, for the first time, they may not be all on Azure, but they’re also Azure customers, which is a massive, massive change in our fortunes.

Capex and the Future

Is there a bit about these competitive dynamics where — you’ve talked about how you have visibility into revenue. There’s no question you have to invest in AI, but your CapEx relative to gross profit has gone from 13% to 26% in the last seven years, a massive increase. What gives you confidence that will pay off, or does it not matter because the competitive dynamics mean you’re going to invest regardless?

SN: The laws of economics apply, and I think you rightfully pointed out, we are a CapEx-heavy entity. Most people are focused on our CapEx just because of AI, but come on, even taking out AI, we are a knowledge-intensive and a capital-intensive business; that’s what it takes to be in hyperscale. You can’t just show up and say, “Hey, I want to enter hyperscale”, if you can’t at this point put $50 to $60 billion a year into CapEx, so that’s what it takes to be in the market.

But then also, it’s always going to be governed by what’s happening in the marketplace. You can’t far outstrip your revenue growth. And so therefore, there is an absolute governor, which is: yes, the training chunks come where there are step-function changes to the allocation of training compute, but ultimately inference is demand-driven. So if you take that combination, I feel like if there is something that happens cyclically even, adjusting for it is not that hard. As a pure business management thing, I’m not managing it for a quarter, but it doesn’t scare me.

You’re not as worried as the Street is. One quick question because I like this one. Bill Gates said we overestimate what happens in two years and underestimate what happens in ten. Are those still the right units, because it feels like a lot has happened in two years?

SN: I think those are probably the right units, except that maybe I could sort of say — here’s the biggest issue: if you take the Moore’s Law period, man, I love those 18 months. In fact, there’s this beautiful chart at Epoch AI I like a lot where they tracked the flops given to machine learning algorithms since basically, whatever, 1950, and it just followed Moore’s Law. It was doubling every 15, 16 months, and then in 2010, it went up 3x, and it’s actually inflected even more, I think. So it’s doubling perhaps every six months, or even less than that; it’s hard to keep your head straight. Everybody says, “Oh, I get exponentials”, believe me, living in that world-

You increase that exponent and it changes a lot.

SN: Yeah, it’s very hard. So therefore, to your point about what happens when especially you have emergent capabilities, that’s why I think AI safety is a super important thing, we’ve got to keep that in mind, but we also have to keep in mind that there’s going to be new innovation that shows up. So how do you harness that new innovation for good, keep safety in mind? It’s a very different ballgame.
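(The arithmetic behind that exchange is worth making explicit: shortening the doubling time from the Moore’s Law-era 15 months to 6 months changes a five-year horizon by nearly two orders of magnitude.)

```python
# Growth over 5 years (60 months) under the two doubling times discussed.
for label, doubling_months in [("~15-month doubling", 15), ("~6-month doubling", 6)]:
    factor = 2 ** (60 / doubling_months)
    print(f"{label}: ~{factor:,.0f}x in five years")
# ~15-month doubling: ~16x in five years
# ~6-month doubling: ~1,024x in five years
```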

Satya, I know we had limited time, but thank you, it was good to talk to you again.

SN: Absolutely, Ben. Thank you so much for having me.


An Interview with Microsoft CTO Kevin Scott About Building Platforms on AI

This interview is lightly edited for clarity.

AI Platform

Kevin Scott, welcome back to Stratechery.

KS: Thank you for having me.

So let’s rewind 10 years or so, and walk me through the thought process behind the journey that Microsoft has been on with AI, from high performance compute, or AI compute, to the partnership with OpenAI. Was there a specific point that made you realize that this is the path you needed to go on?

KS: Yeah, certainly. The interesting thing is I’ve only been at Microsoft for a little over seven and a half years now, but I do think 10 years ago, I was still at LinkedIn running the engineering and operations team there, and it was already just super obvious that we were on a very interesting new curve with AI. It wasn’t the flavor of generative AI that we have right now, but the things that people were doing with really complicated statistical machine learning and how much benefit we were getting already 10 years ago from the scale up of these systems was just faster than I expected.

I’ve been doing this for a relatively long time, so I built a bunch of machine learning systems at Google right around the time that Google IPO’d, including working on the big machine learning systems that ran the ads auction at the time. It was already obvious back then that scale mattered an awful lot. But the thing that was a relatively new update, call it six years ago, is that this scale-up was leading to AI models that started to behave like platforms.

So instead of having a model that was purpose-built for one particular thing, and then you applied a lot of scale and that one particular thing, like CTR (Click-Through Rate) prediction for advertisements got really good, we began to see the scale up properties in these large language models lead to the large language models being reusable for lots and lots and lots of different things.

Well, that’s actually a question I want to get to because you and Satya, you keep talking about this platform shift, platform shift, and the word “platform” keeps coming up.

KS: Yep.

I was going to ask you what you meant by that, but what I’m hearing from your answer is by platform, you mean the fact that it’s generalizable.

KS: Correct. And that it is a component that composes with software systems that you’re building in a very flexible, general way. So rather than having this world of AI where a company like Microsoft might have a hundred different teams who, top to bottom, had to be responsible for, “This is my data”, and, “This is my machine learning algorithm”, and, “This is how I’m going to train the machine learning system on this data and this is how I’m going to go deploy it”, and, “This is how I’m going to get feedback from the deployment process and real usage into the model to improve everything over time”, and you’ve got a hundred small flywheels turning, instead you’re able to invest in a central model training effort, and the thing that you get out of it is very widely useful across a huge range of applications that you already have and it opens up the possibility to build a bunch of new things that were impossible before.

So I want to push on this even just a little bit more, this platform idea. What I am hearing from you (and maybe I’m not hearing it from you, maybe I already thought this before I talked to you) is that there are platforms like Windows, which is what we usually think of as a platform: you have APIs and there are network effects, it’s a two-sided network with developers on one side and users on the other. But then there are platforms like x86 as a platform.

KS: Yep.

Am I right to think of your use of the word platform as being closer to x86 as opposed to Windows?

KS: Yeah, I think that’s probably right.

Or maybe the better example is processing in general, because when you talk about going from specialized to general use, that sounds like the shift from dedicated processors that did one thing to generalized logic chips that were broadly programmable.

KS: Yeah and look, I think x86 is probably a pretty apt comparison here, because the thing that made x86 interesting is it was a general purpose piece of infrastructure that allowed lots and lots and lots of software to be written, and the power of the system, the platform, just increased over time because it was getting cheaper and more powerful simultaneously every 18 months or so. And so you just had this rapid progression of capability flowing into the hands of lots of people that were building things on top of it.

There was a clear separation between the x86 and the operating system and the PC manufacturers and the people who were building applications on top of it. And sometimes Microsoft built both applications and operating systems, so there’s a little bit of both, but there was a whole universe of possibility there for people to do things on top of the Wintel platform that had nothing to do with Microsoft predicting what all of the useful things were, and people could trust that it was an interesting platform because you had this exponential called Moore’s Law that was just going to ultimately result in the thing being completely ubiquitous.

Right. We’ll get to the Moore’s Law point, I know that’s one that both you and Satya have been hitting a lot and you want to get to, but you mentioned Wintel there. The way it turned out with x86 is in the fullness of time, you had Windows and you had Linux and you actually eventually had even Macs or whatever, so you did have layers, but by and large, from a developer perspective, their level of abstraction that they cared about was the operating system layer.

With AI models, my question is where is the actual opportunity going to arrive? So let’s back up. I think an interesting thing with Nvidia right now is obviously there are lots of secular reasons to be bullish on Nvidia, but I think there’s a structural reason to be concerned, which is the CUDA layer: that’s where all the specialization was, with frameworks for this and frameworks for that. The LLMs generalized that, and now, actually, there’s stuff happening at a higher level where you don’t need to know CUDA to build an AI application. But is that the actual layer, or will there be an operating system that sits above that?

KS: Hard to say. Probably not an operating system in the sense that—

Not a traditional operating system, but that sort of context.

KS: Yeah. Look, I think this is the history of computers writ large: the level of abstraction that people work at always increases.

And we’re just resetting because of this new model.

KS: Yes, a hundred percent, I think that’s absolutely true. So I don’t know exactly what the level of abstraction is going to be, but it’s already very different now. We have prompt engineers now who are able to coax these systems into doing very complicated things simply by instructing them in natural language, like what you would like the system to do or not to do. We’re developing all sorts of tools for figuring out how to stuff the right things into a context window for a large language model in order for it to do what you want it to do. We built things at Microsoft Research like this GraphRAG system that does graph-structured context composition, so that you’re very efficiently using the context and you’re not sending unnecessary tokens into the model, which is a good thing because the more tokens you send, the more it costs and the higher the latency is, when you just need to get it the information it needs in order to answer the question you need answered or to complete the task you need completed.
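(Scott’s point about context composition is, at bottom, a budget problem. This is not GraphRAG itself, just a minimal sketch of the constraint it optimizes, assuming tiktoken for token counting and retrieval chunks that arrive with relevance scores.)

```python
# Pack the most relevant retrieved chunks into a fixed token budget.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def pack_context(chunks: list[tuple[float, str]], budget_tokens: int) -> str:
    """chunks are (relevance_score, text) pairs; highest relevance first."""
    picked, used = [], 0
    for _score, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = len(enc.encode(text))
        if used + cost > budget_tokens:
            continue  # every skipped token saves cost and latency
        picked.append(text)
        used += cost
    return "\n\n".join(picked)
```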

So I don’t know what the full set of abstractions is, which is why, when we talk about this notion of a Copilot stack, it’s not like we’ve even figured out exactly what everything in that Copilot stack is. We figured a ton out over the past couple of years about what you have to have in order to deploy a modern application, but even as the models become more powerful, the abstractions get higher. So going back to your Windows analogy, the first version of Windows didn’t have DirectX in it because the graphics weren’t powerful enough in those machines for you to even contemplate having shaders.

It’s not that no one created it, it’s that no one had even thought about it yet.

KS: Right, so a bunch of that is still yet to come here. But I think what you will see from us at least is we’re going to have at least one opinion, and I don’t know whether it will be the ultimate opinion or the right opinion, but it will be an opinion about what all of those components are that you need to have in addition to a powerful frontier model in order to build these really rich, interesting, new applications.

Well, what is that opinion? Is it the Copilot stack articulation? In the long run, in this opinion, if you’re a developer thinking about building a computer application in 1975 versus 1985 versus 1995, your decision set is completely different. It’s very obvious in some respects what you’re going to do, so what is your opinion on how that evolution will happen?

KS: Well, I think the opinion that we have right now is, as happens whenever these platforms emerge, you’ve got a layering of these abstractions. At the bottom of the abstraction stack, you’ve got a large foundation model; then you have your data set and a retrieval mechanism that, in a very careful way, makes sure that the model has access to the information that it needs in order to complete the task. Then you’ve got a bunch of stuff sitting on top of that doing orchestration, because it may be that you need to operate across multiple models in order to accomplish the thing you want to accomplish, and you may need to do that for cost reasons or for quality reasons or for data privacy reasons.

One of the things that we really expect to see over the coming year is a decomposition of where inference happens. So some of it will absolutely be able to happen on a device, like on your PC or on your phone, and if you can do that, I think you want as much of that inference, as much of the AI happening on the device as humanly possible and only when you run out of capability or capacity on the device do you have to go call the more powerful, complicated thing that’s sitting inside of the cloud.

Is the most important agent to be built this orchestration agent that figures out, number one, “Do we go local?”, “Do we go to the cloud?” — and number two, if we go to the cloud, “How do we rewrite this prompt or request?”, so that, to the point you made before, we minimize the number of tokens and it’s super efficient? Is there a bit where, when you’re talking about the Phi models on Windows, the most important part of those is not that you can do a co-drawing thing on Windows, but that you can actually keep your COGS down for Copilot in the cloud?

KS: I think it’s an important thing and there’s just an increasing number of abstractions. The important thing is, if you have a truly useful thing that you are making where your audience is very large, you want to be able to distribute it to as many of those people in that audience as humanly possible, and so COGS is definitely a factor in that and so if there are ways that you can get them a good high quality product where you’re doing something like cloud offload to a small model in order to deliver that application, that’s great. You should absolutely be doing that.
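(A minimal sketch of that cloud-offload pattern: try the on-device model first and escalate only when it can’t handle the request. The run_local and run_cloud hooks and the confidence threshold are hypothetical stand-ins, not a real Copilot interface.)

```python
# Local-first inference with cloud fallback.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class LocalResult:
    text: Optional[str]      # None if the on-device model refused or failed
    confidence: float        # assume the local runtime reports a score

def answer(prompt: str,
           run_local: Callable[[str], LocalResult],
           run_cloud: Callable[[str], str],
           threshold: float = 0.7) -> str:
    local = run_local(prompt)
    if local.text is not None and local.confidence >= threshold:
        return local.text    # handled on-device: no cloud COGS, no round trip
    return run_cloud(prompt) # escalate to the frontier model in the cloud
```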

But I thought you told us we shouldn’t be worrying about COGS, because everything’s going to be very cheap soon.

KS: It will!

AI Scaling

Here’s your pitch. This is the segue for you to give me your scaling pitch.

KS: Yeah, and I give this pitch all the time. So the interesting thing I think about this whole space right now is you do have exponentially improving capability in the frontier models and I don’t think we have approached the point of diminishing marginal return on the scale.

If we hit a scale wall, what is it going to be? Is it going to be data or what do you think?

KS: Well, I think data’s already hard. At the scale of some of the frontier models, everybody sort of runs into this: it’s a challenge to have enough data to feed them. I mean, one of the things with the Phi models that was a big innovation is you actually use a frontier model to generate-

Synthetic data.

KS: And we’ve been doing this for years; we used to do it for the non-generative models. So if you wanted to build a computer vision classifier model and you wanted to make sure that it was trained to not have biases reflected through from the underlying training data sets, you could go use a generative model to generate a whole bunch of synthetic data to get a fair distribution of training data, so you could get the model performance that you wanted. So I think generating synthetic data is an increasingly powerful way for people to build both small models and large models. Particularly for reinforcement learning, I think it’s really valuable.

To this scaling bit, what is driving it? You’ve mentioned the foundation models and you used your analogy with Sam Altman on stage of like we started with the, what was the smaller animal?

KS: A shark.

Shark, and then the orca, and now the blue whale, the as-yet-unnamed model which apparently will be released at some point, is training. So is the answer that a lot of the efficiencies and a lot of the scaling capabilities are all in smaller models, because these big models can generate all the synthetic data, can provide all the — and you can optimize? But that doesn’t answer the question for the foundation models. Why are you confident their scaling will continue?

KS: It is sort of hard to completely answer that question without giving away a whole bunch of things I prefer not to give away.

That’ll be a sufficient answer I think.

KS: But look, I do think that the synthetic data is useful for training the large foundation models as well. Just imagine you want to train a foundation model to be really, really, really good at coding: there are plenty of ways to generate synthetic programs that have particular characteristics, and because programs are these deterministic entities, you can sort of generate something synthetically and you can run it through something like a model checker to prove, “Is it compilable?”, “Does it produce a set of outputs?”, “Is it a valid input that you can put into a training process?”. You can literally design a training curriculum, or at least a partial training curriculum, for any model by generating synthetic data in these domains where it’s pretty straightforward to generate well-formed training inputs to get the model to be better at that particular curriculum you’re trying to train it on. Now, it doesn’t mean that all the data can be synthetic.
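(A hedged sketch of the validation loop Scott describes for synthetic coding data: keep only generated programs that compile and produce the expected output. generate_candidates stands in for a hypothetical frontier-model call, and a real pipeline would sandbox execution rather than calling exec directly.)

```python
# Filter synthetic programs: "is it compilable?" and "does it produce the output?"
from typing import Callable, Iterable

def is_valid(src: str, expected: str) -> bool:
    try:
        code = compile(src, "<synthetic>", "exec")  # compilability check
    except SyntaxError:
        return False
    captured: list[str] = []
    try:
        # A real pipeline would sandbox this; programs report via emit().
        exec(code, {"emit": captured.append})
    except Exception:
        return False
    return captured == [expected]  # behavioral check against the spec

def build_training_set(
    generate_candidates: Callable[[], Iterable[tuple[str, str]]],
) -> list[tuple[str, str]]:
    # generate_candidates() yields (source, expected_output) pairs from a model.
    return [(src, out) for src, out in generate_candidates() if is_valid(src, out)]
```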

You used an example before of models doing click-through rate prediction and ad targeting, and now you’re talking about coding. The benefit of all these is that although you’re producing a probabilistic model, you’re using data that is deterministic, right?

KS: Correct.

So when you get to this generalizable function, what gives you confidence that the generalizability can extend to other domains? It’s almost like on one extreme you have pure creativity, where there is no wrong answer, and it works well there. On the other extreme you’re operating in a domain with a validation function, so you can actually bring AI to bear in a parallel fashion and get the best answer, because you can grade it. But then there’s a whole middle area where I think people — I call it the lazy function — want AI to do their jobs for them, but the problem is, there isn’t necessarily a grader in place. So can it generalize to that?

KS: Yeah, look, I think we’re going to be able to generalize a lot of things. One of the things that we wrote about in the Phi paper, which I think is titled Textbooks Are All You Need, is that, and this is the thing that gives me confidence by the way, we have the ability to train expert humans on a pretty finite curriculum to be able to do very expert things.

Like it’s how I was trained as a computer scientist, I read a whole bunch of computer science papers and computer science textbooks and did a whole bunch of problem sets and I practiced, practiced, practiced, and then after some number of years, I was competent enough to actually do something useful in the world. So I think that is the thing that gives me confidence that we will be able to figure out how to generate enough of a curriculum for these models and to find a learning function that will let us build things that are cognitively quite capable.

Now, the thing that I don’t know — and this is going to be sort of an interesting thing we will figure out, I think soon; it’s a bet I’ve got with a bunch of people — is, I would imagine that a computer will prove the Riemann hypothesis before a mathematician will. For your listeners, the Riemann hypothesis is one of these century-old problems in mathematics that I think [David] Hilbert proposed as one of his grand challenges at the end of the 19th or the early 20th century, and people have been pounding away at this thing. The Riemann hypothesis is basically a statement about what the distribution of the prime numbers is, and it’s a hard, hard problem. It’s one of those things that’s easy to state, and there have been just crazy, brilliant people trying to come up with a proof of this for a very long time now. I actually believe that it’s one of those problems where the proof is likely to be incredibly complex, just mind-boggling, and my prediction is a computer will be able to do it before a human being will be able to, though it will probably involve human assistance. It won’t be totally autonomous.

AI as Tool

Well, on that human assistance sort of point. You said in your keynote today, you’ve loved tools your whole life.

KS: Yeah.

And is AI going to remain a tool? It’s clearly a tool today.

KS: Yes, I think so.

Why is that? Why is it not going to be something that is sort of more autonomous?

KS: Yeah, I don’t really see…

None of us know.

KS: Yeah, well, so none of us know, but I do think we’ve got a lot of clues about what it is humans are going to want. There hasn’t been a human being better than a computer at playing chess since 1997, when Deep Blue beat Garry Kasparov, and yet people couldn’t care less about two computers playing each other at chess. What people care about is people playing each other at chess, and chess has become a bigger pastime, like a sport even; we make movies about it. People know who Magnus Carlsen is.

So is there a view of, maybe the AI will take over, but we won’t even care because we’ll just be caring about other humans?

KS: I don’t think the AI is going to take over anything. I think it is going to continue to be a tool that we will use to make things for one another, to serve one another, to do valuable things for one another, and I think we will be extremely uninterested in things where there aren’t humans in the loop.

I think what we all seek is meaning and connection and we want to do things for each other and I think we have an enormous opportunity here with these tools to do more of all of those things in slightly different ways. But I’m not worried that we somehow lose our sense of place or purpose.

How is AI going to make life better in Virginia where you grew up?

KS: The story I told on stage this morning about my mom, I think she had a pretty rough go of it health wise last fall and you look at the demographics of the world right now, we have a rapidly aging population and—

A shrinking population.

KS: A shrinking population in many places. It’s shrinking in China, shrinking in Italy, shrinking in Japan, and I think Germany has tipped over to shrinking. You can go look at when we hit peak population in a bunch of places; I think France will hit it sometime in the early 2030s, and ex-immigration, the United States would already have a shrinking population. None of us have lived in a world with population decline in our lifetimes, and so the thing that must happen in order for us to maintain our standard of living, and, God forbid, to have a better standard of living over time when you have fewer people to do the work of the world, is big productivity boosts. There has to be some way, with fewer human beings, to do all of the things that need to get done.

There are places in rural America that are canaries in the coal mine for this problem that we’re all going to face at some point, where you don’t have doctors lining up to move to Gladys, Virginia to take care of the rapidly aging population there. So I think the way that AI shows up in those places is it lets people have equitable access to all of the things you need access to in order to have dignity and live a good life, and you don’t have to rely on getting the humans to the right places when you don’t have enough humans to go around. I know all of that sounds super abstract, like some far-distant problem.

I’m from a small town in Wisconsin, I know exactly what you’re talking about.

KS: Yeah, it’s a real thing, and I think about this healthcare crisis that my mom got into, and I think everybody in the system there was trying to do their absolute best and the absolute best still wasn’t good enough. I think if I hadn’t intervened, she would’ve had a really different outcome and I think about all of the old ladies who don’t have a son who can intervene, and if AI can help play some role in that intervention to let people have more agency over their healthcare, more agency over their education, more agency over their entrepreneurial opportunities, I think it’s nothing but goodness. That doesn’t mean it’s unalloyed good and we get to not think at all about the risks and the downsides.

I think the risk in general, from my perception, particularly amongst our circles as it were, and unlike a lot of other technical revolutions, is that there is insufficient thought being given to the upsides. There’s this thing where you were talking about all these good things and then you’re like, “Oh, I better include the safety/security bit”. You go to anyone outside of this area and they’re like, “Oh, we know about the upsides” — that’s the part that gets waved away. It’s like, “No, wait, can we stop on that for a little bit? Can we actually talk through what those are?”, so I enjoyed your articulation of that there.

KS: I do think there is at least one other technological revolution that had this sort of property, and it’s the Print Revolution, where you had the printing press.

The Church — it took 10, 15 years, but they caught up in the grand scheme of time pretty quickly.

KS: Yeah, actually, it took longer than that, you had about a century of turmoil and upheaval and what you netted out to in the end was a thing that we all just absolutely take for granted. You just can’t imagine a world without the written word, without books and free flow of information.

We also ended up with a completely re-organized Westphalian system. We had the years of war, we had lots of stuff, we had the entire Reformation; a lot happened.

KS: Yeah, my wife is an early modern European historian by training, she’s a philanthropist now, but that is her period, the Print Revolution, and so this is part of our household conversation.

I’m going to give you this mic when you go over, if you can just record a couple episodes, we’ll post them for you.

The OpenAI Partnership

Was it because you were an outsider, you had been at Google, you were at LinkedIn, that you could come to Microsoft and say, “I’m not sure you realize how far behind you are from Google and you need to do something pretty radical here”?

KS: Maybe. I think there was actually a recognition that we were far behind.

Broadly speaking, you didn’t need to convince anyone?

KS: Yeah. The question was, “What do you go do about it?” and I think the interesting thing there was I’ve always been attracted to problems.

You need a place to use your tools.

KS: Yeah, I do. It’s funny, I’ve been an engineer for a really long time, and the thing that you will notice is you have all sorts of different types of problem solvers. So you have people who are good starters and people who are good finishers and it is very rare to have someone who’s a good starter and a good finisher. The choices that I made in things that I went to go work on were largely about — it was almost like Nanny McPhee, I don’t know whether you ever saw that movie.

I did not. Sorry, it went over my head.

KS: Nanny McPhee was this fictional nanny character who was, I’m not saying I’m magical, but her shtick was when the kids needed her but didn’t want her, she had to stay, and when they no longer needed her but wanted her to stay, she had to go. That’s the thing that attracts me to things. It’s like, “Okay, this is a really sticky situation. I think I can help solve this particular problem”, that’s what I want to go work on.

And so Microsoft being behind, you were rubbing your hands together?

KS: Look, it wasn’t that the whole company — Microsoft was fine, it was roaring through growth in cloud. This was about, “Okay, we’re behind in AI”, and AI wasn’t as obviously important in 2017.

Is this more a function of their product mix? For example, maybe if Bing had been bigger and they had a larger advertising business, they would’ve been more — or was it an oversight? What drove that?

KS: Hard to say. I think one of the things that is just super clear about investing in AI right now is you have to be disciplined about how you’re investing. It’s not one of those areas where you want to let a thousand flowers bloom and you spread your resources across a bunch of different bets and all of that’s going to add up to something great in the end.

So I think Microsoft had been spending quite a substantial amount of money and had a huge number of people working on AI, but it was really diffused across a bunch of different things, and it’s just too expensive and too complicated an enterprise to let it be diffused across a bunch of different things. I think that’s a thing that people still struggle with.

So how did you convince Microsoft to say, “Look, you were spending all this money, we get it. You have Microsoft Research, XYZ. Actually, what you just need to do, your core capability, Microsoft, is the ability to spend money and there is this organization in OpenAI that doesn’t have money, but has the capability to build what needs to be done and we have to work together”?

KS: Well, I would challenge that characterization. I don’t think our core capability is spending money. I think our core capability, if you just look at the DNA of the company, is building platforms to try to identify opportunities to build things that lots of other people are going to build their businesses and their products on top of.

Fair enough. Which throws off a lot of money that you’re able to spend.

KS: Yes, I think in success that is true. So the argument was basically almost exactly the one that we’ve been having so far. It’s like, “Hey, we now are seeing a trend in the technology where it’s behaving like a platform where the platform itself is going to really benefit from focus and having a point of view about what’s the thing that you want to put your dollars into”. Not just your dollars, but you’ve got all of this opportunity cost that you’re spending on the development of this new platform.

Microsoft has always defaulted though towards “Build, not buy”. In this case the question isn’t buy, it’s, “Build vs. partner”, which is an even more precarious position. What was the evidence or was there a moment that convinced the board to say, “We don’t have time to catch up”?

KS: I think it was right around the time that we did the first deal with OpenAI in 2019, so we had a pretty clear sense of what the scaling laws were going to look like by then, and we knew that we had to just move immediately. There were two or three options, and this one, in my judgment and then in Satya’s judgment, was going to be the fastest way to get ourselves bootstrapped and into position.

The risk though is you’re putting so much in the hands of an entity you don’t control. As a major proponent of this, how stressful was November 2023?

KS: It was stressful, but look, again, the thing that I would say in general is I think Microsoft as a platform provider has actually been pretty good over the years at building really complicated things with partners. It’s not like the PC Revolution was all Microsoft, it was Microsoft plus Intel plus Nvidia with graphics cards plus an entire OEM ecosystem, so it’s rarely just the thing that we are building alone. Even Azure, Azure is only successful because we’ve got a bunch of other infrastructure like Databricks and Snowflake, and a bunch of stuff that runs on top of our cloud and that also runs on top of a bunch of other people’s clouds. So I think you really, in the modern era, if you’re really talking about these super, super large-scale platforms, you have to be reasonably good at partnering. You can’t have this thought that I’m going to do everything myself, it’s just too hard.

To go back to the abstraction question in this context: do you feel confident, broadly speaking, does it let you sleep at night, that beyond the fact that whoever owns compute runs the world, models are ultimately going to be commoditized? And if push comes to shove, sure, you’ll have to do some work, but the Office applications could run on any model, it doesn’t have to be the OpenAI model.

KS: Well, look, I think it’s less about commoditization and more about what this two-step dance is that we’re doing right now, which is like you have a frontier that is advancing pretty rapidly and I think it’s just table stakes that if you’re going to be a modern AI cloud or you’re going to build modern AI applications, you better have access to a frontier model. OpenAI is doing a brilliant job, I think, building these frontier models and making very, very good use of compute resources and then as the frontier pushes forward, you have an entire ecosystem of really, really super clever people who are figuring out how to optimize all of the bits and pieces of it.

Scaling Laws

Did you feel like Phi was a real validation of your strategy, that you went from not being able to do anything to building basically the best small model in a matter of years?

KS: Yeah, and I think the interesting thing about Phi is not that it’s replacing anything, it’s that it composes well with what we already have, because you can do so much with a frontier model, and I don’t want anybody getting confused. I think, again, half of my message at Build today was, “You really need to be thinking about how fast this frontier is advancing, and a real category error here is getting too caught up in this sort of linearity of all of the optimizations that everybody’s doing”.

Is there a bit where tech has forgotten what it was like to build on top of Moore’s Law? You go back to the 80s or 90s and it took a while to get the — you needed to build an inefficient application because you wanted to optimize for the front end of the user experience, and you just trusted that Intel would solve all your problems.

KS: Correct.

Is there a bit where that’s been lost?

KS: I think so.

Because everyone complains about bloat, but there’s a bit where no, actually you want bloat because it will get taken care of.

KS: Yeah, you can sort bloat out in arrears.

That’s right.

KS: You don’t want pointless bloat, but you also don’t want-

You don’t want to over-optimize to get rid of it.

KS: If I think about earlier in my career, one of the things that people used to pride themselves on is you write these programs and you just sort of go to the inner loop of your critical-path function and write a bunch of —

It’s like, good, you went from 0.0002 milliseconds to 0.0001 or whatever.

KS: There was a point in time where that mattered, where that was the difference between having a useful thing and having a piece of junk, but because you had this Moore’s Law, this exponentially improving process, if you didn’t recognize that and all you did was spend your time writing a bunch of inner-loop assembly language for something, you were just missing all the opportunity to write fundamentally more powerful software.

I mean, I was a compiler optimization guy when I was in grad school, and I had this friend, Todd Proebsting, who was a professor at the University of Arizona and was at Microsoft Research for a while, and he had this thing called Proebsting’s Law, which was a goof on Moore’s Law: Proebsting’s Law said that the work of compiler optimization researchers doubles the performance of computer programs once every 18 years.

(laughing) Puts you in your place.

KS: Yeah, and he was kind of right. I wrote a paper about this when I was in grad school and it was a little bit worse than that, and so it is one of the reasons why I decided not to be a compiler optimization person anymore, because you could go crank away on something that was very, very complicated for six months of your time and move a benchmark by 4%, and in that same period of time the materials scientists and the architects were going to make this thing twice as fast. So what are you doing? You’d be way better off trying to figure out how to harness all of that new fast that was coming rather than trying to optimize away the old slow.
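To make the gap Scott is describing concrete, here is a minimal sketch of the compounding arithmetic, assuming Moore’s Law doubles performance roughly every 18 months while Proebsting’s Law doubles it every 18 years:

```python
# Illustrative arithmetic only: compare the compounding implied by
# Moore's Law (performance doubling roughly every 18 months) with
# Proebsting's Law (compiler work doubling performance every 18 years).

def speedup(years: float, doubling_period_years: float) -> float:
    """Total speedup after `years`, doubling once per period."""
    return 2 ** (years / doubling_period_years)

span = 18  # years
hardware = speedup(span, 1.5)  # Moore's Law cadence: ~18 months
compilers = speedup(span, 18)  # Proebsting's Law cadence: 18 years

print(f"Hardware speedup over {span} years: {hardware:,.0f}x")  # 4,096x
print(f"Compiler speedup over {span} years: {compilers:.0f}x")  # 2x
```

Over the same 18 years, the hardware curve compounds to roughly a 4,096x speedup while the compiler curve delivers 2x, which is why months of inner-loop tuning could be swamped by simply riding the silicon.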

What is driving this? You were talking about a new model coming, but you also mentioned in the GPT-4 context, “GPT-4o, a 12x decrease in costs, six times increase in speed”. Now, I think if you really dig in, GPT-4o was not as good as GPT-4 at some things; it’s good at other things it’s been optimized for. Is it just an optimization of the model? Is it a solution in inference, where you figure out new ways to approach it? What are some of the drivers of this?

KS: Yeah, so I think you’ve got two fundamental things. One is the hardware is actually getting better, so God bless them. Nvidia is doing tremendous work, AMD is doing good work now, we’ve got first-party silicon efforts underway at Microsoft, and a whole bunch of other people in their contexts are building their own silicon, and I think we’re at this point where even though you don’t quite have a functioning Moore’s Law anymore, where smaller transistors are getting cheaper and give you more power for general-purpose compute, we are, at least at the moment, still innovating enough on how to put those transistors to work for this embarrassingly parallel application that we have in AI.
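As a toy illustration of that “embarrassingly parallel” point (my sketch, not anything from Microsoft’s stack; the shapes are arbitrary), the core of neural-net inference is matrix multiplication, where every output element is an independent dot product that parallel hardware can compute at once:

```python
# Toy sketch of why neural-net inference is "embarrassingly parallel":
# a matrix multiply decomposes into many independent dot products,
# so more parallel hardware translates directly into throughput.
import numpy as np

batch, d_in, d_out = 64, 1024, 4096
x = np.random.randn(batch, d_in).astype(np.float32)  # activations
w = np.random.randn(d_in, d_out).astype(np.float32)  # weights

y = x @ w       # batch * d_out = 262,144 independent dot products
print(y.shape)  # (64, 4096)
```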

Well, there’s always more room for innovation. You innovate on networking; even if you don’t get it just from the transistor side, you can still get it.

KS: Yeah.

“We’ll recreate it in the aggregate”, that’s a Moneyball reference.

KS: You’re getting a ton of price-performance advantage from the hardware, but even more significantly than that, there’s just a ton of innovation that we’ve all been doing, everything from how you optimize the whole system software stack to how you make use of new data types. A ton of what’s happening right now is using these faster data-parallel data types, FP8 instead of doing all 32-bit arithmetic for these models, and that lets you make better use of memory and do more operations.

So kind of counter-intuitively, there’s a bit where you use less precision, it’s dumber to a certain extent, and that actually turns out to be the better answer because price and speed matter more?

KS: So far, less precision is not making anything dumber; it’s just that if you look at all the activations in a neural net, they’re just super, super sparse. The networks are big, but there isn’t a ton of signal in each one of those activations.
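For a sense of why the data-type change matters, here is a back-of-the-envelope sketch of the memory arithmetic; the 70-billion-parameter model size is hypothetical, chosen only for illustration:

```python
# Rough memory math behind the FP8 vs. FP32 point; the model size
# below is a hypothetical figure used purely for illustration.

params = 70e9             # hypothetical 70-billion-parameter model
bytes_fp32 = params * 4   # FP32: 4 bytes per parameter
bytes_fp8 = params * 1    # FP8: 1 byte per parameter

print(f"FP32 weights: {bytes_fp32 / 2**30:,.0f} GiB")  # ~261 GiB
print(f"FP8 weights:  {bytes_fp8 / 2**30:,.0f} GiB")   # ~65 GiB
```

Dropping from 4 bytes to 1 byte per parameter cuts weight memory by 4x, capacity that can go toward bigger batches, longer contexts, or more concurrent requests.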

It strikes me that the big thing in general is this parallel approach, that that is the level of abstraction everywhere. The most compelling applications are ones that can bring parallelism to bear, and this is sort of a thing where you don’t get hung up on the precision of any one calculation if the cost is reduced parallelism, because parallelism is the thing you need more and more of.

KS: Correct. So the hardware is getting better, and then we’re just getting a lot better on the software side: even faster than the hardware is improving, the techniques for training and the techniques for building inference engines are getting tremendously better.

Microsoft’s Spend

How do you feel confident about the spend that’s going into this? That’s probably the question people have. You’ll say, “Well, we have visibility into our revenue”, but what’s that visibility? Is that Office Copilot revenue? Is that API use? What gives you confidence? Is it better to over-invest, or is it better to not have enough compute on the inference side, or is it better to have too much and risk going — or is it just good for everybody if you have too much compute?

KS: What we’re seeing right now is there’s relatively little downside in having an excess of compute, and that’s theoretical because the reality is we do not have excess compute. The demand for all of these AI products and services is so high right now that we are just doing crazy things to try to make sure that we’ve got enough compute and enough optimization of the entire system so that we can fulfill the demand that we’re seeing.

If you look forward, there are just huge amounts of economic opportunity. The API business, I mean, it went from nothing to a very, very large business quicker than anything that we’ve ever seen. The Copilot business has a huge amount of traction. The user engagement on Copilot is the highest level of engagement we’ve seen for any new Microsoft 365 or Office product, maybe in history. A lot of times you will go sell a new enterprise product and then it takes a fairly long time for the product to diffuse out into the organization.

It sounds like if you could spend more on CapEx you would. Are you limited just by supply?

KS: Absolutely. We’re limited in a whole bunch of different ways, but yeah, I mean, if I could spend more-

Data center energy.

KS: Yeah, if I could spend more on CapEx.

You have no demand concerns?

KS: No, not now.

Kevin Scott, thank you very much.

KS: Thank you for having me.


This Daily Update Interview is also available as a podcast. To receive it in your podcast player, visit Stratechery.

The Daily Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.

Thanks for being a supporter, and have a great day!
