I think Google ends up the winner. They can keep chugging along and just wait for everyone else to go bankrupt. I guess Apple sees it too, since they signed with Google and not OpenAI.
In addition to that, Google and Apple are demonstrated business partners. Google has consistently paid Apple billions to be the default search engine, so they have demonstrated they pay on time and are a known quantity. Imagine if OpenAI evaporated and Siri was left without a backend. It'd be too risky.
The minute Apple chose Google, OpenAI became a dead duck. It will float for a while, but it cannot compete with the likes of Google, their unlimited pockets, and better yet their access to data.
I think it points to OpenAI trying to pivot to leveraging their brand-awareness head start and optimizing for either ads or something like the Jony Ive device, focusing on the consumer side.
For now people identify LLMs and AI with the ChatGPT brand.
This seems like it might be the stickiest thing they can grab ahold of in the long term.
Consumer AI is not going to come close to bailing them out. They need B2B use cases. Anthropic is a little better positioned because they picked the most proven B2B use case — development — and focused hard on it. But they'll have to expand to additional use cases to keep up with their spend and valuation, which is why things like cowork exist.
But I tend to agree that the ultimate winner is going to be Google. Maybe Microsoft too.
Unless you're totally dumb or a super genius, LLMs can easily provide that kind of monthly value to you. This is already true for most SOTA models, and will only become more true as they get smarter and as society reconfigures for smoother AI integration.
Right now we are in the "get them hooked" phase of the business cycle. It's working really damn well, arguably better than any other technology ever. People will pay, they're not worried about that.
It would have to be $60-$80/mo. in value over and above what you could get at the same time with cheap 3rd party inference on open models. That's not impossible depending on what kind of service they provide, but it's really hard.
The value is well worth over $60-$80/mo. But the value to a user and the market conditions are two very different things.
In a world where cheap open-weight models and free tiers of closed-source models are flooding the market, you need a very good reason to convince regular people to pay for particular models en masse in the B2C market.
After 30 years with a shit operating system known as Windows, Linux still cannot get over 5% adoption. Despite being free and compatible with every computer.
"Regular People" know ChatGPT. They know Gemini (largely because google shoves it in their face). They don't know anything else (maybe Siri, because they don't know the difference, just that siri now sucks). I'm not sure if I would count <0.1% of tokens generated being "flooding the market".
Just like you don't give much thought to the breed of grass growing in your yard, they don't give much thought to the AI provider they are using. They pay, it does what they want, that's the end of it. These are general consumers, not chronically online tech nerds.
> After 30 years with a shit operating system known as Windows, Linux still cannot get over 5% adoption. Despite being free and compatible with every computer.
You need to install Linux and actively debug it. For AI, regular people can easily switch around just by opening a browser. There are many low- or zero-barrier choices. Did you know Windows 11 is mostly free for B2C customers now, too? Nobody is paying for anything.
> "Regular People" know ChatGPT. They know Gemini (largely because google shoves it in their face). They don't know anything else (maybe Siri, because they don't know the difference, just that siri now sucks). I'm not sure if I would count <0.1% of tokens generated being "flooding the market".
You just proved my point. Yes they are good, but why would people pay for it? Google earns money through ads mostly.
> Just like you don't give much thought to the breed of grass growing in your yard, they don't give much thought to the AI provider they are using. They pay, it does what they want, that's the end of it. These are general consumers, not chronically online tech nerds.
That's exactly my point: most internet services are free. Nobody is paying for anything because they are ad-supported.
It's nothing to do with Windows itself, but with the applications (including games) that just run on it and the fact that most companies run it by default.
I don't see that. I've used LLMs and I've seen very little direct value. I've seen some value through Photoshop etc., but nothing I'd pay a direct subscription for.
It doesn't matter. I firmly believe both OpenAI and Anthropic are toast. And I say this as someone who uses both Codex and Claude primarily.
I really dislike Google, but it is painfully obvious they won this. Open AI and Anthropic bleed money. Google can bankroll Gemini indefinitely because they have a very lucrative ad business.
We can't even argue that bankrolling Gemini for them is a bad idea. With Gemini they can have yet another source of data to monetize users from. Technically Gemini can "cost" them money forever, and it would still pay for itself because with it they can know even more data about users to feed their ad business with. You tell LLMs things that they would never know otherwise.
Also, they mostly have the infrastructure already. While everyone else spends tons of money to build datacenters, they have those already. Hell, they even make money by renting compute to AI competitors.
Barring some serious, unprecedented regulatory action against them (very unlikely), I don't see how they would lose here.
Unfortunately, I might add. I consider Google an insidiously evil corporation. The world would be much better without it.
They also have tons of data on the users' habits and desires which they can use to inform the AI with each specific user's preferences without them having to state them. Because so many people use Google maps, Gmail etc. It's not just about training data but also operational context. The others lack this kind of long-term broad user insight.
I'm not using Google services much at all and I don't use Gemini but I'm sure it will serve the users well. I just don't want to be datamined by a company like Google. I don't mind my data improving my services but I don't want it to be used against me for advertising etc.
Yes, I think that’s their plan. Remember when Altman got fired from OpenAI? Msoft was right there with open arms. Msoft is probably letting OpenAI do the dirty work of fleecing investors and then when all their money is gone doing the R/D, MSoft scoops up the IP and continues on.
Sort of not really, but effectively yes. Their deal with OpenAI gave them unrestricted use of all of OpenAI's models and IP (source code, weights, patents, probably any other data), except the eventual hypothetical end products of Artificial General Intelligence, which would belong to OpenAI alone. But Microsoft would still have everything leading up to it, so they could probably make that jump on their own (not a great deal on OpenAI's part, as it doesn't give them much of a moat). So when OpenAI runs out of money, Microsoft won't own the IP but will have unrestricted use of it; someone else could buy it in bankruptcy, but Microsoft could still use it. As for the staff, they already showed a willingness to jump ship to Microsoft back when the OpenAI board tried firing Sam without giving a reason, and if OpenAI dies, Microsoft would probably hire any of the top talent that applied. So kinda sorta: on paper no, but they would have everything of value they chose to have.
Don't want to sound rude, but anytime anyone says this I assume they haven't tried using agentic coding tools and are still copy pasting coding questions into a web input box
I would be really curious to know what tools you've tried and are using where gemini feels better to use
It's good enough if you don't go wild and allow LLMs to produce 5k+ lines in one session.
In a lot of industries, you can't afford this anyway, since all code has to be carefully reviewed. A lot of models are great when you do isolated changes with 100-1000 lines.
Sometimes it's okay to ship a lot of code from LLMs, especially for the frontend. But, there are a lot of companies and tasks where backend bugs cost a lot, either in big customers or direct money. No model will allow you to go wild in this case.
My experience is that on large codebases that get tricky problems, you eventually get an answer quicker if you can send _all_ the context to a relevant large model to crunch on it for a long period of time.
Last night I was happily coding away with Codex after writing off Gemini CLI yet again due to weirdness in the CLI tooling.
I ran into a very tedious problem that all of the agents failed to diagnose and were confidently patching random things as solutions back and forth (Claude Code - Opus 4.6, GPT-5.3 Codex, Gemini 3 Pro CLI).
I took a step back, used a Python script to extract all of the relevant codebase, popped open the browser, and had Gemini-3-Pro set to Pro (highest) reasoning and GPT-5.2 Pro crunch on it.
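For reference, the extraction step can be as simple as walking the repo and concatenating the relevant files with path headers, so the whole thing can be pasted into a browser chat in one go. A minimal sketch (the extension filter is hypothetical; the original script wasn't shared):

```python
import os

def collect_context(root: str, extensions: tuple = (".py", ".glsl", ".cpp")) -> str:
    """Concatenate every matching source file under `root`,
    prefixing each with its path so the model can tell files apart."""
    chunks = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith(extensions):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="replace") as f:
                    chunks.append(f"### FILE: {path}\n{f.read()}")
    return "\n\n".join(chunks)
```

Pipe the output into your clipboard (or a file) and paste it into the web UI as a single giant prompt.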
They took a good while thinking.
But, they narrowed the problem down to a complex interaction between texture origins, polygon rotations, and a mirroring implementation that was causing issues for one single "player model" running through a scene and not every other model in the scene. You'd think the "spot the difference" would make the problem easier. It did not.
I then took Gemini's proposal and passed it to GPT-5.3-Codex to implement. It actually pushed back and said "I want to do some research because I think there's a better code solution to this". Wait a bit. It solved the problem in the most elegant and compatible way possible.
So, that's a long winded way to say that there _is_ a use for a very smart model that only works in the browser or via API tooling, so long as it has a large context and can think for ages.
You need to stick Gemini in a straightjacket; I've been using https://github.com/ClavixDev/Clavix. When using something like that, even something like Gemini 3 Flash becomes usable. If not, it more often than not just loses the plot.
Every time I've tried to use agentic coding tools it's failed so hard I'm convinced the entire concept is a bamboozle to get customers to spend more tokens.
My guess is that Google has teams working on catching up with Claude Code, and I wouldn't be surprised if they manage to close the gap significantly or even surpass it.
Google has the datasets, the expertise, and the motivation.
I've had the same experience with editing shaders. ChatGPT has absolutely no clue what's going on and it seems like it randomly edits shader code. It's never given me anything remotely usable. Gemini has been able to edit shaders and get me a result that's not perfect, but fairly close to what I want.
Have you compared it with Claude Code at all? Is there a similar subscription model for Gemini as for Claude? Does it have an agent like Claude Code or ChatGPT Codex? What are you using it for? How does it do with large contexts? (Claude Code has a 1-million-token context.)
I tried Claude Opus but at least for my tasks, Gemini provided better results. Both were way better than ChatGPT. Haven't done any agents yet, waiting on that until they mature a bit more.
Gemini 3.1 (and Gemini 3) are a lot smarter than Claude Opus 4.6
But...
Gemini 3 series are both mediocre at best in agentic coding.
Single shot question(s) about a code problem vs "build this feature autonomously".
Gemini's CLI harness is just not very good, and Gemini's approach to agentic coding leaves a lot to be desired. It doesn't perform the double-checking that Codex does, it's slower than Claude, and it runs off and does things without asking or clearly explaining why.
(Claude Code now runs claude opus, so they're not so different.)
>it's [Gemini] nowhere near claude opus
Could you be a bit more specific, because your sibling reply says "pretty close to opus performance" so it would help if you gave additional information about how you use it and how you feel the two compare. Thanks.
On top of every version of Gemini, you also get both Claude models and GPT-OSS 120B. If you're doing webdev, it'll even launch a (self-contained) Chrome to "see" the result of its changes.
I haven't played around Codex, but it blows Claude Code's finicky terminal interface out of the water in my experience.
It is a rather attractive view, and I used to hold it too. However, seeing as Alphabet recently issued 100-year bonds to finance its AI CapEx bloat, they are not that far off from the rest of the AI "YOLO"s currently jumping off the cliff...
They have over $100B in cash on hand. I can't pretend to understand their financial dealings, but they have a lot more runway before that cliff than most of the other companies.
This is the conclusion I came to as well. Either make your own hardware, or drown paying premiums until you run out of money. For a while I was hopeful for some competition from AMD but that never panned out.
Google has proven themselves to be incapable of monetizing anything besides ads. One should be deeply skeptical of their ability to bring consumer software to market, and keep it there.
They don't have the know-how (except by proxy via OpenAI), nor custom hardware, and somehow they are even worse at integrating AI into their products than Google.
They don't need to. Just like Amazon, they are seeing record revenue from Azure because of their third-party LLM hosting platform, which is only gated by the fact that no one can get enough chips right now.
I was thinking about that (I definitely agree with you on the software and data angle).
But when you think about it, it's actually a bit more complex. Right now, (e.g.) OpenAI buys GPUs from (e.g.) NVidia, who buys HBM from Samsung and fabs the card at TSMC.
Google instead designs the chip, with, I assume, a significant amount of assistance from Broadcom (at least in terms of manufacturing), who then buys the HBM from the same supplier(s) and fabs the card at TSMC.
So I'm not entirely sure the margin savings are that huge. I assume Broadcom charges a fair bit to manage the manufacturing process on behalf of Google; almost certainly a lot less than NVidia would charge in terms of gross profit margins, but Google also has to pay for a lot of engineers to do the work that would otherwise be done at NVidia.
No doubt it is a saving overall - otherwise they wouldn't do it. But I wonder how dramatic it is.
Obviously Google has significant upside in the ability to customise their chips exactly how they want them, but NVidia (and, to a lesser extent, AMD) can probably source more customer workflows/issues from their broader set of clients.
I think "Google makes its own TPUs" makes a lot of people assume the entire operation is in house, but in reality they're just doing more design work than the other players. There's still a lot of margin "leaking" through Broadcom, memory suppliers, and TSMC, so I wonder how dramatic it really is.
My take is it's the inference efficiency. It's one thing to have a huge GPU cluster for training, but come inference time you don't need nearly so much. Having the TPU (and models purpose built for TPU) allows for best cost in serving at hyperscale.
Yes, potentially. But the OG TPUs were actually very poorly suited for LLM usage, designed for far smaller models with more parallelism in execution.
They've obviously adapted the design, but optimising in hardware like that is a risk: if there is another model-architecture jump, a narrow, specialised set of hardware means you can't generalise enough.
Prefill has a lot of parallelism, and so does decode with a larger context (very common with agentic tasks). People like to say "old inference chips are no good for LLM use" but that's not really true.
NVidia is operating with what, 70% gross margin? That’s what Google saves. Plus, Broadcom may be in for the design but I’m not sure they’re involved in the manufacturing of TPUs.
Just by removing NVidia's profits, Google gets their TPUs for something like 25-30% of what it would cost to get them from NVidia, assuming similar cost structures. Google's cost structure is probably higher than NVidia's, so realistically they're probably paying around 50% of what NVidia charges, but that's still billions of dollars a year and lets them create a product that is tailor-made for their needs.
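Back-of-the-envelope, using the numbers in this thread (NVidia's ~70% gross margin, with Google's heavier cost structure modeled as an assumed ~1.7x multiplier on NVidia's unit cost):

```python
nvidia_price = 100.0                # normalized price of an NVidia GPU
nvidia_gross_margin = 0.70          # ~70% gross margin, the figure cited above
nvidia_unit_cost = nvidia_price * (1 - nvidia_gross_margin)   # ~30.0

# Best case: Google's costs match NVidia's exactly,
# so a TPU costs ~30% of an equivalent NVidia GPU.
best_case = nvidia_unit_cost / nvidia_price

# More realistic: Broadcom fees and in-house engineering inflate
# Google's unit cost (the 1.7x factor is an assumption), landing
# around half of NVidia's sticker price.
realistic = (nvidia_unit_cost * 1.7) / nvidia_price
```

The 1.7x overhead factor is illustrative, not a known figure; the point is just that even a heavily loaded cost structure stays well under NVidia's price.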
Yeah this is a bummer. If it goes south everyone in power will also have perfect hindsight and say they saw it coming because obviously you shouldn't have this much built on such a small footprint. And yet...
> Yeah this is a bummer. If it goes south everyone in power will also have perfect hindsight and say they saw it coming because obviously you shouldn't have this much built on such a small footprint. And yet...
It'll be true: everyone does see it coming (just like with rare-earth minerals). But market-infected Western society doesn't have the maturity to do anything about it. Businesses won't, because they're expected to optimize for short-term financial returns; government won't, because it's hobbled by biases against it (e.g. any failure becomes a political embarrassment, and there's a lot of pressure to stay out of areas where businesses operate and not interfere with them).
America needs a lot more strategic government control of the economy, to kick businesses out of their short-term shareholder-focused thinking. If it can't manage that, it will decline into irrelevance.
When USSR fell there was a lot of talk about how it was meant to be since the US system is the best of all the terrible systems. It deserved to win and the USSR system deserved to die.
>America needs a lot more strategic government control of the economy, to kick businesses out of their short-term shareholder-focused thinking. If it can't manage that, it will decline into irrelevance.
If it is meant to be, then it's meant to be. If the US decides to cling to its old system and it fails, well, then we would know that it wasn't the best system after all. Humankind will keep moving forward even if it means another continent controls the show.
While I think Gemini is the worst of the three big competitors, Waymo is a superb example of this talent. Kudos to Google engineers for producing so many diamonds despite producing many terrible flops over the years. We might find out their system of organization was the best one after all.
You better hope Anthropic and OpenAI thrive, because a world in which Google is the sole winner is a nightmare.
Google's best trick was skirting the antitrust ruling against them by making the judge think they'd "lose" AI. What a joke.
Meanwhile they're camping everyone's trademarks, turning them into lucrative bidding wars because they own 92% of the browser URL bars.
Try googling for Claude or ChatGPT. Those companies are shelling out hundreds of millions to their biggest competitor to defend their trademarks. If they stop, suddenly they lose 60% of their traffic. Seems unfair, right?
I'm waiting to see a more egregious company than OpenAI and a bigger scammer CEO than Altman. No, thank you. I hope OpenAI goes bankrupt, especially since the ousting of Ilya.
Honestly at this point, I don't care which company lives or dies.
Because recent open-source models have reached my idea of "enough". I just want the bubble to burst. I think the point of a bubble burst is that Anthropic and OpenAI wouldn't survive, whereas Google has a chance of surviving; but even then we'd still have open-source models, and the burst could even drive hardware costs down.
OpenAI and Anthropic walked so that Google and open-source models could run. I do wish for competition and hope that maybe all these companies can survive, but tokens are going to cost more, and maybe that will tilt things further toward hardware.
I just want the bubble to burst, because the longer it drags on, the more severe the impact will be compared to whatever improvements we might see in open-source models. And to be quite frank, we might be living through an over-stimulus of "intelligence". Has the world actually improved?
Everything I imagined in AI has sort of been reached and beyond, and I am not satisfied with the result. Are you?
I mean, now I can make scripts to automate this and that, but I feel like we lost something much more valuable in the process. I have made almost all of my projects with LLMs, and yet they are still empty. Hollow.
So to me, bursting the bubble is of the utmost importance now, because as long as it continues, we are subsidizing the bubble itself, and we are gonna be the ones who face the most impact. Indeed, we already are.
In hindsight, I think evolution has a part in this. We humans are hard-coded not to step outside the tribe, or away from the newest thing, so maybe collectively, as a civilization, we can get disenchanted: first via crypto, now AI. But we can also think for ourselves, and civilization is built from us, in my naive view.
So the only thing we can do is think for ourselves and try to learn but it seems as if that's the very thing AI wants to offload.
Also, Sam Altman (at least) gives the impression of being a bit of a manipulative psychopath. Even if there are others out there like him, who are just more competent at hiding their tendencies, I really don't want him to win the "world's richest man" jackpot; it'd be a bad lesson to others. Steve Jobs hero-worship is bad enough.
Downvote all you want. Google has all the money to keep up and just wait for the others to die. Apple is a different story, btw: they could probably buy OpenAI or Anthropic, but for now they're just waiting, like Google. And since they need to provide users AI after the failure of Apple Intelligence, they prefer to pay Google and wait for the others to fight each other.
openai and anthropic know already what will happen if they go public :)
That's not a well-informed argument. Even if Apple could finance the $1T+ it would cost to buy Anthropic, they're not making that money back by making the iPhone a little better. The only way to monetize is by selling enterprise services to businesses, as Anthropic does. And that's not Apple's "DNA," to use their language.
Google is vulnerable in search, and that already shows as we see a decline while many parallel paths emerge. At the beginning it was a simple lookup for valid information, and it became dominant; then pay-ranked preference spots filled the pages and obscured what you wanted, and it became evil.
We see no such thing. Google just announced record revenue and profit, and Apple hinted at not seeing any decline in revenue from their search deal with Google, which is performance-based.
And Gemini is already integrated into the results page and gives useful answers instantly, alongside advertising... What problem for google are you seeing?
Google is the new Open AI.
Open AI is the new Google. Guess who wants to shove advertisements into paying customers' face and take a % of their revenues for using their models to build products? Not Google.
Google's main revenue source (~75%) is advertising. They will absolutely try to shove ads into their AI offerings. They simply don't have to do it this quickly.
> Guess who wants to shove advertisements into paying customers' face and take a % of their revenues for using their models to build products? Not Google.
A search engine is something you leech for free without paying, so you get ads shoved in your face. The core argument here is getting ads in your face despite having paid for the product. And besides, no one is forcing you to use Google when you have so many alternatives.