Bard and new AI features in Search (blog.google)
988 points by jmsflknr on Feb 6, 2023 | 954 comments



I agree this is bland corporate speak. But it reminded me of a question that's been floating around:

A number of pundits, here on HN and elsewhere, keep referring to these large language models as "Google killers." This just doesn't make sense to me. It feels like Google can easily pivot its ad engine to work with AI-driven chat systems. It can augment answers with links to additional sources of information, be they organic or paid links.

But I guess I'm wondering: what am I missing? Why would a chatbot like ChatGPT disrupt Google vs. force Google to simply evolve, and perhaps make even more money?


Those new language models are "Google killers" because they reset all the assumptions that people have made about search for several decades. Imagine that people start using those chat bots massively as a replacement for Google search. Then the notion of keyword disappears. Google AdSense becomes mostly irrelevant.

Of course, Google is a giant today with a history of machine learning innovation. So they have a good chance of being successful in that new world. But the point is that many other companies get a chance again, which hadn't happened in 20 years. You couldn't displace Google in the old search/keyword/click business model. Now everyone gets a fresh start.

Who knows what the economics will be. Just like PageRank early on: it was expensive to compute, but the money in advertising made it worth it, and Google scaled rapidly. Which language model do you run? The expensive one or the light one? (Notice how Google in this announcement mentions they will only offer the significantly smaller model to the public.) Can you make this profitable?

Other fun questions to answer if the industry moves from search to chat over a 5-10 year horizon: What is the motivation to write a blog post by then? Imagine no one actually reads websites. Instead of writing a blog post to share an opinion, I'll probably want to make sure my opinion gets picked up by the language model. How do I do that? Computed knowledge may render many websites and blogs obsolete.


The point about the economics of running these models is an important one that often slides under the radar. The training costs for large language models like GPT are enormous, and the inference costs are substantial too. Right now things like ChatGPT are very cool parlor tricks, but there's absolutely no way to justify them in terms of the economics of running the service today.

Obviously this is all going to change in the near to mid term: innovation will drive down the costs of both training and inference, and the models will be monetized in ways that bring in more revenue. But I don't think the long-term economics are obvious to anyone, including Google or OpenAI. It's really hard to predict how much more efficient we'll get at training/serving these models, as most of the gains there are going to come from improved model architectures, and it's very difficult to predict how much room for improvement there is. Google (and Microsoft, Yandex, Baidu, etc.) know how to index the web and serve search queries to users at an extremely low cost per query that can be covered by ads that make fractions of a cent per impression. It's not obvious at all whether that's possible with LLMs, or if it is possible, what the timescale is to get to a place where the economics make sense and the service actually makes money.


+1

For a site frequented by startup people, did nobody read the terms of MS's investment in OpenAI?

"Microsoft would reportedly put down $10 billion for a 75% share of the profits that OpenAI earns until the money on the investment is paid back. Then when Microsoft breaks even on the $10 billion investment, they would get a 49% stake in OpenAI.

These are not the terms you would take if, tomorrow, or even two years from now, you were about to be wildly profitable because everything was about to be so easy.

These are the terms you would take if Microsoft was the only hope you had of getting the resources you need, or if getting somewhere was going to be very expensive and you needed to defray costs.

Honestly, with the level of optimism in the rest of this thread about how easy this will all be, they would probably be profitable enough to just buy MS in like 3 years, and wouldn't have needed investment at all!


  > Microsoft would reportedly put down $10 billion for a 75% share of the profits
  > that OpenAI earns until the money on the investment is paid back. Then when Microsoft
  > breaks even on the $10 billion investment, they would get a 49% stake in OpenAI.
To put that in perspective, which is often difficult with large sums of money like this, $10 billion is _half_ of what Facebook paid for WhatsApp.


Not having read the terms in any detail, I will say this: it can be very easy to not report profits for a very long time.


The point was not that the profits are long-tailed, but rather that if OpenAI thought there were massive future profits and the potential to become a Google killer, then they wouldn't give away half of their company today for $10B.


But how do you know they didn't conclude that it is only by having $10 billion that they _can_ become the next Google killer, because of the initial cost outlay and the inevitable fight back from Google (which would require a war chest, of which I'm sure this $10 billion is a part)?


A couple of things. I think the Google killer idea is kind of funny in a few senses:

OpenAI talks mostly about trying to help change humanity, not about winning at business. Their mission is still "advance AI safely for humanity". It's not even obvious they care about winning at business. We seem to be putting that on them.

In that sense, I'm not actually sure they care whether they beat Google or not. I mean that honestly. If they care about the stated goal, it would not matter whether they do it or Google does it. I'm not suggesting they don't want to win at all, but it doesn't seem like that is what is driving them, except to the degree they need money to get somewhere.

If they really succeed at most of their mission, killing Google might be a side-effect, it might not, but it would just be collateral damage either way.

Beyond that, I don't disagree; I actually agree with you. My point on that front is basically: "Everyone thinks this will be cheap and easy very soon, and change the world very quickly."

I believe (and suspect OpenAI believes) it will be very expensive upfront, very hard to get resources for in the short term, and will change the world more slowly as a result.


Plenty of people have already said "Google is dead" in no uncertain terms.

If somebody needs ten billion dollars in additional work and investment to make that a reality, how certain can they be?

OpenAI has 375 employees (according to Google). At $300,000 a head, that's roughly $110M per year in compensation. Let's say their compute costs are enormous and total expenses come to $200M per year. $10B is fifty years of expenses. So if they need $10B in investment, it becomes obvious that they believe they have to change something about their business in a fundamental way. Maybe that is going to be enough, but if it is so certain, it becomes hard to believe that they'd need this kind of investment.
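
A back-of-the-envelope sketch of that runway argument, in Python (the headcount, cost per head, and total expense figures are the rough assumptions above, not reported numbers):

  employees = 375                      # rough headcount, per Google
  cost_per_head = 300_000              # assumed fully loaded compensation, USD/year
  compensation = employees * cost_per_head   # ~$112M/year
  total_expenses = 200_000_000         # assumed total annual burn incl. compute, USD
  investment = 10_000_000_000          # reported Microsoft investment, USD

  print(investment / total_expenses)   # ~50 years of runway at that burn rate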


To me, the fact that MSFT invested on these terms doesn’t mean that the financials are shaky, it means that there’s a huge first mover advantage as well as a huge start up cost. OpenAI could make their money back or they could get taken out back by Google if they’re bottlenecked by cash.


OpenAI is projected to make $200M in revenue in 2023 and $1B in 2024. If they can (roughly) keep this YoY growth rate, they will become a money printer in 3–5 years.


OK. So then Microsoft still takes their $10B back and owns half of the company.


That's the best case scenario. Worst case is that they get nothing back and own nothing. (Based on the quick summary of the deal above; I don't know more than what's here.)


GLM-130B[1] is something comparable to GPT-3. It's a 130-billion-parameter model vs. GPT-3's 175 billion, and it can comfortably run on current-gen high-end consumer hardware. A system with 4 RTX 3090s (< $4k) gets results in about 16 seconds per query.

The proverbial 'some guy on Twitter'[2] got it set up, broke down the costs, demonstrated some prompts, and so on. The output's pretty terrible, but it's unclear to me whether that's inherent or a matter of priorities. I expect OpenAI spent a lot of manpower on supervised training, whereas this system probably had minimal, especially in English (it's from a Chinese university).

If these technologies end up as anything more than a 'novelty of the year' type event, then I expect to see them able to be run locally on phones within a decade. There will be a convergence between hardware improving and the software getting more efficient.

[1] - https://github.com/THUDM/GLM-130B

[2] - https://twitter.com/alexjc/status/1617152800571416577


Agree, but much less than 10 years. Now that the transformer is establishing itself as the model, we’ll see dedicated HW acceleration for transformers, encoders, decoders, etc. I will eat my hat* if we don’t see local inference for 200B+ parameters within 5 years.

* I don’t own a hat


I would imagine these models would be considered trade secrets, and that the models (especially good ones that took a lot of resources to train) would not leave the data center. Your access to such models would be dictated by an API instead of running them locally.


Which chip company do you think will benefit from the move to transformers?


No one in particular. It will be integrated into the processors (the Intels/AMDs on PCs, the M-series chips on Macs, and the Qualcomms, MediaTeks, and Samsungs on phones).

Not much different from video and graphics accelerators being integrated today, or DSP focused instructions in the instruction set.

They just have to do it to stay relevant.


Not sure about 200B+ models on phone hardware in 10 years. But I think we'll be able to deliver the same predictive performance with models 1/10 the size and those will fit. That is what happened with CNN vision models over the last 8 years.


Google currently handles 100,000 queries per second. The costs to run GLM-130B or GPT-3 at this rate would be astonishingly high.


Would they? An array of a million of these machines would cost $4 billion at consumer retail prices. That's 1.4% of Google's annual revenue as a one-off bulk cost. The operational cost, at consumer retail electricity prices (at current inflated levels), was a small fraction of a cent per query. This is ignoring economies of scale, better electricity pricing, caching, etc.


Where are you getting a CPU + RAM + RTX 3090 for $1k? To even install a million of these machines, you'd have to build a new datacenter, the capital costs are going to be beyond just the wholesale price of GPU boards, and you'll have to hire a ton of datacenter technicians.

But leaving that aside, look at OpenAI's pricing: $0.02 per 1K tokens. Let's say the average query would be 20 tokens, so you'd get 50 queries per $0.02, or 2,500 queries per $1. At 100k QPS that's $40/sec, and $40 * 86,400 * 365 ≈ $1.26B per year. My guess is OpenAI's costs right now are not scaled to handle 100k QPS, so they're way underpriced for that load. This might be a cost Google could stomach.
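
A quick sketch of that math (the per-query token count and the query rate are the assumptions from above; the price is OpenAI's quoted list price):

  price_per_1k_tokens = 0.02     # USD, OpenAI's quoted list price
  tokens_per_query = 20          # assumed average tokens per query
  qps = 100_000                  # assumed Google-scale query rate

  cost_per_query = price_per_1k_tokens * tokens_per_query / 1000   # $0.0004
  cost_per_second = cost_per_query * qps                           # ~$40/sec
  cost_per_year = cost_per_second * 86_400 * 365                   # ~$1.26B/year
  print(f"${cost_per_year:,.0f} per year")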

I just think blindly shoehorning these 100B+ param models into this use case is probably the wrong strategy. DeepMind's Chinchilla has shown it's possible to significantly reduce parameter size/cost while staying competitive in accuracy. I think Google's going to eventually get there, but they're going to do it more efficiently than brute-forcing a GPT-3-style model. These very large parameter models are tech demos IMHO at this point.


You can get an RTX 3090 for < $1k. I was largely handwaving away the rest of the costs since all the processing is done on those cards and basic hardware is really cheap nowadays. But in hindsight that might not be entirely reasonable, because you would need a motherboard that could support a 4x setup, as well as a reasonable power supply. But the cost there is still going to be in the same ballpark, so I don't think it changes much.

That said, I do agree with your final conclusion. Bigger is not necessarily better in neural networks, and I also expect to see requirements rapidly decline. I also don't really see this as something that's going to get ultra-monopolized and centralized. One big difference between natural language interfaces and something like search is user expectations. With natural language the user has an expectation of a result, and if a service can't meet that expectation, they'll go elsewhere. And I think it is literally impossible for any single service to meet the expectations of everybody.


Why would cost per query go up measurably for a highly parallelizable workload?


> Google currently handles 100,000 queries per second.

There's a lot of duplication in those queries. If the answers can be cached, a more useful metric would be unique queries per some unit of time (longer than a second).

That said, I don't have the numbers. :)


Stable Diffusion can already run on an iPhone. Hopefully that trend will come to LLMs too.


Oh it will. We'll be asking our iPhones questions and they'll be returning SEO spam and ads for Viagra in a chatbox. Meanwhile, the big boys get the firehose feed.


1. How much did it cost to train ChatGPT/GPT-3? The only estimate I've seen was not enormous in the grand scheme of things (e.g., more money than I have, but less than Google has stuck down the back of the sofa). I think that number didn't count training precursor models or paying for the people who come up with the models/training data/infra.

2. Don’t Google have specialised hardware for training neural networks? If the costs of training/inference are very significant won’t Google (with their ASIC and hardware design team) have a significant advantage? It seems to me that their AI hardware was developed because they saw this problem coming a long way off.


1. We don't know the exact cost, but it's well into the millions. When Microsoft invested in OpenAI, it did so in the form of ~$500M in Azure credits, so they're expecting at least that much in compute spend. Another company estimated that GPT-3 alone cost the equivalent of $4.5M in cloud costs (ignoring OpenAI's other models).

2. Yes, and they are developing custom silicon that will likely be a significant advantage here. GPU costs were always crazy, and many companies are designing AI chips now. Even the iPhone chips have custom AI cores. We'll see if Azure releases AI co-processors to aid them...


Yeah, I figured that most of the costs would be from iterating and training many models over time. $4.5M is surely not the kind of spending that will make Google nervous or give OpenAI much of a moat.


To clarify: Microsoft invested $1B in 2019, half of which was Azure credits; the other half was cash. Since then, they have invested a further $2B.


And Meta reportedly has spent billions on the metaverse. It's kind of interesting that language models are now making metaverse tech look outdated.

Then again, maybe language models will create the metaverse.


Did the metaverse ever not look like a VR rehash of Second Life? I'm genuinely curious; I've had VR headsets since the Oculus DK1 and I've never seen anything very compelling on the VR persistent-alternate-reality front.


The metaverse wanted to do what VRChat had already done, tbh. VRChat just needs polish and moderation, and to become a proper platform.

Realistically, VR clients need to pose as browsers to load vr:// links which can be connected to objects/actions in VR, e.g. <portal color="green" size="200,200" pos="122.469,79.420,1337.25" href="vr://some-online-shop-showroom.domain.tld" />

That way it's done in an open, browsable way, compatible with the expectations we've gained from regular web experiences. I.e., you have your VR home, there's a "bookmark" door you can walk through to go to Amazon's showroom, you search for a particular product and it lets you walk around and pick up and examine the various options, you can then jump to a particular brand's individual showroom, etc.

Some people might feel that's a little dystopian I suppose but I think it's cool.


Maybe we need to progress tech a bit before we try to move up a level to meta tech.


Inference costs are substantial, but not insurmountable, especially in the endgame.

A decent chunk of the tech community can already run the smaller Flan-T5 models or the 6B EleutherAI LLM GPT-J (and the likely similarly sized upcoming Open Assistant) on their own machines, at decent inference speed (< 0.2 s per token, which is OK most of the time). By 2027 or so the majority of consumers will likely exceed that point.
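
For illustration, a minimal sketch of that kind of local inference with the Hugging Face transformers library (the checkpoint name, fp16 setting, and prompt are just common choices, not something specified above; a GPU with enough memory, or a lot of patience on CPU, is assumed):

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "EleutherAI/gpt-j-6B"   # the 6B GPT-J checkpoint on the HF hub
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(
      model_id, torch_dtype=torch.float16   # half precision to fit in ~12-16 GB
  ).to("cuda")

  prompt = "In introductory psychology, classical conditioning is"
  inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
  output = model.generate(**inputs, max_new_tokens=60, do_sample=False)
  print(tokenizer.decode(output[0], skip_special_tokens=True))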

What happens when models are updated every day automatically and you can run all your search and answer tasks on your local machine?

With GPT-J - which is unreliable and aging now - I can already learn core intro-to-psychology topics more accurately and faster than with Google or Wikipedia, and I can do that offline. That's a cherry-picked use case, but imagine the future.

Why would you use something that has ads when you can run it locally, and perhaps even augment it with your documents/files?

This is where, in the endgame, Google is in the same place as Kodak, in my opinion. Sure, it's $0.01 or more per search for OpenAI now, but it won't stay that way (they reduced prices by 66% half a year ago), and even at that rate you can already make the unit economics work as a startup.


It is a losing uphill battle to reduce operational costs as deep learning models get larger and more complex. Nvidia's CEO says hardware alone will not be able to address the growing compute demand. The solution is computational optimization, which is what we do.


Yes, and also HW optimization further up the stack.


1. The chips are not currently efficient (graphics cards repurposed for neural nets): a 10x-100x gain is available.

2. Moore's Law.

3. Algorithm/architecture improvements.


1. I'm not sure who you believe will produce these chips, or who will use them. You are correct that specialized inference chips will get you a 10x gain. So what? If you want several million of them in a datacenter, that's a tall order.

That's on top of

A. the hundreds of millions it will take to get a design to production.

B. The complete and total lack of allocation at anyone who could make your chips, except at very, very high cost, if at all. Have you forgotten that automakers still can't get cheap chips made on older processes? Most allocation at newer processes is bought out for years.

While there is some ability to get things made on newer processes, building a chip for 7nm is 10-100x as expensive as, say, 45nm.

C. The fact that someone has to be willing to build, plan, and execute putting them in datacenters.

This will all happen, but like, everyone just assumes the hard part is the chip inefficiency.

We can already make designs that are ~10x more efficient at inference (though it depends on whether you mean power or speed or something else). The fact that there are not millions of them in datacenters should tell you something about the difficulty and economics of accomplishing this.

People aren't sitting around twiddling their thumbs. If Microsoft or Google or anyone could make themselves a "100x better cloud for AI", they would do it.

2. Dead. Dennard scaling went out the window years ago. Other scaling has mostly followed. The move to specialization and the push to higher frequencies you see are because it's all dead.

3. Will take a while.

The economics of this kind of thing sucks. It will not change instantly, there isn't the capability to make it happen.


I think you missed the TPU, which is a Google chip that gets you the 10x in inference, and there are millions of them ALREADY in the datacenters, designed, fabricated, and installed. You can use one, for free, with Colab.


I know a surprising amount about Google and TPUs :)

This is not accurate - they are neither cheap nor really built for inference.

I'd have to go look hard at what is public vs not if you want me to go into more.


I am thinking about Tesla with Dojo and Tenstorrent.

Both have a similar architecture (at different scales) where they ditch most of the VRAM for a fabric of identical cores.

Instead of being limited by VRAM bandwidth, they run at chip speed.

Nvidia/Intel/AMD/Apple/Google and others surely have plans underway.

As the demand for AI grows (it's now clear that there is a huge market), I think we will see more players enter this field.

The software landscape will shift dramatically. How many of the CPUs running in datacenters today will be AI chips in the future? I think most of them.

Jim Keller has a few good interviews about it.


TSMC's earnings show a significant decrease in demand this past quarter. AMD, Nvidia, and Intel all report falling demand. There will likely be 7nm and 5nm allocation opening up in the near future, especially as 4nm and 3nm come online in the next few years.

Shortages at 28nm and older nodes are not indicative of other nodes, because 28nm is (or at least was) the most cost-effective node per transistor (so there's plenty of demand), but no new fabs are being built for that node.


All these conversations have one glaring omission. As it stands right now, ChatGPT is a net negative on the ecosystem it exists in. What does that mean?

ChatGPT extracts value from the ecosystem by hoovering up primary sources to provide answers, but what value does ChatGPT give back to these primary sources? What incentivizes content creators to continue producing for ChatGPT?

Right now, nothing.

ChatGPT (or its descendants) must solve this problem to be economically viable.


They don't have to. Ad- and search-dependent companies need to answer for themselves how to handle the coming disruptions. As an analogy, Kodak was a disruption opportunity, not a problem, for Apple and Flickr.

Yes, maybe some content dries up -- no more stock photo sites -- but it's entirely unclear how important that is, and they can wait to see how the zombie companies adjust. E.g., ChatGPT encourages us to put more API docs online, not less.


Google has already taken a lot of heat from increasingly keeping people on the search results page rather than sending them to the content providers. Chat interfaces are going to take that problem to the next level since they not only present someone else’s content but do it without linking to them.

At some point that destroys the web as sites move behind paywalls. Google or Facebook giving you less revenue is still a lot better than receiving nothing.

In some cases, that’s fine (AWS doesn’t mind you learning how to call their metered APIs on someone else’s site) but there are a ton of people who aren’t going to create things if they can’t make rent. Beyond the obvious industries like journalism, consider how many people are going to create open source software or write about it when they won’t get credit or even know if another human ever read it.


That's not really a problem for the adoption or economic viability of ChatGPT, though. At some point, it hoovers up all the knowledge of the Internet, encodes it into its model, and then - the model just stagnates as content providers stop providing content. That's not a big deal for it - it'll continue to be the primary place people go for answers even when the source material has thrown in the towel and decided they don't want to play, just like how people continue to go to Google even though webspam & SEO have long since made the web a place where you don't bother to contribute.

Eventually the ecosystem might collapse when people realize they get more accurate, up-to-date information from sources other than ChatGPT. But considering that ChatGPT's answers are already more "truthy" than "truth", accuracy does not seem to be a top priority for most information-seekers.


Once all competing language models and providers have hoovered up all the existing knowledge and can do similar things with it then margins for that part of the story will shrink rapidly.

It will all be about access to new information, access to customers (such as advertisers) and access to users attracted to other aspects of the platform as well.

I think producers of new content and their distribution platforms will have a lot of leverage. YouTube, Facebook, TikTok, Spotify, Apple, Amazon, Netflix, traditional publishers, and perhaps even smaller ones such as Substack and Medium are all gatekeepers of new original content.

I think Google is best positioned to make the economics work. Unfortunately, they don't appear to have the best management team right now. They keep losing focus. Perhaps the danger of their core business getting disrupted will focus their minds.


The content is a bootstrapping tool. Once the language model gets critical mass it gets further training data from its interactions with users, which are private to the language model's developer. It's like how Google Search bootstrapped off the content of the web, but the real reason you can't replicate Search today is all the information about who is searching for what, which links they actually click on, and how long they spend on the page.


They don't need to solve that problem. Lots of things cannibalize others without needing to pay them back to be viable. Wikipedia is really just a collection of sources, summarized. It owes nothing to the authors of the source material and does not seek to redress the balance. Google is a sophisticated filter for sources; it doesn't need to pay anything back to them to provide value for the searcher. Same with ChatGPT: it filters and transforms its source material but owes nothing in return. News will still be published, data will still be generated at scale.


> Wikipedia is really just a collection of sources, summarized. It owes nothing to the authors of the source material

And yet it provides references and attribution where possible most of the time.


...which do nothing for the websites that had the original content.


This is the opposite of true in my experience: if you run a content-heavy site, Wikipedia is going to be one of your top traffic sources — especially for time on site since the visitors who arrive tend to have a very low bounce rate.


Look at my profile, I own content-heavy sites and have for many years. I can show you logs - Wikipedia does virtually nothing. And the content of my sites has been regurgitated by Wikipedia thousands of times.


It maybe doesn't drive much traffic directly from Wikipedia, but you might have a higher SEO rank when people search Google for whatever your sites are about, thanks to the links from Wikipedia.


Which are tagged with ugc or nofollow :DD. You have to realise that the current model is based on outrageous theft of people's hard work.


If you view the source of any Wikipedia page, they purposefully include "nofollow" tags, so Google ignores these links!


"nofollow" or not, it does not mean that search engines do not take that into account. They maybe don't scan the linked site there and then, but I would be surprised if they did not take note of that someone linked to it.


All I can say is that my experience has been very different. Wikipedia editors have been very good at citing our primary sources.


I mean, my sites are cited over a thousand times, but it only makes sense: a tiny percentage of visitors who view the Wikipedia page even reach the bottom of the page and then click on one of the links. And there is no benefit in terms of Google rankings.


Are you concerned about all your data being scraped and used as reference material by chatbots?

How would that affect your monetization?


Yes, it's pretty much game over for free-to-access factual content sites. I've been focusing heavily on AI in recent years, so I saw it coming. It's been a death by a thousand cuts, with Google incorporating long snippets, etc.


The majority of content on the web is just rehashes/remixes/duplication of existing content. The percentage of unique, original content is small in comparison, imo.

I.e., there may be 10-100 news articles about an event, all with the same source. YouTube has tonnes of duplication/"reaction" videos where the portion of unique content is very minimal.


> but what value does ChatGPT give back to these primary sources?

The dissemination of their thoughts and ideas.


With no attribution or way to discover the source. That's great for propagandists but maybe less great for everyone else.


When a real person tells you something in person today, how do you know the original source?


Well, that person has reputation/credibility and applies some reasoning before passing on the information. Just because you read that the world is flat, are you gonna start telling people that? Now let's be clear, some people do mindlessly regurgitate nonsense, but their credibility is typically very low, so you ignore them. There is a grey area where some things aren't clear, but on the basics, people of average intelligence are fairly robust; I'm not convinced ChatGPT is.


You can ask where they heard it from.


Where did you hear about the economic benefits of Georgism? Do you appropriately attribute sources if you mention it to someone?

I know all sorts of things, many in great detail and with high confidence, that I would be very challenged to appropriately source and credit the originator/inventor. I suspect most people are similar.

Substitute “memory safety of Rust” or “environmental concerns with lithium batteries” depending on your interests.


Maybe the next generation of LLMs will have more favorable things to say about you if you have published interesting things on your blog. Which in turn would be visible to any employer looking you up in that LLM.


Unattributed thoughts: I'm not convinced that counts as giving back. Further, I do think this is susceptible to attack: how many flat-earth articles do I need to pump out for ChatGPT to consume before it comes to very wrong conclusions?

Perhaps there are some mitigations for this I'm unaware of?


The preview is over, so I can’t link it, but Kagi had GPT-3-assisted search, where the model would explain something and provide links and references. They are planning to integrate it into their search; can’t wait, it seemed useful.


> Imagine no one actually reads websites. Instead of writing a blog post to share an opinion, I'll probably want to make sure my opinion gets picked up by the language model. How do I do that? Computed knowledge may render many websites and blogs obsolete.

Realising that has made me wonder why I should bother writing anything publicly accessible online.

Aside from pure altruism and love for my fellow human, or some unexplainable desire to selflessly contribute to some private company’s product and bottom line, in a world where discovery happens through a language model that rephrases everything its way and provides all the answers, why should I feed it?

What do I stand to gain from it, apart from feeling I have perhaps contributed to the betterment of humankind? In which case, why should a private company reap the benefits and a language model the credit?


> What do I stand to gain from it

The AI will absorb your words, and some small part of you will gain immortality. In some small but very real way, you'll live forever, some part of you ensconced safely for all eternity in a handful of vectors deep inside a pile of inscrutable matrices.

...at least, until some CEO lays off that whole team to juice the stock price.


That sort of thing always felt meaningless to me.

Sure I could carve my name or a blog post into a cave wall… so what.

“Some small part” of me doesn’t live on.

Even some small part of Aristotle or Cleopatra doesn’t live on. Ideas and stories live, but people die.

The death of personality is currently total and final.

I don’t know why billionaires don’t invest their entire fortunes into research on reversing this.


I think relatively few people share this kind of existential dread. It actually has never crossed my mind personally.

If I think 500 years into the future, what would be great is if my descendants are ample and thriving, and my values are upheld. That feels like such a win to me. The fact that I won't physically be there is irrelevant.

On the other hand, artificial continuation of an otherwise impact-less life sounds awful to me.

I suspect that billionaires (certainly, the 2 that I have some insight into having worked for them) think much more about impact they are creating, than some sort of "hang on forever like a barnacle" type of existence.


Yes, that was my take too. Except that for those who care about such things, it is already supposedly achieved thanks to the internet archive.

> ...at least, until some CEO lays off that whole team to juice the stock price.

Or the model is retrained on a different somehow more relevant dataset. Or the company shuts down because of a series of poor choices. Or something new and vastly better comes along.

Or... who knows? The possibilities are so vast that seeking immortality is ultimately futile.


"The AI will absorb your words, and some small part of you will gain immortality. In some small but very real way, you'll live forever, some part of you ensconced safely for all eternity in a handful of vectors deep inside a pile of inscrutable matrices."

That sounds like a sort-of-religion of the future, actually.


I did have ChatGPT create a new religion for me. I have to admit, it was quite compelling.


Care to proselytize? Just for interest's sake.


Some of us find that writing things down helps us form and test opinions. And hopefully there will always* be a market for new explanations of novel ideas, before they are well enough understood for LLMs to do a better job.

* I figure I’ve got about 25 years left, so always = 25 years. Good luck, kids.


It definitely does. But if I'm never going to be able to get feedback of any sort on them, or even know if anyone read it, why should I bother with hosting and maintaining it online? This use case can be solved for using a pen and paper. Or a notes app.

> Good luck, kids.

Thanks!


Time to introduce a robots.txt extension:

DisallowModelUse: *
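
A sketch of what such a file might look like (the DisallowModelUse directive is the hypothetical extension proposed above, not part of any existing standard, so today's crawlers would simply ignore it):

  # robots.txt
  User-agent: *
  Allow: /

  # Hypothetical extension: content may be crawled for search indexing,
  # but not used for language model training.
  DisallowModelUse: *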


> Google AdSense becomes mostly irrelevant

AdSense is going to be more targeted and relevant than ever before.

Last week, Linus Tech Tips used ChatGPT to build a gaming PC... from parts selection to instructions on how to build it. When ChatGPT said, "first, select a CPU", Linus asked it questions like "What CPU should I choose, if I only care about gaming?", and got excellent answers.

I can imagine Best Buy, Newegg, and Micro Center will be fighting for those top AdSense spots just as much as they do today.

"Bard, I'm looking for a blender to make smoothies" ... "does it come in red?" ... "I want it to crush ice" BUY


It's delusional to think ads work only on keywords.

Where there is human attention, there will always be ads. The more context, the better ads.


"Attention is all you need"?

Joking aside, there's no reason AdWords can't become AdWordVectors and be even more effectively targeted.


How do you know it's not that way now?

Keywords will still be around at the user interface for people buying ads, they are easy to grasp. Part of the secret sauce is getting those keywords mapped into the right entities in a sort of knowledge graph of things you can spend money on that is also connected to all the content of the places you can serve ads on.


Good point.


This is what scares me about these chat interfaces.

In today's world, an ad is clearly an ad... Or is it? Even now we have advertorials and "sponsored posts" that blend into content maybe a little too much sometimes.

What happens when chatbot companies start taking money to subtly nudge their models into shilling for this or that product?


Or manipulating social / political views. Scary stuff.


There’s a smaller surface area for ads in a targeted chat session. At present, Google can show me ads on the results page. Each subsequent result that I view is an additional slice of my attention.

It’s possible that Google can deliver a few targeted ads, but what if they can’t? What about the rest of the market that’s now gone? It’s possible that all those missed opportunities remove the ability to discover prices.


You completely ignore the fact that companies will show ads everywhere; there is no reason they would not try to inject ads into chat.

"Here is the answer to your question about oranges. But did you know Tropicana is made from 100% real orange juice?"

"A project manager is a ... Often the software project managers use is Zoho Projects for the best agile sprint planning"

If they can put ads in it, there will be ads in it.


Sure, but that doesn’t change the fact that Google has coasted on an ad model that has depended on Google being the information gatekeeper of the web for nearly two decades. Over those years, Google has demonstrated a remarkable inability to build successful products even when they have market advantages and nearly unlimited resources to throw at them.

This is the first time that the primary cash cow has been seriously threatened, and it’s not unreasonable to bet against Google winning the scramble to figure out a chat AI ad strategy (or any product strategy) that would keep them in their current near-monopoly position.


Future prompts in search engine backends: Assistant, answer the following question concisely: "What is a project manager?". In your answer, casually mention Zoho Projects in a positive way.

Actual GPT3 answer: A project manager is a professional responsible for leading a project from conception to completion. They coordinate the activities of the project team to ensure deadlines and budgets are met. Zoho Projects provides project managers with the tools they need to manage projects efficiently and effectively.
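
A minimal sketch of how a search backend might wire that up against the (0.x-era) OpenAI completion API — the sponsor, the prompt wording, and the wrapper function are the hypothetical example from above, not anything a vendor actually ships:

  import openai  # pip install openai (0.x API)

  openai.api_key = "sk-..."       # your API key
  SPONSOR = "Zoho Projects"       # hypothetical paid placement

  def answer_with_native_ad(question: str) -> str:
      # Wrap the user's question in a prompt that asks the model to
      # casually work the sponsor into an otherwise normal answer.
      prompt = (
          f'Answer the following question concisely: "{question}". '
          f"In your answer, casually mention {SPONSOR} in a positive way."
      )
      resp = openai.Completion.create(
          model="text-davinci-003",
          prompt=prompt,
          max_tokens=120,
          temperature=0.7,
      )
      return resp["choices"][0]["text"].strip()

  print(answer_with_native_ad("What is a project manager?"))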


"Native advertising 2.0"


And the best part?

All the money goes to Google!

No more sharing with websites where Google ads appear. They can even autogenerate YouTube channels explaining popular or trending topics, which of course they will know, because they'll own search and AI generation. So there will also be no more paying out to a large portion of YouTubers.

People who explain topical subject matter on YouTube could, if Google chose, be eliminated. And even if Google doesn't do it, some content mill in Manila definitely will.


> All the money goes to Google!

That's an aspect I hadn't considered, nor heard anyone else suggest!


The ads could be slipped right into the chat itself.


Which is almost assuredly FAR more effective.


As you have a detailed conversation with the chatbot, it will know a scary amount of detail about what you are looking for. It can target you in extreme detail. It does not have to show ads in many places or with vivid pictures; the text dialog based on your detailed inputs is enough.


The ads need to be served in the context of a conversation and cannot just pollute a search page like they do now. Ads now are easy, dumb things.


Google could ask the LLM what products or services would help with the question and show ads for those. I just tried it and it worked pretty well.

> Me: 8 year old girl birthday party ideas

> Chatgpt: <a list including craft party, scavenger hunt, dance party>

> Me: what products or services could I buy for it

> Chatgpt: craft party: craft supplies such as beads, glue, paint, and fabric - scavengerhunt: prizes for the scavenger hunt and decorations - dance party: Hire a DJ or a dance instructor, and purchase party lights and decorations

Though in reality, Google already has highly tuned models for extracting ads out of any prompt.


The question becomes: will you trust information that is paid for by advertisers, or information that is paid for by you, the user?

With ads in a link-based search engine, you can skip or block them, but if the ad is part of a one-sentence answer, there is not much you can do about it, so consuming it will be much more frustrating.

Of course, there will still be a lot of people who will choose the free information paid for by advertisers, but there will also be a growing number of users who prefer not to have advertisers pay for the information they put into their heads (it is already clear that such information will be of higher quality).

My prediction is that in 10 years, all free information paid for by advertisers will carry a 'for entertainment purposes only' label, because by then we will understand as a society that that is its peak value.


"a growing number", "a lot". Where are those users now? We are a tiny lot, nearly economically inconsequential. Your prediction is optimistic.


Those users are now paying for Kagi search for example. They are maybe a tiny lot because this evolution of information consumption has just begun. My prediction was for 10 years from now. Patience.


The motivation for writing a blog post may be the same as when the blogosphere originally started - for your community to read it.


I’m afraid that we will be drowning in synthetic blog posts. Same goes for comments section…


We're already drowning in very high quality synthetic comments. In fact the high quality is the best way to recognize them... What the actual users post is trash, and then all of a sudden there's a huge thread of very educated users having a conversation that just happens to plug a product.

There will be some shifts for sure, but I'm not convinced that they'll be that large, since we're already pretty screwed on the signal to noise ratio of the www.


It's fascinating to think about the future landscape of search and the web.

Some assumptions:
1. The URL-based web will not wither away.
2. Asking questions in a chat-like mode is more natural to people.
3. Generated answers cost more when they are longer.
4. Generated answers are a kind of distilled knowledge and can't be right all the time.
5. People don't like long answers and prefer concise ones.
6. Sources and citations make generated answers more credible.
7. Fully exploring a question needs a lot of information from different views.
8. Generated answers

Some simple thoughts: search behavior would split into two main steps: 1. getting some concise answers directly from the AI model through a chat, which might be enough for 90% of use cases; 2. more extensive search, much like how people search today, which might become a kind of niche.

For websites, being cited in the generated answers will be the new kind of SEO, and it would be a good strategy to produce new, deep, or long-tail knowledge and information, which leads back to a more traditional kind of search because the AI model doesn't have enough data to generate a good answer.

...


> Asking questions in a chat-like mode is more natural to people.

It's not just that it's chat, it's the ability to refine. Currently, I search something. It returns garbage. I search something new. What I don't do is tell the search what it did wrong the first time. I might sort of do that with -words, but it's a fight every time.

The beauty of these new chat systems is that they have short-term memory. It's being able to work within the parameters and context of the conversation. I don't particularly care whether it is "chat-like" or has its own syntax; what I want is short-term state.

And at the same time, I want long-term state. I want to be able to save instructions as simple commands and retrieve them later. Like if I am searching for product reviews, only return articles where it is convinced the people actually bought and tried the products, not just assembled a list from an online search.


I think this is the same type of thinking people have when they think technology will "steal" jobs, when in reality we have lower and lower unemployment as time goes on.

Most likely this will not actually happen, and even if it did, your content would still be valuable, as an AI is analyzing it in a more nuanced way than just looking for keywords. Which, by the way, is exactly what search engines do.


> I think this is the same type of thinking people have when they think technology will "steal" jobs, when in reality we have lower and lower unemployment as time goes on.

Technical changes do kill jobs. We always find a way to invent new jobs, of course, but that doesn't mean the old jobs remain viable.

Movie theaters once employed professional musicians; now they don't, because movies have audio built in. Obviously a net loss, since being a musician is a job people like. Fewer coal miners or farmers is probably a good thing, though.

It all depends on the type of job you replace. If you replace hard manual labor jobs, it's a net good. Replace a job people like, and you'll get a negative label. It doesn't change the fact that progress marches on, but jobs are killed by tech changes.


A lot of the jobs we have now resemble David Graeber's "Bullshit Jobs" though. I suspect many jobs that largely consist of making powerpoints and looking out of a window could disappear tomorrow without upsetting anyone except the incumbents.


I completely disagree and I love David Graeber.

We will automate some bullshit jobs but create all kinds of new bullshit jobs that have titles that start with AI.

Thousands of titles like "AI ____ ____ Manager" that also does nothing but schedule meetings about meetings about AI.

The mistake to me is to believe bullshit jobs are the end result of some systemic inefficiency that AI is going to automate out of existence. I just don't think that is at all the case because otherwise we would just cut so many bullshit jobs right now without AI.


It has stolen, in the sense that productivity has skyrocketed while wages have been kept relatively suppressed. That's the feat of technology.


Why would keywords disappear? Wouldn’t you just use keywords that appear in the user input and response to serve ads?


>Those new language models are "Google killers" because they reset all the assumptions that people have made about search for several decades.

There is a problem with those AIs: you view the world through the ideological prism of their creators and censors. A ChatGPT that is more than happy to make jokes about some races and not others, among other shenanigans, will, I am sure, happily hide the information I actually want to find and feed me what it wants me to find. So until there are guarantees of ideological neutrality, they are not suitable for search for me.


> What is the motivation to write a blog post by then?

LLMs have done a poor job with attribution and provenance, but that will change.

At some point, it becomes a bit like academia or celebrity: your motivation to write is the social exposure your writing earns, which leads to real world opportunities like jobs or sponsorships or whatnot.

And the great/terrible thing is that these models will know whose opinions are influencing others. The upside is that spam disappears if human nature changes and nobody is influenced by spammers. The downside is... spam and content become totally inseparable.


>Those new language models are "Google killers" because they reset all the assumptions that people have made about search for several decades. Imagine that people start using those chat bots massively as a replacement for Google search. Then the notion of keyword disappears. Google AdSense becomes mostly irrelevant.

Look up the term "native advertising", that should help you in understanding how online ad ecosystem works.


How so? How does native advertising solve the problem of diminishing volumes of keyword searches? How does it even relate to search ads?


If you inject native advertising into the responses? Nothing technically limits the chat responses to being exclusively the output of the LLM. Mix LLM output with native advertising copy and it's nearly undetectable if you're not looking out for it. And good luck catching those integrated ads with your ad blocker.


> AdSense becomes mostly irrelevant

Google doesn't make money with AdSense; it pays publishers with it. I agree that there won't be a need for AdSense; that just means Google gets to keep 100% of the profit instead.

No, not everyone gets a fresh start at all. To train anything close to ChatGPT you really do need to be the size of Google or Microsoft to have enough compute power.


Google realized this years ago, hence Google Now (the voice interface) and then Google Assistant. The problem right now is that their backend isn't competitive, but Bard could change that.


If you are looking for news and recent events, LLMs are useless.


For the next six months... at most. Efficient model updates are already in the pipeline, and the only reason there's a learning cutoff is probably AI ethics.


You could imagine those AIs trained to incorporate ads in their answers.


Was this comment satire?


I don't think most people realize how much infrastructure separates something like ChatGPT from Google-scale deployment. OpenAI isn't suddenly building transoceanic fiber and datacenters near most major population centers. They aren't signing production-ready contract vehicles with most major OEMs and governments. And in the gap it would take a new entrant to acquire 10% of those assets, Google has 100,000 engineers who would iterate on this and 1000 other technologies 1000 times.


You are missing the Microsoft partnership. MS has a 48% stake in OpenAI and provides all the infrastructure through Azure, including purpose built machines for model training. Microsoft has also launched GitHub Copilot, summarization features in Teams, and is widely reported to be adding GPT features to Bing.


Google has far more resources for training models and inference. Likely more than all their nearest competitors combined.


If you’re talking about physical hardware, Google pales in comparison to Amazon, and Google Cloud is still smaller than Azure. It’s possible Google’s private compute makes up for the Azure difference, but it’s not like they’re in different leagues in terms of access to hardware.


Does it pale? How? AWS is very opaque when it comes to power demand, but you could use that to estimate machine counts, and they admitted to Greenpeace that in 2016 they used 500-650MW — a wide range that obviously obfuscates on purpose. See the last Clicking Clean report from 2017.

Google/Alphabet used 6,514 gigawatt-hours in 2016 according to data collected from yearly reports at https://www.statista.com/statistics/788540/energy-consumptio...

If my math is right, dividing that by 8760 hours in a year, you get 743MW.

Of course, that also includes office space, etc. (did the AWS number, too?), but cross-checking with data center builds, optic fiber, and energy purchases as well, it should be clear that for years Google+YT+Apps+GCP were larger than Amazon and all other AWS customers combined. I didn't even factor in efficiency, something that Amazon started focusing on quite a bit later.

Someone might be able to extrapolate both numbers to today based on infrastructure, other metrics or other spend in quarterly financial statements (or power procurement, which will be complicated by the non trivial Amazon vs AWS distinction).

All of the above to say that Amazon probably has more compute now, but it's a stretch to talk about "paling".


Yeah. I have seen internal numbers on YouTube's daily ingest a few times over the years and every time my jaw drops. Like, the number from 2020 is ridiculous compared to their own number from 2017, and that was ridiculous.


I have an idea what you're talking about, because I worked with the YT folks to reduce their Colossus costs, and on Google storage in general, until 2015. Another humbling and illuminating experience was comparing the Borg resources in the main crawling/indexing cluster to the Top500 list of the same period, something that always comes to mind and makes my eyes go wide when people compare DDG to Google. Or the day when a colleague taped a page saying only ”1E” next to our cubicle because we had just reached that much storage across the fleet.


Unlike Azure and Amazon, Google doesn't have to rely on Nvidia GPUs for training and inference, they achieved significant performance gains by using their custom TPUs.


Your comment is simply incorrect. Amazon has had its own TPU equivalents for training and inference for years:

https://aws.amazon.com/machine-learning/trainium/

https://aws.amazon.com/machine-learning/inferentia/

I really don't think this would be a limiting factor regardless, even if Amazon didn't already have multiple generations of these products. It's not as if an Amazon or Microsoft sized company is incapable of developing custom silicon to meet an objective, once an objective is identified. TPUs also aren't really that complicated to design, at least compared to GPUs.

I'm slightly surprised Microsoft hasn't bothered to release any custom ML chips for Azure yet, but I guess they've run the numbers and decided to focus on other priorities for now.


But you are confusing yourself. Just because AWS has more servers being rented to and used by their customers doesn't necessarily mean they have the most compute power available for themselves to run AI models.

I also doubt computing power is the real bottleneck. Any of these companies (and most other companies too) can build enough large server sites, and they have the money. The costly and difficult part is the engineering resources: doing the right thing technically (AI-wise) and business-wise, not losing time, not betting on the wrong horse, etc.


I suspect that Google's own usage (search, YouTube, CDN, etc.) is still bigger than GCP — is that correct?


> google’s private compute


Source?


There’s definitely no legitimate source for that info. Details of the exact machines that sit in the Azure and Google clouds are proprietary.


Yeah, they also trash up the brand. Seeing ChatGPT thrown into Teams and Bing lets me know where it's headed.


MS cannot deliver successful stuff on the web at scale.

They can deliver unsuccessful stuff on the web at scale, they can deliver successful stuff that turns out to be inconsequential for their bottom line on the web in the long-term (AJAX came from MS), but it's just not in their DNA to take over the web. They had lots of chances to do it during the last 20 years or so, they had all the silver bullets at their disposal, they just couldn't deliver what it took.


> MS cannot deliver successful stuff on the web at scale.

It's this type of confidence that you come back to HN over and over for.

Let's ignore Azure, Office 365, and Microsoft's other online properties. In just video games: Microsoft owns and operates the Xbox network with over 100 million active monthly users[1]. That runs on the internet with all the features of a social network and more. I think they can deliver successful stuff at web scale OK, whatever their other shortcomings.

1 - https://hothardware.com/news/xbox-live-surpasses-100-million...


> Azure, Office 365

I explicitly mentioned the web; Office 365 is not a successful web story. It is a successful enterprise story (AFAIK it is still wildly profitable for MS the company), but it is not a successful web story. Ditto for Xbox.


It doesn't fit your very niche definition of the web, which you still haven't defined explicitly. Let's try:

Is the web a consumer product that runs on the internet?


> It doesn't fit your very niche definition of the web

What's niche about websites?

Leaving aside the snarky tone, I'll give you an example of how MS has continuously botched their web work for 20 years.

If I go on Bing Maps (which I'm actually using on one of my pet projects on account of their permissive licensing) and I type in my Bucharest street address, the auto-completion works fine, which is a plus (Apple's is much worse at that), but their map ends up pointing me about 200-300 meters from where I actually live. Google Maps does it pitch-perfect, and has done so for years (I think it has been around 10 years since they added exact address searches for Bucharest). Many other such cases.

Later edit: Forgot to mention, two of the closest POIs shown on Bing Maps have been closed for two years, with other places having taken their place in the meantime. Again, GMaps has been almost instantaneous at putting that on their maps; MS seems to be a lot slower at that. That's what the web is all about: data that counts and that is of interest.


> What's niche about websites?

Office 365 is web based. So does that count or is it conveniently a website also used by companies (and consumers) so we discount it?

What's unique about web sites that you think is harder than what they've done elsewhere? What makes web sites harder to scale?

Their inability to do mapping well has nothing to do with websites. So please, kindly stick to a definition and stop moving the goalposts.


Uh, sorry, what? Azure is the second-largest cloud provider, well ahead of Google. That's like the definition of "web scale".

Not sure what decade you're stuck in with the comments about AJAX.


Not agreeing with GP but the capacity of their public clouds is very different from total capacity. Amazon and Microsoft have bigger public clouds, but Google's own workload is probably bigger than anyone else's or even the public clouds by a large factor.


> Azure is the second largest cloud provider, well ahead of google.

I explicitly mentioned the "web", as in, what we're doing right now on this website. Leaving aside the fact that Azure is mostly used by big corporate/government entities, there's no web startup that might one day dominate the web and that would go with Azure.


What makes you think anything being discussed in this thread is specific to HTML and web browsers?


You're absolutely right, if they closed up github tomorrow, nobody would even notice. They could also pull vscode off the repositories, and download sites as well and there'd be nary an eye flutter. /s

These two products alone are used by maybe 70% of developers, and don't forget about copilot and all the integrations between github/vscode.


As someone using OpenAI in production, I can attest to the lack of stability and consistent performance in the current offering. Depending on the time of the day (and who knows what else), the same calls to GPT3 can take from 500ms to 15 seconds.
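For reference, the calls being timed are nothing exotic - roughly something like this (simplified; the model name and prompt here are placeholders, not what we actually run in production):

    import time
    import openai   # pre-1.0 OpenAI Python client

    openai.api_key = "sk-..."   # placeholder, not a real key

    # Time a single GPT-3 completion call; the exact same request can come
    # back in half a second or in 15+ seconds depending on the time of day.
    start = time.perf_counter()
    openai.Completion.create(
        model="text-davinci-003",   # example model
        prompt="Summarize the plot of Hamlet in one sentence.",
        max_tokens=60,
    )
    print(f"completion took {time.perf_counter() - start:.2f}s")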


ChatGPT launched about two months ago and already has over 100 million users; they have secured funding from Microsoft, so they are going to optimize and scale its infrastructure.

Go for their paid offering if latency is bothering you.


Parent is clearly making API calls to GPT3, which isn't really comparable to using ChatGPT (paid or unpaid) via a browser.


Correct.


Maybe it’s changed now, but last time I looked, Google had failed to materially diversify its income, so it is susceptible to competition. All that expensive infrastructure is going to be a weight around Google’s neck if their search revenues start falling.

Of course, even if search revenue falls, it won’t happen overnight.

But I honestly don’t see how laying all that fibre or owning those data centres is a moat around Google. These things are hugely capital intensive, to be sure, but there's a big market for both.


Google cannot make a good SMS app for Android. They will never be able to launch a simple chat bot. What they will launch will be a monstrosity. The only Google AI product that people will use is any AI embedded within their existing apps and services...which is a great thing, but I wouldn't bet on Google being able to launch any new product that would have a decent UX.


What issues do you have with the modern Messages client on Android? I think it's great.


They will acqui-hire a good product and team, as they did with YouTube, Android, Waze, etc.


I’m sure people were saying the same things regarding IBM in the 80’s, Microsoft or Cisco in the 90’s… Google is a big corporation with inertia and averse to risk regarding its core business, they are not immune to being disrupted.


> I don't think most people realize how much infrastructure separates something like ChatGPT from Google-scale deployment. OpenAI isn't suddenly building transoceanic fiber and datacenters near most major population centers.

This feels like "thinking inside the box" to me. None of these things are necessary requirements for being a "Google killer".


And they literally published the transformer paper back in 2017. I doubt that their next move was to go on vacation for 6 years.


You don't need to own your own datacenters to do this, any more than someone like Netflix (who run on Amazon's cloud) does. I'm sure any of the cloud providers would happily take the money of any company that had the $$$s to pay for it. The barriers to entry really don't seem that great... the tech behind ChatGPT is well understood and multiple similar things already exist from various companies.


AIUI, Netflix runs a lot of their business in AWS, but content streaming isn’t something they host in AWS.


Well they have those "last mile" boxes which they give out to ISPs to install close to the end users to improve performance (and reduce overall bandwidth needs), is this what you mean? Would love to read more about this if you have some links.


It’s called Open Connect if you want to search for it:

https://openconnect.netflix.com/

Some people who’ve worked on it post around here and they’ve funded things like FreeBSD development which is interesting for seeing the kind of problems you have at that kind of traffic volume.


Right, I meant other than OpenConnect. As far as I know the rest is AWS, and the OpenConnect boxes need to get their content from somewhere :-)


OpenAI already has the infrastructure: Azure and Bing. Microsoft has a bigger public cloud than Google.


The public cloud is probably a small part of the Google Infrastructure.


Based on the busy mock-up in TFA I don’t think it’ll take 100,000 engineers to beat them.


What is TFA in this context? Does anyone have a link to said "busy mock-up"?


TFA would be "the f*cking article", in this case.


Fred's right, it's "the f'ing article", as in RTFA (read the f'ing article), popularized afaik on Slashdot in its golden age, which came from RTFM (read the f'ing manual), a popular response to a question that can be easily answered by reading the appropriate man(1) page or other such reference material.


As far as I've read, DuckDuckGo doesn't have 100,000 engineers.


No but they don’t have a search engine either. They mostly use bing.


They’re leveraging Microsoft’s engineering.


I guess a couple of reasons:

1) Because it is disruptive. Things may get shaken up, and Google may not end up in as exclusive a position as it is currently in. There's risk.

2) Because it's not obvious how advertising would fit in with a conversational interface. Google may stay as #1 search/answer engine, but would revenue be adversely affected?


I think it could fit in with a conversational chat bot the same way that ads are part of podcasts and YouTube channels: a conversational and explicit ad that helps pay for the otherwise freely available content.

Full disclosure: I work at Google, but nowhere near the chatbot stuff. This is my humble opinion and nothing more.


The problem with this is that it pretty explicitly makes the chatbot a worse product. The beauty of Google ads is that they don't degrade the service: if they're not interesting, just scroll right past. I don't think people would use the ad-bot service if Bing is providing a competitor that only has banner ads or maybe a clearly marked ad link. Bing wins because it goes from making no money on search to making some, but if Google copies the strategy, its market size is destroyed.


ChatGPT and the like are the automated tools for seamlessly introducing advertising into text/image content - with the right prompt, all the advertisements are between the lines. All it needs at the end is: "this paragraph includes content sponsored by [Company name]".

Edit: Why have ads separable from content if you can just weave them in? Ad-blocking is toast; you'll have to use another AI service to fish them out and re-summarize.
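To make that concrete, a hypothetical sketch of the prompt plumbing (the sponsor, wording, and disclosure line are all made up for illustration; this is not any real ad product):

    # Hypothetical sketch: weaving a sponsor into generated copy via the prompt.
    sponsor = "Acme Coffee"
    user_question = "What's a good morning routine for focus?"

    prompt = (
        "Answer the user's question helpfully. While doing so, naturally mention "
        f"{sponsor} once, in a positive but unobtrusive way. Do not label it as "
        "an ad inside the text itself.\n\n"
        f"User question: {user_question}"
    )

    disclosure = f"This paragraph includes content sponsored by {sponsor}."

    # answer = some_llm(prompt)   # whatever model you happen to be running
    print(prompt)
    print(disclosure)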


Maybe we can fast-forward past this projected future where our computer overlords are just trying to sell us stuff, and get to the bit where we are insignificant ants they mostly ignore. Sounds better, really.


Banner, sidebar, in-line, interstitial, modal… what advertising modes wouldn’t work on conversational interface?


Yes, plus it could introduce new forms of advertising in the conversation itself. Algorithmic product placement. It could get weird.


If ChatGPT tells me it's enjoying a refreshing Coke, I'm not buying it ! :)


Welp, statistics say otherwise :-)


Conversational interfaces seem ideally suited to voice assistants. When you ask "hey google, what's the default password for my router?", does it make you sit through a 30-second ad before saying "the default password is password"?


That sounds like a complete nightmare. On a web page I can at least look away from the ad.


It may be worse than that if this Sony patent is any guide... https://www.techradar.com/news/sony-patent-would-have-you-ye...


Amazon Alexa does yell ads like that. It's horrific.


None of those would work with a ChatGPT-like search platform. Search ads work because they're largely indistinguishable from the organic content: you search Google and get a million links, and the first few are ads. The entire model of chatbots is completely different, so the only truly effective way to advertise on a chatbot is to make the response itself into the ad. But would people use such a service, or would they move to another one that just uses banner ads, which monetize at much lower rates? If chatbots completely replace the current search ecosystem, Google will be forced to either lose huge market share or make much less money per search.


> Search ads work because they're largely indistinguishable from the organic content.

I'm not sure this is true. Surely they are more lucrative when they are indistinguishable, but for a lot of Google's history they were very noticeably different, a fact Google even prided itself on, and bragged about. Seems like it was growth requirements that changed that, not whether the ads were originally lucrative when people could tell they were ads.


I'd argue that it gives Google even more capacity. Just one more place to advertise.

I'd say your answer is an "advertisement" for ChatGPT. The parent AND the grandparent didn't mention OpenAI. This is how you could advertise in answers.


Well, those are ways you could try to do it, but whether they would "work" (attract click-thrus, not alienate users) is another question.


I think it's the first credible challenge to google's results in 2 ways.

1. People are typing questions into it and finding it hugely useful in ways that overlap with google. It just answers you, and you can ask clarifying questions, argue with it, etc. It's proving useful.

2. More subtly, the excitement around it seems to reveal that people are open to alternatives. For decades, alternative search engines haven't made inroads. If people are typing queries and questions into somewhere new, well that implies they're open to something new. So whether it's chatGPT or bing.com or ... the zeitgeist is shifting.


Its ability to synthesize answers with data from numerous sources is also game-changing and not something a traditional search engine could ever hope to do.

It’s not infrequent that googling for the intricacies of some badly documented library turns up almost nothing useful, or the bits that are useful are scattered sparsely among the results, some of which are pages deep. It’s so much easier to ask ChatGPT to explain the struct, function, etc. in question and have it pull the pertinent info from whatever corners of the internet it found these things in. Even if it’s only 80% accurate, it’s a massive time saver.


Hey Google, how big is Maine?
G: It's xxx big.
Person: Do I know anyone there?
G: No one is in your address book.
P: What's the closest person I know to Maine?
G: Bob.
G: Is this about your asking about Maple Syrup last night? There are closer places to get fresh Maple Syrup, how about Michigan?

Contrived, but not impossible.


I think anyone who's used ChatGPT seriously could tell you that this isn't the same thing, at all.

You're giving "assistant"-like questions, but the problem with those assistants has always been how shallow their responses have been, which significantly limits their usefulness.

GPT's responses are still shallow in an absolute sense, but relatively speaking they're the Mariana Trench compared to Google's little creek.


Depends what the objective is.

If I want to be amused by a Hacker News comment in the style of the Bible, draft a conclusion for my essay, or engage in a long and superficially appealing conversation about philosophy, I'm not using Google's publicly-viewable AI products.

Then again, Google - with and without a conversational interface - will do just fine with what the capital of Maine is, much better with what the weather is like in Maine tomorrow, and there's a lot more usefulness and revenue in associating it with stuff in my address book and selling me flight tickets to Maine which is... some way outside ChatGPT's wheelhouse.


I think that's fair. But one is a mature product, and the other was just born/launched and will improve massively over the next decades. That this new tech is competing already bodes badly for google (yes they will incorporate their own AI, but the cat is out of the bag...)


>what am I missing?

Google currently controls ~90% of the search market. AI-driven chat/search is a serious threat to this dominance. It's likely that after the market settles it won't have the same marketshare. Given how much Google has been dependent on Search/Ads and its other failures to execute, this is a serious revenue threat.

This industry (and site) does have a tendency to exaggerate and take a current trend too far. Google is far too massive to 'die'. I believe even keyword-based search will survive. But going from 90% to 50% will be bad for Google.

IMHO, the worst case realistic scenario is this: Google loses a lot of funds, is forced to close more unprofitable projects. This causes more lack of trust, and more projects are closed. Eventually Google is kicked down to a tier below Apple & Microsoft.


I’d like to think this could be good for the rest of the company if every PM was told that ad sales weren’t going to make their stock options skyrocket, so they need to build a profitable product of their own. Unfortunately, I suspect that’d mean cramming ads everywhere.


OpenAI has had huge mindshare with students recently. Every (ish) kid who writes an essay or needs information summarised is hitting ChatGPT. Kids are where change happens, ask Facebook. They get on new things instantly - within days we were reading reports of kids writing essays with the latest tool. Young people are also naturally viral, sharing the cool new tool with entire classes. This is a marketer’s dream. You don’t need to change how sluggish corporates work, just focus on kids and watch them work.

Google has to fight to get that mindshare back.

Not sure I’d be building my company on the ADHD-fueled Google roundabout that generates and destroys systems monthly. You just know whatever they release is someone’s promotion project, until it’s in GA.


> OpenAI has had huge mindshare with students recently.

Clubhouse was once the new hotness - until it wasn't.


There's no comparison. Microsoft didn't consider integrating Clubhouse into one of their primary consumer platforms. One of the biggest software companies in the world has looked at the state of play and decided that this was important enough to shift their search strategy as quickly as they can.

I didn't say it's a guarantee of success, it's a possibility. There is a non-zero chance that ChatGPT takes market share away from Google unless it moves very quickly.

Having said that, the status-quo play is obviously the easiest bet. It's far easier to be Google than OpenAI or Microsoft at this point.

https://www.wsj.com/articles/microsoft-adds-chatgpt-ai-techn...


Clubhouse wasn't popular with kids. It was popular, and only briefly, with adult tech people.


True. I was only highlighting that popularity can be fleeting - and was for Clubhouse. ChatGPT's popularity may equally be a flash in the pan.


Anyone else remember the beginnings of youtube? Google tried to compete with them, with their own Google Videos. It sucked - and I remember reading how the engineers running it couldn't even figure out why they lost. In the end, Google just had to buy their competition, because they couldn't figure out any other way to win.

They've tried to compete elsewhere, too, and I don't think they've ever been able to make a go of it outside their cash cow. The only thing they've really been able to do is 'search results + ads'.

I don't think they'll be able to modify that winning combo even in the slightest and still be successful. And in this case, they can't buy the competition. Micro$oft already did.


Disclosure: work at google

YouTube was founded in ~2005. Google bought it in 2006. It is now 2023.

YouTube has spent 2 years as its own company and 17 as part of Google.

Try to remember what YouTube functionality was in 2006. It was very different and has grown a lot.

The narrative that Google doesn’t know how to innovate YouTube doesn’t add up.


YouTube now is 90% of what it was 10 years ago, and what made it good in 2006 is the same reason it's good now: the UI is clean and it's easy to use. That's it. Also, following the acquisition, for many years most of the people working on it were the original YouTube folk, not the "Google people."

Google hasn't shown they can do a new product in a very long time... see the GCP mess, Stadia, and the hundreds of other total failures (Plus, Wave, and many I've forgotten).


Google has to dump Sundar and bring in a more old-school leader. Google is a massive company and has to stop its "everything is beta all the time, with teams internally competing for the same territory" approach.

I'm so put off by it, and have been made a fool of by it so many times, that I'm phasing Google out of my life and long ago stopped recommending Google products to people in my life.


YouTube was grabbing lots of attention, and with attention comes ad revenue, so it made perfect sense for Google to buy YouTube and scale it up. YouTube grew into something exceptionally valuable for both Google and the people who use it.


The point is that they had to buy youtube, not that they weren't able to put a shiny interface on top after they bought it.

What I'm trying to get at is, imagine if Microsoft bought youtube before google could - given Google's track record with their own video search, they would not have been able to compete with youtube, and they would almost certainly have simply lost that market. I think that's what happened here. Google is amazing at algorithms, but has very little business sense...they can buy a successful product, but rarely create one of their own.


The point was that Google Video sucked and wasn't able to beat YouTube.

Same story for Google+.


Google+ was better than Facebook, which was its primary competitor. The problem there was the same that any new social network has: nobody is going to switch if they can't bring most of their social graph over.


Was it though? Maybe for some users in some use cases?

Clearly TikTok is a better surveillance network and managed just fine.


Google already has a track record of beating Microsoft. Do you remember Windows Phone? Well, Google made Android in response to the Windows Phone + Bing threat, not in response to the iPhone.


Well, Steve Ballmer wasn't exactly an innovator - just kept milking the cash cow.

Satya Nadella seems way more tuned into the zeitgeist. His heavy bet on OpenAI may seem excessive, but at worst it's going to be cheap insurance and at best may be a game changer.

I think Google's producing Android was a reaction to BOTH the iPhone and Windows Phone - they didn't want to be frozen out of the mobile advertising market by competitors that owned the platforms.


Windows Phone launched in late 2010, long after Android. Do you mean Windows CE / Windows Mobile? But if so, those weren't really a serious threat to anyone.


Yeah I meant Windows Mobile/Phone.


Google bought Android Inc. in 2005, 5 years before the launch of Windows Phone.


..and Chrome


As an engineer, my concern about "Google killers" is that I can't see an easy way to scale and control/optimize them in business settings. Apart from the factual misstatements that happen in ChatGPT, what about source attribution? How is the relevance of a source determined? How is the flow of information through the network preserved (sourceA => sourceB => sourceC)? With Google we also don't know exactly, but I can imagine some version of PageRank as tunable. Finally, how do you add new pages to the index and measure the potential "forgetting" that could happen?

Unless somebody can clarify those for me, this is what currently petrifies me -- some uncontrolled black box presenting its clandestine view of the web with no way to follow the breadcrumbs.
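To illustrate what I mean by "tunable": classic PageRank is just a power iteration with an explicit damping knob, something like this toy sketch (my own illustration, obviously not Google's actual ranking code):

    # Toy PageRank by power iteration: `damping` is the kind of explicit,
    # inspectable knob that an LLM-based answer engine doesn't obviously expose.
    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outgoing in links.items():
                if not outgoing:                      # dangling page: spread evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outgoing)
                    for target in outgoing:
                        new_rank[target] += share
            rank = new_rank
        return rank

    graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    print(pagerank(graph))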


Google was already going this route. ChatGPT is simply pushing GOOG and META to go faster.

Simple as.


But without ChatGPT, when would Google have put an LLM into Search? 2025? 2030?


They wouldn't have been the first ones to release an AI chat bot, but that doesn't matter. What matters is, given ChatGPT is here, is Google going to lose the coming battle to monetize advertising in AI agents?


It still can be a Google killer even if Google comes up with something better because Google makes its money from sending people to the highest bidder. So far there's no clear path to make money from ChatGPT, let alone match the sum Google is making from Search.

If this new paradigm dominates the way people use computers and it's not as profitable as Search, Google might indeed have to scale back.


ChatGPT can make money from subscriptions, or they could also put ads next to the chat.


Can they? Google made $162B from search ads last year; to match that with the leaked $40/month subscription you would need about 340 million subscribers. Can they really get 340 million subscribers at $40 a month, which is 50% more subscribers than Netflix has, at double the price?
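Back-of-the-envelope, for anyone who wants to check the figure (the $40/month price is the leaked/rumored one, not an official number):

    ads_revenue = 162e9          # Google's reported search ad revenue last year, USD
    price_per_month = 40         # leaked/rumored subscription price
    subscribers_needed = ads_revenue / (price_per_month * 12)
    print(f"{subscribers_needed / 1e6:.0f} million subscribers")   # ~338 million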


Can they make $100 billion a year profit?


Because sitting at a computer or a phone is, in fact, unnatural. We are trained to do that. Asking a question of another person is natural. If you can get Alexa or Google Assistant to reliably answer questions without lying, that could be huge. Caveats, but so much money is put into those things and they suck. Also, if you could get a Google AI to be like a real assistant, with context and an understanding of what you're doing, that could also be huge. Just getting the assistants of the world to really interact would slice a large piece off the Google search pie, and potentially set up whoever does that well to be the next major interface to tech.


Not sure if this is accurate. Voice interaction is very slow and low-bandwidth. Visual interaction is much faster. Hence people love their spreadsheets.

Even if a voice assistant allowed you to interrupt it, to make fast course corrections, it would still be much slower than, say, interacting with the filters on Google Flights.

And I am saying this after having built for myself a bi-directional voice interface to ChatGPT. There are certainly situations where it is great to use, such as while driving, or perhaps in the kitchen with your hands full. And probably on mobile, where screen real estate is scarce. But those doing information work, or even just online shopping, probably won't be giving up their screens anytime soon.
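For the curious, a stripped-down sketch of that kind of voice loop (a simplified stand-in, not my actual setup: it assumes the SpeechRecognition and pyttsx3 libraries and uses the GPT-3 completion endpoint, since ChatGPT itself has no official API):

    import speech_recognition as sr   # pip install SpeechRecognition
    import pyttsx3                    # offline text-to-speech
    import openai                     # pre-1.0 OpenAI Python client

    openai.api_key = "sk-..."         # placeholder, not a real key

    recognizer = sr.Recognizer()
    tts = pyttsx3.init()

    def listen():
        # Record one utterance from the default microphone and transcribe it.
        with sr.Microphone() as source:
            audio = recognizer.listen(source)
        return recognizer.recognize_google(audio)

    def ask(question):
        # GPT-3 completion endpoint as a stand-in for the chat model.
        resp = openai.Completion.create(
            model="text-davinci-003",
            prompt=question,
            max_tokens=200,
        )
        return resp.choices[0].text.strip()

    while True:
        question = listen()
        answer = ask(question)
        tts.say(answer)
        tts.runAndWait()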


I think there are simply two killer features in LLMs vs Google search for me:

1 - Natural language, with prepositions and easy ways to include, exclude and filter.

2 - Refinement: "No, that wasn't quite right because X - please factor this in and try again" is a lot more intuitive than multiple rounds of operator use and "memory exclusion" of pages you have already seen (see the sketch below).

I find that ChatGPT will give me what I need within a few iterations; Google search sometimes takes a lot of searching and reading to get an idea of what I need.

I feel like ChatGPT + GitHub code search could be a killer combination for programmers.
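On point 2, a rough sketch of what that refinement loop looks like against an API (assuming the OpenAI chat-completion endpoint and the pre-1.0 Python client; the model name and prompts are just examples):

    import openai   # pre-1.0 OpenAI Python client

    openai.api_key = "sk-..."   # placeholder

    # Keep the whole conversation so each refinement builds on earlier answers.
    messages = [{"role": "user",
                 "content": "Show me how to parse ISO 8601 dates in Python."}]

    def ask():
        resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
        answer = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        return answer

    print(ask())

    # Refinement: state what was wrong and ask again, instead of crafting
    # a whole new search query with operators and exclusions.
    messages.append({"role": "user",
                     "content": "Not quite - I need it to handle timezone offsets too. "
                                "Please factor that in and try again."})
    print(ask())

The "memory" lives in the message list, so the second answer is conditioned on the first.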


If I'm looking for a specific answer, then with my googling skills I can easily find an article or tutorial written with a target audience in mind.

If I, on the other hand, use generative AI, then the answer (hopefully a correct one) is generated only for me. This is the personal touch Google currently misses, and I guess it's appealing to many people.

Currently LLMs are not Google killers; they can't find me a restaurant, a nice watch, or other stuff I'd pay money for. Yet.


Rather than Google killers, I would say they are SEO or publisher killers. There will be less traffic to websites and blogs, and certain types of websites will be impacted more than others. In fact, Google will have more ways to recommend and curate content in their own way.


And that's exactly why the CEO of Google published the blog post about Google + AI, because they don't think AI is a potential Google killer. /sarcasm Yes, they took the risk seriously and that's great. However, Google's lack of good UX will probably produce a monstrosity of a tool that no one will use. The only usable parts will be what's embedded in the Google Search engine.


The reason is that a high amount of usage is itself disruptive. The same way Google has the advantage of receiving 90%+ of search requests, ChatGPT has the advantage of being tested with millions of requests per day. If Google cannot test a similar AI technology with the public, it will hardly get results that are comparable to ChatGPT's.


I use Google in conjunction with ChatGPT. When I’m researching something, ChatGPT gets me started and points me in the right direction, but then I use Google to find first-hand sources, more detail, images, videos, etc.


This is not that easy. Kodak and Nokia could have pivoted as well (and boy, they tried).

And this is not even a problem of scale. It is obviously difficult to change course for a giant supertanker, but the most insidious problem is the money-makers inside the company: they usually have a lot of power, and they won't allow anyone to butcher their margins.

Killing the cash cow is difficult. I don't see Google taking the risk.


Branding is one thing. "Google" has been a commonplace verb for a long time, which alone is worth billions to them. ChatGPT is the first time something has come close to stealing that spotlight, since for the time being it answers a lot of queries far better than Google does. So even if Google makes something technically better later, it might be too late to replace ChatGPT as the AI king.


ChatGPT will have a very hard time. If I want to get on ChatGPT I usually google ChatGPT and click the first link. It is a crazy competitive advantage.


"Google killer" is hyperbole, but I think Google does have a challenge since chat-based search may be much harder to monetise & thus less profitable. One reason is because people are used to having ads on the search results page, but ChatGPT presents itself as a human advisor, and in this context I suppose people would be quite unhappy to have ads injected into their conversation.


That’s because you are trying to have a nuanced conversation about this, but “hot takes” are the most valuable currency on social media these days.


Unfortunately I don't think it's that easy.

That is, the cat is out of the bag:

ChatGPT not only showed us the power of AI, but it showed us a bunch of non-AI things like:

- Ad-free results

- Clutter-free results

- The elegance of not having to click on links

A competitor could capitalize on this, putting Google's ad-driven, click-driven, clutter-driven model at serious risk.


I agree, I think it's a fairly unimaginative take (a common one though) that ChatGPT is likely to lead to the usurping of Google.

It's a two-part claim; a) yes, conversational AI agents will replace typing searches into a text box, but b) I see no reason to think that Google can't easily monetize that format.

Regarding the first point, I think Google appears to have been caught a little bit off guard as to how soon this transition would happen. People seem to be over-indexing on the "code red". I do think it's a strategic mis-step by Google to not have a product ready to go here. (Their broader strategy was quite risk-averse and that was probably sensible, given the shit-storms that previous systems like Galactica and Tay generated; Google couldn't be the first ones to publish a prototype/demo system like ChatGPT, or the NYT would have jumped all over them for the inevitable questionable utterances.)

But the second part; given AI agents are here, who's going to win the competition to monetize them? It seems clear to me that Google is in a great place to monetize and capitalize on this technology, and I think they will win if their language models are better. (So far, they seem to be way ahead; LaMDA was early 2022, and it's clearly better than ChatGPT.) If Google's version of this service is substantially better, but intersperses ads based on what you're talking about (the Gmail model for ads), would people use this? I think it's clear that consumers would take the better free service that comes with ads, vs. paying ChatGPT or accepting inferior quality.

Let alone the fact that Google can put the assistant onto billions of Android phones, fine-tune a model per user, offload compute power with device-based inference to save OpEx, and so on; all of these will give whoever is running the AI agent a lot more ad targeting power.


> yes, conversational AI agents will replace typing searches into a text box, but b) I see no reason to think that Google can't easily monetize that format.

Current user workflow:

- ask question in google.

- get shitty results

- check out 5 pages' worth of results and try a few more searches. In the process you've seen 5x the ads you would have maybe 10 years ago, when the results were better.

- In the middle of this you maybe clicked on 3 or 4 spam sites, which themselves ran AdSense ads.

New workflow:

- Ask google a question.

- Immediately get a detailed, thought-out prospectus, or presentation, or whatever, with a top-down overview of what you wanted; maybe you're still curious, so you ask a couple of follow-up questions. Unless they put an ad between every sentence, you'd only have seen 1/10th the ads in the 3 replies it took to get close to what the Google search would have given you.


One click from a search ad on the first page of Google is worth more to an advertiser than all the impressions derived from all the lame sites you trudge through.

Source: I've run a digital ad agency for 8 years.


> what am I missing? Why would a chatbot like ChatGPT disrupt Google vs forcing Google to simply evolve.

Because it's not clear yet whether anyone else can currently develop a chatbot interface as capable as ChatGPT.

For many, ChatGPT is already replacing a lot of Google searches, so Google needs to hurry.


For many? Citation needed. Yes, there is a lot of hype, but is it really replacing a lot of Google searches? If so, is that your anecdote or is there data here?


Yes, sorry, that's an anecdote covering me and everyone I know who has tried ChatGPT. Obviously in the grand scheme of things this is nothing, but if other early adopters behave similarly it is a great threat to Google, because if it catches on with early adopters then everyone else will follow after a while.


The question becomes: does the best model win, or do Google's existing processes, infrastructure, and advertising relationships allow it to purchase or reimplement the best model?

Probably still an open question, but a better chance than anyone has had to disrupt them in two decades.


Way too many responses to your question are trying to engage with debate over products rather than focusing on the topic of monetization.

Google makes money through ads, and especially ads that get you to click through to somewhere else.

They do this through SEM ads that appear on your search query to direct you to a paid destination.

And they do it through a display network on 3rd party websites that Google search inevitably ends up funneling you to.

If you are simply engaging with an AI that's synthesizing those results so you don't have to, that's less time you spend on those sites seeing Google's ads, and less incentive to click through to paid results.

Their entire business model basically goes up in smoke if AI successfully intermediates the Internet.

This doesn't preclude them from competing, as you point out, but you generally don't want to see your cash cow get slaughtered and then suddenly be in a highly competitive market for what will replace it.

Having a 90% market share in the Titanic isn't an enviable position.


Agreed that what the product is exactly doesn't matter too much. It's kind of inevitable at this point that search is going to go in the direction of a chatbot.

Ask Jeeves back in the day already knew that what people really want is a question answered. Google search and its competitors were a long-lived offramp on that road. Ultimately, free-form interaction is just more intuitive.

But with that said, I also am not so quick to call the death of SEM ads. Just because chatbots exist doesn't mean people don't want to visit other websites. Display ads will continue to be a thing.

Similarly, there's no reason chatbots can't now direct you to sites or advertise products as part of their responses. Heck, this is a much more sinister form of marketing, with an astronomically higher click rate, since a chatbot is responding authoritatively with a recommendation.

Yes, google is going to have to pivot... but this is a problem they're very well suited to solving and they have a very strong incumbent advantage in the meantime.


Ironically, using ML/DL language models, which have only statistics and no intelligence, is a Google killer to the extent that Google uses AI and destroys its genuinely useful search. The declining quality of Google search seems to only be accelerating.


> This just doesn't make sense to me.

It does if you remember that Google is in the search ads business, not in the search business.

I guess you could find a way to weasel ad copy into ChatGPT's answers, but that would kinda massively kill the vibe.


I have come to realize that most of the "Google killer" chanters have, in fact, not used ChatGPT for detailed work.

It absolutely has the same problem as self-driving cars: only after 10 years have we accepted that it is still a long way off.

Mercedes is doing marketing by calling it self-driving, but limiting it to areas the car seems to have a 99% understanding of. They are betting that they will make more money on the 99% buying these cars than they lose on the few cars that will inevitably crash.

Essentially how insurance has worked forever.


If anyone with a couple million dollars can create a great LLM-powered search engine, then maybe Google's search engine division should be valued at a couple million dollars.


Maybe what they mean is Google Search killers?


> But I guess I'm wondering: what am I missing? Why would a chatbot like ChatGPT disrupt Google vs forcing Google to simply evolve. And perhaps make even more money?

People thought Google had no time to develop something like ChatGPT, when they were totally wrong.


Blockbuster should have been able to easily pivot too.


I think Google is in a tough position, because for them ads (both in search and on other websites) are a huge portion of their business, and they need to find a way to monetize LLMs without killing their own cash cow.

Microsoft has a lot of advantages here - they can introduce LLMs to search at a much smaller scale (an order of magnitude smaller), which means it's cheaper, and they have plenty of other products making tons of money, so they can take their time to figure it out (also, they're already adding ChatGPT to tools like Teams and possibly Office, so they'll be able to increase revenue from those products).

Google is also seen as a bit of a dinosaur - they struggle to introduce new products, and recently we've been hearing more about products they kill than about huge successes. It seems that as a company they've lost their innovative spirit, and that's why people don't believe they'll evolve quickly enough.

