AI Leapfrogging: How AI Will Transform “Lagging” Industries (2023) (nfx.com)
30 points by hacb 17 days ago | 59 comments



> Generative AI is an instantaneous push-button solution. It generates a legal brief or construction plan from scratch. Interacting with software is like chatting to a friend. You don’t have to re-learn your whole process – you simply remove tedious tasks from your to-do list.

We’re getting sued for the homes we built that have unreachable floors and outdoor facing windows placed inscrutably on load bearing interior walls but I feel really good about our legal defense


The core tool being offered here is allowing experts to focus more on checking work than solving work. It's way faster to check if a Sudoku puzzle is correct than it is to solve the Sudoku puzzle.
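The asymmetry the comment describes can be made concrete. A minimal sketch (function name and code mine, not from the thread): checking a filled 9x9 grid is a single linear pass over rows, columns, and boxes, while solving one generally requires backtracking search.

```python
def is_valid_sudoku(grid):
    """Return True if every row, column, and 3x3 box of a completed
    9x9 grid contains the digits 1-9 exactly once."""
    units = []
    units.extend(grid)        # the 9 rows
    units.extend(zip(*grid))  # the 9 columns (grid transposed)
    # the 9 3x3 boxes
    for r in range(0, 9, 3):
        for c in range(0, 9, 3):
            units.append([grid[r + i][c + j] for i in range(3) for j in range(3)])
    return all(sorted(unit) == list(range(1, 10)) for unit in units)
```

Verifying is O(n) in the number of cells; Sudoku solving in general is NP-complete, which is the whole point of the analogy.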


But this doesn't apply to all domains equally.

Debugging, for example, is harder than programming. And the difficulty scales with the amount of code you are debugging.

This means it is pretty easy to verify Copilot's output when it just spits out a couple of lines at a time. But you will have a pretty bad time when you ask ChatGPT for a complete service. Things will not work out, and the time you take to fix them will be greater than what you saved in the first place.


Absolutely, which is why you have the LLM generate one method for you at a time.

This still increases your productivity dramatically, and shifts you into the role of editor rather than writer.


That depends entirely on how quickly you can write the method on your own.


> The core tool being offered here is allowing experts to focus more on checking work than solving work.

That sounds like a nightmare, not an improvement. Really, think about it. It's the same shit as "self driving" cars that need continuous human monitoring. You're taking a relatively engaging task and replacing it with a mind-numbing slog that humans are particularly bad at.

Not to mention the skill of being able to "check work" usually flows from deep experience of "doing work."

> It's way faster to check if a Sudoku puzzle is correct than it is to solve the Sudoku puzzle.

Yeah, and which of those tasks do humans choose to do? I don't see many "100 Solved Sudoku Puzzles To Check" books in the bookstore.


> Not to mention the skill of being able to "check work" usually flows from deep experience of "doing work."

Precisely, but you need far fewer of these people, which is why this is being so heavily pushed.


>> Not to mention the skill of being able to "check work" usually flows from deep experience of "doing work."

> Precisely, but you need far fewer of these people, which is why this is being so heavily pushed.

That sounds like extremely specious reasoning to me, probably due to working backward from technology to application in order to hype the former.

Firstly, is it anyone's experience that it's easier to understand an unreliable system that was barfed out of some unreliable process (it doesn't have to be an LLM, it could be a bad offshore team) than it is to build it right from the start? It's still garbage out. It's like abusing the QA process by saying quality is only their job, then carelessly pumping out crap work and expecting them to catch all the mistakes.

Secondly, where are these "fewer" skilled people supposed to come from? The technology, if embraced this way, will have the effect of cutting off the skills pipeline. That would work in the short/medium term, but in a generation, when you start to see lots of retirements, you'll hit a skills dead end.


>The technology, if embraced this way, will have the effect of cutting off the skills pipeline. That would work in the short/medium term, but in a generation when you start to see lots of retirements, you'll hit a skills dead end.

When have corporations ever cared about the next generation, let alone anything beyond the current quarter?


The sudoku puzzle analogy has a single correct answer and many easily assessed errors.

That does not map to instantiating a construction plan from scratch.


But it's literally how construction projects work. You have a bunch of junior engineers who hammer out the work, and then a master PE who reviews and signs off on it (who also takes full legal responsibility for all the work).

The master engineer isn't the one who does all the hours and hours of nitty gritty design work. She just does review and makes adjustments as needed.


Yes but the junior engineers aren’t the dumbest morons you’ve ever seen. This isn’t a realistic depiction of how the tools would get used anyway. It’s not “generate a plan from scratch” and submit for review. It’s “carefully iterate small parts over and over”.

The junior engineers would be the ones reviewing the outputs. The senior engineer would be the one reviewing what is still constructively the junior engineer’s work.


It's also way easier, and much less responsibility, to miss a mistake than to make one.

"Oh, yeah, that was a fuckup but the AI did it, not me! What, I was supposed to catch it? Well, I blame the AI!"


Insurance exists. Money points to something in reality (usually).

Negligence and duty don’t change and the implementing human (the person checking for mistakes) will be just as liable as the human implementing someone else’s work today.

But true… it is unlikely the AI firms or the suits that force it into every nook and cranny they can will ever be held accountable for the mistakes it will inevitably make. Not without a couple catastrophes first.


I was thinking that, but then I remembered how often humans make mistakes and and don't check their own work. For me, a common example is loose/lose, which I only find in my writing by assistance from text-to-speech. Did you spot the deliberate mistake in this comment? Because that too is one I miss, when the two words are separated by a line-break.


Engineers already sign off on work done by others all the time. This isn't some new or radical concept. For civil engineering projects, you can face jail time if you signed off on bad work regardless of who did it.


> We’re getting sued for the homes we built that have unreachable floors and outdoor facing windows placed inscrutably on load bearing interior walls but I feel really good about our legal defense

This seems oddly specific, has this already happened in real life?


Not in the sense of an architect actually building a generative AI blueprint. But transformer ANNs are simply incapable of designing physically plausible construction plans. Theoretically, not being able to solve graph connectivity seems like a big issue. Empirically, not being able to count is a fatal flaw.


> Theoretically, not being able to solve graph connectivity seems like a big issue. Empirically, not being able to count is a fatal flaw.

Thank you for capturing the state so eloquently. I've asked Bard how to dilute hydrogen peroxide from 10% (do not touch) to 3% (medical) while producing a specific quantity of the dilute... The results were harmful.
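For what it's worth, the arithmetic the chatbot fumbled is just the dilution equation C1*V1 = C2*V2. A hypothetical sketch (function name mine, not from the thread):

```python
def dilution_volumes(stock_pct, target_pct, final_volume_ml):
    """Return (stock_ml, diluent_ml): the volumes of stock solution and
    diluent (e.g. water) needed to make final_volume_ml of solution at
    target_pct concentration, via C1*V1 = C2*V2."""
    if not 0 < target_pct <= stock_pct:
        raise ValueError("target must be positive and no stronger than the stock")
    stock_ml = final_volume_ml * target_pct / stock_pct
    return stock_ml, final_volume_ml - stock_ml
```

For 100 mL of 3% from a 10% stock, that's 30 mL of stock topped up with 70 mL of water, which is exactly the kind of answer the model should have given.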


This is happening a lot in Australia: very dodgy builders producing houses that are illegal under the building code. Some are unsafe; others are falling apart much quicker due to the shortcuts taken.


There have been some gen ai architecture design tools posted here with such flaws. They were not so bold as to call them construction plans.


GroverhausAI, coming to a neighborhood near you soon!


I read it as obviously and entirely tongue-in-cheek.


I always love how the new "shiny" AI will help an industry, despite the fact that old "boring" AI has not even penetrated yet.

The article specifically mentions agriculture, where machine vision has barely been deployed, and the company that is making the automatic pest / weed killer using lasers and MV is still in a "trial".

A case of: When AI is your hammer, everything starts to look like a nail.


> A case of: When AI is your hammer, everything starts to look like a nail.

Sure, but before half the world decided that "AGI" meant what I always called "ASI", the "G" meant "General", as in it's a general-use tool, so the hammer/nail analogy isn't anything like as apt as when "AI" meant "plays chess and nothing else".


It's completely bonkers to read in 2024 that "AGRICULTURE, hospitality, education, legal, construction, and manufacturing" have had no productivity gains due to technology.


I think the author just lives in a bubble and assumes agriculture is the same way they saw it 20 years ago.

Meanwhile image recognition is already being used to identify weeds to avoid spraying on the entire crop. Combines are somewhat self driving.


We use drones and satellite imagery to evaluate crop health and yields. To say there has been no advancement means that either they don't know what they're talking about or they're living under a rock. But then, it's people pushing AI hype, so it could just be that they're trying to trick people who don't know better into believing the hype.


Many people who advocate for technologies appear to have zero capability of understanding that the world exists outside of their own understanding, so aren't even capable of forming questions like "how was this done in the past?" or "are these problems real?"


Hi, I'm just wondering what you'd think if we changed the statement a little bit. Maybe it's wrong to say there has been no advancement. But can we say agriculture has been slower to adopt all these technologies?

I think the author just lives in a bubble, period. The example of the Chinese credit industry is a stretch at best. UnionPay has been around for at least a couple decades, China is far from having "totally skipped" traditional credit cards.


I know someone who in the last 10 years was involved in a project to drive tractors. From space. And not as R&D; from what I understand, they're selling this at the moment.


The opening sentence alone was enough to make me laugh and close this article.


While it might seem a little extreme, the first sentence isn’t necessarily wrong, right?


While it may be technically true, the article simply uses it as a "hook"; and instead of explaining the likely plethora of factors contributing to that, it allows the reader to assume it is because of AI, which is ridiculous.


You should have read the article, they explained it in the second paragraph, and I have no idea why anyone would assume it's because of AI.


Because of the article's title? Why would I want to learn about that, when I opened an article about AI?

Let's engage with reality a moment: A country like Kenya "leapfrogged" credit cards because they failed to implement them on a useful timeline compared to other countries. Leapfrogging by falling behind isn't the great example the article wants it to be.


Third paragraph:

"We believe that the same way mobile payments leapfrogged credit cards in some markets, and mobile phones leapfrogged desktop computers in developing economies, AI too will (at least initially) leapfrog more legacy technologies that don’t have a “good enough” palliative (I mean alternative) in place."

>Leapfrogging by falling behind isn't the great example the article wants it to be.

It doesn't really matter, if you end up with a better solution anyway; that's really the point of the article.


That's not the point at all.


The article has no point. Total nonsense and buzzwords to drive AI hype.


I mean, it's true, but it is a classic lead-up to a superficially convincing-sounding, yet incorrect, just-so explanation of _why_ that is. Which they then go on to make.


Here's an "AI" summary of the article:

> We are thought leaders in AI. Like us and trust us!!!

> Our NFX portfolio company EvenUp has used our “AI inside” idea. EvenUp is an AI-driven platform that uses medical records and legal data to speed up the process of personal injury law.

> You should invest in EvenUp or become a customer. You should give us your money because we r smart and on trend!!!


Call me a Luddite but I don't want more and more technology. I don't want AI. I want to build my own house, grow my own food, make my own shoes.

We've had lots and lots of economic growth... but at what cost?


It's not possible to live a life of modern conveniences, cost and time savings without technology and specialisation.

YouTubers who run their own farm can only do it because it's their full-time job, or all their time is spent on the farm. Growing food is not easy, and it would cost way more home-grown.

Not something to romanticise. Nor would going out to the woods help; the carbon footprint would be so much higher than having an expensive house with mega-efficient everything.


Yeah, I thought I'd like & appreciate AI. But of course it's been perverted and twisted by the bigcorps to the point it's just a useless gimmick for making rich people even more money, while making the life of normies much worse.

Can't wait to never be able to contact a company again because they want me to talk to their ai shitbot that's designed to trick me into going away and not bothering them.


>I don't want AI.

Don't use it.

>I want to build my own house, grow my own food, make my own shoes.

AI is not stopping you from doing these things.


I'm not saying this as a direct 1:1 link between the two. I'm using AI as an example of advancing technology. And by the way, advanced technology like smartphones, social media, and the internet is all basically required for modern life. As much as I'd like to enjoy living on a farm in the middle of nowhere, not many other people would enjoy that either, and therefore I'd be (mostly) alone, defeating the entire purpose of a neo-Luddite lifestyle: getting back to being a human in an increasingly alienated world.


There are many groups in the US that live with very little use of advanced technology, but very few people have joined them from outside. The vast majority leave after experiencing the lifestyle for some time; modern conveniences are hard to leave behind. It's easy to judge others for not wanting to live your "neo-Luddite lifestyle", but I believe they are less delusional than you.


You seem a little aggressive here. Trying to go to a more primitive lifestyle now, in 2024, is largely futile. If it were, say, 500 years ago or 2000 years ago, "modern" life back then would have been simpler - and participating in society would involve many of the agrarian practices I've mentioned. With it comes horrible things like serfdom and slavery, too. Modern life has its benefits as well.

And I have met a lot of Amish people, they seem pretty content. They build furniture and barns. Drive horse carriages.

This is just my vision of an "ideal" life and more importantly an ideal _society_ - it's not as though I'm actively trying to move out to the woods or become Amish.

I just wish for less endless technological advancement and more self-sufficiency.


I'm long-term bullish on AI, but tired of reading about the things it will one day do. It's way too close to all the hype around blockchain and web 3.

Show me actual results today, not tomorrow's promises. At least TFA cites some results.

But I have a hard time extrapolating that out further. I suppose I'm not cut out for thought leadership. :)


The actual results today are buying $30B of NVIDIA GPUs to create $3B of revenue.


AI is useful especially when there is a tight feedback loop: validating what the AI suggests via a system, or a competent mahout riding it. I find it incredibly useful when I am specific and can test things out.

I installed the new Ubuntu LTS and was running into an issue trying to install a cargo package. Google/Bing search took me on a tangent. I put the error into ChatGPT 3.5, got the exact apt install command to solve the issue, and it worked when I tried it.

A million of these a day is going to have one heck of an effect on the planet.

I am not saying AI fixes search, but the search results are hit and miss anyway. The real low-hanging fruit is the domains where the existing system is hit and miss.

The long case for AI is that people are also glorified LLMs/pattern-matching machines. People who can make computers sing will (most likely) make AI sing. For most domains, AI elevates the 'base line' above the existing floor.


Can someone explain to me how:

landline -> desktop -> mobile is a progression of technologies?

Maybe "AI" might be able to transform technology, but if that's their demonstrative example, I don't think the progression works how they think it does.


Seems like a logical progression of technologies to me. You could throw FAX machines in there.


> Seems like a logical progression of technologies to me. You could throw FAX machines in there.

The implicit assumption seems to be the latter technologies in the list are strictly superior to the former, which is completely false. The article says this:

> Technological leapfrogging occurs when an industry or market (usually an outmoded industry or emerging market) skips a step along the technology transformation chain.

> Instead of learning to use a personal computer and then a mobile phone, you skip right to mobile. In many emerging markets, mobile is the dominant computing paradigm.

That kind of leapfrogging actually seems like a massive handicap. Mobile phones have severe limitations compared to PCs as devices for productive work.


It doesn't mean emerging markets don't use desktops or laptops. It means they won't be frozen on old tech.

Think mobile payments. South East Asia has mobile payments with 0% additional fees. Meanwhile the West is shackled to cards, adding 2-3% for what? Yet it's stuck with cards for a while because it didn't start with mobile.


> Think mobile payments. South East Asia has mobile payments with 0% additional fees.

If that's the case, how do the mobile payment companies make money?

> Meanwhile the West is shackled to cards, adding 2-3% for what? Yet stuck with cards for a while due to not starting with mobile.

IIRC, the 2-3% fees aren't due to technology, they're due to regulation and legal agreements. And it's mostly a spat between the merchants and banks, because most cards (in the US, at least) have "rewards" that remit a portion of those fees to the card user (e.g. all my cards pay me at least 1% cash back, and more in certain circumstances depending on the card).


My take: The author and people like them are just working on a teleological progression of the hype trains they've followed to chase investment $$.

I always have to remind myself that the motivations of the people involved in the tech industry today differ substantially from those from before the .com boom.

For many, "tech" doesn't actually mean the technology and its engineering applications. It's entirely the business and investment world and spin-land built as a huge shell around it. It doesn't matter if it "fails" from an engineering POV if a significant % of shareholders can grow a portfolio.


Cool story bro, but you're going to need to actually have some products to demonstrate for that to be real.

Push-button legal contracts? Great, we have those already; they are called templates. The problem is not the contract, it's the litigation that follows the contract.

> Generative AI is an instantaneous push-button solution. It generates a legal brief or construction plan from scratch. Interacting with software is like chatting to a friend. You don’t have to re-learn your whole process – you simply remove tedious tasks from your to-do list.

Ok ok ok, but that's a "could". I mean, I can ask a chatbot to make a contract for me, but is it legal? Does it have a back door in it that allows something stupid to happen?

Same with visas: the hard part isn't the form filling, it's getting past the burdens of proof.

Where Legal chatbots might be useful, some day, is flagging for odd clauses.

_eventually_ I can see that "AI" will help automate a bunch of legal stuff, but it's out of reach of current LLMs, as their inference of implications from text is sketchy at best. Moreover, new laws/contracting terms have less training data, which tends to bias the output in the wrong direction.



