Programming Is Mostly Thinking (2014) (agileotter.blogspot.com)
891 points by ingve 12 days ago | 322 comments





Great article. I just want to comment on this quote from the article:

"Really good developers do 90% or more of the work before they ever touch the keyboard;"

While that may be true sometimes, I think that ignores the fact that most people can't keep a whole lot of constraints and concepts in their head at the same time. So the amount of pure thinking you can do without writing anything at all is extremely limited.

My solution to this problem is to actually hit the keyboard almost immediately once I have one or more possible ways to go about a problem, without first fully developing them into a well specified design. And then, I try as many of those as I think necessary, by actually writing the code. With experience, I've found that many times, what I initially thought would be the best solution turned out to be much worse than what was initially a less promising one. Nothing makes problems more apparent than concrete, running code.

In other words, I think that rather than just thinking, you need to put your ideas to the test by actually materializing them into code. And only then you can truly appreciate all consequences your ideas have on the final code.

This is not an original idea, of course, I think it's just another way of describing the idea of software prototyping, or the idea that you should "throw away" your first iteration.

In yet different words: writing code should be actually seen as part of the "thinking process".


I had the same thought as I read that line. I think he's actually describing Linus Torvalds there, who, legend has it, thought about Git for a month or so and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.

But for the rest of us (especially myself), it seems to be more like an interplay between thinking of what to write, writing it, testing it, thinking some more, changing some minor or major parts of what we wrote, and so on, until it feels good enough.

In the end, it's a bit of an art, coming up with the final working version.


Git is a special case, I would say, because it is fairly self-contained. It had minimal dependencies on external components and mostly relied on the filesystem API. Everything else was “invented” inside of Git.

This is special because most real-world systems have a lot more dependencies. That's when experimentation is required, because one cannot know all the relevant APIs and their behaviors beforehand. Therefore the only way is to do it and find out.

Algorithms are in essence mathematical problems; they are abstract and should be solvable in your head or with pen and paper.

The reality is that most programming problems are not algorithms but connecting and translating between systems. And these systems are like black boxes that require exploration.


> Git is a special case, I would say, because it is fairly self-contained. It had minimal dependencies on external components and mostly relied on the filesystem API. Everything else was “invented” inside of Git.

This type of developer tends to prefer building software like this. There is a whole crowd of hardcore C/C++/Rust devs who eschew taking dependencies in favour of writing everything themselves (and they mostly nerd-snipe themselves with the excessive NIH syndrome, like Jonathan Blow going off to write PowerPoint[1]...)

Torvalds seems to be mostly a special case in that he found a sufficiently low-level niche where extreme NIH is not a handicap.

[1]: https://www.youtube.com/watch?v=t2nkimbPphY&list=PLmV5I2fxai...


It's really easy to remember the semantics of C. At least if you punt a bit on the parts that savage you in the name of UB. You know what libc has in it, give or take, because it's tiny.

Therefore if you walk through the woodland thinking about a program to write in C, there is exactly zero interruption to check docs to see how some dependency might behave. There is no uncertainty over what it can do, or over what it will cost you to write something that another language does reasonably.

Further, when you come to write it, there's friction when you think "oh, I want a trie here", but that's of a very different nature to "my python dependency segfaults sometimes".

It's probably not a path to maximum output, but from the "programming is basically a computer game" perspective it has a lot going for it.

Lua is basically the same. An underappreciated feature of a language is never, ever having to look up the docs to know how to express your idea.


I.e. closed-off systems that don't interact with external systems.

And these are the type of coders who also favor types.

Whereas if you write the 'informational' type of programs described by Rich Hickey, i.e. ones that interact with outside systems a lot, you will find a lot of dependencies, and types get in the way.


I tend to see this as a sign that a design is still too complicated. Keep simplifying, which may include splitting into components that are separately easy to keep in your head.

This is really important for maintenance later on. If it's too complicated now to keep in your head, how will you ever have a chance to maintain it 3 years down the line? Or explain it to somebody else?


I'm more than half the time figuring out the environment. Just as you learn a new language by doing the exercises, I'm learning a bunch of stuff while I try to port our iptables semantics to firewalld: [a] gitlab CI/CD instead of Jenkins [b] getting firewalld (requires systemd) running in a container [c] the ansible firewalld module doesn't support --direct required for destination filtering [d] inventing a test suite for firewall rules, since the prebuilt ones I've found would involve weeks of yak shaving to get operating. So I'm simultaneously learning about four environments/languages at once - and this is typical for the kind of project I get assigned. There's a *lot* of exploratory coding happening. I didn't choose this stuff - it's part of the new requirements. I try for simple first, and often the tools don't support simple.

This is the only practical way (IMHO) to do a good job, but there can be an irreducibly complex kernel to a problem which manifests itself in the interactions between components even when each atomic component is simple.

Then the component APIs need improvement.

Without an argument for this always being possible, this just looks like unjustified dogma from the Clean Code era.

At the microlevel (where we pass actual data objects between functions), the difference in the amount of work required between designing data layout "on paper" and "in code" is often negligible and not in favor of "paper", because some important interactions can sneak out of sight.

I do data flow diagrams a lot (to understand the domain, figure out dependencies, and draw rough component and procedure boundaries) but leave the details of data formats and APIs to exploratory coding. It still makes me change the diagrams, because I've missed something.


The real-world bank processes themselves are significantly too complicated for any one person to hold in their head. Simplification is important, but only up to the point where it still covers 100% of the required functionality.

Code also functions as documentation for the actual process. In many cases “whatever the software does” is the process itself.


If you can do that, sure. Architecting a clear design beforehand isn't always feasible though, especially when you're doing a thing for the first time or you're exploring what works and what doesn't, like in game programming, for example. And then, there are also the various levels at which design and implementation take place.

In the end, I find my mental picture is still the most important. And when that fades after a while, or for code written by someone else, then I just have to go read the code. Though it may exist, so far I haven't found a way that's obviously better.

Some things I've tried (besides commenting code) are doing diagrams (they lose sync over time) and using AI assistants to explain code (not very useful yet). I didn't feel they made the difference, but we have to keep learning in this job.


Of course it can be helpful to do some prototyping to see which parts still need design improvements and to understand the problem space better. That's part of coming up with the good design and architecture, it takes work!

Sometimes, as code gets written, it becomes clearer what kind of component split is better, which things can be cleanly separated and which less so.

I don't do it in my head. I do diagrams, then discuss them with other people until everyone is on the same page. It's amazing how convoluted "get data from the DB, do something to it, send it back" can get, especially if there is a queue or multiple consumers in play, when it's actually the simplest thing in the world, which is why people get over-confident and write super-confusing code.

Diagrams are what I tend to use as well. My background is engineering (the non-software kind); for solving engineering problems, one of the first things we are taught to do at uni is to sketch out the problem, and I have somewhat carried that habit over to when I need to write a computer program.

I map out on paper the logical steps my code needs to follow, a bit like a flow chart tracking the changes in state.

When I write code I'll create a skeleton, with the placeholder functions I think I'll need as stubs, and fill them out as I go. I'm not wedded to the design; sometimes I'll remove or replace whole sections as I get further in, but it helps me think about it if I have the whole skeleton "on the page".
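
A minimal sketch of that skeleton-with-stubs approach (hypothetical function names; Python used purely for illustration, not the commenter's actual code): the overall flow goes down first, and each placeholder gets filled in or replaced as the design firms up.

    # Skeleton first: the flow is decided, the parts are not.
    def load_input(path: str) -> list[str]:
        """Read raw records from disk."""
        raise NotImplementedError  # stub: fill in once the input format is settled

    def transform(records: list[str]) -> list[dict]:
        """Turn raw records into structured rows."""
        raise NotImplementedError  # stub: the interesting logic goes here

    def write_report(rows: list[dict], out_path: str) -> None:
        """Persist the result."""
        raise NotImplementedError  # stub: may become a DB write later

    def main() -> None:
        # The whole skeleton "on the page"; running it fails loudly at the
        # first unimplemented piece, which is exactly the point.
        rows = transform(load_input("data.txt"))
        write_report(rows, "report.csv")

    if __name__ == "__main__":
        main()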


Well that explains why Git has such a god awful API. Maybe he should've done some prototyping too.

I'm going to take a stab here: you've never used cvs or svn. git, for all its warts, is quite literally a 10x improvement on those, which is what it was (mostly) competing with.

I started my career with Clearcase (ick) and added CVS for personal projects shortly after. CVS always kind of sucked, even compared with Clearcase. Subversion was a massive improvement, and I was pretty happy with it for a long time. I resisted moving from Subversion to Git for a while but eventually caved like nearly everyone else. After learning it sufficiently, I now enjoy Git, and I think the model it uses is better in nearly every way than Subversion.

But the point of the parent of your post is correct, in my opinion. The Git interface sucks. Subversion's was much more consistent, and therefore better. Imagine how much better Git could be if it had had a little more thought and consistency put into the interface.

I thought it was pretty universally agreed that the Git interface sucks. I'm surprised to see someone arguing otherwise.


Subversion was a major improvement over CVS, in that it actually had sane branching and atomic commits. (In CVS, if you commit multiple files, they're not actually committed in a single action - they're individual file-level transactions that are generally grouped together based on commit message and similar (but not identical!) timestamps.) Some weirdness like using paths for branching, but that's not a big deal.

I actually migrated my company from CVS to SVN in part so we could do branchy development effectively, and also so I personally could use git-svn to interact with the repo. We ended up eventually moving to Mercurial since Git didn't have a good Windows story at the time. Mercurial and Git are pretty much equivalent in my experience, just they decided to give things confusing names (git fetch / pull and hg fetch / pull have their meanings swapped).


> I thought it was pretty universally agreed

Depends what you consider “universally agreed”.

At least one person (me) thinks that the git interface is good enough as is (function > form here), and that regexps are not too terse; that's the whole point of them.

Related if you squint a lot: https://prog21.dadgum.com/170.html


It's really hard to overstate how much of a sea change git was.

It's very rare that a new piece of software just completely supplants existing solutions as widely and quickly as git did in the version control space.


And all that because the company owning the commercial version control system they had been using free of charge until that point got greedy, and wanted them to start paying for its use.

Their greed literally killed their own business model, and brought us a better versioning system. Bless their greedy heart.


What do you mean by API? Linus's original git didn't have an API, just a bunch of low-level C commands ('plumbing'). The CLI ('porcelain') was originally just wrappers around the plumbing.

Those C functions are the API for git.

On the other side, the hooks system of git is very good API design imo.
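
As a concrete illustration of that hook mechanism (a minimal sketch, not anything from the thread): git runs any executable named .git/hooks/pre-commit before a commit, and a non-zero exit status aborts the commit. The script below, in Python for illustration, rejects commits whose staged files contain a "DO NOT COMMIT" marker.

    #!/usr/bin/env python3
    # Sketch of a pre-commit hook; the file just has to be executable at
    # .git/hooks/pre-commit, it doesn't need to be a shell script.
    import subprocess
    import sys

    # Files staged for this commit (Added, Copied, or Modified).
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()

    for path in staged:
        try:
            with open(path, encoding="utf-8", errors="ignore") as f:
                if "DO NOT COMMIT" in f.read():
                    print(f"{path}: contains a 'DO NOT COMMIT' marker")
                    sys.exit(1)  # non-zero exit aborts the commit
        except OSError:
            pass  # nothing to scan (e.g. file removed from the working tree)

    sys.exit(0)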

Yeah, could be.. IIRC, he said he doesn't find version control and databases interesting. So he just did what had to be done, did it quickly and then delegated, so he could get back to more satisfying work.

I can relate to that.


baseless conjecture

> I think he's actually describing Linus Torvalds there, who, legend has it, thought about Git for a month or so and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.

That sounds a bit weird. As I remember, Linux developers were using a semi-closed system called BitKeeper for many years. For some reason the open-source systems at that time weren't sufficient. The problems with BitKeeper were constantly discussed, so it might be that Linus was thinking about the problems for years before he wrote git.


Well, if you want to take what I said literally, it seems I need to explain..

My point is, he thought about it for some time before he was free to start the work, then he laid down the basics in less than a week, so he was able to start using Git to build Git, polished it for a while and then turned it over.

Here's an interview with the man himself telling the story 10 years later, a very interesting read:

https://www.linuxfoundation.org/blog/blog/10-years-of-git-an...

https://en.wikipedia.org/wiki/Git#History


>> …and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.

How very biblical. “And Torvalds saw everything that he had made, and behold, it was very good. And there was evening, and there was morning—the sixth day.”


> And there was evening, and there was morning—the sixth day.

I presume you're using zero-based numbering for this?


You were downvoted, but that part made me smile for the same reason :-)

> The seventh day he rested

This is obviously Sunday :-)


I tend to iterate.

I get a general idea, then start writing code; usually the "sticky" parts, where I anticipate the highest likelihood of trouble.

I've learned that I can't anticipate all the problems, and I really need to encounter them in practice.

This method often means that I need to throw out a lot of work.

I seldom write stuff down[0], until I know that I'm on the right track, which reduces what I call "Concrete Galoshes."[1]

[0] https://littlegreenviper.com/miscellany/evolutionary-design-...

[1] https://littlegreenviper.com/miscellany/concrete-galoshes/


I do the same, iterate. When I am happy with the code I imagine I've probably rewritten it roughly three times.

Now I could have spent that time "whiteboarding" and it's possible I would have come close to the same solution. But whiteboarding in my mind is still guessing, anticipating - coding is of course real.

I think that as you gain experience as a programmer you are able to intuit the right way to begin to code a problem, the iterating is still there but more incremental.


I think once you are an experienced programmer, beyond being able to break down the target state into chunks of task, you are able to intuit pitfalls/blockers within those chunks better than less experienced programmers.

An experienced programmer is also more cognizant of the importance of architectural decisions, hitting the balance between keeping things simple vs abstractions and the balance between making things flexible vs YAGNI.

Once those important bits are taken care of, the rest of it is more or less personal style.


Yeah, while I understand rewrite-based iterations, and have certainly done them before, they've gotten less and less common over time because I'm thinking about projects at higher levels than I used to. The final design is more and more often what I already have in my head before I start.

I never hold all the code designed in my head at once, but it's more like multiple linked thoughts. One idea for the overall structure composed of multiple smaller pieces, then the smaller pieces each have their own design that I can individually hold in my head. Often recursively down, depending on how big the given project is and how much it naturally breaks down. There's certainly unknowns or bugs as I go, but it's usually more like handling an edge case than anything wrong with the design that ends in a rewrite.


I don’t think this methodology works, unless we are very experienced.

I wanted to work that way, when I was younger, but the results were seldom good.

Good judgment comes from experience. Experience comes from bad judgment.

-Attributed to Nasrudin


Who's Nasrudin?

Apparently this quote has been attributed to an Uncle Zeke :) [0]

[0]: https://quoteinvestigator.com/2017/02/23/judgment/


Nasrudin (or Nasreddin)[0] is an apocryphal Sufi priest, who is sort of a "collection bin" for wise and witty sayings. Great stories. Lots of humor, and lots of wisdom.

One of my "go-tos" from him, is the Smoke Seller[1]. I think that story applies to the Tech Scene.

I first heard the GC quote as attributed to Will Rogers, then, to Rita Mae Brown.

[0] https://en.wikipedia.org/wiki/Nasreddin

[1] https://www.tell-a-tale.com/nasreddin-hodja-story-smoke-sell...


Yeah, the same. I rewrite code until I'm happy with it. When starting a new program, it can waste a lot of time, because I might need to spend weeks rewriting and re-tossing everything until I feel I've got it good enough. I've tried to do it faster, but I just can't. The only way is to write working code and reflect on it.

My only optimization of this process is to use Java and not just throw out everything, but keep refactoring. IntelliJ IDEA allows for very quick and safe refactoring cycles, so I can iterate on the overall architecture or any selected components.

I really envy people who can get it right the first time. I just can't, despite having 20 years of programming under my belt. And when time is tight and I need to accept an obviously bad design, that's what makes me burn out.


Nobody gets it right the first time.

Good design evolves from knowing the problem space.

Until you've explored it you don't know it.

I've seen some really good systems that have been built in one shot. They were all ground up rewrites of other very well known but fatally flawed systems.

And even then, within them, much of the architecture had to be reworked or also had some other trade off that had to be made.


The secret to designing entire applications in your head is to be intimately familiar with the underlying platform and the gotchas of the technology you're using. And the only way to learn those is to spend a lot of time in hands-on coding and active study. It also implies that you're using the same technology stack over and over and over again instead of pushing yourself into new areas. There's nothing wrong with this; I actually prefer sticking to the same tech stack, so I can focus on the problem itself; but I would note that the kind of 'great developer' in view here is probably fairly one-dimensional with respect to the tools they use.

You make me feel a lot better about my skill set!

I think first on a macro level, and use mind maps and diagrams to keep things linked and organised.

As I've grown older, the importance of architecture over micro decision has become blindingly apparent.

The micro can be optimised. Macro level decisions are often permanent.


I think this is probably a lot of the value of YAGNI.

The more crap you add the harder it is to fix bad architecture.

And the crap is often stuff that would be trivial to add if the bad architecture weren't there, so if you fix that you can add the feature when you need it in a week.


I think that's probably part of it; but on a really simple level with YAGNI you're not expending effort on something that isn't needed which reduces cost.

What I try to do is think about the classes of functionality that might be needed in the future. How could I build X feature in a year's time?

Leave doors open, not closed.


Right, I always thought this is what TDD is for; very often I design my code in tests and let that kind of guide my implementation.

I kind of imagine what the end result should be in my head (given value A and B, these rows should be X and Y), then write the tests in what I _think_ would be a good api for my system and go from there.

The end result is that my code is testable by default and I get to go through multiple cycles of Red -> Green -> Refactor until I end up with something I'm happy with.
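
A minimal sketch of that Red -> Green -> Refactor loop (a hypothetical example; Python's unittest used just for illustration): the test is written first against the API I think I want, then the simplest implementation makes it pass.

    import unittest

    def total_price(quantities: dict[str, int], prices: dict[str, float]) -> float:
        # "Green": the simplest implementation that satisfies the test below.
        return sum(prices[item] * qty for item, qty in quantities.items())

    class TotalPriceTest(unittest.TestCase):
        def test_given_quantities_and_prices_totals_match(self):
            # "Red": written first, against the API I _think_ is good.
            prices = {"apple": 2.0, "pear": 3.0}
            self.assertEqual(total_price({"apple": 2, "pear": 1}, prices), 7.0)

    if __name__ == "__main__":
        unittest.main()

The refactor step then reshapes total_price (or the test's API) while the test keeps it honest.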

Does anyone else work like this?


Sometimes, when I feel like I know the basic domain and can think of something reasonable that I could use as a "north star" test end-point, I work like this. Think anything that could be a unit test, or even a simple functional test. But when I don't know the domain, or if it's some complex system that I have to come up from scratch, writing tests first often makes no sense at all. Then I'd usually start literally drawing on paper, or typing descriptions of what it might do, then just start coding, and the tests come much, much later.

Right tool for the job, as always! The only thing I can't stand is blind zealotry to one way or the other. I've had to work with some real TDD zealots in the past, and long story short, I won't work with people like that again.


TDD comes up with some really novel designs sometimes.

Like, I expect it should look one way, but after I'm done with a few TDD cycles I'm at a state from which that expected design is either hard to get to or unnecessary.

I think this is why some people don't like TDD much, sometimes you have to let go of your ideas, or if you're stuck to them, you need to go back much earlier and try again.

I kind of like this though, makes it kind of like you're following a choose your own adventure book.


I prefer to write an initial implementation, and then in the testing process figure out which interfaces simplify my tests, and then I refactor the implementation to use those interfaces. Generally, this avoids unnecessary abstraction, as the interfaces for testing tend to be the same ones you might need for extensibility.
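
A minimal sketch of that refactor direction (a hypothetical example, Python for illustration): the concrete implementation came first, and the narrow interface only appears because the tests wanted to substitute a fake.

    import time
    from typing import Protocol

    class Clock(Protocol):
        def now(self) -> float: ...

    class SystemClock:
        def now(self) -> float:
            return time.time()

    class FixedClock:
        """Test double that the test suite asked for."""
        def __init__(self, t: float) -> None:
            self._t = t
        def now(self) -> float:
            return self._t

    def is_expired(deadline: float, clock: Clock) -> bool:
        # Refactored to depend on the extracted interface instead of time.time().
        return clock.now() > deadline

    # The tests that motivated the interface:
    assert is_expired(deadline=10.0, clock=FixedClock(11.0))
    assert not is_expired(deadline=10.0, clock=FixedClock(9.0))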

I'm not sure why folks think they need to hold all this in their head. For me, at least, the "think about the problem stage" involves a lot of:

* Scribbling out some basic design notes.
* Figuring out the bits I'm not completely sure about
* Maybe coding up some 'prototype' code to prove out the bits I'm less sure about
* Repeat until I think I know what I'm doing/validated my assumptions
* Put together a 'more formal' design to share with the team. Sometimes my coworkers think of things that I hadn't, so it's not quite a formality. :)
* Code the thing up.

By the time I get to the "code it up" stage, I've probably put in most of the work. I've written things down as design, but nothing hyperdetailed. Just what I need to remind myself (and my team) of the decisions I've made, what the approach is, the rejected ideas, and what needs doing. Some projects need a fair bit on paper, some don't.


My pet theory is that developers exist on a spectrum between "planner" and "prototyper" - one extreme spends a lot of time thinking about a solution before putting it into code - hopefully hitting the goal on first attempt. The other iterates towards it. Both are good to have on the team.

I could not agree more; it's rare to write a program where you know all the dependencies, the libraries you will use, and the overall effect on other parts of the program by heart. So, a gradual design process is best.

I would point out, though, that that part also touched on understanding requirements, which is many times a very difficult process. We might have a technical requirement conjured up, by someone less knowledgeable about the inner workings, from a customer requirement, and the resolution of the technical requirement may not even closely address the end-users' use case. So, a lot of time also goes into understanding what it is that the end-users actually need.


I agree this is how it often goes.

But this also makes it difficult to give accurate estimates, because you sometimes need to prototype 2, 3 or even more designs to work out the best option.

> writing code should be actually seen as part of the "thinking process".

Unfortunately, most of the time leadership doesn't see things this way. For them the tough work of thinking ends with architecture, or another layer down. Then the engineers are responsible only for translating those designs into software, just by typing away at a keyboard.

This leads to a mismatch in delivery expectations between leadership and developers.


In my opinion, you shouldn't need to prototype all of these options .. but you will need to stress test any points where you have uncertainty.

The prototype should provide you with cast iron certainty that the final design can be implemented, to avoid wasting a huge amount of effort.


If you know so little that you have to make 3 prototypes to understand your problem, do you think designing it by any other process will make it possible to produce an accurate estimate?

(not so much of a reply, but more of my thoughts on the discussion in the replies)

I would say the topic is two-sided.

The first is when we do greenfield development (maybe of some new part of already existing software): the domain is not really well known and the direction of future development is even less so. So, there is not much to think about: document what is known, make a rough layout of the system, and go coding. Investing too much in the design at the early stages may result in something (a) overcomplicated, (b) missing very important parts of the domain and thus irrelevant to the problem, or (c) having nothing to do with how the software will evolve.

The second (and it is that side I think the post is about) is when we change some already working part. This time it pays hugely to ponder on how to best accommodate the change (and other information the request to make this change brings to our understanding of the domain) into the software before jumping to code. This way I've managed to reduce what was initially thought to take days (if not weeks) of coding to writing just a couple of lines or even renaming an input field in our UI. No amount of exploratory coding of the initial solution would result in such tremendous savings in development time and software complexity.


I agree with the spirit of "writing code" as part of the thinking process but I have to point out a few very dangerous pitfalls there.

First is the urge to write the whole prototype yourself from scratch. Not necessary, better avoided. You should just hack some things together, or pick something close to what you want off GitHub. The idea is to have something working. I am a big proponent of implementing my ideas in a spreadsheet, then off to some code.

Second, modern software solutions are complex (think Kubernetes, cloud provider quirks, authentication, SQL/NoSQL, external APIs) and it's easy to get lost in the minutiae; the layers shroud the original idea, and it takes strenuous effort to think clearly through them. To counter this, I keep a single project in my language of choice with the core business logic. No dependencies; everything else is stubbed or mocked. It runs in the IDE, on my laptop, offline, with tests (see the sketch after these points). This extra effort has paid off well in keeping the focus on core priorities and identifying when bullshit tries to creep in. You could also use diagrams or whatever, but working executable code is awesome to have.

Third is to document my findings. Often I tend to tinker with the prototype way beyond the point of any meaningful threshold. It's 2am before I know it, and I kinda lose the lessons when I start the next day. Keeping a log in parallel with building the prototype helps me stay focused, be clear about what my goals are, and avoid repeating the same mistakes.

Fourth is the rather controversial topic of estimation. When I have a running prototype, I tend to get excited and too optimistic with my estimates. Rookie mistake. Always pad your estimates one order of magnitude higher. You still need to go through a lot of bs to get it into production. Remember that you will be working with a team, mostly idiots. Linus works alone.
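
A minimal sketch of the stubbed core-logic project from the second point above (hypothetical domain, Python just for illustration): the business rule runs offline with a stub standing in for the real storage.

    from dataclasses import dataclass

    @dataclass
    class Order:
        customer_id: str
        amount: float

    class StubOrderStore:
        """Stands in for the real database / queue / cloud API."""
        def __init__(self) -> None:
            self.saved: list[Order] = []
        def save(self, order: Order) -> None:
            self.saved.append(order)

    def place_order(store, customer_id: str, amount: float) -> Order:
        # Core business rule, free of Kubernetes/auth/SQL minutiae.
        if amount <= 0:
            raise ValueError("amount must be positive")
        order = Order(customer_id, amount)
        store.save(order)
        return order

    if __name__ == "__main__":
        store = StubOrderStore()
        place_order(store, "c-42", 19.99)
        assert len(store.saved) == 1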


Pen, paper, diagrams.

Xmind and draw.io

That quote is already a quote in the article. The article author himself writes:

> What is really happening?

> • Programmers were typing on and off all day. Those 30 minutes are to recreate the net result of all the work they wrote, un-wrote, edited, and reworked through the day. It is not all the effort they put in, it is only the residue of the effort.

So at least there the article agrees with you.


The way I read that is that only 10% of the work is writing out the actual implementation that you're sticking with. How you get there isn't as important. Ie someone might want to take notes on paper and draw graphs while others might want to type things out. That's all still planning.

Agree with your point. I think “developers that do 90% of their thinking before they touch the keyboard are really good” is the actual correct inference.

Plenty of good developers use the code as a notepad / scratch space to shape ideas, and that can also get the job done.


I'd like to second that, especially if combined with a process where a lot of code should get discarded before making it into the repository. Undoing and reconsidering initial ideas is crucial to any creative flow I've had.

It's not even that, it's also the iceberg effect of all our personal knowledge-bases; I'd rather experiment on my machine figuring out how I want to do something rather than read endless documentation.

Documentation is a good starter/unblocker but once I've got the basics down then I'll run wild in a REPL or something figuring out exactly how I want to do something.

Pure thinking planning is a good way to end up with all sorts of things that weren't factored in, imo. We should always encourage play during the planning process.


Mathematics was invented by the human mind to minimize waste and maximize work productivity. By allowing reality mapping abstractions to take precedence over empirical falsifications of propositions.

And what most people can't do, such as keeping in their heads absolutely all the concepts of a theoretical computer software application, is an indication that real programmers exist on a higher elevation where information technology is literally second nature to them. To put it bluntly and succinctly.

For computer software development to be part of thinking, a more intimate fusion between man and machine needs to happen. Instead of the position that a programmer is a separate and autonomous entity from his fungible software.

The best programmers simulate machines in their heads, basically.


> The best programmers simulate machines in their heads, basically.

Yes, but they still suck at it.

That's why people create procedures like prototyping, test-driven design, type-driven design, paper prototypes, API mocking, etc.


The point is that there are no programmers who can simulate machines in their heads. These elite engineers only exist in theory. Because if they did exist, they would appear so alien and freakish to you that they would never be able to communicate their form of software development paradigms and patterns. This rare type of programmer only exists in the future, assuming we're on a timeline toward such a singularity. And we're not, except for some exceptions that cultivate a colony away from what is commonly called the tech industry.

EDIT: Unless you're saying that SOLID, VIPER, TDD, etc. are already alien invaders from a perfect world and only good and skilled humans can adhere to the rules with uncanny accuracy?


String theory was invented by the human mind to minimize productivity and maximize nerd sniping. Pure math.

I don't think the quote suggests that a programmer would mentally design a whole system before writing any code. As programmers, we are used to thinking of problems as steps needing resolution, and that's exactly the 90% there. When you're quickly prototyping to see what fits better as a solution to the problem you're facing, you must have already thought about what the requirements are, what the constraints are, and what would be a reasonable API given your use case. Poking around until you find a reasonable path forward means you have already defined which way is forward.

I don’t like that line at all.

Personally, I think good developers get characters on to the screen and update as needed.

One problem with so much upfront work is how missing even a tiny thing can blow it all up, and it is really easy to miss things.


> writing code should be actually seen as part of the "thinking process"

I agree. This is how I work as well. I start coding as quickly as possible, and I fully plan on throwing away my first draft and starting again. I do a fair bit of thinking beforehand, of course, but I also want to get the hidden "gotchas" brought to light as quickly as possible, and often the fastest way is to just start writing the code.


I don't get great results myself from diving into coding immediately. I personally get better results if I have some storyboards, workflows, ins/outs, etc. identified and worked-over first.

But when I do get down to writing code, it very much takes an evolutionary path. Typically the first thing I start writing is bloated, inefficient, or otherwise suboptimal in a variety of ways. Mostly it's to get the relationships between concepts established and start modeling the process.

Once I have something that starts looking like it'll work, then I sort of take the sculptor's approach and start taking away everything that isn't the program I want.

So yeah, a fair amount of planning, but the first draft of anything I write, really, code or otherwise, is something for me, myself, to respond to. Then I keep working it over until it wouldn't annoy me if I was someone else picking it up cold.


There's more than one way to do it.

How I work:

- First I make a very general design of modules, taking into account how they are going to communicate with each other, all based on experience with previous systems.

- I foresee problematic parts, usually integration points, and write simple programs to validate assumptions and test performance.

- I write a skeleton for the main program with a dumbed-down GUI.

From that point on, I develop each module and now, yes, there's a lot of thinking in advance.


I completely agree with you. This article is on the right track, but it completely ignores the importance of exploratory programming in guiding that thinking process.

I also find it critical to start writing immediately. Just my thoughts and research results. I also like to attempt to write code too early. I'll get blocked very quickly or realize what I'm writing won't work, and it brings the blockage to the forefront of my mind. If I don't try to write code there will be some competing theories in my mind and they won't be prioritized correctly.

> While that may be true sometimes, I think that ignores the fact that most people can't keep a whole lot of constraints and concepts in their head at the same time.

Indeed, and this limitation specifically is why I dislike VIPER: the design pattern itself was taking up too many of my attention slots, leaving less available for the actual code.

(I think I completely failed to convince anyone that this was important).


Are you talking about the modal text editor for Emacs?


Correct, that.

The most I've done with emacs (and vim) is following a google search result for "how do I exit …"


surely. thanks

Germans say "probieren geht über studieren", which means: rather try it out than think about it too much.

ditto. My coworkers will sit down at the whiteboard and start making ERDs, and I'll just start writing a DB migration and sketching up the models. They think I'm crazy, because I have to do rollbacks and recreate my table a few times as I think of more things, and I wind up deleting some code that I already wrote and technically worked. Who cares? Time-wise, it comes out the same in the end, and I find and solve more hidden surprises by experimenting than they do by planning what they're going to do once they're finally ready to type.

I think it's just two ways of doing the same thing.
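
A minimal sketch of that "sketch the schema in code" habit (hypothetical tables; sqlite3 used so it runs anywhere): write the migration, notice what's missing, roll it back, and recreate the table with the extra column.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    def migrate_v1(c: sqlite3.Connection) -> None:
        c.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

    def rollback_v1(c: sqlite3.Connection) -> None:
        c.execute("DROP TABLE users")

    def migrate_v2(c: sqlite3.Connection) -> None:
        # Second pass, after realising the report needs a sign-up date.
        c.execute(
            "CREATE TABLE users ("
            " id INTEGER PRIMARY KEY,"
            " name TEXT NOT NULL,"
            " signed_up TEXT NOT NULL DEFAULT (date('now')))"
        )

    migrate_v1(conn)
    rollback_v1(conn)   # "recreate my table a few times as I think of more things"
    migrate_v2(conn)
    conn.execute("INSERT INTO users (name) VALUES ('ada')")
    print(conn.execute("SELECT id, name, signed_up FROM users").fetchall())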


Same, and since we are on the topic of measuring developer productivity; usually my bad-but-kinda-working prototype is not only much faster to develop, it also has more lines of code, maximizing my measurable productivity!

I'd take the phrase with a grain of salt. What's certainly true is that you can't just type your way to a solution.

Whether you plan before pen meets paper or plan while noodling is a matter of taste.


Also not every task requires deep thought. If you are writing some CRUD, it is usually not going to be all that much thinking, but more touching the keyboard.

I wish I had thought a little more about this CRUD.

I hand-wrote HTML forms and that was not a great plan. I made a dialog generator class in about half an hour that replaced dozens of CreatePermission.html-type garbage files I wrote a decade ago.
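
Something in the spirit of that dialog generator (a rough sketch with a hypothetical field spec, Python for illustration): one small class renders a form from a list of fields instead of dozens of hand-written CreatePermission.html-style files.

    from html import escape

    class DialogGenerator:
        def __init__(self, title: str, action: str, fields: list[tuple[str, str]]):
            self.title = title      # heading shown above the form
            self.action = action    # URL the form posts to
            self.fields = fields    # (field name, input type) pairs

        def render(self) -> str:
            rows = "\n".join(
                f'  <label>{escape(name)} '
                f'<input name="{escape(name)}" type="{escape(kind)}"></label>'
                for name, kind in self.fields
            )
            return (
                f"<h1>{escape(self.title)}</h1>\n"
                f'<form method="post" action="{escape(self.action)}">\n'
                f"{rows}\n"
                '  <button type="submit">Save</button>\n'
                "</form>"
            )

    print(DialogGenerator("Create Permission", "/permissions",
                          [("name", "text"), ("read_only", "checkbox")]).render())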


This is what I do: right off the bat I write down a line or two about what I need to do.

Then I break that down into small and smaller steps.

Then I hack it together to make it work.

Then I refactor to make it not a mess.


I used to think a lot before coding.

Then I learned TDD, and now I can discover the design while I code. It's a huge improvement for me!


I think it was Feynman who said something to the effect of “writing is thinking”.

yeah, i just sketched out some code ideas on paper over a few days, checked and rechecked them to make sure they were right, and then after i wrote the code on the computer tonight, it was full of bugs that i took hours and hours to figure out anyway. debugging output, stepping through in the debugger, randomly banging on shit to see what would happen because i was out of ideas. i would have asked coworkers but i'm fresh out of those at the moment

i am not smart enough this week to debug a raytracer on paper before typing it in, if i ever was

things like Hypothesis (property-based testing) can make a computer very powerful for checking out ideas


I've no coworkers either, and over time both I and my code suffer for it. Some say thinking is, at its essence, a social endeavor.

I should say rather, Thinking.

"Most people" are not "Really good developers".

Great article! I've posted it in other comments before, but it's worth repeating:

The best explanation I've seen is in the book "The Secret Life of Programs" by Jonathan E. Steinhart. I'll quote that paragraph verbatim:

---

Computer programming is a two-step process:

1. Understand the universe.

2. Explain it to a three-year-old.

What does this mean? Well, you can't write computer programs to do things that you yourself don't understand. For example, you can't write a spellchecker if you don't know the rules for spelling, and you can't write a good action video game if you don't know physics. So, the first step in becoming a good computer programmer is to learn as much as you can about everything else. Solutions to problems often come from unexpected places, so don't ignore something just because it doesn't seem immediately relevant.

The second step of the process requires explaining what you know to a machine that has a very rigid view of the world, like young children do. This rigidity in children is really obvious when they're about three years old. Let's say you're trying to get out the door. You ask your child, "Where are your shoes?" The response: "There." She did answer your question. The problem is, she doesn't understand that you're really asking her to put her shoes on so that you both can go somewhere. Flexibility and the ability to make inferences are skills that children learn as they grow up. But computers are like Peter Pan: they never grow up.


Any other similar book recommendations?

Not really a book, and not sure how similar it would be, but you might enjoy this work from 2001, "The Programmers' Stone". Its introduction to "mapping" vs "packing" definitely influenced my own understanding of programming vs thinking back then.

https://www.datapacrat.com/Opinion/Reciprocality/r0/index.ht...


Edit: After writing this long nitpicky comment, I have thought of a much shorter and simpler point I want to make: programming is mostly thinking, and there are many ways to do the work of thinking. Different people and problems call for different ways of thinking, and learning to think/program in different ways will give you more tools to choose from. Thus I don't like arguments that there is one right way that programming happens or should happen.

I'm sorry, but your entire comment reads like a list of platitudes about programming that don't actually match reality.

> Well, you can't write computer programs to do things that you yourself don't understand.

Not true. There are many times where writing software to do a thing is how I come to understand how that thing actually works.

Additionally, while an understanding of physics helps with modeling physics, much of that physics modeling is done to implement video games and absolute fidelity to reality is not the goal. There is often an exploration of the model space to find the right balance of fidelity, user experience and challenge.

Software writing is absolutely mostly thinking, but that doesn't mean all or even most of the thinking should always come first. Computer programming can be an exploratory cognitive tool.

> So, the first step in becoming a good computer programmer is to learn as much as you can about everything else. Solutions to problems often come from unexpected places, so don't ignore something just because it doesn't seem immediately relevant.

I'm all about generalists and autodidacts, but becoming one isn't a necessary first step to being a good programmer.

> The second step of the process requires explaining what you know to a machine that has a very rigid view of the world, like young children do.

Umm... children have "rigid" world views? Do you know any children?

> Let's say you're trying to get out the door. You ask your child, "Where are your shoes?" The response: "There." She did answer your question.

Oh, you don't mean rigid, you mean they can't always infer social subtexts.

> Flexibility and the ability to make inferences are skills that children learn as they grow up. But computers are like Peter Pan: they never grow up.

Computers make inferences all the time. Deriving logical conclusions from known facts is absolutely something computers can be programmed to do and is arguably one of their main use cases.

I have spent time explaining things to children of various ages, including 3-year-olds, and find the experience absolutely nothing like programming a computer.


Are you replying to me or to the author of the quote? :)

> Software writing is absolutely mostly thinking, but that doesn't mean all or even most of the thinking should al always come first. Computer programming can be an exploratory cognitive tool.

Absolutely, explaining something to the child also can be exploratory cognitive tool.


I would say very young children up until they acquire concepts like a theory of mind, cause and effect happening outside of their field of observation, and so on, are pretty rigid in many ways like computers. It's a valuable insight.

Or at least they don't make mistakes in exceptionally novel and unusual ways until they're a bit older.


> I would say very young children up until they acquire concepts like a theory of mind, cause and effect happening outside of their field of observation, and so on, are pretty rigid in many ways like computers. It's a valuable insight.

I don't see any overlap between the two skill sets; since you do, I'd be curious for examples of where you do see overlap.


    I'm confident enough to tout this number as effectively true, though I should mention that no company I work with has so far been willing to delete a whole day's work to prove or disprove this experiment yet.
Long ago when I was much more tolerant, I had a boss that would review all code changes every night and delete anything he didn't like. This same boss also believed that version control was overcomplicated and decided the company should standardize on remote access to a network drive at his house.

The effect of this was that I'd occasionally come in the next morning to find that my previous day's work had been deleted. Before I eventually installed an illicit copy of SVN, I got very good at recreating the previous day's work. Rarely took more than an hour, including testing all the edge cases.


I don't have a big sample size, but both of my first two embedded jobs used network shares and copy+paste to version their code. Because I had kind-of PTSD from the first job, I asked the boss right off at the second job if they had a git repository somewhere. He thought that git is the same as GitHub and told me they don't want their code to be public.

When they were bought out by some bigger company, we got access to their intranet. I dug through that and found a GitLab instance. So then I just versioned my own code (which I was working on mostly on my own), documented all of it on there, even installed a GitLab runner, and kept step-by-step documentation on how to get my code working. When they kicked me out (because I was kind of an asshole, I assume), they asked me to hand over my code. I showed them all of what I did and told them how to reproduce it. After that the boss was kinda impressed and thanked me for my work. Maybe I had a little positive impact on a shitty job by being an asshole and doing stuff the way that I thought would be the right way to do it.

Edit: Oh, before I found that GitLab instance I just initialized bare git repositories on their network share and pushed everything to those.


You got fired and your response is to give them a gift? Fascinating.

Well, I was severely depressed and was on sick leave for quite some time, but when I was there I just did my job as best as I could. I am not an inherent asshole. I just get triggered hard when some things don't work out (no initial training, barely any documentation, people being arrogant). I just want to be better than this myself.

Bad boss or zen teacher, we will never know!

The bigger problem here is the manager getting involved with code.

Even when done with good intentions, managers being involved in code/reviews almost always ends up being net negative for the team.


Why?

There are many reasons. First, a manager is not a peer but brings a sense of authority into the mix, so the discussions will not be honest. A manager's inputs have a sense of finality, and people will hesitate to comment on or override them even when they are questionable.

There are human elements too. Even if someone has honest inputs, any (monetary or otherwise) rewards or lack of them will be attributed to those inputs (or lack of them). Overall, it just encourages bad behaviours among the team and invites trouble.

These should not happen in an ideal world but as we are dealing with people things will be far from ideal.


Anyone who has made serious use of Microsoft Office products in the 00's and 10's knows these things to be true (or they reflexively click save every 5-10 minutes).

Was your work better or worse second time around?

Probably a bit of both, but hindsight helped. It doesn't usually end up exactly the same though. Regardless, whatever I wrote worked well enough that it outlived the company. A former client running it reached out to have it modified last year.

With writing, the second version is definitely better; it sucks having to redo it, but the improvement makes it worthwhile.

Crikey what a sociopath to work for. I’m sorry this happened to you.

This is a good article to send to non-programmers. Just as programmers need domain knowledge, those who are trying to get something out of programmers need to understand a bit about it.

I think I recognise that tiny diffs that I might commit can be the ones that take hours to create because of the debugging or design or learning involved. It's all so easy to be unimpressed by the quantity of output and having something explained to you is quite different from bashing your head against a brick wall for hours trying to work it out yourself.


This. The smallest pieces of code I’ve put out were usually by far the most time consuming, most impactful and most satisfying after you “get it”. One line commits that improve performance by 100x but took days to find, alongside having to explain during syncs why a ticket is not moving.

This is why domain knowledge is key. I work in finance, I've sat on trading desks looking at various exchanges, writing code to implement this or that strategy.

You can't think about what the computer should do if you don't know what the business should do.

From this perspective, it might make sense to train coders a bit like how we train translators. For example, I have a friend who is a translator. She speaks a bunch of languages, it's very impressive. She knows the grammar, idioms, and so on of a wide number of languages, and can pick up new ones like how you or I can pick up a new coding language.

But she also spent a significant amount of time learning about the pharmaceutical industry. Stuff about how that business works, what kinds of things they do, different things that interface with translation. So now she works translating medical documents.

Lawyers and accountants are another profession where you have a language gap. What I mean is, when you become a professional, you learn the language of your profession, and you learn how to talk in terms of the law, or accounting, or software. What I've always found is that the good professionals are the ones who can give you answers not in terms of their professional language, but in terms of business.

Particularly with lawyers, the ones who are less good will tell you every possible outcome, in legalese, leaving you to make a decision about which button to press. The good lawyers will say "yes, there's a bunch of minor things that could happen, but in practice every client in your position does X, because they all have this business goal".

---

As for his thought experiment, I recall a case from my first trading job. We had a trader who'd created a VBA module in Excel. It did some process for looking through stocks for targets to trade. No version control, just saved file on disk.

Our new recruit lands on the desk, and one day within a couple of weeks, he somehow deletes the whole VBA module and saves it. All gone, no backup, and IT can't do anything either.

Our trader colleague goes red. He calms down, but what can you do? You should have backups, and what are you doing with VBA anyway?

He sits down and types out the whole thing, as if he were a terminal screen from the 80s printing each character after the next.

Boom, done.


> This is why domain knowledge is key.

Very true. There’s a huge difference developing in a well known vs. new domain. My mantra is that you have to first be experienced in a domain to be able to craft a good solution.

Right now I am pouring most of my time in a fairly new domain, just to get an experience. I sit next to the domain experts (my decision) to quickly accumulate the needed knowledge.


> This is why domain knowledge is key.

> Lawyers and accountants are another profession where you have a language gap.

I fully agree with you. However, my experience as a software engineer with a CPA is that, generally speaking, companies do not care too greatly about that domain knowledge. They’d rather have a software engineer with 15 years working in accounting-related software than someone with my background or similar and then stick them into a room to chat with an accountant for 30 minutes.


> This is why domain knowledge is key

In the comment thread, I keep seeing prescriptions over and over for the one way that programming should work.

Computer programming is an incredibly broad discipline that covers a huge range of types of work. I think it is incredibly hard to make generalizations that actually apply to the whole breadth of what computer programming encompasses.

Rather than trying to learn or teach one perfect, single methodology that applies across every subfield of programming, I think one should aim to build a toolbag of approaches and methodologies, along with an understanding of where they tend to work well.


> This is why domain knowledge is key.

Yeah but in my country all companies have a non-compete clause which makes it completely useless for me to learn any domain-specific knowledge because I won't be able to transfer it to my next job if current employer fires me. Therefore I focus on general programming skills because these are transferable across industries.


The transferable skill is learning and getting on top of the business, then translating that to code. Of course you can't transfer the actual business rules; every business is different. You just get better and better at asking the right questions. Or you just stick with a company for a long time. There are many businesses that can't be picked up in a few weeks. Maybe a few years.

In some countries (Austria), the company that you have a non-compete clause with should pay you a salary if you can’t reasonably be employed due to it. So it is not enforced most of the time.

cripes what country is that

This is laid out pretty early on by Bjourne in his PPP book[0],

> We do not assume that you — our reader — want to become a professional programmer and spend the rest of your working life writing code. Even the best programmers — especially the best programmers — spend most of their time not writing code. Understanding problems takes serious time and often requires significant intellectual effort. That intellectual challenge is what many programmers refer to when they say that programming is interesting.

Picked up the new edition[1] as it was on the front page recently[2].

[0]: https://www.stroustrup.com/PPP2e_Ch01.pdf

[1]: https://www.stroustrup.com/programming.html

[2]: https://news.ycombinator.com/item?id=40086779


I think this is mostly right, but my biggest problem is that it feels like we spend time arguing the same things over and over. Which DB to use, which language is best, nulls or not in code and in DB, API formatting, log formatting, etc.

These aren't particularly interesting, and sure, it's good to revisit them from time to time, but these are the types of time sinks I've found myself in at the last 3 companies I've worked for, and they feel like they should be mostly solved.

In fact, a company with a strong mindset, even if questionable, is probably way more productive. If it was set in stone we use Perl, MongoDB, CGI... I'd probably ultimately be more productive than I've been lately despite the stack.


> “If it was set in stone we use Perl, MongoDB, CGI... I'd probably ultimately be more productive than I've been lately despite the stack.”

Facebook decided to stick with PHP and MySQL from their early days rather than rewrite, and they’re still today on a stack derived from the original one.

It was the right decision IMO. They prioritized product velocity and trusted that issues with the stack could be resolved with money when the time comes.

And that’s what they’ve done by any metric. While nominally a PHP family language, Meta’s Hack and its associated homegrown ecosystem provides one of the best developer experiences on the planet, and has scaled up to three billion active users.


I disagree! These decisions are fundamental in the engineering process.

Should I use steel, concrete or wood to build this bridge?

The mindless coding part starts one year later, when you find that your MongoDB does not do joins and you start implementing them as an extra layer on the client side.


What you're referring to is politics. Different people have different preferences, often because they're more familiar with one of them, or for other possibly good reasons. Somehow you have to decide who wins.

The hardest part is finding out what _not_ to code, either before (design) or after (learn from prototype or the previous iteration) having written some.

No code is faster than no code!

Sometimes you have to write it to understand why you shouldn’t have written it.

Sometimes you knew you shouldn't have written it and then did so anyway.

*Bjarne

“Programming is mostly thinking” is one of those things we tell ourselves like it is some deep truth, but it’s the most unproductive of observations.

Programming is thinking in the same exact way all knowledge work is thinking:

- Design in all its forms is mostly thinking

- Accounting is mostly thinking

- Management in general is mostly thinking

The meaningful difference is not the thinking, it’s what you are thinking about.

Your manager needs to “debug” people-problems, so they need lots of time with people (i.e. meetings).

You are debugging computer problems, so you need lots of time with your computer.

There’s an obvious tension there and none of the extremes work; you (and your manager) need to find a way to balance both of your workloads to minimize stepping on each other’s toes, just like with any other coworker.


The article isn't for programmers, it's for non-programmers (like management) who think it is mostly typing, and describing what's going on when we're not typing.

It's not nearly as unproductive as my old PhD college professor, who went on and on about the amount of time you lose per day moving your hand off your keyboard, when you could be memorizing shortcuts and macros instead of working.

An important difference is that in programming, it is often better to do the same thing with less code (result).

I don't mean producing cryptic code-golf-style code, but the aspect that all the stuff you produce you have to maintain. This is certainly different from a novel author who doesn't care so much about maintenance and is probably more concerned about the emotions that his text is producing.


> how can you experiment with learning on-the-job to create systems where the thinking is optimized?

The best optimization is fewer interruptions, as research shows their devastating effect on programming [0]:

- 10-15 min to resume work after an interruption

- A programmer is likely to get just one uninterrupted 2-hour session in a day

- Worst time to interrupt: during edits, searches & comprehension

I've been wondering if there's a way to track interruptions to showcase this; a rough sketch of one possible approach is below.

[0] http://blog.ninlabs.com/2013/01/programmer-interrupted/
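For what it's worth, here is a minimal self-tracking sketch of what such measurement could look like (not from the linked article; the prompt text and the 2-hour threshold are arbitrary choices). You leave it running in a terminal, press Enter whenever you get interrupted, and end the session with Ctrl-D:

    # interruptions.py - hypothetical self-tracking sketch (Python)
    import time

    marks = [time.time()]              # session start
    try:
        while True:
            input("interrupted? press Enter: ")
            marks.append(time.time())  # one timestamp per interruption
    except EOFError:                   # Ctrl-D ends the session
        marks.append(time.time())

    gaps = [b - a for a, b in zip(marks, marks[1:])]
    print(f"interruptions logged: {len(gaps) - 1}")
    print(f"longest uninterrupted stretch: {max(gaps) / 3600:.1f} h")
    print(f"stretches of 2h or more: {sum(g >= 2 * 3600 for g in gaps)}")

Even something this crude should make it easy to check the "one uninterrupted 2-hour session a day" figure against your own days.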


If you ask a manager to hold an hour's meeting spread across 6 hours in 10 min slots you will get the funniest looks.

Yet developers are expected to complete a few hours of coding tasks in between an endless barrage of meetings and quick pings & sync-ups over Slack/Zoom.

For the few times I've had to work on the weekends at home, I've observed that the difference in the quality of work done over a (distraction free) weekend is much better than that of a hectic weekday.


> If you ask a manager to hold an hour's meeting spread across 6 hours in 10 min slots you will get the funniest looks.

This is a great analogy I haven’t heard it before. They think it’s like that quick work where you check your calendar and throw in your two cents on an email chain. It’s not. Much more like holding a meeting.


The horrible trap of this is getting so little work done during the day that you end up sacrificing some, and possibly all, of your otherwise free time compensating for a company's idiotic structure, and that is a catastrophe.

This is why I work at night 80% of the time. It's absolutely not for everyone, it's not for every case, and the other 20% is coordination with daytime people, but the amount of productivity that comes from good uninterrupted hours long sessions is simply unmatched. Once again, not for everyone, probably not for most.

This, and a high demand for my time, is why I am roughly an order of magnitude more productive when working from home. Nobody bothers me there, and if they do, I can decide for myself when to react.

If you want to tackle particularly hard problems and you get an interruption every 10 to 20 minutes, you can just shelve the whole thing, because chances are you will just produce bullshit code that causes headaches down the line.


I once led a project to develop a tool that tracks how people use their time in a large corporation. We designed it to be privacy-respecting, so it would log that you are using the Web browser, but not the specific URL, which is of course relevant (e.g. Intranet versus fb.com). Every now and then, a pop up would ask the user to self-rate how productive they feel, with a free-text field to comment. Again, not assigned to user IDs in order to respect privacy, or people would start lying to pretend to be super-human.

We wrote a Windows front end and a Scala back end for data gathering and rolled it out to a group of volunteers (including devs, lawyers and even finance people). Sadly the project ran out of time and budget just as things were getting interesting (after a first round of data analysis), so we never published a paper about it.

We also looked at existing tools such as Rescue Time ( https://www.rescuetime.com/ ) but decided an external cloud was not acceptable to store our internal productivity data.


Good programming is sometimes mostly thinking, because "no plan survives first contact with the enemy." Pragmatic programming is a judicious combination of planning and putting code to IDE, with the balance adapting to the use case.

This. Programming is mostly reconnaissance, not just thinking. If you don't write code for days, you're either fully aware of the problem surface or just guessing at it. There's not much to think about in the latter case.

The first run with the IDE is like completing a level of a game the first time. The second time it will be quicker.

I agree we can expand thinking to “thinking with help from tools”.


That’s an iteration of Peter Naur’s « Programming as Theory Building » that has been pivotal in my understanding of what programming really is about.

Programming is not about producing programs per se; it is about forming certain insights about affairs of the world, and eventually outputting code that is nothing more than a mere representation of the theory you have built.


Off topic. I'm not a developer but I do write code at work, on which some important internal processes depend. I get the impression that most people don't see what I do as work, engaged as they are in "busy" work. So I'm glad when I read things like this that my struggles are those of a real developer.

Sounds like you are a "real developer". Don't sell yourself short.

Developers need to learn how to think algorithmically. I still spend most of my time writing pseudocode and making diagrams (previously with pen and paper, now with my iPad). It's the programmer's version of Abraham Lincoln's quote: "Give me six hours to chop down a tree and I will spend the first four sharpening the axe."

I don’t really know what “think algorithmically” means, but what I’d like to see as a lead engineer is for my seniors to think in terms of maintenance above all else. Nothing clever, nothing coupled, nothing DRY. It should be as dumb and durable as an AK47.

>I don’t really know what “think algorithmically” means

I would say it means thinking about algorithms and data structures so that algorithmic complexity doesn't explode.

>Nothing clever

A lot of devs use nested loops and List.remove()/indexOf() instead of maps, etc.; the terrible performance gets accepted as the state of the art, and then you have to do complex workarounds to avoid calling some treatments too often, which increases the complexity further (see the sketch at the end of this comment).

Performance yields simplicity: a small increase in cleverness in some code can allow for a large reduction in complexity in all the code that uses it.

Whenever I write a library, I make it as fast as I can, so that user code can use it as carelessly as possible, and to avoid another library popping up when someone wants better performance.
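A tiny Python illustration of the lists-versus-maps point above (made up for this thread, not from the original comment): deduplicating with membership tests on a list is quadratic, while a set does the same job in roughly linear time, and the calling code no longer has to worry about how often it runs.

    # Hypothetical sketch: same result, very different cost.
    def dedupe_slow(items):
        seen = []
        for x in items:
            if x not in seen:      # list membership test is O(n) -> O(n^2) overall
                seen.append(x)
        return seen

    def dedupe_fast(items):
        seen, out = set(), []
        for x in items:
            if x not in seen:      # set membership test is O(1) on average -> O(n) overall
                seen.add(x)
                out.append(x)
        return out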


We need this to be more prevalent. But the sad fact is most architects try to justify their position and high salaries by creating "robust" software. You know what I mean - factories over factories, micro services and what not. If we kept it simple I don't think we would need many architects. We would just need experienced devs that know the codebase well and help with PRs and design processes, no need to call such a person 'architect', there's not much to architect in such a role.

I was shown what it means to write robust software by a guy with a PhD in... philosophy of all things (so a literal philosophiae doctor).

Ironically enough, it was nothing like what some architecture astronauts write - just a set of simple-to-follow rules, like organizing files by domain, using immutable data structures and pure functions where reasonable, etc.

Also I hadn't seen him use dependent types in the one project we worked together on and generics appeared only when it really made sense.

Apparently it boils down to using the right tools, not everything you've got at once.


I love how so much of distributed systems/robust software wisdom is basically: stop OOP. Go back to lambda.

OOP was a great concept initially. Somehow it got equated with the corporate driven insanity of attaching functions to data structures in arbitrary ways, and all the folly that follows. Because "objects" are easy to imagine and pure functions aren't? I don't know but I'd like to understand why corporations keep peddling programming paradigms that fundamentally detract from what computer science knows about managing complex distributed systems.


> Nothing clever, nothing coupled

Yes, simple is good. Simple is not always easy though. A good goal to strive for nevertheless.

> nothing DRY

That's interesting. Would you prefer all the code to be repeated in multiple places?


Depends. I haven’t come up with the rubric yet but it’s something like “don't abstract out functionality across data types”. I see this all the time: “I did this one thing here with data type A, and I’m doing something similar with data type B; let’s just create some abstraction for both of them!” Invariably it ends up collapsing, and if the whole program is constructed this way, it becomes monstrous to untangle, like exponentially complicated on the order of abstractions. I think it’s just a breathtaking misunderstanding of what DRY means. It’s not literally “don’t repeat yourself”. It’s “encapsulate behaviors that you need to synchronize.”

Also, limit your abstractions’ external knowledge to zero.
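A small Python sketch of that reading of DRY (the money-rounding rule is an invented example, not anything from the thread): the thing worth extracting is a behavior that must stay in sync everywhere, while functions that merely look alike today stay separate.

    # Hypothetical example of "encapsulate behaviors that you need to synchronize".
    def round_money(amount: float) -> float:
        """House rounding rule; invoices and refunds must always agree on it."""
        return round(amount, 2)

    def invoice_total(lines):
        return round_money(sum(price * qty for price, qty in lines))

    def refund_total(lines):
        return round_money(sum(price * qty for price, qty in lines))

    # invoice_total and refund_total look alike today but are kept separate on
    # purpose: they are free to diverge (fees, partial refunds). Only round_money
    # is shared, because that behavior must stay synchronized everywhere.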


Very good explanation!

> “I did this one thing here with data type A, and I’m doing something similar with data type B; let’s just create some abstraction for both of them!”

I'm guilty of this. I even fought hard against the people who wanted to keep the code duplicated for the different data types.

> “encapsulate behaviors that you need to synchronize.”

I like that!


The problem is that most developers don't actually understand DRY. They see a few lines repeated a few times in different functions and create a mess of abstraction just to remove the repeated code. Eventually more conditions are added to the abstracted functions to handle more cases, and the complexity increases, all to avoid having to look at a couple of lines of repeated code. This is not what DRY is about.

Yep, exactly. I went into further detail in another comment.

Not OP, but it probably means "no fancy silver-bullet acronyms".

In my mind this is breaking down the problem into a relevant data structure and algorithms that operate on that data structure.

If, for instance, you used a tree but were constantly looking up an index in it, you likely needed a flat array instead. The most basic example of this is sorting, obviously, but the same basic concepts apply to many, many problems.

I think the issue that happens in modern times, especially in webdev, is we aren't actually solving problems. We are just gluing services together and marshalling data around, which fundamentally doesn't need to be algorithmic... Most "coders" are glorified secretaries who now just automate what would have been done by a secretary before.

Call service A (database/ S3 etc), remove irrelevant data, send to service B, give feedback.

It's just significantly harder to do this in a computer than for a human to do it. For instance if I give you a list of names but some of them have letters swapped around you could likely easily see that and correct it. To do that "algorithmically" is likely impossible and hence ML and NLP became a thing. And data validation on user input.

So "algorithmically" in the modern sense is more: follow these steps exactly to produce this outcome, and generate user flows where that is the only option.

Humans do logic much, much better than computers, but I think the conclusion has become that the worst computer program is probably better at it than the average human. Just look at the many niche products catered to each wealth group: I could have a cheap bank account and do exactly what that account requires, or I can pay a lot of money and have a private banker I can call, who will interpret what I say into the actions that actually need to happen... I feel I am struggling to write exactly what's in my mind, but hopefully that gives you an idea...

To answer your "nothing clever": well, clever is relative. If I have some code which is effectively an array and an algorithm to remove index 'X' from it, would it be "clever" code to you if that array were labeled "Carousel" and I used the exact same generic algorithms to insert or remove elements from the carousel?

Most developers these days expect to have a class of some sort with .append and .remove functions, but why isn't it just an array of structs that uses the exact same functions as every single other array? People generally will complain that such code is "clever", but in reality it is really dumb: I can see it's clearly an array being operated on. OOP has caused brain rot, and developers don't actually know what that means... Wait, maybe that was OP's point... People no longer think algorithmically.
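Roughly what that looks like in Python (the Slide/Carousel names are just placeholders for the example above): the "carousel" is a plain list of records, manipulated with the same generic operations as any other list, with no wrapper class providing its own .append/.remove.

    # Hypothetical sketch: a "carousel" is just a list of plain records.
    from dataclasses import dataclass

    @dataclass
    class Slide:
        title: str
        image_url: str

    carousel = [Slide("Sale", "/a.png"), Slide("New", "/b.png")]

    # Insert and remove with the same generic operations as every other list:
    carousel.insert(1, Slide("Promo", "/c.png"))
    del carousel[0]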

---

Machine learning, Natural Language Processing


> I think the issue that happens in modern times, especially in webdev, is we aren't actually solving problems. We are just gluing services together and marshalling data around, which fundamentally doesn't need to be algorithmic...

This is true and is the cause of much frustration everywhere. Employers want “good” devs, so they do complicated interviews testing advanced coding ability. And then the actual workload is equal parts gluing CRUD components together, cosmetic changes to keep stakeholders happy, and standing round the water cooler raging at all the organisational things you can’t change.


I still use pen and paper. Actually as I progress with my career and knowledge I use pen and paper more and digital counterparts less.

It might be me not taking my time to learn Mathematica/Julia tho...


it's an odd analogy because programs are complex systems and involve interaction between countless people. With large software projects you don't even know where you want to go or what's going to happen until you work. A large project doesn't fit into some pre-planned algorithm in anyone's head, it's a living thing.

diagrams and this kind of planning is mostly a waste of time to be honest. You just need to start to work, and rework if necessary. This article is basically the peak of the bell curve meme. It's not 90% thinking, it's 10% thinking and 90% "just type".

Novelists for example know this very well. Beginners are always obsessed with intellectually planning out their book. The experienced writer will always tell you, stop yapping and start typing.


The first part of your comment doesn't fit with the rest. With complex projects, you often don't even know exactly what you're building, so it doesn't make sense to start coding right away. You first need to build a conceptual model, discuss it with the interested parties and only then start building. Diagrams are very useful for solidifying your design and communicating it to others.

There's a weird tension between planning and iterating. You can never foresee anywhere close to enough with just planning. But if you just start without a plan, you can easily work yourself into a dead end. So you need enough planning to avoid the dead ends, whilst starting early enough that you get your reality checks and have enough information to reach an actual solution.

Relevant factors here are how cheaply you can detect failure (in terms of time, material, political capital, team morale) and how easily you can backtrack out of a bad design decision (in terms of political capital, how much other things need to be redone due to coupling, and other limitations).

The earlier you can detect bad decisions, and the easier you can revert them, the less planning you need. But sometimes those are difficult.

It also suggests that continuous validation and forward looking to detect bad decisions early can be warranted. Something which I myself need to get better at.


> Novelists for example know this very well. Beginners are always obsessed with intellectually planning out their book. The experienced writer will always tell you, stop yapping and start typing.

This is not true in general. Brandon Sanderson for example outlines extensively before writing: https://faq.brandonsanderson.com/knowledge-base/can-you-go-i...


> You just need to start to work, and rework if necessary

And making changes on paper is cheaper than in code.


I'm tempted to break out the notebook again, but... beyond something that's already merged, what situations make paper changes cheaper than code changes? I can type way faster than I can write.

Do you have any resources for this? Especially for the ADHD kind - I end up going down rabbit holes in the planning part. How do you deal with information overload and overwhelm, or the exploration/exploitation dilemma?

There are 2 bad habits in programming: people who start writing code the first second, and people who keep thinking and investigating for months without writing any code. My solution: force yourself to do the opposite. In your case: start writing code immediately, no matter how bad or good. Look at the YouTube channel “tsoding daily”; he just goes ahead. The code is not always the best, but he gets things done. He does research offline (you can tell), but if you find yourself doing only research, reading and thinking, force yourself to actually start writing code.

Or his Twitch videos. That he starts writing immediately and that we're able to watch the process is great. Moreover the tone is friendly and funny.

I wonder if good REPL habits could help the ADHD brain?

It still feels like you are coding so your brain is attached, but with rapid prototyping you are also designing, moving parts around to see where they would fit best.


Does it really take four hours to sharpen an axe? I've never done it.

Doing it right, with only manual tools, I believe so, remembering back to one of the elder firefighters that taught me (who was also an old-school forester).

Takes about 20 minutes to sharpen a chainsaw chain these days though...


10/20 minutes to sharpen a pretty dull kitchen knife with some decent whetstones.

Also, as someone famous once said: if I had 4 hours to sharpen an axe, I'd spend 2 hours preparing the whetstones.


If I had 2 hours to prepare whetstones I’d do 1 hour of billable work and then order some whetstones online.

If I had 1 hour of billable work, I'd charge per project and upfront, to allow me to claim unemployment for the following weeks.

Question in my head is, can LLMs think algorithmically?

LLMs can't think.

Source?

LLMs string together words using probability and randomness. This makes their output sound extremely confident and believable, but it may often be bullshit. This is not comparable to thought as seen in humans and other animals.

unfortunately that is exactly what the humans are doing an alarming fraction of the time

One of the differences is that humans are very good at not doing word associations if we think they don't exist, which makes us able to outperform LLMs even without a hundred billion dollars worth of hardware strapped into our skulls.

that's called epistemic humility, or knowing what you don't know, or at least keeping your mouth shut, and in my experience actually humans suck at it, in all those forms

Ask an LLM.

LLMs can think.

Source?

I use them a lot. They sure seem thinky.

The other day I had one write a website for me. Totally novel concept. No issues.


I have a similar experience. Just thought it'd be cute to ask you both for sources. Interesting that asking you for sources got me upvoted, while asking the other guy for sources got me downvoted :)

Interesting question.

LLMs can be cajoled into producing algorithms.

In fact this is the Chain-of-Thought optimisation.

LLMs give better results when asked for a series of steps to produce a result than when just asked for the result.

To ask if LLMs “think” is an open question and requires a definition of thinking :-)


Like a bad coder with a great memory, yes

The problem is the word “producing” of the parent comment, where it should be “reproducing”.

Various programming paradigms (modular programming, object-oriented, functional, test-driven, etc.) have been developed to reduce precisely this cognitive load. The idea is that it is easier to reason about and solve problems that are broken down into smaller pieces.

But it's an incomplete revolution. If you look at the UML diagram of a fully developed application, it's a mess of interlocked pieces.

Things get particularly hard to reason about when you add concurrency.

One could hypothesize that programming languages that "help thinking" are more productive / popular but not sure how one would test it.


One of the things I find interesting about the limited general intelligence of language models (about which I tend to be pretty deflationary) is that my own thought processes during programming are almost entirely non-verbal. I would find it to be incredibly constraining to have to express all my intermediate thoughts in the form of text. It is one of the things that makes me think we need a leap or two before AGI.

Most software developer jobs are not really programming jobs. By that I mean the amount of code written is fairly insignificant to the amount of "other" work which is mainly integration/assembly/glue work, or testing/verification work.

There definitely are some jobs and some occasions where writing code is the dominant task, but in general the trend I've seen over the past 20 years I've been working has been more and more to "make these two things fit together properly" and "why aren't these two things fitting together properly?"

So in part I think we do a disservice to people entering this industry when we emphasize the "coding" or "programming" aspects in education and training. Programming language skills are great, but systems design and integration skills are just as important.


Writers get a bit dualistic on this topic. It's not "sitting in a hammock and dreaming up the entire project" vs. "hacking it out in a few sprints with no plan". You can hack on code in the morning, sit in a hammock that afternoon, and deliver production code the next day. It's not either-or. It's a positive feedback loop between thinking and doing that improves both.

In the 2020s, we still have software engineering managers that think of LOC as a success metric.

“How long would it take you to type the 6 hours' worth of diff?” is a great question to force the cognitively lazy software manager to figure out how naive that metric is.

Nowadays I feel great when my PRs have more lines removed than added. And I really question if the added code was worth the added value if it’s the opposite.


Conversely, how long would it take the average manager to re-utter any directions they gave the previous day?

The author listed a handful of the thinking aspects that take up the 11/12 non-motion work... but left out naming things! The amount of time spent in conversation about naming, or even renaming the things I've already named... there's even a name for the extreme case: bikeshedding. Sometimes I'll even be fixated on how to phrase the comments for a function, or reformat things so line lengths fit.

Programming is mostly communicating.


Yep, with seniority programming gradually goes from problem solving to product communication and solution proposition

Seniority in helping large organizations navigate software development, not seniority in actually building software.

I would absolutely agree, for any interesting programming problem. Certainly, the kind of programming I enjoy requires lots of thought and planning.

That said, don't underestimate how much boilerplate code is produced. Yet another webshop, yet another forum, yet another customization of that ERP or CRM system. Crank it out, fast and cheap.

Maybe that's the difference between "coding" and "programming"?


> Maybe that's the difference between "coding" and "programming"?

I know I'm not alone in using these terms to distinguish between each mode of my own work. There is overlap, but coding is typing, remembering names, syntax, etc. whereas programming is design or "mostly thinking".


I usually think of coding and programming as fairly interchangeable words (vs “developing”, which I think encapsulates both the design/thinking and typing/coding aspects of the process better)

Implementing known solutions is less thinking and more typing, but on the other hand it feels like CoPilot and so on is changing that. If you have something straightforward to build, you know the broad strokes of how it's going to come together, the actual output of the code is so greatly accelerated now that whatever thinking is left takes a proportionally higher chunk of time.

... and "whatever is left" is the thinking and planning side of things, which even in its diminished role in implementing a known solution, still comes into play every once in a while.


Agree and disagree. Certain programming domains and problems are mostly thinking. Bug fixing is often debugging, reading and comprehension rather than thinking. Shitting out CRUD interfaces after you've done it a few times is not really thinking.

Other posters have it right I think. Fluency with the requisite domains greatly reduces the thinking time of programming.


Debugging is not thinking? Reading, understanding and reasoning about why something is happening is THE THING thinking is about.

Fluency increases the speed in which you move to other subjects but does not reduce your thinking, you're going to more complex issues more often.


It's not just thinking though. You're not sitting at your desk quietly running simulations in your head, and if a non-programmer were watching you debug, you would look very busy.

I'd wager the more technically fluent people get the more they spend time on thinking about the bigger picture or the edge cases.

Bug fixing is probably one of the best examples: if you're already underwater, you'll want to band-aid a solution. But the faster you can implement a fix, the more leeway you'll have, and the more durable you'll try to make it, including trying to fix root causes, or prevent similar cases altogether.


Fluency in bug fixing looks like, "there was an unhandled concurrency error on write in the message importing service therefore I will implement a retry from the point of loading the message" and then you just do that. There are only a few appropriate ways to handle concurrency errors so once you have done it a few times, you are just picking the pattern that fits this particular case.

One might say, "yes but if you see so many concurrency related bugs, what is the root cause and why don't you do that?" And sometimes the answer is just, "I work on a codebase that is 20 years old with hundreds of services and each one needs to have appropriate error handling on a case by case basis to suit the specific service so the root cause fix is going and doing that 100 times."
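For what it's worth, the retry pattern described above reads roughly like the following Python sketch; everything in it (the exception type, load_message, save_message) is a stand-in for whatever the real service actually uses.

    # Hypothetical sketch of "retry from the point of loading the message".
    class ConcurrencyError(Exception):
        """Stand-in for whatever the real service raises on a conflicting write."""

    def import_message(message_id, load_message, save_message, max_attempts=3):
        for attempt in range(1, max_attempts + 1):
            message = load_message(message_id)   # reload fresh state on every try
            try:
                save_message(message)            # may raise if someone else wrote first
                return
            except ConcurrencyError:
                if attempt == max_attempts:
                    raise                        # give up after a few attempts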


It is an iterative process, unless “to code” is narrowly defined as “entering instructions” with a keyboard. Writing influences design and vice versa.

A good analogy that works for me is writing long-form content. You need a clear thought process and some idea of what you want to say before you start writing, but then thinking also gets refined as you write. This stretches further: an English lit major who specialises as a writer (journalist?) writing about a topic with notes from a collection of experts, and a scientist writing a paper, are two different activities. Most professional programming is of the former variety, admitting templates / standardisation. The latter case requires a lot more thinking work on the material before it gets written down.


I tend to spend a lot of time in REPL or throwaway unit tests, tinkering out solutions. It helps me think to try things out in practice, sometimes visualising with a diagramming language or canvas. Elixir is especially nice in this regard, the language is well designed for quick prototypes but also has industrial grade inspection and monitoring readily available.

Walks, hikes and strolls are also a good techniques for figuring things out, famously preferred by philosophers like Kierkegaard and Nietzsche.

Sometimes I've brought up with non-technical bosses how development is actually done; it didn't work, they just dug in about monitoring and control and whatnot. Working remote is the way to go, if one can't find good management to sell to.


"Weeks of coding can save you hours of planning." - Unknown.

"Everyone has a meal plan until they get a fruit punch in the face." -Tyson?

> If I give you the diff, how long will it take you to type the changes back into the code base and recover your six-hours' work?

The diff will help, but it'll be an order of magnitude faster to do it the second time, diff provided or not.

For the same reason.


At my previous job, I calculated that over the last year I worked there, I wrote 80 lines of non-test, production code. 80. About one line per 3-4 days of work. I think I could have retyped all the code I wrote that year in less than an hour.

The rest of the time? Spent in two daily stand up meetings [1], each at least 40 minutes long (and just shy of half of them lasted longer than three hours).

I should also say the code base was C, C++ and Lua, and had nothing to do with the web.

[1] Because my new manager hated the one daily standup with other teams, so he insisted on just our team having one.


Were the intense daily meetings any help? I can imagine that if there's a ticket to be solved, and I can talk about the problem for 40 minutes to more experienced coworkers, that actually speeds up the necessary dev time by quite a lot.

Of course, it will probably just devolve into a disengaged group of people that read emails or Slack in another window, so there's that.


80% of meetings are useless. Especially long ones.

Not really. It was mostly about tests. Officially, I was a developer for the team (longest one on the team). Unofficially I was QA, as our new manager shut out QA entirely [1] and I became the "go-to" person for tests. Never a question about how the system worked as a whole, just test test tests testing tests tests write the new tests did you write the new tests how do we run the tests aaaaaaaaaaah! Never mind that I thought I had a simple test harness set up, nope. They were completely baffled by the thought of automation it seems.

[1] "Because I don't want them to be biased by knowing the implementation when testing" but in reality, quality went to hell.


I liked "Code is just the residue of the work"

Who hasn't accidentally thrown away a day's worth of work with the wrong rm or git command? It is indeed significantly quicker to recreate a piece of work, and usually the code quality improves for me.

I’ve often found it alarming to see how much better the re-do is. I wonder whether I should re-write more code.

In the software engineering literature, there is something known as "second system effect": the second time a system is designed it will be bloated, over-engineered and ultimately fail, because people want to do it all better, too much so for anyone's good.

But it seems this is only true for complete system designs from scratch after a first system has already been deployed, not for the small "deleted some code and now I'm rewriting it quickly" incidents (for which there is no special term yet?).


I think this was the reasoning behind the adage "make it work, make it right, make it fast" (or something along those lines).

You'd do a fairly rough draft of the project first, just trying to make it do what you intend it to. Then you'd rewrite it so it works without glaring bugs or issues, then optimise it to make it better/more performant/more clearly organised after that.


Absolutely. In parallel with thinking LOC is a good metric comes "we have to reuse code", because lots of people think writing the code is very expensive. It is not!

writing it is not expensive. however, fixing the same bug in all the redundant reimplementations, adding the same feature to all of them, and keeping straight the minor differences between them, is expensive

Not only fixing the same bug twice, but also fixing bugs that happen because of using the same functionality in different places. For example, possible inconsistency that results from maintaining state in multiple different locations can be a nightmare, esp. in hard-to-debug systems like highly parallelized or distributed architecture.

I see "code that looks the same" being treated as "code that means the same" resulting in problems much more often than "code that means the same" being duplicated.

can you clarify your comment with some examples, because i'm not sure what you mean, even whether you intend to disagree with the parent comment or agree with it

Let's take pending orders and paid-for orders as an example.

For both, there is a function to remove an order line that looks the same. Many people would be tempted at that point to abstract over pending and paid orders so that both reference the same function, by adding a base class of order, for example, because the code looks the same.

But for a pending order, it means removing an item from the basket, while for the paid for order, it means removing an item due to unavailability. So the code means different things.

Let's then say the system has evolved further, so that now, on the paid-for order, some additional logic should be kicked off when an item is removed. If both pending and paid-for orders reference the same function, you have to add conditionals or something, while if each has its own, they can evolve independently.

And it definitely is a disagreement with the parent comment. Sorry for not elaborating on it in the first place.
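In code, the disagreement looks roughly like this Python sketch of the order example above (the refund step is an invented placeholder): the two removal functions start out textually identical, and keeping them separate is exactly what lets the paid-for branch grow its own logic later without conditionals.

    # Two functions that look the same today but mean different things,
    # so they are intentionally not merged behind a shared abstraction.
    def remove_line_from_pending(order, line_id):
        # Customer changed their mind: just drop the line from the basket.
        order["lines"] = [l for l in order["lines"] if l["id"] != line_id]

    def remove_line_from_paid(order, line_id):
        # Item unavailable after payment: drop the line AND, later, do
        # paid-order-only work, e.g. a refund.
        order["lines"] = [l for l in order["lines"] if l["id"] != line_id]
        # refund_line(order, line_id)   # hypothetical later addition, only needed here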


i see, thank you!

i've never lost work to a wrong git command because i know how to use `git reflog` and `gitk`. it's possible to lose work with git (by not checking it in, `rm -r`ing the work tree with the reflog in it, or having a catastrophic hardware failure) but it is rare enough i haven't had it happen yet

Yes, that can often result in a better-designed refactored version, since you can start with a fully-formed idea!

Not for ages, and definitely not since GitHub - just keep committing and pushing as a backup

This is literally impossible with GitHub Desktop and a functioning Recycle Bin

Famously described as the tale of the two programmers: the one who spends their time thinking will produce exponentially better work, though because good work looks obvious, they will often not receive credit.

Eg. https://realmensch.org/2017/08/25/the-parable-of-the-two-pro...


Funnily enough this happened to me.

Earlier in my career I had a very intense, productive working day and then blundered a rebase command, deleting all my data.

Rewriting took only about 20 minutes.

However, like an idiot, I deleted it again, in the exact same way!

This time I had the muscle memory for which files to open and where to edit, and the whole diff took about 5 minutes to re-add.

On the way out to the car park it really made me pause to wonder what on earth I had been doing all day.


In case you didn’t know, a botched rebase doesn’t delete the commits. You can use `git reflog` to find the commits you were recently on and recover your code.

Sometimes you really wonder where your time went. You can spend 1 hour writing a perfect function and then the rest of the day figuring out why your import works in dev and not in prod.

I also once butchered the result of 40 hours of work through a sloppy git history rewrite. I spent a good hour trying different recovery options (to no avail) and then 2 hours typing everything back in from memory. Maybe it turned out even better than before, because all kinds of debugging clutter were removed.


Sometimes, I spend an hour writing a perfect function, and then spend another hour re-reading the beauty of it, only to be told in the PR review how imperfect the function is :))

This is also relevant in the context of using LLMs to help you code.

The way I see it, programming is about “20% syntax and 80% wisdom”. At least as it stands today.

LLMs are good (perhaps even great) for the 20% that is syntax related but much less useful for the 80% that is wisdom related.

I can certainly see how those ratios might change over time as LLMs (or their progeny) get more and more capable. But for now, that ratio feels about right.


Sadly like half of that wisdom has to do with avoiding personal stress caused by business processes.

Well, exactly. That’s kinda my point. Programming is so much more than just writing code. And sadly, some of that other stuff involves dealing with humans and their manifest idiosyncrasies.
