London-based software development consultant

  • 414 Posts
  • 55 Comments
Joined 4 months ago
Cake day: September 29th, 2025


  • codeinabox (OP) to Programming • Code Is Cheap Now. Software Isn’t
    +12 · 2 days ago

    This article is quite interesting! There are a few standout quotes for me:

    On one hand, we are witnessing the true democratisation of software creation. The barrier to entry has effectively collapsed. For the first time, non-developers aren’t just consumers of software - they are the architects of their own tools.

    The democratisation effect is something I’ve been thinking about myself, as hiring developers or learning to code doesn’t come cheap. However, if it allows non-profits to turn their ideas into tools that can make our world a better place, then that is a good thing.

    We’re entering a new era of software development where the goal isn’t always longevity. For years, the industry has been obsessed with building “platforms” and “ecosystems,” but the tide is shifting toward something more ephemeral. We’re moving from SaaS to scratchpads.

    A lot of this new software isn’t meant to live forever. In fact, it’s the opposite. People are increasingly building tools to solve a single, specific problem exactly once—and then discarding them. It is software as a disposable utility, designed for the immediate “now” rather than the distant “later.”

    I’ve not thought about it in this way, but it is a really good point: when code is cheap, it becomes much easier to create bespoke, short-lived solutions.

    The real cost of software isn’t the initial write; it’s the maintenance, the edge cases, the mounting UX debt, and the complexities of data ownership. These “fast” solutions are brittle.

    Though, as much as these tools might democratise software development, the software they produce still requires engineering expertise to be sustainable.



  • codeinabox (OP) to Programming • LLMS Are Not Fun
    +67 · 15 days ago

    I use AI coding tools, and I often find them quite useful, but I completely agree with this statement:

    And if you think of LLMs as an extra teammate, there’s no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time.

    At first I found AI coding tools to be like a junior developer, in that they will keep trying to solve the problem and never give up or grow frustrated. However, I can’t teach an LLM: yes, I can give it guardrails and detailed prompts, but it can’t learn in the same way a teammate can. It will always require supervision and review of its output. A teammate, on the other hand, I can teach new or different ways of doing things, and over time their skills and knowledge will grow, as will my trust in them.


  • My understanding of how this relates to Jevons paradox: it was believed that advances in tooling would let companies lower their headcount, because developers would become more efficient, yet it has had the opposite effect:

    Every abstraction layer - from assembly to C to Python to frameworks to low-code - followed the same pattern. Each one was supposed to mean we’d need fewer developers. Each one instead enabled us to build more software.

    The meta-point here is that we keep making the same prediction error. Every time we make something more efficient, we predict it will mean less of that thing. But efficiency improvements don’t reduce demand - they reveal latent demand that was previously uneconomic to address. Coal. Computing. Cloud infrastructure. And now, knowledge work.



  • codeinabox (OP) to Programming • Party of One for Code Review!
    +4 / −3 · 19 days ago

    Kent Beck does mention CodeRabbit; however, he also highlights the benefits of pairing with humans, as he later goes on to say:

    It’s not pairing. Pairing is a conversation with someone who pushes back, who has their own ideas, who brings experience I don’t have. CodeRabbit is more like… a very thorough checklist that can read code.

    I’d rather be pairing.

    I miss the back-and-forth with another human who cares about the code. I miss being surprised by someone else’s solution. I miss the social pressure to explain my thinking out loud, which always makes the thinking better.



  • codeinabox (OP) to Programming • JustHTML: Addressing some questions
    +6 / −4 · 25 days ago

    Even when I share these articles in the AI community, they get voted down. 🫤 I know these articles aren’t popular, because there is quite a lot of prejudice against AI coding tools. However, I do find them interesting, which is why I share them.


  • codeinabox to Programming • LLM's hallucinating or taking our jobs?
    +21 · 1 month ago

    Based on my own experience of using Claude for AI coding and the Whisper model on my phone for dictation, AI tools can, for the most part, be very useful. Yet there are nearly always mistakes, even if they are quite minor at times, which is why I am sceptical of AI taking my job.

    Perhaps the biggest reason AI won’t take my job is that it has no accountability. For example, if an AI coding tool introduces a major bug into the codebase, I doubt you’d be able to hold OpenAI or Anthropic accountable. However, if you have a human developer supervising it, that person is very much accountable. This is something that Cory Doctorow talks about in his reverse-centaur article.

    “And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”

    This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.


  • codeinabox (OP) to Programming • The Bet On Juniors Just Got Better
    +7 / −1 · 1 month ago

    This really sums up Beck’s argument, that now is the perfect time to invest in junior developers, because AI allows them to learn and skill up faster:

    The juniors working this way compress their ramp dramatically. Tasks that used to take days take hours. Not because the AI does the work, but because the AI collapses the search space. Instead of spending three hours figuring out which API to use, they spend twenty minutes evaluating options the AI surfaced. The time freed this way isn’t invested in another unprofitable feature, though, it’s invested in learning.



  • codeinabox (OP) to Programming • AI Is still making code worse: A new CMU study confirms
    +34 / −1 · 1 month ago

    This quote from the article very much sums up my own experience of Claude:

    In my recent experience at least, these improvements mean you can generate good quality code, with the right guardrails in place. However without them (or when it ignores them, which is another matter) the output still trends towards the same issues: long functions, heavy nesting of conditional logic, unnecessary comments, repeated logic – code that is far more complex than it needs to be.

    AI coding tools are definitely helpful with boilerplate code, but they still require a lot of supervision. I am interested to see whether these tools can be used to tackle tech debt, as the argument for not addressing tech debt is often a lack of time, or whether they would just contribute to it, even with thorough instructions and guardrails.
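    To make the quote concrete, here is a purely hypothetical TypeScript sketch (my own illustration, not code from the study or the article): the first function shows the deeply nested conditional style the output tends towards; the second is the flatter guard-clause version I would expect after review.

      // Hypothetical example: the same logic written two ways.
      interface User { id: string; active: boolean; email?: string }

      // Deeply nested conditionals, the shape the article says AI output trends towards.
      function notifyNested(user: User | null, message: string): string {
        if (user !== null) {
          if (user.active) {
            if (user.email !== undefined) {
              return `Sent "${message}" to ${user.email}`;
            } else {
              return "No email on record";
            }
          } else {
            return "User is inactive";
          }
        } else {
          return "No user";
        }
      }

      // The same behaviour flattened with guard clauses.
      function notifyFlat(user: User | null, message: string): string {
        if (user === null) return "No user";
        if (!user.active) return "User is inactive";
        if (user.email === undefined) return "No email on record";
        return `Sent "${message}" to ${user.email}`;
      }

    Both versions behave the same; the second is simply easier to read and review, which is the kind of thing I still find myself correcting in generated code.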



  • codeinabox (OP) to Web Development • The Performance Inequality Gap, 2026
    +5 · 2 months ago

    This quote really sums up the situation:

    This is a technical and business challenge, but also an ethical crisis. Anyone who cares to look can see the tragic consequences for those who most need the help technology can offer. Meanwhile, the lies, half-truths, and excuses made by frontend’s influencer class in defence of these approaches are, if anything, getting worse.

    Through no action of their own, frontend developers have been blessed with more compute and bandwidth every year. Instead of converting that bounty into delightful experiences and positive business results, the dominant culture of frontend has leant into self-aggrandising narratives that venerate failure as success. The result is a web that increasingly punishes the poor for their bad luck while paying developers huge salaries to deliver business-undermining results.

    The developer community really needs to be building websites that work on all devices and connections, not just for those who can afford the latest hardware and high-speed internet.



  • codeinabox (OP) to Programming • Programming peaked
    +12 / −3 · 2 months ago

    The way the author described programming in 2025 did make me chuckle, and I do think he makes some excellent points in the process.

    It’s 2025. We write JavaScript with types now. It runs not just in a browser, but on Linux. It has a dependency manager, and in true JavaScript style, there’s a central repository which anyone can push anything to. Nowadays it’s mostly used to inject Bitcoin miners or ransomware onto unsuspecting servers, but you might find a useful utility to pad a string if you need it.

    In order to test our application, we build it regularly. On a modern computer, with approximately 16 cores, each running at 3 GHz, TypeScript only takes a few seconds to compile and run.


  • codeinabox to Programming • FAWK: LLMs can write a language interpreter
    +6 · 2 months ago

    As the author notes, it is very impressive what generative AI can produce these days.

    The frontier of what the LLMs can do has moved since the last time I tried to vibe-code something. I didn’t expect to have a working interpreter the same day I dreamt of a new programming language. It now seems possible.

    However, as they point out, there are definitely downsides to this approach.

    The downside of vibe coding the whole interpreter is that I have zero knowledge of the code. I only interacted with the agent by telling it to implement a thing and write tests for it, and I only really reviewed the tests. I reckon this would be an issue in the future when I want to manually make some change in the actual code, because I have no familiarity with it.



