dtagames a day ago

This is an excellent article but starts out with the false dichotomy of vibe vs craft programming.

An alternate term is starting to appear, "flow coding" or "agentic coding", where the tool is supervised and directed to code precise areas under human architectural control, with all the programmer understanding that entails.

The author hints at this idea toward the end and likens it to a skilled builder who uses tools to make a precise part.

  • CarlosBaquero 21 hours ago

    I do agree that in the end the most productive approach will be a mix: humans with AI support. My fear is that learning to program always with AI support will limit the quality of learning. Only time will tell.

  • isaacremuant 19 hours ago

    It's not an excellent article.

    It's basic and confuses using abstractions with not knowing fundamentals.

    It rehashes all the commonplaces of "AI will replace you if you don't think" and pretends you fundamentally don't reason or think if you interact with an abstraction or AI.

    The examples are so rudimentary that I very much suspect this person is not particularly good at software engineering.

    People will read this title and comment, because doom posting about AI is what people want to do when they're uncertain about "losing value", but it's very low quality.

  • xeonmc a day ago

    One could also categorize them as slop vs churn programming.

    • spacephysics 20 hours ago

      Honest question: do you think this will lead to more work for “hands-on” programmers to fix errors, or could this just be the early stages of agentic coding becoming better and more akin to the best (or close to it) “hands-on” programmer?

      Or a different outcome?

abbadadda 20 hours ago

The way to differentiate oneself, I think, is to know and understand the details and what is actually happening. Applications are messy whether they run on bare metal or in the cloud. If you truly understand what’s going on at a deeper level with the programming language, the compiler, and the system architecture (both software engineering architecture and system component architecture), you’ll be a far more valued programmer than one who is just “getting things to work.”

markus_zhang a day ago

This is an interesting observation.

The difference between code completion/IntelliSense and AI coding is that code completion merely gives you the name of a variable or a function, while AI coding thinks for you, which is really bad.

That's why I despise Copilot and other AI tools that are embedded in IDEs. I'm OK with ChatGPT because it is in a different window and I have to deliberately choose to use it -- and when I do, I try not to let ChatGPT think for me. Things like Copilot, on the other hand, try to get into my coding.

  • Terr_ 21 hours ago

    Right, there's a difference between replacing your memory versus replacing your goal-setting.

NotGMan a day ago

The problem is in the very act of creating a false dichotomy of "vibe coder" vs "craftsman": the two extremes exist only in marketing/stupid blogger mentality.

The future programmer will have to use both until true AGI is created.

You cannot prompt yourself out of difficult scenarios without understanding the underlying mechanisms of the problem/solution.

Trivial prompting, as in "it's not working for test X, fix it, it should be Y", currently works well only in simpler domains and cherry-picked examples.

This immediately becomes obvious in many gamedev scenarios where AI totally fails and starts to produce garbage for even trivial things, such as not being able to fix "Mouse Y should pitch camera up, not roll it".
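
To make the pitch-vs-roll confusion concrete, here's a minimal sketch (function names and sensitivity values are illustrative, not from any engine; a real camera would feed these angles into a rotation matrix or quaternion) of the mapping the commenter is describing: mouse X drives yaw, mouse Y drives pitch, and nothing touches roll.

```python
def update_camera(pitch_deg, yaw_deg, mouse_dx, mouse_dy, sensitivity=0.1):
    """Map mouse movement to a typical FPS-style camera orientation.

    Mouse X changes yaw (look left/right); mouse Y changes pitch (look
    up/down). The failure mode described above is feeding mouse_dy into
    roll (rotation about the view axis) instead of pitch.
    """
    yaw_deg += mouse_dx * sensitivity
    pitch_deg -= mouse_dy * sensitivity            # screen Y grows downward, so invert
    pitch_deg = max(-89.0, min(89.0, pitch_deg))   # clamp to avoid flipping over the pole
    return pitch_deg, yaw_deg

# Moving the mouse "up" (negative dy in screen coordinates) pitches the camera up:
pitch, yaw = update_camera(0.0, 0.0, mouse_dx=0.0, mouse_dy=-50.0)
print(pitch, yaw)  # 5.0 0.0
```

The sign flip and the clamp are exactly the kind of small, easy-to-state-in-English details that an LLM can still get wrong when it confuses which axis a rotation is about.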

I personally did find modern AI useful for some very obscure API cases for which there aren't many docs online. It didn't give me the correct solution, but merely creating non-working boilerplate was enough to give me the insight for how to fill in the blanks.

  • ncr100 a day ago

    I don't disagree.

    And I'm skeptical AGI will usher in human-readable, maintainable, and auditable agent-built code. I suppose we'll have to kindly ask the AGI for auditable code.

    • cma 20 hours ago

      It's already good at making human-written code readable -- not yet refactoring huge complex stuff, but helping you read and navigate it.

  • brazzy 21 hours ago

    The real question is: how will programmers who start out learning to program with AI doing all the straightforward, clearly delineated and specified tasks for them, be able to make the jump to steering and overseeing AI when it struggles with more complex tasks?

    • CarlosBaquero 21 hours ago

      Exactly, that is the million-dollar question.

johnea 19 hours ago

The reference to Formula 1 not leading to driving decline is somewhat paradoxical.

Formula 1 has a lot of rules regulating what can and can't be done to change the cars. They basically enforce the traditional driving model.

Given this, I almost think this is an example of the opposite of what the author intends it to be. The reason Formula 1 hasn't led to a reduction in driving skill is because they don't allow changes to be made to the driver interface to the car.

ilaksh a day ago

..and the last solo writers, and the last solo accountants, and the last solo medical insurance evaluators, math tutors, etc.

I am building a medical claim evaluation system (I have been clear that the humans need to do their own review afterward, though the UI does have "copy" buttons next to each field) because I needed the money and that was the best I could find on short notice. As it stands, the humans seem easily overwhelmed by the constant flood of requests, which all seem designed to require as little human judgement as possible. They do have actual doctors there, but in 95% of cases it's more of a legal exercise than the application of actual human judgement.

The humans in this case are actually finding and applying one or more of thousands and thousands of rules and predetermined decisions based on prior studies. There is an enormous effort in these guidelines to avoid as much actual judgement on the spot as possible by including criteria for what treatments have to have already occurred to unlock the next thing, etc. So it's clear to me that this entire industry would have already been 98% automated if they had LLMs before.

The next project I am going after is an instructional design project that generates practice standardized test questions for different grade levels. In a previous project I made an actual voice-to-voice AI tutor.

I think we are rapidly heading to a point, in the next 0-3 years, where if you want an affordable service, you will only be able to get an AI. Interacting with actual humans will be for wealthy people. The educational project, for instance, is in a very elite neighborhood, and yet from looking into it I see that parents complain about how expensive the tutoring services are -- which is surely the next project after the practice test generation.

Also, the last interview I had to try to get a freelance contract online, the potential client had pre-generated all of the requirements, architecture, and indeed entire project code using ChatGPT. The only thing preventing them from doing the entire thing was the complexity of deploying it and the fact that the future of their business depended on it being a robust solution.

But I am literally competing with the AI on architecture: to the degree that I deviate from what it suggested to this potential client (which is actually fine, it just uses the most popular approaches of the moment), I feel the client may question all such deviations.

And I tried to give a realistic but extremely tight timeline that would allow me time to actually write a lot of the code myself if I wanted. But this may have been a mistake because they will have dozens of other candidates pitching an estimate that assumes the use of AI with minimal human intervention.