tapresle

Is AI's "good enough" really good enough?

I'll be the first to admit that I've written a number of "good enough" PRs in my career. Hell, I probably wrote one today to get us through some problem that needed an immediate resolution, with a task to come back later and round out those edges. Recently, I've come across a few different posts about how Gen AI is solving problems "good enough" and how developers feel like they're losing their identity as problem solvers. I'd like to challenge that conclusion.

Personally, I'm very hesitant about "AI", or more specifically Generative AI / LLMs. At a glance, using Claude or whatever your flavor of the week is sounds great. In my personal experience, it can get to a solution, but usually not one that I'd consider well written, testable, and maintainable. For those who really want a quick fix, sure, whatever, if you can understand the code it spits out then go for it. Maybe this is what some mean by it being "good enough", and if there are edge cases that fall through and blow up, then who cares.

The reality I see is that most people using Gen AI this way don't really understand the code that's coming out and assume it works how they want it to work, which is a very dangerous place to be. Perhaps it's because of my background in both healthcare and fintech that this sends shivers down my spine, but it's just not acceptable to me. Similar to how I wouldn't take something off Stack Overflow and use it without understanding it, I can't take Gen AI at its word that it has done the thing I've asked.

I have nothing against using Gen AI as an assistant. Most days I use Claude to help me think through a problem within the context I supply, and I'll ask it if there's a better way to write a function, or have it spin out some unit tests for me. My issue comes when people use it to think for them, which seems to be a huge problem in the industry right now and, in my opinion, is quickly headed towards a cliff where we'll suddenly have no idea what our systems are doing.

I will say that most days, my "good enough" solutions aren't good enough unless they cover 90%+ of the problem set. If your "good enough" is 70%, then I feel bad for the customers who don't fall into that set. Granted, you do need customers for it to matter in the first place.

So what was the point of all of this? Like usual, I don't know. I had a thought and put it to paper. If you fall into the bucket of thinking that taking Gen AI at its word is good enough, I'd challenge you to reconsider. By all means, use the tools at your disposal, but make sure the output of your work is something that would have cleared the bar before this insanity infected everyone. If not for future you, then for the future developers who will eventually have to clean it up. Leave the campsite cleaner than you found it.

Thoughts? Leave a comment.