My First Time Using AI for Coding
I finally got around to giving AI coding assistance a try. Jotting down my thoughts for future reference.
In No Particular Order
- I wasn't using it to vibe code (where I ask it to generate a bunch of code for me). I was asking it for details on how to do specific things I was trying to accomplish.
- I've tried it five times now. Most of what I got back either wasn't a valid solution or didn't work at all.
- Twice I asked it something that's not possible to do. The questions weren't a trap. I thought something would work and prompted the AI four times trying to get valid code. Each time it produced code it claimed would work.
After the fourth, I quit using it and started digging through docs. That's where I learned that what I was trying to do simply can't be done. At no point did the AI recognize this.
The current implementation seems programmed to always give you a way to do something, even if the task is impossible.
That sucks when you're trying to code. It's terrifying to think about in other domains.
- The thing that was valuable is that it showed me some new functions that I hadn't seen before. The specific code it gave me didn't work, but I was able to find the functions in the documentation and figure out how to use them from there.
This was in a language I don't use much so it likely would have taken me a lot longer to find those functions.
- I understand the appeal of the interactions. The confident wording in the answers makes it feel like you're getting an answer from an expert who's dealt with the same issue before and has the solution. That is, of course, not the case. As the Google introduction says, it's basically just really advanced autocomplete.
- I quickly found myself on guard against the possibility the answers were wrong. It's a stronger version of the skepticism I bring to Stack Overflow answers or blog posts I know were written by a human. In those cases, the chances of getting completely non-working code are dramatically lower.
- Some of the interactions made me feel stupid. The machine was telling me something would work and then when I tried it, it didn't. It made me feel like I was fucking something up, and that I wasn't smart enough to figure out what I was doing wrong. That feeling sucks.
- There's such a huge amount of content in the models that I expect they generate valid code in lots of cases. No idea what the percentages are.
- I could see the responses getting way better over time. I could also see them not getting any better than they are now. It reminds me of lossless audio compression. At a certain point, you can't compress things any further without losing quality. But, it's the inverse angle here. At what point will the models hit a threshold they can't improve past?
- I have no desire to do the vibe coding thing. First off, given how much wrong code I got, I have very little confidence in it. I know other folks are having good luck, but that hasn't been my experience at all.
More importantly, I want to make things. I used to do photography for a living. Vibe coding feels like the difference between taking an image myself vs. getting it from someone (or something) else. If the goal is merely to have a photo exist in the world, that might be fine. But, it won't be my photo.
That's a big part of the reason I make things. I have visions for things I want to see in the world and I'm driven to make them.
My visions are super lo-fidelity to start with. They're vague directions with high-level thoughts. Without going through the process of building, I wouldn't end up at the same place. The editing and refinement are key to both my experience and the final product.
Where I'm At
The appeal is there. The results are not.
If the results were valid (and it told me when the things I was asking for weren't possible), the AI tools would be front of mind.
As it is, they aren't there for the work I've been doing. I'll keep an eye on them, poking at them from time to time to see how things progress.
Endnotes
What I'd really like to see is links to the code the AI used to generate its answers. That would be very valuable.
I was using the "Claude Haiku 3.5" AI model. I've heard several coders mention it. (I've since been told that using "Claude Sonnet (4 or 4.5)" might yield better results, but those cost money, which I'm not willing to get into at this point.)
I'm not getting into the environmental impact that the data centers that run the AI bots create here. That is, without a doubt, a huge concern.
Here's the Google video talking about AI (via Large Language Models) being advanced autocomplete.
I wonder how long this stuff will be free to use. It certainly isn't free to run. All indications are that huge amounts of money are going into the computers that run the AI models. At some point, the investors who are currently pouring money in are going to want to see a return on their investment. The path looks like a direct line to enshittification.
This post also doesn't get into all the privacy and data collection concerns. There's no way prompts won't end up being used for marketing and ads, at best.
One of the things I was trying to do is to use an Intersection Observer on a web page to trigger when an element in one subtree of the DOM overlapped an element from another subtree. That simply can't be done. The elements have to be in the same subtree. That didn't stop the AI from telling me four different failing ways to do it.
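To sketch the constraint I ran into: an IntersectionObserver's root only reports intersections for targets inside its own subtree, which is why observing across sibling subtrees can't work. The tiny tree helper below is my own stand-in so the idea runs outside a browser; in real DOM code, `Node.contains()` answers the same "is the target in the root's subtree?" question.

```javascript
// The spec-level rule: intersections are only computed when the observed
// target is a descendant of the observer's root element.
function wouldBeObserved(root, target) {
  return root.contains(target);
}

// Minimal stand-in for DOM nodes (mimics Node.contains, which includes
// the node itself) so this runs outside a browser.
function makeNode(children = []) {
  const node = { children };
  node.contains = (other) =>
    other === node || node.children.some((child) => child.contains(other));
  return node;
}

// Two sibling subtrees under a shared parent:
const target = makeNode();
const subtreeA = makeNode([target]); // contains the target
const subtreeB = makeNode();         // the sibling subtree
const page = makeNode([subtreeA, subtreeB]);

console.log(wouldBeObserved(subtreeA, target)); // true: root is an ancestor
console.log(wouldBeObserved(subtreeB, target)); // false: sibling subtree
console.log(wouldBeObserved(page, target));     // true: a common ancestor works
```

In the browser, `new IntersectionObserver(cb, { root: subtreeB })` would accept the element without complaint, which is presumably part of why the AI kept producing code that looked plausible but never fired.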
Swift is the language I'm not as familiar with. I also find the language's documentation hard to navigate and parse. That was the first thing that got me to give the AI a try.