Building Something New

Time is flying, alright. Here we are two weeks into the new year.

Going back to work has, unsurprisingly, reclaimed quite a bit of my time. Work forges on, however, and I am eternally grateful for all that I was able to get accomplished over my winter break. It’s amazing, really, to look back and see just how much got done.

It was interesting also to look back through my journal and see that the work on this “2.0” version of The Vegan Gourmand really only began in the second week of November. I’d spent quite some time working on another project – one with a much longer finish line – and I was growing hungry for a quicker win; something I could potentially finish by the end of the year.

The idea to completely custom-engineer everything from the ground up was an audacious one. I think the decision to pivot and pursue it had less to do with the allure of getting it done by the end of the year, and more to do with the novelty and possibility of such a thing – period. In other words, I wanted something I could get done quicker, but when this idea came to mind, “quicker” was thrown out the window for “cooler”, if you will.

After having used a hodge-podge of a page builder, a heavily modified theme, extremely code-heavy pages, and a database plugin that is industry-leading yet so-so for my needs, the idea of being able to wave my AI wand and basically have a bespoke system created specifically for my needs felt almost too good to be true.

I can say it nearly was. I succeeded in what I set out to do, and it wasn’t by luck – this is a result that can be replicated. What I will say is that there is real work in coding with AI, and that work is essentially double-checking every last thing it does, because AI can hallucinate. It would be one thing if I actually knew how to code well and was using the AI just to do the heavy lifting, but when you understand all of this at a 5th-grade level at best, you can’t really know for certain.

What I ended up doing was creating a framework that was essentially a peer feedback loop. I did the main bit of coding in VSCode, with Claude Code planning and writing the code. The bulk of the initial codebase was written entirely by Claude, which did a fantastic job. As time went on, I began using ChatGPT as a peer consultant alongside Claude, essentially having it play the role of an architect and advisor while Claude remained the engineer. At times I would ask ChatGPT directly how to approach a project or solve a problem, but most of the time – since Claude had visibility into my entire codebase while ChatGPT did not – I would let Claude come up with the plans, then forward those plans to ChatGPT along with my original request and ask for its opinion.

What I found is that with two frontier systems – and eventually three, when I began employing Google Gemini to do some parallel review – something interesting happens: once a piece of work has passed through enough iterations, with enough different AI models and perspectives reviewing it, you begin to see them converge on a point of completion, expressing an increasing level of confidence in the work with excellent justifications.
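The iterate-until-convergence loop described above can be sketched in a few lines of Python. This is a toy illustration, not my actual setup – the reviewer functions are hypothetical stand-ins for real model calls, and the confidence scoring is invented for the example:

```python
# A minimal sketch of the multi-model peer-review loop, assuming each
# "reviewer" (in practice: a call out to a different AI model) returns a
# confidence score and some feedback on the current plan.

from typing import Callable

# A reviewer takes a plan and returns (confidence, feedback).
Reviewer = Callable[[str], tuple[float, str]]

def review_until_converged(plan: str,
                           reviewers: list[Reviewer],
                           threshold: float = 0.9,
                           max_rounds: int = 5) -> tuple[str, int]:
    """Pass the plan through every reviewer each round, folding feedback
    back in, until all reviewers report confidence above the threshold."""
    for round_num in range(1, max_rounds + 1):
        results = [review(plan) for review in reviewers]
        if all(conf >= threshold for conf, _ in results):
            return plan, round_num  # consensus reached
        # Fold each dissenting reviewer's feedback into the next iteration.
        for conf, feedback in results:
            if conf < threshold:
                plan += f"\n[revised per feedback: {feedback}]"
    return plan, max_rounds

# Toy reviewers whose confidence grows as the plan accumulates revisions.
def make_reviewer(name: str) -> Reviewer:
    def review(plan: str) -> tuple[float, str]:
        conf = min(1.0, 0.5 + 0.2 * plan.count("[revised"))
        return conf, f"{name}: tighten the caching section"
    return review

reviewers = [make_reviewer("architect"), make_reviewer("engineer")]
final_plan, rounds = review_until_converged("initial schema plan", reviewers)
```

The key design point is the stopping condition: the loop ends not when any single model approves, but when every reviewer independently clears the confidence bar – which is exactly the simultaneous convergence I was watching for.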

Is there a bit of blind faith in this? Absolutely. It’s cutting-edge stuff, and I think any time you work on that frontier, you do so on a leap of faith. But it works – the site is packed with all the functionality I need and none of what I don’t, which makes it extremely lightweight and incredibly efficient. What I love is how easy it is to envision a change I’d like to make – some new or improved interface I’d like to bring in – and there’s no fatigue, no extended review. Everything is done at breakneck speed, especially now that the tooling is moving toward increased agentic use, where it can in effect run multiple “agents” at once, each doing different things.

Is it perfect? No. Does it replace the need for humans? No, but I think it does materially change their role and focus – in a good way. It moves them further from the nitty-gritty of writing and debugging code, and deeper into the theoretical, creative, design-oriented, functionality-driven side of things, which I honestly find to be the more important and far more interesting side to be on. That creative side, coincidentally, is the one I think AI will have the greatest challenge ever growing into.

I say that because AI is great at doing things and at solving problems where well-understood, rule-based systems exist. But when it comes to novel creativity – literally exercising artistic vision, coming up with an idea out of thin air – it doesn’t do that, and may never. I tend to think those are the intrinsic guardrails that stop AI from becoming the thing people fear it becoming.

Enough on that subject, time to get back to work. More to share with you soon!
