March 31, 2026
Calculating Gemini CLI Token Costs for Agentic Vibe Coding
While parallelizing AI workflows with background agents is a massive productivity booster, this “fire and forget” vibe coding introduces a new challenge: keeping track of your LLM API costs. If you want to quickly convert your terminal token usage into actual dollars, I highly recommend using this Gemini CLI Cost Calculator.
Using the Gemini CLI, you get a transparent summary of your token usage at the end of every session:
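The conversion itself is simple arithmetic once you have the token counts. Here's a minimal sketch of the calculation the calculator performs; note that the per-million-token prices below are placeholders I made up for illustration, not actual Gemini pricing, so check the current price list before trusting the numbers:

```python
# Hedged sketch: converting a Gemini CLI token summary into dollars.
# The prices are hypothetical placeholders, NOT real Gemini pricing.

INPUT_PRICE_PER_M = 0.10   # assumed $/1M input tokens (placeholder)
OUTPUT_PRICE_PER_M = 0.40  # assumed $/1M output tokens (placeholder)

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one session, given the token counts the CLI reports."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a session that reported 250k input and 40k output tokens:
print(f"${session_cost(250_000, 40_000):.4f}")  # -> $0.0410 at these rates
```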
March 9, 2026
How I am prompting LLMs: Should you say Thank You? Please?
https://huggingface.co/blog/jdelavande/thank-you-energy is an interesting article.
What it doesn’t mention is the “exponential” cost of saying “Thank You” at the end of a long conversation: since each follow-up prompt must re-send the entire conversation, the real-world energy consumption is likely much higher than that of a “synthetic” Thank You on an empty context.
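To make that concrete, here is a small sketch of why the final “Thank You” costs far more than its two words. Each turn re-sends the whole history, so a trailing prompt pays for everything that came before it. (The per-turn token counts are made-up illustrative numbers.)

```python
# Hedged sketch: cumulative token cost of a stateless chat API,
# where every prompt re-sends the entire conversation history.
# Turn sizes are illustrative, not measured values.

def total_tokens_sent(turn_sizes):
    """Total tokens sent over the whole conversation: turn N costs
    the sum of all tokens up to and including turn N."""
    total = 0
    history = 0
    for size in turn_sizes:
        history += size        # new turn joins the context
        total += history       # full context is re-sent this turn
    return total

# Ten substantive turns of ~500 tokens each, then a 3-token "Thank You".
turns = [500] * 10 + [3]
extra = total_tokens_sent(turns) - total_tokens_sent(turns[:-1])
print(extra)  # -> 5003: the tiny "Thank You" re-sends all ~5000 prior tokens
```

So the longer the conversation, the more a polite sign-off costs, which is exactly the effect the linked article's empty-context measurement misses.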
Personally I’m currently typically prompting LLMs like this:
- I use “imperative” language (“do”, not “could you” nor “please”)
- I frequently create new sessions instead of never-ending long conversations (/clear in Gemini CLI)
- I don’t send any follow-up prompt when the task at hand is completed to my satisfaction
- On (pretty rare) occasions I still can’t quite avoid an “oh wow, you’re awesome” 😀
Sending a “Thank You” to an LLM as the last prompt to end a conversation does not seem like a good idea energy-wise.
February 24, 2026
From Prompt to Production: AI Vibe Coding Web Frontends by Chaining Google’s Stitch, AI Studio, and Antigravity
I recently sat down to find out hands-on just how easy it is in February 2026 to have an AI generate a well-designed, full-fledged, working HTML/CSS/JS front-end UI.
The Design Phase: Stitch
Starting with Google’s Stitch, I iterated on a few high-level graphical design ideas.
This feels similar to what you would have done with a human graphic designer, using tools like Figma, back in the pre-AI era.
February 5, 2026
Gemini Fixed Audio Bug
I’m experimenting with using the Gemini Live API, and have (obviously) “vibe coded” (parts of) what I’m doing.
It worked surprisingly well right away, but there was this annoying audio bug. (Signal Processing is not my forte.)
Then I had an idea for something which I didn’t really think would actually work, but hey, try it anyway, right?
I recorded a short audio clip of the problem, and uploaded it to Gemini, asking it for help. And oh boy, is this impressive or what: