April 5, 2026

AI gone loco
Today something funny happened. I was using Google Antigravity, running several conversations in parallel in its Agent Manager, and things went a bit loco…
To my prompt — “In ClasspathStore, ditch all .form support, like id.endsWith(".form"), by renaming any *.form.yaml to simply be *.yaml” — it responded with:
(Translated from the Japanese:)

Regarding your words “from the boundary” — are you hoping for a continuation or an explanation of some specific source code or text?

In my previous answer, I wrote code for Random Forest regression on the California housing price dataset using scikit-learn (with hyperparameter tuning via GridSearchCV).

If you have a request such as “explain starting from this part (the boundary)” or “continue writing a specific step”, could you let me know the details?

Below, I explain the key points of that code in Japanese:
- Loading the data: fetch_california_housing() fetches the dataset.
- Splitting: the data is split 8:2 into training and test sets.
- Grid search setup: candidate values are given for n_estimators (the number of trees), max_depth (the maximum depth), and min_samples_split (the minimum number of samples required for a split).
- Building the model: GridSearchCV searches for the optimal parameters using 3-fold cross-validation (cv=3) and parallel processing (n_jobs=-1).
- Evaluation: the best model predicts on the test data and the mean squared error (MSE) is computed.

Let me know if you need a continuation or any other changes.
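Incidentally, the scikit-learn workflow the off-topic reply summarizes (GridSearchCV over a RandomForestRegressor) is real. A minimal runnable sketch — with a synthetic make_regression dataset standing in for fetch_california_housing, and an illustrative parameter grid:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for fetch_california_housing(), to keep this runnable offline
X, y = make_regression(n_samples=200, n_features=8, noise=0.1, random_state=42)

# 8:2 train/test split, as in the reply's summary
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Illustrative candidate values for the three parameters the reply names
param_grid = {
    "n_estimators": [50, 100],     # number of trees
    "max_depth": [None, 10],       # maximum depth
    "min_samples_split": [2, 5],   # minimum samples required for a split
}

# 3-fold cross-validation (cv=3) with parallel workers (n_jobs=-1)
search = GridSearchCV(RandomForestRegressor(random_state=42), param_grid, cv=3, n_jobs=-1)
search.fit(X_train, y_train)

mse = mean_squared_error(y_test, search.predict(X_test))
print(search.best_params_, mse)
```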
Read more →
March 31, 2026

Calculating Gemini CLI Token Costs for Agentic Vibe Coding
While parallelizing AI workflows with background agents is a massive productivity booster, this “fire and forget” vibe coding introduces a new challenge: keeping track of your LLM API costs. If you want to quickly convert your terminal token usage into actual dollars, I highly recommend using this Gemini CLI Cost Calculator.
Using the Gemini CLI, you get a transparent summary of your token usage at the end of every session:
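Under the hood, such a calculator just multiplies token counts by per-million-token rates. A minimal sketch — the model name and rates below are illustrative assumptions, not current pricing:

```python
# Hypothetical per-1M-token rates in USD; check current Gemini pricing before relying on these.
RATES = {"gemini-2.5-pro": {"input": 1.25, "output": 10.00}}

def session_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Convert a session's token counts into an approximate dollar cost."""
    r = RATES[model]
    return (input_tokens / 1_000_000) * r["input"] + (output_tokens / 1_000_000) * r["output"]

print(f"${session_cost('gemini-2.5-pro', 250_000, 40_000):.4f}")  # e.g. $0.7125 at the rates above
```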
Read more →
March 30, 2026

Parallelizing Agentic Coding: Supercharging AI Workflows with Terminal Notifications
The real power of AI-assisted development isn’t just having an agent write code for you; it’s the ability to parallelize your workflow. When you assign a complex, multi-step refactoring task or a deep codebase investigation to a tool like the Gemini CLI, you shouldn’t just sit there watching the terminal output scroll by. You should be switching to another pane to write documentation, review PRs, or tackle another problem entirely while the agent grinds away in the background.
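One way to get pinged when a background agent finishes — a minimal Python sketch, assuming the standard notify-send (Linux) or osascript (macOS) utilities are available, with the terminal bell as a fallback:

```python
import shutil
import subprocess
import sys

def notify(message: str) -> None:
    """Ring the terminal bell, and send a desktop notification if a known tool is available."""
    sys.stdout.write("\a")  # terminal bell: works in most terminal emulators
    if shutil.which("notify-send"):  # Linux desktops
        subprocess.run(["notify-send", message])
    elif shutil.which("osascript"):  # macOS
        subprocess.run(["osascript", "-e", f'display notification "{message}"'])

# e.g. call this right after a long-running agent command returns:
notify("Gemini CLI agent finished")
```

A wrapper script could launch the agent, wait for it to exit, and then call notify(), so you are free to work in another pane in the meantime.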
Read more →
March 9, 2026

How I am prompting LLMs: Should you say Thank You? Please?
https://huggingface.co/blog/jdelavande/thank-you-energy is an interesting article.
What it doesn’t mention is the “exponential” cost of saying “Thank You” at the end of a long conversation: because each follow-up prompt must resend the entire conversation history, the real-world energy consumption is likely much higher than that of a “synthetic” Thank You sent on an empty context.
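The back-of-envelope arithmetic can be sketched like so (the per-turn token count is an illustrative assumption):

```python
def thank_you_input_tokens(history_turns: int, tokens_per_turn: int = 500) -> int:
    """Approximate input tokens consumed by one final "Thank You" prompt.

    A follow-up prompt resends the whole history, so a "Thank You" after N turns
    costs roughly (N + 1) turns' worth of input tokens, assuming every turn
    averages `tokens_per_turn` tokens.
    """
    return (history_turns + 1) * tokens_per_turn

print(thank_you_input_tokens(0))   # on an empty context: 500
print(thank_you_input_tokens(40))  # at the end of a long conversation: 20500
```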
Personally I’m currently typically prompting LLMs like this:
- I use “imperative” language (“do”, not “could you” or “please”)
- I frequently create new sessions instead of never-ending long conversations (/clear in Gemini CLI)
- I don’t send any follow-up prompt once the task at hand is completed to my satisfaction
- On (pretty rare) occasions I still can’t quite avoid an “oh wow, you’re awesome” 😀
Sending a “Thank You” to an LLM as the last prompt to end a conversation does not seem like a good idea energy-wise.
Read more →
March 6, 2026

How to log to Google Cloud Logging as JSON from Java with SLF4j
Add the logstash-logback-encoder dependency (https://github.com/logfellow/logstash-logback-encoder) and put this into your logback.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <!-- Use JSON format for scalable logging suitable for Google Cloud Logging -->
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <!-- logstash-logback-encoder writes timestamps in the default TimeZone of the JVM, but GCP wants UTC -->
      <timeZone>UTC</timeZone>
      <!-- Align field names with Google Cloud Structured Logging requirements;
           see https://docs.cloud.google.com/logging/docs/structured-logging,
           and https://docs.cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry. -->
      <fieldNames>
        <!-- GCP expects 'timestamp', logstash-logback-encoder defaults to '@timestamp' -->
        <timestamp>timestamp</timestamp>
        <!-- GCP expects 'severity', logstash-logback-encoder defaults to 'level' -->
        <level>severity</level>
        <!-- Ignore levelValue as severity is sufficient for GCP -->
        <levelValue>[ignore]</levelValue>
        <!-- Disable logstash-logback-encoder's '@version' field as GCP doesn't use it -->
        <version>[ignore]</version>
        <!-- GCP expects 'message' which is the default for logstash-logback-encoder -->
      </fieldNames>
    </encoder>
  </appender>
  <!-- Wrap STDOUT in AsyncDisruptorAppender for better performance, decoupling logging from I/O -->
  <appender name="ASYNC_STDOUT" class="net.logstash.logback.appender.LoggingEventAsyncDisruptorAppender">
    <appender-ref ref="STDOUT" />
  </appender>
  <!-- Suppress verbose internal logging from certain libraries if needed -->
  <!-- <logger name="org.apache" level="WARN" /> -->
  <root level="INFO">
    <appender-ref ref="ASYNC_STDOUT" />
  </root>
</configuration>
Alternatives:
Read more →
February 24, 2026

From Prompt to Production: AI Vibe Coding Web Frontends by Chaining Google’s Stitch, AI Studio, and Antigravity
I recently sat down to finally try out for myself just how easy it is in February 2026 to have an AI generate a well-designed, full-fledged, working HTML/CSS/JS front-end UI.
The Design Phase: Stitch
Starting with Google’s Stitch, I iterated on a few high-level graphical design ideas.
This feels similar to what you would have done with a human graphic designer, using tools like Figma, back in the pre-AI era.
Read more →
February 5, 2026

Gemini Fixed Audio Bug
I’m experimenting with using the Gemini Live API, and have (obviously) “vibe coded” (parts of) what I’m doing.
It worked surprisingly well right away, but there was this annoying audio bug. (Signal Processing is not my forte.)
Then I had an idea for something which I didn’t really think would actually work, but hey, try it anyway, right?
I recorded a short audio clip of the problem, and uploaded it to Gemini, asking it for help. And oh boy, is this impressive or what:
Read more →