Last updated: April 2026
This is a real log of what I shipped in one 24-hour window pairing with Claude Code, April 29-30, 2026: eighteen articles published to klymentiev.com, one keyword research pass via the DataForSEO API, site-wide theme fixes, an indexing pipeline, and one architectural CMS task. The point of writing this up is not "look how productive AI is" — the point is to document concretely what kind of work compresses dramatically when you pair correctly with an agent, and what does not.
This is part of the OpenClaw series. For the tooling decisions that made this run possible, see Claude Code + MCP Setup and Claude Code vs Cursor.
TL;DR — what shipped
- 18 articles published to klymentiev.com (5 Phase 1 brand-pillars + 6 re-optimized category-pillars + 7 Phase 2 brand-pillars)
- Keyword research pass via DataForSEO Labs API, 47 candidate keywords with volume + competition data
- Theme-level fixes (single H1 site-wide, `<time datetime>` for all dated content) — applied once, affects all 18 articles + the rest of the site
- Indexing pipeline — IndexNow pushes to Bing/Yandex for all URLs; dynamic sitemap.xml restored after fixing a stale committed file
- One CMS engine task created in the planner for nested-subcategory URL pattern support
- Fact-check pass — 5 material errors caught and patched (vendor numbers from knowledge cutoff)
Total addressable keyword volume went from approximately 50/month (the 6 articles I started with) to approximately 6,110/month (after Phase 1 + Phase 2). About 120x.
The actual log (timestamps approximate)
Day 1 — April 29 afternoon
~2pm — Sitemap diagnosis. Noticed only 3 of 6 grant articles in sitemap.xml. Spent 15 minutes tracing — the issue was a stale committed public/sitemap.xml shadowing the dynamic /sitemap.xml route in Bird CMS alpha.12. Removed the file, dynamic route now serves all 6. Created planner task for the engine-side nested-subcategory URL pattern as a separate request.
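The shape of the diagnosis, as a minimal sketch (the docroot layout and counts here are assumptions from memory):

```bash
# A static file committed to the docroot shadows the dynamic route at the same path.
curl -s https://klymentiev.com/sitemap.xml | grep -o '<loc>' | wc -l   # 3, expected 6
ls -l public/sitemap.xml                                               # stale committed file exists
git rm public/sitemap.xml
git commit -m "Remove stale sitemap.xml; let the dynamic route serve it"
curl -s https://klymentiev.com/sitemap.xml | grep -o '<loc>' | wc -l   # 6 after redeploy
```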
~3pm — Content strategy doc. Realized there was no site-level content-strategy document in DOCI. Pulled a 30-day Statio snapshot (151 visitors; LLM referrals above Google for the first time). Wrote the strategy doc with an Update log section. Saved to klymentiev-com/content-strategy-2026-04-29.md.
~4pm — IndexNow push. Pushed the existing 6 grant URLs to IndexNow (Bing, Yandex, generic). 200 OK from each. Shaved an estimated 24-72 hours off crawl latency.
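The push itself is one POST per endpoint. A minimal sketch of the protocol, assuming a key file already hosted at the site root (the key and URL list below are placeholders):

```bash
HOST="klymentiev.com"
KEY="replace-with-real-key"          # placeholder; the real key file lives at https://$HOST/$KEY.txt
URLS='"https://klymentiev.com/grants/example-article/"'   # placeholder URL list
for ENDPOINT in api.indexnow.org www.bing.com yandex.com; do
  curl -s -o /dev/null -w "${ENDPOINT}: %{http_code}\n" "https://${ENDPOINT}/indexnow" \
    -H 'Content-Type: application/json; charset=utf-8' \
    -d "{\"host\":\"${HOST}\",\"key\":\"${KEY}\",\"keyLocation\":\"https://${HOST}/${KEY}.txt\",\"urlList\":[${URLS}]}"
done
```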
Day 1 — April 29 evening
~6pm — Real keyword research. This was the inflection point. Discovered the DataForSEO + Serper + Moz API stack already configured at topic-wise.com/.env from earlier work. Ran:
- `find-competitors.php` — github awesome-list dominates, no single blog competitor (the slot is open but DA matters)
- `get-competitor-keywords.php` — for `lastweekinaws.com` and `thundercompute.com`. The latter is the closest thematic peer — `nvidia inception program` ranking pos 5, `h100 price` pos 9-10
- `check-positions.php` for klymentiev.com — 0/15 in top-100 for cluster keywords
- DataForSEO `keywords_data/google_ads/search_volume/live` — 47 candidate keywords with volumes (sketched below)
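The volume call is one live endpoint. A minimal sketch, assuming DataForSEO basic-auth credentials in environment variables; the keyword list is illustrative and location_code 2840 is the US:

```bash
curl -s "https://api.dataforseo.com/v3/keywords_data/google_ads/search_volume/live" \
  -u "${DATAFORSEO_LOGIN}:${DATAFORSEO_PASSWORD}" \
  -H 'Content-Type: application/json' \
  -d '[{"keywords":["microsoft for startups","free llm api","free aws credits"],"location_code":2840,"language_code":"en"}]' \
  | jq '.tasks[0].result[] | {keyword, search_volume, competition}'
```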
Key finding: the 6 articles I had were targeting near-zero-volume keywords ("free database credits"=0, "startup credits stacking"=0, "free api credits"=20). Real demand is brand-specific, not generic: "microsoft for startups"=1600, "free llm api"=1000, "free aws credits"=480. This reframed the entire plan.
~9pm — Strategy revision. Saved keyword research as DOCI doc. Created an objective in the planner — "Free Credits Cluster Traffic Engine" — Phase 1 with 11 tasks (5 new brand-pillars + 6 re-opts of existing articles).
Day 2 — April 30 morning
~9am-12pm — Phase 1 execution. All 11 tasks. Five new brand-pillars (free-llm-api, microsoft-for-startups, free-aws-credits, free-google-cloud-credits, nvidia-inception-program) and six re-optimizations of the existing category-pillars.
Pattern per article:
- Mark task in_progress in planner
- Write `meta.yaml` and `index.md` with the new AEO standard (comparison table, per-program H2 sections, FAQ block, inline JSON-LD for Article + FAQPage, cross-link silo of 4-7 related posts)
- `sudo mkdir` + `cp` + `chown 82:82` + `chmod 644` to install with correct permissions for the Bird CMS uid (sketched after this list)
- Verify the URL returns 200 and appears in the sitemap, push to IndexNow
- Write worklog to Mesh, attach to planner task, mark done
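The install-and-verify step, sketched. The destination path is an assumption about the Bird CMS content layout; uid/gid 82 is the Apache user inside the container:

```bash
SLUG="free-llm-api"
SRC="/tmp/articles/${SLUG}"                     # staged meta.yaml + index.md
DEST="/var/www/html/content/grants/${SLUG}"     # hypothetical docroot layout
sudo mkdir -p "$DEST"
sudo cp "${SRC}/meta.yaml" "${SRC}/index.md" "$DEST/"
sudo chown -R 82:82 "$DEST"
sudo chmod 644 "$DEST"/meta.yaml "$DEST"/index.md
curl -s -o /dev/null -w "%{http_code}\n" "https://klymentiev.com/grants/${SLUG}/"   # expect 200
curl -s https://klymentiev.com/sitemap.xml | grep -c "${SLUG}"                      # expect >= 1
```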
Each article was ~2,500-4,000 words. Re-opts preserved existing content and added structure on top.
~12pm — IndexNow pushes for all 11 URLs. 200 OK across the board.
Day 2 — April 30 afternoon
~1pm — Fact-check pass. WebSearch'd vendor sources, compared with what was written. Five material errors caught:
- AWS Activate validity was '2 years' in the article — actually 1 year, no extensions. Sweep across table + body + FAQ + JSON-LD.
- NVIDIA Inception discount was '25% off' — actually 30% off DGX Cloud, 4-node minimum, $75K minimum spend. Entry tier credits was 'Limited' — actually up to $100K DGX Cloud credits.
- Google for Startups Year 2 was 'additional $100K' — actually 20% discount up to $100K (discount on usage, not flat credit). Plus added Model Garden $10K + enhanced support $12K which were missing.
- Microsoft for Startups entry tier validity was '90-180 days' — actually 12 months for all tiers, no extensions. Split entry into Basic ($1K immediate) + Enhanced (up to $5K after business verification).
- Gemini API free tier was described as 'most generous' — actually reduced 50-80% in late-2025, now Flash-only at 10 RPM and 1,500 req/day. Cerebras now leads on raw daily volume.
All patches applied via sudo sed with verification grep after each. Re-pushed 6 affected URLs to IndexNow.
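One patch, sketched with the AWS Activate fix. The file path is hypothetical; the pattern that matters is the verification grep after every sed:

```bash
FILE="/var/www/html/content/grants/free-aws-credits/index.md"   # hypothetical path
sudo sed -i 's/2 years/1 year/g' "$FILE"    # one sweep covers table, body, FAQ, and inline JSON-LD
grep -c '2 years' "$FILE"                   # expect 0: the stale claim is gone
grep -c '1 year'  "$FILE"                   # confirm the corrected value landed
# then re-push the affected URL to IndexNow
```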
~3pm — Phase 2 launch. Seven more brand-pillars (google-for-startups, aws-activate-credits, free-azure-credits, free-vector-database-credits, openai-free-credits, claude-free-credits, gemini-free-credits). Same execution pattern as Phase 1.
~6pm — Theme-level fixes. T3-T4 from an earlier audit:
- Single H1: the personal theme rendered both the brand name and the article title as `<h1>`. Patched header.php to use a non-heading element for the brand. Verified rendered HTML — every article now has exactly one H1 (the article title). The home page got a screen-reader-only H1 to maintain semantic structure.
- `<time datetime>`: patched article.php to wrap dates in `<time datetime>` elements instead of plain text. Verified across 5 articles.
These are theme-level changes: applied once, they propagate to all 18 articles and every other dated post on the site. AEO-positive.
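Verifying the single-H1 invariant site-wide reduces to a loop over the sitemap. A sketch (it counts opening H1 tags per page, which is enough for a regression check):

```bash
curl -s https://klymentiev.com/sitemap.xml \
  | grep -o '<loc>[^<]*</loc>' | sed 's/<\/\?loc>//g' \
  | while read -r URL; do
      printf '%s %s\n' "$(curl -s "$URL" | grep -c '<h1')" "$URL"
    done | sort -rn | head   # any count above 1 is a regression
```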
~7pm — Three OpenClaw cluster posts. Cluster A (brand axis). Claude Code vs Cursor, Claude Code + MCP Setup, and this one.
What worked
Structural consistency at scale. Eighteen articles all share the same comparison-table format, FAQ pattern, JSON-LD schema, cross-link rhythm. Manually, this would have been brutal — by hour six I would have been making inconsistent decisions, drifting in voice, missing cross-links. With Claude Code holding the template in context, all 18 came out structurally identical. Editing voice on top of that is a separate, smaller pass.
Cross-linking silo. Each article has 4-7 internal links to related posts. The graph is coherent: category-pillar links down to brand-pillars, brand-pillars link up to category and across to siblings, hub links to everything. Manually building that link map for 18 articles would have been an entire day. With the agent, it was free — every new article checked which existing articles to mention and was checked back when later articles needed to reference it.
Project management loop. Every task: in_progress → write content → install → IndexNow → worklog → link to task → done. No copy-pasting between Claude Code and a project management UI. The planner is reachable through MCP, mesh worklogs are reachable through MCP, DOCI is reachable through CLI. The agent ran the loop autonomously per article. I just kept moving forward.
Fact-check loop. Five material errors caught in 30 minutes. Each one: WebSearch the vendor → compare with what was written → identify the diff → sed-patch with verification grep → re-push IndexNow. None of those steps individually take long; the win was running them as a tight loop with the agent doing the mechanical work and me checking the call.
What broke or needed intervention
DOCI CLI ate a file path as content. The first version of the strategy doc was saved as the literal string /tmp/klymentiev-content-strategy-2026-04-29.md — the CLI argument was meant as a path, but the API stored it verbatim as content. Caught when I tried to update the doc and saw the path string where the content should have been. Recovered by re-writing the doc inline. Twenty minutes lost.
Sed double-insert in home.php. Tried to insert a screen-reader-only H1 by matching `?>` at end of line. There were two such matches in home.php (one after the opening PHP block, one after an embedded PHP block in the middle). Result: H1 inserted twice. Caught by `grep -c '<h1'`. Removed the second occurrence with `sed '91d'`.
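The general lesson: when a pattern can match more than once, constrain sed to the first match. A GNU-sed sketch of what the insert should have looked like (the sr-only class and heading text are hypothetical):

```bash
grep -n '?>$' home.php    # two matches, so a global substitution would fire twice
# GNU sed: the 0,/pat/ address range applies the command to the first match only
sudo sed -i '0,/?>$/s//?>\n<h1 class="sr-only">Klymentiev<\/h1>/' home.php
grep -c '<h1' home.php    # verify exactly one insertion
```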
Stale knowledge cutoff numbers. The fact-check pass — five errors. None catastrophic, all caught before they propagated to a real reader. But this is a real failure mode of writing-from-knowledge: vendor numbers move, the agent does not know they moved, the reader does not know the agent does not know.
Permission gymnastics. The Bird CMS site files are owned by uid 82 (the Apache user inside the container). Claude Code was running as admin. Every file install needed sudo cp + chown 82:82 + chmod 644. Workable, but added friction to each article. Long-term fix: a deploy script that handles permission normalization automatically.
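The deploy script mentioned above is small. A sketch under the same uid-82 assumption, with all paths hypothetical:

```bash
#!/usr/bin/env bash
# deploy-article.sh: copy a staged article into the CMS docroot with normalized ownership
set -euo pipefail
SRC="$1"                                               # e.g. /tmp/articles/free-llm-api
DEST="/var/www/html/content/grants/$(basename "$SRC")" # hypothetical docroot layout
sudo mkdir -p "$DEST"
sudo cp "$SRC"/* "$DEST/"
sudo chown -R 82:82 "$DEST"                            # Apache uid inside the container
sudo find "$DEST" -type f -exec chmod 644 {} +
echo "deployed: $DEST"
```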
The leverage I actually got
I did not write less. The decisions about what to write, which keywords to target, what to cut, what was a brand article vs a traffic article — those were all me. Claude Code did not have judgment about my brand strategy or my EB-2 evidence priorities.
What I got was elimination of context switching. I never opened the planner UI. I never opened the file system browser. I never opened a Slack channel to post a status. The loop was: describe what I want → review the proposal → say go → review the result. Most "writing 18 articles" projects fail because of context-switching exhaustion. This one finished because there was no context switching.
The split, in honest accounting, was roughly 30% me / 70% Claude Code on time spent, but 80% me / 20% Claude Code on judgment exercised. The judgment is the actual value. The time is the leverage.
What I would not have done before this
I would not have run a real DataForSEO research pass before writing. I would have written from gut, would have shipped articles on dead keywords, would have wasted weeks. The cost of "go check the data" used to be high enough that I skipped it. With the agent and the API access, the cost dropped to ~30 minutes. That changed the decision.
I would not have committed to 18 articles in 24 hours. I would have done 2-3, called it done, and moved on. The structural-consistency win meant that articles 4-18 cost me roughly the same human-time as articles 1-3, because the template and the loop did the marginal work.
I would not have done the fact-check pass with this rigor. Manually spotting "AWS validity 2y is wrong, should be 1y" requires either remembering it (I didn't) or going to find it (I would not have). With the agent doing the mechanical "search vendor, compare with article, propose patch" — it became reasonable to do.
Frequently asked questions
What did you actually ship in 24 hours with Claude Code? Eighteen articles published to klymentiev.com (5 brand-pillar new + 6 re-optimized + 7 brand-pillar Phase 2), one full keyword research pass via DataForSEO API, theme-level fixes (single H1 + `<time datetime>` applied site-wide), one architectural planner task for a CMS engine improvement, and the indexing pipeline (IndexNow pushes to Bing/Yandex for all URLs). Total addressable keyword volume went from ~50/month to ~6,110/month.
Could you do this without Claude Code? Technically yes, in roughly 7-10 working days. The 24-hour compression came from three things: parallel writing (Claude Code drafted articles while I made high-level decisions), zero context-switching (planner, memory, DOCI, IndexNow all reachable through MCP without leaving the conversation), and fact-check loops that took minutes instead of hours (web search, vendor doc parse, sed-patch, redeploy).
What did Claude Code do well in this run? Three things stood out: (1) maintaining structural consistency across 18 articles — same comparison-table format, same FAQ pattern, same JSON-LD schema — which would have been brutal manually; (2) cross-linking the silo coherently (each article links 4-7 related ones, manually that takes hours of bookkeeping); (3) handling the planner+DOCI+mesh worklog loop autonomously after each article, so I never had to context-switch into project management.
What broke or required intervention? Three real pain points. (1) DOCI's CLI accepted a file path as content rather than reading the file — I lost the first version of the strategy doc and had to recreate it. (2) Some vendor numbers were stale from knowledge cutoff (AWS validity 2y vs 1y, NVIDIA 25% vs 30% discount) — caught in the fact-check pass. (3) sed-based theme patches duplicated the H1 insertion across home.php — needed a follow-up patch to remove the duplicate. None catastrophic, all caught and fixed within minutes.
What is the leverage you actually got? Not 'AI does the work for you' — that framing is wrong. The leverage is 'AI eliminates context switching and maintains structural consistency at scale.' I made every editorial decision: what to write about, which keywords to target, what to cut. Claude Code executed the structural work (file creation, cross-linking, AEO scaffolding, fact-checking, indexing). The split was roughly 30% me / 70% Claude Code on time, but 80% me / 20% Claude Code on judgment.
Related
- Claude Code vs Cursor — when each tool wins
- Claude Code + MCP Setup — the tooling that made this run possible
- Claw Code: Claude Source — original take on Claude's CLI surface
- Free Startup Credits 2026: Complete Guide — the synthesis hub of the 18 articles
Did you ship something with Claude Code in a tight window? Reply with what worked and what broke. I will fold the patterns into a follow-up.