Arnab Saha - Engineering Leader
Posts

Shipping on the Go: Something Shifted in 2026

February 17, 2026

10 min read

ai · productivity · engineering · workflow · building

I've been using AI coding tools for some time. Claude, Cursor, Copilot. The promise of "AI revolutionizing your workflow" has been around for a while. Meaningful improvements, sure. But not transformative.

Then something changed.

In the last few weeks, I've shipped more than in a long time. Not because I found more time. Not because the models got smarter. But because the relationship between having an idea and executing it fundamentally shifted.

Thought to execution. No barrier.

Let me show you what I mean.

What I Actually Built (In Days)

Here's what shipped recently, almost entirely on the go:

Recall: A Personal RAG System

  • 21,000+ vectors from 2,000 meeting transcripts
  • Hybrid search (BM25 + vector embeddings)
  • GPU offload skill that reduced indexing from 20 hours to 5 minutes
  • Full UI with search, browse, and 1:1 meeting prep
  • Temporal search ("meetings from last week")
  • Deployed on my home k8s cluster
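The hybrid search above blends two signals: BM25 keyword scores and embedding-similarity scores. A minimal sketch of the idea, normalizing both score sets before a weighted sum (the 0.5 weight and min-max normalization are my assumptions for illustration, not necessarily what Recall does):

```python
def minmax(scores):
    """Normalize a {doc_id: score} map to [0, 1] so the two
    signals are comparable before blending."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {k: 1.0 for k in scores}
    return {k: (v - lo) / (hi - lo) for k, v in scores.items()}

def hybrid_rank(bm25_scores, vector_scores, alpha=0.5):
    """Blend keyword (BM25) and embedding-similarity scores.
    alpha weights the vector side; 0.5 is an arbitrary default.
    Returns doc ids, best match first."""
    b, v = minmax(bm25_scores), minmax(vector_scores)
    docs = set(b) | set(v)
    blended = {d: (1 - alpha) * b.get(d, 0.0) + alpha * v.get(d, 0.0)
               for d in docs}
    return sorted(blended, key=blended.get, reverse=True)
```

Documents that score well on either signal surface near the top, which is why hybrid search tends to beat pure keyword or pure vector retrieval on meeting transcripts full of names and jargon.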

The GPU offload was a fun one. Built a reusable skill that wakes my GPU machine via Wake-on-LAN, offloads embedding work, then auto-shuts it down. Saves power, keeps the NAS cool.
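The wake step is the simple part: a Wake-on-LAN magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast. A sketch of that step only (the MAC, broadcast address, and port are placeholders; the actual skill also does the offload and auto-shutdown, which aren't shown):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF
    followed by the 6-byte MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9):
    """Send the packet as a UDP broadcast (port 9 is the
    conventional discard port for WoL)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```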

Vault: Multi-Market Portfolio Tracker

  • TSX, NYSE, NASDAQ, NSE support
  • Automatic currency conversion (CAD/USD/INR)
  • Navy/Gold dark theme
  • Kite & Groww import for Indian markets
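The currency conversion boils down to normalizing every position into one base currency before summing. A sketch, assuming CAD as the base (the rates here are hard-coded placeholders; a real tracker would pull live FX rates from an API):

```python
# Placeholder FX rates to CAD -- illustrative values only.
RATES_TO_CAD = {"CAD": 1.0, "USD": 1.36, "INR": 0.016}

def portfolio_value_cad(holdings):
    """holdings: list of (quantity, price, currency) tuples,
    e.g. a NYSE position priced in USD or an NSE one in INR.
    Returns the total portfolio value expressed in CAD."""
    total = 0.0
    for qty, price, ccy in holdings:
        total += qty * price * RATES_TO_CAD[ccy]
    return total
```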

Linux-Whisper: Voice-to-Text for Linux

Wispr Flow is great on macOS, but Linux had no equivalent. Started building this with my AI assistant, then finished the polish on my Linux machine. Now I can voice-dictate anywhere, including from the terminal.

This project is what unlocked the "walk and ship" workflow. When typing isn't required, the barrier drops dramatically.

This isn't a flex. It's evidence of something real.

The Paradigm Shift

What changed wasn't the AI model. It was three things combining:

1. Local-First Memory

Every conversation, every decision, every project note lives in my workspace. Not scattered across ChatGPT threads that I'll never find again. When I wake up the next morning, my AI assistant reads the memory files and picks up exactly where we left off.

/home/Arnab/clawd/
├── MEMORY.md           # Long-term context
├── memory/             # Daily work logs
│   ├── 2026-02-14.md
│   ├── 2026-02-13.md
│   └── ...
├── projects/           # Everything I'm building
└── skills/             # Reusable capabilities
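The "picks up where we left off" behavior falls out of a simple convention: at session start, load the long-term memory file plus the most recent daily logs. A sketch of that loader under my assumptions about the layout above (not OpenClaw's actual implementation):

```python
from pathlib import Path

def load_context(workspace: Path, days: int = 3) -> str:
    """Concatenate long-term memory plus the most recent daily
    logs, newest first, so an assistant can rehydrate context
    at the start of a session."""
    parts = []
    long_term = workspace / "MEMORY.md"
    if long_term.exists():
        parts.append(long_term.read_text())
    # Daily logs are named YYYY-MM-DD.md, so lexical sort == date sort.
    daily = sorted((workspace / "memory").glob("*.md"), reverse=True)
    for log in daily[:days]:
        parts.append(log.read_text())
    return "\n\n---\n\n".join(parts)
```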

This isn't just persistence. It's ownership. My knowledge graph. My decisions. My preferences. Not locked in someone else's cloud.

2. Agentic Execution

Context is essential, but it's only half the equation. The other half is access — giving the agent the right tools to act on that context. File system access. Shell commands. APIs. Deployment pipelines.

The AI doesn't just suggest code. It runs commands. Deploys containers. Fixes bugs in production. Creates PRs.

Me: "The blog thumbnail is broken"

Agent: Checking image path...
       File exists but wasn't in Docker build (layer caching)
       Rebuilding with --no-cache...
       Pushing to registry...
       Rolling out deployment...
       ✅ Fixed. Hard refresh to see it.

The loop from "something's wrong" to "it's fixed in production" happened in one conversation. No context switching. No "I'll do it when I'm at my desk."

3. Voice as Input

I built Linux-Whisper because Wispr Flow doesn't support Linux natively. Now any device becomes an input surface.

Walking the dog → voice message → code change deployed.

The friction of typing, especially on mobile, was the last barrier. Remove that, and ideas become actions in real-time.

What This Actually Looks Like

Monday, 11 PM

Lying in bed, I remembered a bug in the daily sync script and sent a voice message:

"The daily sync should check if there are new files before waking the GPU. No point booting a 1000W machine for nothing."

Woke up to a commit with exactly that logic.

Wednesday, Walking to Lunch

Noticed the Grafana dashboard wasn't showing Recall health metrics. Debugged via text:

Me: "Recall Grafana shows no data for health panels"

Agent: Checking metrics endpoint...
       Found it - metrics use recall_ prefix but 
       dashboard queries noterag_
       Also: health metrics only update on /health call,
       not /metrics scrape
       
       Fixing both...
       ✅ Deployed. Should show data on next scrape.

Friday, 6 AM (Couldn't Sleep)

Had an idea for automating blog-post ideation. By 7 AM, there was a new skill that scans my memory files, analyzes project activity, and generates prioritized post ideas. This post came from that system.

The Mental Model Shift

Before: I have an idea → Add to todo list → Wait for "coding time" → Context-switch into it → Maybe do it

Now: I have an idea → Say it → It happens

The profound part isn't the execution. It's what this does to your creative rhythm.

When there's no friction between thought and action, you stop self-censoring ideas. "That's too small to bother with" becomes "let me just do it now." "I should look into that" becomes "let's check."

You stop accumulating mental debt.

What Doesn't Work

Let me be honest about the limits.

Deep Architecture Work

If I need to think through a complex system design, I still want a whiteboard or a quiet hour at my desk. The AI can help, but some thinking needs space.

Sensitive Operations

I don't do production database changes from my phone. Some things deserve focused attention.

Long Debugging Sessions

If something is deeply broken and needs hours of investigation, mobile isn't the right medium. The context window is too constrained.

The 80/20 rule: ~80% of my building work can happen on the go. The 20% that can't is usually the deep work that deserves a real setup anyway.

Getting Started

If you want to try this:

1. Establish Persistent Memory

Use a system where the AI can read and write files. ChatGPT won't cut it. You need Claude with file access, Cursor with a workspace, or a self-hosted setup like mine.

2. Codify Your Workflows

If the AI can run ./deploy.sh, it can deploy for you. Start by turning your common tasks into scripts.
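The point is a single entry point the agent can invoke. A sketch of what such a wrapper might look like, with a dry-run mode for building trust first (the image name, registry, and deployment name are hypothetical; substitute your own):

```python
import subprocess

# Hypothetical build-push-rollout pipeline -- replace with your commands.
STEPS = [
    ["docker", "build", "-t", "registry.local/blog:latest", "."],
    ["docker", "push", "registry.local/blog:latest"],
    ["kubectl", "rollout", "restart", "deployment/blog"],
]

def deploy(dry_run: bool = False):
    """Run each step in order, stopping on the first failure.
    With dry_run=True, return the commands without running them."""
    if dry_run:
        return [" ".join(step) for step in STEPS]
    for step in STEPS:
        subprocess.run(step, check=True)  # raises CalledProcessError on failure
    return []
```

Once your deploy is one command, "fix it and ship it" becomes something the agent can complete end to end, which is exactly what the Docker-rebuild exchange earlier shows.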

3. Trust, Then Verify

Start with staging environments. Let the AI deploy to local/dev first. Build confidence before touching production.

4. Add Voice

The moment typing isn't required, the barrier drops dramatically. Use whatever transcription works on your platform.

The Deeper Point

This isn't about productivity hacks or building faster.

It's about removing the gap between who you are and what you create.

I used to have a backlog of ideas I'd "get to eventually." Most died there. Now, ideas become experiments within minutes. Some fail fast. Some become real projects. But the graveyard of "things I meant to build" is empty.

Thought to execution. No barrier.

That's the shift.


Built with OpenClaw on a NAS. The memory is mine. The projects are real.