How we'd build the next Linear (a thought experiment)
Not a pitch. A teardown of what makes Linear feel different, and what we'd steal if we shipped a competitor tomorrow.
A thought experiment
This isn't a pitch. We're not building a Linear competitor and neither should you (more on that at the end). But the question "what makes Linear feel different, and what would it take to ship the same *feel* as a v1" is genuinely useful. Especially if you're building any tool where speed and craft are part of the product.
This post is the architecture we'd reach for, if we did.
What makes Linear feel different
It isn't features. Jira, Asana, ClickUp, and a dozen others have broadly equivalent feature sets. Linear feels different because of *responsiveness*: every interaction is sub-100ms, every keyboard shortcut works, the data updates in real time, and the UI never makes you wait.
That's not a UI library decision. It's a system architecture decision, made consistently from the database to the keystroke.
The stack
For a modern Linear-shaped MVP, we'd pick:
- Frontend: React (or Solid, if we wanted to push perf), with a client-side data layer that holds the entire workspace's open state in memory.
- Backend: A single typed RPC layer (tRPC or a custom typed GraphQL) backed by Postgres. No microservices in v1; one service, one database.
- Data layer: Postgres for primary storage, Redis for ephemeral state (presence, online users, transient locks).
- Realtime: WebSocket or Server-Sent Events from the backend, delivering deltas (not refetches) to clients.
- Hosting: Cloudflare or Fly.io edge for the API, regional Postgres with read replicas, single-region writes for v1.
Nothing exotic. The hard part isn't the stack; it's how the layers talk to each other.
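The typed-RPC idea is easy to sketch without committing to a library. Below is a dependency-free, tRPC-flavoured illustration; the `Issue` shape and procedure names are invented, and a real build would use tRPC (or similar) directly:

```typescript
// A single map from procedure name to input/output types. Client and
// server both compile against it, so they can never drift apart.
type Issue = { id: string; title: string; status: "todo" | "doing" | "done" };

type Procedures = {
  "issue.get": { input: { id: string }; output: Issue };
  "issue.setStatus": {
    input: { id: string; status: Issue["status"] };
    output: Issue;
  };
};

// One generic call signature; the compiler checks the input and output
// of every procedure at the call site.
async function rpc<P extends keyof Procedures>(
  proc: P,
  input: Procedures[P]["input"],
): Promise<Procedures[P]["output"]> {
  // A real implementation would POST `proc` + `input` to the one
  // backend service; stubbed here so the sketch runs standalone.
  const stub: Issue = {
    id: (input as { id: string }).id,
    title: `stub for ${proc}`,
    status: "todo",
  };
  return stub as Procedures[P]["output"];
}
```

The payoff is compile-time safety: `rpc("issue.setStatus", { id: "x" })` fails to type-check because `status` is missing, with no codegen step in between.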
The data model: offline-first sync
The single most important architectural decision is *where the authoritative state lives during a user interaction*.
The naive approach (and the one most apps use) is: user clicks something, we send a request to the server, server updates the database, server sends back the new state, UI updates. End-to-end latency is the network round-trip + server work + render. Even on a fast connection, that's 100-300ms. It feels slow because it is.
Linear's approach (broadly): the client holds an authoritative-enough view of the workspace locally. User clicks something, the client applies the change immediately to its local state, the UI updates in under 16ms (one frame), and the change is queued for the server in the background. The server confirms (or rejects) asynchronously.
This is "offline-first" or "local-first" architecture. The trade-off is complexity: you now have two sources of truth (client and server) that need to converge.
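The client side of that loop is small enough to sketch. Everything here (`LocalStore`, `PendingOp`) is illustrative, not Linear's actual code; the point is that the mutation is synchronous and the network is entirely off the hot path:

```typescript
type Status = "todo" | "doing" | "done";
type PendingOp = { issueId: string; field: "status"; value: Status };

class LocalStore {
  private issues = new Map<string, { status: Status }>();
  private pending: PendingOp[] = [];

  load(id: string, status: Status) {
    this.issues.set(id, { status });
  }

  // Called on user interaction: mutate local state synchronously
  // (the UI re-renders within a frame), queue the op for the server.
  setStatus(issueId: string, value: Status) {
    const issue = this.issues.get(issueId);
    if (!issue) return;
    issue.status = value; // instant, no await
    this.pending.push({ issueId, field: "status", value });
  }

  // Server confirmed the first `count` queued ops: drop them.
  ack(count: number) {
    this.pending.splice(0, count);
  }

  get(id: string) {
    return this.issues.get(id);
  }
  pendingCount() {
    return this.pending.length;
  }
}
```

A real store would also handle rejection (roll the field back to the server's value), but the shape is the same: local state first, queue second, network third.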
Conflict resolution
Two approaches:
- CRDTs (Conflict-free Replicated Data Types). Mathematically guaranteed convergence, with no central coordinator needed. Libraries like Yjs and Automerge are production-quality. Trade-off: document/object sizes grow over time as edit history accumulates, and the data model is constrained.
- Operational Transform / Server-coordinated. Client sends operations, server is the arbiter, server broadcasts canonical ordering. Simpler data model, requires a server in the loop.
For a Linear-shaped tool (issues, comments, status), server-coordinated ops are usually enough. CRDTs are overkill unless you're building truly collaborative documents (like Figma or Notion). For our v1, we'd go server-coordinated with a versioned op log.
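A server-coordinated versioned op log is also small enough to sketch. This is an in-memory illustration of the arbiter role with made-up types; a real one would persist to Postgres:

```typescript
// The server assigns each accepted op a monotonically increasing
// version per workspace; that version order *is* the canonical order.
type Op = { entity: string; field: string; value: unknown };
type VersionedOp = Op & { version: number };

class OpLog {
  private log: VersionedOp[] = [];
  private version = 0;

  // Single arbiter: ops are ordered by arrival at the server.
  append(op: Op): VersionedOp {
    const versioned = { ...op, version: ++this.version };
    this.log.push(versioned);
    return versioned;
  }

  // Backfill for a client reconnecting at an older version.
  since(version: number): VersionedOp[] {
    return this.log.filter((o) => o.version > version);
  }
}
```

The `since(version)` query is what makes reconnects cheap: a client only needs to remember the last version it applied, not its whole state.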
Realtime updates
The pattern: server publishes a change to a per-workspace channel, clients with that workspace open consume the delta and apply it to their local state.
Implementation:
- WebSocket connection per client, multiplexed by workspace ID.
- Server publishes change events to Redis pub/sub, which fans out to the WebSocket connections for that workspace.
- Each event carries the minimum delta (changed fields), not the full resource. Reduces bandwidth and apply-cost.
- Client merges the delta into its local cache, triggers a UI update for the affected component(s).
The hardest part is *not* implementation — it's the protocol design. What's the smallest set of event types? How do you handle a client that missed events while disconnected? How do you back-fill on reconnect without overwriting local pending changes?
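One possible shape for that protocol, sketched: a single `entity.patch` event type that carries the op-log version, and a reconnect backfill that never clobbers locally pending edits. All names here are illustrative assumptions, not a real Linear protocol:

```typescript
// Minimal event vocabulary: one patch event carrying only changed fields.
type DeltaEvent = {
  type: "entity.patch";
  entity: string; // e.g. "issue:ISS-1"
  version: number; // server op-log version, for gap detection
  fields: Record<string, unknown>; // the delta, not the full resource
};

// Fields the client changed locally that the server hasn't acked yet.
type PendingFields = Map<string, Set<string>>; // entity -> field names

// Apply a backfill of missed events. A locally pending edit wins
// over the server's value until the server acks it.
function applyBackfill(
  cache: Map<string, Record<string, unknown>>,
  pending: PendingFields,
  events: DeltaEvent[],
): number {
  let applied = 0;
  for (const ev of events) {
    const row = cache.get(ev.entity) ?? {};
    for (const [field, value] of Object.entries(ev.fields)) {
      if (pending.get(ev.entity)?.has(field)) continue; // don't clobber
      row[field] = value;
      applied++;
    }
    cache.set(ev.entity, row);
  }
  return applied;
}
```

This resolves per-field, which is deliberately crude; whether that's acceptable depends on the domain, and it's exactly the kind of protocol decision the paragraph above is pointing at.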
Performance targets
The numbers we'd hold ourselves to:
- Cold load (initial workspace fetch): under 1.5s on a 4G connection.
- Interaction latency (any keyboard shortcut, any click): under 100ms perceptually, under 16ms for the local UI update.
- Sync latency (other clients see your change): under 250ms p95.
- Search: under 50ms for any local-cached query, under 200ms for server-roundtrip.
Hitting these requires discipline at every layer. No bloated bundles, no synchronous state computations on render, no needless re-renders, no sync calls in the hot path. It's not glamorous; it's compounding small decisions.
What we'd skip in v1
To ship the *feel*, not the full feature set:
- No SAML, no SCIM, no enterprise admin. Email + Google sign-in.
- No mobile app. PWA is enough for v1.
- No on-prem, no self-hosted. Hosted only.
- No deep integrations marketplace. GitHub + Slack out of the box, Zapier/n8n for the rest.
- No sub-issues, no roadmaps, no portfolios. Single project view, single board.
The point: ship something that *feels* like Linear with 5% of the feature surface, prove the core architecture, then expand.
Why we won't build this
Two reasons:
- Linear is good enough that we'd be building a worse version of what already exists. Their team has 5+ years of compounding work on the architecture; catching up is a multi-year project even for a team with nothing else to do.
- We're an agency, not a SaaS team. We ship for clients. The product mindset, the on-call rotation, the support ops — that's a different business.
What we *could* do is ship the rough shape of this architecture as a client engagement. The local-first sync model, the server-coordinated ops, the realtime layer — those are weeks of work, not years. The result wouldn't be a Linear-killer, but it would be a tool with Linear-grade responsiveness for a specific domain (a CRM, a fulfillment dashboard, a fleet ops tool, anything where speed matters).
That's the actual offer here. We can't build the next Linear, but we can build the rough shape of it for your domain. The Web App MVP SKU is $19,995 in 30 days for a standard build. For a local-first sync architecture, expect closer to 60-90 days and a custom quote — email hello@hayaiti.com with what you're trying to build.
How the pieces show up in real builds (the meta-point)
This post is a thought experiment, but the patterns above show up across the kind of work the SKUs commit to: offline-first caching in the customer-portal scaffolding that ships on hayaiti.com itself, WebSocket-delta sync in trading dashboards from the founders' prior quant work, op-log patterns in collaborative tools generally.
Linear is the most polished public example of all of it together. The pieces are commodity. Putting them together with discipline is the work.
The Hayaiti team
Hayaiti is a productized engineering studio. We ship web, software, iOS, and cybersecurity work on fixed prices and calendar-day timelines. The team takes turns on the shipping log.