Building Tinker Token
An opinionated playground for naming design tokens.
—
There’s no shortage of thinking on how to structure token names. But most tools for generating them are heavyweight. I’ve used Airtable bases, Notion databases, spreadsheets with definition lists and formulas. They’re robust tools that double as token repositories and documentation. And that’s their strength.
But sometimes I just want to generate names quickly, without the setup overhead. Something that enforces a structure I trust, so I can focus on the output, not the plumbing.
That’s why I built Tinker Token.
Why Another Tool?
Tinker Token is opinionated. The naming schema is baked in – an evolving pattern that’s held up well in a few recent projects.
You toggle a few nodes, type your values, and the tool assembles a token name that complies with the pattern. Every output is guaranteed to follow the structure. No guesswork.
It’s especially useful if you’re new to token naming and want to see how the pieces fit together. It’s also fast enough for someone mid-project who just needs to generate names quickly.
The Schema Model
The tool uses a three-tier model that’s common in token architecture: Generic, Semantic, and Specific. Similar to primitive/alias/component or core/semantic/component. Different names, same abstraction.
Semantic is the core schema. Property comes first, describing purpose, like `backgroundColor` or `fontSize`. Intent lets you refine that purpose by targeting a functional role or sub-concept. Variant and scale act as modifiers, and state sits at the end. This is the tier most design systems live in.
Generic simplifies the schema. It hides nodes that aren’t needed and adds a type node at the start, like `color.primary.500` or `spacing.md`. The type names follow the official DTCG format module. This tier is simply a library of raw values grouped by type.
Specific extends the schema. It introduces a component/pattern node at the start, setting a more targeted scope: `button.primary.bgColor.hover`. Refine it further with intent, targeting a visual part, functional role, or sub-concept. In this tier, property moves to the end of the suffix group, after state.
The tiers inherit nodes and their values from one another. Enable a field and it appears in the schema as an empty node. Enter a value and it fills in. All three tiers share a prefix group: an optional namespace and up to three context nodes.
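To make the model concrete, here’s a minimal TypeScript sketch of how tiered assembly could work. The node names, per-tier ordering, and function names are illustrative assumptions, not Tinker Token’s actual source:

```typescript
// Hypothetical sketch of tiered token-name assembly.
// Node names and per-tier ordering are assumptions for illustration.
type Tier = "generic" | "semantic" | "specific";

interface TokenConfig {
  tier: Tier;
  delimiter: string;
  nodes: Record<string, string | undefined>; // node name -> entered value
}

// Shared prefix group (namespace, context) first, then tier-specific nodes.
const NODE_ORDER: Record<Tier, string[]> = {
  generic: ["namespace", "context", "type", "variant", "scale"],
  semantic: ["namespace", "context", "property", "intent", "variant", "scale", "state"],
  specific: ["namespace", "context", "component", "intent", "variant", "scale", "state", "property"],
};

function assembleTokenName({ tier, delimiter, nodes }: TokenConfig): string {
  return NODE_ORDER[tier]
    .map((node) => nodes[node])
    .filter((value): value is string => Boolean(value)) // skip disabled/empty nodes
    .join(delimiter);
}

// assembleTokenName({ tier: "semantic", delimiter: ".",
//   nodes: { property: "backgroundColor", intent: "brand", state: "hover" } })
// → "backgroundColor.brand.hover"
```

Because disabled or empty nodes are simply filtered out, every output stays a valid instance of the pattern regardless of which fields are toggled on.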
This is all based on well-established thinking about token abstraction. Tinker Token codifies a specific set of rules for it. The node groupings, the ordering logic, the way prefix and suffix nodes behave. The pattern is what the tool enforces. Toggle nodes, fill in values, and the structure takes care of the rest.
Tooling and Tech
I built Tinker Token with Cursor and Claude Code, which made it possible to move fast while staying intentional about the architecture. The stack is very simple: React and Vite, deployed on Vercel. No external UI library. The project is small enough that adding one would be overhead without payoff.
An Airtable base provides node presets with descriptions. It’s a lightweight backend that I can update without redeploying. When you open a node input’s preset menu, the suggestions come from a single Airtable table filtered by the node type property.
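A sketch of what that lookup could look like, assuming Airtable’s standard REST API. The base ID, table name, and field names here are placeholders, not the app’s real schema:

```typescript
// Hypothetical preset lookup: fetch one Airtable table, then filter
// records by a "node type" field. Field names are placeholders.
interface PresetRecord {
  id: string;
  fields: { name: string; description?: string; nodeType: string };
}

// Pure filter: suggestions for a given node input's preset menu.
function presetsForNode(records: PresetRecord[], nodeType: string): string[] {
  return records
    .filter((record) => record.fields.nodeType === nodeType)
    .map((record) => record.fields.name);
}

// Fetching via Airtable's REST API (requires a personal access token):
async function fetchPresets(baseId: string, table: string, token: string): Promise<PresetRecord[]> {
  const res = await fetch(`https://api.airtable.com/v0/${baseId}/${encodeURIComponent(table)}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const data = await res.json();
  return data.records as PresetRecord[];
}
```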
The token configuration lives in the URL. Tier, nodes, values, delimiters all sync in real time via URLSearchParams and history.replaceState. Shared links preserve the exact configuration. So does a page reload.
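The sync boils down to a round-trip between the config object and the query string. A minimal sketch (param names are assumptions, not the app’s actual keys):

```typescript
// Hedged sketch of URL-synced state: serialize the configuration into
// query params and read it back on load. Param names are illustrative.
interface Config {
  tier: string;
  delimiter: string;
  nodes: Record<string, string>;
}

function configToParams(config: Config): URLSearchParams {
  const params = new URLSearchParams();
  params.set("tier", config.tier);
  params.set("delim", config.delimiter);
  for (const [node, value] of Object.entries(config.nodes)) {
    params.set(node, value);
  }
  return params;
}

function paramsToConfig(params: URLSearchParams): Config {
  const nodes: Record<string, string> = {};
  for (const [key, value] of params.entries()) {
    if (key !== "tier" && key !== "delim") nodes[key] = value;
  }
  return {
    tier: params.get("tier") ?? "semantic",
    delimiter: params.get("delim") ?? ".",
    nodes,
  };
}

// In the browser, each change would then mirror state into the address bar:
// history.replaceState(null, "", `?${configToParams(config)}`);
```

Using `replaceState` rather than `pushState` keeps every keystroke from polluting the back button, while the round-trip above is what makes shared links and reloads restore the exact configuration.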
Typography matters when the output is a string of text. Token names need to be readable at a glance, and similar characters like l, 1, and I need to be distinct. For the token preview (schema and output), I chose Iosevka, a highly customizable monospace typeface with solid OpenType features. And its condensed variant was perfect for keeping long token names compact without eating up horizontal space.
How it All Evolved
I started with a Tailwind prototype to validate core functionality. Tailwind was great for this phase, allowing me to focus on interactions without thinking about styling. Get the schema model working, see if the tiers made sense.
Once the concept was solid, I moved into Figma and designed the frontend properly. I built a simple but scalable token system – colors, spacing, typography, radii – all output as CSS variables.
Then I stripped out Tailwind and re-implemented the styling using Figma’s MCP server. A limited set of components and tokens pulled directly into code. No manual translation, no guesswork. Tailwind had served its purpose, but once I had designs, it was too prescriptive for what I wanted.
Along the way, functionality and design matured together. The Figma file stayed current; the code stayed in sync. It’s the kind of design-to-code parity I push for in systems work. Nice to practice it on my own project.
Building with Intent
Vibe coding and spec-driven development solve different problems. One is exploratory, conversational, and great for quick prototypes. The other is structured and upfront: you define what to build before you build it.
The initial functional prototype was vibe-coded. But once the project needed to stay maintainable, I shifted to specs. Architecture docs, rules, and context files that will help me or any AI agent pick up where I left off.
This wasn’t a team project. But clarity still meant maintainability. The documentation wasn’t overhead. It’s what made the project buildable.
AI Enabling Speed
AI-assisted coding shines when you’re moving fast and staying intentional. You describe the problem, iterate on the approach, and move faster than a traditional handoff workflow ever allows.
Here’s an example: the token name output sits in a sticky bar with controls. On desktop, the bar is intentionally short to preserve vertical space for the form fields below. But as the token name grows, it can run into the controls on either side, and the layout needs to detect that overlap and react. With AI, I explored actual collision detection in half an hour, quicker than it would take to explain the problem to an engineer and open a ticket.
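The core of a check like that is a rectangle-overlap test. A minimal sketch, not the app’s actual code; in the browser the rects would come from `getBoundingClientRect()`, re-measured in a `ResizeObserver` callback as the token name grows:

```typescript
// Illustrative collision check: does the token name's box intersect
// any of the control boxes? Names and shapes are assumptions.
interface Rect {
  left: number;
  right: number;
  top: number;
  bottom: number;
}

function overlaps(a: Rect, b: Rect): boolean {
  // Two axis-aligned rects intersect iff they overlap on both axes.
  return a.left < b.right && b.left < a.right && a.top < b.bottom && b.top < a.bottom;
}

function needsCompactLayout(tokenName: Rect, controls: Rect[]): boolean {
  return controls.some((control) => overlaps(tokenName, control));
}
```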
This is no different from how I’d treat the problem in a cross-functional product setting: constraints made explicit, edge cases tested early, solution validated with implementation in mind. Only shipped solo.
In Closing
None of this work happened in a vacuum. The thinking behind Tinker Token draws on inspiration from:
Nathan Curtis and his writing on token naming conventions.
Lukas Oppermann and GitHub’s Primer Design System.
Marta Conde and her Tokens Glossary Notion template.
Romina Kavcic and her The Design System Guide.
The Design Tokens Community Group and the emerging specification.
And numerous others. If you’ve published thinking on token structure that shaped how I approached this, thank you. The field is better for it.
Tinker Token is live at tinkertoken.com. It’s still evolving. Drag-to-reorder nodes, saved configurations, and maybe a Figma plugin next!
Update: Featured on Wireframe Live
Following launch, Donnie D’Amato covered Tinker Token on his show Wireframe Live. He walked through the schema model, the tier system, and how the tool approaches token naming. It was nice to see my thinking examined out loud.
You can watch the episode here: Wireframe Live – Tinker Token