
Brand as Infrastructure

Brand is not logos and taglines. It is something closer to a programming language. A structured, machine-readable definition of how something should look and feel and sound across every surface it ever touches. Not just the colours and the typography, but the tone of voice, the visual perspective of the photography, the motion language, the sonic identity, the spatial principles. All of it.

Look at almost any company and the setup is the same: a Figma file someone updates occasionally, a brand guidelines PDF that is already out of date, a Slack channel where designers answer questions that should have a definitive answer somewhere, and a codebase full of hardcoded values that bear increasingly little relationship to anything in the design tool. The strategic dimensions of the brand - positioning, voice, visual intent - live in the heads of a few people and in presentations that nobody can query programmatically.

This is not a design problem. It is an infrastructure problem. And design tokens are the first proven technical pattern for solving it.

What Tokens Actually Are

A design token is the smallest possible unit of a brand decision. It is a name attached to a value.

color.brand.primary = #0047FF
space.component.medium = 16px
font.family.display = "Canela", Georgia, serif
motion.duration.standard = 200ms

That is it. Four examples, four decisions. The colour of your primary button. The padding inside your cards. The typeface on your hero text. How fast things animate.

What makes tokens different from a CSS variable or a Sass constant is not the syntax. It is the intent. A token is supposed to be platform-agnostic. It lives above any specific technology. It is a fact about your brand that gets translated into whatever format each platform needs.

Your web app reads it as a CSS custom property. Your iOS app reads it as a Swift constant. Your Android app reads it as an XML resource. Your email templates read it as an inline style. Your AI-generated marketing copy eventually reads it as a constraint. The token itself stays the same. Only the rendering changes.
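That translation step can be sketched in a few lines. This is a minimal illustration, assuming a flat dict as the token store and hand-rolled formatters; real pipelines use a build tool like Style Dictionary, and the names here are illustrative:

```python
# Minimal sketch: one platform-agnostic token source, multiple rendered outputs.
# The dict-based store and formatter functions are illustrative, not a real tool's API.

TOKENS = {
    "color.brand.primary": "#0047FF",
    "space.component.medium": "16px",
    "motion.duration.standard": "200ms",
}

def to_css(tokens):
    """Render tokens as CSS custom properties."""
    lines = [f"  --{name.replace('.', '-')}: {value};" for name, value in tokens.items()]
    return ":root {\n" + "\n".join(lines) + "\n}"

def to_swift(tokens):
    """Render tokens as Swift string constants with camelCase names."""
    def camel(name):
        parts = name.split(".")
        return parts[0] + "".join(p.capitalize() for p in parts[1:])
    return "\n".join(f'let {camel(n)} = "{v}"' for n, v in tokens.items())

print(to_css(TOKENS))
print(to_swift(TOKENS))
```

The point of the sketch is the shape, not the code: the source dict never changes, and each platform gets a projection of it.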

This separation, between the decision and the implementation of that decision, is the whole point. Most teams have not achieved it yet, but the tooling is getting closer.

Why Consistency Is a Hard Problem

Brand consistency sounds like a soft, subjective goal. It is not. It is a hard coordination problem.

Think about how a brand decision propagates through an organisation today. A designer adjusts the primary colour in Figma. They update the component library. They post in Slack that it changed. Some developers see the message. Some do not. The mobile team updates their end at a different time. A month later, marketing builds a landing page using the old colour, sampled from a screenshot. An agency creates event materials using whatever was in the last brand deck they received. Six months later the brand looks like four different companies.

This is not because people are careless. It is because the brand only exists as a reference document, not as executable infrastructure. There is no single source of truth that every platform reads from programmatically.

Tokens change that. When the primary colour token updates, every platform that reads from the token source rebuilds automatically. The mobile app gets a new build. The web app gets updated CSS. The documentation site regenerates. Nobody has to remember to tell anyone anything.

Brand as infrastructure means treating brand decisions as configuration to be deployed, not information to be communicated.

The Three Layers Every Brand System Needs

Every well-built token system arrives at the same three-layer architecture.

Layer one: primitives. The raw palette. Every colour that exists in the system. Every step of every spacing scale. Every font size, every weight, every radius value. These are not named for how they are used. They are named for what they are.

color.blue.500 = #3b82f6
color.blue.600 = #2563eb
color.blue.700 = #1d4ed8
space.4 = 16px
space.8 = 32px

Primitives are not used directly in components; they are not meant to be. They are the vocabulary the next layer draws from.

Layer two: semantic tokens. This is where meaning gets attached. Semantic tokens reference primitives and describe usage.

color.action.primary.default    = {color.blue.600}
color.action.primary.hover      = {color.blue.700}
color.text.default              = {color.neutral.900}
color.text.subtle               = {color.neutral.600}
color.background.surface        = {color.neutral.0}

This layer is where most of the design decisions actually live. When you change the primary action colour from blue to violet, you update the semantic token to point to a different primitive. Every component that references that semantic token updates. Components do not know or care that the change happened.
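That indirection can be sketched as a reference-resolution step. This assumes tokens stored as a flat dict where a value in {braces} is an alias to another token; the storage format is an assumption for illustration, not any particular tool's:

```python
# Sketch of the semantic layer: values in {braces} are aliases that
# resolve to primitives. Flat-dict storage is an illustrative assumption.
import re

tokens = {
    "color.blue.600": "#2563eb",
    "color.blue.700": "#1d4ed8",
    "color.violet.600": "#7c3aed",
    "color.action.primary.default": "{color.blue.600}",
}

def resolve(name, tokens, seen=None):
    """Follow {alias} references until a literal value is reached."""
    seen = seen or set()
    if name in seen:
        raise ValueError(f"circular reference at {name}")
    seen.add(name)
    value = tokens[name]
    match = re.fullmatch(r"\{(.+)\}", value)
    return resolve(match.group(1), tokens, seen) if match else value

print(resolve("color.action.primary.default", tokens))  # #2563eb

# A rebrand is one edit at the semantic layer; components are untouched.
tokens["color.action.primary.default"] = "{color.violet.600}"
print(resolve("color.action.primary.default", tokens))  # #7c3aed
```

The rebrand in the last two lines is the whole argument in miniature: one pointer moves, and every consumer of the semantic name follows it.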

Layer three: component tokens (optional). Some mature systems add a third layer for component-specific decisions that should not be shared across the whole system. The border radius of a button specifically. The padding inside a modal specifically. Most teams should not start here.

This three-layer structure is not new thinking. What is new is actually encoding it in a machine-readable format and building tooling around it that enforces the layers and automates the outputs.

Tokens are the proven foundation. But they are not the ceiling. The same architectural pattern - a named decision, a structured value, a reference system, a multi-platform output - extends to dimensions of brand identity that go far beyond colour and spacing. That is where this gets interesting.

The Standard That Makes This Possible

For years, teams built their own token formats and their own tooling to transform them. Each company had slightly different JSON shapes, slightly different naming conventions, slightly different build scripts. Tools could not interoperate. Tokens exported from one system could not be read by another without custom transformation code.

The W3C Design Token Community Group has been working on a standard since 2019. It defines a specific JSON structure for tokens that any tool can implement against. The key ideas are simple: tokens have a $value, an optional $type (color, dimension, fontFamily, duration, and so on), and an optional $description. Token values can reference other tokens using a {dot.separated.path} syntax.

{
  "color": {
    "brand": {
      "primary": {
        "$value": "{color.blue.600}",
        "$type": "color",
        "$description": "Primary brand colour. Use for CTAs and primary interactive elements."
      }
    }
  }
}

The spec is stable enough now that the major tools have adopted it. More importantly, it means the tokens themselves can outlive any specific tool. Your token files are portable. If you decide to switch from one build tool to another in two years, your token source does not change.

That portability matters more as the tooling around tokens gets more ambitious.

The Case for a Brand Operating System

Most design system tools available today are oriented around a designer updating things in a GUI. That made sense when the primary consumer of brand decisions was a human and when those decisions were mostly visual. Both of those assumptions are breaking down.

AI agents are becoming frequent consumers of brand decisions, and the decisions themselves extend far beyond what a traditional design system covers. Colour and spacing are important, but so are tone of voice, visual perspective, motion language, sonic identity, and strategic positioning. A brand operating system needs to encode all of it in a way that is queryable, not just navigable.

Teams moving in this direction are finding that owning the brand infrastructure gives them flexibility that third-party tools cannot match. It is an area we are actively exploring at Cene. The advantages tend to cluster around three things.

Control over the schema. When you own the brand infrastructure, you decide what goes in it. Colours and spacing are the obvious starting point. But the schema should extend to tone of voice guidelines encoded as structured data. Motion curves with semantic names. Photography style described as camera perspective, lighting intent, and compositional principles. Brand sound design. Customer sentiment profiles. Competitive positioning. If you are plugging into a third-party tool, you are limited to what they decided a brand can represent. If you build your own, the schema is as wide as the brand itself.

Control over the API surface. The whole point of brand as infrastructure is that other systems can read from it programmatically. If your brand system lives in a SaaS tool, the API surface is whatever they expose. If you build it, the API surface is exactly what you design it to be. You can make it easy to query. You can make it easy to subscribe to changes. You can make it easy for an AI agent to ask not just "what colour is the primary button?" but "what is the correct visual treatment for a campaign targeting optimism in the Nordic market?"

Control over the update lifecycle. Brand decisions change. Sometimes slowly, sometimes suddenly (rebrand). When you own the infrastructure, you control how changes propagate and at what pace. You can deploy a token change to staging environments before production. You can run A/B tests on token values. You can roll back a change that broke something downstream. None of this is possible when your brand system is a Figma file someone emails around.
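What a versioned, roll-back-able token store could look like can be sketched with an append-only history. The class and its methods are hypothetical, not a real product's API:

```python
# Hypothetical sketch: a token store with versioned deploys and rollback,
# implemented as an append-only list of snapshots.

class TokenStore:
    def __init__(self, initial):
        self.versions = [dict(initial)]  # append-only history of snapshots

    @property
    def current(self):
        return self.versions[-1]

    def deploy(self, changes):
        """Publish a new version with the given overrides applied."""
        self.versions.append({**self.current, **changes})
        return len(self.versions) - 1  # version number

    def rollback(self):
        """Revert by re-publishing the previous snapshot."""
        self.versions.append(dict(self.versions[-2]))

store = TokenStore({"color.brand.primary": "#0047FF"})
store.deploy({"color.brand.primary": "#7C3AED"})  # a change that broke something
store.rollback()
print(store.current["color.brand.primary"])  # #0047FF
```

Because history is append-only, a rollback is itself a new version: you always know what state the brand was in at any point, which is exactly the auditability a Figma file cannot give you.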

The Agentic Future

AI agents are going to create a lot of brand touchpoints. They already are.

Right now it is mostly text. An AI writes a product description, a support response, a social post. Teams are scrambling to write brand guidelines for AI output: what tone to use, what words to avoid, how formal to be. These guidelines live in prompt engineering. They are manual, fragile, and not composable.

But the scope is widening fast.

Agents are starting to generate images. Generate UI code. Generate video. Generate audio. Generate 3D assets. Every one of these modalities is a brand surface. Every one of them needs to know what "on brand" means in a machine-readable way.

Companies that have a well-structured, programmatically accessible brand system will be able to instruct agents in a way that produces consistent output. Companies that rely on static documents for brand governance will likely spend more time on manual review and correction as the volume of AI-generated touchpoints increases.

Think about what "give the agent access to your brand tokens" actually means at each stage:

Today: The agent knows your primary colour is #0047FF, your font is Inter, and your primary button has 8px of padding. It can generate UI code or HTML copy with those values applied correctly.

Near term: The agent understands semantic meaning. It knows that color.text.danger should be used when something has gone wrong, not for decorative purposes. It can make compositional decisions about which tokens to apply in which contexts.

Medium term: The agent has access to the full brand schema - not just the visual tokens but the motion language, the voice guidelines, the photography intent, the spatial principles, the positioning relative to competitors. It can generate a short video for a new product feature that is genuinely on brand because it has structured access to what on-brand means across every dimension. The brand is no longer a palette. It is an operating system.

Further out: The agent updates token values based on A/B test results, seasonal campaigns, or regional market differences. The brand system becomes dynamic. The infrastructure that lets agents read tokens also lets agents write them, within guardrails you control.

The last scenario is where things get philosophically interesting. A brand that can adapt autonomously, within defined constraints, is something genuinely new. It is not a static identity document. It is a living system with rules about how it can change.

The Full Brand Schema

Design tokens started with colours, spacing, and typography because those were the most obvious things to encode. But a brand is not just a visual system. It is a set of decisions about how something should be perceived across every dimension, and each of those dimensions can be structured.

Colour tokens translate directly to physical production. Pantone values, CMYK profiles, and RAL codes are just different output formats for the same underlying colour decision. If your brand infrastructure stores a colour token, there is no technical reason it cannot also store the physical equivalents. A printer or a manufacturer's API reads the same source your web app does.
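As an uncalibrated illustration of that idea - real print workflows use ICC profiles and press-specific conversion, so this naive formula is a sketch of "one decision, many output formats", not production colour management:

```python
# Naive sketch: derive approximate CMYK values from a hex colour token.
# Real print production uses ICC colour profiles; this device-independent
# formula is illustrative only.

def hex_to_cmyk(hex_value):
    r, g, b = (int(hex_value.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    k = 1 - max(r, g, b)
    if k == 1:  # pure black
        return (0.0, 0.0, 0.0, 1.0)
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return tuple(round(v, 3) for v in (c, m, y, k))

print(hex_to_cmyk("#0047FF"))  # (1.0, 0.722, 0.0, 0.0)
```

The interesting part is not the arithmetic but where it runs: as one more formatter reading the same colour token the CSS output does.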

Typography translates too. The decision about which typeface carries brand authority does not change when you go from a screen to a printed page to an embroidered logo on merchandise. What changes is the rendering context.

Motion is trickier. A 200ms ease curve for a button hover does not mean much on a physical object. But brand motion principles, the feeling of weight, the sense of energy or calm, the personality expressed through movement, those can be encoded as structured guidelines even if the specific values need interpretation for each medium.

Photography and imagery may be the most consequential extension. Brand photography has always carried intent. A designer chooses low-angle shots to convey authority. Warm light and shallow depth of field to suggest intimacy. Wide, upward-looking compositions with natural light to communicate optimism. These are not arbitrary aesthetic choices. They are brand decisions, and right now they exist only as mood boards, art direction briefs, and the intuition of whoever happens to be commissioning the shoot.

But these decisions can be structured. Camera perspective, lighting direction, colour grading, depth of field, subject framing, environmental context - each of these is a parameter that can be described precisely enough for a system to store and for a generative model to interpret. A token like imagery.mood.default = optimistic is not useful on its own, but a structured object that encodes what optimistic means visually - upward angles, open compositions, warm highlights, natural environments, people in motion rather than posed - gives an image generation model something concrete to work with.

This matters because image generation is scaling fast. Marketing teams are already using generative tools to produce campaign imagery, social media assets, product visualisations, and editorial illustrations. Without structured brand guidance, every generation is a coin flip. The model produces something that might be on brand or might not, and a human reviews it manually. With imagery tokens, the model starts from the brand's visual language rather than from zero. The output is not guaranteed to be perfect, but the baseline shifts from random to intentional. The same structured description that guides an AI image generator could also guide a human photographer on a physical shoot, or an agency producing video content. One source of visual intent, multiple rendering contexts.
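A sketch of what such a structured imagery token might look like, with entirely hypothetical field names - there is no standard schema for this yet:

```python
# Hypothetical imagery-token schema: "optimistic" expanded into concrete,
# machine-readable parameters. Field names are illustrative, not a standard.

IMAGERY = {
    "imagery.mood.optimistic": {
        "camera_angle": "low, looking upward",
        "composition": "open, generous negative space",
        "lighting": "natural, warm highlights",
        "depth_of_field": "shallow",
        "subjects": "people in motion rather than posed",
    }
}

def to_brief(token_name, store=IMAGERY):
    """Render a structured imagery token as a plain-language brief."""
    attrs = store[token_name]
    return "; ".join(f"{k.replace('_', ' ')}: {v}" for k, v in attrs.items())

print(to_brief("imagery.mood.optimistic"))
```

The same rendered brief could be appended to an image-generation prompt or handed to a photographer, which is the "one source of visual intent, multiple rendering contexts" claim in concrete form.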

Sound is starting to feel tractable. Sonic branding has existed for a long time (Intel, McDonald's, the Mac startup chime), but it has always lived outside any technical brand system. There is no reason why brand sound design, the tonal qualities, the instrument palette, the rhythm and pacing, could not be represented as structured data that an AI audio generator reads from.

Voice and tone are the most underexplored frontier here. Large language models can embody a consistent voice if given enough structured guidance. The tools do not quite exist yet to encode voice guidelines in a machine-readable format that reliably produces consistent output, but it feels close. When it arrives, it belongs in the brand token system alongside colour and typography.

Spatial computing is going to add another layer. What does brand mean in a room-scale mixed reality environment? What are the spatial layout principles? The principles around proximity and scale and material quality that define how a brand feels when you are standing inside it? This is not a question anyone has fully answered yet, but the companies building brand infrastructure now will be best positioned to extend it into these new contexts when they arrive.

The Economics of Consistency

Brand consistency at scale is genuinely expensive. Large companies spend significant resources on brand governance, reviewing materials produced by agencies, regional teams, and internal departments to ensure they are on brand. Smaller companies often accept inconsistency because they cannot afford the governance overhead.

Brand infrastructure has the potential to change this equation. If consistency is baked into the tooling rather than enforced through review, the cost structure shifts. The agent that generates an email campaign reads from the same source as the developer who builds the website. Consistency becomes a property of the system rather than an outcome of human effort.

There is a growing market for this kind of infrastructure. Several companies are building it as a service. Others are building it internally. The common thread is that brand is starting to be treated as a data architecture problem, not just a creative one. That intersection - design thinking and systems thinking applied to the same problem - is where most of the interesting work is happening.

What This Could Look Like

The interesting design question is what interface a brand system exposes to the world.

The obvious answer is an API. Your web app queries an endpoint and gets CSS custom properties. Your iOS app gets Swift constants. Your marketing tool gets a JSON payload with the current token values. Everything that touches the brand is a client of the same versioned source.

But an API is just one surface. The more useful question is: what are all the surfaces a brand system might need to expose, and which ones matter most for each type of consumer?

Package registries. Tokens published as an npm package, a Ruby gem, a Swift package, a pip module. Developers install the brand the same way they install any other dependency. Version pinning means a team can upgrade deliberately. Lockfiles mean builds are reproducible. Shopify already does this with Polaris - one token source published to npm and RubyGems simultaneously. The advantage is that it fits perfectly into existing developer workflows. No new infrastructure, no API keys, no network calls at runtime. The tokens are just there, in node_modules, versioned and deterministic.

CDN-hosted token files. A URL like brand.company.com/tokens/v2/base.json that any client fetches at build time or runtime. No SDK, no authentication, no package manager. A static JSON file on a CDN is the simplest possible distribution mechanism. It works for CI pipelines that need to pull tokens during a build step. It works for third-party agencies that need current brand values without access to your internal tooling. It works for serverless functions that need to resolve a token value without importing a package. The tradeoff is that there is no type safety, no autocomplete, no built-in versioning beyond what you encode in the URL path. But for reach and simplicity, nothing beats a URL.

Structured files in the repository. This may be the most important surface for the agentic future, and it is also the simplest. AI coding agents - Claude Code, GitHub Copilot, Cursor - already read markdown files from the project root for context. A .brand/ directory with structured markdown or JSON files means every agent that touches the codebase has brand context automatically, without an API call, without a package install, without any configuration at all. The agent opens a file, reads the brand guidelines, and applies them. This is how tools like Claude already consume project-level instructions: a markdown file at a known path. Brand guidelines encoded the same way - tone of voice in voice.md, colour semantics in colours.md, component usage rules in components.md - would give every AI tool in the development workflow immediate access to brand decisions. No integration work required.
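The loading side of that is almost trivially simple, which is part of the appeal. A minimal sketch, assuming a hypothetical .brand/ layout and demonstrated here with a temporary directory standing in for a repository:

```python
# Sketch: load a hypothetical .brand/ directory of markdown guideline
# files into a dict that an agent or build script can read.
import pathlib
import tempfile

def load_brand_context(root):
    """Read every .md file under root into {stem: contents}."""
    return {p.stem: p.read_text() for p in pathlib.Path(root).glob("*.md")}

# Demo with a temporary directory standing in for a repo's .brand/ folder.
with tempfile.TemporaryDirectory() as tmp:
    brand = pathlib.Path(tmp)
    (brand / "voice.md").write_text("Direct, warm, no jargon.")
    (brand / "colours.md").write_text("Primary actions use color.action.primary.")
    context = load_brand_context(brand)
    print(sorted(context))   # ['colours', 'voice']
    print(context["voice"])  # Direct, warm, no jargon.
```

No server, no SDK, no auth: the brand context is just files at a known path, which is exactly the convention AI coding tools already follow for project instructions.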

IDE and editor integrations. A Language Server Protocol implementation that provides autocomplete, hover documentation, and linting for token names directly in the editor. A developer types color. and gets the full semantic palette with visual previews. A linter flags raw hex values that should be token references. A hover tooltip shows not just the resolved value but the semantic meaning: "Primary action colour. Use for CTAs and interactive elements." This is where tokens stop being a build-time concern and become part of the minute-to-minute developer experience. The overhead of learning a token system drops significantly when the editor teaches you as you type.

Semantic embeddings. This is further out but worth considering. Store brand guidelines - not just token values but the reasoning behind them, the usage rules, the contextual guidance - as vector embeddings in a queryable index. An agent generating a UI for an error state does not look up color.text.danger by key. It asks, in natural language, "what is the correct treatment when something has gone wrong?" and gets back the relevant tokens, the usage guidelines, and examples of correct application. This is the difference between a lookup table and a knowledge base. Exact key lookup works when you know the key. Semantic search works when you know the intent but not the taxonomy.
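As a toy sketch of the difference between key lookup and intent lookup - word overlap here is a crude stand-in for real vector embeddings, and the guideline text is illustrative:

```python
# Toy sketch of intent-based token lookup. Word overlap stands in for
# real vector embeddings; guideline text and token names are illustrative.
import re

GUIDELINES = {
    "color.text.danger": "use when something has gone wrong: errors, destructive actions, failed states",
    "color.text.success": "use when something has completed correctly: confirmations, passed checks",
    "color.text.subtle": "use for secondary information, captions, de-emphasised metadata",
}

def words(s):
    return set(re.findall(r"[a-z]+", s.lower()))

def best_match(query, guidelines=GUIDELINES):
    """Return the token whose guideline shares the most words with the query."""
    q = words(query)
    return max(guidelines, key=lambda t: len(q & words(guidelines[t])))

print(best_match("what is the correct treatment when something has gone wrong?"))
# color.text.danger
```

A production version would embed the guidelines with a real model and search by cosine similarity, but the interface is the point: the agent asks in terms of intent and gets back a token, its guideline, and its context.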

MCP servers. Model Context Protocol is how AI agents connect to external data sources and tools. A brand system exposed as an MCP server would mean an AI agent could query your brand directly, in context, while it is generating something. It could ask what the correct button treatment is for a destructive action and get a structured, authoritative answer from the brand system rather than from whatever context was jammed into a prompt. Adobe has already shipped this with Spectrum. It is not theoretical.

The point is that these are not competing options. They are complementary surfaces for different consumers. Developers get packages and IDE integrations. Build systems get CDN URLs. AI agents get MCP servers and repo-level markdown. The brand system underneath is the same. The interfaces are just projections of it, shaped for whoever is reading.

That distinction matters. Prompt-based brand guidance is fragile. It relies on the right instructions being included every time, interpreted consistently, and applied correctly. A queryable brand system is different. It is the same source every surface reads from, machine-readable by design, versioned so you know exactly what state the brand was in when something was generated.

The brand does not live in Figma. It does not live in a PDF. It lives in a system that other systems can read from, and the interface to that system is whatever makes it easiest for each client to consume correctly.

When you update the primary colour, every client gets the update. When you add a new semantic token for a new use case, every platform can start using it immediately. When you need to audit what is using which token, you query the system. When an agent generates a new campaign asset, it reads from the same source as the developer who built the product it is promoting.

The important thing is not which specific interface ends up winning. It is that the brand exists as infrastructure at all, built on an open standard like the W3C DTCG spec, with enough structure that any client can consume it reliably.

Brand is starting to look like an operating system. Not in the sense of a single product, but in the sense of a layered infrastructure that everything else runs on top of.


Who Is Already Building This

This is not speculative. Several organisations have built real token infrastructure, and the scope of what they are encoding is expanding.

Adobe is the most interesting case right now. Their design system, Spectrum, has been token-driven for years. But in 2025 they published @adobe/spectrum-design-data-mcp, a Model Context Protocol server that gives AI tools structured access to their entire token and component schema. An agent can query it to find the right colour token for a specific use case, get component API definitions, and receive design recommendations, all from a live, versioned source. It is exactly the architecture described above, shipping now, in production.

Salesforce coined the term "design token" in 2014, credit to Jina Anne and the Lightning Design System team. Their open-source tool Theo was the first serious attempt at platform-agnostic token transformation. The fact that a concept from a 2014 internal Salesforce project is now a W3C specification says something about how slowly but definitively this infrastructure is being standardised.

IBM Carbon has published their entire token system as open source at github.com/carbon-design-system/carbon. IBM uses it across a global product portfolio with multiple brands and regional variations. The token layer is what makes that scale manageable.

Shopify Polaris ships design tokens as both an npm package and a Ruby gem, making the same token values available to their JavaScript and Ruby stacks simultaneously. One source, two ecosystems.

GitHub Primer has open-sourced their token primitives at primer/primitives, with the entire build pipeline using style-dictionary. The JSON token files, the build scripts, and the compiled CSS outputs are all public. It is a useful reference for anyone building something similar.

Adobe Spectrum (again, worth the double mention) has renamed their token repository to spectrum-design-data to reflect that it has grown beyond tokens alone. It now includes component schemas, anatomy documentation, and the MCP server. The direction is clear: a brand data layer, not just a colour palette file.

Brand.ai is approaching this from a different angle entirely. Rather than starting with design tokens and working outward, they are building what they call a "Brand OS" - a platform that ingests existing brand materials and converts them into a machine-readable format that maps over 150 dimensions of brand identity. Their system includes a Brand Foundation layer that extracts structure from static brand assets, an ontology that covers positioning, cultural relevance, and customer sentiment alongside the visual language, and an execution layer for generating on-brand content with built-in compliance checking. It is the most ambitious attempt so far to make the full breadth of a brand queryable by AI agents, not just the colour palette and typography but the strategic and tonal dimensions as well. Where most token systems encode what the brand looks like, Brand.ai is trying to encode what the brand means.

The Foundation Is in Place

The W3C Design Tokens Community Group, chaired by Jina Anne and co-founded by Kaelig Deloumeau-Prigent, announced the first stable version of the Design Tokens Specification in October 2025. After years of drafts and iteration, the format is production-ready and vendor-neutral. More than ten design tools and open-source projects have implemented or are implementing the standard, including Figma, Penpot, Sketch, and Framer. The interoperability this enables is real: tokens defined in one tool can be consumed by pipelines, editors, and agents built by completely separate teams.

What is interesting is where this goes next. The W3C spec covers the visual layer well - colours, dimensions, typography, motion. But the broader brand schema - voice, imagery, sound, positioning, sentiment - does not have a standard yet. That is the open frontier. The tooling that emerges to structure those dimensions, and the interfaces that make them queryable by both humans and agents, will define what brand infrastructure looks like over the next decade.

At Cene, this is the problem we are working on. We think of it in three layers. The first is a brand operating system: infrastructure that encodes the full schema of a brand - not just the visual tokens but the voice, the imagery intent, the motion language, the strategic positioning - and makes it queryable by every tool and agent that touches the brand. The second is an agency operating system: infrastructure for how creative work gets planned, produced, and delivered, with the brand system embedded into the workflow rather than referenced from a PDF on the side. The third, further out, is a creative operating system: a unified layer that connects the creative tools, the brand intelligence, and the operational workflows into a single system.

These are long projects. Decades, probably. But the foundational patterns - structured decisions, semantic references, platform-agnostic outputs, machine-readable schemas - are already here. Design tokens proved the architecture. The question now is how far the pattern extends, and how systematically the creative industry applies it.

