Multi-directional workflow with Claude via MCP

Hi.

This relates to connecting an AI, such as Anthropic’s Claude, to Penpot via MCP and using a multi-directional workflow to develop a website (or app) locally: start with just a sketch in Penpot, build a prototype in a local development site, then create the designs and style guide in Penpot, and finally automate design changes in the dev environment.

I’d love some advice / suggestions on the best steps / prompts to use with Claude (or other AIs) to achieve the following workflow as quickly as possible:

  1. Start with a sketch & mood board

  2. Create a prototype in a local dev env

  3. Create:
    a) Penpot area for each web page
    b) if there are many similar pages (e.g. an account section), a single space for the combined UI elements
    c) a full design guide separating typography, buttons, forms, widgets etc
    d) ability to have second design guide (eg for admin area)

  4. Amend Penpot

  5. Automated local dev site updates

  6. Best setup in Penpot to enable this.

Thanks!

I’ve just noticed Recreating Penpot-MCP Demos (updated: 13/02/2026) from @ucan and will have a good read

Penpot

@mcp-penpot create a basic and meaningful prototyping interaction between the screens currently selected in Penpot. Look for possible interaction elements, such as buttons with text or icons, or cards that can act as interaction triggers to navigate between boards.

@mcp-penpot take the HTML / CSS generated from <> and recreate it in my current Penpot file. Do not just draw boxes: use ‘Flex layout’ for the containers, convert the buttons and other action triggers into components, and logically group the layers.

Generate a Design System documentation file from a Penpot file:

As a Code Generation AI specialised in Design System documentation, your task is to fully extract the design system data from the provided Penpot file / API response and generate a single, comprehensive markdown file (~/AI/mcp-servers/design-system-documentation.md).

The output must be structured with the following, mandatory sections including all available detail:

  1. Introduction & Overview - A brief, professional introduction to the Design System (DS) and its purpose (e.g. consistency, scalability). Specify the source Penpot file, project name and date of extraction.
    Source Penpot File: [Insert Name of Penpot Project/File]
    Date of Extraction: [YYYY-MM-DD]

  2. Design System Colors - A comprehensive colour palette. Document semantic colors with usage notes, and list all colors with RGB, Hex & HSL values.

  3. Design Tokens: Typography - A complete typography system featuring font-family, text styles (font-size, weight, line-height & letter spacing), and a typography scale based on the Minor Third ratio.
    Include detailed usage guidelines.

  4. Design Tokens: Spacing & Layout - Full spacing & layout specifications:
    6-level spacing scale (XS to 2XL) in px and rem.
    12-column grid system with gutter and margins.
    Component dimensions table.
    5 responsive breakpoints.

  5. Design Tokens: Depth & Effects - Shadow & radius tokens:
    Shadow elevation levels with full CSS box-shadow values.
    7 border radius values from 4px to 80px.
    Opacity values for various shapes.

  6. Component Inventory: Details of major components:
    Button (3 variants: Primary, Secondary, Icon:Text).
    Input field (with search variant).
    Card (Main card, Related card with dimensions and shadows).
    Modal/Dialog (Slide in).
    Navigation item (4 navigation types).
    Category Pill (with 4 different states).
    Tabs (with 4 different states).
    Banner Carousel (with fade transition and position indicator).
    Additional components (e.g. Headers).

  7. Additional guidelines - Responsive behaviour, Accessibility, Performance, Usability and design principles.

  8. Implementation guidelines - CSS Variable example and HTML component usage.
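As a sanity check on the numeric sections above, the Minor Third type scale and the px-to-rem conversions can be computed directly. The 16px base and the five steps below are illustrative assumptions for the sketch, not values extracted from any Penpot file:

```python
# Illustrative numbers only: the 16px base and Minor Third (1.2) ratio
# are assumptions for this sketch, not values read from a Penpot file.

MINOR_THIRD = 1.2
BASE_PX = 16

def type_scale(steps, base=BASE_PX, ratio=MINOR_THIRD):
    """Font sizes in px for `steps` levels of a modular scale."""
    return [round(base * ratio**i, 2) for i in range(steps)]

def px_to_rem(px, root=BASE_PX):
    """Convert a px value to rem relative to the root font size."""
    return round(px / root, 4)

print(type_scale(5))   # [16.0, 19.2, 23.04, 27.65, 33.18]
print(px_to_rem(24))   # 1.5
```

Having these relationships pinned down numerically makes it easy to verify whatever the LLM extracts against the stated ratio and base.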

Critical Requirements

Use Markdown Tables extensively for structured data (Colors, Typography, Spacing, Breakpoints).

All extracted values must be presented with their numeric value and unit (e.g. 16px, 1rem, 0.75).

Ensure all sections are present and populated with the available data from the Penpot extraction process.

Use descriptive markdown headings (#, ##, ###) for clear hierarchy.

DO NOT GENERATE ANOTHER FILE TO DOCUMENT THE PROCESS, JUST GENERATE design-system-documentation.md file.

Context: The Design System foundations (Color and Typography tokens) have already been synchronised between Penpot and the codebase (CSS variables, utility classes, etc.). The LLM has access to the Penpot Model Context Protocol (MCP) and the design-system-documentation.md file. Do not generate extra documentation of the process. Once you finish, show me the result by opening the HTML in the internal browser in VS Code.

——

Generate a complete, single-page solution for a standard User Login Screen. Add the files to my project.

The output must consist of two separate, non-documentary code blocks only:

HTML (semantic and structured).

CSS (modular and token-based)

The login screen must be sized for mobile.

Design & Consistency constraints:

Layout/Aesthetics: It must adhere to the overall aesthetic, visual hierarchy and layout style present in the Penpot file’s main screens.

Typography: Use the defined Text Style tokens.

Spacing: Apply spacing tokens for padding, margins and gaps within the form and container.

Components: Leverage the properties for input fields, buttons & card containers (like border-radius and box-shadow tokens) as defined by the system.

Output constraint: DO NOT generate introductory text, explanations, notes or markdown tables.

============

1. Introduction & Overview

This Design System (DS) serves as the foundational, single source of truth for the [Project Name] user interface and experience. Its primary purpose is to ensure visual consistency, improve collaboration between design and development teams, and enable scalability across digital products.

By utilizing pre-defined components, typography, and color tokens, this system reduces redundant work and accelerates development cycles.

Source Penpot File: [Insert Name of Penpot Project/File]

Date of Extraction: [YYYY-MM-DD]

Any advice on using a connected DAM - Cloudinary - to work with both systems well, would also be appreciated.

Hi @Mat I hope this helps!

You’re basically describing the ideal multi-directional workflow MCP enables, but there are a couple of important nuances to keep in mind.

First, yes: what you’re proposing is possible with Penpot MCP, and it’s actually one of its biggest strengths. You’re not limited to “design → code” anymore. You can go:

  • design → code

  • code → design

  • design → design (like generating flows or interactions)

That said, the quality of the result will depend heavily on how structured your source is and on the power of the LLM you’re using. If your Penpot file isn’t using proper layout (Flex), components, and tokens, the LLM will produce something, but it won’t be clean or reusable. MCP doesn’t fix messy design files; it amplifies whatever structure is already there.

Second, your prompts are powerful, but a bit overloaded. You’re asking the model to infer interactions, reconstruct layouts, generate full documentation, and output production-ready code all in one go. That’s too much if you want consistent results. You’ll get better outcomes if you split this into steps:

  1. Extract structure (tokens, components, layout logic)

  2. Validate / refine

  3. Generate documentation or code

Otherwise you risk mixing assumptions across tasks.
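That staged approach can be sketched as a plain pipeline, where each stage consumes the previous stage's validated output. `ask_llm` here is a hypothetical stand-in for whatever MCP-connected client you use; the point is the sequencing, not the API:

```python
# Hypothetical sketch: `ask_llm` stands in for an MCP-connected model call.
# Splitting the work into stages keeps assumptions from bleeding between tasks.

def ask_llm(prompt, context=""):
    # Placeholder: a real setup would call the MCP-connected model here.
    return f"<response to: {prompt}>"

def run_pipeline(penpot_export):
    # Stage 1: extract structure (tokens, components, layout logic).
    structure = ask_llm("Extract tokens, components and layout logic.", penpot_export)
    # Stage 2: validate / refine the extraction before building on it.
    validated = ask_llm("List inconsistencies or missing tokens.", structure)
    # Stage 3: only now generate documentation or code.
    return ask_llm("Generate design-system documentation.", validated)

doc = run_pipeline("...penpot file data...")
```

Each stage can be inspected (or corrected by a human) before the next one runs, which is exactly what one mega-prompt can't offer.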

On the Cloudinary / DAM side, one important clarification: Penpot MCP does not integrate directly with DAM systems like Cloudinary. It doesn’t fetch, sync, or manage external assets. It only operates on what exists inside the Penpot file.

So the correct setup is more about workflow than integration:

  • Cloudinary should be your source of truth for assets (with clear naming and structure)

  • Penpot should consistently use those assets (ideally avoiding random uploads or duplicates)

  • MCP will then read and work with those references as part of the design

If you expect the LLM to manage or coordinate assets across systems, things will break quickly. MCP sits on top of the design file. It doesn’t orchestrate your asset pipeline.

So, your direction is good and solid. This is exactly the kind of workflow MCP unlocks, but the key is structure, sequencing, and being clear about system boundaries. If you treat it like a pipeline, it becomes very powerful :slight_smile:


Would you please explain what a connected DAM and the cloud service are, why they are important to you, and in what role?

I feel like we are ALL trying to get the

  • design → code

  • code → design

  • design → design (like generating flows or interactions)

bridged, in particular with AI in the loop. I am experimenting with having:

  • design → graph(modeling app) → code

  • code → graph(modeling app) → design

Where the design is part of a model and is built with an interface that uses graphs as the Source of Truth (SoT). AI is really good at understanding graphs. It is not currently good at understanding what we want it to do when dealing with Penpot and the rules related to design layering, aspects and such. My planned experiment is to have AI help me build the functions needed to encode those intents and preferences. Doing so will build a deterministic pipe between the code and the design (Penpot). As those pipes get used, adjustments can be made to the functions until the design, graph and code all work together as desired.
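A minimal sketch of that idea (all names hypothetical): design and code artifacts become nodes in a truth graph, and the deterministic pipe is just typed edges between them that functions can traverse:

```python
# Toy source-of-truth graph linking design and code artifacts.
# All node names and relation labels here are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class TruthGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (source, relation, target)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def link(self, source, relation, target):
        self.edges.append((source, relation, target))

    def related(self, node_id, relation):
        """All targets reachable from node_id via the given relation."""
        return [t for s, r, t in self.edges if s == node_id and r == relation]

g = TruthGraph()
g.add_node("button.primary", kind="design-component", tool="penpot")
g.add_node("Button.tsx", kind="code-component")
g.link("button.primary", "implemented_by", "Button.tsx")
print(g.related("button.primary", "implemented_by"))  # ['Button.tsx']
```

With a structure like this, "design → graph → code" and "code → graph → design" are just traversals in opposite directions over the same edges.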

I really believe in Penpot, it is very much aligned with my own values and principles, and I want this to work. Give AI a picture of a concept or idea, have it push a design using an MCP that is really just a compiler for design, to build it inside Penpot. Make adjustments in Penpot and it comes back to the AI. Eventually we should be able to use this as training and improvement for the transpiler (that name just came to me, will have to note it - seems more accurate than compiler here).

I am trying to do everything in the open, my stuff is AGPL and CC-BY-SA and this is my side hobby - not a designer nor a developer by trade, just interest, motivation and ideas.

Still building the foundation needed to get the vision going. What I could really use are some real .penpot design files that are full apps of some kind, either web or mobile, that are fully fleshed out. Ideally incorporating a good, principled “design system” (still grasping that meaning) with @LauraKalbag’s moves towards breakpoints as design tokens, so that one design really is THE design. Anybody know of, or got, a .penpot file that meets those ideas? Again, all open. The idea is to collaborate and everyone benefits, not compete.


@juan.delacruz - that’s really helpful, thanks! Sounds like a good approach, not trying to connect Penpot directly to a DAM. The correct terminology is useful.

@IvanTheGeek - I connect a Digital Asset Management system to my dev project. Having such tools as the source of safety ensures non-technical people can view, curate and organise assets, makes it clearer to the LLM what’s important for automated tasks, and ensures it can’t delete all your images. The AI loves info organised into strict datasets.

Cloudinary is a great option - free to individuals. Cloudinary Command Line Interface (CLI) | Documentation


Via ChatGPT:

What is a Digital Asset Management (DAM) system in graphic design?

A Digital Asset Management (DAM) system is essentially the central nervous system for all design files in a creative workflow.

In the context of graphic design, it’s not just file storage—it’s a structured, searchable, versioned system for managing visual assets across their entire lifecycle.


:brain: What counts as a “digital asset”?

Typical assets include:

  • Logos (SVG, AI, PNG)
  • Design files (PSD, Figma, Sketch)
  • Images (JPEG, PNG, RAW)
  • Icons & illustrations
  • Brand kits (colors, typography, guidelines)
  • Videos, animations, marketing graphics

:building_construction: What a DAM system actually does

Think of a DAM as:

Git + database + search engine + CDN — but for design assets

1. Centralized storage

  • One “source of truth” for all assets
  • Eliminates “final_v3_REAL_final.png” chaos

2. Metadata & tagging

Assets are labeled with:

  • Project
  • Brand
  • Usage (web, print, social)
  • Permissions/licensing

This makes everything queryable, almost like a graph.


3. Version control (critical)

  • Track changes over time
  • Compare versions
  • Roll back if needed

Conceptually similar to Git:

  • Supports immutable history
  • Aligns with “append, don’t mutate” thinking

4. Search & retrieval

  • Quickly find assets:
    • “blue logo svg”
    • “spring campaign banner 2025”
  • Often includes AI-powered search (visual similarity, OCR, etc.)

5. Access control

  • Define who can:
    • View
    • Edit
    • Download

Important for teams, agencies, and clients.


6. Distribution

  • Share assets via links or portals
  • Provide correct formats automatically (web vs print, etc.)

:puzzle_piece: How designers use a DAM

Typical workflow:

  1. Designer creates asset (Figma, Photoshop, etc.)
  2. Uploads to DAM
  3. Adds metadata (tags, description, usage)
  4. System versions it
  5. Team searches and reuses it later

:brain: Why DAM matters

Without a DAM:

  • Files scattered across drives, Slack, email
  • Duplicate work
  • Wrong versions used
  • Brand inconsistency

With a DAM:

  • Consistency
  • Speed
  • Reusability
  • Governance

:link: Graph perspective (important insight)

A DAM can be thought of as a graph of design assets:

  • Asset = node
  • Version = time-based nodes
  • Relationships = edges
    • belongs to campaign
    • derived from another asset
    • used in UI screen
  • Metadata = properties or linked nodes
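A toy illustration of that graph view (every name here is a hypothetical example): assets and versions are nodes, relationships are edges, and queries become edge traversals:

```python
# Hypothetical asset data: nodes carry metadata, edges carry relationships.
nodes = {
    "logo": {"format": "svg", "brand": "acme"},
    "logo@v2": {"format": "svg", "date": "2025-03-01"},
    "spring-banner": {"format": "png", "campaign": "spring-2025"},
}
edges = [
    ("logo@v2", "version_of", "logo"),
    ("spring-banner", "derived_from", "logo"),
    ("spring-banner", "used_in", "home-screen"),
]

def connected(relation, target):
    """Every node linked to `target` by `relation`."""
    return [s for s, r, t in edges if r == relation and t == target]

print(connected("derived_from", "logo"))  # ['spring-banner']
print(connected("version_of", "logo"))    # ['logo@v2']
```

Questions like "which assets were derived from this logo?" or "which versions exist?" then fall out of simple edge lookups rather than folder archaeology.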

So instead of:

“A DAM is a tool”

Think:

A DAM is a lens over a design-focused graph


:vs_button: DAM vs simple file storage

| Feature | Google Drive / Dropbox | DAM |
| --- | --- | --- |
| Storage | :white_check_mark: | :white_check_mark: |
| Metadata search | :cross_mark: limited | :white_check_mark: |
| Versioning | :warning: basic | :white_check_mark: robust |
| Brand organization | :cross_mark: | :white_check_mark: |
| Asset relationships | :cross_mark: | :white_check_mark: |
| Workflow integration | :cross_mark: | :white_check_mark: |

:brick: Examples of DAM systems

  • Bynder
  • Brandfolder
  • Adobe Experience Manager Assets
  • Canto

:light_bulb: Key insight

A DAM is not just storage—it’s structured meaning applied to files.

In a broader system:

  • Assets become nodes
  • Metadata becomes relationships
  • Versions become a time graph

So a DAM can be viewed as:

A “Design Asset Lens” over a larger truth graph


Again via ChatGPT:

Yes — but there’s an important distinction to make first:

Cloudinary ≠ pure DAM
It’s actually media processing + delivery + storage + API

So when you look for “open source alternatives,” they tend to fall into two different categories:


:compass: 1. Closest to Cloudinary (self-hosted, API + transformations)

These are the true conceptual replacements.

:wrench: Openinary (very close match)

  • Openinary
  • Open source, self-hostable
  • URL-based transformations (like Cloudinary)
  • Built-in storage + API + optimization (WebP, AVIF)

:backhand_index_pointing_right: Key capability:

  • On-the-fly transformations for images and videos via URL
  • Full control, no vendor lock-in

:light_bulb: This is probably the closest thing right now to “self-hosted Cloudinary.”


:wrench: Appwrite (partial replacement)

  • Appwrite
  • Open source backend-as-a-service
  • Includes storage + file APIs

:backhand_index_pointing_right: Positioning:

  • Open-source alternative to Cloudinary storage features

:warning: But:

  • Less focused on advanced media transformation pipeline

:brain: Insight (important)

These tools map to your thinking as:

Asset node + transformation functions = graph edges applied at runtime


:toolbox: 2. Open-source DAM / asset libraries (different layer)

These are closer to “design asset management” than Cloudinary itself.

:card_index_dividers: Nextcloud (+ plugins)

  • Nextcloud
  • Fully self-hosted file platform
  • Permissions, sharing, metadata, search
  • Extensible ecosystem

:light_bulb: With plugins, it becomes a lightweight DAM


:card_index_dividers: Islandora (heavyweight DAM / repository)

  • Islandora
  • Used by libraries, archives, institutions
  • Strong metadata + taxonomy

:warning: Very heavy / enterprise / academic


:card_index_dividers: Invenio (CERN)

  • Invenio
  • Python-based
  • Research + asset repository

:framed_picture: digiKam (local-first DAM)

  • digiKam
  • Desktop DAM for images
  • Tagging, metadata, search

:brain: 3. Real-world gap (what builders run into)

Common theme:

Storage is easy… but image processing and delivery is the hard part.


Capability reality

| Capability | Open source availability |
| --- | --- |
| File storage | :white_check_mark: many options |
| DAM (organization) | :white_check_mark: many |
| Image transformation CDN | :warning: limited |
| Full Cloudinary replacement | :cross_mark: rare (emerging) |

:puzzle_piece: How to think about the stack (your Nexus lens)

This is actually a stacked system, not one tool:

Layer 1 — Storage

  • Nextcloud
  • S3-compatible storage (e.g., MinIO)

Layer 2 — DAM (organization / meaning)

  • Islandora / Invenio / custom graph

Layer 3 — Processing + delivery

  • Openinary (key missing piece)

:light_bulb: Recommended combinations (practical)

:test_tube: Closest Cloudinary replacement (self-hosted)

  • Openinary
    • S3 (MinIO, etc.)

:brick: Full “open-source DAM + delivery stack”

  • Nextcloud (asset storage + sharing)
  • Openinary (processing layer)

:brain: “Nexus-native” approach (aligned with your thinking)

  • Graph (truth graph) = source of meaning
  • Storage = blob layer
  • Processing = transformation functions

:backhand_index_pointing_right: DAM becomes:

a projection over the graph, not the system itself


:police_car_light: Key takeaway

Yes — there are open-source, self-hosted options, but:

There is no single mature open-source system that fully replaces Cloudinary’s entire stack (yet).

The closest emerging answer is:

  • Openinary (processing layer)
    combined with:
  • storage (Nextcloud / S3)
  • and optionally a DAM layer

If you want, we can go further into:

  • a TOML DSL for assets + transformations, or
  • modeling this as a Nexus subgraph (Asset → Variant → Usage → Delivery)

That’s where this gets especially powerful.

Cheers, I’ll check out the open-source Openinary.

Openinary looks great. Building a Docker container now to check it out. Love the name.

  • Setting up (free) storage looks complex.