I don’t like the way Penpot auto-sorts the names of library assets and tokens. When I name colors from 1 to 10, the color named “10” jumps to sit right next to the color named “1”. I’d prefer Penpot to simply keep each new entry after the previously created one, and to allow drag and drop to change position.
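For context, the behaviour described here is plain lexicographic sorting, where “10” sorts between “1” and “2” because strings are compared character by character. A natural-sort key fixes it by comparing digit runs numerically. This is a plain Python sketch to illustrate the idea, not Penpot code:

```python
import re

def natural_key(name):
    # Split the name into text and digit chunks so that "10" compares
    # numerically after "9" instead of lexicographically after "1".
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

colors = ["color-10", "color-1", "color-2"]
print(sorted(colors))                   # lexicographic: ['color-1', 'color-10', 'color-2']
print(sorted(colors, key=natural_key))  # natural: ['color-1', 'color-2', 'color-10']
```

Either natural sorting or manual drag-and-drop ordering would solve the problem described above.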
Hello,
I’m a graphic designer, and I found Penpot 2.11.0 very engaging and advanced, especially its design tokens. At this stage I can create a full design system with Penpot, and I love it.
However, the prototype stage can’t yet take full advantage of the design system. The interaction between variants isn’t completely encapsulated inside the component; it still requires an extra step. I’ll outline my workflow to illustrate the issue:
I create a component with two variants: default and active.
I add an interaction to the default variant and link it to the `active` variant:
PS: The relativeTo value has the same name as the destination value: Board. Even though I set the variant’s property to default and active, the values could at least be Board/default and Board/active, which would be more readable.
I add an interaction to the active variant that points nowhere (I find this step unintuitive; why not a toggle action that switches between the two variants?).
This prevents the interactions from being fully encapsulated within the component and reused without additional work. Instead, I have to set the relativeTo value each time I create an instance.
Besides, any further modification of interaction parameters at the parent‑component level is not inherited by instances. There is no single source of truth, which could be a big problem in a design system.
I’ve been testing Penpot from the start, and I’m impressed by how quickly it’s evolving. Thank you!
Hello @diacritica,
I tried out a few things with Penpot’s official MCP server, and I am curious to know whether there are any plans to add tools for workflow management to the current MCP toolset. These would help the model plan and keep track of the design process, for example:
Task Tracking: Breaking tasks into atomic, testable steps, adding them to a task list, and updating the list accordingly.
Error Handling: Preventing the model from repeatedly retrying arbitrary code, by analysing the error type, querying relevant API documentation, and updating its reasoning context.
Memory Management: Specifically for Penpot’s working environment, to prevent losing track of state across multiple code blocks or design elements.
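To make the first suggestion concrete, here is a minimal sketch of the kind of task-list state such a tool could maintain. All names here are hypothetical; this is plain Python illustrating the idea, not part of the actual Penpot MCP toolset:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    done: bool = False

@dataclass
class TaskList:
    # Minimal in-memory tracker; a real MCP tool would expose add/complete/
    # pending as callable tools backed by persistent state, so the model can
    # break work into atomic steps and tick them off as it goes.
    tasks: list = field(default_factory=list)

    def add(self, description):
        self.tasks.append(Task(description))

    def complete(self, index):
        self.tasks[index].done = True

    def pending(self):
        return [t.description for t in self.tasks if not t.done]

plan = TaskList()
plan.add("Create board for the login screen")
plan.add("Apply color tokens to the primary button")
plan.complete(0)
print(plan.pending())  # ['Apply color tokens to the primary button']
```

Even something this small would give the model a shared, inspectable plan instead of keeping the whole workflow implicit in its context window.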
Reasoning models will automatically plan tasks and execute them sequentially without requiring any additional external mechanisms. The use of reasoning models is highly recommended when working with the Penpot MCP server, especially for more complex tasks.
Error Handling
If your LLM writes code that applies the API incorrectly, this can be due to incorrect API documentation or insufficient instructions on how to apply the API correctly. We have made several improvements in recent weeks, so please make sure that you are using the latest version, which is now integrated into the Penpot repository.
Given adequate instructions/documentation, sufficiently smart models will, in our experience, typically succeed in writing working code, and they will even work around limitations and diagnose issues correctly in many cases.
If you encountered problems where a sufficiently smart LLM (such as Codex or Claude Opus) could not work with specific parts of the API, please let us know the specifics - either here or by creating an issue/discussion in the Penpot repository.
Memory Management
Models are instructed to use the built-in local persistence mechanism, which enables reuse of both code and data across tool executions, and, in our experience, advanced models will typically apply these mechanisms correctly.
Thank you for your quick response. I see your point, although ‘sufficiently smart’ is a rather relative term when the examples given are Codex and Opus, which are enormous frontier models.
I also found Opus 4.5 to be reliable and quick, for example, at recreating designs from code in Penpot. It also handles task tracking, error handling and memory management very well. However, I am experimenting with open-weight models, such as Qwen3, as I would like to develop an AI-assisted design workflow using small, locally running models, even if they only cover part of the process.
I didn’t realise that the Penpot repository now includes MCP. I was still working with penpot-mcp as a separate instance. Thank you for letting me know and for your reply.
Indeed. However, there is only so much you can do to compensate for a lack of intelligence. If you have provided all of the necessary information as well as the tools that are required to get a task done, but the LLM still fails because it does not adhere to the instructions or fails to infer that it needs to query information, then the LLM simply is not smart enough, and the best solution is to use a more advanced model.
I am experimenting with open-weight models, such as Qwen3
Please do share your experience. Note that you should opt for VLM variants, as visual inspection is very important for many tasks.