Recreating Penpot-MCP Demos

Hey everyone,

I was happy to hear about the release of an official Penpot-MCP, and I tried to recreate the examples shown on Penpot’s official YouTube channel and in Penpot’s AI paper; the latter refers to demo videos on Google Drive.

However, I could not recreate a single example as shown in these demos, so I am looking for guidance. It’s not that the results were “a bit off”; they did not resemble the desired output at all. I was therefore wondering whether there are more details on the Penpot MCP demos, e.g., the prompt templates, the LLMs, and the files used (Penpot files or source files).

Thanks in advance.

The demos I am referring to:
@juan.delacruz | MCP demos - Google Drive

YouTube - Quick demo: Penpot MCP server in action

Hi @ucan

I understand perfectly what you are saying, and I feel your frustration: working with LLMs and prompts can sometimes be very hard, and results vary a lot. It is definitely not magic.

To give you more context, almost all the use cases you see in our videos were made using Opus 4.5 model and the Cursor agent. This is very important because the model, the agent, and the prompt change everything.

I have attached here the exact Penpot file we used in the demos so you can test with the same source.

Also, after fighting a lot with agents, here is a brief summary of “Good Prompting” that works for us. If the input is weak, the output will be bad:

  1. Define the Role: Don’t just say “You are a designer.” Be specific. “You are a Senior Product Designer expert in Accessibility and Design Systems.”

  2. Structure the Prompt: Think of it like a User Story ticket. Give Context, Objective, Restrictions (e.g., “only use existing components”), and Quality Criteria.

  3. Images are key: The AI “sees” but needs guidance. Tell it exactly where to look in the screenshot (e.g., “focus on the negative space in the header”).

  4. Give specific Rules: Don’t expect it to read the full documentation. Give explicit rules: “Use only colors from /core/colors” or “Do not invent new font sizes.”

  5. Iterate: Do not expect a perfect result in “one-shot.” It is a conversation: Analysis → Proposal → Feedback → Adjustments.
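As a rough illustration (my own sketch, not an actual Penpot-tested prompt), the five points above can be combined into a single structured prompt; every field value below is a placeholder you would replace with your own project details:

```python
# Sketch of the structured-prompt idea from the tips above.
# All field values are placeholders, not a verified Penpot prompt.

def build_prompt(role, context, objective, restrictions, quality_criteria):
    """Assemble a structured design prompt from the five ingredients."""
    rules = "\n".join(f"- {r}" for r in restrictions)
    criteria = "\n".join(f"- {q}" for q in quality_criteria)
    return (
        f"Role: {role}\n\n"
        f"Context: {context}\n\n"
        f"Objective: {objective}\n\n"
        f"Restrictions:\n{rules}\n\n"
        f"Quality criteria:\n{criteria}"
    )

prompt = build_prompt(
    role="Senior Product Designer expert in Accessibility and Design Systems",
    context="A Penpot file with an existing component library under /core.",
    objective="Recreate the attached card design using existing components.",
    restrictions=[
        "Only use existing components",
        "Use only colors from /core/colors",
        "Do not invent new font sizes",
    ],
    quality_criteria=[
        "Layers are logically grouped and named",
        "Layout uses Flex Layout, not absolute positioning",
    ],
)
print(prompt)
```

Then iterate on the result in the conversation (analysis → proposal → feedback → adjustments) rather than expecting the first output to be final.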

I hope this helps you get better results. It is a trial and error process!


Thank you for the helpful tips and resources, I’ll try them out right away! As for the prompts, I tried to follow the examples in the demo videos, which wasn’t easy because most of them were cut off.

I also realised later that the Cursor IDE and Opus 4.5 had been used. As I don’t have a premium Cursor subscription, I could only set the model to ‘auto’, and with that I didn’t get a single result. Although the Penpot MCP server was running and connected, and I could see that the server and its tools were recognised correctly, the LLM still wouldn’t call them. As a workaround, the model generated complete JavaScript code for me to run manually in the Penpot API REPL. I tried that, but it didn’t work either.
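For anyone hitting the same “server connected but tools never called” symptom, it may be worth double-checking the MCP registration itself. Below is a generic sketch of the `mcpServers` entry format that Cursor-style clients read from their MCP config file; the server name, command, arguments, and environment variable are placeholders, not the actual Penpot MCP launch command (check Penpot’s MCP documentation for the real values):

```json
{
  "mcpServers": {
    "penpot": {
      "command": "npx",
      "args": ["-y", "penpot-mcp-server"],
      "env": {
        "PENPOT_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}
```

If the entry is malformed or the command fails silently, clients often still list the server as “connected” while the model never receives its tools.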

That’s why I’m currently working in VS Code, where I’ve tried various models such as Codex, GPT-5.2 and Opus 4.5. Unfortunately, I haven’t obtained any usable results yet. I also realised that my prompts were far too general. My goals are to recreate a finished HTML/CSS page in Penpot and to generate design variations using only components from the library.

So far, I have had the following results: (1) designs are not created at all, or only partially and not in a usable state; (2) components from the library are not used; or (3) elements are cut off.

I also made attempts without being too specific; for example, I prompted it to create ‘just’ a button. The BookNook website has been the most successful so far, but I’m no longer sure which model or prompt I used; it could have been GPT-5.2.

In any case, I’ll continue to try out your suggestions. Unfortunately, I can’t use Cursor here. I’ll let you know if I find a setup that works well.

Thank you all for your great work!


Here are examples of my first attempts. I mainly tried to create designs in Penpot using components from the library or to generate Penpot designs and components based on finished source code + reference image.

Cards

Card using components from the library

Recreating a card from HTML/CSS with a reference image

New wireframe based on an existing design

Web page

Web page using components

This example is the best so far, but I can’t get consistent results using the same workflow. I provided HTML/CSS and a reference image; the model was probably GPT-5.2.

Here is the failed attempt using Opus 4.5

and the prompt:

Take this HTML/CSS and recreate it in my current Penpot file. Do not just draw boxes: use 'Flex Layout' for the containers, convert the cards, buttons, list items into components and logically group the layers.

@included files: 
- index.html and style.css generated by Gemini 3 Pro 
- Reference Image