Penpot crashes when saving files

Hi!

I installed a Penpot instance via Docker on a Debian VM last November and everything was working perfectly.

Not knowing anything about Docker, I couldn’t update it correctly and had to reinstall everything in February. I used the official version of Docker and installed it on a specific /data partition to avoid saturating the system disk. Since then, I’ve had constant problems and Penpot has become very unstable:

Client side (Firefox or Chrome):

Even small projects take a long time to load completely, file saves take longer and longer (the blue “saving” dot stays on), and they eventually fail with an “internal error” page (status 504, code: repository-access-error).

At the same time, wss requests accumulate in the browser and increasingly return timeout errors.

Server side:

CPU consumption is enormous (around 120% on average across 2 CPUs, while RAM usage stays below 20%), even when Penpot is not open in any browser.

Penpot’s logging system seems to be the source of this overload.

The backend logs are saturated with continuous Redis BLPOP timeouts.

I’ve tried a lot of things: increasing the nginx and Valkey timeouts, manually enabling the workers via docker-compose (which generated a lot of zombie processes), then disabling them again (they still appear to be enabled, but I’m down to a single zombie process).

I followed the installation instructions carefully, checked my URI, and configured my nginx WAF (a front end running on another VM and shared with other services) as recommended.

If I restart everything, it works again, but it degrades within a few hours even if I don’t connect to Penpot.

I upgraded Penpot to the latest version available, and nothing changes no matter what I do.

I really want to use Penpot for my projects, but for now it’s unmanageable.

Does anyone have any ideas?

Thanks in advance to anyone who can help me out.

Have you looked at the Redis logs to see if they explain the timeout errors?
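If not, something like this should show them (service names are a guess based on the stock docker-compose.yaml, so adjust to whatever docker compose ps lists on your side):

docker compose ps
docker compose logs --tail=200 penpot-valkey

You could also measure latency from inside the container with valkey-cli --latency (or redis-cli --latency on older images) to see whether Valkey itself is slow to respond.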

This is all I have in the Valkey logs since my last restart:

1:M 18 Aug 2025 14:06:58.361 * oO0OoO0OoO0Oo Valkey is starting oO0OoO0OoO0Oo
1:M 18 Aug 2025 14:06:58.361 * Valkey version=8.1.3, bits=64, commit=00000000, modified=0, pid=1, just started
1:M 18 Aug 2025 14:06:58.361 # Warning: no config file specified, using the default config. In order to specify a config file use valkey-server /path/to/valkey.conf
1:M 18 Aug 2025 14:06:58.361 * monotonic clock: POSIX clock_gettime
1:M 18 Aug 2025 14:06:58.361 * Running mode=standalone, port=6379.
1:M 18 Aug 2025 14:06:58.361 * Server initialized
1:M 18 Aug 2025 14:06:58.361 * Ready to accept connections tcp
1:M 19 Aug 2025 02:01:36.935 * 1 changes in 3600 seconds. Saving…
1:M 19 Aug 2025 02:01:36.935 * Background saving started by pid 326764
326764:C 19 Aug 2025 02:01:37.045 * DB saved on disk
326764:C 19 Aug 2025 02:01:37.045 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 19 Aug 2025 02:01:37.136 * Background saving terminated with success
1:M 19 Aug 2025 07:16:40.286 * 1 changes in 3600 seconds. Saving…
1:M 19 Aug 2025 07:16:40.286 * Background saving started by pid 470088
470088:C 19 Aug 2025 07:16:40.347 * DB saved on disk
470088:C 19 Aug 2025 07:16:40.348 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 19 Aug 2025 07:16:40.386 * Background saving terminated with success
1:M 19 Aug 2025 08:53:27.899 * 1 changes in 3600 seconds. Saving…
1:M 19 Aug 2025 08:53:27.899 * Background saving started by pid 472327
472327:C 19 Aug 2025 07:21:35.869 * DB saved on disk
472327:C 19 Aug 2025 07:21:35.869 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 19 Aug 2025 07:21:35.933 * Background saving terminated with success

But I have plenty of these in the backend logs:

[2025-08-19 11:32:27.912] E app.worker.runner - hint="redis pop operation timeout, consider increasing redis timeout (will retry in some instants)", timeout=#app/duration "5s"
SUMMARY:
 →  io.lettuce.core.RedisCommandTimeoutException: Command timed out after 10 second(s) (ExceptionFactory.java:63)
DETAIL:
 →  io.lettuce.core.RedisCommandTimeoutException: Command timed out after 10 second(s) (ExceptionFactory.java:63)
    at: io.lettuce.core.internal.ExceptionFactory.createTimeoutException(ExceptionFactory.java:63)
        io.lettuce.core.internal.Futures.awaitOrCancel(Futures.java:233)
        io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:79)
        io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:86)
        jdk.proxy2.$Proxy32.blpop(:-1)
        app.redis$initialize_resources$reify$reify__24059.blpop(redis.clj:261)
        app.worker.runner$run_worker_loop_BANG_.invokeStatic(runner.clj:230)
        app.worker.runner$run_worker_loop_BANG_.invoke(runner.clj:151)
        app.worker.runner$start_thread_BANG_$fn__58548.invoke(runner.clj:263)
        promesa.exec$binding_conveyor_inner$fn__13480.invoke(exec.cljc:193)
        clojure.lang.AFn.run(AFn.java:22)
        java.lang.Thread.run(:-1)

Have you tried connecting to Penpot directly on the VM port without going through your nginx proxy to see if the 504 errors still happen?
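For example, open http://<vm-ip>:9001 in the browser (assuming the stock docker-compose.yaml, which publishes the frontend on port 9001; adjust if you mapped a different port), or do a quick reachability check from the VM itself:

curl -I http://localhost:9001

If the saves still end in 504s that way, the proxy is probably not the culprit.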

Yes, and it’s the same.

Hmmm… could it be some time mismatch between the host and the containers?
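You can check quickly by comparing the clocks in UTC (or as epoch seconds), which sidesteps the timezone display; the container name here is just an example, use whatever docker ps shows on your side:

date -u && date +%s
docker exec <penpot-backend-container> date -u
docker exec <penpot-backend-container> date +%s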

Maybe. I can see the containers are using UTC and my host is using CEST.
Can that be the problem?