I am looking for information about data management with Penpot. The size of the Docker volumes (mainly assets) of my self-hosted Penpot instance keeps growing, and I would like to clean up to free disk space.
For example, when deleting a user (manage.py delete-profile), is the data in their personal Penpot workspace also deleted? If not, is there a way to delete it manually once the user is gone?
Likewise, when deleting a team, are all of its assets properly deleted?
Any other tips for freeing up disk space are welcome.
The manage.py delete-profile command removes the user record but historically has not cleaned up all associated assets from the storage backend. The media objects (uploaded images, fonts, etc.) can remain in the assets Docker volume as orphans.
A few things you can do:
Check for orphaned assets - Penpot stores file references in PostgreSQL. After deleting profiles/teams, you can query the database to find media objects that are no longer referenced by any file:
SELECT so.* FROM storage_object so
WHERE NOT EXISTS (
  SELECT 1 FROM file_media_object fmo WHERE fmo.media_id = so.id
);
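Note that file_media_object is not the only table that can reference a storage object; depending on the schema version, file thumbnails, team fonts, and profile/team photos may hold references too. The sketch below is more conservative and also estimates reclaimable space. The table and column names here are assumptions based on a recent Penpot schema, so verify them (e.g. with \d storage_object in psql) before acting on the results:

```sql
-- Candidate orphans: storage objects referenced by none of the
-- tables below. Table/column names are assumptions; check them
-- against your schema version before trusting the output.
SELECT count(*) AS orphan_count,
       pg_size_pretty(coalesce(sum(so.size), 0)) AS reclaimable
FROM storage_object so
WHERE so.deleted_at IS NULL
  AND NOT EXISTS (SELECT 1 FROM file_media_object f
                  WHERE f.media_id = so.id OR f.thumbnail_id = so.id)
  AND NOT EXISTS (SELECT 1 FROM team_font_variant t
                  WHERE so.id IN (t.woff1_file_id, t.woff2_file_id,
                                  t.otf_file_id, t.ttf_file_id))
  AND NOT EXISTS (SELECT 1 FROM profile p WHERE p.photo_id = so.id)
  AND NOT EXISTS (SELECT 1 FROM team tm WHERE tm.photo_id = so.id);
```

Treat this as a read-only report. Deleting the rows directly is not recommended, because the corresponding files in the storage backend would remain behind; let the GC handle the actual removal.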
Run the garbage collection task - Penpot has built-in GC tasks that should clean up orphaned objects. Depending on your Penpot version, you may be able to trigger one manually from the backend container:
python3 manage.py run-task storage-gc-touched
(Not every version of manage.py exposes a run-task subcommand; check its help output first.)
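If run-task is not available in your manage.py, one fallback is to lean on the scheduled GC instead: the storage-gc-touched task re-examines objects whose touched_at timestamp is set. The sketch below marks orphan candidates so the next scheduled run evaluates and deletes them. The touched_at mechanism and all table/column names are assumptions from a recent schema; back up the database before trying it:

```sql
-- Mark unreferenced storage objects as "touched" so the scheduled
-- storage-gc-touched task re-checks them on its next run.
-- Schema names are assumptions; back up before running.
UPDATE storage_object so
SET touched_at = now()
WHERE so.deleted_at IS NULL
  AND NOT EXISTS (SELECT 1 FROM file_media_object f
                  WHERE f.media_id = so.id OR f.thumbnail_id = so.id);
```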
Team deletion - When you delete a team, the associated projects and files should be soft-deleted first (moved to trash). The storage GC runs later to clean the actual assets. If you deleted the team directly without going through the trash flow, assets may be orphaned.
Monitor with du - Track which volume is growing fastest. Usually penpot-assets is the culprit. The penpot-postgres volume grows more slowly.
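To see what is driving growth inside the penpot-postgres volume itself, a standard PostgreSQL query (no Penpot-specific assumptions) lists the largest tables:

```sql
-- Ten largest tables by total size (data + indexes + TOAST).
SELECT relname AS table_name,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```

A large file table usually points at file history snapshots rather than media, which the storage GC will not reclaim.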
For ongoing management, consider running the storage GC task on a cron schedule rather than relying on the automatic triggers.
Thank you for the reply.
Indeed, the teams were deleted without prior soft deletion.
Point 2 “Run the garbage collection task” seems to be what I need to delete the orphaned objects (found in point 1).
Unfortunately, the manage.py command inside the backend container does not allow running the command as indicated: