A Toolkit for Assessing Cloudron Packaging Difficulty
The 70% Problem
Cloudron staff have said publicly that packaging an app takes around 40 hours. That number sounds right, but it hides an important distribution: roughly 30% of the effort is getting the app running inside a Cloudron container. The other 70% is SSO integration, upgrade path testing, backup correctness, and ongoing maintenance.
Most of the community discussion on the forum focuses on that first 30%. Someone asks “can we package X?” and the response is usually about Dockerfiles and start scripts. The harder question is: once it is packaged, what does it cost to keep it working? How often does the upstream break things? Does it have a sane authentication layer? Will backups actually restore?
The toolkit described in this post tries to answer those questions before you commit to the work.
What the Toolkit Does
The toolkit has three parts:
- An AI assessment agent that reads a GitHub repository and produces a structured report
- An interactive HTML scorer with pre-scored apps, a manual scoring interface, and a GitHub auto-lookup
- A packaging reference document with a verified base image inventory for Cloudron 9.1.3
The assessment produces two scores:
Structural difficulty (0 to 14): how hard is it to get the app running inside Cloudron? This covers processes, database requirements, runtime availability, message broker needs, filesystem write patterns, and authentication support.
Compliance and maintenance cost (0 to 13): how hard is it to keep it running well as a Cloudron citizen? This covers SSO quality, upstream release stability, backup complexity, platform model fit (ports, protocols, proxying), and configuration drift risk.
Each axis is scored 0 to 3 with specific criteria. Every score comes with evidence from the repo’s actual files, not guesses from the README.
The Scoring Axes
Structural (Axis A)
A1. Processes scores how many daemons the app needs. A single process scores 0. An app that needs Nginx plus a backend plus a worker plus a scheduler scores 3. Cloudron does not run systemd, so every process needs to be managed in the start script.
A2. Data storage scores database complexity. A standard MySQL or PostgreSQL database that maps to a Cloudron addon scores 0. Multiple databases, or SQLite with manual backup handling, scores higher.
A3. Runtime scores whether the language and dependencies exist in the base image. Python 3, Node.js 22, PHP 8.3, Ruby 3.2, Go, Java 21, and .NET 8 are all in cloudron/base:5.0.0. An app needing Lua from an external apt repo scores 1. An app needing Erlang with a custom OTP build scores 2.
A4. Message broker scores whether the app requires Redis, RabbitMQ, or similar. No broker is 0. Redis as a cache is 1. Redis as a required broker with persistence is 2.
A5. Filesystem writes scores how many paths the app writes to and how many symlinks you need. A clean app with one data directory is 0. An app that scatters writes across /etc, /var/lib, /var/log, and /tmp with configuration files that must be generated is 2.
A6. Authentication scores SSO readiness. Native LDAP or OIDC with clean configuration is 0. Partial support that needs workarounds is 1. No auth support at all is 2.
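Cloudron containers do not run systemd (as noted under A1), so multi-process apps are typically managed by a process supervisor launched from the start script. A hedged sketch of a supervisord.conf for a hypothetical app with a backend and a worker — program names and paths are illustrative, not from any real package:

```ini
[supervisord]
nodaemon=true                     ; keep supervisord in the foreground for the container
logfile=/run/app/supervisord.log  ; /run is writable at runtime in Cloudron containers

[program:backend]
command=/app/code/bin/backend --config /app/data/config.yml
autorestart=true
stdout_logfile=/dev/stdout        ; forward logs to the container's stdout
stdout_logfile_maxbytes=0
redirect_stderr=true

[program:worker]
command=/app/code/bin/worker
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
redirect_stderr=true
```

Every extra `[program:...]` section here is another daemon the A1 axis counts against you.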
Compliance (Axis B)
B1. SSO quality scores how well the SSO integration actually works. Full OIDC with group mapping is 0. LDAP bind mode that restricts auth mechanisms is 1. No SSO path at all is 2.
B2. Upstream stability scores how predictable the project’s releases are. Mature, semver, no breaking changes is 0. Active but with occasional config-level breakage is 1. Frequent breaking changes or erratic release patterns is 2.
B3. Backup complexity scores whether Cloudron’s standard backup handles everything. Database addon plus flat files is 0. SQLite needing special snapshot handling is 1. Multiple stateful services needing coordinated backup is 2.
B4. Platform fit scores protocol and networking compatibility. Standard HTTP behind the reverse proxy is 0. WebSocket needing configuration is 1. Raw TCP ports, custom TLS certificates, or DNS SRV records is 3.
B5. Config drift scores runtime mutation risk. Pure environment variable configuration is 0. A generated config file from a template is 1. Plugin systems, auto-updaters, or runtime code generation is 2.
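The two axis totals map naturally to a difficulty tier. A minimal Python sketch of that mapping — the tier names and thresholds here are illustrative assumptions, not the scorer's exact cutoffs:

```python
def difficulty_tier(structural: int, compliance: int) -> str:
    """Map the two axis totals (0-14 structural, 0-13 compliance)
    to a coarse tier. Thresholds are assumptions for illustration."""
    if not (0 <= structural <= 14 and 0 <= compliance <= 13):
        raise ValueError("score out of range")
    total = structural + compliance
    if total <= 5:
        return "easy"
    if total <= 11:
        return "moderate"
    if total <= 18:
        return "hard"
    return "very hard"

# Prosody from the example below: structurally trivial,
# but with real compliance cost.
print(difficulty_tier(2, 7))  # moderate under these assumed thresholds
```

The point of keeping the totals separate, rather than collapsing them immediately, is that a high compliance score warns about ongoing cost even when the structural score looks harmless.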
An Example: Prosody XMPP Server
Here is a condensed version of the Prosody assessment:
Structural difficulty: 2/14
Compliance/maintenance: 7/13
Confidence: High

Structurally, Prosody is simple. Single Lua process, supports MySQL and PostgreSQL via Cloudron addons, has native LDAP since v0.12, and all the dependencies come from a Debian package.
The compliance score is where it gets interesting. Prosody needs raw TCP ports (5222 for client connections, 5269 for server federation), which Cloudron supports via tcpPorts in the manifest. But XMPP requires TLS certificates for the bare domain (not the app subdomain), DNS SRV records that Cloudron does not manage, and component subdomains for multi-user chat. The config file is Lua code that must be generated dynamically on every boot.
None of these are packaging bugs. They are XMPP protocol requirements that affect any XMPP server on any hosting platform. But the assessment quantifies them so you know what you are signing up for before writing a single line of Dockerfile.
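For reference, raw TCP ports like Prosody's are declared in the manifest's tcpPorts field, where each key becomes an environment variable holding the port Cloudron allocates. A sketch for the two XMPP ports — the titles and descriptions are illustrative, not taken from an actual Prosody package:

```json
{
  "tcpPorts": {
    "XMPP_C2S_PORT": {
      "title": "XMPP client port",
      "description": "Client-to-server connections (standard 5222)",
      "defaultValue": 5222
    },
    "XMPP_S2S_PORT": {
      "title": "XMPP federation port",
      "description": "Server-to-server federation (standard 5269)",
      "defaultValue": 5269
    }
  }
}
```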
The Interactive Scorer
The HTML tool (cloudron-scorer.html) is a single 40 KB file with no dependencies. Open it in any browser or host it on a Surfer instance. It has four tabs:
Score an app: select options on each axis, see a live difficulty tier with colour coding and estimated packaging time.
Pre-scored apps: a gallery of roughly 40 apps from the Cloudron forum wishlist. Each has scores, a tier, and an expandable breakdown. Filterable by difficulty level. Early-stage projects are tagged so you can see which ones are not yet mature enough to package.
GitHub lookup: paste a GitHub URL, and the tool fetches the repo via the GitHub API (runs in your browser, no server needed). It scans key files for database and broker references, detects runtimes, and gives a colour-coded difficulty estimate. Note: GitHub allows 60 unauthenticated API requests per hour.
How to use: a step-by-step guide for manually assessing any app without the AI agent.
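The lookup's heuristics are simple keyword scans over key files. A rough Python equivalent of the idea, run here against an inline docker-compose snippet rather than the GitHub API — the keyword lists are assumptions about what such a scan checks, not the tool's actual implementation:

```python
# Keywords that hint at database and broker requirements.
DB_HINTS = {"postgres": "PostgreSQL", "mysql": "MySQL", "mariadb": "MySQL",
            "mongo": "MongoDB", "sqlite": "SQLite"}
BROKER_HINTS = {"redis": "Redis", "rabbitmq": "RabbitMQ", "kafka": "Kafka"}

def scan_compose(text: str) -> dict:
    """Return the database and broker hints found in a compose file."""
    lowered = text.lower()
    return {
        "databases": sorted({name for kw, name in DB_HINTS.items() if kw in lowered}),
        "brokers": sorted({name for kw, name in BROKER_HINTS.items() if kw in lowered}),
    }

sample = """
services:
  app:
    image: example/app
  db:
    image: postgres:16
  cache:
    image: redis:7
"""
print(scan_compose(sample))
# {'databases': ['PostgreSQL'], 'brokers': ['Redis']}
```

A hit on "postgres" maps cleanly to a Cloudron addon (low A2 score); a hit on "redis" raises the A4 question of whether it is a cache or a required broker.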
The AI Assessment Agent
The agent is a Claude Project system prompt. You create a project, paste the instructions in, and start conversations with “Assess this app for Cloudron packaging:” followed by a GitHub URL.
The agent will:
- Fetch and read the repo’s README, docker-compose.yml, Dockerfile, and package manifests
- Search for SSO, LDAP, and OIDC documentation
- Check the release history for stability patterns
- Score all eleven axes with specific evidence
- Produce a structured markdown report with a packaging approach and key risks
The output is designed to be posted directly as a forum reply on a wishlist thread.
Limitations
The agent reads code and documentation. It cannot test anything at runtime. SSO integration, filesystem write paths, WebSocket behaviour, and backup restore all need manual verification on a live Cloudron instance. It also tends to be slightly optimistic on structural scores, so if an app seems too easy, check the compliance axis and the key risks section.
How to Use the Toolkit
Quick assessment (5 minutes): open the HTML scorer, go to the GitHub lookup tab, paste the repo URL. You get a rough difficulty estimate instantly.
Detailed assessment (30 minutes): set up the Claude Project with the agent instructions. Give it a repo URL. Read the report, cross-reference with the packaging reference document, and decide whether to proceed.
Before committing to packaging: check both axes. A structurally trivial app (score 1) with high compliance cost (score 7) may be harder to maintain long term than a structurally moderate app (score 5) with low compliance cost (score 2).
Getting the Files
The complete toolkit is available at forgejo.wanderingmonster.dev/root/cloudron-packaging. You can also try the interactive scorer directly in your browser.
The key files:
| File | Purpose |
|---|---|
| cloudron-assessment-agent.md | Claude Project instructions (the agent itself) |
| cloudron-packaging-reference.md | Verified base image inventory and packaging patterns |
| cloudron-scorer.html | Interactive scorer, app gallery, GitHub lookup |
| example-assessment-facilmap.md | Full example agent output |
The scorer HTML works offline. The agent needs a Claude Pro or Team account.
Why This Matters
The Cloudron app store has a long wishlist. Community members regularly volunteer to package apps, then discover halfway through that SSO integration is a dead end, or that the upstream ships breaking changes every month, or that the app writes to seventeen different directories. The assessment does not eliminate that risk, but it quantifies it upfront.
If someone on the forum asks “should we package X?”, the answer should include a difficulty score and a compliance cost, not just “it has a Dockerfile.” The Dockerfile is the easy part.