Tier A — deploy automatically
- Static Vite/React apps that build to dist/
- Static Astro / static export apps
- Node SSR apps that expose a normal server process
- Conventional Express / Fastify / Koa / Node web apps
Research report
A practical report for a system where users bring their own server and a Lovable Git repo, and your platform decides whether to deploy it, adapt it, partially support it, or reject it with a useful reason.
Your current system already proves the easy part: static apps are straightforward. `npm install`, `npm run build`, get `dist/index.html`, point nginx at it, done. For conventional Node SSR, PM2 plus nginx reverse proxy also works well.
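That static flow fits in one small nginx server block. A minimal sketch, with placeholder domain and paths, including the SPA fallback that client-side routers need on hard refresh:

```nginx
server {
    listen 80;
    server_name app.example.com;      # placeholder domain

    root /var/www/app/dist;           # the Vite build output
    index index.html;

    # SPA fallback: unknown paths serve index.html so client-side
    # routing survives deep links and page reloads.
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Vite emits content-hashed filenames under assets/, so they can
    # be cached aggressively.
    location /assets/ {
        add_header Cache-Control "public, max-age=31536000, immutable";
    }
}
```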
The real product challenge starts when a repo looks like a frontend app but actually depends on an external backend, a serverless provider runtime, a framework adapter, or a Cloudflare-specific execution model. That means your product should not be a shell-script launcher. It should be a classifier + build probe + bounded compatibility engine.
What Lovable usually generates
This is the common case. You usually get package.json, vite.config.*, index.html, src/, Tailwind-ish styling, and a standard npm install + npm run build flow. If output lands in dist/, your nginx static flow is exactly the right move.
Many Lovable repos use Supabase or Firebase from the frontend only. These are still deployable as static apps, but they depend on external managed services and correct environment variables, auth callback URLs, and backend readiness.
Some repos include supabase/functions, supabase/migrations, firebase.json, Firestore rules, or Firebase functions. The frontend may still be deployable on your server, but those artifacts are not generic VPS backend code and should not be mistaken for PM2 targets.
Sometimes the repo includes server/, api/, backend/, Express, Fastify, or a start script that launches an HTTP server. These fit your PM2 + nginx model if the build probe confirms they behave like a normal Node service.
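For repos that pass that probe, a PM2 ecosystem file keeps the process definition explicit and reproducible. The app name, entrypoint, and port below are illustrative, not Lovable conventions; the entrypoint would come from whatever the build probe identified:

```javascript
// ecosystem.config.js — PM2 process definition (hypothetical name/paths)
module.exports = {
  apps: [
    {
      name: "lovable-app",           // assumed app name
      script: "server/index.js",     // entrypoint found by the build probe
      instances: 1,
      autorestart: true,
      max_memory_restart: "300M",    // restart if the process leaks past this
      env: {
        NODE_ENV: "production",
        PORT: 3001,                  // nginx reverse-proxies to this port
      },
    },
  ],
};
```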
This is where the pain begins. A repo with wrangler.toml, Cloudflare bindings, Workers semantics, or TanStack Start on a Cloudflare target is not the same species as a normal VPS deploy. Treat these as compatibility cases or rejections, not as ordinary SSR.
Recommended system design
Read package.json, lockfiles, framework config, wrangler files, monorepo manifests, server entrypoints, and deployment scripts. Extract runtime clues before running anything.
Classify the repo into one of: static SPA, static SSG, Node web app, Node SSR app, provider-managed frontend, edge-runtime app, or unknown. Use framework-specific rules instead of generic folder-guessing cosplay.
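The detect-and-classify steps can be sketched as ordered rules over the repo's file list and manifest. The signal names below mirror the decision matrix in this report, but the rule set is illustrative and deliberately incomplete, assuming the repo is already checked out:

```typescript
// Repo categories from the classification step (static-ssg and the
// provider-managed-frontend split are omitted in this sketch).
type RepoType =
  | "static-spa" | "node-web" | "node-ssr"
  | "provider-managed" | "edge-runtime" | "unknown";

interface Signals {
  files: string[];                  // repo-relative paths
  scripts: Record<string, string>;  // package.json scripts
  deps: Record<string, string>;     // dependencies + devDependencies merged
}

// Ordered rules: edge and provider markers are checked before the generic
// static/server heuristics, so a Workers repo never falls through to "SPA".
function classify(s: Signals): RepoType {
  const has = (p: string) =>
    s.files.some(f => f === p || f.startsWith(p + "/"));
  if (has("wrangler.toml") || has("wrangler.json")) return "edge-runtime";
  if (has("supabase/functions") || has("firebase.json") || has("firestore.rules"))
    return "provider-managed";
  if (["express", "fastify", "koa"].some(d => d in s.deps)) return "node-web";
  if (/\b(next|nuxt) start\b/.test(s.scripts.start ?? "")) return "node-ssr";
  if (has("index.html") && "vite" in s.deps) return "static-spa";
  return "unknown";
}
```

The ordering is the important design choice: the cheapest disqualifying markers run first, so a repo with both `index.html` and `wrangler.toml` is classified by its runtime, not its looks.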
Run install and build in a sandbox. Inspect whether the result is dist/, out/, build/, .output/server, .next, a worker bundle, or total chaos. This is the part that catches repos that lie in package.json.
If the output is static, verify index rendering. If the output is server-like, boot it with an injected PORT, wait for a bind, and hit a smoke URL. If it looks like a worker runtime bundle, stop and classify it accordingly.
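The build-probe and smoke-test steps can share one module: a pure check on what the build actually emitted, plus a boot sketch that injects PORT and polls a smoke URL. The output-dir rules are illustrative (the `_worker.js` check follows the Cloudflare Pages convention), and `smokeBoot` is a sketch, not hardened process management:

```typescript
import { spawn } from "node:child_process";

type OutputKind = "static" | "node-server" | "worker-bundle" | "unknown";

// Decide what a finished build produced from the paths it emitted.
// Worker bundles are checked before static output on purpose: a dist/
// that contains _worker.js is not a plain static site.
function classifyBuildOutput(paths: string[]): OutputKind {
  const has = (p: string) =>
    paths.some(f => f === p || f.startsWith(p + "/"));
  if (has(".output/server") || has("build/server")) return "node-server";
  if (paths.some(f => f.endsWith("_worker.js"))) return "worker-bundle";
  if (has("dist/index.html") || has("out/index.html") || has("build/index.html"))
    return "static";
  return "unknown";
}

// Sketch: boot a server-like entrypoint with an injected PORT, then poll
// a smoke URL until it answers or the retry budget runs out.
async function smokeBoot(entry: string, port = 3999): Promise<boolean> {
  const child = spawn("node", [entry], {
    env: { ...process.env, PORT: String(port), HOST: "127.0.0.1" },
    stdio: "ignore",
  });
  try {
    for (let i = 0; i < 20; i++) {                 // ~10s of retries
      await new Promise(r => setTimeout(r, 500));
      try {
        const res = await fetch(`http://127.0.0.1:${port}/`);
        return res.status < 400;
      } catch { /* not bound yet, keep waiting */ }
    }
    return false;
  } finally {
    child.kill("SIGTERM");
  }
}
```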
Choose exactly one outcome: deploy directly, deploy in compatibility mode, partial deploy with warnings, or reject with evidence. No vague “maybe it works” branch. That is how support tickets multiply.
Decision matrix
| Repo type | Strong signals | Recommended outcome |
|---|---|---|
| Static Vite/React SPA | vite.config.*, index.html, src/main.*, build => dist, no server or worker markers | Deploy directly with nginx static hosting + SPA fallback |
| Static app with Supabase/Firebase SDK only | @supabase/supabase-js or firebase SDK, VITE_* envs, no local functions runtime | Deploy directly, but require envs and warn about external backend readiness |
| Static frontend + provider-managed backend artifacts | supabase/functions, supabase/migrations, firebase.json, firestore.rules, Firebase functions | Deploy frontend only or mark backend as out-of-scope; never pretend those are PM2 apps |
| Conventional Node SSR / web server | next start, nuxt start, express/fastify/koa, node server.js, .output/server | Deploy with PM2 + nginx reverse proxy |
| Framework edge target that can retarget Node | recognized framework + official Node adapter path + no hard Cloudflare-only dependencies | Offer compatibility mode by switching adapter/preset, then rebuild and deploy |
| True Cloudflare runtime app | wrangler.toml + Durable Objects / KV / R2 / D1 / Queues / service bindings / waitUntil semantics | Reject for VPS deployment and explain why |
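The matrix collapses into a small, auditable mapping from repo type to verdict. One illustrative encoding, with outcome labels assumed and the env-warning branch for SDK-only static apps folded into plain "deploy" for brevity:

```typescript
type RepoType =
  | "static-spa" | "static-ssg" | "node-web" | "node-ssr"
  | "provider-managed" | "edge-runtime" | "unknown";

interface Decision {
  outcome: "deploy" | "compat-mode" | "partial" | "reject";
  note: string;   // surfaced to the user as evidence/explanation
}

// hasNodeAdapterPath is whatever the detect step learned about an
// official Node adapter/preset being available (assumed boolean here).
function decide(type: RepoType, hasNodeAdapterPath: boolean): Decision {
  switch (type) {
    case "static-spa":
    case "static-ssg":
      return { outcome: "deploy", note: "nginx static hosting with SPA fallback" };
    case "node-web":
    case "node-ssr":
      return { outcome: "deploy", note: "PM2 + nginx reverse proxy" };
    case "provider-managed":
      return { outcome: "partial",
               note: "frontend only; provider backend artifacts are out of scope" };
    case "edge-runtime":
      return hasNodeAdapterPath
        ? { outcome: "compat-mode", note: "switch adapter/preset to Node, rebuild" }
        : { outcome: "reject", note: "depends on Cloudflare platform primitives" };
    default:
      return { outcome: "reject", note: "could not classify repo with confidence" };
  }
}
```

Keeping this as one pure function means every verdict the platform emits can be replayed and audited from the detected signals alone.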
Compatibility mode
Best option when a framework officially supports both Cloudflare and Node. Example: change from a Cloudflare adapter/preset to a Node adapter/preset, rebuild, and run the resulting server output. Keep this explicit and auditable.
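As one concrete instance, SvelteKit ships official adapters for both targets, so compatibility mode is a one-line config change plus a rebuild; Astro (`@astrojs/node`) and Nitro-based frameworks (the `node-server` preset) follow the same pattern. A sketch of the swap:

```javascript
// svelte.config.js — before: Cloudflare target
// import adapter from "@sveltejs/adapter-cloudflare";

// after: official Node target; the rebuilt output runs with `node build`
import adapter from "@sveltejs/adapter-node";

export default {
  kit: {
    adapter: adapter({ out: "build" }), // emits a standalone Node server in build/
  },
};
```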
If the app uses an SSR-capable framework but does not actually need SSR behavior, try a static export path and deploy the generated assets instead of forcing a server runtime.
For simple worker-style apps that only expose a fetch handler and avoid Cloudflare-only services, a thin Node wrapper can sometimes translate HTTP requests into Web Request/Response handling. Use this sparingly. It is a compatibility mode, not a religion.
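A minimal sketch of that wrapper on Node 18+, where `Request`/`Response` are global. Request bodies and streaming are deliberately ignored, and `worker` is a stand-in for the repo's exported fetch handler, not a real import:

```typescript
import http from "node:http";

// Stand-in for the app's worker module: an object exposing fetch(request).
// A real integration would import the repo's built worker entry instead.
const worker = {
  async fetch(request: Request): Promise<Response> {
    const { pathname } = new URL(request.url);
    return new Response(`hello from ${pathname}`, {
      headers: { "content-type": "text/plain" },
    });
  },
};

// Translate Node's IncomingMessage into a Web Request. GET/HEAD only:
// bodies and streaming need real adaptation before production use.
function toWebRequest(req: http.IncomingMessage): Request {
  const url = `http://${req.headers.host ?? "localhost"}${req.url ?? "/"}`;
  const headers = new Headers();
  for (const [k, v] of Object.entries(req.headers))
    if (typeof v === "string") headers.set(k, v);
  return new Request(url, { method: req.method, headers });
}

const server = http.createServer(async (req, res) => {
  const response = await worker.fetch(toWebRequest(req));
  res.writeHead(response.status, Object.fromEntries(response.headers));
  res.end(Buffer.from(await response.arrayBuffer()));
});

// Guarded so importing this module does not bind a port.
if (process.argv.includes("--serve")) {
  server.listen(8787, () => console.log("worker shim on :8787"));
}
```

The hard parts this sketch skips (request bodies, streaming responses, `waitUntil`, and any Cloudflare binding) are exactly why the wrapper should stay limited to fetch-only workers.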
Rejection discipline
When the verdict is rejection, reject with evidence: name the exact markers the probe found (a wrangler.toml, Durable Object or KV bindings, a worker-only entrypoint), say which runtime they require, and state what would have to change for the repo to qualify. A rejection that explains itself is a product feature; a vague failure is tomorrow's support ticket.
Best practical recommendation
1. Make static apps absolutely bulletproof. That is the highest-volume, highest-confidence Lovable case, and it already fits your nginx flow perfectly.
2. Add excellent support for Node SSR and conventional Node apps. This is the next biggest win. If the repo can be built into a normal Node process, PM2 + nginx remains the clean default.
3. Add bounded compatibility for official adapter switches. If a framework can move from a Cloudflare/edge target to a Node target using an official adapter or preset, automate that path carefully and show the user exactly what changed.
4. Reject true edge-platform apps cleanly. If the repo depends on Cloudflare platform primitives, do not pretend your prepared VPS is secretly Cloudflare in a trench coat. Reject it with evidence and a useful explanation.
Bottom line
The right story is not “paste any repo and we deploy everything.” The right story is: paste your Lovable repo and server, and we will analyze it, deploy what is compatible, adapt what is safely adaptable, and reject what does not belong on a generic VPS.
That positioning is stronger, more truthful, and much easier to operate at scale than pretending every SSR or edge runtime can be force-fed into the same deployment path.