From Custom Python per Customer to a Parameterized Pricing Engine
Smartness is an AI-powered dynamic pricing platform for hotels. In 2022 we were in full startup mode: ship for the customer in front of you, iterate fast, and keep the commercial engine running. The natural way to accommodate customer-specific behaviour was a custom Python pipeline per customer, behind a common entrypoint—not because we thought that was the end state, but because it maximized short-term fit when every deal mattered. Two hotels might run similar logic, but that was coincidence, not a shared product surface: the usual way to get “feature X like customer Y” was to reuse pieces from an existing pipeline, not to design a parameter up front. Forks, special cases, and one-off logic lived in code instead of a database. After about a year of that pattern, the cost profile had shifted: every tweak still required a deploy, reviews still needed someone who knew that customer’s history, and the operational load was rising just as the business needed to compound rather than hand-craft each account.
Getting out took a full year of migration—and the whole time, the customer base was still growing, so the surface area kept expanding under us.
This post is about the step after that: a single codebase where differentiation is parameters, not code—and how we got there gradually, with no downtime, including previews so internal tools and the product could show what prices would be before anyone committed a customization.
(We have since simplified the architecture again; a short note on that at the end.)
The old shape: code as configuration
The trade-off is a common one at high velocity: a customer needs a slightly different floor, a different blend of model outputs, or an extra guardrail, and the answer is a new or edited pipeline file—often assembled by borrowing from whoever already had something close. Repeat that a few dozen times and you no longer have a single mental model of “the engine”; you have many parallel stories, and a lot of the map lives in people’s heads (in our case, alongside HubSpot CRM, where customer requests were tracked).
That made changes slower and riskier than we wanted at scale. It also made it harder to reason about what “the product” even was: the behaviour was the union of all those pipelines, and working out which customers had a particular capability—to decide how the product should evolve—was mostly a manual code-reading exercise. There was no inventory; you read Python, pipeline by pipeline, knowing each copy had usually evolved on its own timeline, so even when two hotels had inherited the “same” idea, it wasn’t guaranteed to be the same implementation.
The target: one engine, many policies
The replacement was deliberately boring in a good way: one pricing pipeline, shared by everyone, with per-property (or per-customer) configuration expressed as data—thresholds, weights, which subsystems to enable, caps, horizons, and so on. The important shift is cultural as much as technical: policy belongs in parameters; the implementation belongs in one place.
That does not mean a single JSON blob with a thousand keys. It means a bounded vocabulary of knobs that the engine understands, validated against a schema, versioned, and reviewable without reading Python diffs. When something truly new is needed, you extend the engine once and expose a new parameter—not a new private codepath.
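As a sketch of what a bounded vocabulary of knobs can look like in Python—every field name, default, and bound here is illustrative, not the actual Smartness schema:

```python
# Hypothetical parameter vocabulary: every knob the engine understands
# lives in one typed, validated structure that is stored as data.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class PricingPolicy:
    price_floor: float = 50.0           # minimum sellable rate
    price_cap: float = 1000.0           # maximum sellable rate
    model_blend_weight: float = 0.5     # 0 = rules only, 1 = model only
    horizon_days: int = 365             # how far ahead to price
    enabled_subsystems: tuple = ("base_rate", "events")
    version: int = 1                    # bumped on every reviewed change

    def validate(self) -> None:
        if self.price_floor < 0 or self.price_cap <= self.price_floor:
            raise ValueError("floor/cap out of range")
        if not 0.0 <= self.model_blend_weight <= 1.0:
            raise ValueError("model_blend_weight must be in [0, 1]")
        if self.horizon_days <= 0:
            raise ValueError("horizon_days must be positive")

# "Feature X like customer Y" becomes copying a value, not copying code:
policy = PricingPolicy(price_floor=80.0, model_blend_weight=0.7)
policy.validate()
print(asdict(policy))  # reviewable as a data diff, not a Python diff
```

The point of the shape is the review surface: a change to `price_floor` is a one-line data diff anyone can read, and `validate()` rejects nonsense before it ever reaches the engine.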
Migrating gradually, without a flag day
A rewrite like this does not get a big-bang cutover. We ran old and new in parallel: we produced two sets of prices, and a diff job summarized how far apart they were per property. The old system was not always right, so “match legacy bit-for-bit” was not possible at the beginning. We had to iterate on both the new and the old codebase to make them converge.
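The diff job itself can be sketched in a few lines—this is an illustrative reconstruction, with hypothetical names, of the per-property summary it produced:

```python
# Parallel-run diff: compare legacy and new prices for the same
# (property, date) keys and summarize divergence per property.
from statistics import mean

def diff_report(legacy: dict, new: dict) -> dict:
    """Map property_id -> max and mean relative difference across dates."""
    by_property = {}
    for (prop, day), old_price in legacy.items():
        new_price = new.get((prop, day))
        if new_price is None:
            continue  # date not priced by the new engine yet
        rel = abs(new_price - old_price) / old_price
        by_property.setdefault(prop, []).append(rel)
    return {
        prop: {"max_rel_diff": max(rels), "mean_rel_diff": mean(rels)}
        for prop, rels in by_property.items()
    }

legacy = {("hotel_a", "2023-06-01"): 100.0, ("hotel_a", "2023-06-02"): 120.0}
new = {("hotel_a", "2023-06-01"): 101.0, ("hotel_a", "2023-06-02"): 120.0}
print(diff_report(legacy, new))
```

A report like this is what lets “quiet band” be an objective threshold rather than a gut feeling: a property graduates when its max relative difference stays under an agreed bound for long enough.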
Because the plan was to migrate customers gradually, we also had to set up feature flags in the other backends so they could handle both price sources at the same time.
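In spirit, the flag was a per-property routing decision in front of the two sources. This is a minimal sketch under assumed names, not the real flag infrastructure:

```python
# Per-property price-source flag: consumers call one function and the
# flag decides whether the legacy pipelines or the new engine answers.
MIGRATED = {"hotel_a"}  # properties already switched to the new engine

def legacy_prices(prop: str) -> dict:
    # Stand-in for the old per-customer pipelines.
    return {"2023-06-01": 100.0}

def engine_prices(prop: str) -> dict:
    # Stand-in for the parameterized engine.
    return {"2023-06-01": 101.0}

def prices_for(prop: str) -> dict:
    if prop in MIGRATED:
        return engine_prices(prop)
    return legacy_prices(prop)

print(prices_for("hotel_a"))  # served by the new engine
print(prices_for("hotel_b"))  # still served by legacy
```

Keeping the flag at the source-selection boundary means downstream consumers never know the migration is happening, which is what makes cohort-by-cohort rollout (and rollback) cheap.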
Properties stayed in a quiet band for a while before we trusted the new path as source of truth; anything noisy was triaged—fix the new engine, accept a known difference, or park it for a follow-up. Rollout was by cohort: internal and test properties first, then a small set of friendly customers, then the long tail. There was no single migration day and no customer-facing “we switched the engine last night” incident.
Previews: see prices before you save
Parameters are only half the win if changing them still feels like pulling a lever in the dark. We wired previews into internal tools and the product: before persisting a customization, you can run the same pricing logic as production on draft settings and see would-be prices for representative dates and room types.
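The essential property of the preview is that it runs the same pricing code as production, just with draft parameters merged over the saved ones and nothing persisted. A toy sketch, with an invented pricing formula and hypothetical parameter names:

```python
# Preview flow: same code path as production, but on draft (unsaved)
# parameters, returning would-be prices for a few representative days.
def price(base_rate: float, params: dict, day_index: int) -> float:
    # Toy stand-in for the production pricing logic: blend, then clamp.
    p = base_rate * (1 + params["model_blend_weight"] * 0.1 * day_index)
    return round(min(max(p, params["price_floor"]), params["price_cap"]), 2)

def preview(saved: dict, draft_overrides: dict, base_rate: float) -> dict:
    draft = {**saved, **draft_overrides}  # merged in memory, never persisted
    return {f"day+{i}": price(base_rate, draft, i) for i in range(3)}

saved = {"model_blend_weight": 0.5, "price_floor": 80.0, "price_cap": 300.0}
# What would prices look like if we raised the blend weight?
print(preview(saved, {"model_blend_weight": 0.8}, base_rate=100.0))
```

Because `price` is the production function, a preview can never drift from what saving would actually do—the only difference is where the parameters come from.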
That closed the loop for revenue managers and for us: fewer surprises after save, safer experimentation, and a shared object to look at when something “looks wrong”—the inputs and the parameter set, not a one-off tucked away in a specific customer’s pipeline.
What I’d do differently
The customer parameters were customizable in a branch-based manner: revenue managers could start working on updates for a customer on a new branch, save progress gradually, then overwrite the master parameters once finished. This was very cool from an engineering POV, but in practice it was more machinery than we needed and led to a lot of long-lived, easy-to-forget branches.
Lessons learned
The asymmetry that mattered was context, not line count: with few customers, messy, fast-moving work and breakage are cheap; fixing that shape once people rely on you, the product is stable, and pricing bugs are unacceptable is not. Same debt, sharper stakes. I suspect a stop-the-world rewrite would have been simpler than the in-place migration; by then it simply was not an option. We paid the bill the slow way: very little room for regressions, and no reset button.
Epilogue: simpler again
After this migration, we eventually moved to an even simpler model on top of the same idea—fewer moving parts, still parameters-first, but far fewer parameters (from ~200 to ~20).