To push back against this sentiment: “chasing AI money” isn’t
necessarily their thought process here; i.e. it’s not the only reason they would “switch to a freemium model trying to get you to subscribe to AI.”
Keeping in mind that:
1. “AI” features (i.e. features driven by large ML models) are in demand (if not by existing users, then by not-yet-users, serving as a TAM-expansion strategy)
2. Large ML models require a lot of resources to run. Not just GPU power (which, if you have less of it, just translates to slower runs), but VRAM (if you don’t have enough of it, the model spills over into main memory and its runtime multiplies by 10-100x; and if you also don’t have enough main memory, you can’t run the model at all); and also plain-old storage space, which can add up if there are a lot of different models involved. (Remember that the Affinity apps have mobile versions!) See the sketch just after this list for what this capability check looks like.
3. Many users will be sold on the feature set of the app, and want to use it / pay for it, but won’t have local hardware powerful enough to run the ML models. If you just let them install the app and only then reveal that they can’t actually run the models, they’ll feel ripped off. And those users either won’t find the offering compelling enough to buy better hardware, or they’ll be stuck with the hardware they have for whatever reason (e.g. because it’s their company-assigned workstation and they’re not allowed to use anything else for work).
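To make point 2 concrete, here’s a minimal sketch (in Python, with invented numbers and type names; the real Affinity apps are native code, so this is purely illustrative) of the kind of capability check an app would have to run before enabling a local ML feature:

```python
# Hypothetical capability check before enabling a local ML feature.
# All thresholds and names are illustrative, not Affinity's.

from dataclasses import dataclass

@dataclass
class Machine:
    vram_gb: float       # dedicated GPU memory
    ram_gb: float        # main system memory
    free_disk_gb: float  # space available for model weights

@dataclass
class Model:
    weights_gb: float    # size of the weights on disk / in memory

def local_run_plan(machine: Machine, model: Model) -> str:
    if machine.free_disk_gb < model.weights_gb:
        return "cannot install: no room for the weights"
    if model.weights_gb <= machine.vram_gb:
        return "fast: model fits entirely in VRAM"
    if model.weights_gb <= machine.ram_gb:
        # Weights spill from VRAM into main memory and get paged
        # back and forth; this is the 10-100x slowdown case.
        return "slow: CPU/RAM offloading, expect 10-100x slower"
    return "cannot run: model does not fit in memory at all"

# e.g. a 12 GB model on a laptop with 8 GB VRAM and 16 GB RAM:
print(local_run_plan(Machine(vram_gb=8, ram_gb=16, free_disk_gb=50),
                     Model(weights_gb=12)))  # -> "slow: ..."
```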
Together, these factors mean that the "obvious" way to design these features in a product intended for mass-market appeal (rather than a product designed only "for professionals" with corporate backing, like VFX or CAD software) is to put the ML models on a backend cluster, and have the apps act as network clients for said cluster.
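In code terms, that design looks roughly like the sketch below. The endpoint URL, payload shape, and function name are all hypothetical; this just shows the shape of a thin network client, not Affinity’s actual API:

```python
# Sketch of the client side of the "backend cluster" design: the app
# never runs the model itself, it ships the request over the network.
# Endpoint and payload encoding are made up for illustration.

import json
import urllib.request

INFERENCE_ENDPOINT = "https://ml.example.com/v1/inpaint"  # hypothetical

def run_remote_inpaint(image_bytes: bytes, mask_bytes: bytes,
                       auth_token: str) -> bytes:
    """Send the heavy lifting to the vendor's GPU cluster."""
    payload = json.dumps({
        "image": image_bytes.hex(),  # naive encoding, fine for a sketch
        "mask": mask_bytes.hex(),
    }).encode("utf-8")
    req = urllib.request.Request(
        INFERENCE_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # The auth token is what ties usage back to an account.
            "Authorization": f"Bearer {auth_token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return bytes.fromhex(json.load(resp)["result"])
```

The important line is the Authorization header: every request gets metered against an account, which is exactly what turns aggregate usage into a billable, ongoing cost on the vendor’s side.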
Which means that, rather than just shipping an app, you're now operating a software service, which has monthly costs for you, scaled to aggregate usage, for the lifetime of that cluster.
Which in turn means that you now need to recoup those OpEx costs to stay profitable.
You could do this by pricing the predicted per-user average lifetime OpEx cost into the purchase price of the product… but because you expect to add more ML-driven features as your apps evolve, which might drive increased usage, calculating an actual price here is hard. (Your best chance is probably to break each AI feature into its own “plugin” and cost + sell each plugin separately.)
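A quick back-of-the-envelope sketch (every number here is invented) shows why that one-time price is so sensitive to usage assumptions:

```python
# Back-of-the-envelope: fold predicted lifetime OpEx into a one-time
# purchase price. All figures are invented for illustration.

def lifetime_opex_per_user(cost_per_inference: float,
                           inferences_per_month: float,
                           lifetime_months: int) -> float:
    return cost_per_inference * inferences_per_month * lifetime_months

base_price = 70.00  # hypothetical one-time app price, pre-AI

# Conservative guess: $0.002/inference, 200 inferences/month, 5 years.
low = lifetime_opex_per_user(0.002, 200, 60)    # = $24.00

# But you keep shipping AI features, so usage grows: 1000/month.
high = lifetime_opex_per_user(0.002, 1000, 60)  # = $120.00

print(f"one-time price, low usage:  ${base_price + low:.2f}")   # $94.00
print(f"one-time price, high usage: ${base_price + high:.2f}")  # $190.00
```

A 5x spread in usage assumptions becomes a ~2x spread in the sticker price, and you have to commit to one number before you know which scenario is real.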
It’s much easier to avoid setting a one-time price based on lifetime OpEx by just passing OpEx through as OpEx (i.e. a subscription); and it’s much friendlier to customers to charge that subscription only to the people who actually want the features that require the backend cluster, rather than pricing something they don’t want into everyone’s purchase.