
Affinity Studio now free

(www.affinity.studio)
1203 points by dagmx | 4 comments
pentagrama ◴[] No.45762521[source]
I used Affinity for several years, so to add some background here:

Serif is the company that originally built this software.

--------

2014–2024

Serif developed the Affinity suite, a collection of three independent desktop apps sold with a one-time payment model:

- Affinity Designer: vector graphic design (Adobe Illustrator equivalent)

- Affinity Photo: digital image editing (Adobe Photoshop equivalent)

- Affinity Publisher: print and layout design (Adobe InDesign equivalent)

They were solid, professional tools sold without a subscription, unlike Adobe's — a big reason why many designers loved them.

-------

2024

Canva acquired Serif.

-------

2025 (today)

The product has been relaunched. The three apps are now merged into a single app, simply called Affinity, and it follows a freemium model.

From what I’ve tested, you need a Canva account to download and open the app (you can opt out of some telemetry during setup).

The new app has four tabs:

- Vector: formerly Affinity Designer

- Pixel: formerly Affinity Photo

- Layout: formerly Affinity Publisher

- Canva AI: a new, paid AI-powered section

Screenshot https://imgur.com/a/h1S6fcK

Hope this helps!

replies(16): >>45762570 #>>45763276 #>>45763555 #>>45763695 #>>45763766 #>>45763807 #>>45764042 #>>45764560 #>>45765389 #>>45765538 #>>45765942 #>>45767528 #>>45769728 #>>45769747 #>>45770368 #>>45770565 #
alt227 ◴[] No.45763276[source]
This is such a shame IMO. The Serif suite was great, and I used to try to get every designer I could to dump Adobe and switch to Serif.

Now that it has switched to a freemium model trying to get you to subscribe to AI, I won't be using it or telling other people about it any more. Their priorities have changed. They're no longer trying to beat Adobe at their own game; they're just chasing AI money like everyone else.

replies(9): >>45763916 #>>45764127 #>>45765466 #>>45766273 #>>45767409 #>>45767470 #>>45767559 #>>45767730 #>>45774320 #
derefr ◴[] No.45763916[source]
To push back against this sentiment: “chasing AI money” isn’t necessarily their thought process here; i.e. it’s not the only reason they would “switch to a freemium model trying to get you to subscribe to AI.”

Keeping in mind that:

1. “AI” (i.e. large ML model) -driven features are in demand (if not by existing users, then by not-yet-users, serving as a TAM-expansion strategy)

2. Large ML models require a lot of resources to run. Not just GPU power (which, if you have less of it, just translates to slower runs) but VRAM (which, if you don't have enough of it, multiplies the runtime of these models by 10-100x; and if you also don't have enough main memory, you can't run the model at all); and also plain-old storage space, which can add up if there are a lot of different models involved. (Remember that the Affinity apps have mobile versions!)

3. Many users will be sold on the feature-set of the app, and want to use it / pay for it, but won't have local hardware powerful enough to run the ML models — and if you just let them install the app but then reveal that they can't actually run the models, they'll feel ripped off. And those users either won't find the offering compelling enough to buy better hardware; or they'll be stuck with the hardware they have for whatever reason (e.g. because it's their company-assigned workstation and they're not allowed to use anything else for work.)
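A rough back-of-envelope for point 2 (every number here is an illustrative assumption, not a figure from any real model): even the weights of a mid-sized image model, before counting activations or anything else, can approach the limit of a typical consumer GPU.

```python
# Back-of-envelope VRAM estimate; all numbers are illustrative assumptions.
params = 3.5e9           # hypothetical parameter count for a "large" image model
bytes_per_param = 2      # fp16 weights
weights_gb = params * bytes_per_param / 1e9

consumer_vram_gb = 8     # a common consumer-GPU VRAM size
fits = weights_gb <= consumer_vram_gb

print(f"weights alone: {weights_gb:.1f} GB; fits in {consumer_vram_gb} GB VRAM: {fits}")
# Activations, the framework, and the OS compositor all need VRAM too,
# so "fits" here is optimistic: once anything spills to main memory,
# runtime balloons by the 10-100x mentioned above.
```

Under these assumed numbers the weights alone take 7 GB, leaving almost nothing for the rest of the workload.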

Together, these factors mean that the "obvious" way to design these features in a product intended for mass-market appeal (rather than a product designed only "for professionals" with corporate backing, like VFX or CAD software) is to put the ML models on a backend cluster, and have the apps act as network clients for said cluster.

Which means that, rather than just shipping an app, you're now operating a software service, which has monthly costs for you, scaled to aggregate usage, for the lifetime of that cluster.

Which in turn means that you now need to recoup those OpEx costs to stay profitable.

You could do this by pricing the predicted per-user average lifetime OpEx cost into the purchase price of the product… but because you expect to add more ML-driven features as your apps evolve, which might drive increased usage, calculating an actual price here is hard. (Your best chance is probably to break each AI feature into its own “plugin” and price + sell each plugin separately.)

Much easier to avoid trying to set a one-time price based on lifetime OpEx, by just passing on OpEx as OpEx (i.e. a subscription); and much friendlier to customers to avoid pricing in things customers don’t actually want, by only charging that subscription to people who actually want the features that require the backend cluster to work.
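To make the pricing dilemma concrete, here is a hypothetical back-of-envelope sketch (every figure below is an assumption for illustration, not anything from Canva's actual economics):

```python
# Why baking lifetime inference OpEx into a one-time price is a guess,
# while a subscription simply passes costs through. All figures hypothetical.
cost_per_run = 0.002          # assumed backend cost per AI invocation, USD
runs_per_month = 200          # assumed average usage at launch
lifetime_months = 60          # assumed product lifetime per user

# One-time model: bake predicted lifetime OpEx into the purchase price.
one_time_surcharge = cost_per_run * runs_per_month * lifetime_months
print(f"one-time surcharge at launch usage: ${one_time_surcharge:.2f}")

# But new AI features tend to increase usage; if it doubles, the original
# price under-recovers by exactly the amount it was meant to cover.
shortfall = one_time_surcharge * 2 - one_time_surcharge
print(f"shortfall if usage doubles: ${shortfall:.2f}")

# Subscription model: re-price every month, so usage growth is absorbed.
monthly_passthrough = cost_per_run * runs_per_month   # plus margin, omitted
print(f"monthly pass-through at launch usage: ${monthly_passthrough:.2f}")
```

The point of the sketch: the one-time price is a bet on lifetime usage that can be wrong by an unbounded factor, while the subscription never has to predict anything beyond the current month.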

replies(4): >>45764245 #>>45764341 #>>45765384 #>>45766696 #
isodev ◴[] No.45765384[source]
> 1. “AI” (i.e. large ML model) -driven features are in demand

No, they're not. People with influence or who have invested in the space say that these features are in demand/the next big thing. In reality, I haven’t seen a single user interview where the person actively wanted or was even excited about AI.

replies(3): >>45765866 #>>45766040 #>>45766282 #
derefr ◴[] No.45766282[source]
I didn't make any assertion about AI, only about "AI" (note the quotes in my GP comment) — i.e. the same old machine-learning-based features like super-resolution upscaling, patch-match, etc, that people have been adding to image-editing software for more than a decade now, but which now get branded as "AI" because people recognize them by this highly-saturated marketing term.

Few artists want generative-AI diffusion models in their paint program; but most artists appreciate "classical" ML-based tools and effects — many of which they might not even think of as being ML-based. Because, until recently, "classical ML" tools and effects were run client-side on the system, and so were necessarily small and lightweight, only shipped if they'd work on the lowest-common-denominator GPU (esp. in VRAM) that artists might be using.

The interesting thing is that, due to the genAI craze, GPU training and inference clusters have been highly commoditized / brought into reach for the developers of these "classical ML" models. You no longer need to invest in your own hyperscale on-prem GPU cluster to train models bigger than what fits on a gaming PC. And this has led to increased interest in, and development of, larger "classical ML" models, because they're no longer so tightly bounded by having to run client-side on lowest-common-denominator hardware. Their developers can instead rent (time on) a cloud GPU cluster to train the model, and then expect the downstream consumer of that model (= a company like Canva) to run it not by pushing back for something size-optimized for user machines, but by standing up a model-inference-API backend on the same kind of GPU IaaS infra that was used for training.

replies(2): >>45770193 #>>45771491 #
isodev ◴[] No.45770193[source]
Next time, tell it to make you a comment in 150 characters. Nobody has time to read AI slop
replies(1): >>45776856 #
throwaway2037 ◴[] No.45771491[source]
I never heard of "patch-match" before this post. I found something on Wiki: https://en.wikipedia.org/wiki/PatchMatch

Is this the same algorithm that allows AI/LLM-enabled apps to remove things from photos? Example: removing a person who accidentally appeared on the left side of the photo.
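For context on what PatchMatch computes: its core job is finding, for each patch in an image, the most similar patch elsewhere in the image (the "nearest-neighbor field"), which inpainting/object-removal tools then use to fill holes with plausible content. Here is a brute-force sketch of that search (illustrative only — the actual PatchMatch contribution is replacing this exhaustive scan with random initialization plus neighbor propagation, making it vastly faster):

```python
import numpy as np

def best_match(img, y, x, patch=3):
    """Exhaustively find the offset whose patch most resembles the patch
    at (y, x). This is the nearest-neighbor query that PatchMatch answers
    approximately, but without scanning every candidate position."""
    h, w = img.shape[:2]
    target = img[y:y + patch, x:x + patch]
    best, best_dist = None, np.inf
    for yy in range(h - patch + 1):
        for xx in range(w - patch + 1):
            if (yy, xx) == (y, x):
                continue  # skip the trivial self-match
            d = np.sum((img[yy:yy + patch, xx:xx + patch] - target) ** 2)
            if d < best_dist:
                best, best_dist = (yy, xx), d
    return best, best_dist

# Synthetic check: duplicate a distinctive 3x3 patch and find the copy.
img = np.zeros((12, 12))
pattern = np.arange(1, 10, dtype=float).reshape(3, 3)
img[0:3, 0:3] = pattern
img[6:9, 6:9] = pattern
print(best_match(img, 6, 6))  # the copy at (0, 0) matches with distance 0
```

An object-removal tool runs this kind of query for every patch inside the region being erased, then composites the best matches from the surrounding image into the hole.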

derefr ◴[] No.45776856[source]
My friend, my writing isn't AI slop. It's Adderall slop. AI slop is far more structured.

(Also, because I assume this is your issue: em-dash is option-shift-hyphen on US-English Mac keyboard. I've been using them in my writing for 25 years now, and I won't stop just because LLMs got ahold of them.)