
1525 points saeedesmaili | 1 comment
cjs_ac No.43652999
For any given thing or category of thing, a tiny minority of the human population will be enthusiasts of that thing, but those enthusiasts will have an outsize effect in determining everyone else's taste for that thing. For example, very few people have any real interest in driving a car at 200 MPH, but Ferraris, Lamborghinis and Porsches are widely understood as desirable cars, because the people who are into cars like those marques.

If you're designing a consumer-oriented web service like Netflix or Spotify or Instagram, you will probably add in some user analytics service, and use the insights from that analysis to inform future development. However, that analysis will aggregate its results over all your users, and won't pick out the enthusiasts, who will shape discourse and public opinion about your service. Consequently, your results will be dominated by people who don't really have an opinion, and just take whatever they're given.

Think about web browsers. The first popular browser was Netscape Navigator; then, Internet Explorer came onto the scene. Mozilla Firefox clawed back a fair chunk of market share, and then Google Chrome came along and ate everyone's lunch. In all of these changes, most of the userbase didn't really care what browser they were using: the change was driven by enthusiasts recommending the latest and greatest to their less-technically-inclined friends and family.

So if you develop your product by following your analytics, you'll inevitably converge on something that just shoves content into the faces of an indiscriminating userbase, because that's what the median user of any given service wants. (This isn't to say that most people are tasteless blobs; I think everyone is a connoisseur of something, it's just that for any given individual, that something probably isn't your product.) But who knows - maybe that really is the most profitable way to run a tech business.

mrandish No.43657011
> you will probably add in some user analytics service, and use the insights from that analysis to inform future development. However, that analysis will aggregate its results over all your users, and won't pick out the enthusiasts, who will shape discourse and public opinion about your service. Consequently, your results will be dominated by people who don't really have an opinion, and just take whatever they're given.

This is so spot on. I was a long-time serial entrepreneur who spent a couple decades across three successful startups discovering, shipping and growing new categories of tech products primarily for consumers, prosumers and hobbyists. Then I sold my last startup to a very large F500 Silicon Valley tech leader and ended up a senior product exec there. While there were a lot of positives, like more mature engineering processes, testing and devops as a discipline, the exact issue you describe was a nightmare of product-damaging mistakes I called "analytics abuse."

In my startups I valued having increasingly robust analytics over the years. In part because they helped increase my overall understanding of usage, but mostly because they provoked good questions to explore. That exploration happened naturally because as the "product guy / founder" I never stopped spending a lot of time with our most passionate, opinionated, thought-leading customers. Over years of iteration I'd learned how to engage deeply and listen carefully to input from these customers. This involved interpreting, filtering and curating the mess of divergent personal preferences and pet feature ideas to tease out the more actionable product signals that could increase broad usage, adoption and passion around our products. I'd then bring those curated signals back to the product teams for evaluation and prioritization.

At BigCo they were diligent about meeting with customers, in fact they had entire processes around it, but their rigorous structures and meeting agendas often got in the way of just directly engaging and actively listening. Worse, the customer meetings the more senior product decision makers actually attended in person were mostly with the highest revenue customers. Junior PMs (and sometimes new grads) were delegated to meeting with the broader base of customers and filing reports. Those reports were then aggregated by ever-helpful program managers into tables of data and, eventually, slides - losing all nuance and any ability to spot an emerging outlier signal and tug on that thread to see where it goes.

I tried to convince everyone that we were missing important customer signals, especially from our smartest, most committed users. Being only one level removed from the CEO and quite credible based on prior success, I was definitely heard and most people agreed there was something being lost but no one could suggest a way to modify what we were doing that could scale across dozens of major products and hundreds of product managers, designers, execs and other stakeholders. In my experience, this general problem is why large companies, even the most well-run, successful ones full of smart people trying their best, end up gradually nerfing the deeper appeal in their own products. Frustratingly, almost every small, single step in that long slide pushes some short-term metric upward but the cumulative effect is the product loses another tiny piece of the soul that made our most evangelistic, thought-leading customers love the product and promote it widely. Ultimately, I ended up constantly arguing we should forego the uplift from some small, easy-to-prove, metric-chasing change to preserve some cumulative whole most people in the org weren't fully convinced even existed. It was exhausting. And there's no fighting the tide of people incentivized on narrow KPIs come bonus season.

I'm sorry to report I never found a solution to this problem, despite my best efforts over several years. I think it's just fundamental. Eventually I just told friends, "It's a genetic problem that's, sadly, endemic to the breed" (the 'breed' being well-run, very large tech companies with the smartest product people HR can hire at sufficient scale). Even if I was anointed CEO, given the size of the product matrix, I could only have personally driven a handful of products. I do think codifying premises and principles from the CEO level can help but it still gets diluted as the number of products, people and processes scales.

mncharity No.43658107
Given several mrandish-equivalents, gathered into a side-channel Customer Advocacy org, is there some way to integrate their output without the problem of constantly arguing against metric-chasing?

I'm groping towards something vaguely ombudsman-y, or WW2 production/logistics troubleshooters. Or maybe even pre-Bush41 ARPA Project Managers: a term-limited person-with-a-checkbook with few accountability constraints.

If one accepts this role has to be out-of-band, vs poking the big hairy blob in hopes of creating and maintaining signal channels with particular properties, and grants it CEO-adjacent leverage, then it seems a remaining unresolved challenge is integrating the output signals at scale? If so, maybe (in jest) CA-granted KPI offsets?

mrandish No.43658898
As I said, it's an extremely difficult problem. To be honest, I doubt it's really solvable in a scalable way across an entire org. The best you can probably do is a combination of implementing a few top down directives and, on the other end, fire fighting flare-ups around specific hot points. But I also hope (desperately) that I'm wrong and that you'll build that shining Camelot on the hill in your org.

Top Down

* Start with clear CEO buy-in supporting a clear manifesto. Include some case-study-ish examples of how short-term metric-chasing can go wrong. Do education sessions around this across the product and design orgs. Socialize the concept of "Enshittification." Get people sharing their own examples, whether it's how Google Search used to be good or how they used to be able to find stuff on Amazon but now the fucking search doesn't even work with quotes or exclusions like it used to. Actually show how you can't find a specifically narrow type of product by excluding features. Ask "How did smart, good people slowly slide down a slippery slope to a pretty evil place?" Discuss how your org can avoid the same fate (or if it even should). Goal: Create awareness. Win (some) hearts and minds.

* Radical idea: seize control of all granular analytics data. Yes, I'm suggesting that product teams cannot directly access their own raw analytics data anymore until it's been corrected for short-term bias and to re-weight by user type. Nor can they unilaterally add new analytics to their product until your CA org has vetted that even gathering that new data won't inappropriately bias internal perception. Before distribution to product teams, granular usage data is first recast and contextualized into new user-type and time horizon buckets that make it hard to chase (or even see) lowest-common denominator "bad" product changes.

I think this is hugely important. I saw certain savvy PMs cleverly manipulate how analytics were tallied and also suggest new measures in a veiled effort to boost short-term incremental metric gains, almost always in the quarter before bonus season. I also saw designers who were heavily bought into the "less density, less choices" zen ethos I called "The Church of Saint Johnny Ive" (which seems to pathologically despise advanced and power users), actively weaponize analytics to generate data supporting their religiously-held worldview and force killing significant functionality beloved by smaller advanced user segments. If those designers ran Burger King the slogan would have to change to "Have it MY way (because I graduated from Stanford D-School and know what you should want)". If you don't seize control of the raw usage data so it can't be weaponized for KPIs (or religious agendas), you'll never be able to make serious traction. Also, doing this will trigger World War III and you'll find out right away if senior leadership is really committed to supporting you. :-)

* Create new segmentation categories of user types. For example, use in-product behavior to identify power users who are passionate and engaged (discount daily frequency and session time; amplify usage depth of specific advanced features). Identify long-time users who were early adopters and dramatically amplify their analytics signal. Every click they make should be worth hundreds of clicks from drive-by, newbie users who barely understand the entire product yet.

* Create KPI demerits for teams who make changes that annoy or dismay long-term users as measured by posts on user forums, social media and in deep interviews of unhappy or exiting customers. A handful of such posts should be able to wipe out the gains of a hundred incremental pixel-moving tweaks. Causing strong negative feedback from thoughtful users who care should be feared like touching the third-rail.

* On that topic, once you have control of the granular usage data, simply aggregate all small increases or decreases into one big bucket that's only released into the overall number on a time delay, maybe even once a year right after KPI/bonus season. Make it so no one thinks they can "get there" by optimizing 0.1% at a time. All the tweaking of shades of color or moving shit 4 pixels is a distraction at best, and at worst ends up losing the beating heart that engages users who really give a shit about the overall experience.

* Assign a tangible economic cost to teams removing a long-time feature. Of course they always have analytics which say "not enough users use it." Institutionalize an organizational default position that's extremely skeptical of removing or moving (aka burying) stuff that's been there since the growth "boost" that made the product what it is. That shit's grandfathered in and is "don't touch" unless they've got an overwhelming case and a senior product owner ready to make a career-betting stand over it.

* Overall, adjust the KPI/metrics economy through targeted inflation and devaluation of the currency to focus on longer-term objectives.
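The re-weighting and time-delay ideas above can be sketched concretely. This is a minimal illustration only: the segment names, weights and threshold are all made up for the example, not anything from an actual analytics stack.

```python
from dataclasses import dataclass

# Hypothetical segment weights -- illustrative only. The point is that a
# power user's or early adopter's click counts for far more than a
# drive-by newbie's, so raw volume can't dominate the signal.
SEGMENT_WEIGHTS = {
    "power_user": 100.0,
    "early_adopter": 50.0,
    "casual": 1.0,
}

@dataclass
class UsageEvent:
    user_segment: str
    feature: str
    count: int

def reweighted_feature_signal(events):
    """Aggregate raw usage counts into segment-weighted signals per feature."""
    signal = {}
    for e in events:
        w = SEGMENT_WEIGHTS.get(e.user_segment, 1.0)
        signal[e.feature] = signal.get(e.feature, 0.0) + w * e.count
    return signal

def release_metric_delta(delta, threshold=0.5, held_bucket=None):
    """Small per-tweak metric deltas go into a held bucket, released only
    on a time delay, so no team can 'get there' 0.1% at a time."""
    if held_bucket is None:
        held_bucket = []
    if abs(delta) < threshold:
        held_bucket.append(delta)  # withheld until the delayed annual release
        return 0.0, held_bucket
    return delta, held_bucket
```

With this weighting, 5 clicks on an advanced feature by power users half-outweigh 1,000 casual clicks on the default feed, which is exactly the kind of recasting that makes lowest-common-denominator changes harder to justify from the dashboard.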

Bottom Up

* I like your KPI offsets idea.

* Also create a way of rewarding doing more of the right stuff. Special awards not based on specific metrics but on overall "getting it" and making sincere creative efforts to try stuff that's not likely to pay-off near-term.

* Feature user feedback forums more prominently so they get more use. Spiff teams that get more feedback, as measured both by quantity and depth. Add specific categories like "Hey, Put That Back!" to encourage that sort of feedback. Don't just count posts and upvotes. Inflate the weight of long, passionate or angry posts and posts that elicit more written replies in addition to upvotes. Apply appropriate discounts to frequent feedbackers and amplify feedback from people who signed up just to bitch about this one thing. Teams should fear making changes that cause long-time users who rarely post feedback to post emotional rants.

* Find those individuals in the product, design and engineering orgs who believe in valuing the depth of long-term user commitment as much as you do. Make common cause with them. Have a secret club and handshake if you have to but support them and elicit their 'outside-channels' feedback. They're your best source of warning when the forces of short-term darkness are coming in the night with pitchforks (and they will).
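The feedback-weighting heuristics above (inflate long, reply-provoking posts; discount frequent feedbackers; amplify the account that signed up just to complain) could be sketched like this. Every coefficient here is an invented placeholder, not a recommendation:

```python
import math

def feedback_weight(post_length_chars, reply_count,
                    author_post_count, author_account_age_days):
    """Hypothetical weight for one feedback post. Long rants count more,
    posts that draw written replies count more, prolific posters are
    discounted, and a brand-new account whose only post is this complaint
    gets a large boost."""
    length_factor = math.log1p(post_length_chars / 100)      # long posts weigh more
    reply_factor = 1.0 + 0.5 * reply_count                   # discussion-provoking posts
    frequency_discount = 1.0 / math.sqrt(author_post_count)  # discount frequent feedbackers
    # Someone who registered this week just to post this one complaint:
    new_account_boost = 3.0 if (author_account_age_days < 7
                                and author_post_count == 1) else 1.0
    return length_factor * reply_factor * frequency_discount * new_account_boost
```

Under these placeholder coefficients, a long, widely-discussed rant from a one-post brand-new account dwarfs a short note from someone who posts feedback every week, which matches the intent of the bullet above.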

Good luck, friend. We're all counting on you!