Riak is horribly unfriendly as a database: no SQL, it exposes eventual consistency directly to the developer, it’s relatively slow, and Erlang is a fairly unusual language.
While you can run Riak on a single server, you’d have to really want to.
Its strength is the ability to scale massively, but not many projects need that scale, and by the time you do, you’re probably already using some friendlier database and you’d rather make that one work.
Though it had a couple of years' head start, back when there really were no other options for people wanting that kind of kit.
Our code was in Clojure, and we just wrapped the Java client. Conflict resolution was a steep learning curve, but overall it was kind of nice (coming from Mongo).
But man, Clojure stack traces wrapping Java stack traces wrapping Erlang stack traces in a Kafka consumer... I wish that hell on no one.
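For anyone who never hit that learning curve: with allow_mult on, a fetch can hand you several sibling values plus a vector clock, and it's on you to merge them and write the result back. Roughly what that read-merge-write cycle looks like over Riak's plain HTTP API - a sketch only, with made-up bucket/key names and a naive union-of-tags merge policy (the Java client we wrapped hides most of this plumbing):

```go
// Sketch of Riak sibling resolution over the HTTP API: fetch all siblings
// in one multipart/mixed response, merge them, write back with the vclock.
// Bucket, key, and the merge policy here are hypothetical.
package main

import (
	"bytes"
	"fmt"
	"io"
	"mime"
	"mime/multipart"
	"net/http"
	"strings"
)

const riak = "http://127.0.0.1:8098" // default Riak HTTP listener

// fetchSiblings asks Riak for every sibling of one key in a single
// multipart/mixed response and returns the bodies plus the vector clock.
func fetchSiblings(bucket, key string) (siblings []string, vclock string, err error) {
	req, err := http.NewRequest("GET", riak+"/buckets/"+bucket+"/keys/"+key, nil)
	if err != nil {
		return nil, "", err
	}
	req.Header.Set("Accept", "multipart/mixed")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, "", err
	}
	defer resp.Body.Close()
	vclock = resp.Header.Get("X-Riak-Vclock")

	_, params, err := mime.ParseMediaType(resp.Header.Get("Content-Type"))
	if err != nil {
		return nil, "", err
	}
	mr := multipart.NewReader(resp.Body, params["boundary"])
	for {
		part, perr := mr.NextPart()
		if perr != nil { // io.EOF once every sibling has been read
			break
		}
		body, _ := io.ReadAll(part)
		siblings = append(siblings, string(body))
	}
	return siblings, vclock, nil
}

// resolve merges the siblings (here: a naive union of comma-separated tags)
// and writes the winner back with the vector clock, collapsing the conflict.
func resolve(bucket, key string, siblings []string, vclock string) error {
	seen := map[string]bool{}
	var merged []string
	for _, s := range siblings {
		for _, tag := range strings.Split(s, ",") {
			if tag != "" && !seen[tag] {
				seen[tag] = true
				merged = append(merged, tag)
			}
		}
	}
	req, err := http.NewRequest("PUT", riak+"/buckets/"+bucket+"/keys/"+key,
		bytes.NewBufferString(strings.Join(merged, ",")))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "text/plain")
	req.Header.Set("X-Riak-Vclock", vclock)
	_, err = http.DefaultClient.Do(req)
	return err
}

func main() {
	sibs, vclock, err := fetchSiblings("carts", "user-42")
	if err == nil && len(sibs) > 1 {
		err = resolve("carts", "user-42", sibs, vclock)
	}
	fmt.Println("siblings:", len(sibs), "err:", err)
}
```

The merge function is the whole game: last-write-wins is the lazy way out, and Riak's later CRDT data types exist largely to automate exactly this step.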
It does not matter what your technology is, or how theoretically superior it is. Getting it to actually work well "in production" is a whole separate thing from simply designing it and writing code. When it's a very small system, it will look like it's doing great. As it gets bigger, the seams will start to burst, and you will find out that promises and theory don't always match reality.
In the end, while its aims are great, it takes a whoooooole lot of work to smooth out the bumps in such a system. You need experts in that technology to address bugs in a timely manner. You need developers versed in the system to properly build apps utilizing it. You need competent operators to build, orchestrate, operate and maintain the whole thing.
All of that is made easier by using simple technology that everybody knows, that has a huge support community, professional services, and so on. A technology like MySQL or Postgres has the corporate backing, development, and support ecosystem to make it easy to work with at any scale. A little janky at times, and limited, but dependable, predictable, controllable.
A small bespoke system with a small support community and virtually no corporate support is, comparatively, a hell of a lot more difficult/costly to support and harder to make work reliably.
I fondly remember writing a Go driver for it. Was a good experience: https://github.com/riaken/riaken-core
Current development has been focused on improving the flexibility of secondary indexes. Some users achieved funky things with overloaded 2i terms and distributed processing of regular expressions against those terms - the aim now is to make this more flexible for the modern developer, using the language of projected attributes and filter expressions (à la DynamoDB). There's also active work to both replicate to and full-sync (i.e. reconcile) with external OpenSearch clusters.
The primary goal for OpenRiak is stability under load/failure as a K/V store - so the ultra-flexibility of built-in Solr querying has been sacrificed in the move towards that aim. Anything that can do harm is to be offloaded or constrained.
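For context on the overloaded-term trick: a 2i entry is just a flat binary term, so people packed several attributes into one term and then filtered with term_regex on a range query. A rough sketch of that pattern against the HTTP API - bucket, index name, and term layout are all made up here - which is the kind of hand-rolled encoding the projected-attributes / filter-expressions work would replace with something first-class:

```go
// Sketch of the "overloaded 2i term" pattern: pack several attributes into
// one binary index term at write time, then range-scan and narrow the
// results with term_regex. Names and term layout are hypothetical.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

const riak = "http://127.0.0.1:8098" // default Riak HTTP listener

// putWithCompositeIndex stores an object and attaches a single 2i term that
// packs status, region and timestamp together ("overloading" the term).
func putWithCompositeIndex(bucket, key, body, status, region, ts string) error {
	req, err := http.NewRequest("PUT",
		fmt.Sprintf("%s/buckets/%s/keys/%s", riak, bucket, key),
		strings.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	// Secondary index entries ride along as x-riak-index-* headers.
	req.Header.Set("x-riak-index-status_region_ts_bin", status+"|"+region+"|"+ts)
	_, err = http.DefaultClient.Do(req)
	return err
}

// queryStatusInRegion range-scans the composite index for one status and
// narrows it by region with term_regex, so the filtering happens on the
// cluster instead of shipping every term back to the client.
func queryStatusInRegion(bucket, status, region string) (*http.Response, error) {
	q := url.Values{}
	q.Set("return_terms", "true")
	q.Set("term_regex", "^"+status+`\|`+region+`\|`)
	start := url.PathEscape(status + "|") // '|' sorts below '~' in ASCII,
	end := url.PathEscape(status + "~")   // so this range covers every packed term
	u := fmt.Sprintf("%s/buckets/%s/index/status_region_ts_bin/%s/%s?%s",
		riak, bucket, start, end, q.Encode())
	return http.Get(u)
}

func main() {
	_ = putWithCompositeIndex("events", "evt-1",
		`{"msg":"hello"}`, "active", "eu-west", "20240101T000000Z")
	resp, err := queryStatusInRegion("events", "active", "eu-west")
	if err == nil {
		fmt.Println("2i query:", resp.Status)
		resp.Body.Close()
	}
}
```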
Ohhh, this brings back memories of developers hitting the wall... between different SQL databases!
Back in 2016 I was delegated at work to do ops on a project that had big data ambitions in the Threat Intelligence space.
Part of how they intended to support that was Apache Phoenix, an SQL layer backed by HBase, running on top of Hadoop, which also provided object storage (annoyingly, through the WebHDFS gateway).
Constant problems with hung Phoenix queries and the instability of Hadoop as a whole led me to propose moving over to PostgreSQL, which generally went quite well... except for several cases of "basic SQL operations" that turned out to have wildly different performance compared to Phoenix and, most importantly, to MySQL in MyISAM mode - like doing SELECT COUNT(*) on huge tables (MyISAM keeps an exact row count, so that's instant; Postgres has to scan the table under MVCC).
Fun times, got to meet a postgres core team member thanks to it.
One of our biggest disappointments: we had plans to add a way to enforce strong consistency leveraging (IIRC) something akin to multi-paxos, but couldn't get it to work.
The engineering exodus around that time sorta killed the project though, and we never were able to do the big follow-up work to make it really shine.
(Disclaimer: Former Basho Principal Engineer, primary author of strong consistency work, lead riak_core dev from 2011-2015)
I think another 18 months would have been enough, too. But it just wasn't the right environment after the hostile takeover / leadership transition.
I apologise if we do eventually cut it. Having worked through the code when chasing unstable tests, I developed an appreciation for the quality of the work.
I was part of a recent cloud migration. Part of the on-prem estate (though unfortunately not migrated by my team) was the very first Riak cluster I saw in production.
The engineering team used it as a "kind of S3" for images, with 3 to 5 PHP scripts providing an interface to Riak and ImageMagick. It seemed to me like a good abstraction, and I think the migration to S3 was mostly painless.
Other than that I only had contact with Riak at university around 15 years ago, when we tested cluster setups of several NoSQL databases and tried to manually introduce faults to see if they could heal. Riak passed our test at that time, MongoDB didn't.
In the end more and more data was offloaded to MariaDB, until one day the last remaining data couldn't justify the cost of the Riak cluster. I think we swapped out an eight-node Riak cluster for two largish MariaDB databases (one being a hot standby).
For one of the other clients it was the exact same scenario, only we had been contracted in to help run the Riak cluster, which we didn't do well. Once they had migrated off it, to Oracle I think, the client left.
To me it always felt like it was just the wrong tool for that particular job. Someone really wanted to be able to jump on the NoSQL hype and sell something. They picked Riak, because it honestly looked really good, and probably was, compared to MongoDB, CouchDB or whatever else happened to float around at the time. It just wasn't the right tool for the problems it was applied to.
(I can't, of course, speak to the truth of this, only that over a couple decades of knowing the dude in question and working with him on and off he had sufficient Clue that I expect he did put in the effort before coming to that conclusion)