
382 points virtualwhys | 5 comments
1. didgetmaster No.41904577
How big an issue is this really for DB users who work daily with large tables that are frequently updated but still have to be fast for queries?

The article mentioned that there are nearly 900 different databases on the market. I am trying to build yet another one using a unique architecture I developed. It is very fast, and although it is designed for transactions, I haven't implemented them yet.

I think that if I spend the time and effort to do it right, this could be a real game changer (the architecture lends itself very well to a superior implementation), but I don't want to waste too much time on it if people don't really care one way or the other.

replies(1): >>41911447 #
2. nasmorn No.41911447
Unless you develop some holy-grail solution, I don't think anyone will use an unproven DB for OLTP. At least not without a HuMongos marketing spend.
replies(2): >>41913338 #>>41914726 #
3. guenthert No.41913338
To expand on this, DB administrators tend to be a conservative bunch. To some extent you can make a slow DB fast by spending big on hardware. No amount of money, however, will make an unsound DB reliable.
replies(1): >>41914632 #
4. didgetmaster No.41914632
I think it is obvious that no one will want to put their valuable data in an 'unsound' DB.

To restate my original question: if you had two database systems that were equally reliable, but of course had different strengths and weaknesses, would the ability to update large tables without significantly impacting general query speeds be a major factor in deciding between the two?
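
For concreteness, here is a rough sketch of the kind of workload I mean, written in Python against a hypothetical PostgreSQL instance via psycopg2. The DSN, the table name big_table, and its id/counter columns are all placeholders, not anything from the article: one connection continuously updates rows in a large table while another times an aggregate query against the same table.

```python
# Hedged sketch, not a benchmark of any particular engine: it measures how long
# a simple aggregate query takes while another connection keeps updating rows
# in the same (large) table. Connection string, table, and columns are placeholders.
import threading
import time

import psycopg2  # assumes a reachable PostgreSQL instance

DSN = "dbname=test user=test"  # placeholder connection string
STOP = threading.Event()

def update_loop():
    """Continuously update random rows to simulate a write-heavy workload."""
    conn = psycopg2.connect(DSN)
    conn.autocommit = True
    cur = conn.cursor()
    while not STOP.is_set():
        cur.execute(
            "UPDATE big_table SET counter = counter + 1 "
            "WHERE id = (random() * 10000000)::int"
        )
    conn.close()

def timed_query():
    """Time a full-table aggregate while the updates are running."""
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    start = time.monotonic()
    cur.execute("SELECT count(*), avg(counter) FROM big_table")
    row = cur.fetchone()
    conn.close()
    return time.monotonic() - start, row

if __name__ == "__main__":
    writer = threading.Thread(target=update_loop, daemon=True)
    writer.start()
    for _ in range(5):
        elapsed, _ = timed_query()
        print(f"aggregate query took {elapsed:.2f}s under concurrent updates")
    STOP.set()
    writer.join()
```

If those query timings stay close to what you would see with no concurrent writer, even after the updater has been running for a while, that is the property I am asking about.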

5. didgetmaster No.41914726
Is that how PostgreSQL got so popular? After all, at one point it was unproven, and I am not aware of a 'HuMongos marketing spend' changing that.